
21st Century AI

A blog about achieving meaningful Artificial Intelligence

Posts Tagged ‘computer games’

AI for Dinosaurs!

Wednesday, May 15th, 2013

Dinosaur Island

I’m starting work on the AI for a new (old) project called Dinosaur Island. The idea is to update a game I did in 1988 and port it to Windows and Xbox using XNA. The original game, Designasaurus, was published by Britannica Software and sold about a gazillion units.

This is more than just a straight port and update. Of course the dinosaurs are going to be in 3D, but the AI is going to be very complex. It’s really just now dawning on me how complex it’s going to be.

Anyway, I’m going to be posting my design notes on the AI over at the Dinosaur Island blog. Hope you stop by for a visit.

Cassandra

Tuesday, June 5th, 2012

Cassandra after the fall of Troy; nobody believed her.

Cassandra was a princess of Troy and was simultaneously blessed with the gift of prophecy and cursed because nobody would believe her predictions. I’m getting a real Cassandra complex.

From time to time I get a phone call from somebody at a company that has a problem. They are usually pretty vague about the specifics of their problem, but they would like me to fly out there and take a look. The first time this happened was about 12 years ago; the company was a medium-sized, publicly traded operation that was best known by the three initials of its corporate name. The company had been pretty successful for the previous five or six years and wanted to expand into the ‘lucrative’ computer game business.

I can’t remember how much they told me about their ‘problem’ before I got there, or how much I had to figure out once I was there, but the bottom line was that they were coming up on a deadline and getting a little concerned about making it. This particular deadline involved passing a beta test for a mystery game involving the licensed likeness of Humphrey Bogart. After seeing where they were on the project, and how much they had left to do before beta (which was about six weeks away), I reported to the VP in charge that I didn’t think their team had a prayer of making the deadline.

“You have to have faith in your people,” the VP insisted.

“Not if they can’t do the work,” I replied.

Long story short: the company blew the deadline, was sued into nonexistence by the game publisher on the other end of the contract, and the team the VP had so much faith in was promptly thrown out of work. And, by the way, I never got paid for my consulting, either.

Now, let’s fast-forward to about 18 months ago. Again, I get a phone call from another company that is best known by the three initials of its corporate name and, again, they’ve got a deadline and a problem. They fly me out to where they are located, a limousine picks me up at the airport and, again, I discover what the real situation with their project is: they’ve got a quickly approaching milestone deadline and their program won’t scale and is slower than molasses in an Iowa winter.

At the end of the day I report to the project Architect that they’re not going to pass the milestone test: their code is written in Java, which is far too slow to ever accomplish what they need, and I suggest that they bring in half a dozen veteran computer game programmers (and install Mountain Dew dispensers and put pizza delivery on speed dial) and rewrite the whole thing in C++ or C#.

The Architect thanked me for my advice and politely informed me that my suggestions were not going to be implemented, the limousine took me back to the airport, the company failed the milestone test and DARPA cancelled their $35 million contract. Again, I was not paid.

About two weeks ago I got a phone call from yet another company best known by the three initials of its corporate name, asking if I would please take a look at the ‘serious game’ they were developing for a government agency and, yes, they were quickly approaching a beta test deadline. I took a look at their project and promptly wrote back to them that their scoring system wasn’t doing what they thought it was doing and that they weren’t going to pass their upcoming milestone.

A few days later I got an email thanking me for my predictions and I haven’t heard from them since.

Oh, and I haven’t gotten paid by them, either.


Artificial Intelligence in Modern Military Games @ GameTech 2012

Friday, March 23rd, 2012

I will be MCing the “Artificial Intelligence in Modern Military Games” panel at GameTech 2012 next week (March 29, 2012) in Orlando. I am extremely honored to be joined on this panel by:

Dr. Scott Neal Reilly is currently Principal Scientist and Vice President of the Decision Management Systems Division at Charles River Analytics, an artificial intelligence R&D company in Cambridge, MA. Dr. Neal Reilly’s research focuses on modeling emotional and social behavior in artificial agents and he was Principal Investigator for the US Army’s Culturally Aware Agents for Training Environments (CAATE) program, which focused on developing easy-to-use tools for creating interactive, intelligent, social agents.  Dr. Neal Reilly has a Ph.D. in Computer Science from Carnegie Mellon University, where he developed the Em system to model emotions in broadly capable intelligent agents. Before joining Charles River, Dr. Neal Reilly was Vice President of Production and Lead Character Builder at Zoesis Studios, which developed advanced artificial intelligence techniques for creating animated, artificially intelligent agents.


James Korris is CEO and President of Creative Technologies Incorporated (CTI).  CTI, named as one of Military Training Technology’s 2011 Top 100 Simulation and Training companies, is at the forefront of immersive, cognitive simulation development for government and industry.  Recent work includes one of the first DoD augmented virtuality (AV) implementations, the Call For Fire Trainer – AV, along with novel mobile applications for the Fort Sill Fires Center of Excellence.  Korris is currently leading a CTI effort supporting the SAIC Sirius team with a desktop application to mitigate analyst cognitive bias for IARPA.

From its establishment in 1999 until October 2006, Korris served as Creative Director of the U.S. Army-funded Institute for Creative Technologies at the University of Southern California. In this pioneering “serious gaming” environment, Korris led the development of Full Spectrum Warrior, the first military application developed for the Xbox, along with desktop applications Full Spectrum Command and Full Spectrum Leader. Korris’ team captured the DoD 2006 Modeling & Simulation award for training with Every Soldier A Sensor Simulation. In 2007, USJFCOM recognized another Korris-led effort, the Joint Fires & Effects Trainer System, as the highest-rated Close Air Support simulation trainer in the world. In 2008, Korris was appointed to the Naval Research Advisory Committee, advising the Secretary of the Navy on its research portfolio. Korris came to the defense industry following work in Hollywood studio production, producing and writing. He is a member of the writers’ branch of the Academy of Television Arts and Sciences, the Writers Guild of America, the Writers Guild of Canada and the Society of Motion Picture and Television Engineers. His work was recognized in the 2006 Smithsonian Cooper-Hewitt National Design Triennial, Saul Wurman’s eg2006 conference and as a Visionary in Bruce Mau’s Massive Change exhibition. Korris earned a BA from Yale University and an MBA with distinction at the Harvard Business School.


Dr. Michael van Lent received a PhD at the University of Michigan in 2000. His expertise is in applying cognitive science approaches to military problems. Dr. van Lent is a recognized expert in the development of advanced simulation systems for military training. He has participated in the design and development of many immersive training applications including Full Spectrum Warrior, Full Spectrum Command, the Joint Fires and Effects Trainer System (JFETS), ELECT BiLAT, UrbanSim, Helping our Heroes and the Strategic Social Interaction Modules program.


Robert Franceschini is a Vice President and Technical Fellow at Science Applications International Corporation (SAIC). He directs the Modeling and Simulation Center of Expertise, an organization that spans SAIC’s modeling and simulation capabilities. Prior to SAIC, Dr. Franceschini held academic and research positions at the University of Central Florida (UCF) and its Institute for Simulation and Training. He plays an active role in science, technology, engineering, and mathematics programs in central Florida. He received both a BS and a Ph.D. in computer science at UCF.


I’m looking forward to meeting you in Orlando!

Bad Game AI / Good Game AI (Part 2)

Thursday, June 16th, 2011

So, we left the last post with the intention of finding other games that had good AI, or at least AI that didn’t suck. We’ve asked everybody we know, we’ve scoured our game collection, and we’ve still come up empty. You know what? Almost all game AI sucks; some just sucks more than others.

Without naming names – but getting as close as we dare – here are some real stinkers:

Sports Games. I like sports games; especially ‘management’ sports games where you get to trade for players and call the plays. I’m not interested in actually being the quarterback or the batter and precisely timing when to press the ‘A’ button or whatever. I like sports games because the AI sucks and I can easily manage the Chicago Cubs to win back-to-back-to-back World Series championships (for my foreign readers: the Chicago Cubs are the worst team in the history of professional baseball, or maybe professional sports in general, and, yes, I’m a Cubs fan).

To me, this is especially galling because writing sports AI should be pretty easy; well, easier than writing wargame AI. First, baseball and football (the only sports that really excite me from a game AI perspective) are really well understood, and there are tons of statistics recorded since the beginning of these sports. Stats are very important in creating good AI: they allow us to create accurate models of what has happened and what will probably happen in the future. We can use this to our advantage.

A quick example: you’re calling the defensive plays in a football game. It is third down and the offense has to move the ball 25 yards for a first down. What do you think is going to happen? Well, most humans would know that the offense is going to call a passing play. What should the defense do? I’ll give you a hint: don’t expect a running play off tackle. Yet, most football games are pretty clueless in this classic ‘passing down’ situation. Indeed, sports game AI is clueless when it comes to knowing what happened in the past and what is likely to occur next. These games don’t keep any stats for the AI to use. Doing so would come in handy for unsupervised machine learning (I was going to link to a post below but, hey, just scroll down), a subject I plan on writing about a great deal more in the future.
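To make the point concrete, here is a minimal sketch in C++ of what ‘keeping stats for the AI’ could look like. The names and the yards-to-go bucketing are entirely hypothetical; the idea is just to record what the human calls in each down-and-distance situation and pick a defense from the observed frequencies.

#include <map>
#include <string>
#include <utility>

// Hypothetical sketch: record what the human calls in each
// down-and-distance bucket; use the frequencies to call a defense.
struct PlayStats {
    int passes = 0;
    int runs = 0;
};

class DefensiveCaller {
    // Key: (down, yards-to-go bucket), e.g. {3, 2} = third and long.
    std::map<std::pair<int, int>, PlayStats> history;

    static int Bucket(int yardsToGo) {
        if (yardsToGo <= 3) return 0;   // short
        if (yardsToGo <= 7) return 1;   // medium
        return 2;                       // long
    }

public:
    // Call after every offensive snap so the model keeps learning.
    void Record(int down, int yardsToGo, bool wasPass) {
        PlayStats& s = history[{down, Bucket(yardsToGo)}];
        if (wasPass) ++s.passes; else ++s.runs;
    }

    // Third and 25: the history (and common sense) says defend the pass.
    std::string CallDefense(int down, int yardsToGo) const {
        auto it = history.find({down, Bucket(yardsToGo)});
        if (it == history.end()) return "base defense";
        return (it->second.passes > it->second.runs) ? "deep zone" : "run defense";
    }
};

A dozen lines of bookkeeping, and the AI stops biting on third and 25. That is all ‘keeping stats for the AI’ has to mean.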

And one more thing about sports games: they have no concept of what constitutes a good trade or a bad trade. Let’s say you want to trade for Babe Ruth (for our foreign readers: arguably the greatest baseball player of all time). At some level, the game has a ‘value’ associated with the ‘Babe Ruth’ data object. It could be a letter value, like ‘A’, or it could be a numerical value like 97. If you offer the AI a trade of ten worthless players, valued in the 10-20 range (or ‘D’ players), the AI will take the trade because it is getting more ‘value’ (100-200 ‘player points’ for 97 ‘player points’) even though it’s a stupid decision. Yes, I know some games only allow you to make three or four player trades, but the basic principle is the same: sports game AI usually makes bad trades. And the reason for this is that the AI is ‘rule based’ or uses ‘case based reasoning’. Again, I promise I’ll write more about this type of AI in the future, but for now just be aware that this type of AI sucks.
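To show what I mean, here is a sketch of that ‘sum the ratings’ trade logic, plus one extra rule that would at least reject the ten-scrubs-for-Ruth swindle. The ratings and the tolerance are made up for illustration:

#include <algorithm>
#include <numeric>
#include <vector>

// Hypothetical player rating, e.g. Babe Ruth = 97, a scrub = 10-20.
struct Player {
    int value;
};

int TotalValue(const std::vector<Player>& side) {
    return std::accumulate(side.begin(), side.end(), 0,
        [](int sum, const Player& p) { return sum + p.value; });
}

// The naive rule described above: accept whenever you 'gain' value.
// Ten players worth 10-20 each outscore one 97, so the AI happily
// trades Babe Ruth away for a busload of scrubs.
bool NaiveAccept(const std::vector<Player>& give,
                 const std::vector<Player>& get) {
    return TotalValue(get) > TotalValue(give);
}

// A slightly less gullible rule: never give up a star unless the best
// player coming back is in roughly the same class. Still rule-based,
// still brittle, but it blocks the obvious swindle.
bool SanerAccept(const std::vector<Player>& give,
                 const std::vector<Player>& get) {
    if (give.empty() || get.empty()) return false;
    auto byValue = [](const Player& a, const Player& b) { return a.value < b.value; };
    int bestGiven  = std::max_element(give.begin(), give.end(), byValue)->value;
    int bestGotten = std::max_element(get.begin(), get.end(), byValue)->value;
    return TotalValue(get) > TotalValue(give) && bestGotten >= bestGiven - 10;
}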

Real Time Strategy (RTS) Games (Wargames with Tech Trees). There are a lot of games that fall into this category and they all have serious AI problems. First, writing AI for wargames is very difficult (I do it for a living, so, yeah, I am an expert on this). Second, RTS games can’t ‘afford’ to spend many clock cycles on AI because they have lots of animation going on the screen, polling for user input, etc., and this results in very shallow AI decisions. Lastly, the addition of a Tech Tree (should the AI ‘research’ longitude or crop rotation?) doesn’t make the AI decisions any easier.
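The clock-cycle problem is easy to see in code. A minimal sketch, with a hypothetical scheduler: the AI gets whatever slice of the frame is left over after rendering and input, and any thinking that doesn’t fit simply waits.

#include <chrono>
#include <functional>
#include <queue>
#include <utility>

// Hypothetical per-frame AI budget. With a 16 ms frame mostly eaten
// by rendering and input handling, the AI may get a millisecond or
// two, which is exactly why RTS decisions end up shallow.
class AiScheduler {
    std::queue<std::function<void()>> tasks;  // pathfinding, build orders, scouting...
public:
    void Enqueue(std::function<void()> task) { tasks.push(std::move(task)); }

    // Run queued AI work until the per-frame budget is spent;
    // whatever is left over waits for the next frame.
    void Update(std::chrono::microseconds budget) {
        const auto start = std::chrono::steady_clock::now();
        while (!tasks.empty() &&
               std::chrono::steady_clock::now() - start < budget) {
            tasks.front()();
            tasks.pop();
        }
    }
};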

If anybody out there knows of an RTS game where the AI doesn’t suck, please drop me a line. I would love to play it.

This, unfortunately, brings us to:

Civilization V. Well, so much for not using names. I haven’t even played this game but I just read this review on I Heart Chaos: “Speaking of “miscalculating”, (a polite word for “cheating”) there is a serious issue with Civilization V’s artificial intelligence. It is so unbelievably unbalanced that the experience suffers for it.” (http://www.iheartchaos.com/post/6492357569/ihc-video-game-reviews-civilization-v).

CES 1989

(L to R) The game designer of the Civilization game series, the author of this blog and Johnny Wilson (game magazine writer) at the 1989 CES show. A couple of interesting observations: I had the #1 game at the time, and we all (except Johnny Wilson) had more hair.

Well, that sounds kinda mean-spirited of me, doesn’t it? I haven’t even played the game, but here I’m citing another review that says Civ 5 has lousy AI. Well, the problem is that the whole Civ series (and I have played some of the earlier ones) has suffered from bad AI, or AI that just plain cheated. And that’s another problem; the game developer (who shall remain nameless) kinda has a history of using ‘cheating’ AI. That is to say, his AI often ‘sees through’ the fog of war (i.e. you, the player, can’t see your opponent’s units but your computer opponent can see all of yours), and, well, there’s just not a nice way to say this… the ‘dice rolls’ have a tendency to get fudged… in favor of the computer.
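For readers who haven’t seen it from the inside: fudged dice are depressingly easy to write. A purely hypothetical sketch, and not an accusation about any specific game’s code; the computer’s rolls go through the same function as yours, it just gets a quiet reroll on bad results.

#include <random>

// Both sides appear to roll the same d6, but the computer
// quietly rerolls its worst results.
int Roll(std::mt19937& rng, bool forComputer) {
    std::uniform_int_distribution<int> d6(1, 6);
    int roll = d6(rng);
    if (forComputer && roll < 3) roll = d6(rng);  // the 'fudge'
    return roll;
}

Run that a few thousand times and the computer’s average roll creeps from 3.5 up to about 4.2, which is plenty to decide a close wargame and almost impossible for a player to prove.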

So, there you have it: the current state of AI for computer games isn’t pretty. For the most part, it’s ‘rule based’ or ‘case based reasoning’, which is extremely inflexible (we sometimes use the word ‘brittle’ to indicate AI that is easily broken).

I am more convinced than ever that the solution is unsupervised machine learning. So, I will be returning to that topic in the next blog entry.


Bad Game AI / Good Game AI (Part 1)

Thursday, June 9th, 2011

Most game AI is bad AI. Let’s be honest; it’s not just bad, it sucks.

I’ve been writing and playing computer games since the early 1980s and I haven’t seen even a modest improvement in the quality of computer opponents. There are a few notable exceptions – and we’ll get to them shortly – but, the vast majority of commercial games that are released were developed with little thought, or budget, given to AI.

So, since it’s such a short list, let’s start with a few computer games that have good AI:

Computer Chess. Any computer chess program that is available today, including ‘freebie’ online Java applets, will kick your ass. Back in the ‘70s I had an ‘electronic chess game’ that played as well as I did (I was about a 1600-level player at the time). The game had various levels of AI, but all that changed was how much time the machine was given to make a move. If you put it on the top level it would take forever contemplating all the responses to the opening P-K4.

So, why was chess AI pretty good thirty-five years ago and even better now? There are a couple of reasons, the first being that chess can be divided into three ‘phases’: the opening, the middlegame and the endgame. Chess openings are very well understood and there are a number of ‘standard’ texts on the subject such as Batsford Chess Openings Volumes 1 and 2. These chess openings are available in various file formats and are easily integrated into a chess engine. So, until the program is ‘out of book’ the most important moves, the opening moves, are expertly played by the program without any AI at all. There are also books for endgame positions. So, really, the only difficult area for chess programs is the middlegame.
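A minimal sketch of the book idea, assuming a hypothetical engine: look the current position up in a table of known replies, and only fall back to real search once the lookup misses. (A real engine would key on a Zobrist hash of the position; an abbreviated FEN-style string works for illustration.)

#include <string>
#include <unordered_map>
#include <vector>

class OpeningBook {
    std::unordered_map<std::string, std::vector<std::string>> book;
public:
    OpeningBook() {
        // After 1. e4 (P-K4): a few of the standard replies.
        book["rnbqkbnr/pppppppp/8/8/4P3/8/PPPP1PPP/RNBQKBNR b"] =
            {"c7c5", "e7e5", "e7e6"};  // Sicilian, Open Game, French
    }

    // Returns a book move while we are still 'in book'; once this
    // fails, the engine falls back to actual search.
    bool Probe(const std::string& position, std::string& move) const {
        auto it = book.find(position);
        if (it == book.end() || it->second.empty()) return false;
        move = it->second.front();  // or pick at random for variety
        return true;
    }
};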

The first chess problem solved by a computer, by Dr. Dietrich Prinz with the Manchester Mark 1 in 1951 (White to mate in two; the solution is R-R6, PxR; P-N7 mate).

There are dozens of very good articles, papers and books on evaluating chess positions using a heuristic evaluation function. Here’s a pretty good page on the subject, even though it looks like all the picture links are broken: http://www.cs.cornell.edu/boom/2004sp/ProjectArch/Chess/algorithms.html. And here’s a link to a series on building a chess engine: http://www.gamedev.net/page/resources/_/reference/programming/artificial-intelligence/gaming/chess-programming-part-vi-evaluation-functions-r1208.
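In the spirit of those references, here is a toy evaluation function: material plus a small mobility term, with positive scores good for White. The piece values are the conventional centipawn ones; the board encoding and the mobility counts are simplified for illustration.

#include <array>

enum Piece { EMPTY = 0, PAWN, KNIGHT, BISHOP, ROOK, QUEEN, KING };

// Conventional centipawn values; the sign of a square encodes the
// side (positive = White, negative = Black). Kings carry no material
// value here because they can never be traded.
constexpr std::array<int, 7> kValue = {0, 100, 320, 330, 500, 900, 0};

int Evaluate(const std::array<int, 64>& board, int whiteMoves, int blackMoves) {
    int score = 0;
    for (int sq : board) {
        if (sq > 0)      score += kValue[sq];    // White piece
        else if (sq < 0) score -= kValue[-sq];   // Black piece
    }
    score += 10 * (whiteMoves - blackMoves);     // small mobility bonus
    return score;
}

Real engines pile on king safety, pawn structure, piece-square tables and much more, but material-plus-mobility is the skeleton most of those articles start from.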

Chess was one of the first games to be implemented on computers. The first chess problem solved by a computer was done by Dr. Dietrich Prinz with the Manchester Mark 1 in 1951 (see picture, right).

Though I could be wrong, I think Dr. Prinz’s program simply employed brute force to solve the problem.

So, why is it comparatively easy to find/write good chess AI? Opening and endgame databases are readily available, evaluation functions for board positions are well understood and (I suspect I’ll get some flak for saying this) it’s a relatively easy game (at least to program, not to master). Also, there are not a lot of pieces, their moves are restricted, the rules of the game are simple and the board size is fixed.

Chris Crawford’s Patton vs. Rommel. Crawford’s Patton vs. Rommel was a wargame that came out in 1987. On the PC (remember this was before Windows) it ran in 640kb (and that included the operating system). The display was 640 x 200 x 2, if I remember correctly (see screen shot).

Chris Crawford's Patton vs. Rommel (1987)

I haven’t played the game in over 20 years, but I remember being very impressed by the AI, specifically how the program had a ‘feel’ for the tactical situation. A very important part of the game was the ‘road net’. Units moved much faster on roads and it was easy to get your units caught up in traffic jams. When that happened, the AI would warn the user. This really shocked me when I first played the game. Chris employed what he called ‘geometric AI’ in Patton vs. Rommel. He goes into more detail in his book, “Chris Crawford on Game Design” (http://www.amazon.com/Chris-Crawford-Game-Design/dp/0131460994).
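I don’t know how Crawford’s ‘geometric AI’ actually detected the jams, but the warning behavior itself is simple to sketch. A hypothetical version: count the units assigned to each road segment and warn when a segment is over capacity.

#include <cstdio>
#include <map>
#include <vector>

struct Unit {
    int id;
    int roadSegment;  // which piece of the road net it is moving on
};

// Warn the player (or the AI) when too many units pile onto one
// segment of the road net. The capacity is a made-up tuning knob.
void WarnAboutTrafficJams(const std::vector<Unit>& units, int capacityPerSegment) {
    std::map<int, int> load;  // road segment -> number of units on it
    for (const Unit& u : units) ++load[u.roadSegment];

    for (const auto& [segment, count] : load) {
        if (count > capacityPerSegment)
            std::printf("Warning: %d units jammed on road segment %d\n",
                        count, segment);
    }
}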


There are plenty of great games out there, but that’s not what this post is about. The question is: which games have good AI? I’m going to need to think about this and see if I can add some more titles to the ‘good AI’ list, because I sure have a ton for the ‘bad AI’ list.

Where’s the Artificial Intelligence we were promised?

Thursday, July 15th, 2010

Way back at the end of the last century we were all led to believe that wonderful ass-kicking AI would be standard issue in every computer game, office application and multibillion dollar DoD wargame. When we were writing these games in the 1980s and ’90s we were still dealing with limitations on available RAM and storage space (games shipped and ran on 3.5″ disks; early hard drives were awfully small). Still, what we did back in the ‘old days’ of computer games was, frankly, at least as good as what is shipping today.

I’ve written a lot of computer games and I’ve played even more. I would be very hard pressed to name a current game that has ‘pretty good’ AI. Based on the reviews I’ve read of most computer games, I’m not alone. AI, even just ‘acceptable’ AI, continues to be the biggest problem facing commercial computer games today.

On the ‘professional’ side, large, expensive wargames (and here I can’t get into many details if I ever want to work in the industry again) had no AI whatsoever until fairly recently. For now, I’ll just say the results aren’t commensurate with the truckloads of money that are being dumped on the problem.

My old friend, Mike Morton, urged me to start writing a blog about AI and this is the result. If there are any complaints about the blog, please write to me and I will send you Mike’s email address. He works for Google and probably has plenty of time to answer email.