
21st Century AI

A blog about achieving meaningful Artificial Intelligence

Posts Tagged ‘wargames’

Another shameless book plug

Friday, April 6th, 2012

My first thriller, The Theory of Games, is now available! Here’s the background story:

Jakob Grant, formerly a top-selling computer game designer, is now down on his luck and barely hanging on, teaching at an exclusive private college. His best friend is a 135-pound Rottweiler-German Shepherd dog named Bill, who has been implanted with the CardioTronic 413 pacemaker (the ‘Dick Cheney’ model). When Jakob is fired from his teaching position, he reluctantly accepts an unusual offer to work on a 3D military simulation; but there is something about the Major General who hired him that doesn’t seem quite right…

Here’s what New York Times bestselling author Jim Hougan (AKA John Case) has to say about it:

 “The Theory of Games” is an off-the-Scoville Scale debut thriller from sometime-Pentagon consultant and fulltime bluesman Ezra Sidran.  By turns (and, often, all at once) funny, smart and scary, Sidran takes us on a broken-field run through the underbelly of the military-industrial complex – where even the wargames – especially the wargames – are played for keeps.

- Jim Hougan (a/k/a John Case: “The Genesis Code,” “The First Horseman,” “The Murder Artist,” “Ghost Dancer,” “The Eighth Day,” “The Syndrome”; also, under his own name, “Secret Agenda,” “Kingdom Come” and “Spooks”)

You can download the first chapter for free here, then follow the link to buy the book on Amazon.

Artificial Intelligence in Modern Military Games @ GameTech 2012

Friday, March 23rd, 2012

I will be MCing the “Artificial Intelligence in Modern Military Games” panel at GameTech 2012 next week (March 29, 2012) in Orlando. I am extremely honored to be joined on this panel by:

Dr. Scott Neal Reilly is currently Principal Scientist and Vice President of the Decision Management Systems Division at Charles River Analytics, an artificial intelligence R&D company in Cambridge, MA. Dr. Neal Reilly’s research focuses on modeling emotional and social behavior in artificial agents and he was Principal Investigator for the US Army’s Culturally Aware Agents for Training Environments (CAATE) program, which focused on developing easy-to-use tools for creating interactive, intelligent, social agents.  Dr. Neal Reilly has a Ph.D. in Computer Science from Carnegie Mellon University, where he developed the Em system to model emotions in broadly capable intelligent agents. Before joining Charles River, Dr. Neal Reilly was Vice President of Production and Lead Character Builder at Zoesis Studios, which developed advanced artificial intelligence techniques for creating animated, artificially intelligent agents.

 

James Korris is CEO and President of Creative Technologies Incorporated (CTI).  CTI, named as one of Military Training Technology’s 2011 Top 100 Simulation and Training companies, is at the forefront of immersive, cognitive simulation development for government and industry.  Recent work includes one of the first DoD augmented virtuality (AV) implementations, the Call For Fire Trainer – AV, along with novel mobile applications for the Fort Sill Fires Center of Excellence.  Korris is currently leading a CTI effort supporting the SAIC Sirius team with a desktop application to mitigate analyst cognitive bias for IARPA.

From its establishment in 1999 until October 2006, Korris served as Creative Director of the U.S. Army-funded Institute for Creative Technologies at the University of Southern California. In this pioneering “serious gaming” environment, Korris led the development of Full Spectrum Warrior, the first military application developed for the Xbox, along with desktop applications Full Spectrum Command and Full Spectrum Leader. Korris’ team captured the DoD 2006 Modeling & Simulation award for training with Every Soldier A Sensor Simulation. In 2007, USJFCOM recognized another Korris-led effort, the Joint Fires & Effects Trainer System, as the highest-rated Close Air Support simulation trainer in the world. In 2008, Korris was appointed to the Naval Research Advisory Committee, advising the Secretary of the Navy on its research portfolio. Korris came to the defense industry following work in Hollywood studio production, producing and writing. He is a member of the writers’ branch of the Academy of Television Arts and Sciences, the Writers Guild of America, the Writers Guild of Canada and the Society of Motion Picture and Television Engineers. His work was recognized in the 2006 Smithsonian Cooper-Hewitt National Design Triennial, Saul Wurman’s eg2006 conference and as a Visionary in Bruce Mau’s Massive Change exhibition. Korris earned a BA from Yale University and an MBA with distinction at the Harvard Business School.

 

Dr. Michael van Lent received a PhD at the University of Michigan in 2000. His expertise is in applying cognitive science approaches to military problems. Dr. van Lent is a recognized expert in the development of advanced simulation systems for military training. He has participated in the design and development of many immersive training applications including Full Spectrum Warrior, Full Spectrum Command, the Joint Fires and Effects Trainer System (JFETS), ELECT BiLAT, UrbanSim, Helping our Heroes and the Strategic Social Interaction Modules program.

 

Robert Franceschini is a vice president and Technical Fellow at Science Applications International Corporation (SAIC). He directs the Modeling and Simulation Center of Expertise, an organization that spans SAIC’s modeling and simulation capabilities.    Prior to SAIC, Dr. Franceschini held academic and research positions at the University of Central Florida (UCF) and its Institute for Simulation and Training.  He plays an active role in science, technology, engineering, and mathematics programs in central Florida.  He received both a BS and a Ph.D. in computer science at UCF.

 

I’m looking forward to meeting you in Orlando!

MATE (Machine Analysis of Tactical Environments)

Sunday, October 30th, 2011

I’ve been working on this project since about 2003 (you could reasonably argue that I actually started development in 1985 when I began work on UMS: The Universal Military Simulator) and I’m finally in a position to share some of this work with the world at large. TIGER (for Tactical Inference GenERator) was my doctoral research project and it was funded, in part, by a DARPA (Defense Advanced Research Projects Agency) ‘seedling’ grant.

After I received my doctorate, DARPA funded my research on computational military reasoning. Since DARPA was already funding a project called TIGR (Tactical Ground Reporting System), my TIGER was renamed MATE.

MATE was created to quickly arrive at an optimal tactical solution for any scenario (battlefield snapshot) presented to the program. It is also designed so that unit data (location, strength, morale, etc.) can be entered quickly via a point-and-click graphical user interface (GUI). There are three main sections to MATE’s decision-making process:

  1. Analysis of the terrain and of the opposing forces’ (REDFOR and BLUFOR) positions on it. This includes the ability to recognize important tactical situations such as ‘anchored’ or ‘unanchored flanks’, ‘interior lines of communication’, ‘restricted avenues of attack’, ‘restricted avenues of retreat’ and the slope of attack.
  2. The ability to implement the five canonical offensive maneuvers: turning maneuver, envelopment, infiltration, penetration and frontal assault. This includes the ability to determine flanking goals and objectives and to plan optimal routes (including routes that avoid enemy line of sight and enemy fire).
  3. Unsupervised machine learning, which allows MATE to classify the current tactical situation within the context of previously observed situations (including historically critiqued battles). A bare-bones sketch of this last step follows the list.
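To make step 3 a little more concrete, here is a minimal, hypothetical sketch of the idea; it is my own illustration, not MATE’s actual code, and the feature set and library entries are invented. A battlefield snapshot is reduced to a short feature vector and then matched against previously observed situations by simple nearest-neighbor distance (a stand-in for the real unsupervised classifier):

```python
import math

# Hypothetical feature extraction for a battlefield snapshot. The real MATE
# features (anchored/unanchored flanks, interior lines, slope of attack, etc.)
# are far richer; these three numbers are stand-ins for illustration.
def situation_features(red_units, blue_units):
    """Reduce a snapshot (unit positions as (x, y) tuples) to a feature vector:
    force ratio, closest separation between the forces, and RED's frontage."""
    ratio = len(red_units) / max(len(blue_units), 1)
    separation = min(math.dist(r, b) for r in red_units for b in blue_units)
    red_frontage = max(x for x, _ in red_units) - min(x for x, _ in red_units)
    return [ratio, separation, red_frontage]

def classify(features, library):
    """Return the previously observed situation whose feature vector is
    closest to the current one (nearest neighbor as a stand-in for clustering)."""
    return min(library, key=lambda known: math.dist(known["features"], features))

# Usage: the library would be built offline from prior (or historical) scenarios.
library = [
    {"name": "envelopment opportunity", "features": [1.5, 2.0, 6.0]},
    {"name": "frontal assault only",    "features": [1.0, 1.0, 2.0]},
]
snapshot = situation_features([(0, 0), (1, 0), (2, 0)], [(1, 4), (2, 4)])
print(classify(snapshot, library)["name"])
```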

I wanted to test MATE with an actual tactical situation that occurred recently in either Afghanistan or Iraq. Even though my research was supported by DARPA, I did not have access to recent ‘after action’ reports. However, when I saw the HBO documentary “The Battle for Marjah,” I realized that it presented enough information to test MATE.

The clip below, from the HBO documentary, shows the tactical situation faced by Bravo Company, 1/6 Marines, on February 13, 2010:

It took only a few seconds to enter the RED and BLUE unit locations into MATE (the map was downloaded from Google Earth):


Screen capture of MATE showing the Battle for Marjah tactical situation. Click on image to see full-size.

After clicking on the ‘Calculate AI’ icon, the ‘Analyze and Classify Current Situation’ button and the ‘Generate HTML Output and Launch Browser’ button, MATE’s analysis of the tactical situation was displayed. Total elapsed time was less than 10 seconds on a Windows XP system (about 5 seconds on a Windows 7 system).

MATE then automatically generated HTML pages of its recommendations including graphically displaying optimal paths for an envelopment maneuver that encircled enemy positions:


MATE output for Envelopment Maneuver COA. Click on image to see full-size.

MATE automatically produced HTML pages with its analysis and its optimal course of action (COA) routes and instructions, then launched the computer’s default browser.
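That report-and-launch step is straightforward; here is a minimal, hypothetical sketch of what it might look like (the function name, page layout and example recommendations are invented, not MATE’s actual output code):

```python
import webbrowser
from pathlib import Path

# Hypothetical report step (not MATE's actual output code): write the
# recommended course of action to an HTML file and open the default browser.
def publish_coa(title, recommendations, out_file="coa_report.html"):
    items = "\n".join(f"<li>{step}</li>" for step in recommendations)
    html = f"<html><body><h1>{title}</h1><ol>{items}</ol></body></html>"
    path = Path(out_file).resolve()
    path.write_text(html, encoding="utf-8")
    webbrowser.open(path.as_uri())  # launch the system's default browser

publish_coa("Envelopment Maneuver COA", [
    "Fix the enemy front with one platoon",
    "Move a second platoon along the covered eastern route",
    "Assault the objective from the flank",
])
```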

To see the actual HTML output of MATE’s analysis of the Battle for Marjah situation, click here (opens in a new window).

For more information about MATE contact sidran [at] RiverviewAI.com.

Bad Game AI / Good Game AI (Part 2)

Thursday, June 16th, 2011

So, we left the last post with the intention of finding other games that had good AI, or at least AI that didn’t suck. We asked everybody we know, we scoured our game collection, and we still came up empty. You know what? Almost all game AI sucks; some just sucks more than others.

Without naming names – but getting as close as we dare – here are some real stinkers:

Sports Games. I like sports games, especially ‘management’ sports games where you get to trade for players and call the plays. I’m not interested in actually being the quarterback or the batter and precisely timing when to press the ‘A’ button or whatever. I like sports games because the AI sucks and I can easily manage the Chicago Cubs to win back-to-back-to-back World Series championships (for my foreign readers: the Chicago Cubs are the worst team in the history of professional baseball, or maybe professional sports in general, and, yes, I’m a Cubs fan).

To me, this is especially galling because writing sports AI should be pretty easy; well, easier than writing wargame AI. First, baseball and football (the only sports that really excite me from a game AI perspective) are really well understood, and a ton of statistics have been recorded since the beginning of these sports. Stats are very important in creating good AI: they allow us to build accurate models of what has happened and what will probably happen in the future. We can use this to our advantage.

A quick example: you’re calling the defensive plays in a football game. It is third down and the offense has to move the ball 25 yards for a first down. What do you think is going to happen? Well, most humans would know that the offense is going to call a passing play. What should the defense do? I’ll give you a hint: don’t expect a running play off tackle. Yet most football games are pretty clueless in this classic ‘passing down’ situation. Indeed, sports game AI is clueless when it comes to knowing what happened in the past and what is likely to occur next. These games don’t keep any stats for the AI. Doing so would come in handy for unsupervised machine learning (I was going to link to a post below but, hey, just scroll down), a subject I plan to write about a great deal more in the future.
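As a hedged illustration (my own sketch, not code from any shipped title), here is roughly how little it would take to keep down-and-distance stats and use them to anticipate the pass on third-and-long; the bucket sizes and default call are invented:

```python
from collections import defaultdict

# Hypothetical stat tracker: count what the offense has called in each
# down-and-distance bucket, then predict the most likely call next time.
class PlayCallModel:
    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))

    def bucket(self, down, yards_to_go):
        # Coarse buckets: what matters is "3rd and long", not 3rd-and-25 exactly.
        return (down, "long" if yards_to_go >= 7 else "short")

    def record(self, down, yards_to_go, call):
        self.counts[self.bucket(down, yards_to_go)][call] += 1

    def predict(self, down, yards_to_go):
        seen = self.counts[self.bucket(down, yards_to_go)]
        if not seen:
            return "run"  # no data yet, fall back to a default guess
        return max(seen, key=seen.get)

model = PlayCallModel()
model.record(3, 12, "pass")
model.record(3, 9, "pass")
model.record(3, 8, "run")
print(model.predict(3, 25))  # -> "pass": set the defense accordingly
```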

And one more thing about sports games: they have no concept of what constitutes a good trade or a bad trade. Let’s say you want to trade for Babe Ruth (for our foreign readers: arguably the greatest baseball player of all time). At some level, the game has a ‘value’ associated with the ‘Babe Ruth’ data object. It could be a letter value, like ‘A’, or it could be a numerical value like 97. If you offer the AI a trade of ten worthless players, valued in the 10-20 range (or ‘D’ players), the AI will take the trade because it is getting more ‘value’ (100-200 ‘player points’ for 97 ‘player points’), even though it’s a stupid decision. Yes, I know some games only allow you to make three- or four-player trades, but the basic principle is the same: sports game AI usually makes bad trades. And the reason for this is that the AI is ‘rule-based’ or uses ‘case-based reasoning’. Again, I promise I’ll write more about this type of AI in the future, but for now just be aware that this type of AI sucks.
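Here is a hypothetical sketch of that failure mode and one simple way around it; the numbers, the depth discount and the function names are all invented for illustration, not taken from any actual game:

```python
# Hypothetical trade evaluator. Summing raw player values lets ten 15-point
# scrubs "outbid" one 97-point superstar; comparing the best player offered
# and sharply discounting the depth pieces avoids that failure mode.
def naive_accept(offered, asked):
    return sum(offered) > sum(asked)  # the sucker's rule

def better_accept(offered, asked, depth_discount=0.25):
    def worth(values):
        best = max(values)
        depth = sum(sorted(values, reverse=True)[1:]) * depth_discount
        return best + depth
    return worth(offered) > worth(asked)

scrubs = [15] * 10  # ten 'D'-grade players
ruth = [97]         # arguably the greatest of all time
print(naive_accept(scrubs, ruth))   # True  -- the AI gets fleeced
print(better_accept(scrubs, ruth))  # False -- it keeps the superstar
```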

Real Time Strategy (RTS) Games (Wargames with Tech Trees). There are a lot of games that fall into this category and they all have serious AI problems. First, writing AI for wargames is very difficult (I do it for a living, so, yeah, I am an expert on this). Second, RTS games can’t ‘afford’ to spend many clock cycles on AI because they have lots of animation going on the screen, polling for user input, etc., and this results in very shallow AI decisions. Lastly, the addition of a Tech Tree (should the AI ‘research’ longitude or crop rotation?) doesn’t make the AI decisions any easier.

If anybody out there knows of an RTS game where the AI doesn’t suck, please drop me a line. I would love to play it.

This, unfortunately, brings us to:

Civilization V. Well, so much for not using names. I haven’t even played this game, but I just read this review on I Heart Chaos: “Speaking of “miscalculating”, (a polite word for “cheating”) there is a serious issue with Civilization V’s artificial intelligence. It is so unbelievably unbalanced that the experience suffers for it.” (http://www.iheartchaos.com/post/6492357569/ihc-video-game-reviews-civilization-v)

CES 1989

(L to R) The game designer of the Civilization game series, the author of this blog and Johnny Wilson (game magazine writer) at the 1989 CES show. A couple of interesting observations: I had the #1 game at the time and we all (except Johnny Wilson) had more hair.

Well, that sounds kinda mean-spirited of me, doesn’t it? I haven’t even played the game, but here I am citing another review that says Civ 5 has lousy AI. Well, the problem is that the whole Civ series (and I have played some of the earlier ones) has suffered from bad AI, or AI that just plain cheated. And that’s another problem: the game developer (who shall remain nameless) kinda has a history of using ‘cheating’ AI. That is to say, his AI often ‘sees through’ the fog of war (i.e., you, the player, can’t see your opponent’s units, but your computer opponent can see all of yours), and, well, there’s just not a nice way to say this… the ‘dice rolls’ have a tendency to get fudged… in favor of the computer.

So, there you have it: the current state of AI for computer games isn’t pretty. For the most part, it’s ‘rule-based’ or ‘case-based reasoning’, which is extremely inflexible (we sometimes use the word ‘brittle’ for AI that is easily broken).

I am more convinced than ever that the solution is unsupervised machine learning. So, I will be returning to that topic in the next blog entry.

 

 

Pathfinding, Academia and the Real World

Thursday, July 15th, 2010

“When game developers look at AI research, they find little work on the problems that interest them, such as nontrivial pathfinding, simple resource management and strategic decision-making, bot control, behavior-scripting languages, and variable levels of skill and personality — all using minimal processing and memory resources. Game developers are looking for example “gems”: AI code that they can use or adapt to their specific problems. Unfortunately, most AI research systems are big hunks of code that require a significant investment of time to understand and use effectively.”
- John Laird, “Bridging the Gap Between Developers & Researchers”

Pathfinding is a least-weighted path graph problem. There, I said it.

My plan for this blog was to keep it at a fairly high level of abstraction and not to use terms that would scare away computer game designers who, for the most part, feel about math the way vampires feel about garlic (or silver, or crosses, depending on which particular form of vampire fiction you’re reading this week). But, yeah, pathfinding (which is an important part of RPGs, wargames and even sports games) can be reduced to a well-researched and well-understood mathematical problem.

Computer game designers (or programmers) usually work on a grid; that is to say, their world (or battlefield, or playing field) can be represented by a two-dimensional matrix. Though we often think of a graph as a tree-like object, a two-dimensional matrix in which every cell is connected to its neighbors (e.g., a battlefield, RPG world, football field, etc.) can also be represented as a graph.

So, as computer game programmers, why do we care that RPG worlds can be represented as graphs? Well, it turns out that there is an algorithm that can be mathematically proven to return the shortest path between two points in a graph. Now, if you’re not a computer game programmer you’re probably saying, “Well, if they’ve got this whole shortest-path thing worked out, why is it that half the time in a game when I tell my player to go from Point A to Point B it gets stuck behind a wall or a rock or something, trapped like a doofus?” And the answer to that is: because the programmer didn’t use a guaranteed least weighted path algorithm.

The reason we refer to this type of algorithm as a “least weighted path” algorithm is that we can assign a ‘weight’ representing the ‘cost’ of moving from one cell (or square) to the next. For example, we might say that moving ‘north’ through ‘clear’ terrain costs 1, moving northeast through a swamp costs 3, and moving east through a forest costs 2, and so on. So we’re not really talking about a ‘fastest’ path or a ‘shortest’ path but, technically, the path with the lowest ‘movement cost’, or weight. Does that make sense? If it doesn’t, drop me an email and I’ll explain it, hopefully more clearly, in another post.
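In code, those weights can be as simple as a lookup table; this is a hypothetical sketch (my own terrain names and numbers, not any particular game’s):

```python
# Hypothetical terrain cost table: a least-weighted-path search sums these
# values as it steps from cell to cell.
TERRAIN_COST = {"clear": 1, "forest": 2, "swamp": 3}
IMPASSABLE = {"wall", "water"}

def move_cost(grid, x, y):
    """Cost of stepping onto grid[y][x]; None means the cell is blocked."""
    terrain = grid[y][x]
    if terrain in IMPASSABLE:
        return None
    return TERRAIN_COST[terrain]
```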

And now let’s look at the reasons why the programmer didn’t use a guaranteed least weighted path algorithm:

  1. Speed. The oldest guaranteed least weighted path algorithm is commonly known as Dijkstra’s algorithm. It was first published by Edsger Dijkstra in 1959, and there is a very good explanation of how it works (complete with animation) here. Dijkstra’s algorithm is taught in every undergrad AI class. It’s not very complicated and it’s easy to implement (in fact, it’s usually an assignment by the middle of the semester). So if the “fastest path from Point A to Point B in a computer game” problem has been solved, why doesn’t everybody use Dijkstra’s algorithm? Good question. The reason computer game programmers don’t use Dijkstra’s algorithm is that it is an “exhaustive search”: it fans out in every direction, examining essentially every reachable cell, before it settles on the fastest route. In other words, it’s sloooooow. Really, really slow.
  2. Time. I’m not talking game development time – though often the problem is simply that the game publisher is forcing the game ‘out the door’ before it’s really completed – I mean ‘clock cycles’: there isn’t enough time for the computer both to render all those polygons that represent explosions and mountains and whatever and to do AI calculations. AI constantly gets short shrift when the CPU budget is being determined.
  3. Ignorance. There’s another solution to the “least weighted path problem” known as A*, pronounced ‘A star’ (I once had the temerity to ask Nils Nilsson, one of the authors of A*, what the ‘A’ stood for and he replied, “algorithm”; and now we all know). Thankfully, Amit Patel has done a great job of explaining A* (with lots of graphics) here, so I don’t have to. Again, I’m not going to get into the math behind A*, but I am going to leave you with this mind-blowing statement: “the guaranteed worst-case runtime of A* is the best-case runtime of Dijkstra’s algorithm.” What did he just say? That Dijkstra’s algorithm will never return a least weighted path in less time than A*. So, you should be asking, “Well, why doesn’t everybody use A* instead of Dijkstra’s algorithm?” The answer, as crazy as this sounds, is that A* is rarely, if ever, taught in academia. What?!?!? Yeah, there is a weird prejudice against it: Dijkstra solved the problem way back in ’59, it is a mathematically proven solution, so why bother with anything else? (There’s a short A* sketch right after this list.)
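For completeness, here is a compact, hedged A* sketch on a grid using the hypothetical terrain costs from above; it is the standard textbook formulation, not the TIGER/MATE code. With an admissible heuristic such as Manhattan distance it returns the same least weighted path as Dijkstra’s algorithm while typically expanding far fewer cells (and setting the heuristic to zero turns it back into Dijkstra’s algorithm):

```python
import heapq

TERRAIN_COST = {"clear": 1, "forest": 2, "swamp": 3}  # same hypothetical table as above

def a_star(grid, start, goal):
    """Return the least weighted path from start to goal as a list of (x, y) cells."""
    def heuristic(cell):
        # Manhattan distance; admissible because every step costs at least 1.
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    def neighbors(cell):
        x, y = cell
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= ny < len(grid) and 0 <= nx < len(grid[0]) and grid[ny][nx] in TERRAIN_COST:
                yield (nx, ny), TERRAIN_COST[grid[ny][nx]]

    frontier = [(heuristic(start), 0, start)]  # entries are (f = g + h, g, cell)
    best_g = {start: 0}
    came_from = {}
    while frontier:
        _, g, cell = heapq.heappop(frontier)
        if cell == goal:                       # first time the goal pops, g is optimal
            path = [cell]
            while cell in came_from:
                cell = came_from[cell]
                path.append(cell)
            return path[::-1]
        if g > best_g.get(cell, float("inf")):
            continue                           # stale queue entry, skip it
        for nxt, step_cost in neighbors(cell):
            new_g = g + step_cost
            if new_g < best_g.get(nxt, float("inf")):
                best_g[nxt] = new_g
                came_from[nxt] = cell
                heapq.heappush(frontier, (new_g + heuristic(nxt), new_g, nxt))
    return None                                # goal unreachable

# Usage: a small map where the unit has to route around a wall.
grid = [
    ["clear", "clear",  "wall",  "clear"],
    ["clear", "forest", "wall",  "clear"],
    ["clear", "swamp",  "clear", "clear"],
]
print(a_star(grid, (0, 0), (3, 0)))
```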

You see, in academia, especially in computer science (which is usually a department inside the division of mathematical sciences), RAM and runtime are often thought of as ‘infinite’ commodities. I suspect that this way of thinking about problems stems, in part, from Turing’s theoretical machine, which had infinite memory (an infinite strip of tape) and an infinite amount of time to solve a problem. Unfortunately, those of us who live and work in the ‘real world’ have neither infinite RAM nor infinite time. We need algorithms that return solutions fast.

Lastly, I would like to mention Dimdal’s work on optimizing A* in ‘real world terrain’ situations. I wrote a paper about it, which is available here. I use a modified version of Dimdal’s modification of Nilsson’s A* for my own pathfinding routines in TIGER. The result is an extremely fast least weighted path algorithm that takes a fraction of the time of Dijkstra’s algorithm or standard A*.

And now you know the difference between academia and the real world and why your character in an RPG gets stuck behind a bunch of rocks acting like a doofus.

Where’s the Artificial Intelligence we were promised?

Thursday, July 15th, 2010

Way back at the end of the last century, we were all led to believe that wonderful ass-kicking AI would be standard issue in every computer game, office application and multibillion-dollar DoD wargame. When we were writing these games in the 1980s and ’90s we were still dealing with limitations on available RAM and storage space (games shipped and ran on 3.5″ disks; early hard drives were awfully small). Still, what we did back in the ‘old days’ of computer games was, frankly, at least as good as what is shipping today.

I’ve written a lot of computer games and I’ve played even more. I would be very hard pressed to name a current game that has ‘pretty good’ AI. Based on the reviews I’ve read of most computer games, I’m not alone. AI, even just ‘acceptable’ AI, continues to be the biggest problem facing commercial computer games today.

On the ‘professional’ side, large, expensive wargames (and here I can’t get into many details if I ever want to work in the industry again) had no AI whatsoever until fairly recently. For now, I’ll just say the results aren’t commensurate with the truckloads of money being dumped on the problem.

My old friend, Mike Morton, urged me to start writing a blog about AI and this is the result. If there are any complaints about the blog, please write to me and I will send you Mike’s email address. He works for Google and probably has plenty of time to answer email.