
21st Century AI

A blog about achieving meaningful Artificial Intelligence

Archive for June, 2011

Learning without Supervision (Part 2): Objects & Attributes

Tuesday, June 21st, 2011

As I mentioned in my previous posting, I’m reading the Dalai Lama’s My Spiritual Journey. Actually, I’m reading it for the second time and I will probably read it again at least once more. It’s not hard to understand. It’s well written. Ninety percent of what the Dalai Lama writes I understand perfectly and I agree with completely. What throws me for a loop is when he writes, “Phenomena, which manifest to our faculties of perception, have no ultimate reality.”

Is the Dalai Lama speaking metaphorically? I don’t think so; though apparently there are two schools of Buddhist thought on the subject of reality (see here).

This is especially confusing for me because I am a scientist. There are days that I have a hard time believing this, myself, but I’ve hung my diplomas in my office so whenever I have a moment of self-doubt I can gaze at the expensively framed certificates. The signatures look real. I doubt if the President of the Board of Regents signs the undergrad degrees but his signature on my doctorate looks pretty legit.

My doctoral research, and the research that I do now, involves unsupervised machine learning (UML) and it is very much tied to reality; or at least our perceptions of reality. Curiously, the early research in what would become UML was conducted by psychologists who were interested in visual perception and how the human brain categorizes perceived objects.

So, let’s proceed as if this universe and everything in it are objects that can be described. Our perceptions may be completely wrong – they may have no ultimate reality – but I know the technique of unsupervised machine learning works; at least in this illusion that we call ‘reality’.

For a machine to make intelligent decisions about the world it has to understand the world. It has to grok the world.

Key concept: The world is made up of objects. Objects are described by attributes.

Therefore, the world can be described, and the world can be understood within the context of previously observed similar situations. In essence, the machine says, “I can categorize this new situation. I have seen something similar (or many similar things) before. I can put this new situation in the context of previously observed things. This is what happened previously. These are the things that I need to be aware of. This is what I need to look at now.”

So, let’s talk about objects and attributes.

Let’s start by selecting some random object on our desk. Okay, a coffee cup. What attributes can we use to describe the coffee cup object? Well, it’s a cylinder, closed at one end and open at the other, and it has a height, a radius, a weight, and a handle. It also has a cartoon on it. It also has some fruit juice in it. So we can describe the object with these attributes. However, some of these attributes are very important while others (the fact that the coffee cup contains fruit juice or that it has a cartoon on it) are irrelevant to the object being a coffee cup.
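To make this concrete, here is a minimal sketch in Python of the coffee cup as a bag of attributes. The attribute names and measurements are invented for illustration, not taken from any particular system:

```python
# A minimal sketch: an object is just a named bag of attributes.
# All names and values here are illustrative, not from a real system.
coffee_cup = {
    "shape": "cylinder",
    "closed_ends": 1.0,          # closed at one end, open at the other
    "handles": 1.0,
    "height_cm": 9.5,            # made-up measurement
    "radius_cm": 4.0,            # made-up measurement
    "cartoon": True,             # irrelevant to 'coffee-cup-ness'
    "contents": "fruit juice",   # also irrelevant
}
```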

Now let’s look at some other random objects on my desk. I’ve got five plastic paper clip containers on it. They are also cylinders, closed at one end and open at the other, but none of them have handles. Also on my desk there are a couple of cylinders closed at both ends that are plastic barrels of Juicy Fruit gum.

So, if we were to take these eight objects found on my desk (yes, I have a very large desk) and start ‘feeding’ them into our machine, we would end up with three ‘clusters’ of objects: one cluster would contain five objects described as cylinders, closed at one end and open at the other; one cluster would contain two objects described as cylinders closed at both ends; and one cluster would contain one object described as a cylinder closed at one end, open at the other end, and with a handle.

Now I just went and got a cup of coffee and put it on my desk. The machine asks itself (and we’ll get into how this is done in the next installment): does this new object belong with the paper clip holders? No, it has a handle. Does it belong with the Juicy Fruit gum barrels? No, one end of the cylinder is open and it has a handle. The machine then compares it with the other ‘coffee cup’ object, sees that they’re very similar, and places the new object with the previously observed and categorized coffee cup. Voilà!
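Here is a toy sketch of that whole process, under the (very) simplifying assumption that ‘similar’ means identical structural attributes. The real machinery uses a category utility function, which the next post covers; the object list just mirrors the desk inventory above:

```python
from collections import defaultdict

# Each object reduced to the attributes that matter for this toy example:
# (object name, closed ends, handles)
objects = [
    ("coffee cup", 1.0, 1.0),
    ("paper clip holder 1", 1.0, 0.0),
    ("paper clip holder 2", 1.0, 0.0),
    ("paper clip holder 3", 1.0, 0.0),
    ("paper clip holder 4", 1.0, 0.0),
    ("paper clip holder 5", 1.0, 0.0),
    ("gum barrel 1", 2.0, 0.0),
    ("gum barrel 2", 2.0, 0.0),
]

# 'Cluster' by identical attribute signatures -- the degenerate case of
# unsupervised learning where similarity means exact equality.
clusters = defaultdict(list)
for name, closed_ends, handles in objects:
    clusters[(closed_ends, handles)].append(name)

# A new coffee cup arrives; its signature matches the existing cup's cluster.
new_object = ("second coffee cup", 1.0, 1.0)
clusters[(new_object[1], new_object[2])].append(new_object[0])

for signature, members in clusters.items():
    print(signature, members)
# Three clusters: five clip holders, two gum barrels, two coffee cups.
```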

How does the machine ‘see’ where to place new objects? It involves a category utility function and this will be the subject of the next blog post.

How do you know which attributes are important for classification and which are irrelevant (like the cartoon on the coffee cup)? This involves humans, specifically Subject Matter Experts (SMEs). In my doctoral research I showed that it is crucial to include SMEs throughout the development process. We conduct blind surveys with SMEs to determine:

  1. Whether there is a consensus among the SMEs that specific objects can be defined by specific attributes.
  2. What those attributes are.
  3. Whether the algorithms that return ‘real world values’ describing these attributes are valid.
  4. Whether the machine’s output is valid.

A three-handled cup called a 'tyg'. The Greeks called three-handled cups 'hydriai'.

Algorithms that describe attributes must return ‘real world values’, which is just a fancy way of saying numbers with a decimal point. For example, an algorithm that returns a value for ‘number of closed ends of a cylinder’ would return either 0.0, 1.0 or 2.0. And an algorithm that returns a value for ‘number of handles’ would return 0.0, 1.0, 2.0, 3.0… What, you say, a three-handled cup? Yup, such beasts exist (see picture at the right).
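In code, the contract is simply ‘object in, float out’. These function names and the dictionary layout are hypothetical stand-ins; real attribute algorithms would measure something from the data rather than read a stored field:

```python
def closed_ends(obj: dict) -> float:
    """Number of closed ends of a cylindrical object: 0.0, 1.0, or 2.0."""
    return float(obj.get("closed_ends", 0))

def handle_count(obj: dict) -> float:
    """Number of handles: 0.0, 1.0, 2.0, 3.0 for the odd tyg..."""
    return float(obj.get("handles", 0))

print(closed_ends({"closed_ends": 1.0, "handles": 1.0}))  # -> 1.0
```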

Okay, the next episode involves some math. So first have a lie down and think cool thoughts until the panic subsides.

 

Philosophical Cybernetics & the Kurzweil Singularity

Saturday, June 18th, 2011

Many years ago, when I was an undergrad, I had to take a religion course. This was something that I flat out refused to do and I discovered, as I so often do, a loophole in the system ripe for exploitation: certain philosophy courses counted as a ‘religion credit’ and one of these, Philosophical Cybernetics, was being offered that semester.

This class had two titles; the philosophical one and Introduction to Artificial Intelligence. Same class, same professor, same credits, but depending on how you signed up for it, it would count as the required religion class.

I haven’t thought about the phrase “Philosophical Cybernetics” in a long time. Because the professor (who shall remain nameless for reasons soon to be obvious) used the terms ‘philosophical cybernetics’ and ‘artificial intelligence’ interchangeably, I always assumed that they were tautological equivalents. It’s a good thing that I checked before writing today’s blog because, like a lot of things he taught, this professor was wrong about this, too.

Today I learned (from http://www.pangaro.com/published/cyber-macmillan.html):

Artificial Intelligence and cybernetics: Aren’t they the same thing? Or, isn’t one about computers and the other about robots? The answer to these questions is emphatically, No.

Researchers in Artificial Intelligence (AI) use computer technology to build intelligent machines; they consider implementation (that is, working examples) as the most important result. Practitioners of cybernetics use models of organizations, feedback, goals, and conversation to understand the capacity and limits of any system (technological, biological, or social); they consider powerful descriptions as the most important result.

The professor who taught Philosophical Cybernetics had a doctorate in philosophy and he freely admitted on the first day of class that he didn’t know anything about AI and that “we were all going to learn this together.” I actually learned quite a bit about AI that semester; though obviously little of it was in that class.

My Spiritual Journey by H. H. The XIV Dalai Lama

Anyway, the whole point of titling today’s blog as, “Philosophical Cybernetics” was going to be this clever word play on the philosophy of AI. This has come about because I’ve been reading, “My Spiritual Journey” by the Dalai Lama and I’ve been thinking about what I do for a living and how it relates to an altruistic, compassionate and interconnected world. Short answer: it doesn’t.

However, it did get me thinking about the power of AI and – hold on to your hats because this is what you came here to read – how AI will eventually kick a human’s ass in every conceivable game and subject.  This event – the day when computers are ‘smarter’ than humans – is commonly referred to as the Kurzweil Singularity and the Wiki link about it is: http://en.wikipedia.org/wiki/Technological_singularity .

Let’s backtrack for a second about Ray Kurzweil. He pretty much invented OCR (Optical Character Recognition) and, as I understand it, made a ton of money selling it to IBM. Then he invented the Kurzweil digital keyboard. This was the first digital keyboard I ever encountered and I can’t tell you how wonderful it was and how, eventually, digital keyboards gave me a new lease on playing piano.

Here are some links to me playing digital keyboards (I actually play an Oberheim, not a Kurzweil, but Kurzweil created most of the technology):

Nicky (an homage to Nicky Hopkins)

Boom, boom, boom! Live with Mojo Rising

Looking Dangerous (with Jerry Brewer)

Old 65

A boogie (with Jason Stuart)

When I first heard Ray Kurzweil talk about ‘The Singularity’ I remember him saying that it was going to happen during his lifetime. Well, Ray is six years older than me and my response was, “that’s not likely unless he lives to be about 115.”

NEWSFLASH: Well, this is embarrassing. Ray Kurzweil was just on the Bill Maher Show (AKA Real Time with Bill Maher) last night (and I vowed that this blog would never be topical or up to date). Anyway, Kurzweil did clarify a couple of important issues:

  1. Kurzweil was going to live practically forever (I can’t remember if it was him or Maher that used the phrase ‘immortal’) and he takes 150 pills a day to achieve this goal. So, I’m thinking, “well, this explains how he expects the Singularity to happen during his lifetime; he’s going to live for thousands of years!” And then he drops this:
  2. The Singularity will occur by 2029!

I think Ray Kurzweil is a brilliant guy but I am dubious that the Singularity will occur by 2029 much less during my lifetime. I would like to live as long as Ray Kurzweil thinks he’s going to live but the actuarial tables aren’t taking bets on me after another 20 years or so.

Alan Turing, the most brilliant mind of the 20th century.

Alan Turing, in my opinion, had the most brilliant mind of the 20th century. He is one of my heroes. He also wrote the following in 1950:

“I believe that in about fifty years’ time it will be possible to programme computers, with a storage capacity of about [10^9 bits], to make them play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning. The original question, “Can machines think?” I believe to be too meaningless to deserve discussion. Nevertheless I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.” (Emphasis added)

- Turing, A.M. (1950). Computing machinery and intelligence. Mind, 59, 433-460.

Okay, so my hero Alan Turing got the whole ‘computers will be intelligent within 50 years’ thing completely wrong. I think Kurzweil’s prediction of it occurring within 18 years is just as unlikely.

But I do think that AI will eventually be everything that Turing and Kurzweil imagined. When do I think this will happen? I dunno, let’s say another 50 years, maybe longer; either way it will be after I have shuffled off this mortal coil. I like to think that my current research will play a part in this happening. It is my opinion that the Kurzweil Singularity, a computer passing Turing’s Test, or a computer displaying “human-level intelligence” will not occur without unsupervised machine learning. Machine learning is absolutely crucial for AI to achieve the level of results that we want. ‘Supervised’ machine learning is good for some simple parlor tricks, like suggesting songs or movies, but it doesn’t actually increase a computer’s ‘wisdom’.

The last subject that I wanted to briefly touch upon was why, ultimately, AI will kick a human’s ass in any game: it’s because AI has no compassion, doubt, or hesitation, and it doesn’t make mistakes. I suppose you could program AI to have compassion, doubt, and hesitation, and to make mistakes, but it would certainly be more trouble than it’s worth.

So, someday computer AI will be the greatest baseball manager of all time. I look forward to this day. I hope I live long enough to see that day. Because that day the Chicago Cubs will finally win the World Series.

Bad Game AI / Good Game AI (Part 2)

Thursday, June 16th, 2011

So, we left the last post with the intention of finding other games that had good AI, or at least AI that didn’t suck. We’ve asked everybody we know, we’ve scoured our game collection, and we’ve still come up empty. You know what? Almost all game AI sucks; some just suck more than others.

Without naming names – but getting as close as we dare – here are some real stinkers:

Sports Games. I like sports games; especially ‘management’ sports games where you get to trade for players and call the plays. I’m not interested in actually being the quarterback or the batter and precisely timing when to press the ‘A’ button or whatever. I like sports games because the AI sucks and I can easily manage the Chicago Cubs to win back to back to back World Series championships (for my foreign readers: the Chicago Cubs are the worst team in the history of professional baseball, or maybe professional sports in general, and, yes, I’m a Cubs fan).

To me, this is especially galling because writing sports AI should be pretty easy; well, easier than writing wargame AI. First, baseball and football (the only sports that really excite me from a game AI perspective) are really well understood, and a ton of statistics has been recorded since the beginning of these sports. Stats are very important in creating good AI: they allow us to create accurate models of what has happened and what will probably happen in the future. We can use this to our advantage.

A quick example: you’re calling the defensive plays in a football game. It is third down and the offense has to move the ball 25 yards for a first down. What do you think is going to happen? Well, most humans would know that the offense is going to call a passing play. What should the defense do? I’ll give you a hint: don’t expect a running play off tackle. Yet, most football games are pretty clueless in this classic ‘passing down’ situation. Indeed, sports game AI is clueless when it comes to knowing what happened in the past and what is likely to occur next. They don’t keep any stats for the AI. Doing so would come in handy for unsupervised machine learning (I was going to link to a post below but, hey, just scroll down); a subject I plan on writing about a great deal more in the future.
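To show how little it would take, here is a hedged sketch of what keeping stats for the AI might look like: a simple frequency table keyed on down and distance, consulted before calling the defense. All the names and the sample data are invented:

```python
from collections import Counter, defaultdict

# history[(down, distance_bucket)] counts what offenses actually called.
history = defaultdict(Counter)

def distance_bucket(yards_to_go: int) -> str:
    """Coarse bucketing so sparse data still generalizes."""
    if yards_to_go <= 3:
        return "short"
    if yards_to_go <= 7:
        return "medium"
    return "long"

def record_play(down: int, yards_to_go: int, play_type: str) -> None:
    history[(down, distance_bucket(yards_to_go))][play_type] += 1

def likely_play(down: int, yards_to_go: int) -> str:
    counts = history[(down, distance_bucket(yards_to_go))]
    return counts.most_common(1)[0][0] if counts else "unknown"

# Feed it some (invented) history, then ask about third-and-25.
record_play(3, 25, "pass")
record_play(3, 22, "pass")
record_play(3, 24, "run")
print(likely_play(3, 25))  # -> "pass": don't stack the box
```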

And one more thing about sports games: they have no concept of what constitutes a good trade or a bad trade. Let’s say you want to trade for Babe Ruth (for our foreign readers: arguably the greatest baseball player of all time). At some level, the game has a ‘value’ associated with the ‘Babe Ruth’ data object. It could be a letter value, like ‘A’, or it could be a numerical value like 97. If you offer the AI a trade of ten worthless players, valued in the 10-20 range (or ‘D’ players), the AI will take the trade because it is getting more ‘value’ (100-200 ‘player points’ for 97 ‘player points’) even though it’s a stupid decision. Yes, I know some games only allow you to make three or four player trades, but the basic principle is the same: sports game AI usually makes bad trades. And the reason for this is that the AI is ‘rule based’ or uses ‘case based reasoning’. Again, I promise I’ll write more about this type of AI in the future, but for now just be aware that this type of AI sucks.
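Here is roughly what that broken trade logic looks like, reduced to a toy sketch (the player values are invented). Even something as crude as also comparing the best player on each side of the trade would avoid this failure:

```python
def naive_ai_accepts(players_offered, players_requested) -> bool:
    """The rule-based trade logic many sports games appear to use:
    accept whenever total incoming 'value' exceeds total outgoing 'value'."""
    return sum(p["value"] for p in players_offered) > sum(
        p["value"] for p in players_requested
    )

babe_ruth = {"name": "Babe Ruth", "value": 97}
scrubs = [{"name": f"Scrub {i}", "value": 15} for i in range(10)]

# Ten 15-point scrubs "outweigh" one 97-point legend: trade accepted.
print(naive_ai_accepts(scrubs, [babe_ruth]))  # True -- a stupid decision
```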

Real Time Strategy (RTS) Games (Wargames with Tech Trees). There are a lot of games that fall into this category and they all have serious AI problems. First, writing AI for wargames is very difficult (I do it for a living, so, yeah, I am an expert on this). Second, RTS games can’t ‘afford’ to spend many clock cycles on AI because they have lots of animation going on the screen, polling for user input, etc., and this results in very shallow AI decisions. Lastly, the addition of a Tech Tree (should the AI ‘research’ longitude or crop rotation?) doesn’t make the AI decisions any easier.

If anybody out there knows of a RTS game where the AI doesn’t suck, please drop me a line. I would love to play it.

This, unfortunately, brings us to:

Civilization V. Well, so much for not using names. I haven’t even played this game but I just read this review on I Heart Chaos: “Speaking of “miscalculating”, (a polite word for “cheating”) there is a serious issue with Civilization V’s artificial intelligence. It is so unbelievably unbalanced that the experience suffers for it.” (http://www.iheartchaos.com/post/6492357569/ihc-video-game-reviews-civilization-v).

CES 1989

(L to R) The game designer of the Civilization game series, the author of this blog, and Johnny Wilson (game magazine writer) at the 1989 CES show. A couple of interesting observations: I had the #1 game at the time, and we all (except Johnny Wilson) had more hair.

Well, that sounds kinda mean-spirited of me, doesn’t it? I haven’t even played the game, but here I’m citing another review that says Civ 5 has lousy AI. Well, the problem is that the whole Civ series (and I have played some of the earlier ones) has suffered from bad AI, or AI that just plain cheated. And that’s another problem; the game developer (who shall remain nameless) kinda has a history of using ‘cheating’ AI. That is to say, his AI often ‘sees through’ the fog of war (i.e. you, the player, can’t see your opponent’s units but your computer opponent can see all of yours), and, well, there’s just not a nice way to say this… the ‘dice rolls’ have a tendency to get fudged… in the favor of the computer.

So, there you have it: the current state of AI for computer games isn’t pretty. For the most part, it’s ‘rule based’ or ‘case based reasoning’, which is extremely inflexible (we sometimes use the term ‘brittle’ to indicate AI that is easily broken).
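‘Brittle’ is easy to demonstrate. A rule-based opponent is a stack of if-statements, and the moment the player does something the rules don’t cover, the AI falls through to a default and looks like an idiot. A toy sketch (the strategies are invented placeholders):

```python
def rule_based_response(player_action: str) -> str:
    # Every case the designers thought of...
    if player_action == "rush":
        return "turtle up"
    if player_action == "turtle":
        return "expand economy"
    if player_action == "expand":
        return "rush"
    # ...and the brittle failure mode: anything unanticipated.
    return "do nothing"  # the AI just broke

print(rule_based_response("feint then backdoor"))  # -> "do nothing"
```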

I am more convinced than ever that the solution is unsupervised machine learning. So, I will be returning to that topic in the next blog entry.

 

 

Bad Game AI / Good Game AI (Part 1)

Thursday, June 9th, 2011

Most game AI is bad AI. Let’s be honest; it’s not just bad, it sucks.

I’ve been writing and playing computer games since the early 1980s and I haven’t seen even a modest improvement in the quality of computer opponents. There are a few notable exceptions – and we’ll get to them shortly – but, the vast majority of commercial games that are released were developed with little thought, or budget, given to AI.

So, since it’s such a short list, let’s start with a few computer games that have good AI:

Computer Chess. Any computer chess program that is available today, including ‘freebie’ online Java applets, will kick your ass. Back in the ‘70s I had an ‘electronic chess game’ that played as well as I did (I was about a 1600 level player at the time). The game had various levels of AI; but all that changed was how much time the machine was given to make a move. If you put it on the top level it would take forever contemplating all the responses to the opening P-K4.

So, why was chess AI pretty good thirty-five years ago and even better now? There are a couple of reasons, the first being that chess can be divided into three ‘phases’: the opening, the middle and the endgame. Chess openings are very well understood and there are a number of ‘standard’ texts on the subject such as Batsford Chess Openings Volumes 1 and 2. These chess openings are available in various file formats and are easily integrated into a chess engine. So, until the program is ‘out of book’, the most important moves, the opening moves, are expertly played by the program without any AI at all. There are also books for endgame positions. So, really, the only difficult area for chess programs is the middlegame.
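Conceptually, an opening book is just a lookup table from the game so far to a known-good reply; no search, no evaluation, until the program falls ‘out of book’. A toy sketch with a few invented entries (real books are large databases keyed on position hashes):

```python
# Toy opening book: move sequence so far -> a 'book' reply.
OPENING_BOOK = {
    (): "e4",
    ("e4", "e5"): "Nf3",
    ("e4", "e5", "Nf3", "Nc6"): "Bb5",  # Ruy Lopez
}

def book_move(moves_so_far: tuple):
    """Return a book reply, or None once we're 'out of book'."""
    return OPENING_BOOK.get(moves_so_far)

print(book_move(("e4", "e5")))   # -> "Nf3"
print(book_move(("a3", "h6")))   # -> None: time to think for ourselves
```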

1st chess problem solved by a computer, by Dr. Dietrich Prinz with the Manchester Mark 1 in 1951 (White to mate in two. The solution is: R-R6, PxR; P-N7 mate.)

There are dozens of very good articles, papers and books on evaluating chess positions using heuristic evaluation functions. Here’s a pretty good page on the subject, even though it looks like all the picture links are broken: http://www.cs.cornell.edu/boom/2004sp/ProjectArch/Chess/algorithms.html. And here’s a link to a series on building a chess engine: http://www.gamedev.net/page/resources/_/reference/programming/artificial-intelligence/gaming/chess-programming-part-vi-evaluation-functions-r1208.
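The simplest heuristic evaluation is a pure material count: add up the piece values for each side and return the difference. Real engines pile on terms for mobility, pawn structure, king safety and so on; this sketch, with an invented board encoding, is the bare minimum:

```python
# Standard-ish piece values in centipawns; the board encoding is invented.
PIECE_VALUES = {"P": 100, "N": 320, "B": 330, "R": 500, "Q": 900, "K": 0}

def evaluate(board: list) -> int:
    """Material-only evaluation: positive favours White, negative Black.
    'board' is a flat list of piece codes, uppercase for White,
    lowercase for Black, '.' for empty squares."""
    score = 0
    for square in board:
        if square == ".":
            continue
        value = PIECE_VALUES[square.upper()]
        score += value if square.isupper() else -value
    return score

# White is up a knight in this (tiny, made-up) position fragment.
print(evaluate(["K", "N", "P", ".", "k", "p"]))  # -> 320
```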

Chess was one of the first games to be implemented on computers. The first chess problem solved by a computer was done by Dr. Dietrich Prinz with the Manchester Mark 1 in 1951 (see picture, right).

Though I could be wrong, I think Dr. Prinz’s program simply employed brute force to solve the problem.

So, why is it comparatively easy to find/write good chess AI? Opening and endgame databases are readily available, evaluation functions for board positions are well understood and (I suspect I’ll get some flak for saying this) it’s a relatively easy game (at least to program, not to master). Also, there are not a lot of pieces, their moves are restricted, the rules of the game are simple and the board size is fixed.

Chris Crawford’s Patton vs. Rommel. Patton vs. Rommel was a wargame that came out in 1987. On the PC (remember, this was before Windows) it ran in 640 KB (and that included the operating system). The display was 640 x 200 x 2, if I remember correctly (see screen shot).

Chris Crawford's Patton vs. Rommel (1987)

I haven’t played the game in over 20 years, but I remember being very impressed by the AI, specifically how the program had a ‘feel’ for the tactical situation. A very important part of the game was the ‘road net’. Units moved much faster on roads and it was easy to get your units caught up in traffic jams. When that happened the AI would warn the user. This really shocked me when I first played the game. Chris employed what he called ‘geometric AI’ in Patton vs. Rommel. He goes into more detail in his book, “Chris Crawford on Game Design” (http://www.amazon.com/Chris-Crawford-Game-Design/dp/0131460994).
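I don’t know how Crawford’s ‘geometric AI’ actually detected congestion, but the traffic-jam warning could be approximated with something as simple as counting units per road segment and flagging any segment over capacity. A purely hypothetical sketch:

```python
from collections import Counter

def congested_segments(unit_positions: dict, capacity: int = 2) -> list:
    """unit_positions maps unit name -> road segment id.
    Return segments carrying more units than the road can handle.
    Both the data layout and the capacity are invented for illustration."""
    load = Counter(unit_positions.values())
    return [seg for seg, n in load.items() if n > capacity]

units = {"1st Armored": "road_7", "2nd Armored": "road_7",
         "3rd Infantry": "road_7", "4th Infantry": "road_2"}
for seg in congested_segments(units):
    print(f"Warning: traffic jam on {seg}!")  # -> road_7
```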

 

There are plenty of great games out there, but that’s not what this post is about. The question is what games have good AI? I’m going to need to think about this and see if I can add some more titles to the ‘good AI’ list, because I sure have a ton for the ‘bad AI’ list.