“AI is the future”, says Sony

I must admit to being a bit puzzled by this Eurogamer article on a presentation given by Sony Computer Entertainment UK boss Ray Maguire at BAFTA headquarters.

“We are no longer interested in graphics per se, because graphics chips can do that for us,” he commented, “but the central processors of all the new games machines are about making games more compelling by adding in artificial intelligence.”

“The Cell chip is so powerful it can do 256 million calculations per second… That means one thing for us in the videogames industry: artificial intelligence.”

Does it? I mean, I’m all for it. But do the people focussing all of their effort on better graphics technologies know this? What’s with all the people proclaiming physics is the future? Was there some video games industry communiqué I missed?

And, um, if the Cell is so powerful that it only means artificial intelligence, then what’s with the story that the Cell is unsuited for branch-heavy code? I know, to some degree that has to be FUD from the boring PR war between Sony and Microsoft. And yet… I can’t shake the impression that the Cell really is a so-so PowerPC core with 7 DSPs tacked on. Powerful, when you make it do the right thing, sure. But I wouldn’t call it “built for AI”. (Warning: I have never programmed a Cell processor or anything similar, although I’ve talked with friends who do.)

He showed off a proof-of-concept video of a rendered female character auditioning for a movie role and talking through her relationship with her husband, from meeting him through to her decision to murder him after discovering that he was having an affair.

This is obviously Quantic Dream’s Heavy Rain video – I don’t know why this wasn’t explicitly mentioned. Now, charming as this video is, what does it really show? It shows an entirely predetermined video sequence. Nothing more. It’s a very nice video, and I will take Quantic Dream’s word for it that it’s running in real-time. But still, it’s about as interactive as a brick. There are a billion reasons why this is entirely unimpressive for games, ranging from surface issues such as the fact that you can massively optimize your rendering (not to mention your modeling) if you know exactly where your camera is pointing at any time, to deep, deep issues such as: what player actions will make an NPC cry, believably? (And this mention of “the first woman crying in real time 3D” – which I strongly doubt – makes me think of this whole obsession with making players cry… this became a cliché real fast.) All that video proves, so far, is that Quantic Dream can make really, really nice images in real-time 3D. Which is hard, and I respect them for it, but the only thing really revolutionary about it that I can see is that they’re showing a domestic drama scene, as opposed to heavily-armored babes killing aliens.

And how was all that related to AI again?…

But, well, in the end Mr. Maguire’s presentation was all about saying something that is understandable to people outside of the industry (or should I say, non-developers), and that fits the context of BAFTA, which is, after all, a movie-related organization. So, even though I doubt the thinking behind it, I cannot disagree with:

“We’re not talking about graphics any more,” Maguire concluded. “We’re talking about performance and we’re talking about art.”

Comments (5)

  1. Blastimographerishly wrote:

    What? No report on Leipzig? And after you were gone for a month?

    Look – I’m a David Cage fan. Yes, his work is heavily constrained, but at least he’s trying, and apparently getting positive feedback for his efforts. (I haven’t played Fahrenheit.)

    The larger question is whether the games biz has any idea what the AI requirements are for producing believable narratives, and I remain 100% convinced that they don’t and never will. So the best that can be hoped for is good AI in highly constrained virtual spaces, which isn’t awful, but it’s not what’s being promised. Prepare to watch another $500 million get flushed on pie-in-the-sky development deals.

    As for making players cry, that’s easy. I did it five years ago (or is it more now?), and Meretz did it at least five years before that. Add pictures, the right music, a few sound effects (is that a lost kitten?!) and there you go: tears by the buckets.

    As for the final quote you posted, was there ever a better marker of how interactivity has been buckled under the weight of the narrative demand? Storytelling is omnipresent. It’s hardwired into the human mind. It cannot be denied.

    Lights.

    Curtain.

    Action.

    Posted 02 Sep 2006 at 17:27
  2. fluffy wrote:

    Sprung had a rudimentary AI engine to try to make the conversations more realistic (rather than just being a conversation tree). Unfortunately it was pretty hard for the writers to wrap their minds around it and so they ended up just making it a conversation tree (albeit a context-sensitive one).

    I think it’s important to make authoring tools that fit the people who are using them. That’s the first step to making good games. (And the thing that our development team lead refused to understand.)

    Posted 02 Sep 2006 at 18:15
  3. Aubrey wrote:

    I’m with you on this one, Jurie. I got no problem with machinima (except, in this case, for the obvious uncanny valley shit – but that needn’t be an issue with the right art approach), but machinima posing as a game… it winds me up.

    I won’t jump on David Cage’s writing ability, but I really don’t think he does enough to engage the audience with probably the most powerful tool at his disposal: interactivity. The last game they put out felt about as interactive as a DVD menu.

    I get people’s hesitation when it comes to I.S. I get that stories – good ones – are often brittle structures, and not easily thrown up from the machinations of a machine, or easily altered without them snapping.

    But we live in a system which is throwing up interesting stories every day. For that reason alone, I can’t believe that I.S. can only be successful through almost total constraint, or has to be a constant train of false choices and psychological tricks, forcing you (at best, intuitively) down the “right path”.

    Who the fuck gets to say what the “right path” in life is, anyway? Surely that’s arrogance?

    I also question this general movie fetishism. It’s nothing new, of course, but recently Ebert’s all “Games can’t be art, because they’re not books or film!”, and some people in games seem to feel that they won’t be taken seriously until games are art on some old codger’s terms. As a result, they don’t really want to engage games on their own terms. So they try to mitigate the only factor that they feel is messing up their wonderful story: the player. They’re trying to gain artistic recognition by using old media as a crutch. Not that that’s wrong – any path is as good as the next. It just feels like there’s so much more to explore in this medium… trying to ape old ones seems… misguided? Justifiable, but… bah.

    Posted 06 Sep 2006 at 1:47
  4. Steven Rokiski wrote:

    It is possible to reduce branching in code by producing a huge math equation that factors in the same information as all the ifs/ands/buts of a section of complex branching code, and then examining the result (which involves some branching) to figure out what state changes need to be propagated to the next frame. Running several of these in parallel is a good match for MMX/SSE/AltiVec-type SIMD operations, and perhaps an even better match for Cell’s SPEs. (A rough C sketch of this idea appears after the comments.)

    Has this technique ever been used for AI? Not that I know of. The only place I have seen or heard of techniques like this is the world of DSPs used for complex realtime signal processing and encryption. While somewhat complex, the sorts of branching code this emulates are far simpler than the types of branching code commonly used for AI. The data access pattern for game AI is also a bad fit for this technique (and, from what I’ve read, for the Cell as well). Is this what Sony is talking about? I doubt it; this is marketing talking (remember the Emotion Engine).

    As for anti-Cell MS FUD, remember that they ditched speculative execution and complex branch prediction to get three 3.2 GHz cores on a single die, presumably for math performance (three “so-so” PowerPC 970 cores vs. one “so-so” PowerPC 970 core + 7 SPEs). The Cell architecture is just another way of getting maximum theoretical math performance from a given amount of die area.

    P.S. Off the top of my head, I can’t remember what this technique is called. A follow-up post may be made.

    Posted 19 Sep 2006 at 23:28
  5. Jurie Horneman wrote:

    Oo I really should write some more responses here, such nice comments… I am just going to say thanks Steven – I now remember MS got criticized as well for making their CPU not quite as suitable for gameplay / AI code as it could be. Nevertheless, the main problem here (and it’s not a huge deal in the end) is PR distorting the true capabilities of the X360 and PS3.

    Posted 20 Sep 2006 at 10:28
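
To make the branch-elimination idea in comment 4 a bit more concrete, here is a minimal sketch in plain C. The `Agent` struct, the thresholds, and the toy “aggression” decision are all invented for illustration; this is not code from any real engine, from the Cell SDK, or from whatever Sony had in mind – just the general trick of folding conditions into arithmetic, often called branchless or predicated code.

```c
/* Minimal sketch of turning branchy decision code into straight-line
 * arithmetic. All names and thresholds are made up for illustration. */
#include <stdio.h>

typedef struct {
    float health;      /* 0.0 .. 1.0 */
    float enemy_dist;  /* distance to the nearest enemy */
} Agent;

/* Branchy version: easy to read, but every 'if' is a potential pipeline
 * stall on an in-order core with weak branch prediction. */
static float aggression_branchy(const Agent *a)
{
    if (a->health < 0.25f)
        return 0.0f;            /* flee */
    if (a->enemy_dist < 10.0f)
        return 1.0f;            /* attack */
    return 0.5f;                /* wander */
}

/* Branchless version: comparisons become 0.0/1.0 masks that weight each
 * outcome, so the same straight-line code runs for every agent. */
static float aggression_branchless(const Agent *a)
{
    float low_hp = (float)(a->health < 0.25f);      /* 1.0 if fleeing */
    float close  = (float)(a->enemy_dist < 10.0f);  /* 1.0 if enemy near */

    /* flee overrides attack, attack overrides wander */
    float attack = (1.0f - low_hp) * close;
    float wander = (1.0f - low_hp) * (1.0f - close);
    return attack * 1.0f + wander * 0.5f;           /* flee contributes 0.0 */
}

int main(void)
{
    Agent a = { 0.8f, 5.0f };
    printf("branchy: %.2f  branchless: %.2f\n",
           aggression_branchy(&a), aggression_branchless(&a));
    return 0;
}
```

Because every agent executes the same instructions regardless of its data, the branchless form maps naturally onto SIMD lanes or an SPE; whether that is actually a sensible way to write game AI is exactly the doubt the comment raises.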