For no particular reason, and without claiming to be an expert at all, I thought I’d list some anime series I like. In no particular order:
I’ve just played Bound, the game for the PlayStation 4 developed by Plastic and published by Sony. Apart from some images and a brief look at a trailer, I knew nothing about it. If you want to play it, I recommend you find out as little as possible too. Seeing the tagline and the briefest of descriptions while finding the previous link gave me information I would have preferred to gather myself.
Here are some reasonably non-spoilery thoughts on it, based on having played it for an hour or so:
One module contains a pure functional implementation of the actual robot / program execution logic. It exposes two functions: one that takes a level description and returns an initial game state, and one that takes the level description and a game state and produces a new game state.
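The two functions described above might be sketched like this. The names (`createInitialState`, `step`) and the shapes of the level and state types are my assumptions for illustration, not the engine’s actual API:

```typescript
// Hypothetical shapes for a level description and a game state;
// the real engine's types are not shown in the post.
interface Level {
  width: number;
  goal: { x: number; y: number };
}

interface GameState {
  robot: { x: number; y: number };
  finished: boolean;
}

// Takes a level description and returns an initial game state.
function createInitialState(level: Level): GameState {
  return { robot: { x: 0, y: 0 }, finished: false };
}

// Takes the level description and a game state, and produces a new
// game state. Pure function: the input state is never mutated.
function step(level: Level, state: GameState): GameState {
  if (state.finished) return state;
  const x = Math.min(state.robot.x + 1, level.goal.x);
  return {
    robot: { x, y: state.robot.y },
    finished: x === level.goal.x && state.robot.y === level.goal.y,
  };
}
```

Because both functions are pure, everything else (timers, stores, rendering) can stay outside the logic module entirely.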
The next module contains a Flux store (just something that holds data) which holds the level descriptions and the current game state, and can run a timer. If you tell the store to start, it starts the timer, runs the robot logic every tick, and emits an event when its data changes. (It also handles a whole bunch of other state, but that’s irrelevant for this description.)
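A minimal sketch of such a store, assuming the pure step function from the logic module is passed in; the method names and the event mechanism here are my invention, and the real store holds more state:

```typescript
type Listener = () => void;

// A tiny Flux-style store: holds the data, runs the tick loop, and
// notifies subscribers when its data changes. `stepFn` stands in for
// the pure step function from the logic module.
class GameStore<L, S> {
  private timer: ReturnType<typeof setInterval> | null = null;
  private listeners: Listener[] = [];

  constructor(
    private level: L,
    private state: S,
    private stepFn: (level: L, state: S) => S
  ) {}

  getState(): S {
    return this.state;
  }

  subscribe(fn: Listener): void {
    this.listeners.push(fn);
  }

  // Run the robot logic once and emit a change event.
  tick(): void {
    this.state = this.stepFn(this.level, this.state);
    this.listeners.forEach((fn) => fn());
  }

  start(intervalMs: number): void {
    this.timer = setInterval(() => this.tick(), intervalMs);
  }

  stop(): void {
    if (this.timer !== null) clearInterval(this.timer);
    this.timer = null;
  }
}
```

Keeping the timer in the store means the logic module never has to know time exists, which is what makes it easy to test and to replay.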
Then there is a React component for the robot (obviously one of many components). React components are, ideally, pure functions of their inputs. They’re best written declaratively: you use React to declare what you want the user interface to be like. This usually works like a charm.
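To illustrate what “pure function of its inputs” means here, without pulling in React itself: the component below returns a plain object where real React code would return JSX, and the prop names are mine, not the engine’s.

```typescript
// Props for a hypothetical robot component.
interface RobotProps {
  x: number;
  y: number;
  facing: "north" | "east" | "south" | "west";
}

// A declarative component is ideally a pure function of its props:
// same props in, same UI description out, no side effects. In React
// this would return JSX; a plain object stands in for the element.
function Robot(props: RobotProps) {
  return {
    type: "div",
    className: "robot",
    style: { left: props.x * 32, top: props.y * 32 },
    children: `robot facing ${props.facing}`,
  };
}
```

The payoff of this purity is that rendering the same game state always gives the same UI, so the component never needs to know how or when the state changed.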
But game logic can be hard to fit into this model.
As you can see from the project’s readme file, I have future plans for this engine. Writing an actual parser, porting it to desktop and mobile, experimenting with different game types, better tools, and new procedural generation / narrative AI techniques: these are all things that have come a step closer.
I don’t recommend it for general use yet, but I do welcome feedback, and if you want to build a game with it, let me know how I can help.
I did some more thinking after yesterday’s blog post on tagging, and had an interesting discussion with Mike Cook and Chris Martens about how they approach similar problems. So here are some more thoughts on the subject.
The procedural content generation in Mainframe uses a very simple mechanism, which is both more powerful and trickier to implement and use than I expected at the outset. That mechanism is tagging. You tag bits of content, and then somewhere else you say you want something with a given set of tags.
One of my theses about interactive storytelling is that selecting, adapting, and combining bits of authored content is an approach that is powerful, underexplored, and pragmatic, in that it offers a smooth learning curve from simple and known to, I hope, complex and new. Mainframe is, among other things, an experiment with this approach.
Tagging is one of the more interesting ways to select content. I first saw it used in 2008 as the interface between the AI and the audio system in LMNO. Back then I was mostly impressed by how it reduced the production dependency between AI and audio.
In 2010 and 2011, I worked on an unreleased Diablo-like that used tagging to procedurally generate levels. I did a lot of work on the level design and tool chain. At GDC in 2012, I saw Elan Ruskin’s talk about the dynamic dialog system used at Valve, which used an advanced tagging approach to allow writers to create dialogues. In 2012, we used tagging to select texts in a mobile game. I remember vividly how the actual tagging logic consisted of one line of code, but it took three of us a day to write that line. (It was a LINQ expression in C#, if you’re curious.)
The system in Mainframe is really simple. The core logic is this function:
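Mainframe’s actual function isn’t reproduced here, but as a rough sketch of the idea (my reconstruction, not the game’s real code), selecting content by tags can be as small as a single filter:

```typescript
interface TaggedContent {
  tags: Set<string>;
  text: string;
}

// Return every content item that carries all of the requested tags.
// A sketch of the mechanism only; Mainframe's real function may differ.
function selectByTags(
  content: TaggedContent[],
  wanted: string[]
): TaggedContent[] {
  return content.filter((item) => wanted.every((tag) => item.tags.has(tag)));
}
```

A caller would typically pick one of the matches at random, which is where the “powerful but tricky” part starts: the selection logic is trivial, but authoring tags so that every query has sensible matches is not.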
Last night I innocently tweeted:
Does procedural content generation inherently imply repetition or is it just a really strong connection?
— Jurie Horneman (@jurieongames) November 21, 2015
And then I watched a movie (Pi! still great) and 90 minutes later I checked Twitter and suddenly I had 50 replies.
My tweet was a bit gnomic and lacked context, as tweets often do. I wasn’t talking about repetitiveness: I was talking about how PCG is used by game designers, beyond “hey look we can create a billion dungeons / planets”. Not that there’s anything wrong with that, but I am interested in using it in other ways.
In Mainframe, the IF game Liz England and I made for Procjam, one of my basic tenets was that we should design the game around repetition, because we were using procedural content generation (PCG). When you play it, you will quickly find the element of the game that repeats. Whether it feels repetitive is a different question, though we don’t claim it isn’t.
This tenet was something I intuitively picked as a design heuristic, but afterwards I started wondering how core it actually is. Is it essence or accident?
I had a lot of fascinating discussions about this last night, and I’ve thought about it some more, and I now think that using PCG as a game designer will always involve repetition in some way. (Again, I am not talking about repetitiveness. This is not about procedural content generation being “bad”.)
Last year I wanted to do something for PROCJAM, Mike Cook’s “make something that makes something” one week game jam, but circumstances conspired against me. This year, I collaborated with Liz England, and we made a horror interactive fiction game called Mainframe. It’s about a spaceship and its mainframe and something is wrong and it needs help.
Making it was a great experience. I learned a lot, and I’m going to be talking about it more.
The texts, assets, and source code can be found on GitHub.
A few days ago, Julian Togelius tweeted this:
Exhibit 1A: The mindset of someone who has no interest in artificial intelligence. https://t.co/uE8BdWb4dS
— Julian Togelius (@togelius)
I replied that I have an interest in AI, and I agree with the quote. Julian didn’t reply, but I was still motivated to write this small blog post. (Thanks Alex :))
I don’t, in fact, know the context of the slide in question, or Julian’s position. But it strongly reminded me of a problem I have encountered a number of times in my career: the conflict between game design and AI programming.
I’ve had a couple of requests for the slides of my talk on the dark side of game development which I gave last year at ENJMIN, and last week at the Northern Game Summit in Kajaani.
After some thought, I’ve decided not to publish them. The actual slides are very spare, often showing just a single image or a few words, so they’re not very helpful. I don’t have detailed presenter notes (as I found to my dismay when I started rehearsing it again about two weeks ago).
One of the images is a portrait of a personal friend, and I feel weird enough using his photo as it is. Some of those words are “stress” and “depression” and “sexism” and “homophobia”.
Because I talked about sexism and mentioned some of the female friends who helped me with that section, I want to reduce the contact surface in case Internet idiots come across the slides.
This is a very personal talk, and I feel that only the actual performance – all the words I said, the way I said them, and the questions I answered afterwards – comes close to saying what I meant to say. This talk only works when I fully believe everything I say; it’s not a dry technical talk at all. Writing slides with full presenter notes that say the same thing would be very hard, perhaps impossible.
And both times the talk wasn’t recorded. Sorry!
If you want to get an impression of what I talked about, the resource post may be helpful.