
Some game-designer-y thoughts on Bound

I've just played Bound, the game for the PlayStation 4 developed by Plastic and published by Sony. Apart from some images and a brief look at a trailer, I knew nothing about it. If you want to play it, I recommend you try to find out as little as possible too. Seeing the tag line and the briefest of descriptions while finding the previous link gave me information I would have preferred gathering myself.

Here are some reasonably non-spoilery thoughts on it, based on having played it for an hour or so:

Unusual combinations of skill sets can lead to unusual products, and competitive advantage. 

Bound is beautiful, and shows a rare combination and integration of visuals, audio, and programming. I believe Plastic has roots in the demo scene, and it shows. It has demo aesthetics but with a bit more sense and coherence than one usually sees in demos.

The animations seem to have been captured from a trained dancer. They're beautiful and striking.

For more on how unusual skill combinations lead to competitive advantage, see my article on innovation.

I am not a huge fan of metaphorical stories.

Bound mostly takes place in a very non-naturalistic world, but there are obvious signs that this all symbolizes something in our world. The story is heavily metaphorical, or symbolical, or allegorical. (Let me know if you have a well-founded opinion on the differences between these, and which one Bound would be.)

It's a way to tell stories that I think works well in visual media, and games can obviously be a visual medium. It works because it gives an excuse to visualize concepts that one would otherwise need words to convey. Also, gameplay actions can be a metaphor for more abstract struggles.

Still, it fills me with weariness. Spending hours trying to decipher the story will occupy my brain, but I'm not sure if I enjoy it. At least, since the meaning is outside what I am seeing, and hidden from me, it's "okay" that my actions appear to have little meaning in themselves.

Games like Bound don't fit common models for interactivity and computer games well.

Bound evokes a lot of emotions, but they're not the emotions we typically talk about (to the degree that we do) when we use the typical lens of systems, meaningful choices, resources, etc. Bound is filled with discovery, joy, beauty, and it's far from the only game like it. But if there are meaningful choices I've not seen any yet. That does not in any way mean that I believe Bound should not have been developed, or should not have been marketed as a "game".

Beware of relying on a limited set of tools and theories. Be aware of your focus and specialization. Work on your mental toolbox.

Finally, Bound reminds me of a game I've worked on in the past, but since that game has not been released yet, and I don't know how it has changed in the 3 years since I worked on it, I will save my comments for later.

Game animation logic in React

I've been working on a small minigame written in JavaScript using React. It's similar to Robo Rally: you write a small program for a robot, the robot executes it, and if it gets to the end without dying, you win. The actual robot logic is turn-based, the display isn't. It's split up like this:

One module contains a pure functional implementation of the actual robot / program execution logic. It exposes two functions: one that takes a level description and returns an initial game state, and one that takes the level description and a game state and produces a new game state.

The next module contains a Flux store (just something that holds data) which holds the level descriptions, the current game state, and can run a timer. If you tell the store to start, it starts the timer, runs the robot logic every tick, and emits an event when its data changes. (It also handles a whole bunch of other state, but that's irrelevant for this description.)

Then there is a React component for the robot (obviously one of many components). React components are, ideally, pure functions of their inputs. They're best written declaratively: you use React to declare what you want the user interface to be like. This usually works like a charm.

But game logic can be hard to fit into this model.

The approach I've taken is to say that the robot logic knows nothing about animations, and neither does the data store. When the robot makes a move, the store emits a change event, and the robot component gets new data from its parent component (a standard approach in React). This includes its position and orientation, but also an 'action'. This action is like an instruction in a stage play. For movements, actions are "move in from the left", say. The robot component then goes and plays an animation on itself for this action. So it goes "OK, I am here, I'm moving in from the left, so at t=0 I am one square to the left, and at t=1 I am over here". Then it runs a timer, and can calculate exactly where it needs to be at any given time.

Another approach could have been to store the robot's current coordinates locally inside the component (in this.state, in React terms), and to compare them to new incoming coordinates, and animate based on that. The problem with that is distinguishing between 'you just made a move' and 'the entire board was just reset'. The action-based approach solves that: I can just send an action called 'none' or something.
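The action-based idea can be sketched in a few lines. This is a minimal illustration, not the actual component code; the action names and the `positionAt` helper are my own invention:

```javascript
// Hypothetical sketch of action-driven animation: the component receives a
// final position plus an action, and derives the start position from the
// action rather than from previous local state.
const ACTION_OFFSETS = {
  'move-from-left': { dx: -1, dy: 0 },
  'move-from-top':  { dx: 0, dy: -1 },
  'none':           { dx: 0, dy: 0 }, // e.g. after a board reset: no animation
};

// Where should the robot be drawn at time t (0..1) into the animation?
function positionAt(finalPos, action, t) {
  const { dx, dy } = ACTION_OFFSETS[action];
  return {
    x: finalPos.x + dx * (1 - t),
    y: finalPos.y + dy * (1 - t),
  };
}
```

At t=0 the robot sits one square away in the direction the action implies, and at t=1 it has arrived at the position the store handed down.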

So far, so good. A problem occurs when the data store emits a change event that is not tied to a robot logic tick, because that makes the robot go "oh you want me to come in, alright then" and it resets its timer and does the move all over again. And that looks like a bad glitch.

What I expect I need to do is intercept the incoming data (in the shouldComponentUpdate part of the React life cycle), see that it's the same as what I already had, and only start a new animation when the data has changed.

This way I can distinguish 'a tick happened and you should move' from 'something changed so let's re-render everything because *shrug* React'. (The main selling point of virtual DOM libraries like React is that they can make re-rendering everything very fast.)
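The comparison itself would be a predicate along these lines (a sketch, with hypothetical prop names):

```javascript
// Only restart the animation when the move-related props actually changed;
// ignore re-renders triggered by unrelated store events.
function shouldRestartAnimation(prevProps, nextProps) {
  return prevProps.x !== nextProps.x ||
         prevProps.y !== nextProps.y ||
         prevProps.action !== nextProps.action;
}
```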

Does that make sense? Are there problems or solutions I'm not seeing? Is this a terrible way of going about things? I have a slightly bad feeling about inferring events from data changes. But as far as I can tell this would be the React way, and I don't see a better way that would permit the same level of encapsulation.

Announcing Choba, an experimental interactive storytelling engine

Over the last few weeks I have taken the interactive fiction engine inside Mainframe, the IF game Liz England and I made for Procjam last year, and have rewritten it in JavaScript. I've called it Choba, short for CHOice BAsed, and I've put it on GitHub and npm. It's open source, just like Mainframe.

I've also backported it into Mainframe itself, so if you play it now, you're no longer seeing a Python program running on Heroku, but a JavaScript program running inside your browser. (Doing just that has already been worth it. It means I no longer have to pay Heroku, it makes the game much easier to deploy, and I've already learned a lot starting a new JavaScript project.)

As you can see from the project's readme file, I have future plans for this engine. Writing an actual parser, porting it to desktop and mobile, experimenting with different game types, better tools, and new procedural generation / narrative AI techniques: these are all things that have come a step closer.

I don't recommend it for general use yet, but I do welcome feedback, and if you want to build a game with it, let me know how I can help.

More thoughts on tagging

I did some more thinking after yesterday's blog post on tagging, and had an interesting discussion with Mike Cook and Chris Martens about how they approach similar problems. So here are some more thoughts on the subject.

Namespaces might be interesting for tags. Right now tags are global, and that means a tag defined in one place for one context could interfere with content somewhere else. I've already briefly hit situations where I had to rename tags to avoid that. This is another thing that works in small projects but will cause trouble when you scale up. A very similar problem exists with CSS in large scale web projects, and CSS is similar to tagging.

If instead of "give me something with tags 'corridor' and 'spooky'" you could say "give me something from the namespace 'spaceship' with tags 'corridor' and 'spooky'", you can avoid getting your spooky castle corridors into your spaceship. Setting a namespace could be done both explicitly ("this scene has the 'spaceship' namespace") as well as implicitly ("everything read from the castle folder has the 'castle' namespace"). I kind of did this with the Diablo-like I mentioned by having an implicit setting tag, although that's not quite the same as a namespace, perhaps. Or is it?
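Namespace-aware matching would only be a small extension of plain tag matching. A sketch, with an API I'm making up for illustration:

```javascript
// An item matches if it lives in the requested namespace (when one is
// given) and has all of the desired tags.
function matches(item, namespace, desiredTags) {
  if (namespace !== null && item.namespace !== namespace) return false;
  return desiredTags.every(tag => item.tags.includes(tag));
}
```

With this, a spooky castle corridor tagged identically to a spooky spaceship corridor can no longer leak into the wrong place.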

Is a namespace like a meta-tag? Is a namespace like a normal tag? Could the desired namespace be a variable? Could you have a hierarchy of namespaces? Is that the same as just having a bunch of tags? Does madness lie this way? Perhaps.

Tag types could also be interesting. Instead of saying 'castle' or 'spaceship', you could say 'location:castle'. That's already possible now, because 'location:castle' is a valid tag. So by imposing a scheme on yourself you can make your life easier and avoid the problems of tags being global. (This is a solution that people use in CSS, e.g. BEM, a naming scheme for HTML elements.)

The difference between a naming convention and something stricter is that if the computer understands things like tag types or namespaces, it can give you better feedback and do better analysis. Putting knowledge into the computer is almost always a good idea.

Take it one step further and you could say "location = castle". And then you have a conditional expression, which can query any state variable. And, with a bit of work, any attribute of any entity, object, location in your world model. Mainframe allows you to use 'PC_job' to refer to the job title of the player character (which is randomly picked from a list, based on the story act). But a nicer version is to say 'PC.job', because then you can easily extend it to other attributes.

So then you could say "give me a scene that matches the current setting and mood," and the engine will find scenes with setting = castle and mood = spooky, say. Or you could just say "give me a scene", and it's the scene that knows which attributes it applies to. Is that confusing, not being able to explicitly specify what you want when you want it? Perhaps, but with good tools that shouldn't be a problem, and you'd need that anyway to scale this up. There's also the question of loss of flexibility when you move from a super-simple tag system to something more complex.
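A condition like "location = castle", with dotted attribute paths like 'PC.job', could be evaluated against the game state with something as small as this (a sketch; the syntax and state shape are assumptions, not Mainframe's actual code):

```javascript
// Follow a dotted path like 'PC.job' into a nested state object.
function lookup(state, path) {
  return path.split('.').reduce((obj, key) => obj[key], state);
}

// Evaluate a simple "attribute = value" condition against the state.
function evalCondition(state, condition) {
  const [path, expected] = condition.split('=').map(s => s.trim());
  return String(lookup(state, path)) === expected;
}
```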

In any case, continue along this route and perhaps you'll approach what Mike and Chris are doing, which is to use predicates and formal logic. I could understand the example Mike gave on Twitter from one of his jam projects. Chris has taken this to dizzying heights: her work on Ceptre makes me regret having dropped out of computer science.

Conclusion: I don't know yet. The thing I want to do now that I have some more results from working with tags is to look at it from multiple points of view and look for "if you look at this as [rules/variable names/logic] then things become simpler and/or more powerful" insights. This is why I'm interested in programming languages all of a sudden. More on that in a future blog post.

Procedural content generation in Mainframe

The procedural content generation in Mainframe uses a very simple mechanism, which is both more powerful and trickier to implement and use than I expected at the outset. That mechanism is tagging. You tag bits of content, and then somewhere else you say you want something with a given set of tags.

One of my theses about interactive storytelling is that selecting, adapting, and combining bits of authored content is an approach that is powerful, underexplored, and pragmatic, in that it offers a smooth learning curve from simple and known to, I hope, complex and new. Mainframe is, among other things, an experiment with this approach.

Tagging is one of the more interesting ways to select content. I first saw it used in 2008 as the interface between the AI and the audio system in LMNO. Back then I was mostly impressed by how it reduced the production dependency between AI and audio.

In 2010 and 2011, I worked on an unreleased Diablo-like that used tagging to procedurally generate levels. I did a lot of work on the level design and tool chain. At GDC in 2012, I saw Elan Ruskin's talk about the dynamic dialog system used at Valve, which used an advanced tagging approach to allow writers to create dialogues. In 2012, we used tagging to select texts in a mobile game. I remember vividly how the actual tagging logic consisted of one line of code, but it took three of us a day to write that line. (It was a LINQ expression in C#, if you're curious.)

The system in Mainframe is really simple. The core logic is this function:

def tags_are_matched(_desired_tags, _available_tags):
    for desired_tag in _desired_tags:
        if desired_tag not in _available_tags:
            return False
    return True

(Python nerds: I know this can be written in one line.)

All it does is check for a given thing whether that thing has all the tags we want. Very simple. For the Diablo-like, we added a "nice to have" qualifier, and I really wanted a "NOT this tag" qualifier. But in Mainframe this was perfectly sufficient.
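For what it's worth, both extensions are small additions to the matching function. Here is a sketch in JavaScript (the '!' and '?' prefix syntax is my own invention, not what we used):

```javascript
// Extended matcher: '!' prefix means 'NOT this tag', '?' prefix means
// 'nice to have'. Returns null for no match, otherwise a score where a
// higher number means more nice-to-have tags were satisfied.
function matchScore(desiredTags, availableTags) {
  let score = 0;
  for (const tag of desiredTags) {
    if (tag.startsWith('!')) {
      if (availableTags.includes(tag.slice(1))) return null; // forbidden tag present
    } else if (tag.startsWith('?')) {
      if (availableTags.includes(tag.slice(1))) score += 1;  // bonus
    } else if (!availableTags.includes(tag)) {
      return null; // required tag missing
    }
  }
  return score;
}
```

You would then pick among the items with the highest score instead of among all matches.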

Liz could write something like:

<injectOption tags='option, containers' />

and the engine will look through all of the scenes for one with the tags 'option' and 'containers', and will then inject a link leading to that scene into the current scene.

A nice little feature in Mainframe which made a big difference is that a desired tag can be a reference to a variable, as well as just a literal tag. So this line:

<injectOption tags="computer_talk, $flesh_act" />

makes the engine look for a scene that has the tags 'computer_talk' and whatever the current value of the 'flesh_act' variable is. (To understand why it's called 'flesh_act' you have to play the game...) This allowed us to change the game depending on the act of the main storyline the player is in.
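Resolving such variable references is a single pass over the desired tags before matching. A sketch (function name and API are assumptions):

```javascript
// Replace any '$variable' entry in a desired tag list with the current
// value of that variable, leaving literal tags untouched.
function resolveTags(desiredTags, variables) {
  return desiredTags.map(tag =>
    tag.startsWith('$') ? String(variables[tag.slice(1)]) : tag
  );
}
```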

Instead of scenes the engine can also inject 'blocks'. Liz mainly used this to inject flavor text depending on game state, but we also used it to factor out common logic, like this:

<!-- Used to init variables when the player respawns. -->
<block tags="pc_init">
<!-- reset all values other than main story act & total data -->
<action act="set $has_mcguffin 0" />
<action act="set $is_fed 0" />
<action act="gen_data" />
<action act="set $sacrifice 0" /> <!-- player has not sacrificed body parts for data -->
<action act="set $injury none" /> <!-- player's current injury, in case we do others -->
<action act="set $commands 0" /> <!-- used for PC to use computer 3x before needing more data -->
</block>

You can see the 'action' element there, used to modify the game state.

Despite me having used tagging before, I still learned a couple of interesting things during the development of Mainframe.

The subtleties of picking

"Pick a scene with the right tags" sounds easy, but there are a lot of subtleties. We didn't want the content to be picked in the order it was defined in, nor did we want every player to see the same content order every time. So we needed randomness.

Our requirements for random picking, combined with the fact that Mainframe is a web-based game without a database, made for a ton of subtle errors. I spent more time debugging and rewriting that part than anything else, and it required cryptographic techniques plus me actually cracking open Knuth's Art of Computer Programming, probably for the first time in my life. I have a draft for a blog post on that lying around, so I won't go into great detail here.

I wrote a class called TaggedCollection, which contains a set of tagged items, be they scenes, blocks, names for the data you find in the game, or whatever. When the get_item_by_tags() method is called, I create a list of items that have the desired tags, then I shuffle it, and pick the next item. I can't store that list (long story), so I regenerate it every time. (It's a game jam, who cares about performance, the lists are very short.) I store a random seed and an index per desired tag set, per player.

Shuffling is OK but not great. The chances of getting the same item twice in a row are low but not zero. If you have three injectOptions in a scene, like we do, you can sometimes see the same item twice, because the list is exhausted, reshuffled, and an item at the end is now at the start. The solution I am currently planning to implement for this is to turn those three injectOptions into a dedicated command, at a higher level of abstraction. This would also give us some other advantages, like being able to control death (some scenes kill the player).
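The regenerate-and-index idea can be sketched like this. Everything here is an assumption for illustration: the PRNG is a small mulberry32-style generator, and the API is not Mainframe's actual code. The key point is that the same seed always yields the same shuffle, so storing only (seed, index) is enough:

```javascript
// Tiny seeded PRNG (mulberry32 variant) so the shuffle is reproducible.
function makeRng(seed) {
  let s = seed >>> 0;
  return function () {
    s = (s + 0x6D2B79F5) >>> 0;
    let t = Math.imul(s ^ (s >>> 15), 1 | s);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

// Fisher-Yates shuffle driven by the seeded PRNG.
function shuffled(items, seed) {
  const rng = makeRng(seed);
  const copy = items.slice();
  for (let i = copy.length - 1; i > 0; i--) {
    const j = Math.floor(rng() * (i + 1));
    [copy[i], copy[j]] = [copy[j], copy[i]];
  }
  return copy;
}

// Regenerate the shuffled list on every call; the (seed, index) pair is
// the only state stored per desired tag set, per player.
function pickNext(items, seed, index) {
  const list = shuffled(items, seed);
  return { item: list[index % list.length], nextIndex: index + 1 };
}
```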

The most intriguing effect of this approach is that rendering the scene modifies state, because of those indices that get increased.

(So what happens when the player reloads the game in their browser? Fun. Fun is what happens.)

I don't know any other game engine that does that: you always want to separate rendering from updating, and in general you want to keep a firm grip on your mutable state. My day job has made me dig deep into Facebook's Flux pattern and immutable data structures, so I am very conscious of state mutation these days.

This is really the most interesting thing I learned about tagging, and I haven't yet decided what I want to do about it, if anything.

Tagging as a programming language feature

Internally - I intend to describe this in more detail in a future blog post - the engine data structures resemble an Abstract Syntax Tree or AST. So it is possible to imagine all the data as a program that produces an interactive fiction game, and the engine is the runtime for that program.

Now, if you follow that train of thought, blocks and scenes become like procedures, because they can contain logic, including logic that modifies game state. Tagging then leads to a programming model where adding or removing a procedure can indirectly affect the behavior of the entire program. If you look at the data as a program, that is very strange!

Tagging in production

This was something I had already encountered in the 2011 Diablo-like, and back then I wrote a pretty elaborate tool that read in all the content and analyzed it to make sure all tag demands could be fulfilled. It emulated the server level generation algorithm to make sure we could never break the server through bad tagging. I hooked it into the continuous integration server, because I take it as a personal insult when a bug occurs that is hard to find for a human, but easy for a computer.

What all of this means, apart from that I have minor OCD, is that you need special tools to guarantee correctness when you use tagging. It's not witchcraft, but it does take some effort. I would want to expand that to show not just unfulfilled demands, but also unused content and places that are overly sparse or dense.
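The core of such a check is simple: for every tag demand in the content, verify that at least one item can satisfy it. A sketch (names are hypothetical, not the actual tool):

```javascript
// Return every demanded tag set that no content item can fulfill.
// Hooked into CI, a non-empty result would fail the build.
function unfulfilledDemands(demands, items) {
  return demands.filter(desired =>
    !items.some(item => desired.every(tag => item.tags.includes(tag)))
  );
}
```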

Another big issue with this system, and with procedural content in general, is test coverage and state manipulation during development. There is a bunch of stuff in Mainframe that I added but have never been able to really test, because of the random element and the lack of any functionality to directly pick content and affect state. The game is simple enough that I don't think there are any real errors, and we ran spellcheckers over the raw text, but this is a weakness that would cause trouble when scaling to bigger games, and properly implementing this testing and development support functionality is not trivial.

Tagging as story generation

The way we pick tagged content in Mainframe is just one of many. It can be interesting to pick scenes in order rather than shuffled. It can be interesting to stop picking scenes once they've all been shown. It depends on the content. Lots of patterns are possible.

There is an element that can be found in many board games that I like a lot: the event deck. They are simply decks of cards, shuffled at the start of a game, and under certain conditions the top card gets revealed and whatever is on there "happens": players get items, monsters get introduced, stuff blows up, etc. There are lots of variations, and many games have multiple decks.

Event decks are story engines. They represent the progress of time, external events that keep happening, mounting tension, advancing plots. By being shuffled they introduce a random element, and through their design or through other rules, they give the designer some control over the experience.

Tagging is like an event deck. In Mainframe, whenever the player chooses to go on a mission for data, the top three cards from the event deck are revealed (links to the next three scenes tagged "mission" are injected). Because this tagging is done based on the current act, there are effectively five decks. Some scenes are tagged with multiple acts, so they appear in multiple decks. It is very simple conceptually, but quite powerful.

Because the tag can come from a dynamic variable, it is possible to imagine more complex decks. It is also possible, with the higher level injection command I described above, to generally use more complex logic. Because really what is happening is that we look at the player input, at the current state (representing what the player has done), and at the content we have available, and we pick the most appropriate thing to show next. That logic is core to interactivity (it's part of Chris Crawford's definition of interactivity), and, in a game like Mainframe, it heavily involves storytelling logic. We want certain things to come up sooner, others later. We want things to come to the fore or recede to the background, based on what the player has done. We want the odds of certain things to increase based on state - for instance, in Mainframe, it would have been nice to control the odds of the player dying, or the odds of the player encountering... certain things.

All of that starts with a five line function, something that can be understood by analogy with a deck of cards. That is why I get so excited by content selection algorithms, tagging, and event decks, and I hope to dig deeper into them.

Repetition and procedural content generation

Last night I innocently tweeted:

And then I watched a movie (Pi! still great) and 90 minutes later I checked Twitter and suddenly I had 50 replies.

My tweet was a bit gnomic and lacked context, as tweets often do. I wasn't talking about repetitiveness: I was talking about how PCG is used by game designers, beyond "hey look we can create a billion dungeons / planets". Not that there's anything wrong with that, but I am interested in using it in other ways.

In Mainframe, the IF game Liz England and I made for Procjam, one of my basic tenets was that we should design the game around repetition, because we were using procedural content generation (PCG). When you play it, you will quickly find the element of the game that repeats. Whether it feels repetitive is a different question, although we don't claim it is not.

This tenet was something I intuitively picked as a design heuristic, but afterwards I started wondering how core it actually is. Is it essence or accident?

I had a lot of fascinating discussions about this last night, and I've thought about it some more, and I now think that using PCG as a game designer will always involve repetition in some way. (Again, I am not talking about repetitiveness. This is not about procedural content generation being "bad".)

My reasoning is that when you make a procedural content generator, you will always want to call it more than once. (See below for a few very rare exceptions.) Because otherwise, what's the point? PCG is more effort than manually authored content, for one single instance of "content".

This is not the case with manually authored content. While we make a lot of repeating content by hand, we also make a lot of unique assets and set pieces.

So my argument is based on the purpose of PCG, and the trade-off between using PCG or manual authoring. This means that if your interest is in PCG for its own sake, you may not be impressed. And that's fine! I am looking at it as a game designer: I want to provide a certain experience, and I want to figure out how to do so effectively. (I am also looking at it as a programmer who finds it fascinating to work on PCG.)

If you call a procedural content generator more than once, there will be repetition, because the generator exists to generate a certain kind of content, and calling it more than once will create more than one instance of this kind of content, ergo repetition, QED.

To put it more plainly, if you have a tree generator, and you generate a bunch of trees because your game needs a forest, you end up with a bunch of trees, no matter how different and unique those trees are. That may sound painfully obvious, but think of it another way. Let's say we could have a chase scene generator. It takes the current situation and generates a chase. That only makes sense in a game where you get a lot of chases.

This is what happened with Mainframe. We knew we wanted to use PCG (because otherwise why make something for Procjam?), so we needed to come up with a design that incorporated repetition. And we did so by picking a base-mission structure. You're in a base, you go on a mission, you return to the base. This is a very versatile pattern that fits both games and stories very well, which is why I'm a fan of it. Many TV series use a base-mission structure. And this hopefully shows why I don't think repetition is bad per se. It is not, it creates rhythm and structure. Repetitiveness is something else. And of course the challenge for game designers is to avoid repetitiveness when using repetition. This is a core challenge whether we use PCG or not!

Liz applied a gameplay loop to Mainframe, a common game design approach. I would argue that IF doesn't necessarily always have a gameplay loop, but it fits well with the repeating pattern, and it was very useful.

(Regarding chase scenes: notice how some games that involve cars sometimes have a chase mode, where they change some rules and behaviors? That's kind of a chase scene generator! Whether that is PCG is a different, albeit equally fascinating discussion.)

Let's talk about a couple of possible counter-arguments to my thesis that PCG implies repetition.

  • What if you generate the entire game, a la ANGELINA or Game-o-matic? I think this falls outside the premise of my argument. I am thinking of this as a game designer who wants to provide a certain experience, and wants to do so effectively. While it is possible to imagine a game creating system that would create the game with me on a very high level where I don't have to think about repetition, I think that is also the point where I am no longer a game designer in the sense I am now.
  • What if you generate the entire content for the game, like Dwarf Fortress? Again, looking at it like a game designer, if I'm going to build something like that, it just means piling generators on top of generators, and each generator implies repetition.
  • What about the generate-and-test approach many procedural content generators use, where content is generated by a piece of code, and either the user or a second piece of code, or both, evaluate it and reject or adapt it? Is that repetition? For the purposes of this discussion I consider that an internal detail of the procedural content generator. Mainframe, for what it's worth, does not use a generate-and-test approach.

There are only two exceptions I can see:

  1. You make a game that requires only one tree (say), and you generate that tree. A sensible use of PCG, and there's no repetition.
  2. You build a game that uses PCG without repetition because it's a constraint you set yourself, for conceptual or other artistic reasons. You can always break a rule on purpose. Jason Rohrer wrote an abstract board game generator, ran it only once (for all intents and purposes), fabricated the resulting game, and hid it in a location unknown even to him. Obviously this was a conceptual stunt. But, yes, PCG without repetition.

A final point about what I said in the beginning: the "hey look we can create a billion dungeons / planets" approach is putting PCG front and center, creating an experience around it. And, like I said, it makes sense, it is effective, there's nothing wrong with it. But I am interested in exploring how we can use PCG in different ways. That may lead me away from what people think of as PCG (that was already a goal I had for Mainframe). But it's a fascinating journey.

Anyway, those are my thoughts. I hope they will trigger as many interesting discussions as my original tweet did.

Mainframe, the procgen horror IF game Liz England and I made

Last year I wanted to do something for PROCJAM, Mike Cook's "make something that makes something" one week game jam, but circumstances conspired against me. This year, I collaborated with Liz England, and we made a horror interactive fiction game called Mainframe. It's about a spaceship and its mainframe and something is wrong and it needs help.

Making it was a great experience. I learned a lot, and I'm going to be talking about it more.

The texts, assets, and source code can be found on GitHub.

The conflict between game design and AI programming

A few days ago, Julian Togelius tweeted this:

I replied that I have an interest in AI, and I agree with the quote. Julian didn't reply, but I was still motivated to write this small blog post. (Thanks Alex :))

I don't, in fact, know the context of the slide in question, or Julian's position. But it strongly reminded me of a problem I have encountered a number of times in my career: the conflict between game design and AI programming.

This is not a juicy conflict with battle lines between people in different disciplines. It is, in fact, an inner conflict. I have worked as a game designer, and while I cannot claim to have done any real AI programming, I have worked as a game programmer, and I've been following AI for 25 years, so allow me to pretend I have a shiny, if unused, 'AI programmer' hat here.

When I think about how to approach certain gameplay situations, as an AI programmer I go 'OK, I could use this and that and such and such a technique'. But then, as a game designer, I go 'Or I could use a trick here and do something there to make the entire problem go away'. And of course game AI programmers are very, very aware you can do this, which is why there are sessions like 'The Simplest AI Trick in the Book' during the AI summit at GDC.

Here is a fun example from Dene Carter's Gamasutra post on 'punk AI':

Did you know that villagers [in Fable 1] get into bed? I don't mean just lie on top of the beds. I mean actually fold back the sheets and get into bed. No? That's because nobody ever went into houses at night because they were locked. Oh, you could break down the door but nobody ever did, and you'd get arrested if you were spotted anyway. We actually discouraged players from seeing it. (Note: we dropped the entire behaviour set for Fable 2, along with many others such as children's school days. Nobody noticed)

So I have a kind of mental trap I fall into. If I approach the problems I occasionally chew on as a game designer, I come up with tricks to go around them, because as a working game designer, that's what you do. But that leads to coming up with the same designs as everyone else. If I approach the problems as an AI programmer, I end up wandering around with a hammer, looking for nail-like objects. Or focusing too much on the hammer. (Sorry for being vague about these 'problems' - think virtual dungeon master / narrative AI / social interactions type stuff.)

And that is what the quote on the slide made me think of, and what I wanted to briefly write about. It's not a problem that blocks me, but something that occasionally slows me down.

Why I'm not publishing the slides to my dark side of game development talk

I've had a couple of requests for the slides of my talk on the dark side of game development which I gave last year at ENJMIN, and last week at the Northern Game Summit in Kajaani.

After some thought, I've decided not to publish them. The actual slides are very spare, often showing just a single image or a few words, so they're not very helpful. I don't have detailed presenter notes (as I found to my dismay when I started rehearsing it again about two weeks ago).

One of the images is a portrait of a personal friend, and I feel weird enough using his photo as it is. Some of those words are "stress" and "depression" and "sexism" and "homophobia".

Because I talked about sexism and mentioned some of the female friends who helped me with that section, I want to reduce the contact surface in case Internet idiots come across the slides.

This is a very personal talk, and I feel that only the actual performance - all the words I said, the way I said them, and the questions I answered afterwards - comes close to saying what I meant to say. This talk only works when I fully believe everything I say, it's not a dry technical talk at all. Writing slides with full presenter notes that say the same thing would be very hard, perhaps impossible.

And both times the talk wasn't recorded. Sorry!

If you want to get an impression of what I talked about, the resource post may be helpful.