My GDC '17 Talk Retrospective

I spent more of this year’s GDC at talks than I have in any previous year. The talks I attended were quite good; I took something valuable away from almost every one. Whether they changed my mind about techniques I’d previously disliked, provided evidence that some “crazy” idea we’ve had back in the office is already working at scale at other studios, showed me an entirely new technique, or just clearly organized information that lay scattered in my brain, the talks were worth watching.

My note-taking skills have never been particularly good, but I tried to jot down notes on each talk and its big takeaway for me.

Listed below are many of the talks that I found valuable myself or that I thought might be valuable to my colleagues who couldn’t attend GDC this year. Note that I obviously couldn’t attend every talk, so there are almost certainly a number of excellent presentations missing from this list simply because I haven’t seen them yet.

This is going to be most useful as a mini-guide to which talks readers should watch on the GDC Vault once the 2017 talks are available; it’s not possible to condense an hour-long talk into a paragraph or three without losing something of value, of course.

Monday, February 27th

Math for Game Programmers: Noise-Based RNG

Squirrel Eiserloh presented strong evidence for using noise functions in place of more traditional RNGs. Random number generators are typically forward-only mutating algorithms: they can’t be threaded easily, can’t be rewound or skipped forward efficiently, and may have expensive seed or copy operations. Noise functions are stateless and so solve most of those issues. The most eye-opening bit of the talk was a set of benchmarks and tests showing that noise can outperform even well-regarded RNGs like the Mersenne Twister by orders of magnitude, and these fast noise functions can produce higher-quality randomness! Plus, an RNG can be built from a noise function just by adding a generation value, which can itself be used safely from multiple threads via atomic increments, so a noise-based RNG is a very compelling option for game devs.

It all seems obvious in retrospect, but Squirrel’s talk made a case that perhaps really needed to be made.
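A minimal sketch of the idea in C++, with illustrative bit-mixing constants of my own rather than the actual noise function from the talk:

```cpp
#include <atomic>
#include <cstdint>

// Illustrative integer noise function: maps (position, seed) to a
// pseudo-random 32-bit value. The constants are placeholders, not
// the ones presented in the talk.
uint32_t Noise1D(int32_t position, uint32_t seed)
{
    uint32_t bits = static_cast<uint32_t>(position);
    bits *= 0xB5297A4Du;   // arbitrary large odd constant
    bits += seed;
    bits ^= bits >> 8;
    bits += 0x68E31DA4u;
    bits ^= bits << 8;
    bits *= 0x1B56C4E9u;
    bits ^= bits >> 8;
    return bits;
}

// An RNG built on the noise function: the only state is a position
// that can be advanced atomically, so threads can share a generator.
class NoiseRNG
{
public:
    explicit NoiseRNG(uint32_t seed) : m_seed(seed), m_position(0) {}

    uint32_t Next()
    {
        // fetch_add makes concurrent draws safe without locks.
        return Noise1D(m_position.fetch_add(1, std::memory_order_relaxed), m_seed);
    }

    // Rewinding or skipping ahead is trivial: just set the position.
    void Seek(int32_t position) { m_position.store(position); }

private:
    uint32_t             m_seed;
    std::atomic<int32_t> m_position;
};
```

Because the state is just an integer position, seeding and copying are cheap, and rewinding, skipping ahead, or running many independent streams is trivial.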

AI Arborist: Proper Cultivation and Care for Your Behavior Trees

This talk was a set of three speakers (Bobby Anguelov, Ben Weber, and Mika Vehkala) providing varying sets of advice and tips on using Behavior Trees.

The key points I took away included the value of really thinking through the visualization of the trees and the value of behavior modifier nodes. The modifier nodes allow behaviors in the tree to be, well, modified, which can reduce the number of unique node types that a programmer must implement, as well as allowing designers more flexibility.
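As a rough sketch of the modifier-node concept (a minimal construction of my own, not the speakers’ design), a single generic modifier node can wrap any child and transform its result:

```cpp
#include <functional>
#include <memory>

// Minimal behavior-tree scaffolding for illustration.
enum class Status { Success, Failure, Running };

struct Node
{
    virtual ~Node() = default;
    virtual Status Tick() = 0;
};

// One generic modifier node wraps any child and transforms its result,
// so "inverter", "always-succeed", "retry", etc. become data-driven
// rules rather than dedicated node types a programmer must implement.
struct Modifier : Node
{
    std::unique_ptr<Node> child;
    std::function<Status(Status)> transform; // designer-selectable rule

    Status Tick() override { return transform(child->Tick()); }
};

// Example transform: invert Success/Failure, pass Running through.
Status Invert(Status s)
{
    if (s == Status::Success) return Status::Failure;
    if (s == Status::Failure) return Status::Success;
    return Status::Running;
}
```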

Visualizing modifiers and the nodes themselves in a concise and meaningful fashion will help designers write more complex trees quickly and with fewer bugs, naturally. Definitely worth a watch if AI is in your wheelhouse.

Math for Game Programmers: Harmonic Functions and Mean-Value

I’ll start off by noting that this talk went a fair bit outside my area of expertise, and I don’t think I absorbed as much from it as I had hoped I would. The speaker, Nicholas Vining, gave a talk at last year’s GDC that may be near-required watching to get the most out of this talk.

The main point, as I understood it, is that harmonic functions allow for a parameterization over a mesh; think triangle rasterization, but for a 3D model. This seems most useful to me for tools and editors, allowing artists or designers to apply textures or other effects to a model after selecting a few key points (like applying a gradient texture to the mesh).
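For reference, and as a general fact about harmonic functions rather than anything specific from the talk: a harmonic function satisfies Laplace’s equation and equals its own average over any sphere around a point, which is what makes it so well-behaved for smoothly interpolating values across a mesh from a few fixed boundary values:

$$\nabla^2 u = 0, \qquad u(x) = \frac{1}{|\partial B_r(x)|} \int_{\partial B_r(x)} u \,\mathrm{d}S$$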

Tuesday, February 28th

Taking Back What’s Ours: The AI of ‘Dishonored 2’

I really liked this talk, and it being so early in the week, I’m extra annoyed at how light my notes were. I’ll have to watch this one again myself.

Xavier Sadoulet and Laurent Couvidou talked about various topics regarding Dishonored 2’s AI. The parts I recall finding particularly valuable were their rule system for AI (simple but flexible) and their group combat AI. They also talked about spatial reasoning which may be of great interest to folks working on similar sorts of games.

Board Game Design Day: Board Game Design and the Psychology of Loss Aversion

Like so many other Gen X’ers and Millennials, I like board games, so this year I went to a couple of board gaming talks. I wasn’t disappointed.

In this talk, Geoffrey Engelstein explained a great deal of the psychology around how players react to gains and losses in games. “Loss Aversion” is by no means a new topic to me, but Geoffrey’s talk greatly expanded my understanding of the topic and some related effects.

This definitely extends to digital games as well. A particularly big takeaway for me was how much wording can matter to players’ emotional responses. One example the talk gave: a mechanic where a player loses 1 point and one where their opponents each gain 1 point may be mathematically equivalent, but each has a significantly different impact on players’ perceptions of the game; which approach a design should take depends on the emotional response the designers want to elicit.
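A concrete illustration, with numbers of my own rather than the talk’s: in a four-player game with scores (10, 8, 8, 8), “the leader loses 1” yields (9, 8, 8, 8), while “everyone else gains 1” yields (10, 9, 9, 9). In both cases the leader ends up exactly 1 point ahead, yet the first framing feels like a punishment and the second like a reward for the rest of the table.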

Can You See Me Now? Building Robust AI Sensory Systems

Eric Martel provided some advice on building vision systems for AI agents. I didn’t personally find much new or novel information here. However, the presentation was solid and clear, and I felt that referring others to the talk may provide a lot of benefit.

Bringing Hell to Life: AI and Full Body Animation in ‘DOOM’

I really liked this talk. Jake Campbell presented the technical and design choices in DOOM’s AI and animation systems.

The main idea is that they relied heavily on full-body animations with relatively little additive blending, outside of a few key items, which made the demons feel reactive and lifelike. Jake also demonstrated some animation tricks that gave the animators more control over IK and the like.

The animations in DOOM were excellent, and for anyone looking to replicate that style, this talk is a must see.

Wednesday, March 1st

Continuous World Generation in ‘No Man’s Sky’

Innes McKendrick demonstrated how No Man’s Sky’s spatial system worked along with its procedural generation. The really neat trick they used was to take advantage of coordinate systems beyond just the usual Cartesian.

The talk will be of particular interest to anyone working on larger planet-spanning games that need to work with big spherical terrains.

Cold, Hard Cache: Insomniac’s Cache Simulator

The always-excellent Andreas Fredriksson of Insomniac wrote a tool called ig-cachesim used to measure CPU cache misses. Think Cachegrind, but fast enough to be usable for games, and it works on Windows.

The talk delved into the details of writing the tool. The most important bit for most will be that Andreas and Insomniac released ig-cachesim on GitHub at https://github.com/insomniacgames/ig-cachesim. I’m really looking forward to trying it out with our project when I get back to the office!
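To give a flavor of what a cache simulator actually does (a toy direct-mapped model of my own, nothing like ig-cachesim’s real implementation):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Toy direct-mapped cache model: counts hits and misses for a stream
// of memory addresses, which is the essence of cache simulation.
class ToyCacheSim
{
public:
    ToyCacheSim(size_t cacheBytes, size_t lineBytes)
        : m_lineBits(Log2(lineBytes)),
          m_tags(cacheBytes / lineBytes, ~0ull) {}

    void Access(uint64_t address)
    {
        uint64_t line = address >> m_lineBits; // which cache line
        size_t   slot = line % m_tags.size();  // direct-mapped slot
        if (m_tags[slot] == line) ++hits;
        else { ++misses; m_tags[slot] = line; } // evict on miss
    }

    uint64_t hits = 0, misses = 0;

private:
    static size_t Log2(size_t v) { size_t b = 0; while (v >>= 1) ++b; return b; }

    size_t                m_lineBits;
    std::vector<uint64_t> m_tags;
};
```

Feed it the address of every load and store an instrumented program performs and you get miss counts per call site; the hard engineering is capturing that address stream fast enough to be practical.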

‘Overwatch’ Gameplay Architecture and Netcode

One of several excellent talks about the technology in Overwatch. In this talk, Timothy Ford presented various aspects of the Overwatch gameplay code.

Perhaps the biggest impact on me personally is that it actually convinced me that the ECS architectural pattern isn’t entirely just an over-hyped fad. I still have some reservations, but many of the problems I’ve observed with ECS in publicly-available articles or GitHub repos are solved by Overwatch’s approach. It’s simple, light on template and metaprogramming bloat, and with Overwatch’s gameplay design it’s able to play heavily to the strengths of ECS without sacrificing everything on the altar of data-oriented design. The ECS itself was a fairly small portion of the talk, but it definitely impacted me fairly heavily.
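In the spirit of that simplicity (a minimal illustration of the pattern by me, not Overwatch’s code): entities are just IDs, components are plain data, and systems are free functions over the components they care about:

```cpp
#include <cstdint>
#include <unordered_map>

// Entities are just IDs; components are plain structs.
using Entity = uint32_t;

struct Position { float x = 0, y = 0; };
struct Velocity { float dx = 0, dy = 0; };

// Component storage: one container per component type.
struct World
{
    std::unordered_map<Entity, Position> positions;
    std::unordered_map<Entity, Velocity> velocities;
};

// A "system" is a free function that iterates its components.
void MovementSystem(World& w, float dt)
{
    for (auto& [entity, vel] : w.velocities)
    {
        auto it = w.positions.find(entity);
        if (it != w.positions.end())
        {
            it->second.x += vel.dx * dt;
            it->second.y += vel.dy * dt;
        }
    }
}
```

A production ECS would use cache-friendly packed arrays rather than hash maps, but the shape of the pattern is the same.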

The meat of the talk focused on the state-based gameplay code, how the network interacts with states, and how prediction and server correction interact with gameplay code. It’s not especially novel, but it was exceptionally well-presented and made a very strong case for state-based gameplay. Their architecture just won’t work for many types of games (there’s no silver bullet) but it definitely seems to have been the right choice for Overwatch.

I highly, highly recommend watching this talk if you’re interested in game networking or multiplayer gameplay code.

Networking Scripted Weapons and Abilities in ‘Overwatch’

This talk by Dan Reed is in many ways the Part 2 of the previous talk. Dan showed the state-based scripting system used in Overwatch. Combined with the state replication netcode, this system allowed Overwatch’s designers to quickly build out abilities and events that Just Worked(tm) with Overwatch’s clientside prediction and server state replication systems, all while using a minimum of bandwidth.

Essentially, their system is a node-based graph with state nodes; a state node is a condition that can be triggered into an on or off state. Scripts can also read and set variables. Variables and states can be replicated to the client: the server might need to evaluate a number of variables and conditions to select a state, while the replication system need only copy a couple of variables and some state values down to the client for the client to deterministically replicate the script’s actions.

Because it’s state-based, client mispredictions can be rolled back without messy and error-prone triggers in script code. All in all, it’s a really clever design, and is a key part of making Overwatch’s state-based netcode work so elegantly and efficiently.
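A hypothetical sketch of that shape (my own reconstruction of the general idea, not Blizzard’s actual API):

```cpp
#include <cstdint>
#include <unordered_map>

// The server evaluates many conditions to decide which states are on;
// only the resulting states and a few variables cross the wire, and
// the client deterministically drives the script from that snapshot.
struct ScriptState
{
    std::unordered_map<uint32_t, bool>  states;    // state node id -> on/off
    std::unordered_map<uint32_t, float> variables; // variable id   -> value
};

// Server side: evaluate whatever expensive conditions it likes...
void ServerEvaluate(ScriptState& s)
{
    const uint32_t kAbilityActive = 1; // illustrative ids
    const uint32_t kChargeTime    = 2;
    s.states[kAbilityActive] = true;   // imagine many condition checks here
    s.variables[kChargeTime] = 0.75f;
}

// ...client side: apply the replicated snapshot. Because a snapshot is
// a plain value, rolling back a misprediction is restoring an old copy.
void ClientApply(const ScriptState& replicated, ScriptState& local)
{
    local = replicated;
}
```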

The Data Building Pipeline of ‘Overwatch’

David Clyde described the tools and asset pipeline Blizzard developed for Overwatch in this talk.

The meat was that they managed to get the time to check out a new branch down to several minutes, and kept the time to update developers’ local branches even lower. Another cool trick they had is a playtest browser that allows anyone in the company to jump into a test organized by another developer, even if that test requires custom binaries or local assets to play.

These are both huge problems that keep coming up year after year at GDC. I really liked elements of Blizzard’s solution.

In particular, Blizzard used the concept of a local asset server that the game would connect to in its filesystem abstraction layer. This allowed a game client to convert-on-demand any assets it needed as they were requested. This removes the need to run a conversion over all assets just to test a particular hero on a single map, for example, which is definitely something we’ve struggled with in the past (e.g., why should I have to wait many minutes to generate terrain data for dozens of maps if I’m iterating on terrain converters and have a single complex test map I’m using).
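A hypothetical sketch of the convert-on-demand idea (my own illustration of the concept, with made-up paths and a stand-in converter, not Blizzard’s pipeline): the game’s filesystem abstraction asks a local asset server for an asset, and conversion happens lazily, only when the cached output is missing or stale:

```cpp
#include <filesystem>
#include <fstream>
#include <iterator>
#include <optional>
#include <string>
#include <vector>

namespace fs = std::filesystem;

class LocalAssetServer
{
public:
    std::optional<std::vector<char>> Open(const std::string& logicalPath)
    {
        const fs::path source    = fs::path("source") / logicalPath;
        const fs::path converted = fs::path("cache")  / logicalPath;

        if (!fs::exists(source))
            return std::nullopt;

        // Convert only when the cached output is missing or out of date.
        if (!fs::exists(converted) ||
            fs::last_write_time(converted) < fs::last_write_time(source))
        {
            fs::create_directories(converted.parent_path());
            if (!Convert(source, converted))
                return std::nullopt;
        }

        std::ifstream in(converted, std::ios::binary);
        return std::vector<char>(std::istreambuf_iterator<char>(in), {});
    }

private:
    // Placeholder "converter": a real pipeline would dispatch to the
    // proper tool per asset type; here we just copy the file through.
    bool Convert(const fs::path& src, const fs::path& dst)
    {
        std::error_code ec;
        fs::copy_file(src, dst, fs::copy_options::overwrite_existing, ec);
        return !ec;
    }
};
```

Only the assets the game actually touches ever get converted, which is exactly what makes the "test one hero on one map" case fast.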

A part of their solution requires keeping hashes of asset contents (both source and converted) as well as a set of metadata about content versions, which also required them to build a custom revision control system for their content. I’m a bit less fond of that part of the talk; building an asset pipeline and local asset server is one thing, but requiring an entire custom infrastructure (development, maintenance, and ops!) for versioning is something else entirely.

Overall though, Blizzard solved a problem that many other studios (even ones with much larger teams and many more development resources!) are still struggling with. Excellent talk.

Thursday, March 2nd

Insomniac’s Web Tools: A Postmortem

Andreas Fredriksson gave a talk describing some of the problems they had with their Web-based tools architecture, which they are now moving away from in favor of Qt.

I’m a little conflicted about this talk. Most of the information and evidence presented by Andreas was excellent, but a lot of what I got from the talk was that Insomniac’s Web toolchain was simply ahead of its time.

One of the big problems mentioned by Andreas, for example, was that building the tools on Chrome inside the browser made them subject to Google’s update schedule and deprecations, and trying to avoid that huge problem broke many developers’ Web browsing environments. At the time, of course, projects like Electron didn’t exist, and even the Chromium Embedded Framework was still in its infancy. Today’s Web platform is also a lot more capable than what was available in 2010; building tools inside a browser is far more feasible using newer HTML5 APIs. Were tools like Insomniac’s built from the ground up on today’s technologies, we’d likely be reading a very different postmortem.

Even Andreas’ point about the clash of cultures between C++ and Web development could be lessened by the ability to compile C++ directly to JavaScript with tools like Emscripten, though I’m not sure that’s the best way to build Web UIs. It certainly does present a lot of additional options.

Overall, there’s a lot of valuable insight to glean from this talk for anyone considering a Web-based pipeline or using HTML5 to build tools UIs. At the very least, there are a lot of mistakes and pitfalls one should be aware of, and in this talk Andreas outlines many of them very clearly.

Creating a Tools Pipeline for ‘Horizon: Zero Dawn’

Overall, a good talk on building tools for larger open-world games by Dan Sumaili and Sander Van der Steen. I unfortunately took particularly bad notes for this talk and don’t remember a lot of the details.

A portion focused on their engineering practices and approach to making large refactors to their existing pipeline from concept through late production of Horizon. I found that somewhat useful given the industry’s tendency to avoid risk or tech investment in production projects.

‘Rainbow Six Siege’: Optimizing Servers for the Cloud

Jalal El Mansouri presented some insights on getting the best performance out of game servers, particularly in the wild west environments of cloud servers where compute power/time may not be fully under the developers’ control.

If you’re already familiar with game network architectures but not yet familiar with the realities of cloud computing, this is a good talk to watch.

Stop Killing Our Servers!

This talk by Sela Davis and Jennie Lees presented a number of warnings and tips for getting clients and servers to play nicely together. In particular, they provided insights on how to architect clients to avoid causing the excessive server load that might cause a collapse on launch day.

There wasn’t much novel about the talk, but a lot of very good points were clearly communicated. I’d highly recommend it for developers of online games, on both the server and client teams.

Avoiding server death isn’t purely in the hands of the server’s developers. The clients can do a lot of dumb things that might bring down a server, and a good degree of UI flow and client-side engineering is required to be a good server citizen.

Examples provided included things like the value of login queues (avoid a stampede of logins after a server crash or release), client-side caches and tolerance to stale data (don’t poll the server constantly for data the client doesn’t need or which hasn’t changed), and the value of monitoring and telemetry on server health and behavior (including client logs that can be easily correlated to server logs for bug hunting).
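As one concrete example of being a good client-side citizen (a sketch of my own, assuming a hypothetical tryConnect callback, not code from the talk), reconnecting with exponential backoff plus jitter keeps every client from stampeding a recovering server at once:

```cpp
#include <algorithm>
#include <chrono>
#include <random>
#include <thread>

// Retry with exponential backoff plus "full jitter": each failed
// attempt waits a random time in [0, delay], then doubles the delay.
bool ConnectWithBackoff(bool (*tryConnect)(), int maxAttempts)
{
    std::mt19937 rng(std::random_device{}());
    double delaySeconds = 1.0;

    for (int attempt = 0; attempt < maxAttempts; ++attempt)
    {
        if (tryConnect())
            return true;

        std::uniform_real_distribution<double> jitter(0.0, delaySeconds);
        std::this_thread::sleep_for(std::chrono::duration<double>(jitter(rng)));
        delaySeconds = std::min(delaySeconds * 2.0, 60.0); // cap the wait
    }
    return false;
}
```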

Friday, March 3rd

Deterministic vs. Replicated AI: Building the Battlefield of ‘For Honor’

Xavier Guilbeault and Frederic Doll presented For Honor’s approach to networking hundreds of AI agents: make them all deterministic and don’t network them at all!

Players already require a lot of fancy tricks for smooth networking, including prediction, correction, and rollback. AI typically just reacts to the world state and player actions, so if a client’s avatar is mispredicted and rolled back, the AI can react to that rollback in a deterministic fashion. This allows the game to “just” network player inputs.
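A hypothetical sketch of that structure (my own illustration of input-only networking over a deterministic simulation, not For Honor’s code):

```cpp
#include <cstdint>
#include <map>
#include <vector>

struct Input { uint32_t playerId = 0; uint32_t buttons = 0; };
struct World { /* players, AI agents, RNG position, ... */ };

// Must be fully deterministic: fixed timestep, no wall-clock reads,
// no unordered iteration, seedable randomness only. Placeholder body.
World Step(const World& prev, const std::vector<Input>& /*inputs*/)
{
    return prev; // real game logic goes here
}

class Simulation
{
public:
    // Advance one frame with the inputs we currently believe in.
    void Tick(const std::vector<Input>& inputs)
    {
        m_history.push_back(m_world); // snapshot at start of frame
        m_world = Step(m_world, inputs);
        ++m_frame;
    }

    // A correction arrived: rewind and deterministically re-simulate.
    // The AI "reacts" to the rollback for free, because its behavior
    // is part of the deterministic step function.
    void Rollback(uint32_t toFrame,
                  const std::map<uint32_t, std::vector<Input>>& corrected)
    {
        m_world = m_history[toFrame];
        m_history.resize(toFrame);
        for (uint32_t f = toFrame; f < m_frame; ++f)
        {
            m_history.push_back(m_world);
            m_world = Step(m_world, corrected.at(f));
        }
    }

private:
    World              m_world;
    std::vector<World> m_history; // one snapshot per frame
    uint32_t           m_frame = 0;
};
```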

For Honor also has AI-controlled heroes who do get their own player-style networking rather than the purely deterministic behavior of the AI mooks. The rest of the talk concentrated on how For Honor’s netcode dealt with deterministically resolving inputs across the network and how this allowed the team to easily add AI to all player heroes.

I liked the talk a lot.

Modify Everything! Data-Driven Dynamic Gameplay Effects on ‘For Honor’

A nice talk by Aurelie Le Chevalier about For Honor’s buff (aka modifier) system. There wasn’t anything too novel about their approach, but Aurelie did touch on some of the trickier design issues with stackable modifiers and how their team solved those problems.

One bit I did really like - which reminded me of the Overwatch netcode talk - was how their system dealt with actions attached to modifiers (like the spawning of particle effects, UI interactions, etc.). I wish I had taken better notes on this so I could explain it accurately without misrepresenting their solution; the gist I recall is that it focused on keeping modifier records on characters that can be soft-rolled-back and replayed when handling network mispredictions, while still allowing the effects to be fully reverted when the modifier is removed.
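Given how fuzzy my notes are, here’s a speculative reconstruction of the general idea only, very much not the actual For Honor system: each modifier records both how to apply and how to revert its side effects, so any modifier can be cleanly removed (or rolled back and replayed) at any time:

```cpp
#include <algorithm>
#include <cstddef>
#include <functional>
#include <string>
#include <vector>

struct Modifier
{
    std::string           name;
    std::function<void()> apply;  // e.g. add stat bonus, spawn VFX
    std::function<void()> revert; // undo everything apply did
};

class ModifierStack
{
public:
    void Add(Modifier m)
    {
        m.apply();
        m_active.push_back(std::move(m));
    }

    // To remove one modifier mid-stack, revert everything above it in
    // LIFO order, drop it, then re-apply the rest in original order.
    void Remove(const std::string& name)
    {
        auto it = std::find_if(m_active.begin(), m_active.end(),
            [&](const Modifier& m) { return m.name == name; });
        if (it == m_active.end())
            return;
        const size_t index = static_cast<size_t>(it - m_active.begin());

        // Revert from the top of the stack down to (and including)
        // the target...
        for (size_t i = m_active.size(); i-- > index; )
            m_active[i].revert();

        // ...drop the target, then re-apply what was stacked above it.
        m_active.erase(m_active.begin() + static_cast<std::ptrdiff_t>(index));
        for (size_t i = index; i < m_active.size(); ++i)
            m_active[i].apply();
    }

private:
    std::vector<Modifier> m_active;
};
```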

Given how often I’ve seen questions about how RPGs and the like implement these kinds of systems, and how well presented the talk was, I consider this talk recommended watching.