This is the beginning of a series on resource pipelines in games. Specifically, this is about the tooling and related topics of getting content from authors into a shipping game build.
A few different sub-topics will be covered. The primary topic will be around the tooling used to convert so-called source resources (like Maya files) into shipping resources (usually engine-optimized custom formats), and doing so as efficiently and reliably as possible.
Secondary topics will include resource packaging concerns, how to structure resource loading in an engine to ensure smooth tooling and packaging support, and how to handle resource dependencies for multi-platform games.
When Pipelines Hurt
Let’s step back in time to the distant mist-ensconced summer of last year (2017). My project at the time was of a decent size, maybe 150 people, mostly artists and designers. Our game was moving well past our initial vertical slice and we were scaling out to production levels of content - half a dozen maps, several dozen playable roles/classes, all art-complete.
Our repository of source resources was hundreds of gigabytes, not far off from a terabyte. That itself wasn’t a problem; the more source resources the better, really. The problem was that the final compressed builds of our game were over 100GB and growing, and we had maybe only a quarter of our content built at that point. Making a clean build with all content took multiple days of processing time.
After our team analyzed the situation, we identified a few key problems. Some of these were underlying problems with the engine we used, some were self-inflicted wounds caused by our misuse of the engine, and some were novel problems specific to our project.
First, we had no idea which content was actually in use by the game; we processed all of our source content and packaged all of it into our final builds. Aside from the various problems that can arise when unfinished or unintended content ships with a game, this primarily meant that our 100GB builds would have been a lot smaller if we had stripped out unused content.
Second, deeply related to the first problem, was that we couldn’t easily break our content up into separate packs. If we wanted to make a particular map into DLC, for example, we had no good automated way to identify the content required by that map but not also required by the base maps. That is, we couldn’t automatically figure out how to pack the exact set of resources necessary to make a working small DLC package.
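The computation we were missing is conceptually simple: walk the dependency graph from the DLC map to collect everything it needs, then subtract everything the base maps already ship. Here is a minimal sketch of that idea; the `DepGraph`, `Closure`, and `DlcPack` names are hypothetical, and real pipelines would operate on resource ids rather than raw strings.

```cpp
#include <string>
#include <unordered_map>
#include <unordered_set>
#include <vector>

// Hypothetical dependency map: resource id -> ids it references directly.
using DepGraph = std::unordered_map<std::string, std::vector<std::string>>;

// Collect the transitive closure of everything a root resource needs,
// including the root itself.
std::unordered_set<std::string> Closure(const DepGraph& graph,
                                        const std::string& root) {
    std::unordered_set<std::string> seen;
    std::vector<std::string> stack{root};
    while (!stack.empty()) {
        std::string id = stack.back();
        stack.pop_back();
        if (!seen.insert(id).second) continue;  // already visited
        auto it = graph.find(id);
        if (it == graph.end()) continue;        // leaf resource
        for (const auto& dep : it->second) stack.push_back(dep);
    }
    return seen;
}

// The DLC pack is the DLC map's closure minus everything
// the base maps already ship.
std::unordered_set<std::string> DlcPack(const DepGraph& graph,
                                        const std::string& dlcMap,
                                        const std::vector<std::string>& baseMaps) {
    std::unordered_set<std::string> base;
    for (const auto& m : baseMaps)
        for (const auto& id : Closure(graph, m)) base.insert(id);

    std::unordered_set<std::string> pack;
    for (const auto& id : Closure(graph, dlcMap))
        if (!base.count(id)) pack.insert(id);
    return pack;
}
```

The hard part in practice isn’t this traversal; it’s having a trustworthy dependency graph to traverse in the first place, which is what the reference infrastructure discussed below provides.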
Third, getting to the subject of build speed instead of size, we were often rebuilding the same content. Our workflow involved a lot of Perforce branches, and builds couldn’t share workspaces for reliability reasons. An expensive piece of content checked into one branch would need to be converted, and then that same content was converted again after it was merged into another branch. Platform support exacerbated this; some content needed to be converted differently for each platform, but most content did not, yet we reconverted it for each platform all the same.
Fourth, our content conversion pipeline was essentially all single-threaded and single-process, which meant that we couldn’t really scale it out by adding hardware. More build nodes just meant more builds in flight at a time. Individual conversions, like generating lightmaps, might be parallel, but the process as a whole was not.
A few other problems existed, but those four were our biggest issues.
It’s worth noting that the tech stack we were using was battle-tested and had been used to ship multiple AAA games in the past. These problems weren’t unique to indie developers, first-time developers, or brand-new, incomplete engines.
Using an existing engine and tech stack did help in that some problems had been solved reasonably well in the core engine and pipeline code. The dependency graph for resources had been solved, for example, though primarily for the purpose of pre-loading resources during level transitions.
Thus we had a lot of the necessary infrastructure for dependencies, and that infrastructure was actually pretty good.
The key bit offered by the engine was a built-in resource reference type. Essentially a wrapper type around a resource id (in that engine’s case, a resource path and a hash thereof).
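A minimal sketch of what such a wrapper type might look like; the `ResourceRef` name and the choice of 64-bit FNV-1a as the path hash are assumptions for illustration, not the engine’s actual implementation.

```cpp
#include <cstdint>
#include <string>
#include <utility>

// Hypothetical reference wrapper: stores the source path plus a stable
// hash of it, so runtime lookups can compare integers instead of strings.
struct ResourceRef {
    std::string path;   // e.g. "textures/rock_diffuse.tga"
    uint64_t    hash;   // stable id derived from the path

    // 64-bit FNV-1a, a common choice for stable string ids.
    static uint64_t HashPath(const std::string& p) {
        uint64_t h = 14695981039346656037ull;  // FNV offset basis
        for (unsigned char c : p) {
            h ^= c;
            h *= 1099511628211ull;             // FNV prime
        }
        return h;
    }

    explicit ResourceRef(std::string p)
        : path(std::move(p)), hash(HashPath(path)) {}
};
```

The important design property is that a reference is a distinct type, not a bare string: anything that walks serialized data can recognize it unambiguously.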
Also important was the reflection-based serialization of those types. This is handy because it avoids the need to deserialize source resources just to locate references in files.
Were reference data just output as a string, tools would have no way to differentiate which strings were references to resources and which weren’t.
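Concretely, the benefit of a reflected, typed reference might look like the following sketch. The `FieldType`, `Field`, and `CollectRefs` names are hypothetical; the point is that when each serialized field carries a type tag, a tool can harvest every dependency without understanding the resource as a whole.

```cpp
#include <string>
#include <vector>

// Hypothetical reflected field description: each serialized field
// carries a type tag alongside its payload.
enum class FieldType { Int, Float, String, ResourceRef };

struct Field {
    FieldType   type;
    std::string name;
    std::string value;  // textual payload; a path when type == ResourceRef
};

// Because references are a distinct reflected type, a generic tool can
// pull every dependency out of a serialized resource without knowing
// anything about what the resource actually is.
std::vector<std::string> CollectRefs(const std::vector<Field>& fields) {
    std::vector<std::string> refs;
    for (const auto& f : fields)
        if (f.type == FieldType::ResourceRef) refs.push_back(f.value);
    return refs;
}
```

A plain `String` field named "description" and a `ResourceRef` field pointing at a texture are indistinguishable as raw text; the type tag is what makes dependency extraction reliable.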
Resource pipelines aren’t trivial. Putting effort up front to get a game’s resource pipeline into excellent shape well before entering production can save the entire team a lot of heartache and lost time.
In future installments, we’ll look at some of the individual topics of building a production resource conversion pipeline.