The Gold submissions at DigiPen were due at the end of last week. Our game Subsonic made a strong impression, but not everything was rosy. Here’s a look at what went right and what went wrong, and how you can avoid our mistakes in your own projects. As a recap, our game is a stylized top-down stealth puzzle game set in a 1960s Spy-Tech world, where the player’s main weapon is a sound gun that can be used to distract, confuse, and disrupt enemy agents and guards. The short version of all that is that Subsonic is an AI puzzle game where you can see sound.

What Went Right

Perhaps the biggest success was the gameplay idea itself. Unlike many other teams, we found that the original idea carried through relatively unchanged from initial conception all the way through Gold. While there was certainly a strong need for additional mechanics to really flesh out the sound gun ability, overall we made very few changes to the gameplay and design. Unfortunately, I can’t give any real advice on how to emulate this success; we hit on a good idea and were lucky for it.

Our stylization and graphics helped to really carry the game idea forward. Instead of aiming for a realistic or even cartoony character-rich visual style, we went for a tactical readout that rendered the environment, player, and characters in a very abstract vector style. This freed us from the need to create or tweak a large number of art assets. What we lacked in raw art we made up for in effects and polish, from the way the sound waves bounced off walls to the virtual scanlines and CRT-style blur applied to the game screen. Level transitions included blur and saturation effects, menu items animated on mouse-over, and the HUD made heavy use of shaders to display a gorgeous charge meter for the sound gun. Given the overall simplicity of the graphics, they still managed to look quite impressive and beautiful, especially for a Sophomore game.
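
As an aside, the scanline half of that look is cheap to reproduce as a full-screen post-process. The fragment shader below is a minimal sketch of the technique, not our actual shader; the uniform names, constants, and GLSL version are all illustrative.

```cpp
// Minimal sketch of a scanline/CRT-style post-process fragment shader,
// in the spirit of Subsonic's screen effects but not taken from them.
// Uniform names and constants are illustrative.
const char* kScanlineFrag = R"(
#version 120
uniform sampler2D u_screen;     // the fully rendered frame
uniform vec2      u_resolution; // screen size in pixels
varying vec2      v_uv;

void main()
{
    // Cheap horizontal blur: average with the two neighboring pixels
    // for a soft, CRT-like look.
    vec2 px = vec2(1.0 / u_resolution.x, 0.0);
    vec3 color = (texture2D(u_screen, v_uv - px).rgb
                + texture2D(u_screen, v_uv).rgb
                + texture2D(u_screen, v_uv + px).rgb) / 3.0;

    // Darken every other pixel row to fake CRT scanlines.
    float line = mod(gl_FragCoord.y, 2.0) < 1.0 ? 0.85 : 1.0;

    gl_FragColor = vec4(color * line, 1.0);
}
)";
```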

The third major success was the strength of the core engine. Overall, relatively little in the engine needed to be replaced or rewritten during the second semester. There were certainly some cleanups, feature enhancements, and some refactoring, but overall the engine was flexible and easy to use, to the point that everything we wanted to do was easy to pull off with a minimum of fuss. I attribute this to the clean use of components and messages.
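
For those who haven’t seen the pattern, a component-based engine builds game objects by composition instead of deep inheritance trees, and messages let components react to events without knowing who sent them. The sketch below is hypothetical (the names Component, Message, GameObject, and HearingComponent are mine, invented for illustration), but it captures the shape of what made the engine easy to extend.

```cpp
// Hypothetical sketch of a component/message game object. All names
// are invented for illustration; the real engine differed in detail.
#include <memory>
#include <string>
#include <vector>

struct Message {
    std::string type;  // e.g. "SoundWaveHit", "Damage"
};

class Component {
public:
    virtual ~Component() = default;
    virtual void Update(float dt) {}
    virtual void HandleMessage(const Message& msg) {}
};

class GameObject {
public:
    void AddComponent(std::unique_ptr<Component> c) {
        components_.push_back(std::move(c));
    }
    // Broadcast a message to every component; components that don't
    // care simply ignore it, which keeps them decoupled.
    void Send(const Message& msg) {
        for (auto& c : components_) c->HandleMessage(msg);
    }
    void Update(float dt) {
        for (auto& c : components_) c->Update(dt);
    }
private:
    std::vector<std::unique_ptr<Component>> components_;
};

// Example: a guard's hearing reacts to sound-wave messages without
// knowing anything about the sound gun that produced them.
class HearingComponent : public Component {
    void HandleMessage(const Message& msg) override {
        if (msg.type == "SoundWaveHit") { /* become alerted, investigate */ }
    }
};
```

The payoff of this shape is that adding a new behavior means writing one new component and attaching it to an object; nothing that already works has to change.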

What Went Wrong

By far the biggest issue we had was the relative weakness of our in-game editor. This was due largely to the lack of a solid GUI framework, and to my reluctance to write such a framework as anything less than a full application-level GUI framework along the lines of a slimmed-down GTK+ or Qt. By not simply putting something together that did what it needed, even if it wasn’t the best possible technology, the editor never really materialized into much more than a basic tile editor. This limited the flexibility and productivity of the designers to an unacceptable degree.
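
In hindsight, even the crudest immediate-mode GUI, built on the input and draw calls the engine already had, would have unblocked the editor. The sketch below is hypothetical and not anything we actually wrote; MouseInside and Button are invented names, and DrawQuad/DrawText stand in for whatever renderer calls are handy.

```cpp
// Hypothetical sketch of a "good enough" immediate-mode editor button.
struct Rect { float x, y, w, h; };

bool MouseInside(const Rect& r, float mx, float my) {
    return mx >= r.x && mx < r.x + r.w && my >= r.y && my < r.y + r.h;
}

// Returns true on the frame the button is clicked. Called every frame,
// so there is no widget tree, layout pass, or retained state to manage.
bool Button(const Rect& r, const char* label,
            float mouseX, float mouseY, bool mouseClicked) {
    bool hot = MouseInside(r, mouseX, mouseY);
    // DrawQuad(r, hot ? highlightColor : normalColor);
    // DrawText(r.x + 4, r.y + 4, label);
    (void)label;
    return hot && mouseClicked;
}

// Usage, once per frame:
//   if (Button({10, 10, 80, 24}, "Save", mx, my, clicked)) { /* save level */ }
```

A tile palette is not much more than a grid of these, which is roughly all the editor ever needed.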

A second big failing was the lack of attention paid to play testers during the second half of the semester. During the first half, a lot of feedback was taken from testers and used to tweak and enhance the gameplay. Towards the end, however, many of the levels turned out to be too difficult for players, but the feedback was often ignored, and sometimes even ridiculed, by the designers. While the game managed to retain high marks for fun despite the difficulty and balance issues, by far the biggest complaint we have repeatedly received is that the game ramps up the difficulty too fast and that many of the puzzles are impossible to solve by any means other than permutation (trying every solution until the particular one the designer had in mind is found). This is the complete opposite of the problem we had at the end of the first semester, where the game was considered too easy because any puzzle could be bypassed simply by running fast enough. We never found the perfect balance, and this can be attributed almost completely to the failure to really take in play testers’ feedback and tweak the levels appropriately based on how actual players approached the puzzles.

The final failing, which had little effect on the game itself but which certainly affected our grade, was a snafu with the final submission. One of the final tweaks to the game, made to comply with the TCRs (Technical Certification Requirements) used for grading, was never checked into our Subversion repository. When I did my final run through the TCRs, I checked off that requirement (the presence of a simple help screen showing player controls) because I knew the feature had been written and tested. What I failed to do was actually check the RTM build I made to ensure that the feature was present. Because of my lack of thorough testing, I did not catch that the feature was never committed, and hence not present on the RTM build machine, and our final submission lacked the critical requirement. By virtue of having to make a second submission after the deadline when the instructors notified me of the mistake, our final grade for the project dropped a full 10%. The moral of this story is to treat your RTM as a Release Candidate (the real kind of RC, not the “beta test” RCs many companies release). Make the final build and then fully and completely run through all tests and certifications; if it passes, great, and if not, fix it, build a new RC, and then completely run through the tests and certifications again.

Summary

Overall, I consider Subsonic a great success. Our team plans to do some more light refinement and work on the project in preparation for submission to IGF and IGC.