ColorIt – Post-Mortem

History

ColorIt is a project I worked on together with four other students of the Master’s Program for Game Engineering and Simulation at the UAS Technikum Wien.

My colleague Martin Nieratschker came up with the idea of a game in which the player fights for territory by manipulating a canvas and applying color to it. As part of a design class, we were tasked with prototyping an idea for the game project we were supposed to deliver by the end of the third semester. I quickly cooked up a simple 2D prototype that let four players paint the canvas using Xbox 360 pads, which allowed us to evaluate our ideas. Even though the prototype was buggy as hell on the gameplay side, we quickly realized that there was something cool about the concept and kept going.

In the course of a few weeks, I hunkered down and built a simple 3D rendering framework that would suffice for our needs. After those awkward weeks without a playable version, we could finally test how well our initial design had translated to a 3D split-screen multiplayer game. That was when we first started killing sacred cows and began truly iterating on the concept.

What went right

  • Prototyping – Helped build confidence in our coding abilities
  • Scope – Intentionally aiming low made any progress all the more motivating
  • Constraints – Deciding early on what we did not want to do paid off in droves
  • Iterating – The best design decisions were made halfway through the dev cycle
  • Cutting – Even though we aimed low, some things had to go without any tears
  • Visuals – For a game almost entirely made of programmer art, it looks pretty rad, if you ask me
  • Build Organization – Having a clean system and neat build output made testing so much easier
  • Model Converter – Transforming geometry to vertex-buffer format allowed keeping the FBX-SDK out of the game application
  • Simple Engine Systems – An engine where every single module can be rewritten in a day or two if need be empowers creative freedom
  • In-game Console – Making it the central controlling organ of the game made testing that much easier
  • Zealous Assertions – Hardly ever was there an instance where the game crashed and we didn’t instantly know why

What went wrong

  • Team size – We were not free to decide on this but five people is just too many for a project of this small scope
  • My hesitance to delegate work – Progress was held back by me not wanting to give up responsibility
  • Prematurely ruling out skeletal animation – Even though flexible, our own hard-coded system was just too hard on the art department
  • Not patching the model converter – Converter output had the wrong vertex order; not fixing it in the tool meant artists had to work around it
  • Not considering automated testing enough – Even though we did use unit testing, most of the systems were not designed with testability in mind
  • Not solving seemingly hard problems up-front

Selected Details on what went right

Scope

There’s one piece of advice I have heard pretty much all professionals give when asked for tips for aspiring game developers: Don’t over-scope. Keep stuff small and basic. Don’t try to compete with AAA productions. We took this advice to heart. The result was not only a game design that was simple enough to almost completely re-arrange mid-project, but one that was also simple enough not to rely on top-notch visuals. No matter how much blood you put into your pet-project shooter, it will always look mediocre in comparison. But try to make something different and the competition will be much more forgiving.

I’m not sure whether I should point this out or not, but aiming so low also resulted in a very relaxed semester. While other teams were literally working their asses off, I had plenty of time that semester, which came in quite handy when my father got very, very sick. Still, we’ve got something to show and it’s not too shabby for what was basically a 2.5-person project.

Constraints

It was pretty clear we did not have the capacity to add anything but an incredibly half-assed AI or networked multiplayer mode, so we decided to make a split-screen multiplayer game very early on. That makes it hard to show off and a tough sell for the PC, but in the end this project was about education and getting something fun done. Focusing on local multiplayer also led to the decision to require game pads, which makes the game an even tougher sell, but it also sparked the development of the radial menu that many testers explicitly stated they loved and a simple yet effective control scheme that would not have been possible with a mouse/keyboard combo.

Iterating

Early on, outposts were designed to not only fortify but also spread your color. Iteration made us recognize that this concept was way too fuzzy to make for interesting trade-offs. This may sound like a no-brainer and more experienced designers might have spotted this sooner, but for us it was vital to just try alternatives and instantly see what works and what doesn’t.

Another system that only became as good as it is today through iteration is the control of the player cursor. Initially, we had a scheme where the left stick controlled the cursor and the right stick panned the view, as it would in classic PC RTS games. Constant iteration gave us the chance to play around and come up with the final scheme that just works so much better.

The bridge and nuke abilities were only introduced when we were already halfway through development and test games got longer and longer. They now add the necessary spice to quickly topple stalemates and situations in which players are entrenched.

Cutting

The initial design draft included not only strategic points to be captured but also ability points that would grant the occupying player stat increases and abilities. What sounded pretty cool on paper was clearly a lot of work and leaving that feature out made for much tighter and more focused gameplay. Throughout the dev cycle it was tempting to re-introduce the concept to provide some strategic trade-offs and more meta-game but having external persons play the game quickly revealed that increased complexity would be out of the question for the initial release.

Build Organization

Having the code build depend on shader compilation, texture compression, geometry transformation and archive packing made for nice and clean build output and builds where assets never got stale or outdated. It was a breeze just having the build process publish the output to a Dropbox directory and waiting for tester feedback. I was tempted to set up a Jenkins server, but that probably would have been overkill. Also, having the game’s SVN revision appear on screen and in the console logs made tracking open/fixed issues in Trac pretty easy.
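
To illustrate the revision-tracking bit: a pre-build step can dump the output of svnversion into a tiny header that the game prints at startup. The sketch below is only the general idea; BUILD_REVISION and log_build_revision are made-up names, not the actual ColorIt code.

    #include <cstdio>

    // Hypothetically regenerated by a pre-build step running svnversion;
    // inlined here so the snippet stands on its own.
    #define BUILD_REVISION "r412"

    // Printing the revision at startup means every screenshot, log file
    // and bug report can be matched to the exact build it came from.
    static void log_build_revision()
    {
        std::printf("ColorIt build %s\n", BUILD_REVISION);
    }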

Model Converter

Initially, I didn’t even consider this worth noting, but the fact that all the other student projects loaded their models from Autodesk *.fbx files using the FBX-SDK made me really appreciate our approach. We just rolled a quick and dirty command line application that reads an FBX file, strips some unnecessary vertices and generates a file with a byte stream that can be fread() directly into a vertex buffer. Not only did that result in faster loading times and simpler run-time code, it also eliminated the need for a leaky 6 MB DLL next to our neat and tidy 200 KB executable.
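
To give an idea of the run-time side, here’s a minimal sketch of loading such a pre-baked blob. Note that the MeshHeader layout and the names are invented for this post, not the actual ColorIt format:

    #include <cstdint>
    #include <cstdio>
    #include <vector>

    // Hypothetical header the converter writes in front of the vertex data.
    struct MeshHeader
    {
        std::uint32_t vertexCount;
        std::uint32_t vertexStride; // size of one vertex in bytes
    };

    // Reads the pre-transformed vertex blob in one go; the result can be
    // memcpy'd straight into a locked vertex buffer.
    bool load_mesh_blob(const char* path, std::vector<std::uint8_t>& out)
    {
        std::FILE* f = std::fopen(path, "rb");
        if (!f)
            return false;

        MeshHeader header = {};
        bool ok = std::fread(&header, sizeof(header), 1, f) == 1;
        if (ok)
        {
            out.resize(header.vertexCount * header.vertexStride);
            ok = std::fread(out.data(), 1, out.size(), f) == out.size();
        }
        std::fclose(f);
        return ok;
    }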

Simple Engine Systems

As you might have guessed from the title of this blog, I’m not a big fan of over-engineering. Keeping engine systems as isolated and straightforward as possible at the cost of flexibility made it way easier to code fearlessly. Sometimes, something turned out to be poorly designed and had to be ripped out and rewritten, but having lightweight interfaces made that process enjoyable instead of dreadful. Mind that I’m talking about literal interfaces as in “what’s the functionality I can import by including that header” rather than slapping virtual onto everything. People might disagree with my design-pattern-agnostic low-tech approach. That’s okay. I’ll gladly tip my hat to those should my ways ever come back to bite my… err… posterior.
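
In case “literal interfaces” sounds vague: what I mean is that a module’s public surface is simply the handful of free functions its header exports. A made-up audio module might look like this; purely illustrative, not an excerpt from ColorIt:

    // audio.h - the header *is* the interface: a few free functions,
    // no base classes, no virtual dispatch.
    #pragma once

    namespace audio
    {
        bool init();
        void shutdown();
        void play_sound(const char* name, float volume);
    }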

In-game Console

In the beginning, I wanted to have one just for the hell of it – later on it became more and more clear how much it made things easier. The ability to alter sound/graphics settings or to start, pause and quit games might not sound like much. But it becomes really nice once you allow commands to be put in a file and executed at any time, or to be passed as startup parameters. Debugging shaders becomes a lot more enjoyable when PIX is set up to start the game with

start_game map_4_0

and spares you the hassle of having to whip out a controller and navigate through the menu every damn time you want to double-check. Having the console initiate game-state transitions and settings changes also made setting up menus pretty easy. All that buttons need to do is call console_execute(…) with the appropriate command in a const char* and be done with it. A possible downside I see looming over all of this is the console becoming some kind of god module that depends on every other module because it has access to all of them. This is definitely something I need to watch out for going forward.
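
For illustration, here’s roughly how a menu button and the console meet. console_execute is the function mentioned above, but the dispatcher behind it is only my guess at a possible shape, not the actual ColorIt code:

    #include <cstring>

    void console_execute(const char* command)
    {
        if (std::strncmp(command, "start_game", 10) == 0)
        {
            // parse the map name and kick off the game-state transition ...
        }
        else if (std::strcmp(command, "quit") == 0)
        {
            // request shutdown ...
        }
    }

    // A menu button only needs to know a command string,
    // not the systems behind it.
    void on_quit_button_pressed()
    {
        console_execute("quit");
    }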

Zealous Assertions

Having read quite a lot about custom assertions (especially the POW2-Assert on Charles Nicholson’s blog that I ended up using with a few slight additions), I knew using them would be a Good Thing™. However, only reading Tom Forsyth‘s take on them, as in

I’ll put them on PARANOIA>=3. That’s basically the setting where it’s such a nasty bug to find that I’m happy to leave it running for an hour a frame if in exchange an ASSERT fires to show me where it is

made me use them zealously throughout the code, and boy was it worth it. It worked so well that I could start implementing a feature and run the game expecting stuff to literally explode into my face Star Trek-style, only to find that every oversight was caught by another assertion, so I could simply add code to satisfy the conditions until everything worked like a charm. Now don’t get me wrong, I don’t put my brain to sleep and rely on that when I code. But it sure feels good to have a safety net that keeps you from spending nights debugging only to find out there was one weeny error in that damn support function someone (probably yourself) cooked up one day two minutes before midnight.
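
For those who haven’t rolled their own: the general shape of such a macro is below. This is a bare-bones stand-in for illustration only; the actual POW2-Assert adds handlers, once-only firing and formatted messages, so go read Charles’ post instead of copying this.

    #include <cstdio>

    #define MY_ASSERT(cond)                                              \
        do                                                               \
        {                                                                \
            if (!(cond))                                                 \
            {                                                            \
                std::fprintf(stderr, "Assertion failed: %s (%s:%d)\n",   \
                             #cond, __FILE__, __LINE__);                 \
                __debugbreak(); /* MSVC intrinsic; use a portable trap elsewhere */ \
            }                                                            \
        } while (0)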

Selected Details on what went wrong

Team Size

Even though we were not free to decide how many people would be working on the game, I feel the need to reiterate what I took from Fred Brooks‘ teachings. A bunch of guys thrown together just like that makes no surgical team, that’s for sure. Also, as hopefully anyone can imagine, students come with wildly different levels of skill and motivation. Having more of them decreases your chances of maintaining a certain level of motivation and proficiency. And before you know it, you’ll spend your time fixing frigging compile errors that somehow made it into the repository.

My hesitance to delegate work

I’ll be frank about this. My presence on the team severely hampered general motivation. In the beginning people could not wait to run off in all directions and start churning out code. We could have accomplished so much more, had I not been such a wet blanket. What I did was tell people to spend a little more brains on stuff. When that didn’t seem to produce better code, I more and more turned to telling them not to do anything before we at least had a chance to talk about it. In the end, the team had pretty much boiled down to just Martin and me.

I stand by my decisions. People who just paste tutorial code everywhere and then bitch and moan about how nothing works and how it could not possibly be their fault because their code worked flawlessly in isolation should not be on any project. But sadly, they exist and I should try harder to come up with remedies for situations like that. I won’t always be able to choose my colleagues out there, so I must learn to cope with things.

Prematurely ruling out skeletal animation

From the get-go there was not really a need for skeletal animation in ColorIt. We did not consider implementing it because there would have been no one willing to rig and animate models anyway. So I hacked together an animator component that was able to interpolate factors, vectors and quaternions so we could at least have things become bigger and smaller or float up and down. The system turned out pretty powerful and extensible for how little code it was, but it was also error-prone because it operated on plain memory. Furthermore, it took some brains to get what you wanted since key frames had to be hard-coded and there was no way to preview any of that stuff other than running the game. It may have been just enough this time around, but a solid animation pipeline would have brought much to the table.
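
To make the idea concrete, a float track in such a system boils down to hard-coded key frames and a linear interpolation between them; vector and quaternion tracks look the same with lerp/slerp instead. This is an illustrative sketch with made-up names, not the actual AnimationPlayer code:

    #include <cstddef>

    struct FloatKey
    {
        float time;
        float value;
    };

    // Samples a track at time t; assumes at least one key, sorted by time.
    float sample_track(const FloatKey* keys, std::size_t count, float t)
    {
        if (t <= keys[0].time)
            return keys[0].value;

        for (std::size_t i = 1; i < count; ++i)
        {
            if (t <= keys[i].time)
            {
                const float span = keys[i].time - keys[i - 1].time;
                const float u = (t - keys[i - 1].time) / span;
                return keys[i - 1].value + u * (keys[i].value - keys[i - 1].value);
            }
        }
        return keys[count - 1].value;
    }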

Not patching the model converter

When exporting Maya binaries to FBX, the vertex winding order for front faces is clockwise by default. Of course, our engine assumed the opposite, but thankfully someone came up with an allegedly simple way to turn around the vertex order in Maya prior to export. Only it wasn’t simple at all, and since I did not feel the pain of that contrived workaround myself, I did not push for the model converter to be updated to fix the winding order regardless of the output Maya produced. Which meant that our sole artist (Go Martin!) had to Z-scale every model with -1, rotate it by 180° about Y and then spill chicken blood over his keyboard and poke a goat in the eye before exporting it in Maya.
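
The fix in the converter really would have been trivial: for an indexed triangle list, flipping the winding is just swapping two indices per triangle. A sketch, assuming 32-bit indices and triangle lists (which may or may not match what our converter emitted):

    #include <cstdint>
    #include <utility>
    #include <vector>

    // Swap the second and third index of every triangle to reverse winding.
    void flip_winding(std::vector<std::uint32_t>& indices)
    {
        for (std::size_t i = 0; i + 2 < indices.size(); i += 3)
            std::swap(indices[i + 1], indices[i + 2]);
    }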

Not considering automated testing enough

After reading Noel Llopis’ excellent rundown on unit testing frameworks, I wanted to try unit testing stuff using googletest, which was definitely a good decision. The framework is top notch and it all worked very well. Testing systems like the HeapAllocator, console buffer population and the AnimationPlayer component greatly increased my confidence in complicated code. But especially testing the console stuff showed how physical dependencies kill unit testing. I could eventually hack my way out of the dependency jungle with forward declarations and some mocking, but the point to take away was clear: even though unit testing is not a silver bullet that makes systems better, designing systems with any kind of automated testing in mind helps keep interfaces clean and dependencies few. Something I definitely want to mind when I rewrite some of the systems in KoreTech.
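
As an example of the kind of test that pays off, here’s what a googletest case for an allocator could look like. The HeapAllocator interface shown is an assumption for illustration, not the real one from ColorIt:

    #include <gtest/gtest.h>
    #include "heap_allocator.h" // hypothetical header for the allocator under test

    TEST(HeapAllocator, ReturnsDistinctValidPointers)
    {
        HeapAllocator allocator(1024);

        void* a = allocator.allocate(64);
        void* b = allocator.allocate(64);

        ASSERT_NE(a, nullptr);
        ASSERT_NE(b, nullptr);
        EXPECT_NE(a, b);

        allocator.free(a);
        allocator.free(b);
    }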

Not solving seemingly hard problems up-front

Rigid separation of concerns as well as cutting down on physical dependencies was definitely a good thing. Storing components linearly in memory and passing integer IDs across system boundaries, as suggested by Niklas Frykholm, opens up lots of optimization opportunities. I’ve put everything in place that would allow me to use clever stuff like render queue sorting, but I never really got around to actually implementing the damn thing. My list of neat engine enhancements is filled to the brim because I kept delaying them for weeks with the excuse that the other guys on the project NEED FEATURES NAO, and I guess that was still valid. But at the end of the day, the lesson to be learned is that even though sacrifices have to be made, one should not shy away from taking on daunting tasks.
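
The component/ID idea itself fits in a few lines. Below is a minimal sketch of the “contiguous array of components, integer handles across the boundary” pattern, with made-up names and without the generation/free-list handling a real system needs:

    #include <cstdint>
    #include <vector>

    struct Transform
    {
        float position[3];
        float rotation[4]; // quaternion
    };

    class TransformSystem
    {
    public:
        // Hands out an opaque integer ID instead of a pointer.
        std::uint32_t create()
        {
            components_.push_back(Transform());
            return static_cast<std::uint32_t>(components_.size() - 1);
        }

        Transform& get(std::uint32_t id) { return components_[id]; }

    private:
        std::vector<Transform> components_; // contiguous, cache-friendly
    };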

