How Poltergus' goofy 2.3D Phonetic Face System Works

Louis and I have been hard at work experimenting with a lot of new stuff for Poltergus lately, and it's been really great! One of those little experiments that I wanted to share with you all is our pretty silly approach to facial animations.

Gus' headshot, supplied by his agent.

Poltergus is a fully 3D game, but we still want a lot of it to feel like a cartoon. For that, Louis decided that we should make all of the characters' facial features (eyes, eyebrows, and mouth) two dimensional. The resulting combination is pretty much exactly as unique as we were hoping for. I like to call it 2.3D.

Not only is it a neat artistic direction, but it also solves the issue of doing complex 3D facial rigging for character speech. It lets us keep a lot of the model's geometry simple, since we don't need to worry about it deforming as the mouth opens and closes. Instead, we have a series of sprites that float a short distance in front of the model, and each of them can be swapped out to give the impression of facial animation. The idea is similar to the way that faces are done on Cartoon Network's Robot Chicken, where paper cut-outs of mouths are glued onto the faces of action figures and changed out every few frames.

Doing this manually for every possible animation in the game would be incredibly time-consuming. Instead, I decided to come up with a system that would take care of synchronizing the lip movement sprites to any line of dialogue without the need for any animation work at all. With that in mind, I created our new Phonetic Lip Sync system.

Demo of the Phonetic lip-sync system developed for Poltergus.

The core of this system is actually a text parser. It's the only part that requires some configuration, since it's built to be adaptable: you give it a series of mouth sprites and tell it which phonetic sounds each one is equivalent to. We then feed a line of text into this script, which breaks it down into the required mouth shapes and queues them up in order.
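To make the configure-then-queue idea concrete, here's a minimal sketch of that step in Python. The sprite names and phoneme groupings are made up for illustration; the post doesn't list the real ones, and the actual game code is presumably quite different.

```python
# Hypothetical configuration: each mouth sprite is registered against
# the phonetic sounds it stands in for. Names are illustrative only.
MOUTH_SPRITES = {
    "mouth_closed": ["m", "b", "p"],
    "mouth_open":   ["ah", "eh"],
    "mouth_round":  ["oo", "oh", "w"],
    "mouth_teeth":  ["s", "ee", "t"],
}

# Invert the table so a phoneme looks up its sprite directly.
PHONEME_TO_SPRITE = {
    phoneme: sprite
    for sprite, phonemes in MOUTH_SPRITES.items()
    for phoneme in phonemes
}

def queue_mouths(phonemes):
    """Turn an ordered list of phonetic sounds into a sprite queue."""
    return [PHONEME_TO_SPRITE[p] for p in phonemes]

print(queue_mouths(["eh", "oo"]))  # → ['mouth_open', 'mouth_round']
```

The game would then step through that queue, swapping the floating mouth sprite as the dialogue audio plays.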

Where this system gets really funny is how we write the text input so that it can be properly interpreted. I wanted something that would be easy to use (i.e., the text you input should be as close as possible to the actual line of dialogue), but would still let you be flexible with how long each mouth shape lasts. This means that you can account for different intonations and emotions on the same line of dialogue.

We write out the line as a series of phonetically spelt syllables separated by a break symbol (I use semicolons) that tells it how to split everything up. When it gets parsed, the system only pays attention to the first few characters of each segment to determine which mouth sprite to use, and then treats any extra letters as an indication of how long it should last. For example, in the video demo above, Gus holds a really long "oo" sound in his hello, which we write by padding the "oo" syllable out with extra letters.
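A rough sketch of that parsing rule, again in Python rather than the actual game code. The phoneme spellings, the base hold time, and the example line are all invented for illustration; only the semicolon breaks and the "extra letters stretch the duration" rule come from the post.

```python
# Illustrative subset of recognized phoneme spellings, longest first so
# the prefix match prefers "oo" over a shorter single-letter sound.
KNOWN_PHONEMES = ["heh", "oo", "l"]

BASE_FRAMES = 4  # hypothetical base hold time for each mouth shape

def parse_line(line):
    """Split a phonetically spelt line into (phoneme, frames) cues."""
    cues = []
    for segment in line.split(";"):
        # The first few characters pick the mouth sprite...
        phoneme = next(p for p in KNOWN_PHONEMES if segment.startswith(p))
        # ...and every extra letter stretches how long it's held.
        extra = len(segment) - len(phoneme)
        cues.append((phoneme, BASE_FRAMES + extra))
    return cues

print(parse_line("heh;l;oooooo"))
# → [('heh', 4), ('l', 4), ('oo', 8)]
```

Here the drawn-out "oooooo" resolves to the same "oo" mouth sprite as a plain "oo" would, just held on screen twice as long.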

It's an easy-to-understand method that, in practice, is very quick to tweak the timing on. It's basically a combination of the way people write in text messengers with the phonetic spelling of Middle English texts that makes you sound drunk while reading them.

Anyways, that's all for now! We'll have more stuff to show in the coming weeks as we get closer to launching our Kickstarter campaign!