Skin Shading in Unity3D

I've been away from AltDev for a while, a bit longer than I originally expected: after a period of crunch before the IGF submission in late October, I went back to the comforts of a normal night's sleep, something that working a normal game development job this past summer spoiled me into enjoying. Now the Christmas holiday has given me not only the time to finally begin blogging again, but also the chance to do a little side project on something I greatly enjoy, the rendering of human skin, which was also an excuse to give the latest Unity3D 3.5 beta a whirl:

Motivation

This has been entirely a hobby project for me, because I don't foresee applying it to any project I'm currently involved with. However, I've realized that Unity is rapidly approaching the point where someone may want a more sophisticated skin shading model than basic wrap lighting (an example of which is provided in the documentation). With that in mind, I thought it might be of interest to pull together some of the better resources I've found on the topic, along with some of my more useful code, in the hope that it can serve as a jumping-off point for a more serious endeavor.

The issue at the core of skin shading is subsurface scattering: light enters the skin, bounces around underneath the surface, and re-exits elsewhere. Simply using a Lambert model produces very harsh edges, because it is that scattering which gives skin its softer appearance. Here's what the built-in "Bumped Diffuse" shader looks like, the baseline I'm trying to improve upon:

Background: Texture Space Diffusion

No discussion of skin rendering starts without mentioning the NVIDIA Human Head demo, which has a detailed description of its implementation in GPU Gems 3. The technique relies on Texture Space Diffusion (TSD): the lighting for the mesh is rendered into a texture and then blurred by varying amounts (based on how light scatters inside human skin). The results of those blurs are combined to form the actual diffuse lighting term. I actually played around with this technique in Unity around Christmas last year, but it proved difficult given the nature of TSD and the fact that Unity at the time did not easily support niceties like linear-space lighting.

There have been some very useful resources on skin shading since the publication of GPU Gems 3. There are excellent SIGGRAPH slides from John Hable detailing what Naughty Dog did in Uncharted 2 to achieve cheaper calculations than the NVIDIA technique. There is also a technique by Jorge Jimenez, detailed in GPU Pro, that performs the subsurface scattering calculations in screen space, which removes TSD's per-mesh cost (a serious limitation for its use in an actual application). Screen-space scattering seems to have garnered some adoption in game engines, but I understand it is still a reasonably expensive technique (note: I'm less knowledgeable about Screen Space Subsurface Scattering, so take that comment with a grain of salt).

As for tools for doing skin rendering on your own time, a good-quality head scan has entered the public domain courtesy of Lee Perry-Smith and Infinite Realities. Unity3D has also become much friendlier to high-quality rendering: there is now a public beta of Unity 3.5, which features linear-space lighting and HDR rendering conveniently built into the engine.

The Latest Hotness: Pre-Integrated Skin Shading (PISS)

I've been impressed with the quality of the articles in GPU Pro 2 since I picked it up this past summer, one of which is Eric Penner's article detailing his technique, "Pre-Integrated Skin Shading." He also gave a talk describing it at SIGGRAPH, and the slides are available online. There are three main parts to it: scattering due to surface curvature, scattering across small details (i.e. the bump map), and scattering into the shadow falloff. I implemented the first two; no shadows yet for me, but I'll do a follow-up post if I get around to it.

The basic motivation is to pre-calculate the diffuse falloff into a lookup texture. The key to making the effect look good is that instead of a 1-dimensional texture indexed by NdotL, it is a 2D texture that encompasses different falloffs for different thicknesses of geometry. This allows the falloff at the nose to differ from that at the forehead.
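
To give a feel for how the lookup is consumed, here's a rough fragment-shader sketch. The names (_LookupDiffuse, normal, lightDir, curvature) are mine rather than Penner's, and the curvature term is computed with the derivative trick shown a bit further down in this post:

// Map NdotL from [-1, 1] into [0, 1] for the x axis of the lookup,
// and use the curvature estimate (standing in for 1/d) for the y axis.
float ndl = dot(normal, lightDir);
vec3 scatteredDiffuse = texture2D(_LookupDiffuse, vec2(ndl * 0.5 + 0.5, curvature)).rgb;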

I've adapted the sample code for precomputing the lookup texture into a little editor wizard for Unity. Simply make a folder titled "Editor" in your project's Assets and drop the script in, and the editor gains a new menu item ("GameObject/Generate Lookup Textures"). I've included the full script at the bottom of this post; it generates both the Beckmann texture used for specular later on and a falloff texture that appears to be about the same as the one Penner shows in GPU Pro 2. Here's what my lookup texture looked like when all was said and done; it is meant to be sampled with 1/d in y and NdotL in x:

An important note: if you activate linear rendering in Unity 3.5, know that the texture importer has an option to sample a texture in linear space. (GPU Gems 3 has an entire article about the importance of being linear.) After I turned linear rendering on in Unity, I noticed that the lookup texture had the effect of making the head look like it was still in gamma space, and then realized I needed to flip on that importer option. In fact, you may notice that the lookup texture in GPU Pro 2 looks different from the one in Penner's SIGGRAPH slides; that's because the one in the slides is in linear space. The one in the book is not, which is also the case with the one pictured above.

The parameter 1/d is approximated with a curvature calculation based on similar triangles and screen-space derivatives (there's a great illustration of this in the slides). The GLSL snippet from my shader is below, followed by the curvature output visualized on my model (easily tuned with a uniform parameter in the shader that adjusts the calculation):

// Approximate curvature (standing in for 1/d) from how quickly the world-space
// normal changes relative to the world-space position (Penner's derivative trick).
// localSurface2World[2] is the surface normal in world space; _TuneCurvature is a
// uniform for tuning the result into a useful range.
float curvature = clamp(length(fwidth(localSurface2World[2])), 0.0, 1.0) / (length(fwidth(position)) * _TuneCurvature);

This curvature calculation is actually a serious problem for implementing the technique in Unity, because ddx and ddy (fwidth(x) is just abs(ddx(x)) + abs(ddy(x))) are not supported by ARB, which means Unity can't use the shader in OpenGL. Those of you familiar with Unity will know that Cg is its shading language of choice, which is then compiled for each platform. This is a thorny issue that has stumped me in the past, especially because I do many of my side projects in OS X on my laptop. Tim Cooper (@stramit) came to my rescue with a solution that works decently: write an OpenGL-only shader directly in GLSL, which supports fwidth with no problem. This is a *little* painful, especially because Unity seems to have dropped the GLSL version of AutoLight.cginc (my 3.4 build has AutoLight.glslinc, but unfortunately 3.5 beta 5 does not), which makes things like using the built-in light attenuation methods much more of a headache. In fact, the reason my sample images use a directional light is that I don't currently have point light falloffs that match Unity's *exactly*. That said, it hasn't been problematic enough to make me abandon OpenGL support and move to a Windows computer. Furthermore, there's a nice wiki that served as a good jumping-off point for writing GLSL in Unity.
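
In case you haven't written GLSL directly in Unity before, the skeleton of such a shader looks roughly like this. This is a bare-bones sketch (the shader name and contents are placeholders); the wiki I mentioned covers the real details:

Shader "Custom/SkinGLSLSketch" {
    SubShader {
        Pass {
            GLSLPROGRAM
            // Unity compiles this block twice; VERTEX and FRAGMENT tell you which stage you're in.
            #ifdef VERTEX
            void main()
            {
                gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
            }
            #endif

            #ifdef FRAGMENT
            void main()
            {
                // fwidth and friends are available here, unlike the ARB path.
                gl_FragColor = vec4(1.0);
            }
            #endif
            ENDGLSL
        }
    }
}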

(NOTE: Aras from Unity posted a suggestion down in the comments that might make my choice to use GLSL directly unnecessary. I will update this post properly after I try it out myself.)

Softening Finer Details

As I mentioned, I'm sharing my experience with trying out two of the three techniques detailed by Penner. The second is the smoothing of small, detailed bumps. While the pre-integrated texture accounts for scattering at a broader scale, the fine details contained in the normal map are still much too harsh. Here's what my model looks like with just the pre-integrated scattering:

"Too Crispy" is my favorite way of describing the problem. Penner addresses this by proposing blending between a smoother normal map and the high detail normal map. The high detail is still used for specular calculations, but for the diffuse, you pretend that red, green, and blue all come from separate normals. This is very similar to what Hable details as the technique used in normal gameplay in Uncharted 2 in his Siggraph 2010 slides. By treating them separately the red channel can be made to be softer, Penner advises using the profile data from GPU Gems 3 to try to match real life.

In order to avoid the additional memory cost of two normal maps, Penner uses a second sampler for the normal map that is clamped to a lower mip level (as opposed to using the surface normal, as Uncharted 2 does). However, Unity does not allow easy access to sampler states as far as I know, so the only way to set up a sampler like that would be to actually duplicate the texture asset, which won't save any memory. My solution was instead to apply a LOD bias to the texture lookup (an optional third parameter in tex2D/texture2D, in case you're not familiar).
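
A rough sketch of how that fits together in the fragment shader follows. The blend weights here are illustrative rather than the profile-derived values Penner suggests, and _BumpMap, uv, and lightDir are assumed to already exist and be in matching spaces:

// Detail normal for specular, plus a blurrier copy of the same map via mip bias.
vec3 detailNormal  = normalize(texture2D(_BumpMap, uv).xyz * 2.0 - 1.0);
vec3 blurredNormal = normalize(texture2D(_BumpMap, uv, 3.0).xyz * 2.0 - 1.0); // third argument is the mip bias

// Red scatters the furthest, so its normal leans hardest on the blurred version.
vec3 normalR = normalize(mix(detailNormal, blurredNormal, 0.9));
vec3 normalG = normalize(mix(detailNormal, blurredNormal, 0.6));
vec3 normalB = normalize(mix(detailNormal, blurredNormal, 0.3));

// Per-channel NdotL, which then feeds the pre-integrated lookup per channel.
vec3 ndlRGB = vec3(dot(normalR, lightDir), dot(normalG, lightDir), dot(normalB, lightDir));

Here's what the same shot from above looks like with the blended normals applied using a mip bias of 3.0: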

Specular

While many articles on skin shading focus almost entirely on the scattering of diffuse light, the GPU Gems 3 article provides a good treatment of a specular model that improves on Blinn-Phong. The authors chose the Kelemen/Szirmay-Kalos specular BRDF, which relies on a precomputed Beckmann lookup texture and accounts for Fresnel with the Schlick approximation. Since I had played around with that model a year ago while experimenting with TSD in Unity, I simply rolled that code into Penner's work.
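
For completeness, here's roughly what that looks like in GLSL, following the GPU Gems 3 formulation. The 0.028 value is the Fresnel reflectance at normal incidence from that chapter; _BeckmannTex is my name for the lookup texture the wizard below generates, and rho_s just scales the specular intensity:

uniform sampler2D _BeckmannTex;

float fresnelReflectance(vec3 H, vec3 V, float F0)
{
    // Schlick's approximation
    float base = 1.0 - dot(V, H);
    float exponential = pow(base, 5.0);
    return exponential + F0 * (1.0 - exponential);
}

float kelemenSzirmayKalosSpecular(vec3 N, vec3 L, vec3 V, float roughness, float rho_s)
{
    float result = 0.0;
    float ndotl = dot(N, L);
    if (ndotl > 0.0)
    {
        vec3 h = L + V; // unnormalized half-way vector
        vec3 H = normalize(h);
        float ndoth = dot(N, H);
        // The wizard stores 0.5 * PH^0.1, so undo that encoding here.
        float PH = pow(2.0 * texture2D(_BeckmannTex, vec2(ndoth, roughness)).r, 10.0);
        float F = fresnelReflectance(H, V, 0.028);
        float frSpec = max(PH * F / dot(h, h), 0.0);
        result = ndotl * rho_s * frSpec;
    }
    return result;
}

Here's the resulting specular: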

A shot before applying specular:

And finally the specular and diffuse combined together:

Concluding Thoughts / Future Work

This has been a fun little side project for me, and I think it turned out decently well. I've made and fixed a few stupid mistakes along the way, and I wonder if I'll find a few more yet, so don't be afraid to point out anything terrible if you spot it. My two main remaining goals are tighter integration with Unity (proper point/spot light calculations that match Unity's Cg shaders) and shadow support. When it comes to shadows, I'm at a bit of a crossroads between trying to hook into Unity's main shadow support and rolling some form of my own. I suspect rolling my own would be less useful in the effort to make the skin shader completely seamless with Unity, but it might be less of a headache to accomplish. Either way, if I make any drastic improvements, I promise I'll do a follow-up post :)

Finally, if you're interested in Dust, the IGF project I mentioned as the root cause of my hiatus from AltDev, you can check it out on its website at www.adventureclubgames.com/dust/, or watch the trailer below if clicking a link is too much effort:

[youtube http://www.youtube.com/watch?v=TJ5dLSh8PC0?rel=0&w=560&h=315]

Source Code

As promised, here is the source for the Unity editor script that generates my lookup textures:

// GenerateLookupTexturesWizard.cs
// Place this script in Editor folder
// Generated textures are placed inside of the Editor folder as well
using UnityEditor;
using UnityEngine;

using System.IO;

class GenerateLookupTexturesWizard : ScriptableWizard {

    public int width = 512;
    public int height = 512;

    public bool generateBeckmann = true;
    public bool generateDiffuseScattering = true;

    [MenuItem ("GameObject/Generate Lookup Textures")]
    static void CreateWizard () {
        ScriptableWizard.DisplayWizard<GenerateLookupTexturesWizard>("PreIntegrate Lookup Textures", "Create");
    }

    float PHBeckmann(float ndoth, float m)
    {
        float alpha = Mathf.Acos(ndoth);
        float ta = Mathf.Tan(alpha);
        float val = 1f/(m*m*Mathf.Pow(ndoth,4f)) * Mathf.Exp(-(ta * ta) / (m * m));
        return val;
    }

    Vector3 IntegrateDiffuseScatteringOnRing(float cosTheta, float skinRadius)
    {
        // Angle from lighting direction
        float theta = Mathf.Acos(cosTheta);
        Vector3 totalWeights = Vector3.zero;
        Vector3 totalLight = Vector3.zero;

        float a = -(Mathf.PI/2.0f);

        const float inc = 0.05f;

        while (a <= (Mathf.PI/2.0f))
        {
            float sampleAngle = theta + a;
            float diffuse = Mathf.Clamp01( Mathf.Cos(sampleAngle) );

            // Distance
            float sampleDist = Mathf.Abs( 2.0f * skinRadius * Mathf.Sin(a * 0.5f) );

            // Profile Weight
            Vector3 weights = Scatter(sampleDist);

            totalWeights += weights;
            totalLight += diffuse * weights;
            a += inc;
        }

        Vector3 result = new Vector3(totalLight.x / totalWeights.x, totalLight.y / totalWeights.y, totalLight.z / totalWeights.z);
        return result;
    }

    float Gaussian (float v, float r)
    {
        return 1.0f / Mathf.Sqrt(2.0f * Mathf.PI * v) * Mathf.Exp(-(r * r) / (2 * v));
    }

    Vector3 Scatter (float r)
    {
        // Values from GPU Gems 3 "Advanced Skin Rendering"
        // Originally taken from real life samples
        return Gaussian(0.0064f * 1.414f, r) * new Vector3(0.233f, 0.455f, 0.649f)
             + Gaussian(0.0484f * 1.414f, r) * new Vector3(0.100f, 0.336f, 0.344f)
             + Gaussian(0.1870f * 1.414f, r) * new Vector3(0.118f, 0.198f, 0.000f)
             + Gaussian(0.5670f * 1.414f, r) * new Vector3(0.113f, 0.007f, 0.007f)
             + Gaussian(1.9900f * 1.414f, r) * new Vector3(0.358f, 0.004f, 0.00001f)
             + Gaussian(7.4100f * 1.414f, r) * new Vector3(0.078f, 0.00001f, 0.00001f);
    }

    void OnWizardCreate () {
        // Beckmann Texture for specular
        if (generateBeckmann)
        {
            Texture2D beckmann = new Texture2D(width, height, TextureFormat.ARGB32, false);
            for (int j = 0; j < height; ++j)
            {
                for (int i = 0; i < width; ++i)
                {
                    float val = 0.5f * Mathf.Pow(PHBeckmann(i/(float) width, j/(float)height), 0.1f);
                    beckmann.SetPixel(i, j, new Color(val,val,val,val));
                }
            }
            beckmann.Apply();

            byte[] bytes = beckmann.EncodeToPNG();
            DestroyImmediate(beckmann);
            File.WriteAllBytes(Application.dataPath + "/Editor/BeckmannTexture.png", bytes);
        }

        // Diffuse Scattering
        if (generateDiffuseScattering)
        {
            Texture2D diffuseScattering = new Texture2D(width, height, TextureFormat.ARGB32, false);
            for (int j = 0; j < height; ++j)
            {
                for (int i = 0; i < width; ++i)
                {
                    // Lookup by:
                    // x: NDotL
                    // y: 1 / r
                    float y = 2.0f * 1f / ((j + 1) / (float) height);
                    Vector3 val = IntegrateDiffuseScatteringOnRing(Mathf.Lerp(-1f, 1f, i/(float) width), y);
                    diffuseScattering.SetPixel(i, j, new Color(val.x,val.y,val.z,1f));
                }
            }
            diffuseScattering.Apply();

            byte[] bytes = diffuseScattering.EncodeToPNG();
            DestroyImmediate(diffuseScattering);
            File.WriteAllBytes(Application.dataPath + "/Editor/DiffuseScatteringOnRing.png", bytes);
        }
    }

    void OnWizardUpdate () {
        helpString = "Press Create to calculate texture. Saved to editor folder";
    }
}


I haven't included the actual shader because it's still a little messy and currently only supports the OpenGL path. The numerous resources I've mentioned should point you in the right direction, though.

3 Years Later: Year Three

This post draws my little series to a close: how I went from not knowing the first thing about game development to being a programmer who eats rendering code for breakfast (as all good graphics programmers should). Here are links to my posts about Year 1 and Year 2.

Olympus

After finishing out my second year of college, I spent my Summer working on what would become the longest running project I've ever been a part of. Olympus is a project from the MSU Games for Entertainment and Learning Lab developed to study the effectiveness of exercise in motion games in the context of a game intended primarily for entertainment.

The design was that of an action adventure game set in Greek mythology. It not only required us to support toggling everything from dialog options to whether the game was played with a 360 Controller or a Wii-mote/Dance pad combination for various research groups, but it also demanded that we deal with the problems of maintaining a codebase over the course of a long-term project.

The lessons here were invaluable. There were only ever two main programmers on Olympus at a time, with Shawn Henry Adams and myself moving into those roles as the original two graduated. It very quickly became apparent to us that the code base was riddled with quick fixes, a lack of proper tools, and poor software engineering. In other words, it was made by students, and not bad students at that. The fact that students rarely have to revisit class projects later on is something I think should be fully appreciated when planning a student-based project.

I'm not sure when students are supposed to learn the dangers of ignoring good software design, but I certainly learned them from Olympus. It was also the most fun I ever had making a game for the GEL Lab, working with my coworkers to get through the challenges of developing hours of gameplay and adapting existing, and often messy, systems to new designs. So, as a student, if you have the chance to work on or take a class that involves a project lasting at least a semester, don't let it pass you by.

Becoming Nocturnal

When school actually started up, I was enrolled in two classes that changed me forever. One was the portfolio class for the game development track at MSU, and I threw myself into that class like there was no tomorrow. There was not an all-nighter that I wouldn't pull for the sake of my games, and looking back now I'm pretty sure I know why I had that mentality. The number one thing that I've ever heard about getting a job in game development is that a stellar portfolio is essential. Worthwhile? Yes. Although, as I lamented in my last post, a passing grade is not enough incentive to have people create amazing games. For me, feeling like I'm the only one working on a project is a lot more troubling than a difficult technical challenge. I'll always help and vouch for the kids that throw themselves at their work, because with a little direction, they'll always do great things. But in a portfolio class, it was always frustrating to deal with kids that had already let their enthusiasm slide to the point of no longer pursuing game development seriously.

I also took Yiying Tong's graduate-level graphics programming class that fall, and it was perhaps both the greatest and hardest class I've ever taken. There's a different lesson to be learned here than working hard to make a good game: it's worth going way over your head in a subject you enjoy. While a lot of it was beyond my comprehension at the time, I've come to learn that not being afraid of material you don't fully understand can help you out in the long run. Maybe the first time or the second time it doesn't make sense, but eventually it all comes around.

On My Way Out

The spring of my junior year brought my first serious attempt at hunting for a job in the industry beyond Michigan State University's game development research projects. It also brought the beginning of my writing for AltDev, a stint writing tutorials on shaders in Unity, and my first semester as a TA for the introductory game design course. This was a bit of a tipping point for me, and it draws my story to a close.

I never had a good gauge for whether or not I was on track to make games professionally after college. When I seriously started teaching others how to make games, and people actually seemed to be improving as a consequence of my effort, I began to realize I was probably doing alright. My hunt for a game development internship landed me at the mighty Iron Galaxy Studios, and maybe someday after what I was working on gets announced I'll talk a little about my experiences as an intern there (spoiler: it was great!), but it certainly reaffirmed my suspicions that I had gone from someone wanting to learn game dev to someone that could competently contribute to a substantial game.

A final word on my efforts to help others learn about game development: it's very rewarding. If you find yourself in a position to help others out, seriously consider it, whether it's teaching a class, writing a blog post, or giving a presentation at a local club or IGDA chapter meeting. I know the reason I enjoy spending so much of my time giving back to the game development program at MSU and the development community as a whole is that a younger version of myself would have loved to be on the receiving end of any of the information I've shared. Many people helped me along the way to understanding what I've learned and experienced; it's the least I can do to continue that effort now that I'm in their shoes as one of the more experienced people at MSU.

3 Years Later: Year Two

This is part two of my little series about what I've done over the past three years to go from a kid who had never made a 3D game to someone who lives and breathes game programming on several projects (commercial and otherwise). Here's a link to the first post in the series.

Getting Paid to make a Game

During the academic semesters of my first two years of college, I was paid by a scholarship to work for the Games for Entertainment and Learning Lab (an incentive to get first- and second-year students involved in research labs on campus). Partway through the summer leading up to my second year, though, I got pulled onto a project as a second programmer, and being paid outside the academic year from the lab's actual budget was exciting for me. However, the project went anything but smoothly (and had been a mess well before I was brought on).

I've noticed that game devs sometimes talk about getting jaded or disillusioned after getting into game development professionally. If there was ever a project that did it for me, it was this one: a serious game about power plant management, continually misdirected by a client that didn't understand game design or development. The project got extended into the fall semester as the development continued to be a mess. Eventually this culminated in the client asking us to add back as many as possible of the features the lab had tried to include to make the game more fun, finally realizing that the direction they had been steering it in was keeping it from being fun for the target audience.

For as unfortunate as that was, I think something really important came from it: the knowledge that programming, not game design, was what I loved no matter how badly a project was going. Level design in particular came hard for me; it just couldn't hold my interest like a programming challenge could. I think most people can find a lot of glamour in more than one discipline of game development, and I had thought that maybe game design was as enjoyable as coding. Clearly I was wrong, and it proved to me that even a bad project can have a lot of value, giving me a better sense of direction about what I wanted to do moving forward.

Greener Pastures

While the power management game was wrapping up, two much more promising projects were just starting. One was a competition hosted by Ford Credit between the game development clubs of Michigan State University and the University of Michigan (Ford is headquartered in Michigan, hence the local flavor of the competition). The goal was to make a serious/advergame to teach potential Ford customers about car financing. The target audience was people in their early 20s, so having college students make a Flash game seemed like a great way for Ford Credit to go about it, and it was probably a fun PR stunt at the same time.

While the content of the game might not sound amazing, Ford Credit was very hands off about the development, which was a breath of fresh air compared to the project I was coming off of. The real kicker though was the prize: all expenses covered for GDC with all-access passes. I threw myself into that project like there was no tomorrow, and it ended up being the first project I seriously crunched on as the team streaked towards our relatively aggressive deadline (I believe we had 3 or 4 months of development, and many of us had never done a Flash game before). It wasn't uncommon for me to be up at 2 am rolling in features and artwork that probably should have been cut for scope reasons, and even then I was fixing bugs right up to the deadline. And when I say "right up to," I mean that I did the submission build of the game on a laptop in the back of a van as we drove to present the game to the judges in Dearborn. You can check out the game here.

The result was that we won, and getting to go to sessions at GDC 2010 blew my mind. While I know many devs that have been in the industry go to GDC as much for the socializing as the sessions, as a student I'd say that the talks are infinitely valuable. Especially compared to the Career Pavilion, which I'd wager is what the majority of students attend GDC for. It was also then that I realized that the allure of rendering and engine code was ever so tantalizing, with John Hable's talk about HDR lighting in Uncharted 2 convincing me to drown myself in graphics programming in my spare time. This was a big jump for someone that thought they might still want to be a game designer less than a year earlier.

The lesson here? Student competitions are important, teaching more about deadlines and quality game development than any class could, because, to be honest, a student project is often only a fraction of someone's grade. The project for Ford Credit had that extra mile of polish that can only come from really wanting to make the best game possible. Getting in that extra stretch of polish and bug fixes before a class deadline lacks incentive, because it probably won't budge someone's grade unless the project is worth at least half of it. I'll be revisiting that theory of mine in my next article.

Enter: Olympus

I mentioned that there were two projects in the wake of the power plant management game. The second was a motion-controlled action adventure about Greek mythology. The purpose of the game was to study the effectiveness of aggressive motion controls in an entertainment game (as opposed to a game like Wii Fit, where exercise is the consumer's intent).

I started as what I would probably refer to as a "junior programmer" handling basic gameplay tasks while I was still heavily involved in the Ford project and my first class about game design and development. However, I would inherit the role of leading up the player and motion control code when the original programmer graduated, continuing into the summer. I'll pick up on that story in my next post about lessons from my third year of learning game dev.

3 Years Later: Year One

A while back, someone on Twitter (I can't find the original conversation), asked me if I would consider writing up how I learned to program video games. I'm going to split this into several articles (for better or for worse) based around each year, and then one wrapping everything up with a big dump of the tools, articles, and books I've found incredibly helpful along the way. I thought splitting them up would be good because I'm only *a little* busy with moving back to Michigan and getting up to full steam on a project for IGF.



I think very few people will disagree that it's a lot harder to get into the games industry than it used to be, given how many more people are interested in getting in these days. I've been happily interning these past months at Iron Galaxy Studios, where I do programming work on commercial games, so I'd say I at least got some important bits right. I think a lot of what I've done can manifest itself in a slightly different way for other prospective game developers, so hopefully these posts are helpful to them in some way.



Year One



When I started college, I knew I wanted to make video games for a living. However, like most college freshmen, I didn't know how to make video games, but I did know that getting into the games industry was no cakewalk. One of my scholarships paid a stipend in exchange for working 10 hours a week under a professor, essentially a way for the university to get underclassmen involved in research without putting strain on a lab's budget. Due to my interest in game development, I joined the MSU Games for Entertainment and Learning (GEL) Lab to work under Professor Brian Winn, but I suppose it was not the most opportune time to be a GEL Lab professorial assistant.



There was very little going on in the lab that year other than a small game design conference that we helped organize called Meaningful Play. However, beyond helping prepare for the conference, there was very little concrete game development work to hand me. Besides the lack of projects, I didn't know a whole lot about game development. I can still remember not having a good answer about what part of developing games I actually liked doing when I first started in the lab. All I knew was that I liked programming in general from the few classes I had in High School, although game design still seemed like "the cool thing" at that point, and I thought that I would probably enjoy design more if I was given the chance (spoiler alert: programming is actually way cooler, but this won't be discovered until year 2).



Contrary to what one might think, something very good came out of the lull in GEL Lab activity. My commitment to the lab was for two years, so Brian had me begin teaching myself Unity in the hope that I'd be able to use it for future projects, since the department had just adopted it into the curriculum as its 3D engine of choice. As a result I had a conscious reason to teach myself game development, putting in at least 10 hours a week toward a small 3D project. I started with the standard tutorials, which are only really helpful for learning the menu flow. As with any first 3D game, the learning curve still felt steep even though I was using a fairly user-friendly engine like Unity. However, if you get a jolt of excitement from getting a cube to move back and forth across the screen for the first time in your life, then you know that game programming might actually be your thing. The project evolved into a small game that I presented at the end of my spring semester.



It was an action-adventure game about a manatee. It was terrible, and my code base was even worse, but to this day I still love it (and amazingly its poster presentation won an award). What's important here? I did everything, even the art, and I committed to spend at least a minimum amount of time on it each week. I learned so much, and I didn't have things like fears of letting team members down, because I was the whole team.



Speaking of teams, I did get involved with Spartasoft, the student game development club, which was another important step toward being able to program a half-decent game. The club served a few primary purposes at that point, such as hosting a games party occasionally and getting alumni to come back and present to the club about their experiences in the games industry. However, the most important function for me was the 48 hour Game Jams that were hosted every few months. If you're not familiar with the concept of a game jam, we basically split into small teams on a Friday evening, a theme is announced, and then each team makes a game about that theme over the course of 48 hours. It often results in a lot of terrible games, but inevitably there's something new that's learned, new game ideas explored, and a lot of friendships built with game developers that you might not otherwise get to know. I cleared my schedule for these as a Freshman, and participated in every single one.



That's how I got connected with a couple of seniors, and I ended up putting in more than just 10 hours a week on one project in the spring, meeting with them in between game jams to polish some of our better ideas. The fact that upperclassmen like Bert, a programmer who now works with me at Iron Galaxy, and Marie, an amazing artist who is now a grad student at SCAD, wanted to work with a freshman was amazing. I had gotten past the hump of being able to contribute to a game at all: I could help make their games better, and because I was more than ready to step up to the task, I ended up learning a lot from them in return.



Conclusion



So what can be learned from my first year making games? First, don't be afraid to go it alone, and force yourself to spend at least a minimum amount of time each week working on your project. Second, game jams are great, especially if you don't know very many people to collaborate with. Between the manatee game and the game jams, I had worked on six different games by the time I finished my first year of college. How many games had I worked on for class? Zero, and Michigan State even has a game development curriculum! If you have the opportunity to work on *any* game when you're just starting out, even a game jam game, you'd better have a damn good reason if you pass on it. Failing any opportunities for collaboration, the only person keeping you from making your own game is yourself. Don't be the asshole that's keeping you from learning how to make video games. I got lucky that I didn't do that; it's easy to be lazy when you're an 18-year-old college freshman.

Backface Culling 101

This post is for artists, designers, and anyone else who has had backface culling shoddily explained to them. Perhaps I should have checked the Venn diagram of AltDev readership before writing about a rendering technique specifically with non-programmers in mind. I know I've gotten embarrassingly confused by forgetting what's important as recently as two months ago.

The "Simple" Answer is Misleading

Backface culling comes up all the time when people are first becoming acquainted with 3D game dev. All it takes is deleting half of a cylinder in Maya and dropping it into an easy-to-use engine like Unity; suddenly they're trying to figure out what got screwed up that's causing the inside of it to be invisible. Inevitably the answer given is that those faces are backface culled to avoid rendering the inside of 3D models. That's true to some extent, but in my opinion very misleading.

The other part of the common answer is that backface culling works by only drawing the side with outward-facing normals. This part is not actually right, and it leads to a lot of misunderstandings. I think this answer comes up because artists and designers are used to thinking of meshes in 3D space; rarely does anyone think about the process that turns a mesh into a 2D image unless they get into rendering programming.

The Triangles are not Drawn twice

First off, understand that you have a big mess of triangles that you are using to represent your piece of art. One of the simplest representations is to have all the vertices in one big long list, and then a big long list of triangles defined from those vertices. Your computer takes those triangles and transforms them into the 2D space that is displayed to you. Each triangle is defined that simply to expedite rendering, and it has no concept of "front" or "back" (well, it does, but I'm getting to that). The vertices may have normal information, but that's used for lighting, not culling. Consider a camera pointed at a half-cylinder that renders properly in the first case, with the following badly drawn diagram taken from above. The green lines are normals:

Now consider the *exact same* list of triangles with the only change being to flip the normals at each vertex. If you try this on your own, it is probably best done by modifying your vertex list in the rendering code, for reasons that will be explained shortly.

With backface culling turned on, you will still see the cylinder in each image, except the lighting will be flipped in the second one. And if you turn off backface culling, the same images will be rendered, with the same number of triangles, because there are no backfacing triangles in this example. Don't think that the GPU is rendering the back side and then going over it a second time for the front side when backface culling is turned off, unless you actually did specify two triangles in that big list with vertices in the exact same locations.

Assuming you understand that what I described is not what typically happens, let me now explain the point I'm making. When you flip the normals in a 3D modeling package, it's doing more than just changing the vertex information; it's also changing that triangle list. This is because the front and back face of a triangle are determined through the novel concept of winding order, which quite simply says that the front of a triangle is the side from which its vertices appear in counterclockwise order. Suppose you have a mesh with no normal data at all, just positions in 3D space for the vertices. You can still use backface culling just fine as long as your triangle list is specified in the proper order, a point that I think is often missed by designers and artists when trying to understand backface culling.
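
If it helps to see the idea concretely, here's a minimal sketch of the test the hardware effectively performs once a triangle's vertices have been projected to 2D. You never write this yourself; it's just to show that vertex normals don't enter into it:

// p0, p1, p2 are the triangle's vertices after projection to 2D screen space.
// The sign of the 2D cross product of two edges gives the winding order.
float signedArea(vec2 p0, vec2 p1, vec2 p2)
{
    return 0.5 * ((p1.x - p0.x) * (p2.y - p0.y) - (p2.x - p0.x) * (p1.y - p0.y));
}

// In a y-up coordinate system, a positive area means the vertices are
// counterclockwise (front-facing by default); with culling enabled, the
// clockwise case is simply thrown away before any shading happens.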

If you want proof, here's the OpenGL call that specifies which winding order counts as the front face:

glFrontFace(GLenum mode);

where mode is either GL_CW or GL_CCW, standing for clockwise and counterclockwise. Notice that this has nothing to do with vertex normals!

And here's a diagram to illustrate which side of a triangle is the front by default (you could declare clockwise winding as front-facing if you really wanted to, but the default is counterclockwise). If the points defining your triangle are listed in an order that would make the diagram on the left true after the transform into screen space, then that triangle is front facing:

So what are we culling again?

We're culling triangles facing away from the camera, which on a closed mesh are going to be obstructed by the forward-facing polygons anyway. The triangles that would be drawing the inside of that cylinder are just going to be covered up by the front-facing triangles that you would actually see. With the winding order in hand, the triangle's facing can be determined and checked against the camera. I made this little diagram (inspired by the great explanation of backface culling in Real-Time Rendering); the triangles on the back side of the full cylinder are facing away from the camera, so they are culled (indicated by the dotted part).

So on a closed mesh like a full cylinder, you can avoid doing the work of rendering all of those triangles that you already know are going to be obscured. This is why disabling backface culling is typically not the correct answer when triangles you didn't intend to lose are being culled, and it's also bad because it usually means those backfacing triangles are being shaded incorrectly with backwards normals.

If you do intentionally cut that cylinder in half, the easy fix is to add front-facing triangles along the inside of the mesh. I believe there is functionality these days (DX10, maybe?) to find out which side of a triangle is being rasterized; theoretically you could have the shader flip the normals based on that information to keep the lighting correct, but if you just read a post on the basics of backface culling, I bet that's not what you're looking to do.

Because 3D modeling programs automatically adjust the winding order for you based on which direction your normals face, it's easy to mistakenly think of backface-culled triangles as triangles with incorrect normal information, when really it's the winding order that determines it. This is why you can sometimes end up in weird situations while using a 3D modeling package. I know that as a freshman in college, there were many models at game jams with stray triangles that artists just couldn't get to show up, and if they did, the lighting got all weird. Perhaps thinking about what's actually happening can help alleviate those pains.