Indie Speed Run: Scarlet

A little while back, I held down the programming end of things for my second year of participating in the Indie Speed Run 48-hour game jam, working with some incredibly talented former coworkers from Michigan State's GEL Lab. Back at the lab we had worked together on a motion-controlled game set in ancient Greece, which, in hindsight, is weirdly similar to the concept behind Crytek's Xbox One title Ryse.

Anyways, last year we were a close runner-up for Ron Gilbert's finalist nomination with our game Umbrella Party, a nifty and pretty silly little speech simulator. Here's a shot from that game:

This year, the theme we were randomly given for the game jam was "Secrecy," so naturally we made Scarlet, a game about keeping your spouse from finding out that you've been cheating on them. It was a bit stressful for me since this year our team had extra art support instead of extra programming, but it turned out pretty well in the end. We got some early recognition when the games went live on the Indie Speed Run website, and even more amazingly, Scarlet was Peter Molyneux's pick for a finalist. Here's a screenshot from the finished game:

Besides shamelessly plugging our games' successes on my blog, I wanted to point out a few of the features in Unity I used to save us a lot of time getting the look and feel right within 48 hours. Unfortunately, we didn't have time to get in one big gameplay feature: the ability to select your gender at the start and swap out all of the items you have to hide around the house accordingly. From there it would have been an easy jump to avoid gender norms entirely and let you select your own gender, your spouse's gender, and the gender of the person you're having the affair with, although I'm not sure that actually has any relevance to the core of the game. Maybe this is why I'm not a designer in my day job. Anyways, a few things went really well on the technical side.

First off, we made the fairly obvious decision to use a simple color palette with very hard edges on geometry (especially on characters) to create a polygonal look, which had the important benefit of saving art development time. It also ended up looking pretty good in the engine itself for a couple of reasons. First, I leveraged Unity's lightmapping system to get shading and shadows on static objects, even though all of our materials were just simple constant colors. This ended up being very similar to what the game Super Hot did recently, which uses lighting on mostly white levels coupled with red enemies and bullets. The items the player must hide in Scarlet are also red, though that has as much to do with red being the color of a torrid affair (and obviously tying in to the name we chose for the game), and we opted for an array of duller shades rather than Super Hot's stark white.

Getting this working is rather pain free: just make sure your level geometry is all set to static for lighting in Unity (there's a little editor helper for doing that in bulk below). I got the results seen in the final game after a few iterations on test bakes late on Saturday evening to dial in the settings, namely some fiddling with the number of bounces and the lightmap resolution. Getting the bounces right was important for having soft light fill the rooms from just a handful of point sources. There's one glaring bug with the static lighting you can see above: the ceiling lights are set not to cast shadows, and the light just gets blown out in the ceiling. With a little more time and careful placement those objects could probably have been made to play nicely, but it wouldn't have been worth the time during a game jam.
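If you want to flip that static flag on a whole hierarchy at once instead of clicking through the inspector, a tiny editor helper does the trick. To be clear, this is just a sketch of the idea rather than anything from the jam project; the menu path and the assumption that you have the level root selected are mine:

// MarkLightmapStatic.cs -- drop this in an "Editor" folder; marks everything under the
// selected object as lightmap static without clobbering any other static flags.
using UnityEditor;
using UnityEngine;

public static class MarkLightmapStatic
{
    [MenuItem("GameObject/Mark Selection Lightmap Static")]
    static void MarkSelection()
    {
        if (Selection.activeTransform == null)
            return;

        foreach (Transform t in Selection.activeTransform.GetComponentsInChildren<Transform>())
        {
            // Keep whatever flags are already set and add the lightmap one.
            StaticEditorFlags flags = GameObjectUtility.GetStaticEditorFlags(t.gameObject);
            GameObjectUtility.SetStaticEditorFlags(t.gameObject, flags | StaticEditorFlags.LightmapStatic);
        }
    }
}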

The second thing I did, after getting the lightmaps set up to bake (we would repeat the baking process a few more times the next day as the art was finalized), was find a way to handle lighting on the characters as they ran around, since it would be a little off-putting if the wife didn't get shadowed the same way light bounces around the scene. Unity's light probe system seems to be the correct way to handle this, and while I'm no stranger to Unity's lightmapping, this was actually my first adventure in setting up and baking probes. I would post a screenshot of my probe layout, but unfortunately my Unity install is currently non-existent since I wiped my computer and repartitioned my hard drive. It worked pretty well, though. My biggest gripe is that, to my knowledge, there's no great way to integrate dynamic shadows between the dynamic and static objects to bridge the gap in their lighting. I'd really like some sort of hacky shadow projection coming off of the wife as she runs around, cast from a fake directional light (possibly even rotating it based on her position relative to the main light in each room); you'll notice she has no shadow in the screenshot.
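For what it's worth, here's roughly the hack I had in mind, sketched as a little component. None of this is from the jam build; the field names and the idea of a projector or blob quad parented to the character are my own assumptions:

// FakeCharacterShadow.cs -- untested sketch: orient a cheap shadow caster away from the
// nearest "main" room light so the character picks up an approximate shadow.
using UnityEngine;

public class FakeCharacterShadow : MonoBehaviour
{
    public Transform shadowCaster;   // a Projector or blob quad parented under the character
    public Transform[] roomLights;   // the dominant point light in each room

    void LateUpdate()
    {
        if (shadowCaster == null || roomLights.Length == 0)
            return;

        // Find the room light closest to the character.
        Transform nearest = roomLights[0];
        float bestSqr = (nearest.position - transform.position).sqrMagnitude;
        foreach (Transform candidate in roomLights)
        {
            float sqr = (candidate.position - transform.position).sqrMagnitude;
            if (sqr < bestSqr) { bestSqr = sqr; nearest = candidate; }
        }

        // Tilt the projection direction down and away from that light, so the fake
        // shadow lands roughly where a real one would.
        Vector3 away = transform.position - nearest.position;
        away.y = 0f;
        Vector3 castDir = (Vector3.down * 2f + away.normalized).normalized;
        shadowCaster.rotation = Quaternion.LookRotation(castDir);
    }
}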

Finally, it's worth mentioning Mecanim and pathfinding working in tandem, two features of the engine I had never used before. It took a little digging to get my feet wet, but once I got it working, it worked shockingly well. The way she runs around the house is a bit goofy, but it was also quite the crash course for me. The best part was that baking the pathfinding graph basically just worked with our static geometry and collider placements, and getting the wife's AI to visit different important locations was very straightforward (a rough sketch of that kind of setup is below). It took a bit more effort to get her animating while moving around; I ended up abandoning an attempt at root motion, deciding that foot sliding was acceptable in order to get results working faster and move on. By far the most impressive part, though, was seeing Mecanim retarget the animations to our character rigs, with stunning results I really wish I had access to back when I was using Unity in the GEL Lab. I really wish we had gotten the gender select into Scarlet, since this would have made the character model swap work very smoothly. I'll definitely be looking to learn how to tune it just right in the future.
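To give a sense of how little code that setup takes, here's a stripped-down sketch of the kind of patrol behaviour I mean. The names (the "Speed" animator parameter, the points-of-interest array) are placeholders of mine, not the actual jam code, and in newer Unity versions NavMeshAgent lives in the UnityEngine.AI namespace:

// WifePatrol.cs -- rough sketch: a NavMeshAgent wanders between points of interest while
// feeding its speed into a Mecanim blend tree (no root motion, so some foot sliding).
using UnityEngine;

[RequireComponent(typeof(NavMeshAgent))]
[RequireComponent(typeof(Animator))]
public class WifePatrol : MonoBehaviour
{
    public Transform[] pointsOfInterest;  // rooms, closets, the front door, etc.

    NavMeshAgent agent;
    Animator animator;
    int current;

    void Start()
    {
        agent = GetComponent<NavMeshAgent>();
        animator = GetComponent<Animator>();
        if (pointsOfInterest.Length > 0)
            agent.SetDestination(pointsOfInterest[current].position);
    }

    void Update()
    {
        // Drive the locomotion blend tree directly off the agent's velocity.
        animator.SetFloat("Speed", agent.velocity.magnitude);

        // Once we've arrived, head to the next point of interest.
        if (pointsOfInterest.Length > 0 && !agent.pathPending &&
            agent.remainingDistance <= agent.stoppingDistance)
        {
            current = (current + 1) % pointsOfInterest.Length;
            agent.SetDestination(pointsOfInterest[current].position);
        }
    }
}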

Well, I guess this concludes my little micro post-mortem of Scarlet. Hopefully we'll end up doing another Indie Speed Run game next year!

Here are links to the web pages of my three compatriots from Scarlet:
Marie Lazar: http://www.pixelbutterfly.com/
Andrew Dennis: http://tc.msu.edu/users/andrew-dennis
Jordan Ajlouni: http://www.jordanajlouni.com/

And our pal Adam Rademacher was on Umbrella Party in Andrew's place the year before: http://www.whitefoxproduction.com/

New Home for The Blog 

Went to the coffee shop for my usual bout of Sunday afternoon caffeine and coding, and within a record time of 10 minutes needed to run on a dev kit that is not currently with me. At least I found my bug quickly.

Anyways, I've decided to instead take the opportunity to finally pull the trigger on abandoning ship from Blogger and launch "Jon Moore Blogs about Things 2.0." All the old posts will stay posted there, but I finally got Squarespace's blog import features playing nice with Google. My current plan is to do all future posts here and, you know, to actually blog semi-regularly again. If I can figure out how to easily double-post to both here and the old Blogger, I will, but my reason for jumping is that I can't handle how stuck in the past Blogger feels these days; I'd rather have the blog rolled directly into my personal site and use Squarespace's blogging tools, for better or worse.

In the year or so since I've posted anything, I think I've finally chilled out substantially about video game development being what I spend all my time on, and I'm thinking I'll be posting about music/comics/guitars/art to some extent as well. So for anyone who actually cared to pay attention to my posts, expect a dip in the signal-to-noise ratio compared to when I was posting regularly on AltDevBlogADay. That being said, I've had some interesting ideas and side projects in my year or so of absence from longer-form internet activity, and hopefully some of that will make for something good. I've been missing writing regularly for a little while now, so we'll see how this goes. I'm going to start my first real post shortly :)

Plus, there's no better way to upset someone's RSS feed than to just randomly change sites.

It Was Warm In Chicago Today

It's back to being unseasonably good weather in Chicago today, so I spent some time tonight skateboarding.


I've been skating a lot since graduating this past spring, and unfortunately I'm not nearly as good as I used to be (by comparison, my increased guitar practice has paid off nicely). There's a certain 20% of my tricks that I'm missing. My coworker Nate always says that the last 20% is usually the hardest part of any task.

Honestly, it's fear. There's a sort of commitment required when you set up a trick: a weightless moment where there's no going back, and then you come back down and snap the trick into the landing. I'm copping out by doing things like instinctively putting a foot on the ground in that moment. It's safe. It's lame. It ruins the trick.

That moment of insecurity is what doing tricks on a skateboard (or, more realistically for me, a snowboard, my preferred sport) is all about. The reason people enjoy progression is that the feeling is like a drug. It's the rush of flying through the air, or sliding across a 40-foot piece of metal, or spinning across the top of a box, and just enjoying the experience of doing it right. That's why fear is what ruins tricks: the fear that makes you fight that moment instead of committing.

Progression is natural in board sports because people just want to extend that moment: another rotation, a longer rail, a bigger jump. Competitiveness with other riders is only a small part of it.

In snowboarding there are fewer opportunities to bail without falling and feeling some pain; I think that's a blessing in disguise. The terrifying thing about programming is that it's just the opposite. There isn't always an easy route from A to B, but you can hack one together. Programming is open ended.

Sometimes you have to rely on a piece of paper and some matrix algebra to get you to the other side.

Sometimes hooking up a debugger won't be an option.

Sometimes you'll be up all night trying to figure out why the AI is jittering as it moves.

I think that's one reason there's this gap between general software engineering and game development that a lot of people have trouble crossing, especially when there's so much demand in the world for software engineers who only have to solve straightforward problems.

Don't put your foot on the ground. Do it right.

Summertime, Clustered Shading, and Blogging

It's summertime! Now is the time to be roasting hot dogs and marshmallows around a fire (I just did that myself this evening, back on my parents' farm)! I graduated with my bachelor's degree from Michigan State University a few weeks ago, which actually feels substantially weirder than graduating from high school did. Perhaps a big part of it is moving away from the great people I've worked with over the past four years, namely Brian Winn and the GEL Lab, as well as the crew over at Adventure Club Games. Adventure Club may not have left East Lansing after they graduated from the university a year ago, but that doesn't mean they haven't been doing a lot of great game dev projects (with more in the pipeline, to my understanding).


Speaking of East Lansing game dev people of interest, the awesome Dan Sosnowski has stepped up to the role of president of Spartasoft, the MSU game development club, and has recently started as my replacement as the Iron Galaxy Studios intern in Chicago for the summer. I'll be moving back to Chicago myself this coming Friday to start a full-time graphics engineering position at Iron Galaxy. Whoah. Growing up, I guess. I'm hoping to make it back to East Lansing for the Meaningful Play Conference coming up again this fall; if you are interested in games in a capacity beyond entertainment, such as art or education, consider attending, or submitting a talk, a paper, or a game to the showcase!

On the note of conferences, I've been tossing around the possibility of trying to make it to SIGGRAPH this fall in Los Angeles, especially since I do enjoy nerding out over computer graphics, and even more so after reading the pre-print of Clustered Shading from the HPG conference that has been going on recently. The idea is to take the tiled rendering used to optimize many deferred renderers and extend it. The "Forward+" renderer used in the AMD Leo demo got a lot of buzz at GDC this year for combining tiling with forward rendering by utilizing DX11 features. Clustered Shading improves on tiling by having it operate in 3D, instead of just slicing the screen into 2D tiles to cull lights. This is really interesting to me because binning in 3D much better matches the way a designer or artist spaces out a large number of lights in a scene, whereas a tiled renderer will see hot spots pop up due to particular camera angles where many lights align (a toy sketch of the 3D binning is below). I expect more flavors of core rendering pipeline to pop up if next-gen consoles support the more flexible possibilities of compute shaders in the minimum spec for multi-platform games. Hopefully I'll start playing around with more of my own DX11/CUDA-type stuff this summer, as I'm planning to put together a new computer that runs DX 11.1 code to keep myself relevant for the future.
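To make that concrete, here's a toy CPU-side sketch of how a sample gets assigned to a cluster: the usual 2D screen tile plus a logarithmic depth slice. The grid sizes and the log slicing are my own assumptions for illustration, not taken from the paper's implementation:

// ClusterIndex.cs -- toy illustration of 3D light binning (clusters) vs. 2D screen tiles.
using System;

static class ClusterIndex
{
    // Assumed cluster grid; a real renderer sizes this per resolution and depth range.
    const int TilesX = 16, TilesY = 16, Slices = 32;

    // u, v are the sample's normalized screen position in [0,1); viewDepth is positive view-space depth.
    public static int FromViewSample(float u, float v, float viewDepth, float near, float far)
    {
        int x = Math.Min((int)(u * TilesX), TilesX - 1);
        int y = Math.Min((int)(v * TilesY), TilesY - 1);

        // Logarithmic depth slicing keeps clusters roughly evenly proportioned in view space.
        float t = (float)(Math.Log(viewDepth / near) / Math.Log(far / near));
        int z = Math.Min(Math.Max((int)(t * Slices), 0), Slices - 1);

        // Lights get culled against these 3D cells instead of full-depth 2D tile frusta.
        return x + TilesX * (y + TilesY * z);
    }
}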

Obviously, I won't be pushing this meandering pile of thoughts up to AltDev, but I do hope to push some more technical articles out to both this blog and AltDev. I'm not sure I'll actually rejoin the two-week schedule, though, because I don't know whether my current side projects and learning will fit well into a two-week turnaround or be all that interesting. I'm also going to try to do more personal-ish posts on here, because I've realized that I've become a little more spread out across the nation/world from many friends and acquaintances, so maybe I should be making better use of this newfangled internet technology. Hopefully I didn't say anything too stupid in this post, because I don't entirely feel like going through the normal revision/proofreading process that helps me be a coherent writer.

Finally, in my free time between graduation and moving, I had an awesome experience hanging out with my awesome girlfriend Kelsey and some manatees at the Columbus Zoo. AREN'T MANATEES THE COOLEST?


(You should go hang out with them too, and then adopt a manatee from Save the Manatee Club)

Skin Shading in Unity3d

I've been away from AltDev for a while, a bit longer than I originally expected: after a period of crunch before the IGF submission in late October, I went back to the comforts of a normal night's sleep, something that working a normal game development job this past summer spoiled me into enjoying. Now the Christmas holiday has given me not only the time to finally begin blogging again, but the chance to do a little side project in something I greatly enjoy, the rendering of human skin, which was also an excuse to give the latest Unity3D 3.5 beta a whirl:

Motivation

This has entirely been a hobby project for me, because I do not foresee applying it to any project I'm involved with. However, I've realized that Unity is rapidly approaching the point where someone may want to use a more complicated skin shading model than basic wrap lighting (an example of which is provided in the documentation). With this in mind, I thought it might be of interest to do a post pulling together some of the better resources I've found on the topic, as well as some of my more useful code, so that it might serve as a jumping-off point for a more serious endeavor.

The one issue that is always at the core of skin shading is subsurface scattering, the effect of light bouncing around underneath the surface of the skin and re-exiting elsewhere. Simply using a Lambert model produces very harsh edges, because the scattering is what gives skin its softer appearance. Here's what the built-in "Bumped Diffuse" shader looks like, the baseline I'm trying to improve on:

Background: Texture Space Diffusion

No discussion of skin rendering starts with anything other than the NVIDIA Human Head demo, which has a detailed description of its implementation in GPU Gems 3. The technique relies on Texture Space Diffusion (TSD), where the lighting for the mesh is rendered into a texture and then blurred by various amounts (based on the scattering of light inside human skin). The results of those blurs are combined to form the actual diffuse lighting term. I actually played around with this technique in Unity around Christmas time last year, but it proved difficult given the nature of TSD and the fact that Unity did not, at the time, easily support some nice features like linear-space lighting.

There have been some very useful resources on skin shading since the publication of GPU Gems 3. There are some excellent Siggraph slides from John Hable detailing what Naughty Dog did in Uncharted 2 to get cheaper calculations than the NVIDIA techniques. There is also a technique by Jorge Jimenez, detailed in GPU Pro, that does the subsurface scattering calculations in screen space, which removes the per-mesh cost of TSD (a serious limitation for its use in an actual application). This seems to have garnered some adoption in game engines, but I understand it is still a reasonably expensive technique (note: I'm less knowledgeable about screen-space subsurface scattering, so take that comment with a grain of salt).

With regards to tools for doing skin rendering on your own time, a good quality head scan has entered the public domain courtesy of Lee Perry-Smith and Infinite Realities. Also, Unity3D is now much friendlier to high quality rendering: there is a public beta for Unity 3.5, which features linear-space lighting and HDR rendering conveniently built into the engine.

The Latest Hotness: Pre-Integrated Skin Shading (PISS)

I've been impressed with the quality of the articles in GPU Pro 2 since I picked it up this past summer, one of which is Eric Penner's article detailing his technique, "Pre-Integrated Skin Shading." He also gave a talk describing it at Siggraph, and the slides are available online. There are three main parts to it: scattering due to curvature, scattering on small details (i.e. the bump map), and scattering in the shadow falloff. I implemented the first two; no shadows yet for me, but I'll do a follow-up post if I get around to it.

The basic motivation is to pre-calculate the diffuse falloff into a lookup texture. The key to making the effect look good is that instead of a 1-dimensional texture indexed by NdotL, it is a 2D texture that encompasses different falloffs for different thicknesses of geometry. This allows the falloff at the nose to differ from that at the forehead.

I've adapted the sample code that precomputes the lookup texture into a little editor wizard for Unity. Simply make a folder titled "Editor" in Unity's assets and drop the script in, and the editor is extended with the function (it's added to the menu as "GameObject/Generate Lookup Textures"). I've included the full script at the bottom of this blog post; it computes both the Beckmann texture used later for specular and a falloff texture that appears to be about the same as the one Penner shows in GPU Pro 2. Here's what my lookup texture looked like when all was said and done; it is sampled with 1/d in y and NdotL in x:

An important note: if you activate linear rendering in Unity 3.5, know that the texture importer has an option to sample a texture in linear space. GPU Gems 3 has an entire article about the importance of being linear. After I turned linear rendering on in Unity, the lookup texture had the effect of making the head look like it was still in gamma space, and I then realized I needed to flip on that option in the importer. In fact, you may notice that the lookup texture in GPU Pro 2 looks different from the one in Penner's Siggraph slides; that's because the one in the slides is in linear space. The one in the book is not, which is also the case with the one pictured above.

The parameter 1/d is approximated with a curvature calculation based on similar triangles and screen-space derivatives (there's a great illustration of this in the slides). Here's the snippet of GLSL from my shader, along with the output of curvature visualized on my model (easily tuned with a uniform parameter in the shader that adjusts the calculation):

// Calculate curvature
float curvature = clamp(length(fwidth(localSurface2World[2])), 0.0, 1.0) / (length(fwidth(position)) * _TuneCurvature);
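// The curvature value then drives the v coordinate of the pre-integrated lookup; roughly
// (using placeholder names for the sampler and the N.L term, not my actual uniforms):
// vec3 scatteredDiffuse = texture2D(_LookupTex, vec2(ndotl * 0.5 + 0.5, curvature)).rgb;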

This curvature calculation is actually a serious problem for implementing the technique in Unity, because ddx and ddy (fwidth(x) is just abs(ddx(x)) + abs(ddy(x))) are not supported by ARB, which means Unity can't use the shader in OpenGL. Those of you familiar with Unity will know that Cg is the shading language of choice, and it is then compiled for each platform. It's a thorny issue that has stumped me in the past, especially because I do many of my side projects in OS X on my laptop. Tim Cooper (@stramit) came to my rescue with a solution that works decently: write an OpenGL-only shader directly in GLSL, which supports fwidth no problem. This is a *little* painful, especially because Unity seems to have dropped the GLSL version of AutoLight.cginc (my 3.4 build has AutoLight.glslinc, but unfortunately 3.5 beta 5 does not), which makes things like using the built-in light attenuation methods much more of a headache. In fact, the reason my sample images use a directional light is that I don't currently have point light falloffs matching *exactly* with Unity's. That said, it hasn't been problematic enough to make me abandon OpenGL support and move to a Windows computer. Furthermore, there's a nice wiki that served as a good jumping-off point for writing GLSL in Unity.

(NOTE: Aras from Unity posted a suggestion down in the comments that might make my choice to use GLSL directly unnecessary. I will properly update the post after I try it out myself.)

Softening Finer Details

As I mentioned, I'm sharing my experience with trying out two of the three techniques detailed by Penner. The second part is the smoothing of small detailed bumps. While the pre-integrated texture accounts for scattering at a broader scale, the fine details contained in the normal map are still much too harsh. Here's what my model looks like with just the pre-integrated scattering:

"Too Crispy" is my favorite way of describing the problem. Penner addresses this by proposing blending between a smoother normal map and the high detail normal map. The high detail is still used for specular calculations, but for the diffuse, you pretend that red, green, and blue all come from separate normals. This is very similar to what Hable details as the technique used in normal gameplay in Uncharted 2 in his Siggraph 2010 slides. By treating them separately the red channel can be made to be softer, Penner advises using the profile data from GPU Gems 3 to try to match real life.

In order to avoid the additional memory of having two normal maps, Penner uses a second sampler for the normal map that is clamped to a lower mip resolution (as opposed to using the surface normal like Uncharted 2). However, Unity does not allow easy access to samplers to my knowledge, so the only way to set up a sampler like that would be to actually duplicate the texture asset, which wouldn't save any memory. My solution was instead to apply a LOD bias to the texture lookup (an optional third parameter to tex2D/texture2D, in case you're not familiar). Here's what the same shot from above looks like with the blended normals applied using a mip bias of 3.0:

Specular

While many articles on skin shading focus almost entirely on the scattering of diffuse light, the GPU Gems 3 article also provides a good treatment of a specular model better than Blinn-Phong. The authors chose the Kelemen/Szirmay-Kalos specular BRDF, which relies on a precomputed Beckmann lookup texture and accounts for Fresnel with the Schlick approximation. Since I had played around with that model a year ago when experimenting with TSD in Unity, I simply rolled that code into Penner's work. Here's the resulting specular:

A shot before applying specular:

And finally the specular and diffuse combined together:

Concluding Thoughts / Future Work

This has been a fun little side project for me, and I think it turned out decently well. I've made and fixed a few stupid mistakes along the way, and I wonder if I'll find a few more yet; don't be afraid to point out anything terrible if you spot it. My two main remaining goals are tighter integration with Unity (proper point/spot calculations that match Unity's Cg shaders) and shadow support. When it comes to shadows, I'm at a bit of a crossroads between trying to hook into Unity's main shadow support and rolling some form of my own. I suspect rolling my own would be less useful in the effort to make the skin shader completely seamless with Unity, but might be less of a headache to accomplish. Either way, if I do make any drastic improvements, I promise I'll do a follow-up post :)

Finally, if you're interested in Dust, the IGF project I mentioned as the root cause of my hiatus from AltDev, you can check out the project on its website at www.adventureclubgames.com/dust/, or just watch the trailer if clicking a link is too much effort:

[youtube http://www.youtube.com/watch?v=TJ5dLSh8PC0?rel=0&w=560&h=315]

Source Code

As I promised, here is the source for my Unity script that generates my lookup textures:

// GenerateLookupTexturesWizard.cs
// Place this script in Editor folder
// Generated textures are placed inside of the Editor folder as well
using UnityEditor;
using UnityEngine;

using System.IO;

class GenerateLookupTexturesWizard : ScriptableWizard {

    public int width = 512;
    public int height = 512;

    public bool generateBeckmann = true;
    public bool generateDiffuseScattering = true;

    [MenuItem ("GameObject/Generate Lookup Textures")]
    static void CreateWizard () {
        ScriptableWizard.DisplayWizard<GenerateLookupTexturesWizard>("PreIntegrate Lookup Textures", "Create");
    }

    float PHBeckmann(float ndoth, float m)
    {
        float alpha = Mathf.Acos(ndoth);
        float ta = Mathf.Tan(alpha);
        float val = 1f/(m*m*Mathf.Pow(ndoth,4f)) * Mathf.Exp(-(ta * ta) / (m * m));
        return val;
    }

    Vector3 IntegrateDiffuseScatteringOnRing(float cosTheta, float skinRadius)
    {
        // Angle from lighting direction
        float theta = Mathf.Acos(cosTheta);
        Vector3 totalWeights = Vector3.zero;
        Vector3 totalLight = Vector3.zero;

        float a = -(Mathf.PI/2.0f);

        const float inc = 0.05f;

        while (a <= (Mathf.PI/2.0f))
        {
            float sampleAngle = theta + a;
            float diffuse = Mathf.Clamp01( Mathf.Cos(sampleAngle) );

            // Distance
            float sampleDist = Mathf.Abs( 2.0f * skinRadius * Mathf.Sin(a * 0.5f) );

            // Profile Weight
            Vector3 weights = Scatter(sampleDist);

            totalWeights += weights;
            totalLight += diffuse * weights;
            a += inc;
        }

        Vector3 result = new Vector3(totalLight.x / totalWeights.x, totalLight.y / totalWeights.y, totalLight.z / totalWeights.z);
        return result;
    }

    float Gaussian (float v, float r)
    {
        return 1.0f / Mathf.Sqrt(2.0f * Mathf.PI * v) * Mathf.Exp(-(r * r) / (2 * v));
    }

    Vector3 Scatter (float r)
    {
        // Values from GPU Gems 3 "Advanced Skin Rendering"
        // Originally taken from real life samples
        return Gaussian(0.0064f * 1.414f, r) * new Vector3(0.233f, 0.455f, 0.649f)
             + Gaussian(0.0484f * 1.414f, r) * new Vector3(0.100f, 0.336f, 0.344f)
             + Gaussian(0.1870f * 1.414f, r) * new Vector3(0.118f, 0.198f, 0.000f)
             + Gaussian(0.5670f * 1.414f, r) * new Vector3(0.113f, 0.007f, 0.007f)
             + Gaussian(1.9900f * 1.414f, r) * new Vector3(0.358f, 0.004f, 0.00001f)
             + Gaussian(7.4100f * 1.414f, r) * new Vector3(0.078f, 0.00001f, 0.00001f);
    }

    void OnWizardCreate () {
        // Beckmann Texture for specular
        if (generateBeckmann)
        {
            Texture2D beckmann = new Texture2D(width, height, TextureFormat.ARGB32, false);
            for (int j = 0; j < height; ++j)
            {
                for (int i = 0; i < width; ++i)
                {
                    float val = 0.5f * Mathf.Pow(PHBeckmann(i/(float) width, j/(float)height), 0.1f);
                    beckmann.SetPixel(i, j, new Color(val,val,val,val));
                }
            }
            beckmann.Apply();

            byte[] bytes = beckmann.EncodeToPNG();
            DestroyImmediate(beckmann);
            File.WriteAllBytes(Application.dataPath + "/Editor/BeckmannTexture.png", bytes);
        }

        // Diffuse Scattering
        if (generateDiffuseScattering)
        {
            Texture2D diffuseScattering = new Texture2D(width, height, TextureFormat.ARGB32, false);
            for (int j = 0; j < height; ++j)
            {
                for (int i = 0; i < width; ++i)
                {
                    // Lookup by:
                    // x: NDotL
                    // y: 1 / r
                    float y = 2.0f * 1f / ((j + 1) / (float) height);
                    Vector3 val = IntegrateDiffuseScatteringOnRing(Mathf.Lerp(-1f, 1f, i/(float) width), y);
                    diffuseScattering.SetPixel(i, j, new Color(val.x,val.y,val.z,1f));
                }
            }
            diffuseScattering.Apply();

            byte[] bytes = diffuseScattering.EncodeToPNG();
            DestroyImmediate(diffuseScattering);
            File.WriteAllBytes(Application.dataPath + "/Editor/DiffuseScatteringOnRing.png", bytes);
        }
    }

    void OnWizardUpdate () {
        helpString = "Press Create to calculate texture. Saved to editor folder";
    }
}


I haven't included the actual shader because it's a little messy still and currently only supports the OpenGL path. The numerous resources that I've mentioned should guide you in the right direction though.