We're continuing our vignette building in Unity. This time we're executing an idea with more shots, more effects, and most importantly, more hair. We've done a lot of artwork for this already and would like to share it with you.
A simple SSGI (Screen Space Global Illumination) effect was a surprisingly quick feature to add, as most of the legwork had already been done by Unity's SSAO (Screen Space Ambient Occlusion) Image Effect.
The aim was to add additional contextual information to the final render, by way of indirect bounce light (or more accurately - color bleeding) from nearby surfaces.
SSGI is OFF, SSAO is OFF.
We lack substantial spatial cues.
The process I used is identical to SSAO, only instead of merely measuring contributions to the occlusion term, I allowed the occluding surfaces to contribute local surface color as well.
Altering the SSAO shader, I stuffed the AO term into the _SSGI tex's .a channel, and the GI term into the .rgb channels, then blended the color bleeding signal with the original image, using a remapped ao signal.
half4 frag (v2f i) : COLOR
{
    half4 c = tex2D (_MainTex, i.uv);
    half ao = tex2D (_SSGI, i.uv).a;
    ao = pow (ao, _Params.w);           // remap the occlusion term
    half3 gi = tex2D (_SSGI, i.uv).rgb;
    c.rgb = lerp (c.rgb, gi, 1 - ao);   // bleed color where occluded
    return c;
}
What's nice about this hack is that all the parameters for AO are separable from and compatible with those for GI.
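As a side note, the blend itself is just a remapped lerp. Here it is restated in plain Python for illustration (scalar channels, made-up function name; this is not the shader itself):

```python
def blend_ssgi(base, gi, ao, power):
    """Blend the SSGI color-bleeding signal into the base color.

    base, gi: color channel values in [0, 1]
    ao:       occlusion term in [0, 1] (1 = fully unoccluded)
    power:    remap exponent (the _Params.w of the shader above)
    """
    ao = ao ** power                  # remap the occlusion term
    t = 1.0 - ao                      # occluded pixels take more bounce color
    return base + (gi - base) * t     # lerp(base, gi, t)

# A fully unoccluded pixel keeps its original color:
print(blend_ssgi(0.8, 0.2, 1.0, 2.0))  # 0.8
```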
SSGI is ON, SSAO is ON.
Indirect illumination and occlusion provide greater spatial context.
I activated my 30-day Unity Pro trial license a few days ago, so we're frantically trying to finish this forest sprite demo. While Lou is coding like mad, I'm going to share some of our models with you. We're using Maya 2013 to build and rig (where necessary) our assets, Photoshop and Mari to texture our character, and Unity to shade everything. In Unity we've actually created our own "uber shader" which almost all of our assets are currently using. For now, the modeling.
Here's our set from our main camera, with all of our assets referenced into a scene and dressed according to our artwork. The green bits are Maya joints for some vegetation that will have wind animation in Unity.
Here is the same set from a different camera to show our hackery. The beauty of working in context is that we can shape all of our dressing to our camera.
The log is detailed where it needs to be and sparse everywhere else.
This is one of the flowers that also has a skeleton bound to it.
This is our forest sprite character. We are using our cache exporter and importer to bring our animation into Unity.
We wanted some soft "texture" in our scene to make it more inviting and organic, so in envisioning the scene we decided we ought to add some cushy moss to the fallen log.
To reiterate - we're using Unity on a Mac - so we don't have access to DirectX 11's tessellation support to grow isoline fur or anything like that. The tried and true method in games prior to that technology has been to implement a shell/fin method, but we wanted to take that idea a step further.
A simple 3 pass shell extrusion shader.
In order to get decent-quality fur using the method mentioned above, you need a significant number of shells, each serving as a cross section of the fur volume (and in Unity each would require a separate draw call per sub-shader, since we'd prefer to do all the extrusion in the vertex program). This hurts performance, requires a lot of redundant code, and makes for a long compilation cycle.
I want to keep the draw call count low per model, so instead I chose to integrate the effects of multiple cross-section layers in only one layer. This essentially turns into a ray casting problem.
Rather than spend a bunch of shader instructions on the ray casting itself, I hijacked another popular shading concept called parallax mapping.
Essentially, naive parallax mapping allows a texture to seem "deeper", inset into the surface being shaded, as a function of a height map. If you replace the height map value with a constant, you can set the entire texture a certain "depth" into the material. Do this enough times with linearly varying depths, and integrate things such as opacity, surface normal, root-tip color, etc. in a loop, and you get a holographic, ray-marched-esque rendering of the fur volume.
for each iteration:
    shift the uv by ParallaxOffset(0, depth * iteration / numIters, viewDirection)
    look up the cross section: tex2D(tex, shiftedUV)
    integrate the tinted texture over the length with root & tip color
    integrate alpha over the length with root & tip opacity
    integrate the normal from tex2D(NormalMap, shiftedUV)
With 30 iterations and 1 draw call you can achieve the look of 30 shells.
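To make the integration concrete, here's a CPU-side sketch of the same idea in Python: a hypothetical 1D "fur volume" with front-to-back compositing, standing in for the real Cg loop (all names and constants below are illustrative, not the actual shader):

```python
def march_shells(sample_opacity, sample_color, num_iters=30, depth=0.1,
                 view_offset=0.05, root_color=0.2, tip_color=1.0):
    """Integrate num_iters virtual shells in a single pass.

    sample_opacity(u), sample_color(u): the cross-section texture,
    modeled here as functions of a 1D uv coordinate.
    view_offset stands in for ParallaxOffset's view-dependent shift.
    """
    out_color, out_alpha = 0.0, 0.0
    for j in range(num_iters):
        t = j / (num_iters - 1)             # 0 at the tip layer, 1 at the root
        u = t * depth * view_offset         # parallax-shifted lookup coordinate
        a = sample_opacity(u) / num_iters   # thin slice of the volume
        tint = tip_color + (root_color - tip_color) * t  # tip-to-root gradient
        c = sample_color(u) * tint
        # Front-to-back "over" compositing of this slice:
        out_color += (1.0 - out_alpha) * a * c
        out_alpha += (1.0 - out_alpha) * a
    return out_color, out_alpha

# A constant-opacity, white cross section:
color, alpha = march_shells(lambda u: 0.8, lambda u: 1.0)
```

The compositing order matters: iterating tip-to-root front-to-back lets fully covered slices stop contributing, which is what lets one layer stand in for thirty shells.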
One shortcoming of naive parallax mapping is that you don't have true silhouette edges. This could be solved by implementing relief mapping. Instead I opted to AlphaTest anything lower than a certain fixed value, allowing for blades of moss and grass to be clipped.
Clumping was also achieved by additionally offsetting the uv value at the tip by some smoothly varying vectors, encoded in a normal map generated from tiling FBM. Same goes for keep alive/wind.
Gross directionality changes and density are attenuated via vertex colors = (tangentShift, bitangentShift, 0, density).
Sometimes you just want colored shadows. It may not be physically correct, but perhaps you're making a stylistic choice. Unfortunately, the options are limited: Unity's Ambient Light is a global concept, so if you want colored shadows on a per-material basis for fine-grained control, you probably need to implement it in your shader.
In order for our forest to feel more organic, I made some adjustments to all of our custom shaders - a very simple addendum within our lighting models.
Let's start with standard lambert irradiance.
c.rgb = s.Albedo * saturate(NdotL) * atten;
There are two things happening here: we're diminishing the color of the material by the diffuse component (saturate(NdotL)), as well as by the combined factor of the surface's distance to the light source and whatever cast shadows are occurring (atten). Both of these are 0->1 values. This is what that looks like:
With all that black we lose a lot of contextual information.
Let's add some terms to our lighting to make it more interesting.
Since both the diffuse and attenuation/shadow components map 0->1 as "in shadow"->"in light", we can remap them, multiply against a per-material _ShadowColor, and add the result back into the mix. Now colors that used to be crushed to (0,0,0) are (_ShadowColor.r,_ShadowColor.g,_ShadowColor.b), and our whites are preserved:
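Concretely, the remap can be sketched like this in Python (shadow_color stands in for the _ShadowColor material property; this is an illustration of the math, not our surface shader code):

```python
def lit_color(albedo, ndotl, atten, shadow_color):
    """Lambert term with shadowed regions tinted instead of crushed to black.

    ndotl:        N.L diffuse term (clamped to [0, 1] below)
    atten:        combined light attenuation / shadow term in [0, 1]
    shadow_color: per-material tint added where light is absent
    """
    light = max(0.0, min(1.0, ndotl)) * atten   # standard lambert factor
    # Remap: where light is 0 we add the full shadow color,
    # where light is 1 we add nothing, so whites are preserved.
    return albedo * light + shadow_color * (1.0 - light)

print(lit_color(1.0, 0.0, 0.0, 0.25))  # fully shadowed -> 0.25
print(lit_color(1.0, 1.0, 1.0, 0.25))  # fully lit -> 1.0
```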
Rather than animating and caching a bunch of flower bending by hand, we wanted to create some wind procedurally. This wind is simply for "keep alive": it adds some subtle movement so our scene does not look completely static and CG. In order to do this, the flowers were rigged and three scripts were written: WindSim, WindObject, and AssignIDs.
The WindSim script is attached to any flower or object that has a rig and wants to be simmed. The object "wants" to be simmed if it is specifically tagged in the inspector. This allowed us the ability to turn off simulation easily rather than removing the script entirely from the object. WindSim does the actual moving of the joints and takes in a direction from WindObject. WindSim has attributes such as strength and animation offset to vary the wind motion from object to object. Here is the noise function we came up with:
Inigo Quilez's Graph Toy or the Grapher tool for Macs are really helpful tools to come up with interesting noise functions. This is the function visualized with Graph Toy:
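As a generic stand-in for a function of this kind (every constant below is made up for illustration, and this is plain Python rather than our actual script), a keep-alive curve is often just a few sines at unrelated frequencies:

```python
import math

def keep_alive_noise(t, offset=0.0):
    """A smooth, non-repeating-looking wobble for subtle wind motion.

    t:      time in seconds
    offset: per-object phase so objects don't all sway in sync
    Returns a value roughly in [-1, 1].
    """
    t += offset
    return (math.sin(t) * 0.5
            + math.sin(t * 2.17) * 0.3    # unrelated frequencies avoid
            + math.sin(t * 5.89) * 0.2)   # an obviously periodic look
```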
WindObject is attached to an empty gameObject and serves as a global way of controlling the direction of the wind as well as other parameters. The direction is just based on the gameObject’s directional vector. We have a simple Gizmo line drawing in the direction of the wind which is visible in the scene view. Like WindSim, WindObject also has strength, flexibility, and animation offset attributes that are passed to WindSim initially by default. If any of these attributes are set in WindSim, then they override WindObject’s settings.
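That override behavior can be sketched as follows (a hypothetical Python rendition; the real scripts are Unity components and the attribute names are illustrative):

```python
class WindObject:
    """Global wind settings; serves as the default for every WindSim."""
    def __init__(self, strength=1.0, flexibility=1.0, offset=0.0):
        self.strength = strength
        self.flexibility = flexibility
        self.offset = offset

class WindSim:
    """Per-object sim; any attribute set here overrides the global one."""
    def __init__(self, wind, strength=None):
        self.wind = wind
        self._strength = strength

    @property
    def strength(self):
        # Fall back to the global WindObject when no local override is set.
        return self._strength if self._strength is not None else self.wind.strength

wind = WindObject(strength=2.0)
default_sim = WindSim(wind)               # inherits the global 2.0
custom_sim = WindSim(wind, strength=0.5)  # local override wins
```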
Here's a screenshot of our WindObject selected. The yellow line tells us which direction the wind is blowing.
We created AssignIDs as an editor script to add some randomization for each simmable object. It looks for all simmable objects in the scene and incrementally assigns an ID to each of them.