Tutorial *106*

Ok, this is my first tutorial ever, and I'm not really one to be posting tutorials, seeing as my current engine in progress is at least 80% tutorial code, probably more... I'm working on an anime-inspired game with the Quake engine, and I figured I might as well try to implement some cel shading. I found a lot of useful info at http://nehe.gamedev.net/tutorials/lesson38.asp and http://www.gamedev.net/reference/programming/features/celshading/.

If you read over those articles (both are essentially the same, since one is based on the other), you'll notice that they calculate the dot product of the face normal and the light angle PER POLYGON to get a value for the cel shaded texture. That's a little costly, IMHO, in terms of speed, and it's cheating: lighting is turned off and a texture simulates light on the model. If you wanted an actual texture on the model, you'd have to multi-texture, and seeing as I'm a novice (I'm learning a little Visual C++ by messing around with the Quake source), I looked for another way to do things.

Quake is gracious enough to have pre-calculated dot products and shaded light tables, so why redo work that's already been done? Sure, it's only 16 shades (one every 22.5 degrees), but we're not looking to make the lighting smoother; we actually want it a little rougher. So what I've done is change the lighting value from a float to a vec3_t so that I can calculate the intensity of the light as the length of the vector. I use that length to look up an entry in the celshade array, then normalize the light vector so that when I scale it, its length comes out exactly equal to the celshade entry. It's a little lame, I know, but it was the only way I found to keep the colored lighting values correct AND flatten the lighting.
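For reference, here's a minimal standalone sketch of what those linked articles compute per polygon (the function and parameter names are mine, not from either article):

```c
/* per-polygon shade as described in the NeHe/GameDev articles:
   the dot product of the face normal and the light direction,
   clamped at zero for faces pointing away from the light */
float FaceShade (const float normal[3], const float lightdir[3])
{
	float d = normal[0]*lightdir[0]
	        + normal[1]*lightdir[1]
	        + normal[2]*lightdir[2];
	return (d < 0.0f) ? 0.0f : d;
}
```

Those articles then feed this value into a 1D shade texture. Quake's shadedots table hands us roughly the same quantity for free, already quantized to 16 normal directions, which is what this tutorial piggybacks on.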

Oh yeah, this code assumes that you've completed the colored lighting tutorials (dynamic and LIT support) and that you've done Fenix's interpolation, though with a little modification, this works with any model-drawing routine.

So, with all of that said, let's get to the code!

We're going to be working in gl_rmain.c.

Find the definitions of the vertex normal tables, right after "Alias Models."
Right after:



int	lastposenum;

// fenix@io.com: model animation interpolation

int lastposenum0;

Add:



//  Gongo - cel shade tutorial

//  cel shading table

float celshade[16] =

{ 0.2, 0.2, 0.2, 0.5, 0.5, 0.5, 0.5, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0 };

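Before wiring the table in, here's how an intensity in [0, 1] indexes into it, as a standalone sketch (the helper name is mine, and I've added a clamp at zero for safety, which the engine code doesn't need since a vector length can't be negative):

```c
static const float celshade[16] =
{ 0.2f, 0.2f, 0.2f, 0.5f, 0.5f, 0.5f, 0.5f, 1.0f, 1.0f, 1.0f, 1.0f, 1.0f, 1.0f, 1.0f, 1.0f, 1.0f };

/* map a light intensity in [0, 1] to one of the 16 flattened shades */
float CelShade_Lookup (float intensity)
{
	if (intensity > 1.0f)  /* clamp overbright values */
		intensity = 1.0f;
	if (intensity < 0.0f)  /* safety clamp, can't happen for a vector length */
		intensity = 0.0f;
	return celshade[(int)(intensity * 15)];
}
```

Notice how big bands of input collapse to the same output: that's the whole trick.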
This is our "sharp lighting" table. Now, in GL_DrawAliasBlendedFrame, at the top variables, comment out:



float l;

and add:



int			i;  //  for "for" loops

vec3_t		l;  //  new - used for cel shading

float		l2;  //  cel shading lookup value

right beneath it.

Now go down into the "for" loop that draws the model and find where it calculates light.
Comment out:



l = ( shadedots[verts1->lightnormalindex] + (blend * d[0]) );

glColor3f (l * lightcolor[0], l * lightcolor[1], l * lightcolor[2]);

and add:



//  calculate light as a vector so that its length is the intensity

for ( i = 0; i < 3; i++ )

{

	l[i] = ( shadedots[verts1->lightnormalindex] + (blend * d[0]) );  //  shade as usual

	l[i] *= lightcolor[i];  //  apply colored lighting

}

So that we're still taking colored lighting into account.
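Standalone, that loop boils down to this (the helper name is mine):

```c
/* scale a single shade value by each channel of the light color,
   producing the colored light vector used above */
void ApplyLightColor (float shade, const float lightcolor[3], float l[3])
{
	int i;
	for (i = 0; i < 3; i++)
		l[i] = shade * lightcolor[i];
}
```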
Now comes the real stuff. I've wrapped it in an "if" statement so that I can turn the shading off with a cvar; that lets me switch back and forth in-game to check that the lighting is doing what I think it should.
Right beneath the "for" loop we just added, add:



// cel shade lighting

if ( gl_outline.value )

{

	l2 = sqrt( (l[0]*l[0]) + (l[1]*l[1]) + (l[2]*l[2]) );  // get the length of the lighting vector (intensity)

	if ( l2 > 1.0 )  // if it's greater than 1.0

		l2 = 1.0;  //  we'll clamp down to 1.0, since it'll be the same shade anyway

	l2 = celshade[(int)(l2 * 15)];  //  lookup the value in the cel shade lighting table

	l2 *= 1.25;  //  brighten things up a bit

	VectorNormalize (l);  //  bring the lighting vector length to 1 so that we can scale it to exactly the value we want

	VectorScale (l, l2, l);  //  scale the light to the clamped cel shaded value

	for ( i = 0; i < 3; i++ )

	{

		if ( l[i] > 1.0 )  //  check for overbrights

			l[i] = 1.0;  //  clamp down to 1.0

		if ( l[i] <= 0.0 )  //  check for no light

			l[i] = 0.15;  //  provide some minimum light

	}

}

glColor3f (l[0], l[1], l[2]);  //  apply the (finally) calculated light

I've commented the code a lot, so hopefully it's self-explanatory.
If you compile and run now, all MDL models will just look like they've got ugly lighting...
Now, this isn't as impressive as the "sharp lighting" demonstrated in other cel shading tutorials (like the links at the beginning of this tutorial), but it certainly helps to flatten out the models... What's missing? Oh yeah, the "hand-drawn" outline!

This next part is pretty much adapted straight from the sample code in the tutorials I linked to above.

Still in GL_DrawAliasBlendedFrame, copy everything from:



verts1  = (trivertx_t *)((byte *)paliashdr + paliashdr->posedata);

verts2  = verts1;

to:



      } while (--count);

      glEnd ();

}

and paste it right before the end of GL_DrawAliasBlendedFrame (right after the last "}" we copied). Now, just before the "for" loop in the copy you just pasted, add:



glPolygonMode (GL_BACK, GL_LINE);  //  we're drawing the outlined edges

glEnable (GL_LINE_SMOOTH);  //  make the outline look cleaner

glLineWidth (1.0);  //  values above this look too thick at long distances

glEnable (GL_CULL_FACE);  //  enable culling so that we don't draw the entire wireframe

glCullFace (GL_FRONT);  //  get rid of the front facing wireframe

glFrontFace (GL_CW);  //  hack to avoid using the depth buffer tests

glEnable (GL_BLEND);

glBlendFunc (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);  //  make sure the outline shows up



//  now draw the model again

This sets up the outline rendering mode. The linked articles use glDepthFunc (GL_LEQUAL) to get the outline right, but when I enable it, all of the models flicker and draw in reverse order. I don't know what I'm doing wrong there, so if someone else figures it out, please post an update to this code.

Now go into the pasted "for" loop and remove the texture and lighting calls.
Change the glColor3f call to black (0.0, 0.0, 0.0).
At the end of the "for" loop, paste in:



glDisable (GL_LINE_SMOOTH);

glPolygonMode (GL_BACK, GL_FILL);  //  get out of wireframe mode

glFrontFace (GL_CCW);  //  end of hack for depth buffer

glCullFace (GL_BACK);  //  back to normal face culling

glDisable (GL_CULL_FACE);

glDisable (GL_BLEND);

Again, I'm sure I've made some mistakes here: I'm essentially drawing the model twice, which causes a noticeable speed hit. Feel free to post updates and correct my errors, since there may well be a cheaper way to do this.

If you want to see what my outline routine looks like, here it is:



			// cel shade outline

			if ( gl_outline.value )

			{

				//  get the verts data and stuff

				verts1  = (trivertx_t *)((byte *)paliashdr + paliashdr->posedata);

				verts2  = verts1;



				verts1 += pose1 * paliashdr->poseverts;

				verts2 += pose2 * paliashdr->poseverts;



				order = (int *)((byte *)paliashdr + paliashdr->commands);



				glPolygonMode (GL_BACK, GL_LINE);  //  we're drawing the outlined edges

				glEnable (GL_LINE_SMOOTH);  //  make the outline look cleaner

				glLineWidth (1.0);  //  values above this look too thick at long distances

				glEnable (GL_CULL_FACE);  //  enable culling so that we don't draw the entire wireframe

				glCullFace (GL_FRONT);  //  get rid of the front facing wireframe

				glFrontFace (GL_CW);  //  hack to avoid using the depth buffer tests

				glEnable (GL_BLEND);

				glBlendFunc (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);  //  make sure the outline shows up



				//  now draw the model again

				for (;;)

				{

					count = *order++;



					if (!count) break;



					if (count < 0)

					{

						count = -count;

						glBegin (GL_TRIANGLE_FAN);

					}

					else

					{

						glBegin (GL_TRIANGLE_STRIP);

					}



					do

					{

						order += 2;

						glColor3f (0.0, 0.0, 0.0);  //  outline color



						VectorSubtract(verts2->v, verts1->v, d);



					glVertex3f (
						verts1->v[0] + (blend * d[0]),

						verts1->v[1] + (blend * d[1]),

						verts1->v[2] + (blend * d[2]));



						verts1++;

						verts2++;

					} while (--count);

					glEnd ();

				}



				glDisable (GL_LINE_SMOOTH);

				glPolygonMode (GL_BACK, GL_FILL);  //  get out of wireframe mode

				glFrontFace (GL_CCW);  //  end of hack for depth buffer

				glCullFace (GL_BACK);  //  back to normal face culling

				glDisable (GL_CULL_FACE);

				glDisable (GL_BLEND);

			}

Again, I've got the cvar in the "if" statement to observe changes while in-game. I'm not going to go over how to register a new cvar, that's easy enough, and there are tutorials for that too. :)
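The glVertex3f call in that loop is just linear interpolation of each coordinate between the two poses; standalone, per component (the helper name is mine):

```c
/* blend one coordinate between two poses: the per-component
   equivalent of VectorSubtract followed by v1 + blend * d */
float BlendVertex (float v1, float v2, float blend)
{
	float d = v2 - v1;       /* what VectorSubtract computes per component */
	return v1 + blend * d;   /* blend = 0 gives pose 1, blend = 1 gives pose 2 */
}
```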

So that's it! Compile and run, and (hopefully) all MDL models will be outlined and lit a bit more "flatly" (is that a word?).

My machine is a Celeron 733MHz @ 1.1GHz with 512MB PC133 SDRAM and a Voodoo5 5500 AGP. I get about 50 - 60 fps at 1024x768 with the cel shading on (as opposed to 90 - 100 fps with it off).

A note to all 3dfx users: this code will NOT work with the WickedGL or MiniGL drivers (which is why I'm suffering such a heavy drop in framerate). The Mini and Wicked drivers support GL_LINES but apparently not glPolygonMode with GL_LINE... unless it's something I'm not coding right.

Anyway, enjoy, and please, feel free to post updates/corrections to this.

---Gongo

