Friday 12 April 2013

There Is No Class Like Shader Class

I will admit right away that I am not much of a coder, but I am a gamer. A gamer looking to make his way into the industry, and one thing that is absolutely necessary to do well in the gaming world is a working knowledge of shaders. Why? Simple. BECAUSE SHADERS ARE EVERYWHERE IN GAMES! You couldn't squash a goomba nowadays without at least one shader running. But why is this so? Why have shaders become so prevalent within games that they are now indispensable? Because shaders give developers the power to manipulate every stage of the rendering pipeline, allowing them to create effects of great realism and impact.

The class I am currently enrolled in is called Intermediate Computer Graphics, and it is an amazing course that introduces students to the world of shaders. This wonderful class is taught by Andrew "Shaderman" Hogue, PhD. This guru of Gaussians, this sultan of shadows, this master of meshes is responsible for teaching students the inner workings of shaders and their many, many applications. His classes are fast paced and jam-packed with information, but his good nature and upbeat attitude keep them at least a little bearable, though it would be nice to go through the material at a more comfortable speed. Then again, that's what office hours are for, right? These classes are accompanied by tutorials taught by Daniel Buckstein, who is also an amazing teacher, capable of conveying information in a hilarious and enjoyable manner, though he seems to have a strange fixation on cheeseburgers.

All in all, the course is a fast-paced, helter-skelter, runaway train ride of shaders and code led by a mad doctor and a cheeseburger-obsessed assistant. I would totally recommend this course if you are interested in learning the inner workings of shaders, though be warned: you will only get out of this course what you put into it, and taking it may result in you becoming just as mad as they are. You have been warned.


Gamecon

On April 9th, 2013, Gamecon was once again upon us. But what is Gamecon? Gamecon is a showcase held by UOIT to show the public the games created by students in the Game Development and Entrepreneurship program. This year Gamecon was held in the atriums of the three main buildings of the North Oshawa campus: the UA building, the UB building, and the ERC building (I don't remember their full names). We met in the morning to set up our booth with whatever decorations we had and made sure our game was up and running and ready to be played.

Soon enough people started to come around and play our game, much like what happened at LevelUP but with a much smaller turnout. I guess it's because the people who came by were students and did not have time to play everyone's games, which was unfortunate. A big issue that kept occurring was that the power to the wall sockets kept cutting out, leaving people without power and causing some booths to "shut down" as their computers' batteries drained. The location for Gamecon this year was somewhat better than last year's location, the gym, because of the amount of traffic that goes through these areas, but the areas were not meant to take that kind of power consumption. It is imperative that we find a suitable place to host Gamecon, a place with both high traffic and stable power.

Other than that, Gamecon was another enjoyable experience, as both the public and the professors who played our game seemed to enjoy it.

Leveled UP!


On April 3rd, 2013, my team and I had the opportunity to attend the 2013 LevelUP showcase, which featured video games created by students from various universities and colleges. This meant that the team and I would be showing the game we have been developing since last semester, Luna, not only to students and professors from other schools but to the general public, as LevelUP is a free event open to the public. It is safe to say we were nervous being there, like we didn't belong, because our game did not look presentable to us at all. But hey, any excuse to get out of school, right?

It was cold that morning, and while I was a little excited to be back in the city after so long, that did little to cure my nervousness. All teams from our school were scheduled to meet at around 10am at the Design Exchange building in Toronto; the showcase was to be held on the first and second floors of the building. We got there, helped set up a few things in the room, and essentially did nothing for a good long while until we were finally instructed to set up our games at our designated areas. After completing that task, we did nothing again. The coordinators of the event promised us pizza at noon. We did not get the pizza at noon! We got it at 3pm! By that time we were hungry as all hell and descended on those pizzas like wolves.

Finally, 5pm rolled around and the event opened to the public. By this time I was so bored and tired that I didn't care how our game looked and was eager to do something to relieve my boredom. A great variety of people drifted into the hall, from students from other schools to little children eager to play some video games. The hall filled up fast and was soon full of the noise of video games and jostling people. We presented our game to anyone who seemed even remotely interested, and we found that the people who seemed to enjoy our game the most were little girls, which was both adorably awesome and valuable to our marketing plan.

Whenever there was an opportunity, I would wander the hall observing and playing the games from the other schools. I found that most of them used pre-made engines like the Unreal Engine or Unity, unlike us, who have to build everything from scratch. I do not deny the value in learning to make games from scratch, but my main gripe comes from the sheer difference in aesthetic value these engines give teams, as they allow them to make their games much more visually appealing despite having almost exactly the same mechanics as ours. This seemed to draw people towards their games as opposed to ours. Though there were a few games that truly deserved the attention, like one game whose main character is a shadow (sadly, I never got a chance to play that one).

While this was all going on upstairs on the second floor, down on the first floor was where industry professionals were set up, presenting their products to anyone who would listen, much like we were doing upstairs. They were also interested in collecting people's résumés and business cards, especially Big Viking Games (I really wanted that hat). This was another problem I had with the setup of LevelUP: the students were busy presenting their games upstairs, and the public, upon entering the building, went straight upstairs, so the first floor was almost completely empty the whole time. I think that next time both the professional developers and the students should be in the same area, so as to maximize the interaction between the students and the professionals as well as give the professionals more exposure to the public.

Overall, LevelUP was a great experience, and we got some awesome feedback about our game.

Sources

https://www.facebook.com/events/492049624175588/

Ambient Occlusion

OK, truth is I'm running out of ideas for these titles, so just bear with me. Obviously, this blog post is going to be about ambient occlusion. "But what is ambient occlusion?" you ask. Well, ambient occlusion is a method of computing shadows within a scene, like shadow mapping but at the same time not like shadow mapping, because of one major characteristic of ambient occlusion, which we will discuss later.

So, how does this thing work? The basic idea is that during the preprocessing of a scene, each triangle of the scene (all models within a scene use triangles because triangles are the easiest shape to compute, and if you do not triangulate your models before using them you are a bad person) is used to determine whether or not an area of the scene is being hit by light. To begin, calculations are made to determine the center of each triangle. Once this has been done, we take the normals of the triangles at the points we just calculated and use them to project rays in random directions, taking care not to include results from rays that go into the model. These rays travel outwards from the centers of their respective triangles, and should a ray hit anything in the scene, the triangle that the ray came from is considered occluded (at least for that ray). If we stopped there, the effect would be pure black and white with nothing in between. The final step, then, is to average the results of all the rays projected from each triangle, which gives relatively nice looking shadows.
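Since this is a blog about shaders, here is what that averaging step might look like written out. This is a minimal GLSL sketch of my own, not course code; real AO bakes usually run offline on the CPU, and the occluded() function here is a hypothetical stand-in that only tests a ground plane so the snippet stays self-contained.

```glsl
#version 330 core
// Sketch of the per-triangle AO bake described above. All names are my
// own illustration; occluded() is a stand-in scene query that only tests
// the ground plane y = 0.
const int NUM_RAYS = 64;
uniform vec3 u_rayDirs[NUM_RAYS]; // precomputed random unit directions

in vec3 v_triCenter; // triangle center computed in preprocessing
in vec3 v_triNormal; // triangle face normal
out vec4 fragColor;

bool occluded(vec3 origin, vec3 dir)
{
    // Stand-in scene query: does this ray ever hit the plane y = 0?
    return dir.y < 0.0 && origin.y > 0.0;
}

float bakeOcclusion(vec3 center, vec3 normal)
{
    float visible = 0.0;
    for (int i = 0; i < NUM_RAYS; ++i)
    {
        vec3 dir = u_rayDirs[i];
        // Flip rays that point into the model so we only sample the
        // hemisphere above the triangle, as described in the post.
        if (dot(dir, normal) < 0.0) dir = -dir;
        if (!occluded(center + normal * 0.001, dir))
            visible += 1.0;
    }
    // The averaging step: 1.0 = fully open, 0.0 = fully blocked.
    return visible / float(NUM_RAYS);
}

void main()
{
    fragColor = vec4(vec3(bakeOcclusion(v_triCenter, v_triNormal)), 1.0);
}
```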

But as mentioned before, there is a problem with ambient occlusion: it is not light based but model based. In other words, the outcome of AO depends only on the characteristics of the model, which is fine until you have a light shining right on top of a spot where AO has determined that a shadow must go. Despite this, ambient occlusion has its uses, as it can cheaply provide shadows for far-away or very large objects, though it is not recommended for work requiring realistic lighting, as the GPU is capable of computing much more sophisticated shadow algorithms.






Sources
http://academy.cg-masters.com/nicks-rants-and-raves/cg-myth-2-ambient-occlusion-shaders-are-awesome/
http://http.developer.nvidia.com/GPUGems/gpugems_ch17.html

In The Depth of the Field

This blog post is about an effect that allows us to create a more realistic look for a rendered scene: depth of field. Truthfully, the effect can be used in photo-realistic and non-photo-realistic graphics alike; I guess a better way of putting it is that it helps create a more believable world by blurring objects to a certain degree depending on their distance from the target object or camera.



Now, depth of field can be achieved in two ways. One way is to use the distances of the nearest and farthest parts of an object, feed them into a fragment shader, and keep anything within that range of distances in focus while blurring anything outside of it. The other way is to calculate the blur radius from camera properties such as the focal length and focal stop (f-number). This is the method I am going to explain.

The first thing we do is calculate how much an object is blurred depending on its distance from the camera. This can be done using the blur-disc diameter equation:

blur = (focal length × magnification / focal stop)
       × distance / (distance of object ± distance of the object from the fragment)
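As a rough GLSL reading of that equation; the uniform names are my own assumptions, and I am interpreting the "±" as a signed offset of the fragment from the perfectly focused distance:

```glsl
// Hypothetical helper computing the blur-disc diameter per fragment.
// Distances are assumed to be in eye-space units.
uniform float u_focalLength;   // lens focal length
uniform float u_magnification; // lens magnification
uniform float u_focalStop;     // f-number
uniform float u_focusDist;     // distance that is in perfect focus

float blurDiameter(float fragDist)
{
    float lens = (u_focalLength * u_magnification) / u_focalStop;
    // Fragments at the focus distance get zero blur; blur grows as the
    // fragment moves in front of or behind the in-focus plane.
    float offset = fragDist - u_focusDist;
    return lens * abs(offset) / fragDist;
}
```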


Now the pixels-per-millimeter needs to be calculated so the blur diameter can be converted into screen pixels. After that we blur. We can do this by separating the blur into two passes, one that blurs all the pixels horizontally and one that blurs them vertically (a box blur). When done, we combine the blurred scene and the regularly rendered scene so that the strongly blurred parts blend in where the blur-disc equation calls for heavy blur, and the sharply defined parts blend in where it calls for little or none.
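Here is a minimal sketch of one of those blur passes in GLSL: run it once with a horizontal step, then again with a vertical step on the result. All names are my own assumptions.

```glsl
#version 330 core
// One direction of the separable box blur. Set u_dir to
// vec2(1.0 / width, 0.0) for the horizontal pass and
// vec2(0.0, 1.0 / height) for the vertical pass.
uniform sampler2D u_scene;
uniform vec2 u_dir; // one texel step in the blur direction

in vec2 v_uv;
out vec4 fragColor;

void main()
{
    const int RADIUS = 4;
    vec4 sum = vec4(0.0);
    for (int i = -RADIUS; i <= RADIUS; ++i)
        sum += texture(u_scene, v_uv + u_dir * float(i));
    // Box blur: every tap is weighted equally.
    fragColor = sum / float(2 * RADIUS + 1);
}
```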

A Bloom by any other name would smell as sweet

So this blog post, as mentioned in the title, is about an effect that has made games look so, so, so sweet: bloom. Now, back in the day when I was a kid, there were no fancy shaders like bloom to make things look awesome. When you played games, you played them with harsh edges and blocky models, everything was rendered EXACTLY how it was originally, and you liked it. But today shaders are everywhere in games, you can't imagine games without them anymore, and one of the most used effects is bloom. Bloom is an effect that, essentially, blurs the areas of a model, or the model itself, that are emitting, reflecting, or generally sending light into the camera, giving the lights a softer and brighter look.


So how is this done? Well, the first thing to know is that bloom has two main parts. The first is the creation of something called a glowmap, which is just a texture used to mark the parts of the scene that will emit light. To build it, disable the colour writes, then draw all the non-glowing stuff to a frame buffer object; the GPU will still write the depth values to the FBO. The second part is enabling the colour writes again and drawing all the glowing parts of the scene. So, to reiterate: to create bloom, render the glowmap, then blur it using a Gaussian blur. Once this is done, blend it with the rendered scene using additive blending. This matches the scene I had edited by hand almost perfectly, as I had used semitransparent layers coloured white to increase the brightness of the colours.
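A minimal sketch of that final additive blend in GLSL; the texture names are my own assumptions:

```glsl
#version 330 core
// Final bloom combine: the blurred glowmap is added on top of the
// normally rendered scene.
uniform sampler2D u_scene;       // regular render of the scene
uniform sampler2D u_blurredGlow; // glowmap after the Gaussian blur passes

in vec2 v_uv;
out vec4 fragColor;

void main()
{
    vec4 base = texture(u_scene, v_uv);
    vec4 glow = texture(u_blurredGlow, v_uv);
    // Additive blending: glowing areas brighten and bleed softly
    // into their surroundings.
    fragColor = base + glow;
}
```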














Sources

http://devmaster.net/posts/3100/shader-effects-glow-bloom

It Could Be Motion Blur, Or You Could Just Be Drunk

You stagger forward through the dark alleys of some dirty, polluted city. You should not have had that drinking contest with that Russian at the pub, but the look on his face as he passed out was priceless. Your vision begins to blur, each movement of everything you see blending into the next. You wonder what is happening; it's simple, it's called being drunk. Now, when this happens to a character in a video game, it's called motion blur. There are a few ways of creating motion blur, such as through the accumulation buffer or through the use of per-pixel motion vectors. I will briefly go over both.


First, let us go over the accumulation buffer method of creating motion blur. What is the accumulation buffer? The accumulation buffer is essentially a feature of OpenGL that stores images from the draw buffer (the draw buffer takes in images from whatever perspective the user is currently set to). The thing about the accumulation buffer is that it takes the images it got from the draw buffer and multiplies them by a decay value (the decay value is how much of the image or frame is left visible) before blending them back into the current draw buffer. So when the scene is finally drawn on the screen, you see multiple frames at once, but with varying opacity. This method is easy to implement but produces an unrefined look.
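The classic accumulation buffer is driven from the C side of OpenGL (the legacy glAccum calls) rather than from a shader, but since this blog is about shaders, here is a GLSL sketch of my own that emulates the same idea by ping-ponging between two FBOs:

```glsl
#version 330 core
// Emulated accumulation buffer: each frame, blend the new image over a
// decayed copy of the running accumulation, then display the result and
// feed it back in as u_accum next frame. All names are my own.
uniform sampler2D u_currentFrame; // this frame's render
uniform sampler2D u_accum;        // accumulation from previous frames
uniform float     u_decay;        // e.g. 0.8: how much of the past survives

in vec2 v_uv;
out vec4 fragColor;

void main()
{
    vec4 past = texture(u_accum, v_uv) * u_decay;
    vec4 now  = texture(u_currentFrame, v_uv) * (1.0 - u_decay);
    // Older frames fade out gradually, leaving ghostly motion trails.
    fragColor = past + now;
}
```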

Per-pixel motion vectors, on the other hand, give a much smoother blur than the accumulation buffer. It works by first using a pixel shader, with values taken from the depth buffer, to calculate the world-space location of each pixel in the scene relative to the view-projection matrix presently being used. The difference between a pixel's position in the previous frame and in the current frame is then calculated to give a vector, a per-pixel velocity vector. This can be used to gather samples from the frame buffer along the direction of motion, which are then averaged to create the blur.
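A hedged GLSL sketch of that idea, reconstructing the previous-frame position from the depth buffer; the matrix and texture names are my own assumptions:

```glsl
#version 330 core
// Per-pixel motion vector blur: rebuild this pixel's world position from
// depth, reproject it with last frame's matrices, and average samples
// along the resulting velocity.
uniform sampler2D u_scene;
uniform sampler2D u_depth;
uniform mat4 u_invViewProj;  // current frame: clip -> world
uniform mat4 u_prevViewProj; // previous frame: world -> clip

in vec2 v_uv;
out vec4 fragColor;

void main()
{
    // Rebuild this pixel's world position from the depth buffer.
    float depth = texture(u_depth, v_uv).r;
    vec4 clip  = vec4(v_uv * 2.0 - 1.0, depth * 2.0 - 1.0, 1.0);
    vec4 world = u_invViewProj * clip;
    world /= world.w;

    // Where was this world position on screen last frame?
    vec4 prevClip = u_prevViewProj * world;
    vec2 prevUv   = (prevClip.xy / prevClip.w) * 0.5 + 0.5;

    // Per-pixel velocity, then average samples along it.
    vec2 velocity = (v_uv - prevUv) / 8.0;
    vec2 uv  = v_uv;
    vec4 sum = vec4(0.0);
    for (int i = 0; i < 8; ++i)
    {
        sum += texture(u_scene, uv);
        uv  -= velocity;
    }
    fragColor = sum / 8.0;
}
```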





We're wacky! We're loony! We are all a little toony!

This time around I am going to talk about a really cool effect that will leave you flattened, like an anvil got dropped on your head by a pesky rabbit. In this entry I am going to discuss toon shading. But what is toon shading, you ask? Toon shading is an effect that can be applied to a rendered scene to give it a cartoon look, characterized by thick outlines and non-smooth shadows, as seen in the following image.


So, how do we do this? Well, the first thing you have to know is that there are actually two effects happening on the screen: one is called cel-shading, the other is a Sobel filter. We will cover how both work.

First up is cel-shading. As can be seen in the image above, cel-shading is responsible for the layered or "blocky" shadows in the scene.


To do this, two shaders are needed, a vertex shader and a fragment shader, as well as the position the light is coming from. The fragment shader requires four variables in order to function properly: the texture coordinates, the normal information, and two uniform variables for the input image and the qmap. For our purposes, the qmap is simply a greyscale gradient from white to black. The vertex shader, on the other hand, only requires the texture coordinates, the normal information, and the vertex positions; it informs the fragment shader where to draw the shadows and what shade they should be.
 
Now that we have these shaders, we normalize the vectors and set up the calculations for the qmap. This starts with a diffuse calculation, which helps determine the shadows of the scene: we take the maximum of either zero or the dot product of the normalized normal and the vector pointing toward the light. This value is then used to look up the qmap, which displays the shadows as layers of gradient as opposed to a smooth transition from light to dark. Lastly, the texture2D data of the input image at the texture coordinates is stored in a vector, which is multiplied by the result of the shadow calculation to create the toon shadows.
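Putting those steps together, a minimal cel-shading fragment shader might look like this; the names are my own, and I am assuming the qmap is sampled as a one-dimensional ramp:

```glsl
#version 330 core
// Cel-shading sketch: snap smooth diffuse lighting into discrete bands
// by looking the diffuse term up in a greyscale ramp (the qmap).
uniform sampler2D u_image;   // input image / object texture
uniform sampler2D u_qmap;    // greyscale ramp, white to black
uniform vec3      u_lightDir; // direction toward the light

in vec2 v_uv;
in vec3 v_normal;
out vec4 fragColor;

void main()
{
    // Diffuse term: max of zero and N dot L.
    float diffuse = max(0.0, dot(normalize(v_normal),
                                 normalize(u_lightDir)));
    // Look the term up in the qmap to get stepped, "blocky" shading.
    float shade = texture(u_qmap, vec2(diffuse, 0.5)).r;
    // Multiply the object's colour by the banded shade.
    vec4 base = texture(u_image, v_uv);
    fragColor = vec4(base.rgb * shade, base.a);
}
```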


The thick lines of the image are created using the Sobel filter, which filters the normals to create an edge texture with black lines around the edges of the models and white everywhere else. To start, define a kernel for the edge detection so that both the horizontal and the vertical edges of the models are detected. The Sobel filter takes in the x and y image coordinates and the image data. Then, using the pixel size to step between neighbouring texels, the kernel weights are applied to the surrounding texture samples and summed into a vector. Finally, the dot product of that sum with itself is compared to a threshold of one: below the threshold the pixel stays white, otherwise zero (black) is returned, drawing the outline.
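A sketch of that filter in GLSL, run over the normals rendered to a texture; the kernel layout and names are my own:

```glsl
#version 330 core
// Sobel edge detection over the normal buffer: where normals change
// sharply (model edges), draw black; everywhere else, white.
uniform sampler2D u_normals;  // scene normals rendered to a texture
uniform vec2      u_pixelSize; // 1.0 / resolution

in vec2 v_uv;
out vec4 fragColor;

void main()
{
    // Horizontal and vertical Sobel kernels.
    mat3 kx = mat3(-1.0, 0.0, 1.0,
                   -2.0, 0.0, 2.0,
                   -1.0, 0.0, 1.0);
    mat3 ky = mat3(-1.0, -2.0, -1.0,
                    0.0,  0.0,  0.0,
                    1.0,  2.0,  1.0);

    vec3 gx = vec3(0.0);
    vec3 gy = vec3(0.0);
    for (int i = -1; i <= 1; ++i)
    for (int j = -1; j <= 1; ++j)
    {
        vec3 n = texture(u_normals,
                         v_uv + vec2(i, j) * u_pixelSize).rgb;
        gx += n * kx[i + 1][j + 1];
        gy += n * ky[i + 1][j + 1];
    }

    // Strong gradient means an edge: draw it black, leave the rest white.
    float edge = dot(gx, gx) + dot(gy, gy);
    fragColor = (edge > 1.0) ? vec4(vec3(0.0), 1.0) : vec4(1.0);
}
```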


Now, to put the shading and the lines together, simply multiply them, which generates the finished toon shader.

Wednesday 10 April 2013

Power Of The Dark Side

 I dance around flame
But am never burned

I am as quick as light 
But can never outrun you

I take form in day
But am colour of night

What am I?

Seriously, if you answered anything other than ****** (the answer is below), I am sorely disappointed. So, for those who got the riddle, this blog entry is about ****** mapping, essentially a way of creating ******s.



OK, so the first step of ****** mapping is to create something called a depthmap, which is essentially a texture that holds the distances from a light source to everything in the scene. The depthmap can be made by rendering the scene from the light's perspective and recording the distances of all the vertices of all the objects in the scene from the light source. Now that the depthmap has been created, render the scene from the camera's perspective and use a fragment shader to do the lighting calculations. To determine whether a fragment is cast in ******, transform it into the light's perspective and check whether it is a greater distance from the light source than the distance recorded in the depthmap; if so, it is in ******, otherwise it is lit as normal.
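A minimal GLSL sketch of that depthmap comparison. The names are my own assumptions, and I have kept the riddle's answer out of the identifiers so the reveal below stays intact:

```glsl
#version 330 core
// Depthmap comparison: v_lightSpacePos is this fragment's position
// transformed into the light's clip space by the vertex shader.
uniform sampler2D u_depthMap; // scene depths from the light's perspective
uniform sampler2D u_diffuse;  // the object's texture

in vec4 v_lightSpacePos;
in vec2 v_uv;
out vec4 fragColor;

void main()
{
    // Perspective divide, then map from [-1, 1] clip space to [0, 1].
    vec3 proj = v_lightSpacePos.xyz / v_lightSpacePos.w;
    proj = proj * 0.5 + 0.5;

    // Closest depth the light recorded at this spot vs. our depth.
    float recorded = texture(u_depthMap, proj.xy).r;
    float current  = proj.z;

    // Farther than the recorded depth means something blocks the light.
    // The small bias stops surfaces from incorrectly occluding themselves.
    float lit = (current - 0.005 > recorded) ? 0.3 : 1.0;
    fragColor = texture(u_diffuse, v_uv) * lit;
}
```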















Answer: SHADOW

Sources
http://devmaster.net/posts/3002/shader-effects-shadow-mapping