Wednesday, 11 December 2013

Vroom Vroom!

Over the last few blogs I have been talking about skeletons, AI, lighting, and essentially most things to do with games. But where are all these things held? Or managed? Today most games make use of something called an engine. Most, if not all, graphically intensive games you play today were probably built on a game engine. But what is a game engine? A game engine is a set of facilities and libraries that deal with the general or usual needs of a game. For example: most games need animations, so the game engine contains facilities that allow for animation. Though this is not to say that the engine will do the animations for you; the engine provides the developers with the necessary tools and acts as a base for the developers to build on and tweak in order to fulfill the specific needs of their game.

But what makes up a game engine? There are many different game engines, each aimed at different types of games and each developed with different goals in mind. But most engines include these components:

First off you have the device drivers component, which is essentially a "layer" of code that allows the engine to interact smoothly with the hardware (example: graphics card) of the device (example: PS3) on which it is installed.

Then you have the operating system component of the engine, which is mainly related to the operating system (example: Windows) of the device you are running the engine on. Its main purpose is to make sure that your game "behaves" with the operating system and the other programs associated with it. In other words, it makes sure that your game doesn't eat up all the resources that should be shared between programs on a computer, mainly memory and processing power.

There is also the Software Development Kits (SDKs) component of the engine. Most engines rely on a number of SDKs to provide a good deal of the functionality and tools of the engine, such as data structures, graphics, and AI. In other words, a game engine may make use of another engine specifically geared for a certain purpose, like a physics engine, to provide some of the necessary tools for the developers.

Then there is the platform independence layer component of the engine, which essentially takes all the information coming from the above components, which may change between platforms (PS3, Xbox 360, PC, etc.), and makes sure that it is all compatible with the other parts of the engine.



The core systems component of the engine more or less handles the "administrative" functions of the engine, mainly providing mathematical libraries, memory management, custom data structures, and assertions (essentially error checks).
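To make that a bit more concrete, here's a toy Python sketch of the kind of thing a core-systems math library provides, plus an assertion doing its error-checking job. All the names here (Vec3 and friends) are made up for illustration, not from any real engine.

```python
import math

# A toy vector type, the sort of utility a core-systems math library supplies.
class Vec3:
    def __init__(self, x, y, z):
        self.x, self.y, self.z = x, y, z

    def dot(self, other):
        return self.x * other.x + self.y * other.y + self.z * other.z

    def length(self):
        return math.sqrt(self.dot(self))

# An "assertion" in the engine sense: a sanity check that halts a debug build
# the moment something impossible happens, instead of letting it fester.
v = Vec3(3.0, 4.0, 0.0)
assert abs(v.length() - 5.0) < 1e-9, "a 3-4-5 vector should have length 5"
```

Real engines implement this in heavily optimized C++ (often with SIMD), but the idea is the same: boring, reliable building blocks everything else leans on.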

The resource manager component makes sure that the rest of the engine or the game is capable of accessing all the different resources needed for the game regardless of their file type.

Now here is an interesting bit, the rendering engine. Yes, I know we are already talking about a game engine. But you know what? Game engines are engines with engines which might also have engines. Engine-ception! But back on topic, this part of the engine deals with just about everything to do with making things appear on screen, from the menus, to the 3D models, to the particle systems (everyone loves particles!).

You with particles.
You without particles.
Then there is the ever important profiling and debugging component of the engine, which provides tools and functionality to the developers that allow them to closely analyze such things as memory usage and gameplay playback, which help them to track down and stomp out bugs.

Found one!
Collision and physics, well, it should be fairly obvious what this component covers. But most game engines today include this component as an SDK. You are probably asking yourself: why are engines using other engines to do their work? Why can't game developers make their own? The answer is very simple... IT'S BECAUSE THEY ARE BUSY MAKING THE GAMES YOU PLAY!



Making these components takes time and resources both of which could be better used for making awesome games that will not only make the dev team and you happy but will also make sure that the team keeps making money which keeps the company afloat which makes the dev team really happy.

The animation component of course deals with how things move in the game. So, in other words, how a character walks is handled by the animation component of the engine.

The Human Interface Devices (HIDs) component mainly deals with the devices we, as humans, use to create input for the game. Essentially this is the part of the engine that deals with controllers. Whether they be mouse and keyboard, gamepad, or some other device, the HID component handles it.


The next part is the audio section, which, uuh, deals with audio. Essentially it makes sure that the sounds and music are handled correctly to provide the best experience possible. Audio is sometimes overlooked, which is sad given the integral part that audio plays in a game. Want proof? Go into the options of a video game, turn down the music volume to zero, and see how that affects the gameplay.
The next component of the engine deals with the multiplayer functionality of the game. Meaning that it deals with split screens, the multiple inputs from the different players, and, in regard to online play, the networking of the separate consoles. This component is called... well... the multiplayer and networking component. Look, these names are very efficient, OK?

Next we have the Gameplay Foundation Systems. Whereas the other components deal with how things act in certain ways, this component allows those things to act in that way. In other words, this is the rule keeper of the engine; it dictates which things move or don't move and what those things are capable of doing. So if your character can shoot fireballs, it's because the Gameplay Foundation Systems have deemed it so. This component is sometimes written in a scripting language, which allows developers to make changes to the game without having to recompile it, which saves precious time.
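Here's a hypothetical, very stripped-down sketch of that "rule keeper" idea in Python. Everything here (the class names, the abilities) is invented for illustration: the point is that the fireball rule lives in data the gameplay layer consults, so a designer could change it without touching compiled code.

```python
# A made-up, data-driven rule table: which character classes may use
# which abilities. A designer edits this data, not the engine itself.
ABILITIES = {
    "wizard": {"fireball", "teleport"},
    "knight": {"sword_slash", "shield_block"},
}

def can_use(character_class, ability):
    # The gameplay foundation layer's job: consult the rules and answer.
    return ability in ABILITIES.get(character_class, set())

print(can_use("wizard", "fireball"))   # the rules have deemed it so
print(can_use("knight", "fireball"))   # no fireballs for you
```

In a real engine this table would typically come from script or data files loaded at runtime, which is exactly why recompilation isn't needed.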

Lastly, there is the Game-Specific Subsystems component, which is more or less the creation of new parts for the engine that will produce the specific effects the developers need for their game.

Finally! Now, of course, what I went over in this blog is just a simple overview. There is so much more to these components and I encourage you to research further into them should you desire to do so.

Sunday, 1 December 2013

Spooky Scary Skeletons


Well, if you haven't yet figured out the topic of today's blog then you should get your brainbox checked. The topic of today's blog is skeletons! And by skeletons I mean the kind that you form inside of a 3D model in order to dictate the manipulation of the mesh, and not the kind that keeps your body from looking like a pile of squishy, saggy meat-flesh.

This is what happens when Frankenstein has too much fun with silly putty
Skeletal animation is currently the most popular technique used in animating movable objects in a game, at least when it comes to organic objects such as people, animals, and other creatures. Skeletal animation consists of two main components: the mesh and the skeleton.

The mesh or model is a connected conglomeration of polygons that make up the shape or body of the object to be animated. So if the object to be animated is a horse, then the mesh would be in the shape of a horse.

The skeleton is a series of joints ordered in a hierarchy, positioned in the places where the object is supposed to bend or deform.
why is it that videogame babies usually look creepy instead of cute?
Even though there are shapes, called bones, connecting the joints, these bones do not affect the animation of the character. Only the joints are taken into account when animating; bones are simply there to help the animator see the shape and hierarchy of the skeleton.

When the skeleton and the mesh are created and put in their proper positions (in other words, the skeleton is placed correctly inside the mesh), the mesh is "skinned" to the skeleton. This means that the vertices of the mesh are bound to the joints of the skeleton so that when the orientation and position of the joints change, it affects the position of the vertices. The amount of effect a joint has on a vertex, more commonly known as weight, can be changed so as to give the mesh a more desirable deformation as the skeleton is manipulated. It is very easy to have skin weights that create bad deformations, so be vigilant.
That looks like it hurts
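To show the weighting idea, here's a toy Python sketch of the vertex-weighting scheme (usually called linear blend skinning). Real engines use full joint matrices and run this on the GPU; I'm cheating with plain 2D translations so the math stays visible. The function and joint names are made up.

```python
# A minimal sketch of linear blend skinning in 2D: a vertex's final position
# is the weighted sum of where each joint's motion would put it.
def skin_vertex(vertex, joints, weights):
    # joints:  list of (dx, dy) translations, one per joint
    # weights: one weight per joint, summing to 1 (this vertex's "weights")
    x, y = vertex
    out_x = sum(w * (x + dx) for w, (dx, dy) in zip(weights, joints))
    out_y = sum(w * (y + dy) for w, (dx, dy) in zip(weights, joints))
    return (out_x, out_y)

# A vertex weighted half-and-half between a joint that moves (+2, 0)
# and one that stays put ends up moving halfway: (1, 0) -> (2, 0).
print(skin_vertex((1.0, 0.0), [(2.0, 0.0), (0.0, 0.0)], [0.5, 0.5]))
```

Change the weights to [1.0, 0.0] and the vertex follows the first joint completely; bad weight choices are exactly how you get those painful-looking deformations.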
As mentioned before, only the vertices of the mesh are manipulated by the skeleton. Ultimately this is done to reduce the amount of data needed to make these animations happen. Now, in an ideal world where we are able to do anything, at any time, anywhere, the most perfect animation system would essentially be your body. Your body is made up of cells, which in turn are made up of molecules, which can be broken down into atoms, which can be broken down into subatomic particles, which can be broken down into strings? Or is it waves? I have no idea. The point I am trying to get at is that when you move your body, each of those cells, molecules, atoms, etc. moves too. This can be thought of as the ideal animation system, where we have objects made up of hundreds of thousands of points and each of them is manipulated. Can you imagine the sheer MAGNITUDE of the amount of data required to do anything of that scale? UGH. This is why only the vertices are used, to save data.

Save data
If you were to use morph targets as well, the amount of data used is even less, because you are now restricting the degrees of movement to a discrete set of movements as opposed to allowing the object to move all over the place.



Sources

Gregory, Jason. Game Engine Architecture. Boca Raton: CRC Press, 2009.

Soriano, Marc. "Skeletal Animation". Bourns College of Engineering. Retrieved November 2013.

Owen, Scott (March 1999). "A Practical Approach to Motion Capture: Acclaim's optical motion capture system: Skeletal Animation". Siggraph. Retrieved November 2013.

http://www.youtube.com/watch?v=K2rwxs1gH9w

http://files.coloribus.com/files/adsarchive/part_1400/14009805/biovision-blob-fish-600-69817.jpg

http://upload.wikimedia.org/wikipedia/commons/a/ab/Skeletal_animation_using_Blender_software.png

http://wiki.blender.org/uploads/thumb/e/e6/RST.65.png/600px-RST.65.png

http://images2.fanpop.com/image/photos/9400000/Lt-Commander-Data-star-trek-the-next-generation-9406565-1694-2560.jpg

https://www.youtube.com/watch?v=iRZ2Sh5-XuM

Saturday, 30 November 2013

PvP Fight



Portal 2 is a puzzle/adventure game which revolves around the concept of creating these gateways, or "portals" if you will, that allow the player to enter one portal and exit out the other. Aside from the main story line, the game allows players to construct their own puzzles or "test chambers" which they can play and publish via the Steam Workshop, which allows other Steam customers to play their level should they also own Portal 2.



Recently, I was able to play the level "gel training" by TinMan. The level opened up using the elevator, which is very interesting in that I do not know how to do that. This feature adds to the level in that it adds to the immersive feel of the level, or at least makes you feel that the level is a part of the main game. Once you reach the actual level you are faced with two corridors, one with a laser grid at its end and the other with turrets lined up along it. You are also provided with a propulsion gel dispenser. I don't want to spoil too much of the level, but in a nutshell you use the propulsion gel to get through the laser grid, which gives you access to a button which allows you to use the propulsion gel to get past the turrets and reach the exit.

The problem with the level was that it used the concept of "use A against B to get to C" in the exact same way. You can use this concept again, but one MUST introduce new concepts or obstacles so that even though the player uses item A against obstacle B to get to C, they are forced to use A in different ways than initially used, because obstacle B requires a different approach. The level "gel training" did not do this. Now I know what you are going to say: "the laser grid and turrets are not the same thing". But they both use time. In the level, the laser grid is timed so that when you push a button it remains off for a few seconds, and the turrets do not fire at you for a few seconds when you appear in front of them. Hence, you approach both the turrets and the laser grid problems in the exact same way.

Another problem with the level was that it did not use its space effectively; it had a high ceiling but the player did not need much of it to reach the exit. This ties into the previous paragraph in that effective use of the level space can provide new methods of dealing with the same obstacle, even though the player is using the same concept of item A against B to get to C.

Overall, the level was very simple and straightforward, which can, in instances, be fun. But that was not the case with "gel training": the level was so simple that I did not feel satisfied when I completed it. It used the concept of "use item A to get past obstacle B to reach point C", which is perfectly fine when used as a building block to create more complex puzzles, but when used by itself again and again in quick succession it becomes very old very fast, and that is what happened in the case of "gel training". I believe that if TinMan takes these observations into account his future levels will become much more interesting.


Sources
http://www.blogcdn.com/www.joystiq.com/media/2010/08/portal2logobkgrnd.jpg

http://cloud-2.steampowered.com/ugc/685967544745286080/E0F6AE492D09C2855ED1156377C882ECBD338DCD/637x358.resizedimage


Saturday, 23 November 2013

Touch It Again I think It Moved!

Boss: Bob can you animate a character for me?

Bob: Sure boss. 

Boss: Nice! There's one catch though. The client wants it done "traditionally", that means moving the pixels around to the next pose. 

Bob: OoooK? what character do you want me to animate? 

Boss: This one: 

Bob: WHAT!! I can't do that! That guy has a TON of pixels to him! 

Boss: Well, joke's on you cause pixels don't weigh anything, and you have two weeks. Have fun!

I hope you liked my little script there, but they really did used to animate video games by moving around the pixels so that they formed each sequence of a single movement, like a walk or an attack.
These separate poses would then be placed in a row where they could be loaded one after the other into the game at a fast enough rate that it gave the illusion of movement. This form of animation was known as sprite animation.

it looked something like this
 Though that is not how things would stay. Eventually developers began to make 3D games, and no I don't mean 

I mean THIS

LOVE this game!!
and this

One of the best multiplayer experiences I ever had.
But with new graphics come new ways of doing them, such as rigid hierarchy animation. This involved creating the models (3D objects in the game) as a set of pieces. These pieces were modeled after the parts of the body that typically would not bend, like the forearm or the thigh. The pieces were linked to each other in a hierarchical order that resembled the way the model would be put together in reality, such as right hand > forearm > upper arm > torso. This allowed the models to be moved freely with their limbs following like they should. But this method had a problem due to the models being made up of rigid parts. As the models were put into various poses they would show "cracks" around their joint areas, "cracks" being areas where the pieces would separate, showing some spacing between them. You can typically see it in really old 3D games such as Virtua Fighter.


Though this is not so common now, since we use different methods than rigid hierarchy. One such method was per-vertex animation, which involved moving each of the vertices of a model, vertices being the points on a model that make up the polygons. The motions of these vertices were then exported for use in the game. But calculating the motion of every vertex is very calculation heavy, which is not good for the processor and can slow down the game. This is why this method was not used very often for games.

An alternate method is morph target animation, which is similar to per-vertex animation in that it still involves the vertices, but it uses predetermined sets of positions, so all that's needed to display the movements of a model is to linearly interpolate between the set positions, which is much less calculation heavy. This method is usually used for facial expressions, as it allows the artist to create the various expressions which the model can then be morphed to as needed. But moving the vertices themselves to make each pose of an animation can be quite troublesome, so another method is used.
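The interpolation part is genuinely this simple; here's a toy Python sketch with 2D vertices and made-up "expression" names, just to show what "linearly interpolate between the set positions" means:

```python
# Morph target animation in miniature: every vertex slides in a straight
# line from its base position toward its target position as t goes 0 -> 1.
def morph(base, target, t):
    return [(bx + (tx - bx) * t, by + (ty - by) * t)
            for (bx, by), (tx, ty) in zip(base, target)]

neutral = [(0.0, 0.0), (1.0, 0.0)]   # the base "expression"
smile   = [(0.0, 1.0), (1.0, 2.0)]   # the artist-authored target
print(morph(neutral, smile, 0.5))    # halfway between the two poses
```

The heavy artistic work is authoring the targets; at runtime the engine only has to do this cheap blend, which is why it's so much lighter than free-form per-vertex animation.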

This method, called "skinned animation", uses a skeleton to manipulate the model. By attaching the model or "skin" to the skeleton, the model moves as the skeleton does. The vertices of the "skin" keep tabs on the movements of the skeleton via its "joints", the areas of the skeleton between the bones that allow it to bend. You know, joints!

That's what your knee looks like on the inside!
This method also makes use of what's called a weight map, which is essentially a texture that holds the amount of influence that each joint has on the vertices of the model. When applied, this method allows the model to move more naturally. It produces aesthetically pleasing results and isn't computation heavy, which makes it the most used method of animation today.



Sources

Gregory, Jason. Game Engine Architecture. Boca Raton: CRC Press, 2009.







Thursday, 21 November 2013

Lights! And Effects That Have To Do With Lights! Part 2

Last time I talked about some general techniques regarding light, such as the shadow volumes technique, which uses rays emanating from a light source to calculate areas of shadow within a scene; this, in combination with the stencil buffer, frame buffer, and depth buffer, generates the shadows within the scene when rendered.

I also went over HDR lighting, or High Dynamic Range lighting, which essentially calculates the light intensity of a scene without limiting the intensity range and saves it in a format that allows these kinds of shenanigans. Then, before the scene is rendered, the light is tone mapped, meaning that the intensities are scaled down to within the range which the monitor or TV being used is capable of. This technique is essentially how you make bloom!

No, not that kind of bloom!!

There! That's the stuff!!

This time around I'll be going over a class of lighting called image-based lighting. But why is it called image-based lighting, you ask? That's because all of its techniques essentially use pictures to add features to an object or scene.

One problem with highly detailed models is that their high number of polygons can take up an awful lot of processing power when it comes time for the scene to be rendered, and that's because the more polygons an object has, the more calculations are needed for that object. This can really slow down the frame rate (the number of frames shown in a certain amount of time), which can turn your video game into a slideshow. One way to keep the frame rate nice and high is to make use of low polygon models, but of course the problem with that is that they look... uhh

So how does one use the low polygon model while maintaining the detail of the high polygon model? With the use of normal maps, of course! But what is a normal? A normal is a vector that is perpendicular to a surface or to another vector (or vectors). These normals are used in calculating how light will react to a given polygon. So one can use the normals of a high polygon model (which has many more normals than the low polygon model) to generate a normal map, which is essentially a texture that encodes those normals as colour values. Applying this texture to the low poly model essentially gives the impression that the low poly model has many normals; this can make a flat surface look curved.
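To see why swapping the normal changes the look, here's a sketch of the simplest diffuse lighting rule (Lambert's: brightness is the dot product of the normal and the light direction, clamped at zero). The specific normal values below are made up for illustration.

```python
# Lambert diffuse shading: max(0, N . L). Feed it the flat geometric normal
# and you get one brightness; feed it a tilted normal read from a normal map
# and the same flat polygon shades as if it were curved -- no extra geometry.
def lambert(normal, light_dir):
    n_dot_l = sum(n * l for n, l in zip(normal, light_dir))
    return max(0.0, n_dot_l)

light         = (0.0, 0.0, 1.0)   # light shining straight at the surface
flat_normal   = (0.0, 0.0, 1.0)   # the low-poly model's actual normal
mapped_normal = (0.6, 0.0, 0.8)   # a perturbed normal from a normal map

print(lambert(flat_normal, light))    # fully lit
print(lambert(mapped_normal, light))  # dimmer: the texel "looks" tilted
```

The geometry never changed; only the per-texel normal fed into the lighting math did, which is the whole trick.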


Another really useful technique is called specular mapping. This technique essentially makes an object or scene look reflective or glossy. It does this by applying the classic specular equation:

specular = Ks × (R · V)^alpha

where:

Ks = how reflective the object or scene is
R = direction of the reflected light vector
V = direction of the observer
alpha = the power of the specular reflection

With the use of this equation and just three lines of code you can make your object or scene as shiny as you want. But of course, all things in moderation.
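If you want to see those few lines, here's a Python sketch of the equation above (with R and V assumed to be unit vectors, and the dot product clamped at zero so surfaces facing away don't glow):

```python
# The specular term from above: Ks * max(0, R . V) ** alpha.
# A bigger alpha makes a tighter, sharper highlight.
def specular(ks, r, v, alpha):
    r_dot_v = max(0.0, sum(a * b for a, b in zip(r, v)))
    return ks * r_dot_v ** alpha

r = (0.0, 0.0, 1.0)   # direction of the reflected light
v = (0.0, 0.0, 1.0)   # observer looking straight down the reflection
print(specular(0.5, r, v, 32))  # full highlight, scaled by Ks
```

Tilt the observer even slightly and the (R · V)^alpha term collapses toward zero, which is why highlights are small bright spots rather than a general glow.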


Though things in reality are not usually all covered in shininess.

usually
So, in order to control the specular parts of an object, one uses a specular map, which stores the values of "Ks" for each texel (texture unit) of a texture map. With this, one can create localized reflective surfaces and mimic streaks of sweat or blood.

In the end, these techniques help to create a more immersive and interesting world, which we would rather live in than reality because, let's face it, sometimes games are just too beautiful.





Sources

Gregory, Jason. Game Engine Architecture. Boca Raton: CRC Press, 2009.




Tuesday, 19 November 2013

Lights! And effects that have to do with light!

Lights! Along with cameras, lights are integral to video games. You can have all the cameras you want, but if you don't have at least one light source you won't see anything but darkness, which is not what you want (unless your game is called "You See Nothing but Darkness", in which case, it's perfect!). Though allowing the player to see is not light's only use; lights can be used, and often are, to establish mood. Anything from wild happiness to creeping terror, lights can make it happen.

candles make everything creepier 

One technique aimed at providing more realistic, or at the very least interesting, light is HDR or High Dynamic Range lighting. With HDR you can take the range of light beyond 0 to 1, which was (more or less) the limit. This allows for much higher contrast between light and darkness, creating effects such as that blurry bright light that you see when you stay in a dark area and look out into a very bright area.


It creates this effect by calculating the lighting without making sure that the values stay within the 0 to 1 range, and then storing them in a file format that allows this. Then, before the frame is displayed on screen, the lighting undergoes a process called "tone mapping", which scales the intensity of the light to within the ability of the TV, monitor, or other image viewing machine.
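One well-known tone mapping operator (not necessarily the one any particular game uses; this is just a sketch) is Reinhard's, which squashes any intensity, no matter how huge, into the 0-to-1 range a display can handle:

```python
# Reinhard tone mapping: x / (1 + x). Dark values pass through almost
# unchanged; very bright HDR values get compressed up against 1.0.
def reinhard(hdr_value):
    return hdr_value / (1.0 + hdr_value)

for intensity in (0.5, 1.0, 10.0, 1000.0):
    print(intensity, "->", round(reinhard(intensity), 3))
```

Notice how 10 and 1000 both land just under 1.0: the blinding sun and a bright lamp can coexist in one frame, and the relative ordering of brightness survives even though the absolute values had to be crushed.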


(Ooh just look at those lighting effects! The latest in technology!)

Another technique is called global illumination. Well, global illumination can be thought of more as a class, incorporating a few techniques to provide the desired effect. But what is global illumination about? Well, it involves techniques that take into account how light affects two or more objects and how the residual light from those objects affects the other objects in the scene, all this in relation to the camera.

Global illumination includes quite a few techniques, the most prevalent ones being those that produce shadows, because you can't have light without shadows; it would be weird. One technique used to produce shadows is called "shadow volumes". This technique involves calculating the rays of light emanating from the light source, from the light source's perspective, to each of the objects in the scene, especially their edges (this can be done by shooting rays from the light source through the vertices of the object). The resulting volume(s) provides the shadowed areas. To use these shadows effectively in games, a stencil buffer is used, which records a small counter value for each pixel in the scene. First the scene is rendered, without shadows, into the frame buffer along with a depth buffer. Then the shadow volumes are rendered from the camera's perspective with the stencil buffer set to all zeros: front faces of a shadow volume increase a pixel's value, back faces decrease it, and pixels touched by no volume are left at zero. So, on a third rendering pass, all the components of the scene are combined, and the shadows are generated by darkening the areas where the stencil buffer's values are nonzero.
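The counting trick at the heart of this can be shown with a toy Python sketch (a deliberately simplified model; real stencil work happens per pixel on the GPU): walk from the camera toward a surface point, bump a counter up when you pass into a shadow volume and down when you pass out of one, and check whether you're still inside something when you arrive.

```python
# A toy model of stencil counting for shadow volumes: "front" means
# crossing into a volume on the way to the surface, "back" means leaving
# one. A nonzero count at the surface means the point is in shadow.
def in_shadow(crossings):
    stencil = 0
    for face in crossings:
        stencil += 1 if face == "front" else -1
    return stencil != 0

print(in_shadow(["front"]))          # entered a volume, never left: shadowed
print(in_shadow(["front", "back"]))  # passed clean through a volume: lit
```

Nested volumes fall out for free: enter two, leave one, and the count is still nonzero, so the point is still shadowed.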

This is but one of the many techniques included in global illumination, which is but one of the types of lighting possible within games. But all in all, one can see the vast possibilities given to developers to produce unique and interesting lighting within games, giving us that extra bit of detail that helps to immerse us in the game.


Sources

Gregory, Jason. Game Engine Architecture. Boca Raton: CRC Press, 2009.

http://www.oxmonline.com/files/u10/ds3a_screen_3.jpg

http://www.laurenscorijn.com/wp-content/uploads/2009/08/Farcryhdr.jpg

http://www.youtube.com/watch?v=dKEM5sYnOjE

http://upload.wikimedia.org/wikipedia/en/0/07/Doom3shadows.jpg

http://www.gamerzines.com/wp-content/uploads/2012/09/Moon.jpg

Sunday, 17 November 2013

Scripts!

So what is a scripting language? And how is it different from a programming language? Do NOT try to find a definitive answer to the second question on a forum! Especially if you're not really a programmer. I tried; it was like a firestorm of confusion in there. Basically, what you need to know is that a scripting language IS a programming language, one that allows the user the ability to customize the way a specific software application (a game) behaves. It is a high-level language (meaning that it is closer to human language than to computer language) which is easy for both programmers and non-programmers to use and allows them access to most of the commonly used functions of the engine. Scripting allows these users to do anything from modding a current game to making a completely new game. Though do not think that scripting languages simply come in one flavour; there are several different kinds of scripting languages that may have different functions or may focus on different characteristics.

One such distinction between languages is whether the language is interpreted or compiled. Compiled languages use a program called a compiler to transform or translate the source code into machine code which can then be used by the CPU (though scripting languages are not compiled but instead interpreted, as you will read later). Interpreted languages, on the other hand, can be parsed directly at runtime, or they can be precompiled into byte code which is then processed by a virtual machine, which is essentially a CPU but not really... it's like it exists but doesn't REALLY exist... it's like a holographic CPU, OK? Regardless, virtual machines can be put on almost any machine, but the downside is that they are slower than a real REAL CPU... sigh.
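Since "holographic CPU" is doing a lot of work in that sentence, here's a toy Python sketch of the idea: a "virtual machine" is really just a loop that reads made-up byte code instructions and executes them. The instruction names and format here are invented for illustration, not from any real scripting language.

```python
# A tiny stack-based virtual machine: the "byte code" is a list of
# (opcode, argument) pairs, and this loop is the whole "CPU".
def run(bytecode):
    stack = []
    for op, arg in bytecode:
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            stack.append(stack.pop() + stack.pop())
        elif op == "MUL":
            stack.append(stack.pop() * stack.pop())
    return stack.pop()

# Byte code for the expression (2 + 3) * 4:
program = [("PUSH", 2), ("PUSH", 3), ("ADD", None), ("PUSH", 4), ("MUL", None)]
print(run(program))  # 20
```

Because the byte code is just data, it runs anywhere the little loop runs, which is the portability win; the price is that every instruction goes through this extra layer instead of hitting the real CPU directly, which is the slowness.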

Moving on, we come to functional languages, which distinguish themselves from the other kinds of languages in that programs are defined by a collection of functions. They take in input data and run it through the functions one after the other until the desired output occurs.

Then there are procedural and object-oriented languages: where object-oriented languages focus on classes (structures that use functions to manage data), procedural languages focus on the functions themselves to manage data, as opposed to using classes.

These languages can also differ in whether they use a text-based language, such as Lua, or a graphical language, like Unreal's Kismet. I am very interested in Kismet, mainly because of the number of features focused on improving the user's workflow, such as allowing the user to make changes to the script and immediately see those changes while the game is running.


This next video provides a more in-depth look at what you can do with Kismet and what the Unreal Engine 4 is capable of (sadly, I don't think our school laptops are capable of providing that level of graphical fidelity without exploding).



Though regardless of the "flavour" of language you prefer, there are certain characteristics that all scripting languages share. For one, all scripting languages are interpreted as opposed to being compiled, because this allows them to be highly flexible and easily transferable; as mentioned before, they can be converted into byte code which can easily be put into memory, instead of relying on the operating system as with compiled languages. Virtual machines also grant scripts a large amount of flexibility regarding when code is run because, as mentioned before, they are CPUs... but not really.

Scripting languages also tend to be rather simple and use very little memory because they have to be able to be put inside, or embedded into, a preexisting system. They are like a reverse tumour, where you put them in as opposed to taking them out, and they are good for the system as opposed to bad, which is good... I think.

Another characteristic of scripting languages is that changes you make to the code do not require you to exit the game (if it is running), make your changes, and then recompile the entire thing, as with compiled languages. Though some scripting languages may require you to exit the game, they do not require recompilation, and some languages allow the scripts to be changed while the game is running, like Unreal's Kismet. But regardless of whether or not the language requires you to exit the game, they all allow users to make changes to the code a lot faster than compiled languages.

Scripting languages are also easy to use and convenient, which is reflected by their simplicity. This is because scripting languages are used by programmers and non-programmers alike, specifically designers. So the scripting language must provide features specific to the game the designer is creating, such as the ability to pause and manipulate time, or to find game objects by name. This allows the designer, and any other person using it, to easily understand the language and make the necessary changes.

Scripts, as mentioned by Isla, act as a sort of "glue" or mortar between the components of code, creating a sort of stone wall where the stones themselves (the components) aim to be reusable for other walls (games), while the scripts are specific to that wall.

But when it comes to writing a scripting language, one must remember the characteristics of scripting languages as mentioned above: they must be flexible, easy to use, and must not require compilation. Of course, writing one's own scripting language is not recommended, especially if you have other important things to do. I hope the lead programmer of my game dev group does not read this cause I'm sure he would be all:



In which case I would promptly find a bridge to jump off of, preferably one with deep enough waters underneath that I could easily fake my own death and move to some far off country under the name of Pablo Escobar.


(*facepalm* I googled that name and found out he was a big Colombian drug lord. Yup! Perfectly inconspicuous! That name won't get you selected for a "random" search by airport security at all!)

Despite all the differences mentioned before, scripting languages are used to give more power to the designers and other non-programmers, which is where scripts can do the most good, because they allow the designer or artist or whomever to make easy, quick changes to the game so as to better bring out their vision of what the game should be.

Sources

http://www.youtube.com/watch?v=IReehyN6iCc

http://www.youtube.com/watch?v=VitLyrynBgU

Gregory, Jason. Game Engine Architecture. Boca Raton: CRC Press, 2009.

Lewis, Mike."Flirting with The Dark Side: Domain Specific Languages and AI". File last modified 14 Nov.2013. Microsoft PowerPoint file.

Isla, Damian. "Scripting and AI: Flirting with The Dark Side". File last modified 16 Nov. 2013. Microsoft PowerPoint file.

http://images.wikia.com/animalcrossing/images/0/08/Challenge_accepted.png

http://collider.com/wp-content/uploads/pablo-escobar.jpg