Wednesday 11 December 2013

Vroom Vroom!

Over the last few blogs I have been talking about skeletons, AI, lighting, and essentially most things to do with games. But where are all these things held or managed? Today most games make use of something called an engine. Most, if not all, graphically intensive games you play today were probably built on a game engine. But what is a game engine? A game engine is a set of facilities and libraries that deal with the general or usual needs of a game. For example: most games need animations, so the game engine contains facilities that allow for animation. This is not to say that the engine will do the animations for you; the engine provides the developers with the necessary tools and acts as a base for the developers to build on and tweak in order to fulfill the specific needs of their game.

But what makes up a game engine? There are many different game engines, each aimed at different types of games and each developed with different goals in mind. But most engines include these components:

First off you have the device drivers component, which is essentially a "layer" of code that allows the engine to interact smoothly with the hardware (example: graphics card) of the device (example: PS3) on which it is installed.

Then you have the operating system component of the engine, which is mainly related to the operating system (example: Windows) of the device you are running the engine on. Its main purpose is to make sure that your game "behaves" with the operating system and the other programs associated with it. In other words, it makes sure that your game doesn't eat up all the resources that should be shared between programs in a computer, mainly memory and processing power.

There is also the Software Development Kits (SDKs) component of the engine. Most engines rely on a number of SDKs to provide much of the functionality and tools of the engine, such as data structures, graphics, and AI. In other words, a game engine may make use of another engine specifically geared for a certain purpose, like a physics engine, to provide some of the necessary tools for the developers.

Then there is the platform independence layer component of the engine, which essentially takes all the information coming from the above components, which may change between platforms (PS3, Xbox 360, PC, etc.), and makes sure it is presented consistently to the other parts of the engine.



The core systems component of the engine more or less handles the "administrative" functions of the engine, mainly providing mathematical libraries, memory management, custom data structures, and assertions (essentially an error-checker).  
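To make the assertion idea a bit more concrete, here is a minimal sketch of what a custom engine assertion might look like. Everything here, including the GAME_ASSERT name, is made up for illustration; real engines have fancier versions that can be compiled out of shipping builds or routed into crash reports.

```cpp
// Minimal sketch of a custom engine assertion (the name GAME_ASSERT is hypothetical).
// Unlike the standard assert(), engines often keep a stripped-down check like this
// around even in optimized builds for the really important invariants.
#include <cstdio>
#include <cstdlib>

#define GAME_ASSERT(expr)                                              \
    do {                                                               \
        if (!(expr)) {                                                 \
            std::fprintf(stderr, "Assertion failed: %s (%s:%d)\n",     \
                         #expr, __FILE__, __LINE__);                   \
            std::abort();                                              \
        }                                                              \
    } while (0)

int main() {
    int playerHealth = 100;
    GAME_ASSERT(playerHealth >= 0);  // passes silently; a failure prints and aborts
    return 0;
}
```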

The resource manager component makes sure that the rest of the engine or the game is capable of accessing all the different resources needed for the game regardless of their file type.

Now here is an interesting bit, the rendering engine. Yes, I know we are already talking about a game engine. But you know what? Game engines are engines with engines which might also have engines. Engine-ception! But back on topic, this part of the engine deals with just about everything to do with making things appear on screen, from the menus, to the 3D models, to the particle systems (everyone loves particles!).

You with particles.
You without particles.
Then there is the ever important profiling and debugging component of the engine, which provides tools and functionality that allow the developers to closely analyze things such as memory usage and gameplay playback, which help them track down and stomp out bugs.

Found one!
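Here is a toy example of the kind of tool this component is built from: a scoped timer that reports how long a block of code took. The ScopedTimer name and the output format are made up; real profilers feed results into hierarchical views and in-game displays rather than printing them.

```cpp
// A toy scoped timer, the sort of building block a profiling component might use.
// It starts a clock when constructed and prints the elapsed time when destroyed.
#include <chrono>
#include <cstdio>

struct ScopedTimer {
    const char* label;
    std::chrono::steady_clock::time_point start;
    explicit ScopedTimer(const char* name)
        : label(name), start(std::chrono::steady_clock::now()) {}
    ~ScopedTimer() {
        auto end = std::chrono::steady_clock::now();
        auto us = std::chrono::duration_cast<std::chrono::microseconds>(end - start).count();
        std::printf("%s took %lld us\n", label, static_cast<long long>(us));
    }
};

int main() {
    ScopedTimer frame("updateWorld");        // times everything until main() returns
    volatile long sum = 0;
    for (long i = 0; i < 1000000; ++i) sum += i;  // stand-in for real game work
    return 0;
}
```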
Collision and physics, well it should be fairly obvious what this component covers. But most game engines today have this component as an SDK. You are probably asking yourself: why are engines using other engines to do their work? Why can't game developers make their own stuff? The answer is very simple... IT'S BECAUSE THEY ARE BUSY MAKING THE GAMES YOU PLAY!



Making these components takes time and resources both of which could be better used for making awesome games that will not only make the dev team and you happy but will also make sure that the team keeps making money which keeps the company afloat which makes the dev team really happy.

The animation component of course deals with how things move in the game. So, in other words, how a character walks is handled by the animation component of the engine.

The Human Interface Devices (HIDs) component mainly deals with the devices we, as humans, use to create input for the game. Essentially this is the part of the engine that deals with controllers. Whether they be mouse and keyboard, gamepad, or some other device, the HID component handles it.


The next part is the audio section which, uuh, deals with audio. Essentially making sure that the sounds and music are handled correctly to provide the best experience possible. Though audio is sometimes overlooked, which is sad given the integral part that audio plays in a game. Want proof? Go into the options of a video game, turn the music volume down to zero, and see how that affects the gameplay.
The next component of the engine deals with the multiplayer functionality of the game. Meaning that it deals with split screens, the multiple inputs from the different players, and, in regard to online play, the networking of the separate consoles. This component is called... well... the multiplayer and networking component. Look, the names are very efficient, OK?

Next we have the Gameplay Foundation Systems. Whereas the other components deal with how things act in certain ways, this component allows those things to act in that way. In other words, this is the rule keeper of the engine: it dictates which things move or don't move and what those things are capable of doing. So if your character can shoot fireballs, it's because the Gameplay Foundation Systems have deemed it so. This component is sometimes written in a scripting language, which allows developers to make changes to the game without having to recompile it, which saves precious time.
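As a toy illustration of that "the rules are data, not hard-coded behaviour" idea, here is a tiny sketch where what a character can do lives in a table that a designer could edit. Every name here is invented; no real engine lays it out exactly like this.

```cpp
// Toy sketch of a gameplay-foundation idea: capabilities stored as data so they
// can be changed without recompiling behaviour code. All names are hypothetical.
#include <cstdio>
#include <map>
#include <string>

struct Abilities {
    bool canMove = true;
    bool canShootFireballs = false;
};

int main() {
    std::map<std::string, Abilities> rules;
    rules["mario"]  = {true, true};    // fireballs deemed so by the rules
    rules["goomba"] = {true, false};

    for (const auto& [name, a] : rules)
        std::printf("%s: move=%d fireballs=%d\n", name.c_str(), a.canMove, a.canShootFireballs);
    return 0;
}
```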

Lastly, there is the Game-Specific Subsystems component, which is more or less the creation of new parts for the engine that will produce the specific effects the developers need for their game.

Finally! Now, of course, what I went over in this blog is just a simple overview. There is so much more to these components and I encourage you to research further into them should you desire to do so.

Sunday 1 December 2013

Spooky Scary Skeletons


Well, if you haven't yet figured out the topic of today's blog then you should get your brainbox checked. The topic of today's blog is skeletons! And by skeletons I mean the kind that you form inside of a 3D model in order to dictate the manipulation of the mesh, and not the kind that keeps your body from looking like a pile of squishy, saggy meat-flesh.

This is what happens when Frankenstein has too much fun with silly putty
Skeletal animation is currently the most popular technique used in animating movable objects in a game, at least when it comes to organic objects such as people, animals, and other creatures. Skeletal animation consists of two main components: the mesh and the skeleton.

The mesh or model is a connected conglomeration of polygons that make up the shape or body of the object to be animated. So if the object to be animated is a horse then the mesh would be in the shape of a horse.

The skeleton is a series of joints ordered in a hierarchy, positioned in the places where the object is supposed to bend or deform.
why is it that videogame babies usually look creepy instead of cute?
Even though there are shapes, called bones, connecting the joints, these bones do not affect the animation of the character. Only the joints are taken into account when animating; bones are simply there to help the animator see the shape and hierarchy of the skeleton.

Once the skeleton and the mesh are created and put in their proper positions, in other words once the skeleton is placed correctly inside the mesh, the mesh is "skinned" to the skeleton. This means that the vertices of the mesh are bound to the joints of the skeleton so that when the orientation and position of the joints change, they affect the position of the vertices. The amount of effect a joint has on a vertex, more commonly known as weight, can be changed so as to give the mesh a more desirable deformation as the skeleton is manipulated. It is very easy to have skin weights that create bad deformations, so be vigilant.
That looks like it hurts
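Here is a bare-bones sketch of that weighted "skinning" idea for a single vertex. It uses made-up names and simple 2D offsets instead of real joint matrices, so treat it as the weighted-sum concept only; real engines evaluate per-joint 4x4 matrices, usually on the GPU.

```cpp
// Bare-bones linear-blend ("skinned") vertex: each joint pulls on the vertex
// in proportion to its weight. Offsets stand in for full joint transforms.
#include <cstdio>
#include <vector>

struct Vec2 { float x, y; };

struct Influence {
    Vec2  jointOffset;  // how much this joint moved this frame
    float weight;       // how strongly it pulls on the vertex (weights sum to 1)
};

Vec2 skinVertex(Vec2 bindPose, const std::vector<Influence>& joints) {
    Vec2 out = bindPose;
    for (const Influence& j : joints) {
        out.x += j.weight * j.jointOffset.x;
        out.y += j.weight * j.jointOffset.y;
    }
    return out;
}

int main() {
    // Elbow moved up a fair bit, shoulder barely moved; vertex sits near the elbow.
    Vec2 v = skinVertex({1.0f, 0.0f}, {{{0.0f, 0.5f}, 0.8f}, {{0.0f, 0.1f}, 0.2f}});
    std::printf("skinned vertex: (%.2f, %.2f)\n", v.x, v.y);
    return 0;
}
```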
As mentioned before, only the vertices of the mesh are manipulated by the skeleton. Ultimately this is done to reduce the amount of data needed to make these animations happen. Now in an ideal world where we are able to do anything, at any time, anywhere, the most perfect animation system would essentially be your body. Your body is made up of cells, which in turn are made up of molecules, which can be broken down into atoms, which can be broken down into subatomic particles, which can be broken down into strings? Or is it waves? I have no idea. The point I am trying to get at is that when you move your body, each of those cells, molecules, atoms, etc. moves too. This can be thought of as the ideal animation system, where we have objects made up of hundreds of thousands of points and each of them is manipulated. Can you imagine the sheer MAGNITUDE of the amount of data required to do anything of that scale? UGH. This is why only the vertices are used, to save data.

Save data
If you were to use morph targets as well, the amount of data used is even less because you are now restricting the degrees of movement to a discrete set of movements as opposed to allowing the object to move all over the place.



Sources

Gregory, Jason. Game Engine Architecture. Boca Raton: CRC Press, 2009.

Soriano, Marc. "Skeletal Animation". Bourns College of Engineering. Retrieved November 2013.

Owen, Scott (March 1999). "A Practical Approach to Motion Capture: Acclaim's Optical Motion Capture System: Skeletal Animation". SIGGRAPH. Retrieved November 2013.

http://www.youtube.com/watch?v=K2rwxs1gH9w

http://files.coloribus.com/files/adsarchive/part_1400/14009805/biovision-blob-fish-600-69817.jpg

http://upload.wikimedia.org/wikipedia/commons/a/ab/Skeletal_animation_using_Blender_software.png

http://wiki.blender.org/uploads/thumb/e/e6/RST.65.png/600px-RST.65.png

http://images2.fanpop.com/image/photos/9400000/Lt-Commander-Data-star-trek-the-next-generation-9406565-1694-2560.jpg

https://www.youtube.com/watch?v=iRZ2Sh5-XuM

Saturday 30 November 2013

PvP Fight



Portal 2 is a puzzle/adventure game which revolves around the concept of creating gateways, or "portals" if you will, that allow the player to enter one portal and exit out the other. Aside from the main story line, the game allows players to construct their own puzzles or "test chambers" which they can play and publish via the Steam Workshop, which allows other Steam customers to play their level should they also own Portal 2.



Recently, I was able to play the level "gel training" by TinMan. The level opened up using the elevator, which is very interesting in that I do not know how to do that. This feature adds to the level in that it gives the level an immersive feel, or at least makes you feel that the level is a part of the main game. Once you reach the actual level you are faced with two corridors, one with a laser grid at its end and the other with turrets lined up along it. You are also provided with a propulsion gel dispenser. I don't want to spoil too much of the level but in a nutshell you use the propulsion gel to get through the laser grid, which gives you access to a button which allows you to use the propulsion gel to get past the turrets and reach the exit.

The problem with the level was that it used the concept of "use A against B to get to C" in the exact same way both times. You can use this concept again, but one MUST introduce new concepts or obstacles so that even though the player uses item A against obstacle B to get to C, they are forced to use A in different ways than initially used because obstacle B requires a different approach. The level "gel training" did not do this. Now I know what you are going to say, "the laser grid and turrets are not the same thing", but they both use time. In the level, the laser grid is timed so that when you push a button it remains off for a few seconds, and the turrets do not fire at you for a few seconds when you appear in front of them. Hence, you approach both the turret and laser grid problems in the exact same way.

Another problem with the level was that it did not use its space effectively, it had a high ceiling but the player did not need much of it to reach the exit. This ties into the previous paragraph in that effective use of the level space can provide new methods of dealing with the same obstacle even though the player is using the same concept of item A against B to get to C.

Overall, the level was very simple and straightforward, which can, in some instances, be fun. But that was not the case with "gel training"; the level was so simple that I did not feel satisfied when I completed it. It used the concept of "use item A to get past obstacle B to reach point C", which is perfectly fine when used as a building block to create more complex puzzles, but when used by itself again and again in quick succession it becomes very old very fast, and that is what happened in the case of "gel training". I believe that if TinMan takes these observations into account his future levels will become much more interesting.


sources
http://www.blogcdn.com/www.joystiq.com/media/2010/08/portal2logobkgrnd.jpg

http://cloud-2.steampowered.com/ugc/685967544745286080/E0F6AE492D09C2855ED1156377C882ECBD338DCD/637x358.resizedimage


Saturday 23 November 2013

Touch It Again I think It Moved!

Boss: Bob can you animate a character for me?

Bob: Sure boss. 

Boss: Nice! There's one catch though. The client wants it done "traditionally", that means moving the pixels around to the next pose. 

Bob: OoooK? what character do you want me to animate? 

Boss: This one: 

Bob: WHAT!! I can't do that! That guy has a TON of pixels to him! 

Boss: Well joke's on you cause pixels don't weigh anything, and you have two weeks. Have fun!

I hope you liked my little script there, but they really did use to animate video games by moving around the pixels so that they formed each sequence of a single movement, like a walk or an attack.
These separate poses would then be placed in a row where they could be loaded one after the other into the game at a fast enough rate that it gave the illusion of movement. This form of animation was known as sprite animation.

it looked something like this
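And here is a toy version of the frame-cycling idea in code. The "frames" are just labels standing in for the actual sprite images, and the frame rates are made-up numbers; the point is only that the game swaps which pre-drawn picture it shows as time passes.

```cpp
// Minimal sketch of sprite animation: cycle through pre-drawn frames at a fixed rate.
#include <cstdio>
#include <string>
#include <vector>

int main() {
    std::vector<std::string> walkCycle = {"walk_0", "walk_1", "walk_2", "walk_3"};
    const double frameTime = 1.0 / 10.0;  // show 10 sprite frames per second
    double clock = 0.0;

    for (int tick = 0; tick < 8; ++tick) {
        clock += 1.0 / 30.0;  // pretend the game itself runs at 30 fps
        int frame = static_cast<int>(clock / frameTime) % static_cast<int>(walkCycle.size());
        std::printf("tick %d -> draw %s\n", tick, walkCycle[frame].c_str());
    }
    return 0;
}
```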
 Though that is not how things would stay. Eventually developers began to make 3D games, and no I don't mean 

I mean THIS

LOVE this game!!
and this

One of the best multiplayer experiences I ever had.
But with new graphics come new ways of doing them, such as rigid hierarchy animation. This involved creating the models (3D objects in the game) as a set of pieces. These pieces were modeled after the parts of the body that typically would not bend, like the forearm or the thigh. The pieces were linked to each other in a hierarchical order that resembled the way the model would be put together in reality, such as right hand > forearm > upper arm > torso. This allowed the models to be moved freely with their limbs following like they should. But this method had a problem due to the models being made up of rigid parts. As the models were put into various poses they would show "cracks" around their joint areas, "cracks" being areas where the pieces would separate, showing some spacing between the pieces. You can typically see it in really old 3D games such as Virtua Fighter.
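To show the "pieces linked in a hierarchy" idea, here is a toy 2D sketch where each piece stores a rotation relative to its parent, so bending the "elbow" automatically swings everything further down the chain. It assumes one straight chain and angles instead of real matrices; actual engines use branching hierarchies of 4x4 transforms.

```cpp
// Toy rigid-hierarchy sketch in 2D: each piece is placed relative to its parent,
// so a parent's rotation carries all of its children along with it.
#include <cmath>
#include <cstdio>
#include <vector>

struct Piece {
    float angle;   // rotation relative to the parent piece, in radians
    float length;  // distance from the parent joint to this piece's joint
};

int main() {
    // torso -> upper arm -> forearm, with the "elbow" bent 90 degrees
    std::vector<Piece> chain = {{0.0f, 1.0f}, {0.0f, 1.0f}, {1.5708f, 1.0f}};

    float x = 0, y = 0, totalAngle = 0;
    for (const Piece& p : chain) {   // walk root-first, accumulating the transforms
        totalAngle += p.angle;
        x += p.length * std::cos(totalAngle);
        y += p.length * std::sin(totalAngle);
        std::printf("joint at (%.2f, %.2f)\n", x, y);
    }
    return 0;
}
```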


Though this is not so common now since we use different methods than rigid hierarchy. One such method was per-vertex animation, which involved moving each of the vertices of a model, vertices being the points on a model that make up the polygons. The motions of these vertices were then exported for use in the game. But calculating the motion of every vertex is very calculation heavy, which is not good for the processor and can slow down the game. This is why this method was not used very often for games.

An alternative method is morph target animation, which is similar to per-vertex animation in that it still involves the vertices, but it uses predetermined sets of positions, so all that's needed to display the movements of a model is to linearly interpolate between the set positions, which is much less calculation heavy. This method is usually used for facial expressions, as it allows the artist to create the various expressions which the model can then be morphed to as needed. But moving the vertices themselves to make each pose for a full-body animation can be quite troublesome, so another method is used.
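That "linearly interpolate between set positions" step is really all morph targets boil down to per vertex. Here is a minimal sketch of it; the poses and names are made up.

```cpp
// Minimal morph-target sketch: a vertex position is a linear interpolation
// between a "neutral" pose and a target pose.
#include <cstdio>

struct Vec3 { float x, y, z; };

// weight = 0 gives the neutral face, weight = 1 gives the full smile.
Vec3 morph(Vec3 neutral, Vec3 smile, float weight) {
    return { neutral.x + weight * (smile.x - neutral.x),
             neutral.y + weight * (smile.y - neutral.y),
             neutral.z + weight * (smile.z - neutral.z) };
}

int main() {
    Vec3 cornerOfMouth = morph({1.0f, 0.0f, 0.0f}, {1.1f, 0.3f, 0.0f}, 0.5f);
    std::printf("half smile: (%.2f, %.2f, %.2f)\n",
                cornerOfMouth.x, cornerOfMouth.y, cornerOfMouth.z);
    return 0;
}
```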

This method called "skinned animation" uses a skeleton to manipulate the model. By attaching the model or "skin" to the skeleton the model moves as the skeleton does. The vertices of the "skin" keep tabs on the movements of the skeleton via its "joints", the areas of the skeleton between the bones that allow it to bend. You know, joints! 

That's what your knee looks like on the inside!
This method also makes use of what's called a weight map, which is essentially a texture that holds the amount of influence that each joint has on the vertices of the model. When applied, this method allows the model to move more naturally. This produces aesthetically pleasing results without being computation heavy, which has made it the most used method of animation today.



Sources

Gregory, Jason. Game Engine Architecture. Boca Raton: CRC Press, 2009.







Thursday 21 November 2013

Lights! And Effects That Have To Do With Lights! Part 2

Last time I talked about some general techniques regarding light, such as the shadow volumes technique, which uses rays emanating from a light source to calculate areas of shadow within a scene; this, in combination with the stencil buffer, frame buffer, and depth buffer, generates the shadows within the scene when rendered.

I also went over HDR lighting, or High Dynamic Range lighting, which essentially calculates the light intensity of a scene without limiting the intensity range and saves it in a format that allows these kinds of shenanigans. Then, before the scene is rendered, the light is tone mapped, meaning that the intensities are scaled down to within the range which the monitor or TV being used is capable of. This technique is essentially how you make bloom!

No, not that kind of bloom!!

There! That's the stuff!!

This time around I'll be going over a class of lighting called image-based lighting. But why is it called image-based lighting, you ask? That's because all the techniques essentially use pictures to add extra detail to an object or scene.

One problem with highly detailed models is that their high number of polygons can take up an awful lot of processing power when it comes time for the scene to be rendered, and that's because the more polygons an object has, the more calculations are needed for that object. This can really slow down the frame rate (the number of frames shown in a certain amount of time), which can turn your video game into a slideshow. One way to keep the frame rate nice and high is to make use of low polygon models, but of course the problem with that is that they look...uhh

So how does one use the low polygon model while maintaining the detail of the high polygon model? With the use of normal maps of course! But what is a normal? A normal is a vector that is perpendicular to a surface or to another vector (or vectors). These normals are used in calculating how light will react to a given polygon. So one can use the normals of a high polygon model (which has many more normals than the low polygon model) to generate a normal map, which is essentially a texture that encodes those normals in its colour channels. Applying this texture to the low poly model essentially gives the impression that the low poly model has many normals, which can make a flat surface look curved.
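To see why swapping in a different normal per pixel changes anything, here is a tiny sketch of the standard Lambert diffuse term, brightness = max(0, N dot L). The vectors are made-up sample values: the same light hitting a flat normal versus a normal-map-style tilted normal gives different brightness, which is exactly the faked detail.

```cpp
// Sketch of why normals matter to lighting: the Lambert diffuse term.
// A normal map simply supplies a different N per pixel than the flat surface would.
#include <algorithm>
#include <cstdio>

struct Vec3 { float x, y, z; };

float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

float lambert(Vec3 normal, Vec3 toLight) {
    return std::max(0.0f, dot(normal, toLight));  // facing away from the light gives 0
}

int main() {
    Vec3 toLight      = {0.0f, 0.0f, 1.0f};    // light shining straight at the surface
    Vec3 flatNormal   = {0.0f, 0.0f, 1.0f};    // what the low-poly surface actually has
    Vec3 mappedNormal = {0.38f, 0.0f, 0.92f};  // what the normal map says (tilted)
    std::printf("flat:   %.2f\n", lambert(flatNormal, toLight));
    std::printf("mapped: %.2f\n", lambert(mappedNormal, toLight));
    return 0;
}
```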


Another really useful technique is called specular mapping. This technique essentially makes an object or scene look reflective or glossy. It does this by applying the mathematical equation:

specular = Ks × (R · V)^alpha

Ks = how reflective the object or scene is
R = direction of the reflected light vector
V = direction of the observer
alpha = the power of the specular reflection

With the use of this equation and just three lines of code you can make your object or scene as shiny as you want. But of course, all things in moderation.
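Here is a sketch of what those "three lines" might look like, assuming the Phong-style specular formula above; the sample vectors are invented just to have something to plug in.

```cpp
// The specular term from the equation above: Ks * (R dot V)^alpha.
#include <algorithm>
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };
float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

float specular(float Ks, Vec3 R, Vec3 V, float alpha) {
    float rdotv = std::max(0.0f, dot(R, V));  // ignore light reflected away from us
    return Ks * std::pow(rdotv, alpha);       // higher alpha = tighter, smaller highlight
}

int main() {
    Vec3 R = {0.0f, 0.7071f, 0.7071f};  // direction of the reflected light
    Vec3 V = {0.0f, 0.0f, 1.0f};        // direction toward the observer
    std::printf("highlight strength: %.3f\n", specular(0.8f, R, V, 32.0f));
    return 0;
}
```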


Though things in reality are not usually all covered in shininess.

usually
So, in order to control which parts of an object are specular, one uses a specular map, which stores the value of Ks for each texel (texture pixel) of a texture map. With this, one can create localized reflective surfaces and mimic streaks of sweat or blood.

In the end these techniques help to create a more immersive and interesting world which we would rather live in than reality, because let's face it, sometimes games are just too beautiful.





sources

Gregory, Jason. Game Engine Architecture. Boca Raton: CRC Press, 2009.




Tuesday 19 November 2013

Lights! And effects that have to do with light!

Lights! Along with cameras, lights are integral to video games. You can have all the cameras you want, but if you don't have at least one light source you won't see anything but darkness, which is not what you want (unless your game is called "You See Nothing but Darkness", in which case, it's perfect!). Though allowing the player to see is not light's only use; lights can be used, and often are, to establish mood. Anything from wild happiness to creeping terror, lights can make it happen.

candles make everything creepier 

One technique aimed at providing more realistic, or at the very least interesting, light is HDR or High Dynamic Range lighting. With HDR you can take the range of light beyond the 0 to 1 range, which was (more or less) the limit. This allows for much higher contrast between light and darkness, creating effects such as that blurry bright light you see when you stay in a dark area and look out into a very bright area.


It creates this effect by calculating the lighting without forcing the values to stay within the 0 to 1 range and then storing them in a file format that allows this. Then, before the frame is displayed on screen, the lighting undergoes a process called "tone mapping", which scales the intensity of the light to within the ability of the TV or monitor or other image viewing machine.


(Ooh just look at those lighting effects! The latest in technology!)
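To make that tone mapping step concrete, here is one common curve, the Reinhard operator, which squashes unbounded HDR intensities down into the 0 to 1 range. I'm not claiming any particular game uses exactly this curve; it's just one widely known option.

```cpp
// One common tone-mapping curve (the Reinhard operator): map [0, infinity)
// HDR intensities into [0, 1) so the display can show them.
#include <cstdio>

float reinhard(float hdr) { return hdr / (1.0f + hdr); }

int main() {
    float samples[] = {0.05f, 0.5f, 1.0f, 4.0f, 50.0f};  // scene light intensities
    for (float s : samples)
        std::printf("HDR %6.2f -> display %.3f\n", s, reinhard(s));
    return 0;
}
```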

Another technique is called Global Illumination. Well, Global Illumination can be thought of more as a class, incorporating a few techniques to provide the desired effect. But what is global illumination about? Well, it involves techniques that take into account how light affects two or more objects and how the residual light from those objects affects the other objects in the scene, all this in relation to the camera.

Global illumination includes quite a few techniques, the most prevalent ones being those that produce shadows, because you can't have light without shadows, it would be weird. One technique used to produce shadows is called "shadow volumes". This technique involves calculating the rays of light emanating from the light source, from the light source's perspective, to each of the objects in the scene, especially their edges (this can be done by shooting rays from the light source through the vertices of the object). The resulting volume(s) provide the shadowed areas. To use these shadows effectively in games, a stencil buffer is used, which records a small counter value for each pixel in the scene depending on whether or not it is lit. First the scene is rendered without shadows into the frame buffer, coupled with a depth buffer. The stencil buffer, set to all zeros, is then filled by rendering the shadow volumes from the camera's perspective, with each value of the stencil buffer changing: volume faces toward the camera increase the value, faces away from the camera decrease it, and pixels untouched by any volume are left at zero. Then, on a third rendering pass, all the components of the scene are combined, and the shadows are generated by darkening the areas where the stencil buffer values ended up nonzero.
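Here is a toy sketch of just that final combining step, assuming the stencil counts have already been filled in by the volume-rendering passes (which are the hard part and are left out here).

```cpp
// Toy final pass of shadow volumes: darken every pixel whose stencil count is
// nonzero (i.e. the pixel sits inside at least one shadow volume).
#include <cstdio>

int main() {
    float lit[4]     = {0.9f, 0.8f, 0.7f, 0.6f};  // scene rendered fully lit
    int   stencil[4] = {0,    2,    0,    1};     // counts from the volume passes
    const float shadowDarkening = 0.3f;

    for (int i = 0; i < 4; ++i) {
        float out = (stencil[i] != 0) ? lit[i] * shadowDarkening : lit[i];
        std::printf("pixel %d: %.2f\n", i, out);
    }
    return 0;
}
```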

This is but one of the many techniques included in global illumination, which is but one of the types of lighting possible within games. But all in all, one can see the vast possibilities given to developers to produce unique and interesting lighting within games, giving us that extra bit of detail that helps to immerse us in the game.


sources

Gregory, Jason. Game Engine Architecture. Boca Raton: CRC Press, 2009.

http://www.oxmonline.com/files/u10/ds3a_screen_3.jpg

http://www.laurenscorijn.com/wp-content/uploads/2009/08/Farcryhdr.jpg

http://www.youtube.com/watch?v=dKEM5sYnOjE

http://upload.wikimedia.org/wikipedia/en/0/07/Doom3shadows.jpg

http://www.gamerzines.com/wp-content/uploads/2012/09/Moon.jpg

Sunday 17 November 2013

Scripts!

So what is a scripting language? And how is it different from a programming language? Do NOT try to find a definitive answer to the second question on a forum! Especially if you're not really a programmer. I tried, it was like a firestorm of confusion in there. Basically, what you need to know is that a scripting language IS a programming language, one that allows the user to customize the way a specific software application (a game) behaves. It is a high-level language (meaning that it is closer to human language than to computer language) which is easy for both programmers and non-programmers to use and allows them access to most of the commonly used functions of the engine. Scripting allows these users to do anything from modding a current game to making a completely new game. Though do not think that scripting languages simply come in one flavour; there are several different kinds of scripting languages that may have different functions or may focus on different characteristics.

One such distinction between languages is whether the language is interpreted or compiled. Compiled languages use a program called a compiler to transform or translate the code into machine code which can then be used by the CPU (though scripting languages are not compiled but instead interpreted, as you will read later). Interpreted languages, on the other hand, can be parsed directly at runtime or can be precompiled into byte code which is then processed by a virtual machine, which is essentially a CPU but not really... it's like it exists but doesn't REALLY exist... it's like a holographic CPU, OK? Regardless, virtual machines can be put on almost any machine, but the downside is that they are slower than a real REAL CPU...sigh.
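To make the "byte code fed to a not-really-a-CPU" idea less holographic, here is a deliberately tiny, made-up byte code plus the loop that interprets it. Real scripting VMs (Lua's, for instance) are far more involved, but the core shape is the same: a loop reading instructions and acting on them.

```cpp
// A tiny invented byte code and its interpreter loop (a "virtual machine" in miniature).
#include <cstdio>
#include <vector>

enum Op { PUSH, ADD, PRINT, HALT };

void run(const std::vector<int>& code) {
    std::vector<int> stack;
    for (size_t pc = 0; pc < code.size(); ++pc) {
        switch (code[pc]) {
            case PUSH: stack.push_back(code[++pc]); break;   // next value goes on the stack
            case ADD: {
                int b = stack.back(); stack.pop_back();
                int a = stack.back(); stack.pop_back();
                stack.push_back(a + b);
                break;
            }
            case PRINT: std::printf("%d\n", stack.back()); break;
            case HALT: return;
        }
    }
}

int main() {
    // "print 2 + 3" expressed as byte code
    run({PUSH, 2, PUSH, 3, ADD, PRINT, HALT});
    return 0;
}
```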

Moving on we come to functional languages, which distinguish themselves from the other kinds of languages in that programs are defined by a collection of functions. They take in input data and run it through the functions one after the other until the desired output occurs.

Then there are procedural and object-oriented languages. Where object-oriented languages focus on classes (structures that use functions to manage data), procedural languages focus on the functions themselves to manage data, as opposed to using classes.

These languages can also differ in whether they use a text-based language such as Lua or a graphics-based language like Unreal's Kismet. I am very interested in Kismet, mainly because of the number of features focused on improving the user's workflow, such as allowing the user to make changes to the script and immediately see the results while the game is running.


This next video provides a more in-depth look at what you can do with Kismet and what the Unreal Engine 4 is capable of (sadly, I don't think our school laptops are capable of providing that level of graphical fidelity without exploding).



Though regardless of the "flavour" of language you prefer, there are certain characteristics that all scripting languages share. For one, all scripting languages are interpreted as opposed to being compiled, because this keeps them highly flexible and easily transferable; as mentioned before, they can be converted into byte code which can easily be put into memory, instead of going through the operating system the way compiled code does. Virtual machines also grant scripts a large amount of flexibility regarding when code is run because, as mentioned before, they are CPUs... but not really.

Scripting languages also tend to be rather simple and use very little memory because they have to be able to be put inside or embedded into a preexisting system. They are like a reverse tumour where you put them in as opposed to taking them out, and they are good for the system as opposed to bad, which is good...I think.

Another characteristic of scripting languages is that any changes you make to the code do not require you to exit the game (if it is running), make your changes, and then recompile the entire thing as with compiled languages. Though some scripting languages may require you to exit the game, they do not require recompilation, and some languages allow the scripts to be changed while the game is running, like Unreal's Kismet. Regardless of whether or not the language requires you to exit the game, they all allow users to make changes to the code a lot faster than compiled languages.

Scripting languages are also easy to use and convenient, which is reflected by their simplicity. This is because scripting languages are used by programmers and non-programmers alike, specifically designers. So the scripting language must provide features specific to the game the designer is creating, such as the ability to pause and manipulate time, and to find game objects by name. This allows the designer, and any other person using it, to easily understand the language and make the necessary changes.

Scripts, as mentioned by Isla, act as a sort of "glue" or mortar between the components of code, creating a sort of stone wall where the stones themselves (the components) aim to be reusable for other walls (games), while the scripts are specific to that wall.

But when it comes to writing a scripting language, one must remember the characteristics of scripting languages mentioned above: they must be flexible, easy to use, and must not require compilation. Of course, writing one's own scripting language is not recommended, especially if you have other important things to do. I hope the lead programmer of my game dev group does not read this cause I'm sure he would be all:



In which case I would promptly find a bridge to jump off of, preferably one with deep enough waters underneath that I could easily fake my own death and move to some far off country under the name of Pablo Escobar.


(*facepalm* I googled that name and found out he was a big Colombian drug lord. Yup! Perfectly inconspicuous! That name won't get you selected for a "random" search by airport security at all!)

Despite all the differences mentioned before, scripting languages are used to give more power to the designers and other non-programmers, which is where scripts can do the most good, because they allow the designer or artist or whomever to make easy, quick changes to the game so as to better bring out their vision of what the game should be.

sources

http://www.youtube.com/watch?v=IReehyN6iCc

http://www.youtube.com/watch?v=VitLyrynBgU

Gregory, Jason. Game Engine Architecture. Boca Raton: CRC Press, 2009.

Lewis, Mike."Flirting with The Dark Side: Domain Specific Languages and AI". File last modified 14 Nov.2013. Microsoft PowerPoint file.

Isla, Damian. "Scripting and AI: Flirting with The Dark Side". File last modified 16 Nov. 2013. Microsoft PowerPoint file.

http://images.wikia.com/animalcrossing/images/0/08/Challenge_accepted.png

http://collider.com/wp-content/uploads/pablo-escobar.jpg

Thursday 14 November 2013

AI In Games

Ever since the advent of video games there has existed the concept of automating behaviour, where NPCs (enemies and allies) do things of their own "will". Of course we all know NPCs in video games do not have their own wills (or do they?) but instead enact certain actions depending on a predefined set of parameters.
"GOOOOMBAA! GOOOOMBAAA!"
A good example is the goomba in Super Mario Bros. Within the game the goomba's behaviour is rather simple: after it spawns or loads into the game it acts in a specific way depending on preexisting parameters, most likely being the following (a small sketch of which appears after this list):
- move in a given direction (left or right) until the goomba collides with a wall (or pipe), then reverse movement direction
- if it collides with the player, deal damage, UNLESS the player collides with the goomba from the top, A.K.A. the player jumps on the goomba, in which case DIE (stop movement, play the death animation, and remove the goomba from the game).
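Here is a rough sketch of that goomba "AI" as a per-frame update. The field and function names are invented for illustration; this is obviously not Nintendo's actual code.

```cpp
// Toy goomba behaviour: walk, bounce off walls, die when stomped.
#include <cstdio>

struct Goomba {
    float x = 0.0f;
    int   direction = -1;   // -1 = walking left, +1 = walking right
    bool  alive = true;

    void update(bool hitWall, bool stompedByPlayer) {
        if (!alive) return;
        if (stompedByPlayer) {
            alive = false;                        // play death animation, then remove
            return;
        }
        if (hitWall) direction = -direction;      // bounce off walls and pipes
        x += 0.5f * direction;                    // keep shuffling along
    }
};

int main() {
    Goomba g;
    g.update(false, false);
    g.update(true, false);   // hits a pipe, turns around
    g.update(false, true);   // Mario jumps on it
    std::printf("x=%.1f alive=%d\n", g.x, g.alive);
    return 0;
}
```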

But as time went on and games became more and more sophisticated, they required the NPCs to perform more complex behaviours, such as flanking in an FPS or getting offended by a player's response in an RPG. Eventually these behaviours got around to being called artificial intelligence, even though they are still based on predefined parameters.

Though there are still problems that arise with the increase in complexity of NPC AI, such as:


OR


The field of AI is constantly improving and expanding, striving to make NPCs more dynamic in performing complex and simple actions, such as navigating a level (when I say "simple" I mean in terms of the action itself, not the design and implementation). An old method of AI navigation through a level was to create a path, or a set of points or nodes, in a level which the NPC would follow like bread crumbs; with the nodes, the NPC would either already have the location of the next node or would detect it. This created very rigid AI in that the NPCs were limited in their behaviour to a certain path. But don't get me wrong, this does not mean that this method is bad. In fact, it can be quite useful for sequence-heavy games, in other words games that progress in a very controlled manner, such as the Sherlock Holmes games, where the player cannot really influence the environment outside what the game allows. *Facepalm* Ugh, I can't believe I just wrote that. I was going to erase that sentence but I decided to leave it in, let the world bear witness to my shame! Of course the player can't influence the environment outside what the game allows! That would be cheating! What I meant to say was that the player cannot do much other than what the game wants them to do.
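A toy version of that bread-crumb idea looks like this: the NPC always heads toward the next node in a fixed list, which is exactly why the behaviour comes out so rigid. All positions and numbers are made up.

```cpp
// Toy waypoint ("bread crumb") follower: move toward the next node, then the next.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

struct Vec2 { float x, y; };

int main() {
    std::vector<Vec2> path = {{0, 0}, {5, 0}, {5, 5}};  // the designer's bread crumbs
    Vec2 npc = {0, 0};
    size_t target = 1;
    const float speed = 1.0f;

    while (target < path.size()) {
        float dx = path[target].x - npc.x, dy = path[target].y - npc.y;
        float dist = std::sqrt(dx * dx + dy * dy);
        if (dist < 0.01f) { ++target; continue; }   // reached the node, aim for the next
        float step = std::min(speed, dist);
        npc.x += step * dx / dist;
        npc.y += step * dy / dist;
        std::printf("npc at (%.1f, %.1f)\n", npc.x, npc.y);
    }
    return 0;
}
```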



But what about games such as Resistance 2, where players are expected to move freely within a level? These games use something called a nav mesh, which is, essentially, a very low poly version of the level. This mesh, which is invisible to the player, is laid on top of the actual level and is used by the NPCs to navigate the level. Each polygon of the mesh acts like a node which the NPCs can follow. But instead of being limited to one path of nodes as in the previous method, the NPCs are capable of using most if not all of the polygons on the mesh. These polygons can be grouped together, sometimes denoted by different colours, to signify different features of the level such as elevated surfaces. These groups can be used to trigger behaviours such as climbing up onto a platform.
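Since each polygon acts like a node, pathfinding on a nav mesh boils down to searching a graph of which polygons touch which. Here is a toy sketch using a plain breadth-first search over an invented adjacency list; real engines typically use A* with distances, but the graph-walking idea is the same.

```cpp
// Toy nav-mesh pathfinding: treat polygons as nodes and search the adjacency graph.
#include <cstdio>
#include <queue>
#include <vector>

int main() {
    // Which polygons share an edge with which (indices into a made-up mesh).
    std::vector<std::vector<int>> adjacency = {
        {1},        // polygon 0 touches 1
        {0, 2, 3},  // polygon 1 touches 0, 2, 3
        {1},        // polygon 2 touches 1
        {1, 4},     // polygon 3 touches 1, 4
        {3}         // polygon 4 touches 3
    };
    int start = 0, goal = 4;

    std::vector<int> cameFrom(adjacency.size(), -1);   // breadth-first search
    std::queue<int> open;
    open.push(start);
    cameFrom[start] = start;
    while (!open.empty()) {
        int poly = open.front(); open.pop();
        for (int next : adjacency[poly])
            if (cameFrom[next] == -1) { cameFrom[next] = poly; open.push(next); }
    }

    std::printf("path (goal back to start):");         // walk back from the goal
    for (int p = goal; p != start; p = cameFrom[p]) std::printf(" %d", p);
    std::printf(" %d\n", start);
    return 0;
}
```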



All in all, game AI helps to provide players with games where they can interact freely with their environment, immersing them deeper into the game world as they walk with, talk with, and sometimes horribly murder NPCs.

sources
http://www.gdcvault.com/play/1014700/Finding-Your-Way-with-Havok

http://www.insomniacgames.com/navigation-gdc11/

http://images3.wikia.nocookie.net/__cb20121207200146/nintendo/en/images/2/2a/New_Super_Mario_Bros_Wii_Goomba.jpeg

http://www.youtube.com/watch?v=sYmSi9JWFgw

http://www.youtube.com/watch?v=MI8NxYb8A0M

http://www.youtube.com/watch?v=ns4yPqrlOdk

http://www.youtube.com/watch?v=4KtklVO84no

Tuesday 12 November 2013

Seas of Sand


Journey is an adventure game that was released on the PlayStation Network on March 19th, 2012. Since the time of its release it has received international acclaim and, personally, I think it can be used as very strong evidence for the argument that games are art or at least can achieve the level of art.

Recently I was finally able to play a bit of the game and found that it provided a compelling and awe-inspiring experience. Right from the beginning the story hooked me with its mystical and mysterious elements. The questions of who you are, what you are, and what happened to the mysterious civilization that now lies in ruin, nearly buried by the oceans of sand, drove me further into the world of the game.

Though a good part of what made the game so compelling was the amazing visuals that the game provided through its unique art design. But the most beautiful part of the game was not the flowing scarf or the landscape/architecture, it was the sand. It was how it flowed, how it billowed and waved. It looked so natural yet at the same time seemed magical in how it glittered and moved. When I saw it I wondered how they did it and came to the conclusion that they probably used some height map to get that wavy effect, then used an assortment of shaders to get the glitter effect of the sand. But how did they really do it? Thanks to the miracle of the internet I was able to find a PowerPoint presentation that details how they did it. To be honest my answer was mostly correct, but then again my answer was so broad I can't see how it could not be at least partially right. It's like asking someone "how did people achieve flight?" and that person answering "with a flying machine".

So, how did they create those mystifying sands? First, they used a detailed mipmap to bring out and maintain the grainy look of the sand at varying distances from the camera. This is different from using a normal mipmap because in a normal mipmap the details of the texture get smaller or less defined as you downsample; that did not create the desired effect, so in order to fix it they doubled the normals when they downsampled, which maintained the graininess of the sand textures.

Next they wanted to simulate the sparkling effect of sand as light is reflected off those tiny glass pebbles into your eyes. Initially they tried using specular reflection but found that the glitter points seemed too spread out; it was like being "clubbed in the head" as John Edwards put it. To rectify this they instead used an "ambient specular" in relation to the "normal dot view", which created less spread. But there was a problem with this: in some areas of the level the specular shader goes a little crazy. This is where the anisotropic filtering the game uses fails, and removing the filtering altogether is not possible since it made the world look like it was covered in dollar store glitter. They fixed this by covering up wherever the filtering failed with a texture that provided predictable values given perfect anisotropic filtering. Then, when they added in the effects from the other shaders, the problem areas were displayed in black and were covered, therefore fixing their filtering problem.



But this left their glittery sand not as glittery. So how did they fix this? By adding MORE specular! They added in a column of specular reflection, an effect typically seen on large bodies of water but never on sand. But you know what? It looked good, so who cares.

ocean of sand
They also used a modified diffuse shader, since they found that a Lambert shader was boring. Essentially they used a hacky method that increased the contrast between the light side and the dark side of objects, which is the prime characteristic of another method they wanted to use, called the Oren-Nayar method, but that was too expensive in terms of calculations.

you can really see that contrast in the dune to the right
Lastly, they used a detailed height map. The main dunes in Journey were created using a low resolution height map, but for those smaller waves the dev team used four "tiles" of height maps which they lerped between, depending on the values of each vertex's x or z position.
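To show what "lerping between tiles" means at its simplest, here is a very loose sketch of blending two height samples based on a vertex's position. This is a guess at the general idea with made-up numbers, not Journey's actual code, which blends between four tiles.

```cpp
// Loose sketch of blending height-map "tiles": a point's height is interpolated
// between tile samples based on where the point sits in the blend region.
#include <cstdio>

float lerp(float a, float b, float t) { return a + t * (b - a); }

int main() {
    // Heights sampled from two neighbouring detail tiles at the same spot.
    float tileA = 0.20f, tileB = 0.65f;
    for (float x = 0.0f; x <= 1.0f; x += 0.25f)   // x position across the blend region
        std::printf("x=%.2f height=%.3f\n", x, lerp(tileA, tileB, x));
    return 0;
}
```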


Overall, the dev team over at That Game Company did a spectacular job in creating the amazing sands that mesmerized all who played their great game.  

sources 
Edwards, John. "Sand Rendering In Journey". Date Last Modified 12 Nov. 2013. Microsoft PowerPoint file.

Sunday 10 November 2013

God of Cameras

They are by far one of the most important elements of a game because without them, you wouldn't be able to see anything! Yet cameras seem to be one of the most overlooked elements of games from the player's standpoint when done right, and that's the point. Cameras provide (literally) a view into the world of the game and the lives of the characters within it by providing a dynamic, fluid experience that, with the help of other game elements, draws the player into the game. When done right, the players should forget that the cameras even exist, but when something goes wrong this illusion is broken, which usually results in disgruntled players and badly made "let's play" YouTube videos.

A great example of a game that makes good use of cameras is God of War 3, in that it provides an exciting, empowering experience that often leaves the player in awe. But what do cameras have to do with beating the snot out of monsters? The answer? EVERYTHING! God of War 3 strives to provide a cinematic experience, so everything from those epic boss fights to those amazing landscape shots, which are characteristic of the God of War games, would be lessened in greatness if not for the heavy scripting of the cameras.


But how did the development team do it? Well, it first starts with the level layout coming from the level designer and ending up in the hands of the camera designers. These are the guys who place the cameras within the levels, which give you those changing views within the game. Once the camera designers have finished placing the cameras within the scene, they have it play-tested to make sure the cameras provide an enjoyable experience. They then hand the level off to the art department, where the level undergoes a super amazing makeover, and it is sent back to the camera designers for final adjustment of the cameras. The adjustments are needed because the level is changed slightly by the cosmetic surgery done by the art department. The camera designers also have to take into account and show off those amazing environment shots that we all know and love (and which the art department slaved over to make). Then, when the game runs, the code takes in the camera positions in the level, takes the view from the current camera, and smoothly interpolates it to the next camera position, which results in those smooth changes between camera view angles we see in game.
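Here is a minimal sketch of that "smoothly interpolate to the next camera" step: every frame, ease the active view a fraction of the way toward the target camera. The positions, the blend fraction, and the names are all made up; it's the gliding idea, not God of War 3's actual camera code.

```cpp
// Minimal sketch of blending from one placed camera toward the next.
#include <cstdio>

struct Vec3 { float x, y, z; };

Vec3 moveToward(Vec3 current, Vec3 target, float t) {
    return { current.x + t * (target.x - current.x),
             current.y + t * (target.y - current.y),
             current.z + t * (target.z - current.z) };
}

int main() {
    Vec3 view    = {0.0f, 2.0f, -5.0f};   // where the active view currently sits
    Vec3 nextCam = {4.0f, 3.0f, -2.0f};   // the next camera placed by the designers
    for (int frame = 0; frame < 5; ++frame) {
        view = moveToward(view, nextCam, 0.25f);  // glide a quarter of the way each frame
        std::printf("frame %d: (%.2f, %.2f, %.2f)\n", frame, view.x, view.y, view.z);
    }
    return 0;
}
```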




In closing, God of War 3 relies heavily on camera scripting, controlling what the player sees and how they see it in game. This is crucial in providing the kind of awe-inspiring, adrenaline-pumping, and rage-inducing experience that God of War 3 gives you.



sources

Friday 18 October 2013

Grind Quest: Information Structures

Information is crucial to the creation and playing of games. With it players form their plans and strategies and make their decisions. How this information is given to the player, as well as how much of it is given, greatly affects how a player acts within the game. There are four main types of information structures: open, hidden, mixed, and dynamic, each giving a different feel to games. Traditionally, when one thinks of board games one thinks of chess or checkers. These games are a great example of open information structures, as all the information is laid bare for the players; there is nothing hidden from anyone. Games such as Guess Who and charades, which consistently keep information from the players, can be thought of as using hidden information structures, as these types of games require the players to guess, deceive, and/or bluff their way through the game. Though most games, especially video games, use some sort of mixture of open and hidden information structures, giving the players some information (such as info on their own resources) while consistently keeping the rest hidden (such as info on their opponent's resources). Dynamic information structures, on the other hand, change throughout the game, revealing information one moment then hiding it the next. A good example of this is the Age of Empires games, where a player can reveal portions of the map by moving units to that location, but when those units leave, the area is shrouded in darkness once again.


Far Cry 3 is a first person shooter that allows you to explore and traverse a sweeping landscape as you battle the bandits and hunt exotic game. This uses a mixed information structure as it reveals all the information required for the player to finish the game but keeps some information hidden from them such as secret stash locations.


Minecraft is an adventure/building game where you explore the world and gather resources to construct whatever you can imagine. Minecraft uses an open information structure as all the information the player needs to play the game is made known to them. This includes the information the player gets from wikis, as it would be a mess to put all the information regarding materials and crafts into the game, especially because of the widespread use of mods which can add new materials and crafts to the game. So having a common place that can clearly display any information the player could want and still be easily updated is perfectly justifiable. The specific locations of materials within the game world do not count as hidden information, because making something hidden is intentional, and the locations of the materials are not intentional since they are randomly generated at the creation of the world. It would be like rolling a D20, having it land on 20, and saying that you meant to do that.


Uncharted is an action/adventure game where you take control of the treasure hunter Nathan Drake on his hunt for the lost city of gold. The game uses a mixed information structure, as the information that the player makes use of changes throughout the game as they have a shootout with enemies, then climb an ancient statue, then solve a mysterious puzzle.


Madden NFL 13 is a sports game where you take control of a football team as they face off against another team, either computer controlled or player controlled. This game uses an open information structure as all the information about what is happening in the game is given to the player. This is much like chess in that, despite not knowing their opponent's strategies or future moves, both players are fully aware of the positions and movements of their opponent's pieces as well as their own.


Lemmings is a 2D puzzle game where the player must use the skills of their lemmings to create a safe path that leads the lemmings to the end of the level. This game uses an open information structure as there is nothing hidden from the player. All information regarding the skills of their lemmings is given to them and the player is able to survey the entire level so that they are able to plan their actions ahead of time.


Scrabble is a board game that requires the players to form words from the random letters they are given in order to obtain points. The game uses an open information structure, as all the letters that you have, as well as the letters your opponents have, can be made known to you, and all the words formed on the board can be seen by all players.


Mastermind is a board game that requires the player to guess a sequence of colours. Mastermind uses a hidden information structure because it consistently keeps some information (the colour sequence) hidden from the player forcing them to guess the colour sequence.


Clue is a board game that requires the player to find the murderer of a, well, murder. This uses a hidden information structure as the information regarding who committed the murder and with what weapon is hidden from the player until the end of the game.


http://www.gamingcounter.com/wp-content/uploads/2013/04/Far-Cry-3-wallpaper.jpg
http://assets1.ignimgs.com/vid/thumbnails/user/2013/09/03/minecraft.jpg
http://www.konsolekingz.com/blog/wp-content/uploads/2012/03/LOGOVAR_16.jpg
http://ps3hits.ru/wp-content/gallery/uncharted-drakes-fortune/uncharted_drakes_fortune_1080p_008.jpg
http://s3.amazonaws.com/rapgenius/filepicker/qzNpSWhQiIhcBbkPMsAv_scrabble.jpg
http://siriusbuzz.com/wp-content/uploads/2012/08/Lemmings.jpg
http://www.tnelson.demon.co.uk/mastermind/images/mastermind14.jpg
http://www.misanthropista.com/wp-content/uploads/2012/09/Clue1.jpg

Wednesday 16 October 2013

Blog Quest 2:Design Your Game Item Part 2



Amnesia: The Dark Descent is a survival horror game developed by Frictional Games and released on September 8th, 2010 for the PC. The game begins as you wake on the stone floor of a mysterious castle; you don't know where you are or, more importantly, who you are. Explore the deeper levels of the castle by solving daunting puzzles on your quest to uncover the mystery behind the dark castle in which you are trapped and, perhaps, find out who you are. All the while avoiding the horrible monsters that hunt you.

Amnesia: The Dark Descent relies heavily on mechanics regarding light, as all the items that you can collect, aside from puzzle items, relate to either providing you with light through the use of tinderboxes or your lantern, or maintaining the light that you have through the consumption of lamp oil. The game usually requires the player to navigate through darkened areas using lit objects or their lantern to aid them. Though the light you create can also hinder you, as it makes you more visible to the monsters in the area. This mechanic forces the player to choose between seeing what they are doing and monsters being able to see them, or being hidden from monsters and potentially going insane in the darkness.


Within the game there are only two ways to deal with the monsters that roam the halls of the dark castle: to run or to hide. This creates a sense of helplessness within the player, which maintains or even heightens the sense of fear and anxiety that the player feels while playing the game. Though I think the addition of another item, string, could add some new mechanics for the developers to play with and subject the players to new levels of hell and horror.

The string could be unraveled over a distance and then pulled or re-raveled again from a distant spot (like from a hiding place). Players could then pile items in the world, such as books and bottles, onto the string so that when it is pulled/re-raveled the pile topples over, creating a loud noise. This would allow developers to create situations where the player must distract the monster in order to progress. So despite this item allowing the player to take a more proactive role in their goal to avoid the monsters, it maintains the sense of fear and anxiety already prevalent in the game, as players will be forced out of their hiding spots and into the open where they could be seen by the monsters. The string can also give the player a sense of hopelessness and despair, as the pile of items on top of the string must be assembled by hand, so the player runs the risk of toppling the pile themselves, making a loud noise, and therefore drawing the monster to their location. This, combined with the mechanics surrounding the need for light, can create some truly terrifying situations.

Imagine piling a few of these on top of the string, without making a single sound, in complete darkness, just to avoid a monster a few meters away. You sweating bullets yet?


Sources
http://amnesia.wikia.com/wiki/Amnesia:_The_Dark_Descent
http://valvearg.com/w/images/9/93/Amnesia_Title.jpg
http://www.nag.co.za/wp-content/uploads/2010/11/Amnesia03.jpg
http://s284355199.websitehome.co.uk/wordpress2/wp-content/uploads/2010/09/Ryan-Amnesia.jpg