3D Modeling Fundamentals - Part 2

In part one of this series, we discussed a number of general concepts relating to the geometry used in 3D games. Now, we’ll discuss techniques used to give those 3D models detail and realistic shading.

Shaders

Shaders are the math behind the visuals. Shader programs run on the graphics processing unit (GPU) to determine the visual appearance of meshes in a scene using shading algorithms such as the ones listed below.

Shading algorithms

There are a number of algorithms for determining the appearance of shaded polygons, but the two most common are Phong and Lambert.

Phong shading is one of the most widely used methods for making rounded objects that are built from flat polygons appear smooth. Rather than lighting each polygon as a flat facet, the renderer interpolates the surface orientation across every polygon and calculates the lighting at each pixel, so the shading follows the contours of the geometry and includes the sharp specular highlight characteristic of glossy materials. The more polygons the renderer has to work with, the smoother and more predictable the result will be.

Lambert shading models purely diffuse reflection: a point's brightness depends only on the angle between the surface and the incoming light, so it is used for surfaces that do not have a glossy appearance with a sharp specular highlight (the bright spot you see on a shiny object where a light source reflects). Lambert shaders are frequently used for matte, natural-looking surfaces such as wood, stone, or skin.
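To make the difference concrete, here is a minimal Python sketch of the lighting math the two models are built on. The vectors, colors, and shininess value are invented placeholders rather than values from any particular engine; the point is only that Lambert uses the angle between the surface normal and the light, while Phong adds a view-dependent specular term and lights each pixel with a normal blended across the polygon.

import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def lambert(normal, light_dir, diffuse_color):
    # Lambert: brightness depends only on the angle between the surface
    # normal and the direction toward the light -- no specular highlight.
    return diffuse_color * max(np.dot(normal, light_dir), 0.0)

def phong(normal, light_dir, view_dir, diffuse_color, shininess=32):
    # Phong adds a view-dependent highlight from the reflected light vector.
    reflected = normalize(2.0 * np.dot(normal, light_dir) * normal - light_dir)
    specular = max(np.dot(reflected, view_dir), 0.0) ** shininess
    return lambert(normal, light_dir, diffuse_color) + specular

# Phong shading smooths faceted geometry by blending the normals stored at
# the polygon's vertices and lighting each pixel with the blended normal.
n0 = np.array([0.0, 1.0, 0.0])
n1 = normalize(np.array([0.3, 0.9, 0.0]))
pixel_normal = normalize(0.5 * n0 + 0.5 * n1)   # halfway across the face

light = normalize(np.array([0.5, 1.0, 0.2]))
view = normalize(np.array([0.0, 0.3, 1.0]))
print("lambert:", lambert(pixel_normal, light, 0.8))
print("phong:  ", phong(pixel_normal, light, view, 0.8))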

Texturing

A good quality mesh goes a long way toward making a realistic or visually interesting object; however, a good texture can make even a mediocre mesh look amazing. Texturing a mesh is also far more involved than it once was. Where there was once a single color image applied to the surface, multiple layers of textures and shaders can now be combined, giving artists a far wider range of visual effects and the flexibility to create looks that were not possible before.

Diffuse textures are what many people consider to be the entirety of the “texture” element of a mesh. Each point on the 3D model is assigned a coordinate in a 2D raster image, a process called UV mapping; the image is then painted to match the desired appearance and applied to the mesh. This is the basic color layer of a mesh, and it provides the majority of the macro detail for the object.
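As a rough illustration of what UV mapping gives the renderer, the sketch below (using an invented 4x4 checker image and a made-up UV coordinate) looks up the texel that a given surface point maps to; real renderers interpolate UVs across each triangle and filter the lookup, but the idea is the same.

import numpy as np

# A tiny stand-in "diffuse texture": 4x4 pixels, one RGB value per texel.
texture = np.zeros((4, 4, 3), dtype=np.uint8)
texture[::2, 1::2] = 255   # simple checker pattern
texture[1::2, ::2] = 255

def sample(texture, u, v):
    # UV coordinates run from 0 to 1; map them to pixel indices.
    # (Nearest-neighbour lookup -- real renderers usually filter.)
    height, width, _ = texture.shape
    x = min(int(u * width), width - 1)
    y = min(int(v * height), height - 1)
    return texture[y, x]

# The UV coordinate assigned to this point of the mesh during unwrapping.
print(sample(texture, 0.3, 0.75))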

Bump maps are applied to a mesh using the same coordinates as the diffuse map; however, bump maps are traditionally grayscale, containing values from 0 (black) to 255 (white). The whiter a given pixel in the bump map, the higher that point on the surface appears to be. The renderer uses this data to give the illusion that there are more polygons on the mesh than there actually are. This technique is far less computationally expensive than adding the extra polygons to the mesh, and it can add significant realism to an object.
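One common way for a renderer to turn that grayscale data into shading, sketched below with an invented 3x3 patch of the map, is to treat the bump map as a height field and tilt the shading normal by the local slope; the steeper the change in brightness between neighbouring pixels, the more the normal leans over.

import numpy as np

# An invented 3x3 patch of a grayscale bump map (0 = black, 255 = white),
# a ramp that gets brighter -- and therefore "higher" -- toward the right.
heights = np.array([[0, 50, 100],
                    [0, 50, 100],
                    [0, 50, 100]], dtype=float) / 255.0

def bumped_normal(heights, x, y, strength=1.0):
    # Finite-difference slope of the height field around pixel (x, y);
    # the steeper the slope, the further the normal tilts from straight up.
    dhdx = (heights[y, x + 1] - heights[y, x - 1]) * 0.5
    dhdy = (heights[y + 1, x] - heights[y - 1, x]) * 0.5
    n = np.array([-dhdx * strength, -dhdy * strength, 1.0])
    return n / np.linalg.norm(n)

print(bumped_normal(heights, 1, 1))   # leans away from the rising slope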

Normal maps are the evolution of bump maps. A normal map is a full-color bitmap in which the red, green, and blue channels encode the direction the surface faces at each point, rather than just a height, which in turn changes the angle at which light strikes the surface. This allows for far more complicated changes to the apparent geometry of the object, and it is used extensively by modern games to pack the maximum amount of detail into a scene.
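The encoding is usually straightforward, as the sketch below illustrates with made-up pixel values: each 0-255 channel is remapped to the -1 to 1 range, and the decoded vector replaces the mesh normal in the lighting calculation. This is also why untouched areas of a normal map appear light blue; (128, 128, 255) is the encoding of a normal pointing straight out of the surface.

import numpy as np

def decode_normal(rgb):
    # Remap each 0-255 channel to the -1..1 range and renormalize.
    n = np.array(rgb, dtype=float) / 255.0 * 2.0 - 1.0
    return n / np.linalg.norm(n)

# The familiar light blue of a "flat" normal-map pixel decodes to a
# normal pointing straight out of the surface.
flat = decode_normal([128, 128, 255])

# A pixel leaning toward red encodes a normal tilted sideways, which changes
# how much light that point receives even though the geometry is unchanged.
tilted = decode_normal([200, 128, 230])

light = np.array([0.0, 0.0, 1.0])
print(max(np.dot(flat, light), 0.0), max(np.dot(tilted, light), 0.0))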

Displacement maps are similar to bump maps in theory, but actually deform the geometry. They can be used in conjunction with normal maps to significantly improve the detail of the object.
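A tiny sketch of the difference, with invented numbers: where bump and normal maps only change the shading, a displacement map actually moves each vertex, typically along its normal by the sampled height times an artist-chosen scale.

import numpy as np

# An invented vertex, its normal, and the height sampled from the
# displacement map at that vertex's UV coordinate.
position = np.array([1.0, 0.0, 2.0])
normal = np.array([0.0, 1.0, 0.0])
height = 0.35   # 0-1 value read from the displacement map
scale = 0.1     # artist-chosen displacement strength

# Unlike a bump or normal map, the geometry itself is moved:
displaced = position + normal * height * scale
print(displaced)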

Specular maps allow you to control the shininess of the surface at a granular level. For example, a specular map can help create a rusty metal surface where the bare metal is still glossy while the rusted areas are purely diffuse.
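In practice the specular map often boils down to a per-pixel multiplier on the highlight, as in this small sketch (all values invented): a bright texel keeps the surface glossy, a dark texel leaves only the diffuse term.

def shaded(diffuse, highlight, spec_map_value):
    # The grayscale specular map scales the highlight per pixel:
    # bright texels stay glossy, dark texels become purely diffuse.
    return diffuse + highlight * spec_map_value

clean_metal = shaded(diffuse=0.4, highlight=0.9, spec_map_value=1.0)
rusted_area = shaded(diffuse=0.4, highlight=0.9, spec_map_value=0.05)
print(clean_metal, rusted_area)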

Emissive maps control the simulation of glowing elements on the surface of the mesh. These are frequently used for detail lights on characters and weapons, neon signs, and other such luminous elements in scenes.
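Emissive contributions are typically added on top of the lit color, so they remain visible even in total darkness; the sketch below (with invented values) shows an otherwise unlit pixel still glowing.

def final_color(lit_color, emissive_texel):
    # The emissive texel is added after lighting, so it stays visible
    # even where no light reaches the surface.
    return lit_color + emissive_texel

print(final_color(0.0, 0.8))   # an unlit pixel that still "glows"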

Modern optimization techniques

As mentioned above, normal maps are widely used in modern high-end games. By baking the fine detail of a multi-million polygon sculpt into a normal map and applying it to a much simpler version of the mesh, it is possible to bring the polygon count down to the tens or hundreds of thousands without a significant loss of visual fidelity.

The same approach works for environmental meshes. A stone wall, for example, would traditionally be nothing more than a texture on a flat plane, and while a good artist could spend a long time finessing that texture to look as realistic as possible, there are limits. With normal maps and procedurally driven shaders, it is possible to quickly create a surface that appears to have real physical relief, with far less manual effort.

Ambient occlusion, in conjunction with global illumination, allows a significant reduction in the time artists need to produce realistic lighting for a scene. A greatly simplified definition of ambient occlusion is the darkening that occurs in corners, creases, and other areas where nearby geometry blocks ambient light from reaching the surface.
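In shading terms, a baked or screen-space ambient occlusion value usually ends up as just another 0-to-1 factor multiplied into the ambient or indirect light, as in this sketch (values invented): a heavily occluded corner darkens even though nothing casts a direct shadow on it.

def ambient_term(ambient_light, ao_factor):
    # ao_factor: 1.0 = fully open to the environment, 0.0 = fully occluded.
    return ambient_light * ao_factor

open_wall = ambient_term(ambient_light=0.5, ao_factor=1.0)
room_corner = ambient_term(ambient_light=0.5, ao_factor=0.3)
print(open_wall, room_corner)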

For example, in earlier-generation games, to simulate a bright white light shining on a green sphere sitting on a white plane, the artist would need to place the white light in the scene and tune it appropriately, and then place a dim green light by hand to fake the white light bouncing off the green sphere onto the plane. There was also no easy way to simulate ambient occlusion, so objects lost some of their tangible realism from that small but significant detail. Modern global illumination and ambient occlusion techniques handle both effects automatically.

We’ll cover more topics in detail later on. If you have suggestions for a concept you’d like to see explained or demonstrated, drop me an email.
