
3D Modeling Fundamentals - Part 1

Getting started in 3D modeling can be daunting. The concepts, techniques, and vocabulary involved present a significant barrier to entry for those new to the field.

Polygonal geometry

Computer-generated 3D imagery is composed of triangles (sometimes referred to as tris) grouped into solid objects. Each triangle is made up of three vertices, three edges, and one face. So, to render a rectangle, two triangles are joined along a shared edge, the hypotenuse of each. This results in a total of four vertices, five edges, and two faces. The resulting shapes are referred to as meshes.
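
To make these counts concrete, here is a minimal sketch in plain Python (no particular engine or file format assumed) that builds a quad from two triangles and tallies its vertices, edges, and faces:

    # A quad built from two triangles that share a diagonal edge.

    # Four shared vertices (x, y, z)
    vertices = [
        (0.0, 0.0, 0.0),  # 0: bottom-left
        (1.0, 0.0, 0.0),  # 1: bottom-right
        (1.0, 1.0, 0.0),  # 2: top-right
        (0.0, 1.0, 0.0),  # 3: top-left
    ]

    # Two triangular faces, each a triple of vertex indices.
    # Both triangles use vertices 0 and 2, so they share the diagonal edge.
    faces = [
        (0, 1, 2),
        (0, 2, 3),
    ]

    # Collect unique edges: each face contributes three, but the shared
    # edge is counted only once.
    edges = set()
    for a, b, c in faces:
        for edge in ((a, b), (b, c), (c, a)):
            edges.add(tuple(sorted(edge)))

    print(len(vertices), "vertices")  # 4
    print(len(edges), "edges")        # 5 (4 outer + 1 shared diagonal)
    print(len(faces), "faces")        # 2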

These totals are important when creating 3D models for games. Overall game performance is limited by how many polygons are drawn on the screen at one time. Animation performance is limited by how many vertices are in motion at once. So, the more complicated the scene, the slower a game is going to run. For this reason, it is optimal to make meshes with as few faces and vertices as possible, while maintaining the detail necessary for your game.

Lighting and shading

Early implementations of real-time 3D worlds were distinctive not only for their low polygon counts but also for their lack of lighting information. Flat-shaded polygons offered few depth cues and hurt the immersive experience. Once the mathematics for real-time lighting calculations were developed and computer hardware advanced to the point where it could handle them, the visual fidelity of games improved dramatically.

id Software's Quake was the first major release to introduce dynamic lighting into games; it was followed by a sequel (Quake II) and Epic Games' (then Epic MegaGames) Unreal. Quake II introduced colored lighting, while Unreal took it to a new level, not only using it to greater dramatic effect but also shipping a software-based renderer (i.e., one that did not require the dedicated 3D graphics hardware that modern games depend on), which allowed a wider audience to experience the new technology.

Lighting in games is generally handled in two ways: static and dynamic lighting.

Static lighting is pre-computed during the game's development. Light sources are placed in the level, and a process is run that determines where light and shadow fall. This data is compiled into a collection of light maps, which are overlaid on the level's geometry when the game runs. The result looks as though far more lighting data is being rendered in real time than actually is.

Dynamic lighting is handled in real time by the renderer and is very expensive computationally. Until recently, hardware limitations meant that only a few dynamic lights could be active at one time because of the processing power the lighting calculations require. (Unreal Engine 4 has since demonstrated that this is not the limitation it once was.)
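
The practical difference is when the work happens. The following rough sketch (a hypothetical structure in plain Python, no engine assumed, with a deliberately simplified falloff) bakes one light into a light map at build time and evaluates a second, dynamic light on every frame:

    def intensity(light_pos, power, point):
        # Simplified falloff: power divided by squared distance.
        dist_sq = sum((l - p) ** 2 for l, p in zip(light_pos, point))
        return power / dist_sq

    surface_points = [(float(x), 0.0, 0.0) for x in range(1, 5)]

    # Static: computed once at build time and stored in a light map.
    static_light = {"pos": (0.0, 3.0, 0.0), "power": 20.0}
    lightmap = [intensity(static_light["pos"], static_light["power"], p)
                for p in surface_points]

    # Dynamic: re-evaluated for every point, every frame.
    def render_frame(dynamic_light):
        for i, p in enumerate(surface_points):
            baked = lightmap[i]  # cheap at runtime: just a lookup
            live = intensity(dynamic_light["pos"], dynamic_light["power"], p)
            print(f"point {i}: {baked + live:.3f}")

    render_frame({"pos": (2.0, 1.0, 0.0), "power": 5.0})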

Light Sources

The way that lighting works in 3D rendering can be similar to how it functions in the real world, depending on the technique used. For example, a light source emits rays outward from an origin, and those rays strike polygons in the scene. With a basic renderer, the process stops there: the value of light reaching each polygon is computed from the source's original luminosity and the distance the ray traveled, and the polygon is illuminated accordingly.
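
As a rough sketch of that calculation (plain Python, no engine assumed), the snippet below attenuates a light's luminosity by the square of the distance traveled; the Lambert cosine term, a standard addition, weights the result by how directly the surface faces the light:

    import math

    def illuminate(luminosity, light_pos, surface_point, surface_normal):
        to_light = tuple(l - p for l, p in zip(light_pos, surface_point))
        distance = math.sqrt(sum(c * c for c in to_light))
        direction = tuple(c / distance for c in to_light)

        # Inverse-square falloff: brightness drops with squared distance.
        attenuation = luminosity / (distance * distance)

        # Lambert term: surfaces facing the light receive more of it.
        cos_angle = max(0.0, sum(d * n for d, n in
                                 zip(direction, surface_normal)))
        return attenuation * cos_angle

    # A light 2 units above a point on an upward-facing surface.
    print(illuminate(10.0, (0.0, 2.0, 0.0), (0.0, 0.0, 0.0), (0.0, 1.0, 0.0)))
    # -> 2.5  (10 / 2^2, at full incidence)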

Using a more complicated renderer, such as one that handles global illumination (GI) calculations, the light then bounces a pre-defined number of times, providing a more realistic simulation of lighting for the scene.
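
That bounce count is essentially a recursion depth. Here is a toy illustration (not a real GI integrator, which traces many rays against actual geometry and materials) of how light energy accumulates over a fixed number of bounces, each contributing less than the last:

    MAX_BOUNCES = 3            # the pre-defined bounce count
    SURFACE_REFLECTANCE = 0.5  # fraction of light a surface re-emits

    def gather_light(energy, bounce=0):
        """Accumulate the light delivered over successive bounces."""
        if bounce >= MAX_BOUNCES or energy < 1e-3:
            return 0.0
        # Direct contribution at this hit, plus whatever the reflected
        # light contributes after the next bounce.
        reflected = energy * SURFACE_REFLECTANCE
        return energy + gather_light(reflected, bounce + 1)

    print(gather_light(1.0))  # 1.0 + 0.5 + 0.25 = 1.75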

Game engines usually support two types of light sources (although 3D modeling applications have a variety of other options).

Point lights are exactly what they sound like: a light source that emits from a single point, effectively the smallest origin the renderer's lighting math supports. Games typically use this type for the majority of their light calculations. A point light emits light in all directions, with no parallel rays. Some renderers also support giving these lights a radius, which turns the origin into a sphere and produces more natural-looking lighting by emitting rays from more than one point.

Area or directional lights are used to simulate large light sources, usually sunlight or other outdoor environmental light. Directional lights are frequently represented by a rectangle or a single arrow pointing in the direction of the light. They work by casting parallel rays along an axis; to use the rectangle paradigm, it is as if the rectangle were an infinitely large surface emitting light with parallel rays. Parallel-ray emission simulates sunlight and creates shadows to match.
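
The key difference shows up in the shading math. In this small sketch (plain Python, no engine assumed), a point light's ray direction depends on which surface point is being lit, while a directional light's rays are the same everywhere:

    import math

    def normalize(v):
        length = math.sqrt(sum(c * c for c in v))
        return tuple(c / length for c in v)

    def point_light_dir(light_pos, surface_point):
        # Varies per point: from the surface toward the light.
        return normalize(tuple(l - p for l, p in zip(light_pos, surface_point)))

    def directional_light_dir(light_direction):
        # Identical for every surface point: parallel rays, like sunlight.
        return normalize(tuple(-c for c in light_direction))

    sun = (0.0, -1.0, 0.0)   # sunlight pointing straight down
    lamp = (0.0, 5.0, 0.0)   # a point light overhead

    for p in [(0.0, 0.0, 0.0), (3.0, 0.0, 0.0)]:
        print("point light :", point_light_dir(lamp, p))
        print("directional :", directional_light_dir(sun))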

In the next article, we'll cover shaders, texturing, and lighting.
