Real-time rendering

Real-time rendering is the interactive branch of computer graphics: it is concerned with producing synthetic images on the computer fast enough that the viewer can interact with a virtual environment. The most common place to find real-time rendering is in video games. The rate at which images are displayed is measured in frames per second (frame/s) or hertz (Hz); the frame rate measures how quickly an imaging device produces unique consecutive images.
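As a quick arithmetic illustration, a target frame rate fixes a per-frame time budget. The minimal sketch below computes that budget; the 60 Hz target is an arbitrary example, not a requirement from the article.

    // A 60 Hz target is an illustrative example, not a fixed requirement.
    #include <cstdio>

    int main() {
        const double target_fps = 60.0;                      // desired frames per second
        const double frame_budget_ms = 1000.0 / target_fps;  // milliseconds available per frame
        std::printf("At %.0f frame/s, each frame must be produced in %.2f ms\n",
                    target_fps, frame_budget_ms);
        return 0;
    }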

Contents

  • 1 The graphics rendering pipeline
    • 1.1 Architecture
    • 1.2 Application stage
    • 1.3 Geometry stage
      • 1.3.1 Model and view transform
      • 1.3.2 Lighting
      • 1.3.3 Projection
      • 1.3.4 Clipping
      • 1.3.5 Screen mapping
    • 1.4 Rasterizer stage
  • 2 References

The graphics rendering pipeline

The graphics rendering pipeline, also known as the rendering pipeline or simply the pipeline, is the foundation of real-time graphics. Its main function is to generate, or render, a two-dimensional image, given a virtual camera, three-dimensional objects (objects that have width, length, and depth), light sources, lighting models, textures, and more.

Architecture

The architecture of the real-time rendering pipeline can be divided into three conceptual stages: application, geometry, and rasterizer. This structure is the core used in real-time computer graphics applications.
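The division of labor between the three stages can be sketched as a simple frame loop. Every name in the sketch below is a hypothetical placeholder for illustration, not a real graphics API.

    // Conceptual sketch of the three pipeline stages driving one frame.
    // All types and functions here are illustrative placeholders.
    struct Scene {};       // application-side world state
    struct Primitives {};  // points, lines, triangles handed to the geometry stage
    struct Fragments {};   // screen-space triangles ready for rasterization

    Primitives application_stage(Scene&)         { return {}; } // collision, animation, etc.
    Fragments  geometry_stage(const Primitives&) { return {}; } // transform, light, project, clip
    void       rasterizer_stage(const Fragments&) {}            // turn triangles into pixels

    void render_frame(Scene& scene) {
        Primitives prims = application_stage(scene); // stage 1: application
        Fragments  frags = geometry_stage(prims);    // stage 2: geometry
        rasterizer_stage(frags);                     // stage 3: rasterizer
    }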

Application stage

The application stage is driven by the application: it begins the image-generation process that results in the final scene, or frame, of animation, building up from simple elements into a complete image. The application stage is implemented in software, which gives developers total control over the implementation and lets them tune its performance. This stage may, for example, contain collision detection, speed-up techniques, animations, force feedback, and so on. One process usually implemented in this stage is collision detection: algorithms that detect whether two objects collide. After a collision is detected between two objects, a response may be generated and sent back to the colliding objects, as well as to a force-feedback device. Other processes implemented in this stage include texture animation, animation via transforms, geometry morphing, and any other calculations that are not performed in other stages. At the end of the application stage comes its most important task: the geometry to be rendered is fed to the next stage in the rendering pipeline. These are the rendering primitives that might eventually end up on the output device, such as points, lines, and triangles.
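The article names no particular collision-detection algorithm; as a minimal sketch, the test below checks two bounding spheres against each other, a common first step in practice.

    // Minimal sketch of one collision test the application stage might run:
    // sphere vs. sphere. The algorithm choice is illustrative.
    struct Vec3 { float x, y, z; };

    struct Sphere {
        Vec3  center;
        float radius;
    };

    // Two spheres collide when the distance between their centers is
    // no greater than the sum of their radii.
    bool spheres_collide(const Sphere& a, const Sphere& b) {
        const float dx = a.center.x - b.center.x;
        const float dy = a.center.y - b.center.y;
        const float dz = a.center.z - b.center.z;
        const float dist2 = dx * dx + dy * dy + dz * dz;
        const float r = a.radius + b.radius;
        return dist2 <= r * r; // compare squared distances to avoid a sqrt
    }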

Geometry stage

The geometry stage is responsible for the majority of the per-polygon and per-vertex operations; that is, this stage computes what is to be drawn, how it should be drawn, and where it should be drawn. Depending on the implementation, it may be treated as a single pipeline stage or as several distinct stages. Here it is further divided into functional groups.

Model and view transform

Before the final model is shown on the output device, it is transformed into several different spaces or coordinate systems. That is, when an object is moved or manipulated, it is the object's vertices that are actually transformed.
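A minimal sketch of what such a transform looks like, assuming homogeneous coordinates and a row-major 4x4 matrix (both conventions chosen here purely for illustration):

    // Sketch of a model/view transform: a 4x4 matrix applied to a vertex
    // in homogeneous coordinates.
    struct Vec4 { float x, y, z, w; };

    struct Mat4 {
        float m[4][4]; // row-major: m[row][col]
    };

    // Transform a vertex: out = M * v. Moving an object means applying
    // such a matrix (e.g. a translation) to each of its vertices.
    Vec4 transform(const Mat4& M, const Vec4& v) {
        return {
            M.m[0][0]*v.x + M.m[0][1]*v.y + M.m[0][2]*v.z + M.m[0][3]*v.w,
            M.m[1][0]*v.x + M.m[1][1]*v.y + M.m[1][2]*v.z + M.m[1][3]*v.w,
            M.m[2][0]*v.x + M.m[2][1]*v.y + M.m[2][2]*v.z + M.m[2][3]*v.w,
            M.m[3][0]*v.x + M.m[3][1]*v.y + M.m[3][2]*v.z + M.m[3][3]*v.w,
        };
    }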

Lighting

To give the model a more realistic appearance, the scene is usually equipped with one or more light sources. Lighting cannot be computed until the 3D scene has been transformed into view space: the space in which the camera is placed at the origin and aimed along the negative z-axis, with the y-axis pointing upwards and the x-axis pointing to the right.
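The article does not name a lighting model; as an illustrative sketch, the simple Lambertian (diffuse) term below is one common choice.

    // Lambertian (diffuse) shading: one common lighting model, chosen
    // here for illustration only.
    #include <algorithm>

    struct Vec3 { float x, y, z; };

    float dot(const Vec3& a, const Vec3& b) {
        return a.x * b.x + a.y * b.y + a.z * b.z;
    }

    // Diffuse intensity at a surface point is proportional to the cosine
    // of the angle between the surface normal and the direction to the
    // light. Both vectors are assumed unit length and in view space.
    float lambert(const Vec3& normal, const Vec3& to_light) {
        return std::max(0.0f, dot(normal, to_light)); // clamp back-facing light to zero
    }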

Projection

There are two types of projection: orthographic (also called parallel) projection and perspective projection. Orthographic projection is used to represent a 3D model in two-dimensional (2D) space; its main characteristic is that parallel lines remain parallel after the transformation, without distortion. With perspective projection, the farther the model is from the camera, the smaller it appears. In essence, perspective projection mimics the way our eyes see the world.
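A minimal sketch of the difference, assuming the view-space convention above (camera at the origin, looking down the negative z-axis) and a hypothetical focal distance d:

    // Perspective vs. orthographic projection, reduced to their essence.
    struct Vec3 { float x, y, z; };

    // Project a view-space point onto the plane z = -d. Points are
    // assumed to be in front of the camera (p.z < 0). The division by
    // depth is what makes distant objects appear smaller.
    Vec3 perspective_project(const Vec3& p, float d) {
        const float scale = d / -p.z;  // farther away => smaller scale factor
        return { p.x * scale, p.y * scale, -d };
    }

    // Orthographic projection, by contrast, simply drops the depth,
    // which is why parallel lines stay parallel.
    Vec3 orthographic_project(const Vec3& p) {
        return { p.x, p.y, 0.0f };
    }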

Clipping

Clipping is the process of removing primitives that lie outside the view box, so that only visible geometry continues on to the rasterizer stage. Primitives entirely outside the view box are removed, or "clipped", away. Primitives that are only partly inside are cut against the box, and the parts that remain inside may be split into new triangles before proceeding to the next stage.
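A hedged sketch of the basic accept test, assuming the common homogeneous clip-space convention in which a point is inside the view box when each coordinate lies within [-w, w] (the exact convention varies between APIs):

    // Trivial inside/outside test used during clipping.
    struct Vec4 { float x, y, z, w; };

    bool inside_view_box(const Vec4& p) {
        return -p.w <= p.x && p.x <= p.w &&
               -p.w <= p.y && p.y <= p.w &&
               -p.w <= p.z && p.z <= p.w;
    }

    // A triangle with all three vertices inside passes through unchanged;
    // one entirely outside can be rejected; the remaining cases are cut
    // against the box, producing new triangles.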

Screen mapping

The purpose of screen mapping, as the name implies, is to find the screen coordinates of the primitives that were determined in the clipping stage to lie inside the view box.
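A minimal sketch, assuming normalized device coordinates in [-1, 1] and a window origin in the lower-left corner (both assumptions for illustration):

    // Screen mapping: scale and offset normalized device coordinates
    // into pixel coordinates for a window of width x height pixels.
    struct Vec2 { float x, y; };

    Vec2 screen_map(float ndc_x, float ndc_y, int width, int height) {
        return {
            (ndc_x + 1.0f) * 0.5f * static_cast<float>(width),
            (ndc_y + 1.0f) * 0.5f * static_cast<float>(height),
        };
    }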

Rasterizer stage

Once all of the necessary steps from the two previous stages are complete, all of the elements, including the lines that have been drawn and the models that have been transformed, are ready to enter the rasterizer stage. The rasterizer stage turns these elements into pixels, or picture elements, and assigns color to them.
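The article does not specify a rasterization algorithm; the sketch below uses edge functions over a triangle's bounding box, one common approach.

    // Rasterization sketch: test which pixels in a triangle's bounding
    // box lie inside it, using edge functions. Illustrative only.
    #include <algorithm>
    #include <cmath>

    struct Vec2 { float x, y; };

    // Signed area test: > 0 when point p is to the left of edge a->b.
    float edge(const Vec2& a, const Vec2& b, const Vec2& p) {
        return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
    }

    // Calls set_pixel(x, y) for every pixel center covered by the
    // triangle (counter-clockwise winding assumed).
    // Example use: rasterize_triangle(a, b, c, [](int x, int y) { /* write pixel */ });
    template <typename SetPixel>
    void rasterize_triangle(Vec2 v0, Vec2 v1, Vec2 v2, SetPixel set_pixel) {
        const int min_x = static_cast<int>(std::floor(std::min({v0.x, v1.x, v2.x})));
        const int max_x = static_cast<int>(std::ceil (std::max({v0.x, v1.x, v2.x})));
        const int min_y = static_cast<int>(std::floor(std::min({v0.y, v1.y, v2.y})));
        const int max_y = static_cast<int>(std::ceil (std::max({v0.y, v1.y, v2.y})));

        for (int y = min_y; y <= max_y; ++y) {
            for (int x = min_x; x <= max_x; ++x) {
                const Vec2 p = { x + 0.5f, y + 0.5f }; // sample at pixel center
                if (edge(v0, v1, p) >= 0 &&
                    edge(v1, v2, p) >= 0 &&
                    edge(v2, v0, p) >= 0) {
                    set_pixel(x, y); // pixel is covered by the triangle
                }
            }
        }
    }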

References

  • Möller, Tomas, and Eric Haines. Real-Time Rendering. 1st ed. Natick, MA: A K Peters, Ltd., 1999.
  • Salvator, Dave. "3D Pipeline". ExtremeTech, 21 June 2001. http://www.extremetech.com/article2/ (accessed 2 February 2007).
  • Malhotra, Priya. "Issues Involved in Real-Time Rendering of Virtual Environments". College of Architecture and Urban Studies, Blacksburg, VA, July 2002: 20-31 (accessed 31 January 2007).
  • Haines, Eric. Real-Time Rendering Resources. 1 February 2007 (accessed 12 February 2007).