Digital Arts Guild



Glossary of CGI terms



aliasing • Undesirable jagged lines, edges, or surfaces in computer images. Often described with the term “pixelated”. See antialiasing.

alpha channel • Transparency data in a bitmap image, generally used for compositing. True alpha is a grayscale image stored as an independent data channel in the file, as are the red, green, and blue color channels. Some bitmap formats, such as .PNG, use an alternate method called embedded transparency. A true alpha channel is always better, because it can be edited. Sometimes alpha is saved in a separate file, a technique called split alpha.

algorithm • In computer science, a procedure or program that performs a task, often with little or no user intervention.

antialiasing • An algorithm to correct jagged, aliased computer images.

aspect ratio • The proportion of image width to image height. Common aspect ratios include 16:9 (1.777) for high definition video, 1.85 for widescreen theatrical 35mm film, and 3:2 (1.5) for still photography. The number of pixels is independent of the aspect ratio, and pixels don’t need to be square. See pixel ratio.

attribute • A property that can be adjusted. Attributes are sometimes called parameters, and they are usually animatable. Examples of attributes include the dimensions of a primitive, or the base color of a material. Attributes are contained within nodes in a scene graph.

base color • In physically-based rendering, the dominant color of a material. Influences the color of rough, diffuse surfaces, and the color of metallic surfaces. Does not influence the color of non-metallic specular highlights, or of transparency.

Bézier • A type of spline curve with adjustable tangent handles to control the shape. Named after Pierre Bézier, an engineer at the Renault auto company in the 1960s.
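
For illustration, a minimal Python sketch that evaluates a cubic Bézier curve from its four control points, where the two inner points act as the tangent handles (the point values are made up):

    def cubic_bezier(p0, p1, p2, p3, t):
        """Evaluate a cubic Bezier curve at parameter t in [0, 1].
        p0 and p3 are the endpoints; p1 and p2 are the tangent handles."""
        u = 1.0 - t
        # Bernstein-polynomial form of the cubic Bezier
        x = u**3 * p0[0] + 3 * u**2 * t * p1[0] + 3 * u * t**2 * p2[0] + t**3 * p3[0]
        y = u**3 * p0[1] + 3 * u**2 * t * p1[1] + 3 * u * t**2 * p2[1] + t**3 * p3[1]
        return (x, y)

    # Sample five points along an S-shaped curve
    for i in range(5):
        print(cubic_bezier((0, 0), (1, 2), (3, -2), (4, 0), i / 4))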

bit • The smallest amount of information possible: a binary value of either zero or one. In a binary number, the range of possible values is equal to two raised to the power of the number of bits. E.g. an eight-bit number can express 2⁸, or 256, different values. These can be written as binary numbers from 00000000 (zero) to 11111111 (255).

bitmap • A still image produced by a fixed matrix of pixels. Bitmap file formats include .BMP (Microsoft Bitmap), .GIF (Graphics Interchange Format), .JPG (Joint Photographic Experts Group), .PNG (Portable Network Graphics), and .EXR (OpenEXR Extended Range).

Boolean • In computer graphics, a Boolean is a compound model that combines two or more models, called operands. The most common operations are union, intersection, and subtraction. Named after George Boole, the mathematician who developed symbolic logic.

bump map • A type of relief map that gives the appearance of a rough and bumpy surface. It works by perturbing the surface normals used for shading, changing how the surface responds to scene lighting. A bump map is a grayscale height map that doesn’t deform the surface like a displacement map would.

byte • A package of eight binary bits. Kilobyte = 1,024 bytes. Megabyte = ~1,000,000 bytes. Gigabyte = ~1,000,000,000 bytes.

Cartesian coordinate system • Method of locating points in 2D or 3D space by measuring distances from straight-line axes. Allows algebraic operations on geometric figures. The coordinate axes are perpendicular to one another, defining a 2D grid or 3D cubic lattice. First published by René Descartes in the 17th century.

compile • In computer programming, the process of converting human-readable source code to another form of code. Most commonly, this means converting to binary machine code that can be directly executed by a computer. Source code can also be converted to another human-readable language, or to an intermediate machine language called bytecode.

component • In 3D modeling, a part of a surface or shape. Polygon meshes have component types such as vertex, edge, face, polygon, and UV. Spline curves have component types like control vertex, segment, and spline or path.

compositing • The combination of two or more images to create a new image. A simple example is a foreground character superimposed over a background.
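
A minimal Python sketch of the standard “over” operation for a single pixel, assuming unpremultiplied colors and a normalized alpha (all values are illustrative):

    def over(fg_rgb, fg_alpha, bg_rgb):
        """Composite a foreground pixel over a background pixel.
        Colors are (r, g, b) tuples in [0, 1]; fg_alpha is in [0, 1]."""
        return tuple(f * fg_alpha + b * (1.0 - fg_alpha)
                     for f, b in zip(fg_rgb, bg_rgb))

    # A half-transparent red foreground over a blue background
    print(over((1.0, 0.0, 0.0), 0.5, (0.0, 0.0, 1.0)))  # (0.5, 0.0, 0.5)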

displacement map • A type of relief map that deforms a surface. It can be implemented as a modeling tool or a rendering effect. Displaces the positions of vertices on the model, either in the 3D scene or at render time. Can be implemented as a grayscale height map or an RGB vector displacement map. Often combined with tessellation to increase the fidelity of small details on surfaces. Displacement is much more realistic than a bump map or normal map, because it accurately reproduces silhouettes, shading, shadowing, and occlusion. It consumes far more system resources, and is much slower to calculate.

dynamics • A system of procedural animation that simulates the motion of real-world objects. Also known as a physics engine. Rigid body dynamics simulate reactions to forces such as gravity and collisions. Soft body dynamics additionally simulate deformations of non-rigid objects such as cloth. Fluid dynamics model the behavior of non-solid phenomena such as liquids, gases, fire, and smoke.

edge • In polygon mesh modeling, a straight line connecting two vertices. An edge should also be the side of at least one polygon face.

effects animation • Broadly speaking, any animation that involves special visual effects. Contrasted with character animation. Examples include particle systems and dynamics.

expression • A simple statement in a scripting language, commonly taking the form of an equation. Expressions generate or manipulate data to set the value of a parameter or attribute, often for animation. For example, particle systems can employ expressions to determine the behavior of the system. Rigging can use expressions to automate the motion of objects.
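
As a rough illustration, the sketch below mimics an expression in plain Python: a one-line rule drives a hypothetical “ball height” attribute from the current frame (the names and frame rate are made up, not any particular package’s syntax):

    import math

    FRAMES_PER_SECOND = 24.0

    def ball_height(frame):
        """Expression-style rule: bounce a value between 0 and 1 over time."""
        t = frame / FRAMES_PER_SECOND      # convert frame number to seconds
        return abs(math.sin(t * math.pi))  # one bounce per second

    for frame in range(0, 25, 6):
        print(frame, round(ball_height(frame), 3))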

exposure • Setting the basic range of brightness values. In photography, this is accomplished by controlling the sensitivity of the sensor and the amount of light reaching it. Exposure value (EV) in computer graphics is equivalent to the sensitivity (ISO) of a real camera.

extrude • In 3D modeling, the extension of a polygon mesh from a shape, which can be a curve or a selection of polygon components or sub-objects. Extruding polygon faces creates branching structures on the model.

face • A renderable 2D shape in a polygon mesh. The narrowest definition of a face is a triangle, the simplest possible 2D shape. Face can also be a synonym for polygon.

fidelity • In 3D graphics, a subjective perception of how closely an asset such as a model visually resembles the thing it represents; in other words, “realism”. An asset does not necessarily represent anything in the real world; it has high fidelity if it closely resembles its design specification. Poor-fidelity assets exhibit artifacts such as sharp, harsh edges due to a low level of detail.

fluids • A category of dynamic simulations of non-solid materials such as liquids, smoke, and fire. Fluid simulations can use numerous techniques, including volumetrics, particle systems, and dynamic meshes.

frame buffer • 1. The portion of a computer or peripheral’s memory that temporarily holds an image so it can be displayed, stored, and/or recorded. 2. A GUI window or monitor that displays rendered output.

fractal • In computer graphics, a procedurally generated image, pattern, or shape that exhibits properties of chaos, complexity, and self-similarity. Fractal algorithms are widely used to generate complex patterns such as landscapes, noisy textures, and clouds. These algorithms are iterative, meaning they are calculated repeatedly to achieve greater detail.

function curve • A graph of the relationship between two varying values. The most common computer graphics application is the visualization of animation data. The value of a parameter or attribute is displayed in the vertical dimension of the graph, and time is displayed in the horizontal dimension. The slope of the graph indicates the rate of change. Function curve is often abbreviated to “fcurve”.

gamma • The contrast curve of a recorded image. Gamma is applied at many stages of image creation and display. For example, eight-bit images are gamma corrected to store more information in the darker tones, corresponding to the greater sensitivity of human vision in that tonal range.
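
A minimal Python sketch of gamma correction, assuming the common approximation of a plain 2.2 power curve (real standards such as sRGB use a slightly more complicated function):

    GAMMA = 2.2

    def encode(linear):
        """Linear light -> gamma-encoded value, both in the range [0, 1]."""
        return linear ** (1.0 / GAMMA)

    def decode(encoded):
        """Gamma-encoded value -> linear light."""
        return encoded ** GAMMA

    # Mid-gray in linear light is stored as a much brighter code value,
    # devoting more of an 8-bit range to the darker tones.
    print(round(encode(0.18), 3))  # ~0.459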

global illumination (GI) • In technical CGI usage, any rendering algorithm that reproduces the transfer of light among multiple surfaces. Indirect bounce light, mirror reflections, and transparent refractions all require some form of global illumination. In common CGI usage, global illumination often refers to indirect diffuse light bouncing off of matte finish surfaces; not specular highlights, reflections, or refractions. This usage of terminology is a holdover from the days when indirect bounce light and mirror reflections were calculated using separate algorithms.

grooming • 3D production tasks of modeling and animating hair and fur. A combination of procedural and manual techniques.

hierarchy • A structure of linkages among objects, necessary for animation of systems with moving parts, such as characters and vehicles. Also known as animation hierarchy, scene hierarchy, parenting, linking, and forward kinematics. Child objects inherit transforms from their parent. An object may have many children, but can have only one parent. See inverse kinematics.
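
A minimal Python sketch of transform inheritance, reduced to 2D translation for clarity (real hierarchies combine full position, rotation, and scale matrices; the class and object names are made up):

    class SceneObject:
        def __init__(self, name, local_pos, parent=None):
            self.name = name
            self.local_pos = local_pos  # position relative to the parent
            self.parent = parent        # an object can have only one parent

        def world_pos(self):
            """Accumulate translations up the chain: the child inherits the parent transform."""
            x, y = self.local_pos
            if self.parent is not None:
                px, py = self.parent.world_pos()
                x, y = x + px, y + py
            return (x, y)

    body = SceneObject("body", (10, 0))
    arm = SceneObject("arm", (2, 3), parent=body)
    hand = SceneObject("hand", (1, 0), parent=arm)
    print(hand.world_pos())  # (13, 3): moving the body moves the whole chain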

high dynamic range imaging (HDRI) • Realistic simulation of light intensities in bitmaps. Stores a much wider range of values to describe the brightness of pixels. In other words, it has greater bit depth. Ordinary images store eight bits per color channel, and can reproduce a contrast ratio of about 1000:1. HDR images store 32 bits per channel, and can reproduce the full range of real-world light intensities, for contrast ratios in the range of 100,000:1. This enables two powerful techniques: image-based lighting and changing exposure in post-production.

image-based lighting (IBL) • A lighting method based upon high dynamic range imaging. The pixel values of an HDR image represent physical intensities of light. These intensity values are used to emit light into the 3D scene, usually from a spherical panorama. IBL provides exceptional realism, especially when the HDR image is captured from a real environment. See light probe.

interpolation • Calculation of values between known values. For example, a smooth spline curve can be described using just a few control vertices; the software interpolates all of the points in between.

inverse kinematics (IK) • An intuitive alternative to standard hierarchical animation. The position and orientation of parent objects are calculated backward from the position of the last child, known as the end effector. The resulting pose is called the IK solution.

keyframe • In computer animation, a user-defined value for a parameter at a certain point in time. The animator sets keyframes at significant moments, and the software calculates the in-between values via interpolation.
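
A minimal Python sketch of the in-betweening step, assuming simple linear interpolation (production animation usually interpolates along smoother spline function curves):

    def lerp(a, b, t):
        """Linearly interpolate between a and b; t=0 gives a, t=1 gives b."""
        return a + (b - a) * t

    # A parameter keyed to 0.0 at frame 0 and 5.0 at frame 10
    for frame in range(0, 11, 2):
        print(frame, lerp(0.0, 5.0, frame / 10.0))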

layout • The production process of merging assets into a scene and placing them in their approximate positions. Layout seeks to achieve an aesthetically desirable composition. A prime consideration is the theatrical blocking of scene elements: how they visually relate to one another in screen space. Layout is usually designed relative to one or more cameras, and so “camera and layout” is often considered a single production process. See previz.

level of detail (LOD) • 1. The density of a polygon mesh model. Technically, LOD is simply the numerical count of components. A high component count does not ensure visual fidelity. Often described with the analogy of weight. A high LOD model is called “heavy”. 2. One of a series of optimized models. A scene object with two LODs has two different models of the same thing, each with a different component count. Performance is optimized by displaying low LOD objects in the distance.

light probe • 1. In image-based lighting, a real photograph of a reflective sphere in an environment. This technique captures all of the light in the environment, even the light behind the sphere! The photograph is mapped to a spherical panorama to illuminate a 3D scene. 2. In interactive real-time 3D games and simulations, a helper object that optimizes calculations of light propagation through the scene. This permits real-time rendering of computationally expensive effects such as bounce light.

lookdev • Abbreviation of look development. The term “look” is a noun meaning “appearance” or “style”. Like much CGI jargon, lookdev has multiple meanings, and its definitions and usage are often vague. 1. Pre-production design process of developing basic concept art into a consistent visual style. 2. Specific production tasks of defining materials and shaders. Lookdev artists often write custom shaders in a scripting language. 3. General process of developing a basic 3D scene into a polished, renderable scene. Can include not only materials, but set dressing, grooming, lighting, atmospheric effects, etc.

map • In 3D graphics, an image or pattern used within a material. Its function is to vary some material attribute across a surface. For example, a texture map varies the color of an object, and a bump map simulates roughness. 2D maps require UV mapping coordinates, which tell the renderer how to project the map onto a 3D object. Procedural textures often do not require UVs, because they are applied in a different space such as world or screen coordinates.

material • 1. The shading properties of a surface, such as color, reflectivity, and transparency. More precisely, the specific node type that defines those surface properties. 2. An entire material definition: a network of connected shading nodes, including maps and utility nodes. A material is sometimes known as a shader. The network is also called a tree or graph, leading to various synonyms such as shader tree, shading graph, and material network.

mesh • A polygon mesh is a 3D model composed of triangular faces. A mesh surface has no true curvature. The appearance of curvature is achieved with many small faces (level of detail), and by edge smoothing during render time.
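
A minimal Python sketch of how a mesh is commonly stored: a list of shared vertex positions plus faces that index into it (the data here is just a flat square made of two triangles):

    # Four vertex positions (x, y, z) shared by two triangular faces
    vertices = [
        (0.0, 0.0, 0.0),
        (1.0, 0.0, 0.0),
        (1.0, 1.0, 0.0),
        (0.0, 1.0, 0.0),
    ]

    # Each face is a tuple of vertex indices
    faces = [(0, 1, 2), (0, 2, 3)]

    for face in faces:
        print([vertices[i] for i in face])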

model • A geometric object in a 3D scene. Models define the forms of objects, but not their material properties or how they move.

motion capture • Recording the 3D movement of things in the real world, usually the human body. Often abbreviated as mocap. Also known as performance capture.

motion graphics • The art form of animating graphical forms as opposed to characters and effects. Often abbreviated as mograph. Motion graphics are often abstract shapes and text, and are often combined with live action video and/or character animation. A simple example of motion graphics is the so-called flying logo. Motion graphics is a poorly defined term, and it overlaps with the techniques of effects animation. Procedural tools such as particle systems, dynamics, and scripting are commonly applied in motion graphics productions.

node • In computer science, a container for information. Nodes are connected to form a network. In 3D graphics, this is called a scene graph. Nodes hold attributes to store data. For example, in Maya, a primitive object such as a sphere is made of three nodes, each storing a different type of data. A primitive node defines creation parameters such as dimensions and level of detail. A shape node outputs that data as a polygon mesh. A transform node sets the position, rotation, and scale of the mesh.
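
A minimal Python sketch of the node-and-attribute idea, loosely mirroring the sphere example above (this is an illustration, not Maya’s actual API; all names are made up):

    class Node:
        def __init__(self, name, **attributes):
            self.name = name
            self.attributes = dict(attributes)  # named, adjustable values
            self.inputs = []                    # upstream nodes in the scene graph

        def connect(self, upstream):
            self.inputs.append(upstream)

    creation = Node("sphereCreator", radius=1.0, subdivisions=20)
    shape = Node("sphereShape")  # would output the polygon mesh
    transform = Node("sphereTransform", translate=(0, 2, 0), scale=(1, 1, 1))
    shape.connect(creation)
    transform.connect(shape)
    print(transform.attributes["translate"])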

normal map • A type of relief map that simulates detail with better fidelity than a bump map. The RGB color channels of a normal map encode the orientation of the surface normal. Can often be used in place of polygon mesh detail, vastly improving performance. Does not change the polygon mesh like a displacement map does. Normal and bump mapping are purely lighting and shading effects.

NURBS • Non-Uniform Rational Basis Spline. A type of spline with weighted control points. Commonly used in CAD, but it has fallen out of favor in media and entertainment applications. NURBS curves don’t have tangent handles like Bézier splines, and they are less intuitive and more complex to work with.

orthographic projection • A flat, 2D view of a 3D scene. Technical drawings, blueprints, and floor plans are examples. Whereas a 3D perspective view shows visual distortions of size, position, and distance, an orthographic view gives no indication of distance or depth. Two identical objects are displayed at the same size, regardless of distance. Parallel lines remain parallel, and do not converge. Often abbreviated as ortho view, it’s a specific type of parallel projection. The viewplane is orthogonal to (at right angles to) one of the world axes. This gives the familiar Front, Top, and Side views in 3D programs. But the viewplane of a parallel projection can be rotated to any orientation, giving other rendering options such as isometric and axonometric projections.

parametric model • A model whose form is determined by relatively simple programmatic instructions. All properties of the model, such as size, shape, and topology, are specified using attributes or parameters. When the scene is opened, the 3D software follows the instructions and builds the model accordingly. Parametric modeling makes changes very easy, and these changes are reversible and non-destructive. Contrast with non-parametric modeling, in which the shape is stored as a raw collection of component data, and changes are relatively difficult, non-reversible, and destructive. Parametric modeling is an option in both solid modeling and surface modeling. It works well in modeling hard surface, human-made objects, but often does not work very well in organic character and creature modeling.
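
A minimal Python sketch of the parametric idea: a handful of parameters fully determine the geometry, so editing a parameter simply rebuilds the shape, non-destructively (here just a flat ring of vertices; the function name is made up):

    import math

    def build_ring(radius, segments):
        """Rebuild a circular ring of vertices from two parameters."""
        return [
            (radius * math.cos(2 * math.pi * i / segments),
             radius * math.sin(2 * math.pi * i / segments),
             0.0)
            for i in range(segments)
        ]

    coarse = build_ring(radius=1.0, segments=8)   # low level of detail
    dense = build_ring(radius=2.5, segments=64)   # changing a parameter is reversible
    print(len(coarse), len(dense))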

particle system • A type of procedural animation and a rendering technique that generates numerous particles within a single object. Applications include effects such as smoke and liquids, flocking and crowd systems, scientific visualizations, etc. Particles are also employed to render point clouds, such as 3D scans of objects and environments.
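
A minimal Python sketch of a particle update loop, with gravity as the only force (emission, lifespan, collisions, and rendering are all omitted):

    import random

    GRAVITY = (0.0, -9.8, 0.0)
    DT = 1.0 / 24.0  # one frame at 24 frames per second

    # Each particle is just a position and a velocity
    particles = [
        {"pos": [0.0, 0.0, 0.0],
         "vel": [random.uniform(-1, 1), random.uniform(4, 6), random.uniform(-1, 1)]}
        for _ in range(100)
    ]

    def step(particles):
        """Advance every particle by one time step."""
        for p in particles:
            for axis in range(3):
                p["vel"][axis] += GRAVITY[axis] * DT
                p["pos"][axis] += p["vel"][axis] * DT

    for _ in range(24):  # simulate one second of motion
        step(particles)
    print(particles[0]["pos"])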

patch • A deformable parametric surface, useful for modeling curved objects. It can be based on Bézier or NURBS splines. The curvature is controlled by the position of control vertices.

path tracing • Probabilistic global illumination algorithm that excels at photorealism. Similar to ray tracing, but uses random sampling and statistical probability to optimize calculations. Path tracing can send rays from the camera, or from light sources, or both. It is an integrated approach to light propagation that easily simulates many optical effects such as indirect bounce light and refraction. However, it is inefficient in special cases such as caustics and subsurface scattering. Ray tracing can be better at those specific effects, but overall, path tracing is more efficient and simpler to implement.

perspective projection • A 3D view that reproduces the optical phenomena of linear perspective and foreshortening. Parallel lines appear to converge at a vanishing point. Objects appear smaller in the distance. Simulated rays of light from the 3D scene, called lines of projection, all converge at a single point: the virtual camera.

pivot point • The center of an object’s transforms, and the center of its local Cartesian coordinate system. An object moves, rotates, and scales relative to the location and orientation of its pivot point. Also known as “anchor point” in some computer applications.

pixel • Abbreviation of picture element: the smallest element of a bitmap image. Pixels are point samples arranged in a fixed 2D grid. Many small pixels blend together to give the illusion of a continuous image.

pixel ratio • The proportion of pixel width to pixel height. In most cases, pixels are square, giving a pixel ratio of 1.0. However, some formats use non-square pixels, such as DVD, with a pixel ratio of 0.9. Technical issues can arise when working with non-square pixel formats on hardware that only displays square pixels, such as personal computers.

polygon • In computer graphics, a closed 2D plane figure bounded by straight line edges. Polygons can be connected in 3D space to create mesh objects. In some computer graphics applications, face and polygon are synonyms. In other contexts, the term face refers specifically to a triangle. A polygon is composed of one or more triangles. E.g., a four-sided polygon is made up of two triangular faces. In most cases, all points on a polygon are supposed to be on the same plane.

previz • Abbreviation of pre-visualization. The pre-production process of manifesting concept art such as designs and storyboards as a very basic 3D scene. A “first draft” 3D scene with camera angles and basic animation serves two functions. It gives the director and other stakeholders feedback on the scene or shot before the lengthy and expensive production process begins. It also serves as a guide for artists, particularly regarding composition and timing. Previz can be a real-time process using a game or simulation engine combined with motion capture. It can also seamlessly blend into production, where previz scenes are developed into final production scenes.

procedural • Algorithmic data generation within a graphics application. E.g., procedural 3D textures are patterns that do not require UV mapping coordinates. Procedural animation generates motion, and may not involve keyframes. Procedural modeling employs simple rules to generate complex structures. Procedural processes are often scripted or compiled as application plugins. Proceduralism is the foundation of entire applications such as Houdini, as well as procedural content frameworks such as Blueprints, Bifrost, and TyFlow. Each has a node-based interface for building complex graphs, known as a visual programming environment.

production • In live action motion pictures, the processes involved in recording real events with a camera. In CGI, production encompasses processes such as modeling, materials, lighting, animation, and rendering. Pre-production is the process of designing things before they are made: scripting, storyboarding, concept art, etc. Post-production is the process of combining and polishing production assets into a finished piece: compositing, editing, color correction, etc.

raster • A 2D image composed of horizontal scanlines stacked vertically. Digital raster images consist of a grid of pixels. All bitmap images are raster-based. Converting a vector graphic to a bitmap image is called rasterization.

ray tracing • Deterministic global illumination algorithm that excels at photorealism. It draws 3D vectors out of the camera, through each pixel, and into the scene. The ray projects into the scene until it hits something. Additional rays can bounce around and interact in complex ways. Ray tracing was first applied to mirror reflections and transparent refractions. It has since been extended in numerous ways to realistically simulate nearly all optical phenomena. See path tracing.
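
A minimal Python sketch of the core operation, a ray-sphere intersection test; real ray tracers add shading, secondary rays, and acceleration structures (all values here are illustrative):

    import math

    def intersect_sphere(origin, direction, center, radius):
        """Return the distance along the ray to the sphere, or None on a miss.
        The ray direction is assumed to be a unit vector."""
        oc = [o - c for o, c in zip(origin, center)]
        b = 2.0 * sum(d * o for d, o in zip(direction, oc))
        c = sum(o * o for o in oc) - radius * radius
        disc = b * b - 4.0 * c
        if disc < 0.0:
            return None                    # the ray misses the sphere entirely
        t = (-b - math.sqrt(disc)) / 2.0   # nearer of the two possible hits
        return t if t > 0.0 else None

    # A ray from the camera straight down -Z toward a unit sphere at z = -5
    print(intersect_sphere((0, 0, 0), (0, 0, -1), (0, 0, -5), 1.0))  # 4.0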

refraction • Bending of light rays passing from one physical medium (such as air) into another (such as glass). Index of refraction (IOR) is a real-world measurement of the optical density of a material.

relief map • A map that gives the effect of surface detail. A bump map or normal map simulates detail with lighting and shading. A displacement map actually deforms the surface.

rendering • The process of producing a 2D image from a 3D scene. Rendering can occur in real time or offline; greater realism and complexity take longer to compute. Reasonable render times for a single frame range from milliseconds to days.

rigging • The process of setting up interactive controls for 3D characters or complex mechanisms. It’s a moderately technical process, often involving scripting. Sometimes called character setup. A good character rig makes it easier for an animator to focus on creative expression.

scene graph • The abstract structure of a 3D scene. The rendered scene is the output of a complex network of interconnected nodes. The scene graph is the sum total of all nodes in the scene. It contains many sub-graphs for domains such as transform hierarchies, material networks, and mesh shape generation and deformation.

script • A high-level program interpreted and executed in real time by a host application. Human-readable and does not need to be compiled. Examples of 3D scripting languages: Python, MEL, MAXScript. Scripting saves time and effort by automating tasks that may be tedious or repetitive. Can also add functionality, similar to an application plugin.
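
For example, a few lines of Python can automate a repetitive layout task. A minimal sketch, assuming Maya’s bundled maya.cmds module (the object names are made up, and it runs only inside Maya’s script editor):

    # Runs inside Maya only; maya.cmds is Maya's Python command module.
    import maya.cmds as cmds

    # Tedious by hand, trivial in a script: a row of ten evenly spaced cubes.
    for i in range(10):
        cube, _ = cmds.polyCube(name="brick_{}".format(i))
        cmds.move(i * 2.5, 0, 0, cube)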

set dressing • The phase of layout in which the modeling aspect of a scene is fully developed to enhance its credibility. The artist fine-tunes the layout and adds incidental geometry. E.g., an interior scene needs household objects, known as entourage. An outdoor scene needs natural objects such as rocks and leaves. Set dressing can be a combination of manual and procedural techniques.

skinning • A rigging process of setting up complex deformations, usually on an organic character model. The surface or skin of the character is deformed by non-renderable control objects such as bones. Movement of the control objects causes the model to change shape.
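
A minimal Python sketch of the common linear blend skinning approach: each deformed vertex is a weighted mix of the results of every bone that influences it (translation-only bones here, for clarity; real skinning uses full bone matrices):

    def skin_vertex(rest_pos, influences):
        """Blend a vertex position by bone weights.
        influences is a list of (bone_offset, weight) pairs whose weights sum to 1."""
        x, y, z = 0.0, 0.0, 0.0
        for (ox, oy, oz), weight in influences:
            x += (rest_pos[0] + ox) * weight
            y += (rest_pos[1] + oy) * weight
            z += (rest_pos[2] + oz) * weight
        return (x, y, z)

    # An elbow vertex influenced 70/30 by the upper and lower arm bones
    print(skin_vertex((1.0, 0.0, 0.0),
                      [((0.0, 0.5, 0.0), 0.7), ((0.0, 2.0, 0.0), 0.3)]))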

shader • 1. A specific shading node type, often called a material node. It holds shading attributes such as base color and transparency. 2. An entire material definition, including all nodes in the shading network, such as maps and utility nodes. 3. A generic shading algorithm that describes how a 3D surface responds to simulated light. E.g. Blinn shader. 4. A program, plugin, or script that generates an image, map, shading properties, animation, or even a full 3D scene. Scripted shading languages in common use today include GLSL, HLSL, and OSL.

smoothing • 1. A rendering algorithm that approximates smooth surfaces on mesh objects. Without smoothing, all polygon models would have a faceted appearance. Also known as edge smoothing or face smoothing. Usually accomplished through the orientation of vertex normals. 2. A subdivision surface algorithm.

solid model • A type of 3D model that simulates solid objects with physical properties such as mass. Commonly used in CAD, almost never used in media and entertainment applications. See surface model.

specular • A type of highlight on a surface. Glossy surfaces have small, intense specular hotspots. Rough, matte surfaces have large, dim specular highlights, or none at all. Specular highlights are sometimes treated separately from mirror reflections.

spline • In computer graphics, a line whose curvature is determined by control vertices.

subdivision surface • A modeling and rendering algorithm to increase level of detail. A polygon mesh is subdivided (tessellated) into smaller polygons, and the angles of adjacent faces are averaged. The original Catmull-Clark subdivision algorithm has evolved into an open standard called OpenSubdiv.

supersampling • A rendering technique or algorithm for antialiasing. An image is rendered at a resolution higher than the delivery format. Then it is scaled down to the delivery resolution. This process averages and filters the image data, and can compensate for rendering problems such as aliasing and moire patterns. Also known as oversampling.
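
A minimal Python sketch of the downscaling step: an image rendered at twice the delivery resolution is reduced by averaging each 2×2 block of pixels (grayscale values only, for brevity):

    def downsample_2x(image):
        """Average each 2x2 block of a grayscale image (a list of rows)."""
        result = []
        for y in range(0, len(image), 2):
            row = []
            for x in range(0, len(image[y]), 2):
                block = (image[y][x] + image[y][x + 1] +
                         image[y + 1][x] + image[y + 1][x + 1])
                row.append(block / 4.0)
            result.append(row)
        return result

    # A hard black/white edge becomes a softer, antialiased gray
    rendered = [[0, 0, 255, 255],
                [0, 0, 255, 255],
                [0, 255, 255, 255],
                [0, 255, 255, 255]]
    print(downsample_2x(rendered))  # [[0.0, 255.0], [127.5, 255.0]]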

surface model • A type of 3D model that only accounts for the outer surface. The volume within the surface is an empty void. A polygon mesh is a type of surface model. See solid model.

surface normal • On a 3D model, a non-rendering line that projects out from one side of a face, perpendicular to the surface. Also known as a face normal or polygon normal. Orientation of the surface normal determines which side of the face is renderable. A backface is a face whose normal is pointed away from the camera. Backface culling optimizes rendering calculations by ignoring surfaces that aren’t pointed toward the camera.
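
A minimal Python sketch of how a face normal is computed: the cross product of two edge vectors of a triangle, normalized to unit length (the winding order of the vertices determines which side the normal points away from):

    import math

    def face_normal(a, b, c):
        """Unit normal of the triangle (a, b, c), following its winding order."""
        e1 = [b[i] - a[i] for i in range(3)]  # edge a -> b
        e2 = [c[i] - a[i] for i in range(3)]  # edge a -> c
        n = [e1[1] * e2[2] - e1[2] * e2[1],   # cross product e1 x e2
             e1[2] * e2[0] - e1[0] * e2[2],
             e1[0] * e2[1] - e1[1] * e2[0]]
        length = math.sqrt(sum(v * v for v in n))
        return [v / length for v in n]

    # A triangle lying flat in the XY plane has a normal along +Z
    print(face_normal((0, 0, 0), (1, 0, 0), (0, 1, 0)))  # [0.0, 0.0, 1.0]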

surface of revolution • A 3D modeling technique that revolves a spline around an axis. Commonly employed to create objects such as bottles. Also known as a lathe.

tessellation • In computer graphics, the division of polygons or faces into smaller ones. One type of tessellation is the subdivision surface algorithm.

texture map • A bitmap or algorithm to render surface color. Also known as a base color or diffuse color map. Ironically, texture maps do not simulate roughness. Relief maps produce the illusion of roughness.

topology • 1. In technical CGI usage, the mathematical structure of a 3D model. Specifically, the connections among components. 3D models must obey certain technical rules of topology, or else the model may become corrupt. 2. In common CGI usage, the connections among components that influence fidelity. E.g. an optimally constructed subdivision surface mesh exhibits good edge flow, avoids polygons with more than four sides, and minimizes the number of poles. A mesh with these qualities would be said to have “good topology”.

transform • Mathematical reassignment of points to new locations. It’s the abbreviated form of transformation. The three transforms are position (also known as translation), rotation, and scale. These control the location and orientation of objects in a graphics application.

UV • Mapping coordinates applied to surfaces, defining the placement of maps. U and V are the horizontal and vertical dimensions of a 2D image. Vertices on polygon models carry UV data to place 2D maps onto 3D surfaces. UV unwrapping and UV layout are production processes of precisely defining UV coordinates.

vector • 1. In computer graphics, a curve or line in 2D or 3D space. See vector graphic. 2. In mathematics, a straight line with a length and an orientation. May be defined by any two points in space. Also commonly defined by a single point, one or two angles, and a distance. 3. In Maya, a specific data type: a group of three numbers, such as the X, Y, and Z positions of a point.

vector graphic • Method for calculating or displaying data based on lines and curves rather than pixels or voxels. 3D models, 2D illustration paths, and digital typefaces are all examples of vector graphics. 3D models are constructed in 3D vector space and projected into 2D pixel space for display. Usually the term “vector graphic” refers to 2D formats such as Illustrator .AI, .SVG (Scalable Vector Graphics), .EPS, and fonts. Other historical examples of vector graphics: computer plotters, test equipment such as oscilloscopes, vector displays for aviation, space, science and medicine, and video games such as Tempest and Battlezone.

vertex • A fancy word for “point”. Points are merely markers in space; they have no dimension whatsoever – no length, no area, no volume. Vertices serve many functions in 3D graphics, such as defining the shapes of models and the values of keyframes.

vertex normal • Lines pointing out from each vertex of a 3D model. The orientation of vertex normals determines how much light the surrounding surface can receive. If a vertex normal is pointing toward a light source, then the surface is illuminated. A vertex usually has several normals, one for each face shared by the vertex. Vertex normals are important in edge smoothing. If all normals on a single vertex are pointing in the same direction, the renderer will smooth the connected edges. Otherwise, the renderer will not smooth the edges, resulting in a faceted appearance.

volumetric • Generally, any calculation of a 3D space. Specifically, a type of 3D algorithm that divides space into cubes called voxels. The volumetric space is a bitmap in three dimensions. Contrast with polygon models, which are vector-based. Voxels carry many properties such as density, color, and temperature. Volumetrics work well to simulate fluids such as liquids, clouds, smoke, and fire.

voxel • Contraction of volume element. Compare to pixel. A voxel is a cubic section of a volume, used for calculating volumetric effects such as fluid dynamics.

z-buffer • A pixel-based method for managing depth information in a 3D scene. Optimizes rendering by determining which surfaces to render, and in what order. For example, fully occluded surfaces are ignored. The z-buffer is a high dynamic range image in which each pixel records the distance from the camera to the closest surface. When rendered to an appropriate HDR file format such as .EXR, it enables clever image manipulation in post-production. A commonly added post effect is distance blur, for a shallow depth of field.
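
A minimal Python sketch of the depth test at the heart of a z-buffer: a new surface sample is kept only if it is closer to the camera than whatever was previously stored for that pixel (resolution and values are illustrative):

    WIDTH, HEIGHT = 4, 3
    FAR = float("inf")

    # One depth value per pixel, initialized to "infinitely far away"
    zbuffer = [[FAR] * WIDTH for _ in range(HEIGHT)]
    color = [[(0, 0, 0)] * WIDTH for _ in range(HEIGHT)]

    def write_sample(x, y, depth, rgb):
        """Keep the sample only if it is the closest surface seen so far."""
        if depth < zbuffer[y][x]:
            zbuffer[y][x] = depth
            color[y][x] = rgb

    write_sample(1, 1, 10.0, (255, 0, 0))  # distant red surface
    write_sample(1, 1, 4.0, (0, 0, 255))   # nearer blue surface occludes it
    print(color[1][1], zbuffer[1][1])      # (0, 0, 255) 4.0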


