Brink technical analysis

This post describes some of the technical details of Brink. The PC version of Brink allows a unique look under the hood through various developer console commands (see the end of this post for a list of interesting commands). I previously worked on the idTech 4 engine at Splash Damage when they were developing Enemy Territory: Quake Wars (ETQW), so this is also a look at how the idTech 4 engine (Doom 3, Quake 4, ETQW, Wolfenstein, Prey) has evolved over the last few years to bring it up to date with the latest rendering technologies. It is important to note that all the information in this post is speculation, derived by playing around with the game and the developer console commands.

One of the most interesting new technologies in Brink is the virtual texturing (VT) technology. Although there hasn’t been much public attention to this, Brink is one of the first games to use VT in a released commercial product. Other companies have announced games using this technology, but those projects are still in (pre-)production. VT is a technology that lets the engine intelligently decide which parts of a texture to load at which resolution: distant textures can be loaded at a lower resolution, and if only part of a texture is visible, only that part needs to be loaded. This gives artists more freedom to use textures and frees them from a lot of texture management tasks. This post assumes some familiarity with VT; if you are new to this, there are lots of articles describing the tech.
We will first describe the VT implementation used in Brink and will then compare it to the technology used by other games.

Virtual Texturing Analysis


Visualization of the virtual texture tiles.
Low res preview of the ‘per-level’ virtual texture.

Brink’s renderer seems to use two virtual textures: one for the current map and one for the weapons and characters. The level textures are usually 32k x 32k, while the characters texture is 64k x 64k. Textures have several channels: diffuse color (RGB), specular color (RGB), normals (XY) and a specular exponent. These are stored on disc in a custom DCT-based compression format. The textures are split into tiles of 120 x 120 pixels; to ensure high quality texture filtering at the tile borders, tiles have a 4-pixel overlap on each side, so the total data in each tile is 128 x 128 pixels.
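The tile/border arithmetic above can be sketched in a few lines. This is my own illustration of the numbers from the analysis (the constant names are mine, not Brink's):

```python
# Each 128x128 stored tile carries a 120x120 payload plus a 4-pixel border
# on every side, so texture filtering can sample across tile edges without
# fetching the neighbouring tile.

TILE_STORED = 128   # pixels stored per tile side
TILE_BORDER = 4     # filter padding on each side
TILE_PAYLOAD = TILE_STORED - 2 * TILE_BORDER  # 120 usable pixels

def tiles_for_level(virtual_size: int) -> int:
    """Number of tiles along one axis of a virtual texture mip level."""
    # Round up: the last tile may be only partially used.
    return -(-virtual_size // TILE_PAYLOAD)

print(TILE_PAYLOAD)            # 120
print(tiles_for_level(32768))  # 274 tiles per axis for a 32k level texture
```

So a 32k x 32k level texture needs roughly 274 x 274 tiles at the finest mip level, before counting the coarser mips.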
The tiles from these textures are then copied to a set of cache textures stored on the GPU. The tiles stored on disc are coded using a custom DCT-based compression format (a la JPEG) and transcoded to DXTn textures on the fly. This is probably very similar to the ETQW implementation, which is available as open source here (DXT encoding) and here (JPEG decoding). On the GPU, the diffuse and specular colors use the DXT1 texture format (4 bpp each). Normals are a bit more involved: they are combined into a single DXT5 texture together with the specular exponent. Normal X is stored in the alpha channel, which has the advantage that the alpha channel is coded separately from the RGB data, with 8 bits of precision (a similar technique was used by Doom 3). Normal Y is stored in the green channel (which has 6 bits in 565 color formats) and the specular exponent is stored in the red channel (i.e. with 5 bits of precision). The blue channel is not used; the normal Z channel can be reconstructed in the shaders. This setup leads to a total of 2 bytes per pixel in the tile cache. The tile cache itself seems to be 8192 x 8192 on my GeForce 480, leading to a 128 MB footprint for the whole cache. It is obvious that on consoles the cache will have to be much smaller, e.g. Lionhead’s Mega Meshes (discussed below) use a 4096 x 2048 texture (on a 360?).

All “solid” surfaces in Brink use the virtual texturing system. This means that all level geometry, game objects, etc. can be textured from this single 8192² texture, independent of the total amount of textures used by all the objects in the level. Only the players (see below) and transparent surfaces such as particles and overlays (some graffiti etc.) use “traditional” hardware textures. Besides game assets there are of course a whole bunch of render-target textures for various post effects.
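To make the packing concrete, here is a hypothetical sketch of the normal reconstruction and the cache footprint math described above (the function is my own illustration of the general technique, not Brink's shader code):

```python
import math

# DXT5 alpha holds normal X (coded separately, 8-bit quality), green holds
# normal Y (6 bits in 565), red holds the specular exponent (5 bits); blue
# is unused and Z is rebuilt in the shader from X and Y.

def decode_normal(alpha: float, green: float):
    """Rebuild a unit normal from the two stored channels (values in 0..1)."""
    x = alpha * 2.0 - 1.0  # expand from [0,1] to [-1,1]
    y = green * 2.0 - 1.0
    # Tangent-space normals point away from the surface, so Z is positive.
    z = math.sqrt(max(0.0, 1.0 - x * x - y * y))
    return (x, y, z)

# Two DXT1 tiles (4 bpp each) + one DXT5 tile (8 bpp) = 16 bpp = 2 bytes/pixel,
# so an 8192 x 8192 tile cache costs:
cache_bytes = 8192 * 8192 * 2
print(cache_bytes // 2**20)  # 128 (MB), matching the footprint above
```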



Low res preview of the ‘objects’ virtual texture.

In the introduction I mentioned that the engine “intelligently selects” the set of texture tiles to load. Brink determines this set by rendering the scene to a low resolution render target where every pixel encodes the texture tile needed. It seems to be statically sized at 96 x 64 pixels (around 1/16th of the screen resolution in each dimension). Since the values stored in this buffer may exceed the 0-255 range, it is allocated as a floating point texture. Although bit packing could help to fit everything in a 32 bpp image, the buffer is sufficiently low res that it doesn’t matter much. I assume it then uses the traditional method of reading this buffer back to the CPU and analyzing the pixel buffer there.
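The CPU-side analysis of such a feedback buffer typically looks like the following sketch. The names and the MAX_MIP value are my assumptions for illustration, not taken from Brink:

```python
MAX_MIP = 8  # assumed coarsest mip level of the virtual texture

def analyze_feedback(pixels):
    """pixels: iterable of (page_x, page_y, mip) tuples read back from the GPU.

    Returns the set of tiles that must be resident. Coarser parent tiles are
    requested too, so there is always something to sample while the fine
    tiles are still streaming in.
    """
    needed = set()
    for x, y, mip in pixels:
        while mip <= MAX_MIP:
            needed.add((x, y, mip))
            x >>= 1  # the parent tile covers twice the area per axis
            y >>= 1
            mip += 1
    return needed

# One visible pixel at tile (4, 6) of mip 6 also pins its mip 7 and 8 parents:
print(sorted(analyze_feedback([(4, 6, 6)])))  # [(1, 1, 8), (2, 3, 7), (4, 6, 6)]
```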
To map the tiles in the virtual address space to the cache texture space, a translation table is used. Brink has two translation tables (one for each of the two virtual textures). The translation table uses an RGBA texture format.
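A minimal sketch of such an indirection table is shown below. The RGBA channel assignment here is one I made up for illustration; the actual layout used by Brink is not documented:

```python
# Each virtual page maps to the cache tile that currently holds its texels
# (or, in a full implementation, to a coarser resident fallback tile).

def make_entry(cache_x, cache_y, mip):
    """Pack a table texel as (R, G, B, A): cache tile coords, resident mip, unused."""
    return (cache_x, cache_y, mip, 255)

def translate(table, page_x, page_y):
    """Look up where a virtual page lives in the physical tile cache."""
    cache_x, cache_y, mip, _ = table[(page_x, page_y)]
    return cache_x, cache_y, mip

# Virtual page (5, 9) is currently resident at cache tile (17, 3), mip 6:
table = {(5, 9): make_entry(17, 3, 6)}
print(translate(table, 5, 9))  # (17, 3, 6)
```

In the real renderer this table is a texture sampled by the pixel shader, so the lookup is a texture fetch rather than a dictionary access.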
It appears that the VT in Brink is used to store traditionally produced assets in an efficient manner, i.e. several unwraps are packed together in one virtual texture and then loaded from this texture at run time. These textures are then reused and tiled throughout the world. This seems to go against the promise of VT of “unique textures everywhere”, but this is probably a production time, disc space and game world size tradeoff rather than an actual technical limitation.
One final note is that the mip bias seems to be set slightly too aggressively (at -2 by default on my machine), leading to more texture resolution being loaded than strictly needed. Setting it to -1 (“vt_lodBias -1” on the console) offers an easy streaming boost at almost no visual quality reduction (theoretically 0 should be enough, but then you start to clearly see the mipmap transitions).
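A back-of-envelope estimate of what that bias costs (an upper bound: the top of the mip chain is clamped, so real savings are somewhat smaller):

```python
# Each step of negative lod bias selects one mip level finer, and each finer
# mip level holds 4x the texels of the one above it.

def relative_tile_data(lod_bias: int) -> float:
    """Tile data resident relative to a bias of 0 (upper bound)."""
    return 4.0 ** (-lod_bias)

print(relative_tile_data(-2))  # 16.0 -- the default on my machine
print(relative_tile_data(-1))  # 4.0  -- the suggested setting, 4x less data
```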

Comparing Brink tech with ETQW tech (Megatexture)

The main difference between the virtual texturing technology found in Brink and the “Megatexture” technology found in ETQW is that it can be used to texture arbitrary geometry. The ETQW tech was mainly limited to planar-esque geometry (i.e. landscapes without many overhangs and caves); the rest of ETQW (particularly the indoor scenes) still used traditional textures (i.e. lots of separate texture files loaded at level start-up). Another important difference is that Brink streams full diffuse, specular and normal information, while ETQW used only a single RGB diffuse map with a precomputed dot product to the sun stored in the alpha channel.
Probably driven by Brink’s close-quarters gameplay, it doesn’t have the large open landscapes of ETQW. Open landscapes are really where VT technology could shine; I guess we’ll have to wait for Rage for that :).
On a side note, the landscapes shouldn’t be too open with the current generation of VT (as also described by Carmack here…). This is logical since these files do get pretty big: several hundreds of megabytes per level. Using these for very large worlds is just not feasible on today’s consoles; nobody wants to get up and put a new disc in their console whenever they go to another neighbourhood or forest.

Comparing Brink tech with Rage tech

As far as I know, Brink and Rage use very similar technologies, although it is likely they were developed in parallel rather than sharing the same code. As mentioned above, the main difference is probably the “artistic” use of the technology and the production tools. While the tools are not as advanced as Lionhead’s Mega Meshes, Rage’s tools seem to allow a lot of freedom to let the artists uniquely cover the world in detail. Also, the Rage tech seems to have a lot more research and optimizations devoted to it, e.g. the support for CUDA “Compute Shaders” acceleration (as described here) on PC hardware. This additional tech may help with speeding up the streaming process and thus reducing visual popping.

Comparing Brink tech with Lionhead’s Mega Meshes tech

Lionhead recently described their Mega Meshes (MM) system (link). MM is basically a virtual texturing system with a clever production tool that helps to produce the textures: artists can sculpt any part of the world and the system then bakes it out to tiled images for the renderer. The tech they describe seems very similar to Brink’s. Again it boils down to artistic use of the texture; they seem to focus more on unique detail, and their clever baking tool could certainly help with making the production of that detail economically viable.

Character customization

The character customization system does not seem to use the virtual texturing tech. Instead it uses a dynamic texture composition system to create the character textures. The game preallocates 16 slots (for the maximum of 16 players); for every slot, two sets of diffuse, specular and normal textures are allocated: one set for the character base (face, torso, legs) sized 1024² and one set for any optional attachments (hats, hair, facemasks, …) sized 512². These textures are then updated at run time depending on the character customization. For example, tattoos are blended over a base skin, an optional T-shirt is then blended over the tattoos, and so on.
I assume a similar approach is used for the models, being careful not to break the texture coordinates between the different versions of a model. E.g. there are versions of the face with and without beard geometry, but apart from the beard itself they use exactly the same topology and unwrap.
By mixing and matching models and textures this way, the characters are efficient to render (around two batches per character, not counting any optional batches for transparencies etc.). The blended textures are also dynamically DXT compressed. They use a slightly different channel layout compared to the virtual textures, storing the exponent in the specular alpha, leading to a total cost of 20 bits per pixel for the character textures. For all 16 characters this amounts to around 66 megabytes of texture data at run time (mipmaps and everything).
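That 66 megabyte figure checks out with a back-of-envelope calculation, assuming full mip chains and the channel layout described above:

```python
# Per player: one 1024^2 base set and one 512^2 attachment set, each set
# holding diffuse (DXT1, 4 bpp), specular + exponent (DXT5, 8 bpp) and
# normals (DXT5, 8 bpp) -> 20 bits per pixel. A full mip chain adds ~1/3
# (geometric series 1 + 1/4 + 1/16 + ...).

PLAYERS = 16
BITS_PER_PIXEL = 4 + 8 + 8             # 20 bpp across the three textures
PIXELS_PER_PLAYER = 1024**2 + 512**2   # base set + attachment set
MIP_FACTOR = 4.0 / 3.0

total_bytes = PLAYERS * PIXELS_PER_PLAYER * BITS_PER_PIXEL / 8 * MIP_FACTOR
print(round(total_bytes / 2**20))  # 67 -- close to the ~66 MB estimate above
```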


The original idTech 4 engine used a multipass lighting approach where an additive pass was drawn for every surface affected by a light source, and shadowed the world with the stencil buffer technique. Brink seems to have replaced this system with deferred shading and shadow maps; a single shadow map seems to be reused for all the lights. To accumulate the lights they use a traditional deferred buffer setup with four render targets: depth 24, RGB diffuse + alpha helper, RGB specular + alpha exponent, normal XYZ + alpha helper.
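For a sense of scale, here is the memory cost of that G-buffer setup, assuming (my assumption, not measured) a 1920x1080 render resolution and 32 bits per pixel for each target:

```python
# Four 32-bit render targets: depth24(+stencil), diffuse+helper,
# specular+exponent, normal+helper.
TARGETS = 4
BYTES_PER_PIXEL = 4
WIDTH, HEIGHT = 1920, 1080  # assumed render resolution

gbuffer_bytes = TARGETS * BYTES_PER_PIXEL * WIDTH * HEIGHT
print(round(gbuffer_bytes / 2**20, 1))  # ~31.6 MB of render targets
```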
Other than that, the lighting seems pretty straightforward. A lot of the distinct Brink look seems to come from clever color grading in post processing. It also seems that some of the surfaces use diffuse color ramps (as described here…)


The different deferred shading buffers.

Forensic console commands

To finish this post, here are some interesting console commands you can use to check some of the cool tech out for yourself.

  • clearVirtualCache: Clear all the tiles in the cache and reload the needed tiles.
  • vt_showPageNumbers 0/1: Bake the page borders and page numbers into the loaded pages (used to make the screenshot shown above). Only newly loaded pages will be updated so use clearVirtualCache afterwards.
  • writeVirtualPageFile: Write a virtual texture to a .tga file. Note that this will crash if you use level 0 because the game can’t allocate enough memory to contain the whole virtual texture at once. I guess they have an in-house 64 bit version for that 😉 Levels 2 – n work fine though.
  • writeImage: Write any named texture in the game. This can be used to write textures loaded onto the GPU to a .tga file. The interesting part is that it works on any texture, including render targets and the virtual texturing helper textures such as the cache.