The winter isn’t over, but our Winter Fan Jam is! For those unfamiliar, we held a modding jam over December and January, prompting the modders to explore the themes of Winter and/or Hibernation.
The jam produced a total of 6 mods: 3 using Amnesia: The Dark Descent (HPL2) as a basis, and 3 using SOMA (HPL3). We have picked a winner for each engine, meaning two in total! The winners will receive a key for the upcoming game, as well as prints signed by the Malmö portion of the Frictional team.
Since the number of submissions was manageable, the other submissions will also be featured below, as will asset contributions.
Once again – huge thanks to all participants for telling new stories and exploring new themes! If you couldn’t finish or participate, we hope to see you in the next modding jam. And for those in other disciplines – we will have a jam for you, too, at a later date!
You pay little mind to the reports of bear attacks in the mountains, until one day your wife goes missing. But is it really the bears, or is there something else?
OddStuff’s Permafrost Mountain shows that a story doesn’t have to be long to be effective. The puzzles are hard enough to make you think but make sense, giving you that little a-ha! moment. A rambling note of a lunatic sets a great tone and stays with you afterwards. And of course, there’s the bears…
Design by TiMan, writing by Darkfire and Slanderous.
Sound effects by PaulDB.
A nuclear attack that has been pulled back. A counterattack that hasn’t. You are awoken to prevent a catastrophe, but will you make it in time?
TiMan has established himself as a champion of modding with a second win in a row. This time he’s joined by Darkfire and Slanderous to form a powerhouse of veteran modders.
Life Freezes Over lets you put together a haunting story using environmental storytelling and sparse notes. The maps are practical but beautiful, adding to the sense of unease. A custom enemy disrupts the familiarity of the research facility.
My name is Samuel Justice. I became audio lead here at Frictional Games in February of this year. However, I’ve been directly and indirectly involved with the studio for 3 years. I work from home in a place called Worthing in England – a small seaside town that’s fairly sleepy.
When I was about 10 or 11 I started to get almost unhealthily obsessed with videogames; this carried on throughout my teens. My mother played piano and my father the bass guitar and both enjoyed listening to a lot of music, so I was always surrounded by that as well.
During high school the obsession with games shifted from playing them to developing them. I would sit and help make Half-Life 2 mods and levels whilst watching whatever documentaries I could find about game makers. This became my own little escape. When I left school I went to college to do standard A-levels. College in the UK is unlike the USA – in the UK you can finish high school at 16 and study for 2 years at college, then move on to university. I hated what I was doing and after 5 weeks dropped out and joined a music production course, as I had no idea what I wanted to do longer term. It was during that course that I developed a passion for audio production and sound. And then it clicked! The obsession I had with making games and sound could finally cross paths, and I began to venture into sound design for games.
So during the nights and evenings I experimented, plugging sounds into these mods and seeing what my experiments produced. I joined a few modding teams during this time (Off-Limits, Nuclear Dawn and Iron Grip The Oppression being just a few that I helped sound design). I got really lucky and landed a small contract out of college through my mod links which sustained me for a year. After which I had no money left and saw vacancies in the police – the pay was okay and I saw it as a way to continue doing what I loved on the side and it was great because I was able to afford audio equipment with the pay as well! Not much, mind you.
I continued working in the police for a few years but never fully embraced it – it had never been an ambition of mine. About two years in I started to enjoy it more, and began to think that maybe the police was my calling after all. But I was wrong.
A Source modder got in touch and asked if I’d do audio for a title of his. I worked on that, and then the next title, and suddenly I was springboarding from one title to another. 3 months later I made the choice to leave the police: this was what I had been so desperate to do, and I wanted to grasp the opportunity!
This led to me finding myself audio lead on Amnesia: A Machine for Pigs and working with Frictional for the first time. I then joined the fantastic audio team at DICE in Sweden and worked on Battlefield 4 and a number of its expansions. After 18 months at DICE I was feeling quite homesick so I decided to return to the UK. Jens had taken the reins of maintaining Frictional Games as a company, and there was a gap that needed to be filled. I jumped on board knowing that SOMA was an extremely unique title and something that’s going to be quite special. I’d been working on SOMA in the background during my work on A Machine for Pigs. So even though I have only joined the studio full time recently, I have been involved in the title from the early stages.
And that brings me to today… here… typing this post to you guys.
Life at Frictional
What does an audio lead do on a daily basis at Frictional Games, I hear you ask? No? Well, tough, I’ll tell you anyway.
The bulk of my time is spent working directly on SOMA and making sure we can deliver the best-sounding game possible within the timeframe. But I also manage the small band of Frictional audio compadres. We have one sound designer (the great and mysterious Tapio Liukkonen), an intern/junior sound designer who goes by the name of Mike Benzie, and the composer Mikko Tarmia, who are all working extremely hard to make sure SOMA sounds great.
So my time is also split between managing their workloads, giving feedback, listening to their feedback and ideas, and keeping the lines of communication wide open – which is vital when working for a virtual studio.
Once those duties are taken care of I love to get my hands dirty and dive right in to create and implement sound for the game. To a lot of people sound design is a dark art – they understand the process of pointing a microphone at something, but how does it go from that raw recording to a big sound effect… and then how does that get into the game?
The best analogy for creating a sound is to compare it to cooking – in-game sounds aren’t made from single source sounds, but instead mixed from multiple sources. We’ve created the SOMA sound library at Frictional which contains a large number of custom recordings for us to use as our ingredients.
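To make the cooking analogy concrete, here is a toy sketch (in Python, with made-up sample data, and nothing to do with our actual tooling) of layering two source recordings at different gains and normalizing the peak:

```python
def layer_sounds(sources, gains):
    """Mix several mono sample arrays into one, sample by sample."""
    length = max(len(s) for s in sources)
    mix = [0.0] * length
    for samples, gain in zip(sources, gains):
        for i, s in enumerate(samples):
            mix[i] += s * gain
    # Normalize so the loudest sample peaks at 1.0 if the mix clips
    peak = max(abs(s) for s in mix)
    if peak > 1.0:
        mix = [s / peak for s in mix]
    return mix

# Layer a low rumble with a sharper transient (hypothetical "ingredients")
rumble = [0.2, 0.3, 0.3, 0.2]
impact = [0.9, 0.1, 0.0, 0.0]
print(layer_sounds([rumble, impact], [1.0, 0.8]))
```

Real sound design of course works on thousands of samples per second and adds processing (pitch, EQ, reverb) on top, but the principle is the same: the final sound is a weighted sum of ingredients.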
So, when you have a library of ingredients, the second phase is to think and to ask questions. You need to gather an understanding of the sound you want to make. What kind of environment is it in? What kind of story do I want to tell with this sound? What other sounds does it affect?
Once this is done, the next stage is one of the most important – just listening to the source material. We use a program here called Basehead that is our SFX database and auditioner; with it we can type in (like Google) the kind of sound we want, and it’ll search the SOMA sound library and give us results (we also name the files ourselves, which makes it vitally important that they are named correctly and comprehensively). This is the “picking ingredients” stage. Once I’ve selected a few sounds that I think are interesting and which could convey the story I want to tell, I’ll drop them into my DAW (Digital Audio Workstation). I use a program called Nuendo – there are a lot out there (ProTools, Cubase, Logic, Fruity Loops etc.) and they all do the same basic thing. Using Nuendo I then manipulate each ingredient until I have something that resembles the sound in my head.
Now how do we get this in game? I’m sure many of you reading this are aware that Frictional has a proprietary engine and toolset called HPL (version 3 is being used for SOMA). However the audio side is handled by third-party software called FMOD. HPL and FMOD talk to each other and FMOD provides the toolset to import the sound and attach parameters to it (such as volume, how far away the player should hear it, should it have in game echo etc.). Once this is done, FMOD encodes and generates a file that HPL is then able to read – and we trigger the sound from that file using the scripting system in HPL. Thanks to the fact that HPL updates script on the fly, it makes it very easy to tweak a sound in Nuendo, drop it into FMOD and test it in the game without having to restart anything. Workflow chain is absolutely the most important part when it comes to implementation – otherwise it can take hours just to test a single sound.
So there we have it! Now leave me alone: I need to go away and make sounds that will contribute towards a national diaper shortage.
I’m Peter Wester and I have been an Engine Programmer here at Frictional Games since late 2011.
I work from my apartment in Stockholm, Sweden. I used to have a nice big desk, but after getting a PS4 Devkit it has become cramped.
My gaming interest started as a kid when my parents bought a Sega Megadrive and I became obsessed with Sonic the Hedgehog.
On my 12th birthday I got a program called Multimedia Fusion. It was a 2D game maker that didn’t need any coding knowledge. Instead you placed objects on a canvas and gave them existing behaviors to get them to move or collide. I used this to try and recreate my favorite 2D games. The most memorable one was a GTA clone with the goal of killing as many civilians as possible before the timer was up.
This got me interested in how games were made and I started to look for tools to modify other games. Me and my friend would replace all the voice acting in Worms with our own recordings or make custom maps for Counter-Strike.
It wasn’t until high school that I got into programming. After taking a programming course and learning basic C++ I downloaded the Doom 3 SDK and tried to understand the code; eventually I started helping out on a few overly ambitious mods that never got close to being released.
After high school I applied to a game development education at Stockholm University. It didn’t turn out to be the best education, but I met a lot of people and started making games from scratch. Three years later me and three of my friends dropped out and started a game company.
We made games for Android and iOS and I was in charge of game design and programming. After releasing two games we got the chance to go to China and meet up with a contact and start a subsidiary there. We made some money and got a few awards but after two years we decided to shut down the company to focus on other things.
I started looking for a job and saw a blog post about Frictional hiring an engine programmer. Knowing that Frictional had their own engine and that I wanted to focus on programming I decided to apply.
What do I do?
As an Engine Programmer I take care of the code that makes up the foundation of the game. The game is built on top of this. We’ve separated the engine and game code; this means that the engine can be used for multiple different games. In fact, most of the engine code used for Amnesia: The Dark Descent is still in HPL3 (our latest engine version) and could run the game with a few tweaks.
What an engine needs to provide is different for each title. For instance, SOMA requires a way to simulate physics, to render a believable 3D world, to play sound effects and to support fast iteration of level creation. My job is to make sure all those exist and work as they should.
An Engine Programmer’s job can be broken down into two basic parts: adding features and supporting existing features. Adding a new feature takes about 1-2 months and goes something like this:
When I added Depth of Field to the engine I started out by researching the subject. I read up on tech blogs and research papers to find the best implementations of Depth of Field. I decided to try out two versions, an expensive bokeh version and a more standard blur based one. After implementing both and getting feedback I decided to go with the blur based version since it was cheaper and fit with our underwater aesthetic. Once completed I added script functions and made a helper class so that the gameplay programmers could add it where it was needed.
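As a rough illustration of the blur-based approach (a hypothetical Python sketch, not the engine's actual implementation): blend a sharp image with a pre-blurred copy, weighted per pixel by how far that pixel's depth is from the focal plane.

```python
def coc_amount(depth, focus_depth, focus_range):
    """Circle-of-confusion factor in [0, 1]: 0 = in focus, 1 = fully blurred."""
    return min(abs(depth - focus_depth) / focus_range, 1.0)

def depth_of_field(sharp, blurred, depth, focus_depth, focus_range):
    """Blend a sharp and a pre-blurred image per pixel based on depth.
    Images are flat lists of intensities for simplicity."""
    out = []
    for s, b, d in zip(sharp, blurred, depth):
        t = coc_amount(d, focus_depth, focus_range)
        out.append(s * (1.0 - t) + b * t)
    return out

sharp   = [1.0, 0.5, 0.0]
blurred = [0.6, 0.5, 0.3]
depth   = [2.0, 5.0, 9.0]   # meters; focus at 5 m, falloff over 4 m
print(depth_of_field(sharp, blurred, depth, 5.0, 4.0))
```

The real effect works on full RGB render targets on the GPU, but the design choice is visible even here: one cheap blur pass plus a per-pixel blend, instead of the much more expensive per-pixel bokeh gathering.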
Some tech features also need to work in the editor. When I’m done with such a feature I hand it over to Luis who later adds it to the editor in a user-friendly way.
The closer a project gets to the end, the more of my time gets spent on supporting and improving the code. This could mean fixing bugs that have been reported or optimizing code to make the game run faster.
I’ll test the game on different hardware and make sure it runs as fast as it should. If it doesn’t I’ll try and figure out what’s causing the game to be slow and then find a solution to that problem.
Hey! Today I’m sharing the results of the last two weeks of work on the ParticleEditor with you. I’ve had loads of time for additions and improvements — I’m just gonna go over the most notable ones briefly:
Live update of particles: No more resetting the whole particle system to see the effect that little parameter you changed actually has!
Control of the update speed: Something looks weird but you can’t spot what it is? Just slow the whole thing down. Works every time.
Easing functions for fading values: We don’t have tweakable curve controls just yet, but these work like a charm in the meantime.
Helper graphs: A really nice addition so you can preview how fades are going to work. Together with easing functions, a slider and the live update, this is fun just to play around with.
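For the curious: easing functions are just remappings of a 0-1 fade parameter. A minimal sketch (hypothetical Python, not the editor's actual code) of a few common easings applied to a fading particle value:

```python
def linear(t):
    return t

def ease_in_quad(t):
    return t * t            # starts slow, ends fast

def ease_out_quad(t):
    return t * (2.0 - t)    # starts fast, ends slow

def fade(start, end, t, easing=linear):
    """Interpolate a particle parameter from start to end, t in [0, 1]."""
    return start + (end - start) * easing(t)

# Fade particle alpha from 1.0 to 0.0 with a soft start
print(fade(1.0, 0.0, 0.5, ease_in_quad))
```

Tweakable curves would generalize this by letting the artist draw the easing function instead of picking from a fixed set, which is why these are described as a stopgap.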
Of course, a little video works way better than words for showing these off, so here it is:
HPL3 is our first engine to support both PC and consoles. To make it easy to support multiple platforms and multiple shading languages we have decided to use our own shading language called HPSL. Shader code written in HPSL goes through a shader parser to translate it to the language used by the hardware.
The shader written in HPSL is loaded into the engine at runtime, the code is then run through a preprocess parser that strips away any code that is not needed by the effect or material. After that the stripped code is translated to the language used by the hardware (GLSL #330 on PC and PSSL on the PS4) and then compiled.
HPSL uses the same syntax as the scripting or engine code. HPSL is based on GLSL #330 but some of the declarations are closer to HLSL.
All the shader code used in SOMA is handwritten. In order to keep all the relevant code in the same place and to be able to quickly optimize shaders, HPL3 uses a preprocessing step, as it did for our previous games. A preprocessor goes through the code and removes large chunks that are not needed or used by the effect or material. For example, the lighting shader used in SOMA contains code for all the different light types: changing a preprocess variable can change a light from a point light to a spotlight, or can be used to enable shadow mapping. The preprocessor strips blocks of code that are not used, which increases performance, since code that has no visual effect is removed completely. Another feature of the preprocess parser is the ability to change the value of a constant variable, which can be used to change the quality of an effect:
// SSAO code
for(float d = 0.0; d < $kNumSamples; d += 4.0)
{
    // perform SSAO…
}
The preprocessor makes it easy to do complex materials with multiple textures and shading properties while only performing the heavy computations for the materials that need it.
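The two preprocessor jobs described above, stripping unused blocks and substituting constants, can be sketched like this (a toy Python version with a made-up @ifdef syntax, not HPL3's actual parser):

```python
import re

def preprocess(source, defines):
    """Strip @ifdef blocks whose variable isn't defined, then substitute
    $constants with their configured values (toy syntax, not HPL3's)."""
    out_lines = []
    keep = [True]
    for line in source.splitlines():
        stripped = line.strip()
        if stripped.startswith("@ifdef "):
            keep.append(keep[-1] and stripped.split()[1] in defines)
        elif stripped == "@endif":
            keep.pop()
        elif all(keep):
            out_lines.append(line)
    text = "\n".join(out_lines)
    # Replace $kName with its configured value
    return re.sub(r"\$(\w+)", lambda m: str(defines[m.group(1)]), text)

shader = """\
@ifdef UseShadows
ApplyShadows();
@endif
for(float d = 0.0; d < $kNumSamples; d += 4.0) {}"""
print(preprocess(shader, {"kNumSamples": 16.0}))
```

With UseShadows undefined, the shadow call disappears entirely and the sample-count constant is baked in, which is exactly why stripped shaders cost nothing for features they don't use.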
After the preprocess strips the code it is ready to be translated. In the first step all the variable types and special functions are converted to the new language. Then the main entry function is created and all the input and output is bound to the correct semantics. In the last step the translated code is scanned for textures and buffers, which get bound to the correct slots.
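The type-conversion step can be imagined as straightforward token mapping. A toy Python sketch (the type names are taken from the examples in this post; the real translator handles far more cases):

```python
import re

# Mapping from HPSL types to GLSL equivalents. Names on the left are
# inferred from the examples in this post; the real table is much larger.
HPSL_TO_GLSL = {
    "cVector2f": "vec2",
    "cVector3f": "vec3",
    "cVector4f": "vec4",
    "cMatrixf":  "mat4",
}

def translate_types(hpsl_code):
    code = hpsl_code
    for hpsl, glsl in HPSL_TO_GLSL.items():
        code = re.sub(r"\b%s\b" % hpsl, glsl, code)
    # HLSL-style mul(a, b) becomes an infix multiply in GLSL
    code = re.sub(r"\bmul\(([^,]+),\s*([^)]+)\)", r"(\1 * \2)", code)
    return code

print(translate_types("cVector4f v = mul(a_mtxModelViewProjection, vtx_vPosition);"))
```

A PSSL backend would simply be another mapping table plus its own semantic-binding rules, which is what makes adding new target languages cheap.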
The translated code is then compiled. If a compilation error occurs, the translated code is printed to the log file along with the error message and corresponding row for easy debugging.
In order to deliver the same visual experience to all platforms and to make development faster we decided on using our own shading language. The code is translated to the language used by the hardware and compiled at runtime. Supporting other shading languages in the future will be very easy since we only need to add another converter.
HPSL translates to GLSL #330, which requires OpenGL 3.3 (DirectX 10 feature set). This means that SOMA will require a DirectX 10 or newer graphics card.
Modders will still be able to write shader code directly in GLSL if they choose to.
HPSL uses the same syntax used by the scripting language. Its variable and texture types include:
32 bit signed integer
32 bit unsigned integer
Stores true or false
32 bit float
64 bit float
Vector of floats
Vector of signed integers
Vector of unsigned integers
Square float matrix
Non-square matrix (e.g. cMatrix2x4f)
Container of multiple variables that get set by the CPU
Single dimension texture
Standard 2D texture
A large single dimension texture used to store variables
A 2D render target with MSAA support
A shadow map texture used for comparison operations
Array of cTextureX textures
A texture contains both the image and information about what happens when it is sampled. If you are used to OpenGL/GLSL then this is nothing new. DirectX uses a different system for storing this information. It uses a texture for storing the data and a separate sampler_state that controls filtering and clamping. Using the combined format makes it easy to convert to either GLSL or HLSL.
Textures need to be bound to a slot at compilation time. Binding is done by using the “:” semantic after the texture name.
A constant value that must be initialized in the declaration and can’t be changed
Entry Point and Semantics
The entry point of a shader program is the “void main” function. Input and output of the shader are defined as arguments to this function. The input to the vertex shader comes from the mesh that is rendered: information like the position, color and uv mapping of a vertex. What the vertex shader outputs is user defined; it can be any kind of information that the pixel shader needs. The output of the vertex shader is what gets sent to the pixel shader as input, with the variables interpolated between the vertices of the triangle. The input of the pixel shader and the output of the vertex shader must match, or else the shaders won’t work together. Finally, the output of the pixel shader is what is shown on the screen. The pixel shader can output to a maximum of 4 different render targets at the same time.
Some of the input and output are system-defined semantics, which are set or used by the hardware. They include:
Vertex position output. Pixel shader input as screen position. This is required by all shaders
Vertex (out), Pixel (in)
Output color slot, where X must be in the range 0-3
Index of the current vertex
Index of the current instance
Index of the triangle this pixel belongs to
Indicates if the pixel belongs to the front or back of the primitive
Input to the vertex shader is user defined. HPL3 has a few user defined semantics that work with our mesh format.
Position of the vertex
Primary UV coord
Secondary UV coord
World space normal
World space tangent, w contains binormal direction
Index of the bones used to modify this vertex
Weight to multiply the bones with
It is possible to add more user defined semantics if needed:
uniform cMatrixf a_mtxModelViewProjection;

void main(in cVector4f vtx_vPosition,
          in cVector4f vtx_vColor,
          in cVector4f vtx_vTexCoord0,
          out cVector4f px_vColor,
          out cVector4f px_vTexCoord0,
          out cVector4f px_vPosition)
{
    px_vPosition = mul(a_mtxModelViewProjection, vtx_vPosition);
    px_vColor = vtx_vColor;
    px_vTexCoord0 = vtx_vTexCoord0;
}
uniform cTexture2D aColorMap : 0;

void main(in cVector4f px_vPosition,
          in cVector4f px_vColor,
          in cVector4f px_vTexCoord0,
          out cVector4f out_vColor : 0)
{
    out_vColor = px_vColor * sample(aColorMap, px_vTexCoord0.xy);
}
HPSL also provides a number of texture functions:
Performs texture lookup, compares it with the comparison value and returns the result
Gets the value of a texel at the integer position
Returns the width and height of the texture lod
Gets the lod that would get sampled if that uv coord is used
Gets the number of MipMap levels
It is also possible to use language-specific code directly. Some languages and graphics cards have functions that are more optimized for those systems, and in such cases it can be a good idea to write code specific to that language.
So I’m back with another HPL LevelEditor feature update, the Poser EditMode.
Back in the Amnesia days, whenever we wanted to add details which implied a unique skeletal model with different poses, we had to go all the way back to the modelling tool, set them up there and save them as different mesh files. So we pretty much ended up with lots of replicated and redundant data (the corpse piles come to mind as I’m writing this), plus the burden of having to prepare everything outside the editor.
So, how to fix this? This is where the poser mode comes in. In a nutshell, it takes a skeletal mesh and exposes its bones to be translated and rotated in whatever way you fancy. This is useful not only for organic and creature geometry, but also for creating details like cables, piping and anything you can add a skeleton to.
There’s not much to be added, so see it in action in this little video.
Some links in this article have expired and have been removed.
Screen space ambient occlusion (SSAO) is the standard solution for approximating ambient occlusion in video games. Ambient occlusion is used to represent how exposed each point is to the indirect lighting from the scene. Direct lighting is light emitted from a light source, such as a lamp or a fire. The direct light then illuminates objects in the scene. These illuminated objects make up the indirect lighting. Making each object in the scene cast indirect lighting is very expensive. Ambient occlusion is a way to approximate this by using a light source with constant color and information from nearby geometry to determine how dark a part of an object should be. The idea behind SSAO is to get geometry information from the depth buffer.
There are many publicised algorithms for high quality SSAO. This tech feature will instead focus on improvements that can be made after the SSAO has been generated.
SOMA uses a fast and straightforward algorithm for generating medium frequency AO. The algorithm runs at half resolution which greatly increases the performance. Running at half resolution doesn’t reduce the quality by much, since the final result is blurred.
For each pixel on the screen, the shader calculates the position of the pixel in view space and then compares that position with the view space position of nearby pixels. How occluded the pixel gets is based on how close the points are to each other and if the nearby point is in front of the surface normal. The occlusion for each nearby pixel is then added together for the final result.
SOMA uses a radius of 1.5m to look for nearby points that might occlude. Sampling points that are outside of the 1.5m range is a waste of resources, since they will not contribute to the AO. Our algorithm samples 16 points in a growing circle around the main pixel. The size of the circle is determined by how close the main pixel is to the camera and how large the search radius is. For pixels that are far away from the camera, a radius of just a few pixels can be used. The closer the point gets to the camera the more the circle grows – it can grow up to half a screen. Using only 16 samples to select from half a screen of pixels results in a grainy result that flickers when the camera is moving.
Blurring can be used to remove the grainy look of the SSAO. Blur combines the values of a large number of neighboring pixels. The further away a neighboring pixel is, the less impact it will have on the final result. Blur is run in two passes, first in the horizontal direction and then in the vertical direction.
The issue with blurring SSAO this way quickly becomes apparent: AO from different geometry leaks across boundaries, causing a bright halo around objects. Bilateral weighting can be used to fix the leaks between objects. It works by comparing the depth of the main pixel to the depth of the neighboring pixel. If the difference between the two depths is outside of a limit, the neighboring pixel is skipped. In SOMA this limit is set to 2cm.
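A one-dimensional sketch of bilateral weighting (hypothetical Python with made-up distance weights, not SOMA's exact kernel): neighbors beyond the depth limit are skipped, so AO cannot leak across geometry boundaries.

```python
def bilateral_blur_1d(ao, depth, radius, depth_limit=0.02):
    """Depth-aware 1D blur: neighbors on a different surface are skipped."""
    result = []
    for i in range(len(ao)):
        total, weight_sum = 0.0, 0.0
        for j in range(max(0, i - radius), min(len(ao), i + radius + 1)):
            if abs(depth[j] - depth[i]) > depth_limit:
                continue  # different surface: don't let AO leak across
            w = 1.0 / (1.0 + abs(i - j))  # nearer neighbors count more
            total += ao[j] * w
            weight_sum += w
        result.append(total / weight_sum)
    return result

# A dark (occluded) surface right next to a bright surface 1 m closer:
ao    = [0.4, 0.4, 1.0, 1.0]
depth = [3.0, 3.0, 2.0, 2.0]
print(bilateral_blur_1d(ao, depth, radius=1))
```

Without the depth test, the bright pixels would be pulled darker and the dark ones brighter, producing exactly the halo described above; with it, each surface is blurred only with itself.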
To get good-looking blur the number of neighboring pixels to sample needs to be large. Getting rid of the grainy artifacts requires over 17×17 pixels to be sampled at full resolution.
Temporal Filtering is a method for reducing the flickering caused by the low number of samples. The result from the previous frame is blended with the current frame to create smooth transitions. Blending the images directly would lead to a motion-blur-like effect. Temporal Filtering removes the motion blur effect by reverse reprojecting the view space position of a pixel to the view space position it had the previous frame and then using that to sample the result. The SSAO algorithm runs on screen space data but AO is applied on world geometry. An object that is visible in one frame may not be seen in the next frame, either because it has moved or because the view has been blocked by another object. When this happens the result from the previous frame has to be discarded. The distance between the points in world space determines how much of the result from the previous frame should be used.
Temporal Filtering introduces a new artifact. When dynamic objects move close to static objects they leave a trail of AO behind. Frostbite 2’s implementation of Temporal Filtering solves this by disabling the Temporal Filter for stable surfaces that don’t get flickering artifacts. I found another way to remove the trailing while keeping Temporal Filter for all pixels.
I came up with a new way to use Temporal Filtering when trying to remove the trailing artifacts. By combining two passes of cheap blur with Temporal Filtering all flickering and grainy artifacts can be removed without leaving any trailing.
When the SSAO has been rendered, a cheap 5×5 bilateral blur pass is run on the result. Then the blurred result from the previous frame is applied using Temporal Filtering. A 5×5 bilateral blur is then applied to the image. In addition to using geometry data to calculate the blending amount for the Temporal Filtering the difference in SSAO between the frames is used, removing all trailing artifacts.
Applying a blur before and after the Temporal Filtering and using the blurred image from the previous frame results in a very smooth image that becomes more blurred with each frame; it also removes any flickering. Even a 5×5 blur will cause the resulting image to look as smooth as a 64×64 blur after a few frames.
Because the image gets so smooth the upsampling can be moved to after the blur. This leads to Temporal Blur being faster, since running four 5×5 blur passes in half resolution is faster than running two 17×17 passes in full resolution.
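A back-of-envelope tap count makes the comparison concrete (assuming non-separable kernels; actual GPU cost depends on bandwidth and cache behavior, so the measured speedup is smaller than the raw ratio):

```python
def blur_taps(passes, kernel, width, height, scale=1.0):
    """Total texture taps for `passes` blur passes with a kernel x kernel
    window over a (width*scale) x (height*scale) image."""
    pixels = (width * scale) * (height * scale)
    return passes * kernel * kernel * pixels

w, h = 1920, 1080
half_res = blur_taps(4, 5, w, h, scale=0.5)   # four 5x5 passes, half resolution
full_res = blur_taps(2, 17, w, h)             # two 17x17 passes, full resolution
print(full_res / half_res)
```

The raw ratio comes out at roughly 23× fewer taps for the half-resolution variant; in practice other costs dominate, which is why the measured gain is closer to the "more than twice as fast" figure quoted below.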
All of the previous steps are performed in half resolution. To get the final result it has to be scaled up to full resolution. Stretching the half resolution image to twice its size will not look good. Near the edges of geometry there will be visible bleeding; non-occluded objects will have a bright pixel halo around them. This can be solved using the same idea as the bilateral blurring. Normal linear filtering is combined with a weight calculated by comparing the distance in depth between the main pixel and the depth value of the four closest half resolution pixels.
Combining SSAO with the Temporal Blur algorithm produces high quality results for a large search radius at a low cost. The total cost of the algorithm is 1.1ms (1920×1080, AMD 5870). This is more than twice as fast as a normal SSAO implementation.
SOMA uses high frequency AO baked into the diffuse texture in addition to the medium frequency AO generated by the SSAO.
Temporal Blur could be used to improve many other post effects that need to produce smooth-looking results.
Ambient Occlusion is only one part of the rendering pipeline, and it should be combined with other lighting techniques to give the final look.
// SSAO Main loop

// Scale the radius based on how close to the camera it is
float fStepSize = afStepSizeMax * afRadius / vPos.z;
float fStepSizePart = 0.5 * fStepSize / (2.0 + 16.0);

for(float d = 0.0; d < 16.0; d += 4.0)
{
    // Sample four points at the same time
    vec4 vOffset = (d + vec4(2, 3, 4, 5)) * fStepSizePart;

    // Rotate the samples
    vec2 vUV1 = mtxRot * vUV0;
    vUV0 = mtxRot * vUV1;

    vec3 vDelta0 = GetViewPosition(gl_FragCoord.xy + vUV1 * vOffset.x) - vPos;
    vec3 vDelta1 = GetViewPosition(gl_FragCoord.xy - vUV1 * vOffset.y) - vPos;
    vec3 vDelta2 = GetViewPosition(gl_FragCoord.xy + vUV0 * vOffset.z) - vPos;
    vec3 vDelta3 = GetViewPosition(gl_FragCoord.xy - vUV0 * vOffset.w) - vPos;

    vec4 vDistanceSqr = vec4(dot(vDelta0, vDelta0),
                             dot(vDelta1, vDelta1),
                             dot(vDelta2, vDelta2),
                             dot(vDelta3, vDelta3));

    vec4 vInvertedLength = inversesqrt(vDistanceSqr);

    vec4 vFalloff = vec4(1.0) + vDistanceSqr * vInvertedLength * fNegInvRadius;

    vec4 vAngle = vec4(dot(vNormal, vDelta0),
                       dot(vNormal, vDelta1),
                       dot(vNormal, vDelta2),
                       dot(vNormal, vDelta3)) * vInvertedLength;

    // Accumulate occlusion based on the angle to the normal and distance from the point
    fAO += dot(max(vec4(0.0), vAngle), max(vec4(0.0), vFalloff));
}

// Get the final AO by dividing by the number of samples
fAO = max(0.0, 1.0 - fAO / 16.0);
// Upsample Code
vec2 vClosest = floor(gl_FragCoord.xy / 2.0);
vec2 vBilinearWeight = vec2(1.0) - fract(gl_FragCoord.xy / 2.0);

float fTotalAO = 0.0;
float fTotalWeight = 0.0;

for(float x = 0.0; x < 2.0; ++x)
{
    for(float y = 0.0; y < 2.0; ++y)
    {
        // Sample depth (stored in meters) and AO for the half resolution pixel
        float fSampleDepth = textureRect(aHalfResDepth, vClosest + vec2(x, y));
        float fSampleAO = textureRect(aHalfResAO, vClosest + vec2(x, y));

        // Calculate bilinear weight
        float fBilinearWeight = (x - vBilinearWeight.x) * (y - vBilinearWeight.y);
        // Calculate upsample weight based on how close the depth is to the main depth
        float fUpsampleWeight = max(0.00001, 0.1 - abs(fSampleDepth - fMainDepth)) * 30.0;

        // Apply weight and add to total sum
        fTotalAO += (fBilinearWeight + fUpsampleWeight) * fSampleAO;
        fTotalWeight += (fBilinearWeight + fUpsampleWeight);
    }
}

// Divide by total sum to get final AO
float fAO = fTotalAO / fTotalWeight;
// Temporal Blur Code

// Get current frame depth and AO
vec2 vScreenPos = floor(gl_FragCoord.xy) + vec2(0.5);
float fAO = textureRect(aHalfResAO, vScreenPos.xy);
float fMainDepth = textureRect(aHalfResDepth, vScreenPos.xy);

// Convert to view space position
vec3 vPos = ScreenCoordToViewPos(vScreenPos, fMainDepth);

// Convert the current view position to the view position it
// would represent the last frame and get the screen coords
vPos = (a_mtxPrevFrameView * (a_mtxViewInv * vec4(vPos, 1.0))).xyz;
vec2 vTemporalCoords = ViewPosToScreenCoord(vPos);

// Get the AO and depth from the last frame
float fPrevFrameAO = textureRect(aPrevFrameAO, vTemporalCoords.xy);
float fPrevFrameDepth = textureRect(aPrevFrameDepth, vTemporalCoords.xy);

// Get the view space position of the temporal coords
vec3 vTemporalPos = ScreenCoordToViewPos(vTemporalCoords.xy, fPrevFrameDepth);

// Weight based on distance to last frame position (removes ghosting artifact)
float fWeight = distance(vTemporalPos, vPos) * 9.0;
// And weight based on how different the amount of AO is (removes trailing artifact)
// Only works if both fAO and fPrevFrameAO are blurred
fWeight += abs(fPrevFrameAO - fAO) * 5.0;

// Clamp to make sure at least 1.0 / FPS of a frame is blended
fWeight = clamp(fWeight, afFrameTime, 1.0);

fAO = mix(fPrevFrameAO, fAO, fWeight);
Hey there! So I finally dropped by to write a quick post on our stuff after quite a long hiatus (exactly 3 years, 3 months and 8 days according to my last post), during which Thomas pretty much took over the blog. Right now most of you might be wondering who's even typing. The name is Luis, and I mainly code the tools we use here at Frictional (also the ones you might have been using for making your own Amnesia custom stories). After the quick introduction, here comes a little editor feature showoff.
If you ever used the Amnesia tools yourself, it’s more than probable that you already know about this little buddy here:
One could say it is functional. I mean, you can actually pick a color with it. But it had a little problem: it sucked, and real bad, I must add.
When you are editing a level and need to pick a color for anything, be it the diffuse color for a light or the color multiplier for some illumination texture, you are probably going to need a system that allows for quick tweaking without much sweat. This was definitely not the case with our old color picker, for two main reasons: it only supported the RGB color space, and it lacked an immediate preview of changes.
Selecting a color in the RGB color space is pretty straightforward, you might think. And it is indeed: you only need to set the values for the Red, Green and Blue components and you are all set. That's it. Okay, but what if you need to choose among shades of a particular color tonality? How are you supposed to know which RGB values to set for that? Summing up, RGB is a pretty unintuitive system when it comes to editing, so we pretty much had to update the color picker to support the HSB color model as well.
This model describes colors as follows:
the H value, ranging from 0 to 360, controls the hue for the resulting color;
S, which stands for saturation and ranges from 0 to 100, indicates, in layman's terms, how washed out the color will look, with 0 being white and 100 equalling the original hue color (for a B value of 100);
and finally B, standing for brightness and ranging from 0 to 100 as well, is quite self-explanatory: it sets how bright the final color will be, with 0 meaning black and 100 meaning the color set by the other two parameters.
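As a sketch of the model described above: Python's standard `colorsys` module implements this same conversion (it calls the model HSV rather than HSB). The `hsb_to_rgb` wrapper and its value ranges here are just for illustration, not part of the HPL tools:

```python
import colorsys

def hsb_to_rgb(h, s, b):
    """Convert H in [0, 360], S and B in [0, 100] to 8-bit RGB."""
    # colorsys expects all three components normalized to [0, 1]
    r, g, bl = colorsys.hsv_to_rgb(h / 360.0, s / 100.0, b / 100.0)
    return tuple(round(c * 255) for c in (r, g, bl))

print(hsb_to_rgb(0, 100, 100))  # full saturation and brightness at hue 0: pure red
print(hsb_to_rgb(0, 0, 100))    # zero saturation: white, regardless of hue
print(hsb_to_rgb(120, 100, 50)) # hue 120 at half brightness: a dark green
```

This makes the "S = 0 is white, B = 0 is black" behavior from the list above easy to verify interactively.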
As for the immediate preview, anyone who used the old picker knows that you had to exit the picker in order to apply and see the changes. When you need to tweak things, which involves constantly changing values and checking out how it looks, this setup is bound to get old and annoying way quicker than you would want it to. To fix this, the new picker can change the target color on the fly, letting the user see the end result in the scene while modifying the values.
Another upgrade that speeds things up a lot is going as visual as possible. No matter which parameter you are editing, being able to change it via a slider or some other graphical tool that you can click on will always help tons. The drawback of these, though, is that it’s very likely that you spend more time on them than needed just because they are so fun to use.
After all this long-ish introduction, just say hi to our new color picker:
Originally posted by Peter. Some links in this article have expired and have been removed.
Linear-space lighting is the second big change that has been made to the rendering pipeline for HPL3. Working in a linear lighting space is the most important thing to do if you want correct results.
It is an easy and inexpensive technique for improving image quality. Working in linear space is not something that makes the lighting look better; it makes it look correct.
Notice how the cloth in the image to the right looks more realistic and how much less plastic the specular reflections are.
Doing math in linear space works just as you are used to. Adding two values returns the sum of those values and multiplying a value with a constant returns the value multiplied by the constant.
This is how you would expect it to work, so why isn't it the case?
Monitors do not behave linearly when converting voltage to light. A monitor follows something closer to a power curve when converting the pixel value, and the shape of that curve is determined by the monitor's gamma exponent. The standard gamma for a monitor is 2.2. This means that a pixel with 100 percent intensity emits 100 percent light, but a pixel with 50 percent intensity only outputs 21 percent light. To get the pixel to emit 50 percent light, the intensity has to be 73 percent.
The goal is to get the monitor to output linearly so that 50 percent intensity equals 50 percent light emitted.
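The numbers above are easy to check. Assuming the simplified response light = intensity^gamma, a quick sketch:

```python
def emitted_light(intensity, gamma=2.2):
    """Simplified monitor response: light emitted for a stored pixel intensity."""
    return intensity ** gamma

print(emitted_light(1.0))   # 100% intensity -> 100% light
print(emitted_light(0.5))   # 50% intensity  -> about 0.218, i.e. 21% light
print(emitted_light(0.73))  # 73% intensity  -> about 0.50, i.e. 50% light
```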
Gamma correction is the process of converting one intensity to another intensity which generates the correct amount of light.
The relationship between intensity and light for a monitor can be simplified as a power function, light = intensity^gamma, called gamma decoding.
To cancel out the effect of gamma decoding, the value has to be converted using the inverse of this function.
The inverse of a power function is obtained by raising to the reciprocal of the exponent: intensity = light^(1/gamma). This inverse function is called gamma encoding.
Applying the gamma encoding to the intensity makes the pixel emit the correct amount of light.
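Putting the two functions together (a sketch, with a gamma of 2.2 assumed): encoding first means the monitor's decoding cancels out, so the intended amount of light is emitted.

```python
GAMMA = 2.2

def gamma_decode(value, gamma=GAMMA):
    """What the monitor does to a stored intensity: light = intensity ** gamma."""
    return value ** gamma

def gamma_encode(value, gamma=GAMMA):
    """The inverse: pre-distort a linear value before sending it to the display."""
    return value ** (1.0 / gamma)

linear = 0.5
displayed = gamma_decode(gamma_encode(linear))  # the monitor decodes our encoded value
print(displayed)  # back to 0.5: the pixel emits 50% light, as intended
```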
Here are two images that use simple Lambertian lighting (N · L).
The left image has a really soft falloff, which doesn't look realistic. When the angle between the normal and the light source is 60 degrees, the brightness should be 50 percent; the image on the left is far too dim to match that. Applying a constant brightness boost would make the highlights too bright without fixing the really dark parts. The correct way to make the monitor display the image properly is to apply gamma encoding to it.
Using textures introduces the next big problem with gamma correction. In the left image the color of the texture looks correct but the lighting is too dim. The right image is corrected and the lighting looks correct, but the texture, and the whole image, is washed out and desaturated. The goal is to keep the colors from the texture and combine them with the correct-looking lighting.
Pictures taken with a camera or paintings made in Photoshop are all stored in a gamma encoded format. Since the image is stored as encoded the monitor can display it directly. The gamma decoding of the monitor cancels out the encoding of the image and linear brightness gets displayed. This saves the step of having to encode the image in real time before displaying it.
The second reason for encoding images is based on how humans perceive light. Human vision is more sensitive to differences in shaded areas than in bright areas. Applying gamma encoding expands the dark areas and compresses the highlights which results in more bits being used for darkness than brightness. A normal photo would require 12 bits to be saved in linear space compared to the 8 bits used when stored in gamma space. Images are encoded with the sRGB format which uses a gamma of 2.2.
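The precision argument can be sketched numerically: count how many of an 8-bit image's 256 code values land below 10 percent light under each storage scheme. This is an illustrative snippet, not from the original post:

```python
GAMMA = 2.2

def steps_below(light_fraction, bits=8, encoded=True):
    """Count how many of the 2**bits code values map below a given light level."""
    levels = 2 ** bits
    count = 0
    for i in range(levels):
        stored = i / (levels - 1)
        # A gamma-encoded code value decodes to stored ** GAMMA of linear light
        light = stored ** GAMMA if encoded else stored
        if light < light_fraction:
            count += 1
    return count

print(steps_below(0.1, encoded=False))  # linear storage: only ~10% of the codes
print(steps_below(0.1, encoded=True))   # gamma storage: far more codes in the darks
```

Gamma encoding spends roughly three times as many of the 256 steps on the darkest tenth of the range, which is exactly where human vision is most sensitive.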
Images are stored in gamma space but lighting works in linear space, so the image needs to be converted to linear space when they are loaded into the shader. If they are not converted correctly there will be artifacts from mixing the two different lighting spaces. The conversion to linear space is done by applying the gamma decoding function to the texture.
Mixing light spaces
Gamma correction is a term used to describe two different operations, gamma encoding and decoding. When learning about gamma correction it can be confusing because the same word is used for both operations.
Correct results are only achieved if both the texture input is decoded and then the final color is encoded. If only one of the operations is used the displayed image will look worse than if none of them are.
Implementing gamma correction is easy. Converting an image to linear space is done by applying the gamma decoding function. The alpha channel should not be decoded, as it is already stored in linear space.
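A decode step for a single texel might look like this sketch (gamma 2.2 assumed; note the alpha channel passes through untouched):

```python
def decode_texel(rgba, gamma=2.2):
    """Convert a gamma-encoded RGBA texel to linear space.

    Color channels are raised to the power of gamma;
    alpha is already linear, so it is left as-is.
    """
    r, g, b, a = rgba
    return (r ** gamma, g ** gamma, b ** gamma, a)

print(decode_texel((0.5, 0.5, 0.5, 0.5)))  # color channels darken, alpha stays 0.5
```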
Any hardware with DirectX 10 or OpenGL 3.0 support can use the sRGB texture format. This format allows the hardware to perform the decoding automatically and return the data as linear. The automatic sRGB correction is free and gives the benefit of doing the conversion before texture filtering.
To use the sRGB format in OpenGL just pass GL_SRGB_EXT instead of GL_RGB to glTexImage2D as the format.
After doing all calculations and post-processing, the final color should then be corrected by applying gamma encoding with a gamma that matches the gamma of the monitor.
For most monitors a gamma of 2.2 would work fine. To get the best result the game should let the player select gamma from a calibration chart.
This value is not the same gamma value that is used to decode the textures. All textures are stored at a gamma of 2.2, but that is not guaranteed for monitors, which usually have a gamma ranging from 2.0 to 2.5.
When not to use gamma decoding
Not every type of texture is stored as gamma encoded. Only the texture types that are encoded should get decoded. A rule of thumb is that if the texture represents some kind of color it is encoded and if the texture represents something mathematical it is not encoded.
Diffuse, specular and ambient occlusion textures all represent color modulation and need to be decoded on load
Normal, displacement and alpha maps aren’t storing a color so the data they store is already linear
Working in linear space and making sure the monitor outputs light linearly is needed to get properly rendered images. It can be complicated to understand why this is needed but the fix is very simple.
When loading a gamma encoded image apply gamma decoding by raising the color to the power of 2.2, this converts the image to linear space
After all calculations and post processing is done (the very last step) apply gamma encoding to the color by raising it to the inverse of the gamma of the monitor
If both of these steps are followed the result will look correct.
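The two steps above can be sketched end to end for a single channel (illustrative only; the texture is assumed to be authored at gamma 2.2, and the monitor gamma is a parameter as the calibration section suggests):

```python
TEXTURE_GAMMA = 2.2  # content is authored and stored at this gamma

def shade(diffuse_stored, light_intensity, monitor_gamma=2.2):
    """Decode the texture, do the lighting in linear space, encode for display."""
    albedo = diffuse_stored ** TEXTURE_GAMMA          # step 1: decode to linear
    lit = albedo * light_intensity                    # lighting math in linear space
    return min(lit, 1.0) ** (1.0 / monitor_gamma)     # step 2: encode for the monitor

# A texel stored at 73% intensity (~50% linear) under 50% light:
print(shade(0.73, 0.5))
```

Note that the two gammas play different roles: 2.2 for decoding is a property of how the textures were saved, while the encoding gamma should come from the player's monitor calibration.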