Winner showcase of Frictional Winter Modding Jam 2019

The winter isn’t over, but our Winter Fan Jam is! For those unfamiliar, we held a modding jam over December and January, prompting the modders to explore the themes of Winter and/or Hibernation.

The jam produced a total of 6 mods: 3 using Amnesia: The Dark Descent as a basis (HPL2), and 3 using SOMA (HPL3). We have picked a winner in each category, meaning two in total! The winners will receive a key for the upcoming game, as well as prints signed by the Malmö portion of the Frictional team.

Since the number of submissions was manageable, the other entries will also be featured below, as will asset contributions.

Once again – huge thanks to all participants for telling new stories and exploring new themes! If you couldn’t finish or participate, we hope to see you in the next modding jam. And for those in other disciplines – we will have a jam for you, too, at a later date!

Winners

HPL2 – Permafrost Mountain by OddStuff

https://www.moddb.com/mods/permafrost-mountain

Additional voices by Macgyverthehero.

You pay little mind to the reports of bear attacks in the mountains, until one day your wife goes missing. But is it really the bears, or is there something else?

OddStuff’s Permafrost Mountain shows that a story doesn’t have to be long to be effective. The puzzles are hard enough to make you think, but they make sense, giving you that little a-ha! moment. A lunatic’s rambling note sets a great tone and stays with you afterwards. And of course, there’s the bears…

It ain’t those bears, I tells ya!

HPL3 – Life Freezes Over by HPL Chad Gang

https://steamcommunity.com/sharedfiles/filedetails/?id=1979207413

Design by TiMan, writing by Darkfire and Slanderous.

Sound effects by PaulDB.

A nuclear attack that has been pulled back. A counterattack that hasn’t. You are awoken to prevent a catastrophe, but will you make it in time?

TiMan has established himself as a champion of modding with a second win in a row. This time he’s joined by Darkfire and Slanderous to form a powerhouse of veteran modders.

Life Freezes Over lets you piece together a haunting story through environmental storytelling and sparse notes. The maps are practical but beautiful, adding to the sense of unease. A custom enemy disrupts the familiarity of the research facility.

…2501…

Other entries

HPL2 – Oghma Infinium by Sabatu

https://www.moddb.com/mods/amnesia-oghma-infinium

Additional voices by Trivvy, music by Tosha.

An ancient treasure hides in a cave rigged with challenges.

Sabatu delivers another monster of a mod, with interesting environments and challenging puzzles spanning several maps.

HPL2 – The Woodpecker by Yanka

https://www.moddb.com/mods/the-woodpecker

End theme by Mikson.

The Woodpecker takes you to investigate a different attempt to stop the comet Telos, spiced up with the mystic powers of the S.T.A.L.K.E.R. franchise.

HPL3 – Broken Light by justus954

https://steamcommunity.com/sharedfiles/filedetails/?id=1988601213

Contributions by LadyEspurr and DaRGonLerD

A dark tower looms ominously over you, beckoning you to come inside and find the answers.

HPL3 – Elysium by artificialparanoia

:munshi_happy:

Special thanks

HPL3 – SOMA Winter modding pack by Draugemalf

https://www.moddb.com/mods/soma-winter-asset-pack

Custom winter assets for HPL3.

Includes sounds by Mikson.

HPL2 – Winter assets for HPL2 by NutBoi

https://www.moddb.com/mods/winter-assets-for-hpl2

Draugemalf’s HPL3 assets converted to HPL2.

People of Frictional: Samuel Justice

Who Am I?

My name is Samuel Justice. I became audio lead here at Frictional Games in February of this year. However, I’ve been directly and indirectly involved with the studio for 3 years. I work from home in a place called Worthing in England – a small seaside town that’s fairly sleepy.

Background

When I was about 10 or 11 I started to get almost unhealthily obsessed with videogames; this carried on throughout my teens. My mother played piano and my father the bass guitar and both enjoyed listening to a lot of music, so I was always surrounded by that as well. 

During high school the obsession with games shifted from playing them to developing them. I would sit and help make Half-Life 2 mods and levels whilst watching whatever documentaries I could find about game makers. This became my own little escape. When I left school I went to college to do standard A-levels. College in the UK is unlike that in the USA – in the UK you can finish high school at 16 and study for 2 years at college, then move on to university. I hated what I was doing, and after 5 weeks I dropped out and joined a music production course, as I had no idea what I wanted to do longer term. It was during that course that I developed a passion for audio production and sound. And then it clicked! The obsessions I had with making games and with sound could finally cross paths, and I began to venture into sound design for games.

So during the nights and evenings I experimented, plugging sounds into these mods and seeing what my experiments produced. I joined a few modding teams during this time (Off-Limits, Nuclear Dawn and Iron Grip: The Oppression being just a few that I helped sound design). I got really lucky and landed a small contract out of college through my mod links, which sustained me for a year. After that the money ran out, and I saw vacancies in the police – the pay was okay, and I saw it as a way to continue doing what I loved on the side. It was great because I was able to afford audio equipment with the pay as well! Not much, mind you.

I continued working in the police for a few years but never fully embraced it – it had never been an ambition of mine. About two years in I started to enjoy it more, and began to think that maybe the police was my calling after all. But I was wrong.

A Source modder got in touch and asked if I’d do audio for a title of his. I worked on that, and then the next title, and suddenly I was springboarding from one title to another. Three months later I made the choice to leave the police: this was what I had been so desperate to do, and I wanted to grasp the opportunity!

This led to me becoming audio lead on Amnesia: A Machine for Pigs and working with Frictional for the first time. I then joined the fantastic audio team at DICE in Sweden and worked on Battlefield 4 and a number of its expansions. After 18 months at DICE I was feeling quite homesick, so I decided to return to the UK. Jens had taken the reins of maintaining Frictional Games as a company, and there was a gap that needed to be filled. I jumped on board knowing that SOMA was an extremely unique title and something that was going to be quite special. I’d been working on SOMA in the background during my work on A Machine for Pigs, so even though I have only joined the studio full time recently, I have been involved in the title from the early stages.

And that brings me to today… here…  typing this post to you guys.

Life at Frictional

What does an audio lead do on a daily basis at Frictional Games, I hear you ask? No? Well, tough, I’ll tell you anyway.

The bulk of my time is spent working directly on SOMA and making sure we can deliver the best-sounding game available within the timeframe. But I also manage the small band of Frictional Audio compadres. We have one sound designer (the great and mysterious Tapio Liukkonen), an intern/junior sound designer who goes by the name of Mike Benzie, and composer Mikko Tarmia, who are all working extremely hard to make sure SOMA sounds great.

So my time is also split between managing their workloads, giving feedback, listening to their feedback and ideas, and keeping the lines of communication wide open – which is vital when working in a virtual studio.

Once those duties are taken care of I love to get my hands dirty and dive right in to create and implement sound for the game. To a lot of people sound design is a dark art – they understand the process of pointing a microphone at something, but how does it go from that raw recording to a big sound effect… and then how does that get into the game?

The best analogy for creating a sound is to compare it to cooking – in-game sounds aren’t made from single source sounds, but instead mixed from multiple sources. We’ve created the SOMA sound library at Frictional which contains a large number of custom recordings for us to use as our ingredients.

So, when you have a library of ingredients, the second phase is to think and to ask questions. You need to gather an understanding of the sound you want to make. What kind of environment is it in? What kind of story do I want to tell with this sound? What other sounds does it affect?

Once this is done, the next stage is one of the most important – just listening to the source material. We use a program here called Basehead that is our SFX database and auditioner. In it we can type in (like Google) the kind of sound we want, and it’ll search the SOMA sound library and give us results (we also have to name the files ourselves, which makes it vitally important that they are named correctly and comprehensively). This is the “picking ingredients” stage. Once I’ve selected a few sounds that I think are interesting and which could convey the story I want to tell, I’ll drop them into my DAW (Digital Audio Workstation). I use a program called Nuendo – there are a lot out there (Pro Tools, Cubase, Logic, Fruity Loops etc.) and they all do the same basic thing. Using Nuendo I then manipulate each ingredient until I have something that resembles the sound in my head.

A typical SFX session, each coloured block is a different recording!

Now how do we get this in game? I’m sure many of you reading this are aware that Frictional has a proprietary engine and toolset called HPL (version 3 is being used for SOMA). However, the audio side is handled by third-party software called FMOD. HPL and FMOD talk to each other, and FMOD provides the toolset to import the sound and attach parameters to it (such as volume, how far away the player should hear it, whether it should have in-game echo, and so on). Once this is done, FMOD encodes and generates a file that HPL is able to read – and we trigger the sound from that file using the scripting system in HPL. Because HPL updates scripts on the fly, it is very easy to tweak a sound in Nuendo, drop it into FMOD and test it in the game without having to restart anything. The workflow chain is absolutely the most important part when it comes to implementation – otherwise it can take hours just to test a single sound.

This image shows the logic within FMOD for underwater movement sound – this is just one type of movement on one surface!

So there we have it! Now leave me alone: I need to go away and make sounds that will contribute towards a national diaper shortage.

Peter Wester: Engine programmer

Who Am I?

I’m Peter Wester and I have been an Engine Programmer here at Frictional Games since late 2011.

I work from my apartment in Stockholm, Sweden. I used to have a nice big desk, but after getting a PS4 Devkit it has become cramped.

Background

My gaming interest started as a kid when my parents bought a Sega Megadrive and I became obsessed with Sonic the Hedgehog.

On my 12th birthday I got a program called Multimedia Fusion. It was a 2D game maker that didn’t need any coding knowledge. Instead you placed objects on a canvas and gave them existing behaviors to get them to move or collide. I used this to try and recreate my favorite 2D games. The most memorable one was a GTA clone with the goal of killing as many civilians as possible before the timer was up.

This got me interested in how games were made and I started to look for tools to modify other games. Me and my friend would replace all the voice acting in Worms with our own recordings or make custom maps for Counter-Strike.

It wasn’t until high school that I got into programming. After taking a programming course and learning basic C++ I downloaded the Doom 3 SDK and tried to understand the code; eventually I started helping out on a few overly ambitious mods that never got close to being released.

After high school I applied to a game development education at Stockholm University. It didn’t turn out to be the best education, but I met a lot of people and started making games from scratch. Three years later, three friends and I dropped out and started a game company.

Phoenix Spirit – Our second game

We made games for Android and iOS and I was in charge of game design and programming. After releasing two games we got the chance to go to China and meet up with a contact and start a subsidiary there. We made some money and got a few awards but after two years we decided to shut down the company to focus on other things.

Mana Chronicles – Made by our Chinese subsidiary

I started looking for a job and saw a blog post about Frictional hiring an engine programmer. Knowing that Frictional had their own engine and that I wanted to focus on programming I decided to apply.

What do I do?

As an Engine Programmer I take care of the code that makes up the foundation that the game is built on top of. We’ve separated the engine and game code, which means the engine can be used for multiple different games. In fact, most of the engine code used for Amnesia: The Dark Descent is still in HPL3 (our latest engine version), which could run the game with a few tweaks.

What an engine needs to provide is different for each title. For instance, SOMA requires a way to simulate physics, to render a believable 3D world, to play sound effects and to support fast iteration of level creation. My job is to make sure all those exist and work as they should.

An Engine Programmer’s job can be broken down into two basic parts: adding features, and supporting existing features. Adding a new feature takes about 1-2 months and goes something like this:

When I added Depth of Field to the engine I started out by researching the subject. I read up on tech blogs and research papers to find the best implementations of Depth of Field. I decided to try out two versions: an expensive bokeh version and a more standard blur-based one. After implementing both and getting feedback I decided to go with the blur-based version, since it was cheaper and fit with our underwater aesthetic. Once it was completed I added script functions and made a helper class so that the gameplay programmers could add it where it was needed.

Depth of Field – blur visible in the background
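To give a rough idea of what a blur-based approach involves, here is a minimal GLSL sketch – not the actual HPL3 implementation; the sampler names and focal parameters are made up for the example – that blends a sharp and a pre-blurred version of the frame based on each pixel’s distance from the focal plane:

#version 330

uniform sampler2D aSharpImage;    // the rendered frame (assumed input)
uniform sampler2D aBlurredImage;  // a pre-blurred copy of the frame (assumed input)
uniform sampler2D aDepthMap;      // linear view-space depth in meters (assumed input)
uniform float afFocalDepth;       // distance that should stay in focus
uniform float afFocalRange;       // how quickly blur ramps up around the focal plane

in vec2 px_vTexCoord;
out vec4 out_vColor;

void main()
{
    float fDepth = texture(aDepthMap, px_vTexCoord).r;

    // 0 = fully sharp at the focal plane, 1 = fully blurred far away from it
    float fBlurAmount = clamp(abs(fDepth - afFocalDepth) / afFocalRange, 0.0, 1.0);

    out_vColor = mix(texture(aSharpImage, px_vTexCoord),
                     texture(aBlurredImage, px_vTexCoord),
                     fBlurAmount);
}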

Some tech features also need to work in the editor. When I’m done with such a feature I hand it over to Luis who later adds it to the editor in a user-friendly way.

The closer a project gets to the end, the more of my time gets spent on supporting and improving the code. This could mean fixing bugs that have been reported or optimizing code to make the game run faster.

I’ll test the game on different hardware and make sure it runs as fast as it should. If it doesn’t I’ll try and figure out what’s causing the game to be slow and then find a solution to that problem.

Editor Fly Mode

Originally posted by Luis.

Hey there! Vacation time is really close now and there’s so much to do before it arrives, but I still wanted to drop by and keep you posted on new editor stuff, so I’m gonna keep it short this time.

We have a fly mode for the perspective camera now! Yes, and it works nicely enough. No video today, but enjoy this animated GIF instead:

Particle Editor updates

Originally posted by Luis.

Hey! Today I’m sharing the results of the last two weeks of work on the ParticleEditor with you. I’ve had loads of time for additions and improvements — I’m just gonna go over the most notable ones briefly:

  • Live update of particles: No more resetting the whole particle system to see the effect that little parameter you changed actually has!
  • Control of the update speed: Something looks weird but you can’t spot what it is? Just slow the whole thing down. Works every time.
  • Easing functions for fading values: We don’t have tweakable curve controls just yet, but these work like a charm in the meantime. 
  • Helper graphs: A really nice addition so you can preview how fades are going to work. Together with easing functions, a slider and the live update, this is fun just to play around with.

Of course, a little video works way better than words for showing these off, so here it is:

Tech Feature: HPSL Shading Language

Originally posted by Peter.

HPL3 is our first engine to support both PC and consoles. To make it easy to support multiple platforms and multiple shading languages we have decided to use our own shading language called HPSL. Shader code written in HPSL goes through a shader parser to translate it to the language used by the hardware.

The shader written in HPSL is loaded into the engine at runtime; the code is then run through a preprocess parser that strips away any code that is not needed by the effect or material. After that, the stripped code is translated to the language used by the hardware (GLSL #330 on PC and PSSL on the PS4) and then compiled.

HPSL uses the same syntax as the scripting or engine code. HPSL is based on GLSL #330 but some of the declarations are closer to HLSL.

// Example code
@ifdef UseTexture
uniform cTexture2D aColorMap : 0;
@endif

void main(in cVector4f px_vPosition,
          in cVector4f px_vColor,
          in cVector4f px_vTexCoord0,
          out cVector4f out_vColor : 0)
{
    cVector4f vColor = px_vColor;

    @ifdef UseTexture
        vColor *= sample(aColorMap, px_vTexCoord0.xy);
    @endif

    out_vColor = vColor;
}


// Preprocess step
void main(in cVector4f px_vPosition,
          in cVector4f px_vColor,
          in cVector4f px_vTexCoord0,
          out cVector4f out_vColor : 0)
{
    cVector4f vColor = px_vColor;

    out_vColor = vColor;
}


// Translation step
#version 330
#extension GL_ARB_explicit_attrib_location : enable

in vec4 px_vColor;
in vec4 px_vTexCoord0;
layout(location = 0) out vec4 out_vColor;

void main()
{
    vec4 px_vPosition = gl_FragCoord;
    bool px_bFrontFacing = gl_FrontFacing;
    int px_lPrimitiveID = gl_PrimitiveID;

    vec4 vColor = px_vColor;

    out_vColor = vColor;
}

Preprocessing

All the shader code used in SOMA is handwritten. In order to keep all the relevant code in the same place and to be able to quickly optimize shaders, HPL3 uses a preprocessing step. This has been used for our previous games as well. A preprocessor goes through the code and removes large chunks that are not needed or used by the effect or material. The lighting shader used in SOMA contains code used by all the different light types: changing a preprocess variable can change a light from a point light to a spotlight, or can be used to enable shadow mapping. The preprocessor strips blocks of code that are not used; this increases performance, since code that has no visual effect is removed completely. Another feature of the preprocess parser is the ability to change the value of a constant variable, which can be used to change the quality of an effect.

// SSAO code

for(float d = 0.0; d < $kNumSamples; d+=4.0)
{
          // perform SSAO…
}

The preprocessor makes it easy to do complex materials with multiple textures and shading properties while only performing the heavy computations for the materials that need it.

Translation

After the preprocessor strips the code it is ready to be translated. In the first step all the variable types and special functions are converted to the new language. Then the main entry function is created and all the input and output is bound to the correct semantics. In the last step the translated code is scanned for textures and buffers, which get bound to the correct slots.

Compilation

The translated code is then compiled. If a compilation error occurs, the translated code is printed to the log file along with the error message and the corresponding row for easy debugging.

Summary

In order to deliver the same visual experience to all platforms and to make development faster we decided on using our own shading language. The code is translated to the language used by the hardware and compiled at runtime. Supporting other shading languages in the future will be very easy since we only need to add another converter. 

HPSL translates to GLSL #330, which requires OpenGL 3.3 (DirectX 10 feature set). This means that SOMA will require a DirectX 10 or newer graphics card.

Modders will still be able to write shader code directly in GLSL if they choose to.

HPSL Reference

Syntax

HPSL uses the same syntax used by the scripting language.

Variable Type – Description
int – 32 bit signed integer
uint – 32 bit unsigned integer
bool – Stores true or false
float – 32 bit float
double – 64 bit float
cVectorXf – Vector of floats
cVectorXl – Vector of signed integers
cVectorXu – Vector of unsigned integers
cMatrixXf – Square float matrix
cMatrixXxXf – Non-square matrix (e.g. cMatrix2x4f)
cBuffer – Container of multiple variables that get set by the CPU

Texture Type – Description
cTexture1D – Single dimension texture
cTexture2D – Standard 2D texture
cTexture3D – Volume texture
cTextureCube – Cubemap texture
cTextureBuffer – A large single dimension texture used to store variables
cTexture2DMS – A 2D render target with MSAA support
cTextureXCmp – A shadow map texture used for comparison operations
cTextureXArray – Array of cTextureX textures

A texture contains both the image and information about what happens when it is sampled. If you are used to OpenGL/GLSL then this is nothing new. DirectX uses a different system for storing this information. It uses a texture for storing the data and a separate sampler_state that controls filtering and clamping. Using the combined format makes it easy to convert to either GLSL or HLSL.

Textures need to be bound to a slot at compilation time. Binding is done by using the “:” semantic after the texture name.

//bind diffuse map to slot 0
uniform cTexture2D aDiffuseMap : 0;

Variable Type Modifier – Description
uniform – A variable or texture that is set by the CPU
in – Read only input to a function
out – Output of a function
inout – Read and write input and output to a function
const – A constant value that must be initialized in the declaration and can’t be changed

Entry Point and Semantics

The entry point of a shader program is the “void main” function. Input and output of the shader are defined as arguments to this function. The input to the vertex shader comes from the mesh that is rendered; this might be information like the position, color and uv mapping of a vertex. What the vertex shader outputs is user defined – it can be any kind of information that the pixel shader needs. The output of the vertex shader is what gets sent to the pixel shader as input, with the variables interpolated between the vertices of the triangle. The input of the pixel shader and the output of the vertex shader must be the same, or else the shaders won’t work together. Finally, the output of the pixel shader is what is shown on the screen. The pixel shader can output to a maximum of 4 different render targets at the same time.

Some of the inputs and outputs are system-defined semantics. System semantics are set or used by the hardware.

System Semantic – Type – Shader Type – Description
px_vPosition – cVector4f – Vertex (out), Pixel (in) – Vertex position output; pixel shader input as screen position. This is required by all shaders
: X – cVector4f – Pixel (out) – Output color slot, where X must be in the range 0-3
vtx_lVertexID – int – Vertex (in) – Index of the current vertex
vtx_lInstanceID – int – Vertex (in) – Index of the current instance
px_lPrimitiveID – int – Pixel (in) – Index of the triangle this pixel belongs to
px_bFrontFacing – bool – Pixel (in) – Indicates if the pixel belongs to the front or back of the primitive

Input to the vertex shader is user defined. HPL3 has a few user defined semantics that work with our mesh format.

Mesh Semantic – Type – Description
vtx_vPosition – cVector4f – Position of the vertex
vtx_vTexCoord0 – cVector4f – Primary UV coord
vtx_vTexCoord1 – cVector4f – Secondary UV coord
vtx_vNormal – cVector3f – World space normal
vtx_vTangent – cVector4f – World space tangent, w contains binormal direction
vtx_vColor – cVector4f – Color
vtx_vBoneIndices – cVector4l – Index of the bones used to modify this vertex
vtx_vBoneWeight – cVector4f – Weight to multiply the bones with

It is possible to add more user defined semantics if needed:

//vertex shader
uniform cMatrixf a_mtxModelViewProjection;

void main(in cVector4f vtx_vPosition,
          in cVector4f vtx_vColor,
          in cVector4f vtx_vTexCoord0,
          out cVector4f px_vColor,
          out cVector4f px_vTexCoord0,
          out cVector4f px_vPosition)
{
    px_vPosition = mul(a_mtxModelViewProjection, vtx_vPosition);
    px_vColor = vtx_vColor;
    px_vTexCoord0 = vtx_vTexCoord0;
}


//pixel shader
uniform cTexture2D aColorMap : 0;

void main(in cVector4f px_vPosition,
          in cVector4f px_vColor,
          in cVector4f px_vTexCoord0,
          out cVector4f out_vColor : 0)
{
    out_vColor = px_vColor * sample(aColorMap, px_vTexCoord0.xy);
}

Functions

HPSL is based on OpenGL 3.3 and GLSL version 330 and supports almost all of the GLSL arithmetic functions.

There are some functions that are different from GLSL. This is to make it easier to support HLSL and PSSL.

Arithmetic Function – Description
mul(x, y) – Multiplies two matrices together (multiplying by using * not supported for matrices)
lerp(x, y, t) – Interpolates between two values

Texture sampling uses functions specific to the HPSL language.

Texture Function – Description
sample(texture, uv), sample(texture, uv, offset) – Samples a texture at the specified uv coordinate. Can be used with an integer offset
sampleGather(texture, uv), sampleGather(texture, uv, offset) – Samples a texture but returns only the red component of each texel corner
sampleGrad(texture, uv, dx, dy), sampleGrad(texture, uv, dx, dy, offset) – Performs texture lookup with explicit gradients
sampleLod(texture, uv, lod), sampleLod(texture, uv, lod, offset) – Samples the texture at a specific mipmap level
sampleCmp(texture, uv, comp_value), sampleCmp(texture, uv, comp_value, offset) – Performs texture lookup, compares it with the comparison value and returns the result
load(texture, position) – Gets the value of a texel at the integer position
getTextureSize(texture, lod) – Returns the width and height of the texture lod
getTextureLod(texture, uv) – Gets the lod that would get sampled if that uv coord is used
getTextureLevelCount – Gets the number of mipmap levels

It is also possible to use language-specific code directly. Some languages and graphics cards might have functions that are more optimized for those systems, in which case it can be a good idea to write code specific to that language.

@ifdef Lang_GLSL
                  vec4 vModifier = vec4(lessThan(vValue, vLimit));
@else
                  cVector4f vModifier = step(vValue, vLimit);
@endif

LevelEditor Feature: Poser EditMode

Originally posted by Luis.

So I’m back with another HPL LevelEditor feature update, the Poser EditMode.

Back in the Amnesia days, whenever we wanted to add details which implied a unique skeletal model with different poses, we had to go all the way back to the modelling tool, set them up there and save them as different mesh files. So we pretty much ended up with lots of replicated and redundant data (the corpse piles come to mind as I’m writing this), plus the burden of having to prepare everything outside the editor.

So, how to fix this? This is where the poser mode comes in. In a nutshell, it takes a skeletal mesh and exposes its bones to be translated and rotated in whatever way you fancy. This is useful not only for organic and creature geometry, but also for creating details like cables, piping and anything else you can add a skeleton to.

There’s not much to be added, so see it in action in this little video.

Tech Feature: SSAO and Temporal Blur

Some links in this article have expired and have been removed.

Screen space ambient occlusion (SSAO) is the standard solution for approximating ambient occlusion in video games. Ambient occlusion is used to represent how exposed each point is to the indirect lighting from the scene. Direct lighting is light emitted from a light source, such as a lamp or a fire. The direct light then illuminates objects in the scene. These illuminated objects make up the indirect lighting. Making each object in the scene cast indirect lighting is very expensive. Ambient occlusion is a way to approximate this by using a light source with constant color and information from nearby geometry to determine how dark a part of an object should be. The idea behind SSAO is to get geometry information from the depth buffer.

There are many publicised algorithms for high quality SSAO. This tech feature will instead focus on improvements that can be made after the SSAO has been generated.

SSAO Algorithm

SOMA uses a fast and straightforward algorithm for generating medium frequency AO. The algorithm runs at half resolution which greatly increases the performance. Running at half resolution doesn’t reduce the quality by much, since the final result is blurred.

For each pixel on the screen, the shader calculates the position of the pixel in view space and then compares that position with the view space position of nearby pixels. How occluded the pixel gets is based on how close the two points are to each other and whether the nearby point lies in front of the surface (along the normal). The occlusion from each nearby pixel is then added together for the final result.

SOMA uses a radius of 1.5m to look for nearby points that might occlude. Sampling points that are outside of the 1.5m range is a waste of resources, since they will not contribute to the AO. Our algorithm samples 16 points in a growing circle around the main pixel. The size of the circle is determined by how close the main pixel is to the camera and how large the search radius is. For pixels that are far away from the camera, a radius of just a few pixels can be used. The closer the point gets to the camera the more the circle grows – it can grow up to half a screen. Using only 16 samples to select from half a screen of pixels results in a grainy result that flickers when the camera is moving.
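Paraphrasing the SSAO loop from the code appendix at the end of this post, the occlusion contributed by one sample is the product of an angle term and a distance falloff, where $N$ is the surface normal, $\vec{d}_i$ the view-space offset to sample $i$ and $r$ the 1.5m search radius:

$$AO_i = \max\!\left(0,\ \frac{N \cdot \vec{d}_i}{\lVert\vec{d}_i\rVert}\right)\cdot\max\!\left(0,\ 1 - \frac{\lVert\vec{d}_i\rVert}{r}\right), \qquad AO = \max\!\left(0,\ 1 - \frac{1}{16}\sum_{i=1}^{16} AO_i\right)$$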

Grainy result from the SSAO algorithm

Bilateral Blur

Blurring can be used to remove the grainy look of the SSAO. Blur combines the values of a large number of neighboring pixels; the further away a neighboring pixel is, the less impact it will have on the final result. Blur is run in two passes, first in the horizontal direction and then in the vertical direction.

The issue with blurring SSAO this way quickly becomes apparent: AO from different geometry leaks across boundaries, causing a bright halo around objects. Bilateral weighting can be used to fix the leaks between objects. It works by comparing the depth of the main pixel to the depth of the neighboring pixel; if the difference between the two depths is outside a limit, the pixel is skipped. In SOMA this limit is set to 2cm.

To get good-looking blur the number of neighboring pixels to sample needs to be large. Getting rid of the grainy artifacts requires over 17×17 pixels to be sampled at full resolution.
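As a minimal sketch of the bilateral weighting (assumed sampler and variable names, not the exact shader used in SOMA), the contribution of one neighbor could look like this, with the 2cm limit acting as a rejection threshold:

#version 330

uniform sampler2D aDepthMap;  // linear depth in meters (assumed input)
uniform sampler2D aAOMap;     // noisy SSAO result (assumed input)

// Accumulate one neighbor of the bilateral blur. The spatial weight works like a
// normal blur kernel, but the sample is skipped entirely if its depth differs from
// the center pixel by more than 2cm, so AO does not leak across object boundaries.
void AddBilateralSample(vec2 vCenterUV, vec2 vOffsetUV, float fCenterDepth,
                        float fSpatialWeight, inout float fAOSum, inout float fWeightSum)
{
    float fSampleDepth = texture(aDepthMap, vCenterUV + vOffsetUV).r;
    float fSampleAO    = texture(aAOMap, vCenterUV + vOffsetUV).r;

    if (abs(fSampleDepth - fCenterDepth) > 0.02)
        return;

    fAOSum     += fSampleAO * fSpatialWeight;
    fWeightSum += fSpatialWeight;
}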

Temporal Filtering 

Temporal Filtering is a method for reducing the flickering caused by the low number of samples. The result from the previous frame is blended with the current frame to create smooth transitions. Blending the images directly would lead to a motion-blur-like effect. Temporal Filtering removes the motion blur effect by reverse reprojecting the view space position of a pixel to the view space position it had the previous frame and then using that to sample the result. The SSAO algorithm runs on screen space data but AO is applied on world geometry. An object that is visible in one frame may not be seen in the next frame, either because it has moved or because the view has been blocked by another object. When this happens the result from the previous frame has to be discarded. The distance between the points in world space determines how much of the result from the previous frame should be used.

Explanation of Reverse Reprojection used in Frostbite 2 [2]

Temporal Filtering introduces a new artifact. When dynamic objects move close to static objects they leave a trail of AO behind. Frostbite 2’s implementation of Temporal Filtering solves this by disabling the Temporal Filter for stable surfaces that don’t get flickering artifacts. I found another way to remove the trailing while keeping Temporal Filter for all pixels.

Shows the trailing effect that happens when a dynamic object is moved. The Temporal Blur algorithm is then applied and most of the trailing is removed.

Temporal Blur

(A) Implementation of Temporal Filtered SSAO (B) Temporal Blur implementation 

I came up with a new way to use Temporal Filtering when trying to remove the trailing artifacts. By combining two passes of cheap blur with Temporal Filtering all flickering and grainy artifacts can be removed without leaving any trailing. 

When the SSAO has been rendered, a cheap 5×5 bilateral blur pass is run on the result. Then the blurred result from the previous frame is applied using Temporal Filtering. A 5×5 bilateral blur is then applied to the image. In addition to the geometry data used to calculate the blending amount for the Temporal Filtering, the difference in SSAO between the frames is also used, removing all trailing artifacts.

Applying a blur before and after the Temporal Filtering and using the blurred image from the previous frame results in a very smooth image that becomes more blurred with each frame; it also removes any flickering. Even a 5×5 blur will make the resulting image look as smooth as a 64×64 blur after a few frames.

Because the image gets so smooth the upsampling can be moved to after the blur. This leads to Temporal Blur being faster, since running four 5×5 blur passes in half resolution is faster than running two 17×17 passes in full resolution. 

Upsampling

All of the previous steps are performed in half resolution. To get the final result it has to be scaled up to full resolution. Stretching the half resolution image to twice its size will not look good. Near the edges of geometry there will be visible bleeding; non-occluded objects will have a bright pixel halo around them. This can be solved using the same idea as the bilateral blurring. Normal linear filtering is combined with a weight calculated by comparing the distance in depth between the main pixel and the depth value of the four closest half resolution pixels.
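Condensing the upsample code from the appendix into a formula: each of the four nearest half-resolution texels contributes with the weight

$$w_i = w^{\text{bilinear}}_i + 30\cdot\max\!\left(10^{-5},\ 0.1 - \lvert z_i - z_{\text{main}}\rvert\right), \qquad AO = \frac{\sum_i w_i\,AO_i}{\sum_i w_i}$$

so texels whose depth differs from the full-resolution pixel by more than about 10cm fall back to a tiny weight and stop bleeding across edges.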

Summary 

Combining SSAO with the Temporal Blur algorithm produces high quality results for a large search radius at a low cost. The total cost of the algorithm is 1.1ms (1920×1080, AMD 5870). This is more than twice as fast as a normal SSAO implementation.

SOMA uses high frequency AO baked into the diffuse texture in addition to the medium frequency AO generated by the SSAO.

Temporal Blur could be used to improve many other post effects that need to produce smooth-looking results.

Ambient Occlusion is only one part of the rendering pipeline, and it should be combined with other lighting techniques to give the final look.

References

  1. http://gfx.cs.princeton.edu/pubs/Nehab_2007_ARS/NehEtAl07.pdf
  2. http://dice.se/wp-content/uploads/GDC12_Stable_SSAO_In_BF3_With_STF.pdf
Code Appendix

// SSAO Main loop

// Scale the radius based on how close to the camera it is
float fStepSize = afStepSizeMax * afRadius / vPos.z;
float fStepSizePart = 0.5 * fStepSize / (2.0 + 16.0);



 for(float d = 0.0; d < 16.0; d+=4.0)
 {
        //////////////
        // Sample four points at the same time
        vec4 vOffset = (d + vec4(2, 3, 4, 5))* fStepSizePart;
        

        //////////////////////
        // Rotate the samples
        vec2 vUV1 = mtxRot * vUV0;
        vUV0 = mtxRot * vUV1;

        vec3 vDelta0 = GetViewPosition(gl_FragCoord.xy + vUV1 * vOffset.x) - vPos;
        vec3 vDelta1 = GetViewPosition(gl_FragCoord.xy - vUV1 * vOffset.y) - vPos;
        vec3 vDelta2 = GetViewPosition(gl_FragCoord.xy + vUV0 * vOffset.z) - vPos;
        vec3 vDelta3 = GetViewPosition(gl_FragCoord.xy - vUV0 * vOffset.w) - vPos;

        vec4 vDistanceSqr = vec4(dot(vDelta0, vDelta0),
                                 dot(vDelta1, vDelta1),
                                 dot(vDelta2, vDelta2),
                                 dot(vDelta3, vDelta3));

        vec4 vInvertedLength = inversesqrt(vDistanceSqr);

        vec4 vFalloff = vec4(1.0) + vDistanceSqr * vInvertedLength * fNegInvRadius;

        vec4 vAngle = vec4(dot(vNormal, vDelta0),
                            dot(vNormal, vDelta1),
                            dot(vNormal, vDelta2),
                            dot(vNormal, vDelta3)) * vInvertedLength;


        ////////////////////
        // Calculates the sum based on the angle to the normal and distance from point
        fAO += dot(max(vec4(0.0), vAngle), max(vec4(0.0), vFalloff));

}



//////////////////////////////////
// Get the final AO by averaging over the number of samples
fAO = max(0.0, 1.0 - fAO / 16.0);



------------------------------------------------------------------------------ 



// Upsample Code

 
vec2 vClosest = floor(gl_FragCoord.xy / 2.0);
vec2 vBilinearWeight = vec2(1.0) - fract(gl_FragCoord.xy / 2.0);

float fTotalAO = 0.0;
float fTotalWeight = 0.0;

for(float x = 0.0; x < 2.0; ++x)
for(float y = 0.0; y < 2.0; ++y)
{
       // Sample depth (stored in meters) and AO for the half resolution  
       float fSampleDepth = textureRect(aHalfResDepth, vClosest + vec2(x,y)); 
       float fSampleAO = textureRect(aHalfResAO, vClosest + vec2(x,y));

       // Calculate bilinear weight (abs keeps the corner weights positive)
       float fBilinearWeight = abs(x - vBilinearWeight.x) * abs(y - vBilinearWeight.y);
       // Calculate upsample weight based on how close the depth is to the main depth
       float fUpsampleWeight = max(0.00001, 0.1 - abs(fSampleDepth - fMainDepth)) * 30.0;

       // Apply weight and add to total sum 
       fTotalAO += (fBilinearWeight + fUpsampleWeight) * fSampleAO;
       fTotalWeight += (fBilinearWeight + fUpsampleWeight);
}

// Divide by total sum to get final AO
float fAO = fTotalAO / fTotalWeight;



-------------------------------------------------------------------------------------



// Temporal Blur Code



//////////////////
// Get current frame depth and AO
vec2 vScreenPos = floor(gl_FragCoord.xy) + vec2(0.5);
float fAO = textureRect(aHalfResAO, vScreenPos.xy);

float fMainDepth = textureRect(aHalfResDepth, vScreenPos.xy);   



//////////////////
// Convert to view space position

vec3 vPos = ScreenCoordToViewPos(vScreenPos, fMainDepth);



/////////////////////////
// Convert the current view position to the view position it 

// would represent the last frame and get the screen coords
vPos = (a_mtxPrevFrameView * (a_mtxViewInv * vec4(vPos, 1.0))).xyz;

vec2 vTemporalCoords = ViewPosToScreenCoord(vPos);
        
//////////////
// Get the AO from the last frame
float fPrevFrameAO = textureRect(aPrevFrameAO, vTemporalCoords.xy);
float fPrevFrameDepth = textureRect(aPrevFrameDepth, vTemporalCoords.xy);



/////////////////
// Get the view space position of the temporal coords
vec3 vTemporalPos = ScreenCoordToViewPos(vTemporalCoords.xy, fPrevFrameDepth);

///////
// Get weight based on distance to last frame position (removes ghosting artifact)
float fWeight = distance(vTemporalPos, vPos) * 9.0;

////////////////////////////////
// And weight based on how different the amount of AO is (removes trailing artifact)
// Only works if both fAO and fPrevFrameAO are blurred
fWeight += abs(fPrevFrameAO - fAO) * 5.0;

////////////////
// Clamp to make sure at least 1.0 / FPS of a frame is blended
fWeight = clamp(fWeight, afFrameTime, 1.0);

fAO = mix(fPrevFrameAO, fAO, fWeight);
   

------------------------------------------------------------------------------

Editor feature: New color picker

Originally posted by Luis.

Hey there! So I finally dropped by to write a quick post on our stuff after quite a long hiatus (exactly 3 years, 3 months and 8 days long according to my last post), during which Thomas pretty much took over the blog. Right now most of you might be wondering who’s even typing. The name is Luis and I mainly code the tools we use here at Frictional (also the ones you might have been using for making your own Amnesia custom stories). After this quick introduction, here comes a little editor feature showoff.

If you ever used the Amnesia tools yourself, it’s more than probable that you already know about this little buddy here:

One could say it is functional. I mean, you actually can pick a color by using it. But it had a little problem. It sucked, and real bad I must add.

When you are editing a level and need to pick a color for anything, be it the diffuse color for a light or the color multiplier for some illumination texture, you are probably going to need a nice system that allows for quick tweaking without much sweat. This was definitely not the case with our old color picker, mainly for two reasons: the RGB color space and the lack of an immediate preview of changes.

Selecting a color in the RGB color space is pretty straightforward, you might think. And it is indeed: you only need to set the values for the Red, Green and Blue components and you are all set. That’s it. Okay, but what if you need to choose from a range for a particular color tonality? How are you supposed to know which RGB values you need to set for that? Summing up, RGB is a pretty unintuitive system when it comes to editing, so we pretty much had to update the color picker to use the HSB color model as well.

This model describes colors as follows:

  • the H value, ranging from 0 to 360, controls the hue for the resulting color; 
  • S, which stands for saturation and ranges from 0 to 100, indicates, in layman’s terms, how washed out the color will look, with 0 being white and 100 equalling the original hue color (for a B value of 100); 
  • and finally B, standing for brightness and ranging from 0 to 100 as well, is quite self-explanatory and sets how bright the final color will be, with 0 meaning black and 100 meaning the color set by the other two parameters (a rough conversion sketch follows below).
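For the curious, here is a rough GLSL sketch of the standard HSB-to-RGB conversion using the ranges listed above (the function name is made up; this illustrates the color model, not the editor’s actual code):

// Convert HSB (hue 0-360, saturation 0-100, brightness 0-100) to an RGB color
vec3 HSBToRGB(float fHue, float fSaturation, float fBrightness)
{
    float h = fHue / 60.0;         // which of the six hue sectors we are in
    float s = fSaturation / 100.0;
    float b = fBrightness / 100.0;

    float c = b * s;                                   // chroma
    float x = c * (1.0 - abs(mod(h, 2.0) - 1.0));      // secondary component

    vec3 vRGB = (h < 1.0) ? vec3(c, x, 0.0) :
                (h < 2.0) ? vec3(x, c, 0.0) :
                (h < 3.0) ? vec3(0.0, c, x) :
                (h < 4.0) ? vec3(0.0, x, c) :
                (h < 5.0) ? vec3(x, 0.0, c) :
                            vec3(c, 0.0, x);

    return vRGB + vec3(b - c);     // shift up so the brightest channel equals b
}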

As for the immediate preview, anyone who used the old picker knows that you had to exit the picker in order to apply and see the changes. When you need to tweak things, which involves constantly changing values and checking out how it looks, this setup is bound to get old and annoying way quicker than you would want it to. To fix this, the new picker can change the target color on the fly, letting the user see the end result in the scene while modifying the values.

Another upgrade that speeds things up a lot is making things as visual as possible. No matter which parameter you are editing, being able to change it via a slider or some other graphical tool that you can click on will always help tons. The drawback, though, is that it’s very likely you’ll spend more time on them than needed just because they are so fun to use.

After all this long-ish introduction, just say hi to our new color picker:

And see it in action in this short demo video:

Tech Feature: Linear-space lighting

Originally posted by Peter.
Some links in this article have expired and have been removed.

Linear-space lighting is the second big change that has been made to the rendering pipeline for HPL3. Working in a linear lighting space is the most important thing to do if you want correct results.

It is an easy and inexpensive technique for improving the image quality. Working in linear space is not something that makes the lighting look better; it just makes it look correct.

(a)  Left image shows the scene rendered without gamma correction 
(b) Right image is rendered with gamma correction

Notice how the cloth in the image to the right looks more realistic and how much less plastic the specular reflections are.

Doing math in linear space works just as you are used to. Adding two values returns the sum of those values and multiplying a value with a constant returns the value multiplied by the constant. 

This is how you would expect it to work – so why isn’t it?

Monitors

Monitors do not behave linearly when converting voltage to light. A monitor follows something closer to an exponential curve when converting the pixel value, and how this curve looks is determined by the monitor’s gamma exponent. The standard gamma for a monitor is 2.2; this means that a pixel with 100 percent intensity emits 100 percent light, but a pixel with 50 percent intensity only outputs 21 percent light. To get the pixel to emit 50 percent light, the intensity has to be 73 percent.

The goal is to get the monitor to output linearly so that 50 percent intensity equals 50 percent light emitted.
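Working through the numbers with the standard exponent of 2.2 gives the figures above:

$$0.5^{2.2} \approx 0.218 \qquad\text{and}\qquad 0.73^{2.2} \approx 0.50$$

so a 50 percent pixel only emits roughly a fifth of the light, and about 73 percent intensity is needed to reach half.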

 Gamma correction

Gamma correction is the process of converting one intensity to another intensity which generates the correct amount of light.

The relationship between intensity and light for a monitor can be simplified as an exponential function called gamma decoding.

To cancel out the effect of gamma decoding the value has to be converted using the inverse of this function.

Inverting an exponential function means using the inverse of the exponent. This inverse function is called gamma encoding.

Applying the gamma encoding to the intensity makes the pixel emit the correct amount of light.
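Written out, with $V$ the pixel intensity and $\gamma \approx 2.2$ the monitor gamma, the two functions and the way they cancel out look like this:

$$L = V^{\gamma} \ \text{(decoding, done by the monitor)}, \qquad V' = V^{1/\gamma} \ \text{(encoding)}, \qquad \left(V^{1/\gamma}\right)^{\gamma} = V.$$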

Lighting

Here are two images that use simple Lambertian lighting (N · L).

(a) Lighting performed in gamma space
(b) Lighting performed in linear space

The left image has a really soft falloff which doesn’t look realistic. When the angle between the normal and the light source is 60 degrees, the brightness should be 50 percent. The image on the left is far too dim to match that. Applying a constant brightness to the image would make the highlight too bright and wouldn’t fix the really dark parts. The correct way to make the monitor display the image properly is to apply gamma encoding to it.
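The 50 percent figure comes straight from the Lambertian term:

$$N \cdot L = \cos 60^{\circ} = 0.5$$

so the pixel should end up at half brightness. If the lighting is done in gamma space, that 0.5 is sent to the monitor unencoded and is displayed as roughly $0.5^{2.2} \approx 0.22$ of the light, which is why the falloff in the left image looks far too dark.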

(a) Lighting and texturing in gamma space
(b) Lighting done in linear space with standard texturing
(c) The source texture

Using textures introduces the next big problem with gamma correction. In the left image the color of the texture looks correct but the lighting is too dim. The right image is corrected and the lighting looks correct, but the texture, and the whole image, is washed out and desaturated. The goal is to keep the colors from the texture and combine them with the correct-looking lighting.

Pre-encoded images

Pictures taken with a camera or paintings made in Photoshop are all stored in a gamma encoded format. Since the image is stored already encoded, the monitor can display it directly. The gamma decoding of the monitor cancels out the encoding of the image and linear brightness gets displayed. This saves the step of having to encode the image in real time before displaying it.

The second reason for encoding images is based on how humans perceive light. Human vision is more sensitive to differences in shaded areas than in bright areas. Applying gamma encoding expands the dark areas and compresses the highlights which results in more bits being used for darkness than brightness. A normal photo would require 12 bits to be saved in linear space compared to the 8 bits used when stored in gamma space. Images are encoded with the sRGB format which uses a gamma of 2.2.

Images are stored in gamma space but lighting works in linear space, so images need to be converted to linear space when they are loaded into the shader. If they are not converted correctly there will be artifacts from mixing the two different lighting spaces. The conversion to linear space is done by applying the gamma decoding function to the texture.

(a) All calculations have been made in gamma space 
 (b) Correct texture and lighting, texture decoded to linear space and then all calculations are done before encoding to gamma space again

Mixing light spaces

Gamma correction is a term used to describe two different operations: gamma encoding and decoding. This can be confusing when learning about gamma correction, because the same word is used to describe both operations.

Correct results are only achieved if both the texture input is decoded and then the final color is encoded. If only one of the operations is used the displayed image will look worse than if none of them are.

(a) No gamma correction: the lighting looks incorrect but the texture looks correct. 
(b) Gamma encoding of the output only: the lighting looks correct but the textures become washed out.
(c) Gamma decoding only: the texture is much darker and the lighting is incorrect. 
(d) Gamma decoding of the texture and gamma encoding of the output: the lighting and the texture look correct.

Implementation

Implementing gamma correction is easy. Converting an image to linear space is done by applying the gamma decoding function. The alpha channel should not be decoded, as it is already stored in linear space.

// Correct but expensive way
vec3 linear_color = pow(texture(encoded_diffuse, uv).rgb, vec3(2.2));

// Cheap way, using a power of 2 instead
vec3 encoded_color = texture(encoded_diffuse, uv).rgb;
vec3 linear_color = encoded_color * encoded_color;

Any hardware with DirectX 10 or OpenGL 3.0 support can use the sRGB texture format. This format allows the hardware to perform the decoding automatically and return the data as linear. The automatic sRGB correction is free and gives the benefit of doing the conversion before texture filtering.

To use the sRGB format in OpenGL just pass GL_SRGB_EXT instead of GL_RGB to glTexImage2D as the format.

After doing all calculations and post-processing, the final color should then be corrected by applying gamma encoding with a gamma that matches the gamma of the monitor.

vec3 encoded_output = pow(final_linear_color, vec3(1.0 / monitor_gamma));

For most monitors a gamma of 2.2 would work fine. To get the best result the game should let the player select gamma from a calibration chart.

This value is not the same gamma value that is used to decode the textures. All textures are stored at a gamma of 2.2, but that is not true for monitors; they usually have a gamma ranging from 2.0 to 2.5.

When not to use gamma decoding

Not every type of texture is stored gamma encoded, and only the texture types that are encoded should get decoded. A rule of thumb is that if the texture represents some kind of color it is encoded, and if the texture represents something mathematical it is not encoded – the sketch after the list below shows the difference.

  • Diffuse, specular and ambient occlusion textures all represent color modulation and need to be decoded on load 
  • Normal, displacement and alpha maps aren’t storing a color so the data they store is already linear
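As a small illustration of that rule of thumb (sampler names are assumed, and the lighting itself is left out), only the color texture goes through the decode step while the normal map is used as-is:

#version 330

uniform sampler2D aDiffuseMap;  // gamma encoded color texture (assumed name)
uniform sampler2D aNormalMap;   // normal map, already linear data (assumed name)

in vec2 px_vTexCoord;
out vec4 out_vColor;

void main()
{
    // Color data is stored gamma encoded, so decode it to linear space first
    vec3 vDiffuse = pow(texture(aDiffuseMap, px_vTexCoord).rgb, vec3(2.2));

    // Normal data is mathematical, not a color: just unpack it, no decoding
    vec3 vNormal = normalize(texture(aNormalMap, px_vTexCoord).xyz * 2.0 - 1.0);

    // Crude stand-in for lighting in linear space: light coming from the camera
    out_vColor = vec4(vDiffuse * max(vNormal.z, 0.0), 1.0);
}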

Summary

Working in linear space and making sure the monitor outputs light linearly is needed to get properly rendered images. It can be complicated to understand why this is needed but the fix is very simple.

  • When loading a gamma encoded image, apply gamma decoding by raising the color to the power of 2.2; this converts the image to linear space 
  • After all calculations and post-processing are done (the very last step), apply gamma encoding to the color by raising it to the inverse of the gamma of the monitor

If both of these steps are followed the result will look correct.
