Thursday, 28 November 2013

Tech Feature: Linear-space lighting


Linear-space lighting is the second big change that has been made to the rendering pipeline for HPL3. Working in a linear lighting space is the most important thing to do if you want correct results.
It is an easy and inexpensive technique for improving image quality. Working in linear space is not something that makes the lighting look better, it just makes it look correct.

(a)  Left image shows the scene rendered without gamma correction 
(b) Right image is rendered with gamma correction

Notice how the cloth in the image to the right looks more realistic and how much less plastic the specular reflections are.
Doing math in linear space works just as you are used to: adding two values returns their sum, and multiplying a value by a constant returns the value multiplied by the constant.

This is how you would expect it to work, so why isn't it?

Monitors

Monitors do not behave linearly when converting voltage to light. A monitor's response follows closer to a power curve when converting the pixel value, and the shape of this curve is determined by the monitor's gamma exponent. The standard gamma for a monitor is 2.2, which means that a pixel with 100 percent intensity emits 100 percent light, but a pixel with 50 percent intensity outputs only about 21 percent light. To get the pixel to emit 50 percent light the intensity has to be about 73 percent.

The goal is to get the monitor to output linearly so that 50 percent intensity equals 50 percent light emitted.

Gamma correction

Gamma correction is the process of converting one intensity to another intensity which generates the correct amount of light.
The relationship between intensity and emitted light for a monitor can be simplified as a power function called gamma decoding.

light = intensity ^ gamma

To cancel out the effect of gamma decoding, the value has to be converted using the inverse of this function. The inverse of a power function is another power function with the reciprocal exponent. This inverse function is called gamma encoding.

intensity = light ^ (1 / gamma)

Applying gamma encoding to the desired brightness gives the intensity that makes the pixel emit the correct amount of light.
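
As a minimal sketch in GLSL, with helper names chosen just for illustration, the two operations look like this:

// Gamma decoding: convert a gamma encoded intensity to linear light
vec3 gamma_decode(vec3 intensity, float gamma)
{
    return pow(intensity, vec3(gamma));
}
// Gamma encoding: convert linear light to the intensity sent to the monitor
vec3 gamma_encode(vec3 light, float gamma)
{
    return pow(light, vec3(1.0 / gamma));
}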

Lighting

Here are two images that use simple Lambertian lighting (N * L).

(a) Lighting performed in gamma space
(b) Lighting performed in linear space
The left image has a really soft falloff which doesn't look realistic. When the angle between the normal and the light direction is 60 degrees the brightness should be 50 percent. The image on the left is far too dim to match that. Applying a constant brightness increase to the image would make the highlights too bright and still not fix the really dark parts. The correct way to make the monitor display the image correctly is by applying gamma encoding to it.
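
A minimal sketch of this in GLSL, assuming the normal, light direction and light color are already available in the shader (the names are placeholders):

// Lambertian term, computed in linear space
float n_dot_l = max(dot(normalize(normal), normalize(light_dir)), 0.0);
vec3 linear_color = light_color * n_dot_l;
// Gamma encode the result so the monitor emits the intended brightness
vec3 output_color = pow(linear_color, vec3(1.0 / 2.2));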

(a) Lighting and texturing in gamma space
(b) Lighting done in linear space with standard texturing
(c) The source texture

Using textures introduces the next big problem with gamma correction. In the left image the color of the texture looks correct but the lighting is too dim. The right image is corrected and the lighting looks correct, but the texture, and the whole image, is washed out and desaturated. The goal is to keep the colors from the texture and combine them with the correct-looking lighting.

Pre-encoded images

Pictures taken with a camera or paintings made in Photoshop are all stored in a gamma encoded format. Since the image is stored already encoded, the monitor can display it directly. The gamma decoding of the monitor cancels out the encoding of the image, and linear brightness gets displayed. This saves the step of having to encode the image in real time before displaying it.
The second reason for encoding images is based on how humans perceive light. Human vision is more sensitive to differences in shaded areas than in bright areas. Applying gamma encoding expands the dark areas and compresses the highlights, which results in more bits being used for darkness than for brightness. A normal photo would require 12 bits per channel to be saved in linear space, compared to the 8 bits used when stored in gamma space. Images are encoded with the sRGB format, which uses a gamma of approximately 2.2.

Images are stored in gamma space but lighting works in linear space, so the images need to be converted to linear space when they are loaded into the shader. If they are not converted correctly there will be artifacts from mixing the two different lighting spaces. The conversion to linear space is done by applying the gamma decoding function to the texture.



(a) All calculations have been made in gamma space
(b) Correct texture and lighting: the texture is decoded to linear space, all calculations are done, and the result is encoded back to gamma space

Mixing light spaces

Gamma correction is a term used to describe two different operations, gamma encoding and gamma decoding. This can be confusing when first learning about it, because the same word is used for both operations.
Correct results are only achieved if the texture input is decoded and the final color is encoded. If only one of the two operations is used, the displayed image will look worse than if neither of them is.
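
A minimal GLSL sketch of the complete flow, with placeholder names for the texture, normal (N), light direction (L) and light color:

// Decode the gamma encoded texture to linear space
vec3 albedo = pow(texture(encoded_diffuse, uv).rgb, vec3(2.2));
// Do all lighting math in linear space
vec3 linear_color = albedo * light_color * max(dot(N, L), 0.0);
// Encode the final color so the monitor displays it correctly
vec3 output_color = pow(linear_color, vec3(1.0 / 2.2));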



(a) No gamma correction: the lighting looks incorrect but the texture looks correct.
(b) Gamma encoding of the output only: the lighting looks correct but the texture becomes washed out.
(c) Gamma decoding only: the texture is much darker and the lighting is incorrect.
(d) Gamma decoding of the texture and gamma encoding of the output: both the lighting and the texture look correct.

Implementation

Implementing gamma correction is easy. Converting an image to linear space is done by applying the gamma decoding function. The alpha channel should not be decoded, as it is already stored in linear space.

// Correct but expensive way (note that pow needs matching types, so vec3(2.2))
vec3 linear_color = pow(texture(encoded_diffuse, uv).rgb, vec3(2.2));
// Cheap alternative: approximate the gamma with 2.0 so decoding is a single multiply
vec3 encoded_color = texture(encoded_diffuse, uv).rgb;
vec3 linear_color = encoded_color * encoded_color;

Any hardware with DirectX 10 or OpenGL 3.0 support can use the sRGB texture format. This format lets the hardware perform the decoding automatically and return the data as linear. The automatic sRGB conversion is free and gives the benefit of doing the conversion before texture filtering.
To use the sRGB format in OpenGL, just pass GL_SRGB_EXT instead of GL_RGB to glTexImage2D as the format.
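
With an sRGB texture the decoding disappears from the shader entirely; sampling might then look like this:

// The hardware decodes sRGB during the fetch, so the value is already linear
vec3 linear_color = texture(encoded_diffuse, uv).rgb;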

After doing all calculations and post-processing, the final color should be gamma encoded with a gamma that matches the gamma of the monitor.

vec3 encoded_output = pow(final_linear_color, vec3(1.0 / monitor_gamma));

For most monitors a gamma of 2.2 works fine. To get the best result the game should let the player select a gamma value from a calibration chart.
This value is not the same gamma that is used to decode the textures. Textures are all stored with a gamma of 2.2, but that is not true for monitors, which usually have a gamma ranging from 2.0 to 2.5.

When not to use gamma decoding

Not every type of texture is stored gamma encoded, and only the types that are encoded should be decoded. A rule of thumb is that if the texture represents some kind of color it is encoded, and if the texture represents something mathematical it is not encoded (see the sketch after this list).
  • Diffuse, specular and ambient occlusion textures all represent color modulation and need to be decoded on load 
  • Normal, displacement and alpha maps aren’t storing a color so the data they store is already linear
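
A minimal GLSL sketch of that rule, with placeholder sampler names:

// Color data: gamma encoded, so decode to linear space when sampling
vec3 diffuse = pow(texture(diffuse_map, uv).rgb, vec3(2.2));
vec3 specular = pow(texture(specular_map, uv).rgb, vec3(2.2));
// Mathematical data: already linear, sample directly
vec3 normal = texture(normal_map, uv).rgb * 2.0 - 1.0;
float height = texture(height_map, uv).r;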

Summary

Working in linear space and making sure the monitor outputs light linearly is needed to get properly rendered images. It can be complicated to understand why this is needed but the fix is very simple.
  • When loading a gamma encoded image, apply gamma decoding by raising the color to the power of 2.2; this converts the image to linear space 
  • After all calculations and post-processing are done (the very last step), apply gamma encoding to the color by raising it to the inverse of the gamma of the monitor

If both of these steps are followed the result will look correct.

28 comments:

  1. I noticed that reviewers post very dark Amnesia screenshots, or, even worse, they try to increase the brightness in image editing software. Is there no gamma correction applied? Example: http://kotaku.com/amnesia-a-machine-for-pigs-the-kotaku-review-1274112886

    1. Amnesia and HPL2 don't use gamma correction correctly.

      When you first start Amnesia you get a calibration chart that lets you select gamma. But this value affects how the whole operating system displays images.

      Let's say you increase that gamma to the maximum value and then take a screenshot. On your computer the screenshot will look very bright. But if you view that screenshot on another computer that has different gamma correction settings it will look dark.

      In HPL3 we use gamma correction that is applied manually. So changing the gamma value would change how the screenshot looks.

    2. Gamma correction in HPL2 only affects the encoding of the final image, whereas HPL3 also decodes the textures.

  2. This is really interesting stuff, thanks for sharing!

  3. Every monitor has brightness and gamma settings, plus the majority of games have their own in-game settings. Each individual adjusts these settings according to their personal preference. End of story.

  4. The HPL3 engine looks amazing. Much better than the 2nd one.

  5. This scene absolutely needs some shadows made by the dressed corpse on the ground. That, or simple ambient occlusion. So, please do consider that. Also, the spots on the ground, where the pipes go into the ground need some transition. Perhaps they go into the ground violently, and the ground is shattered a little from around the spots of penetration? Or, perhaps there is a little ring around the entry there, so that everything is neatly designed. Think about the narrative repercussions.

    The cloth, however, as is the topic of the posting, looks very good.

  6. I always dislike gamma correction, because it makes the picture much brighter. Colors get "faked" and aren't original anymore. This looks very strange, especially in a horror game, where dark areas should be dark and not lightened up.

    I also think the gamma hint in Amnesia was wrong. When turning up gamma so that the right square is barely visible, the image is too bright and dark areas aren't scary anymore. I recommend leaving gamma as it is at 1.0 (or the left square).

    I prefer scenes with a good contrast between bright and dark. Gamma correction reduces the contrast while lighting up the picture by a constant value of 2.2. It's like a dynamic contrast where dark areas are brightened and vice versa. I dislike it because it fakes the picture and harms the atmosphere/horror in bad cases.

  7. edit:

    I think that picture "a" looks the best in any case, without gamma correction. I absolutely can't say that the lighting looks incorrect there. On the contrary, it looks incorrect with gamma correction.

    1. This comment has been removed by the author.

    2. Gamma correction doesn't make the game look any brighter. That is up to the artists and level designers. They can make a level look as dark as they want.

      Gamma correction just makes the light physically correct by making it linear.

      It sounds like you need to calibrate your monitor.
      Start Control Panel -> Display -> Calibrate Color and follow these guides:
      http://filmicgames.com/archives/32
      http://radsite.lbl.gov/radiance/refer/Notes/gamma.html

    3. I already calibrated my monitor, but then it looked extremely horrible!
      A gamma value of 2.2 could never be called the correct standard. Monitors are very different and you should use your own settings where you think pictures look correct.

      Also, in reality light isn't linear. Lights lose intensity with distance. Surfaces reflect light very differently: dark surfaces "swallow" the light, while white or bright surfaces shine even brighter. But with gamma correction it seems to me that the brightness is constant everywhere.

      In this example I prefer picture "c". Look at the ground textures, where you see the most details. With gamma correction it looks too clean overall and loses contrast.
      http://2.bp.blogspot.com/-rdI77kpiSFo/Upb7xs_OyxI/AAAAAAAAAC8/ZM99tkX4vMk/s1600/blog_mixing.jpg


      The idea of gamma correction is nice, but the problem is it doesn't work as it should (in my opinion).

    4. This article is not talking about light as in a light source. It is about the brightness that each pixel on the monitor emits.

      If the computer tells the monitor that a pixel should emit 50% brightness, it is important that it actually emits 50% brightness. If a monitor has incorrect gamma calibration it will emit the wrong brightness.

    5. OK, but when I look at this example here:
      http://1.bp.blogspot.com/-iqVB5rg5chs/Upb7xqeBrXI/AAAAAAAAACo/-Yq8a_kn1Gs/s1600/blog_sphere_correct.png

      You say that picture B is correct. But it would only be correct in a completely dark room in vacuum, like in space. Picture A shows how the brightness scatters a lot more smoothly, like it would on Earth or other planets with an atmosphere full of particles.
      Hard shadows only exist in vacuum (on the Moon, for example) or at very near distances. So the correct picture is A; it shows the true brightness/shadow spreading in an environment like ours.

    6. They are displaying the lighting model for a single light, not how the objects will actually appear in the game. Game artists and level designers will place more light sources in the level, add ambient lighting, ambient occlusion and some more tricks, so the shadows will look more natural.
      But for all that to look good, a good lighting model must be established first.
      The problem is that an untrained eye can't see that in these images, because you lack some understanding of how light works, how we see colors, and how all of this is simulated and/or faked in a computer.

      BTW, hard shadows are not related so much to the vacuum, but really have to do with the number of light sources, and light scattering off of objects and other things, like the air or dust particles. Now, it's true that the atmosphere of the Moon is very thin, but light reflects from the lunar environment and from the Earth, if visible in the lunar sky, so you can have soft shadows on the Moon.

  8. I was wondering if you are going to implement any kind of physically based shading/lighting? I haven't had the chance to use it in any capacity; I don't think there's any engine (that's implemented it) that's freely available yet. I know they are doing it in UE4 and I'm pretty sure the new CryEngine.

    There is that one Maya shader, and I think they were implementing it in the new Marmoset Toolbag. It seems artists will soon be expected to create their assets with physically based shaders in mind?

    1. To be clear, I'm talking about something akin to this: http://www.unrealengine.com/files/downloads/2013SiggraphPresentationsNotes.pdf

      Also, what shading language do you use? I'm by no means a coder, but I have messed with some Cg and HLSL (I think, since they are pretty similar, right?).

    2. We currently use our own language that converts automatically to OpenGL for PC or PSSL for PS4. The syntax we use is pretty much the same as OpenGL's.
      We could convert to other shading languages if we wanted; they are all pretty much the same.

      Physically based shading is something we use, but it doesn't really mean much. The core of physically based shading is that it is performed in linear space and that it matches physical formulas as much as possible. But the shader never really uses real physics formulas, since that would be too expensive.

      For example, here is the falloff formula used in the document you linked:
      falloff = saturate(1 - (distance / lightRadius)^4)^2 / (distance^2 + 1)
      The real physics formula is
      falloff = 1.0 / (distance^2)
      But that can't be used, because it would mean that the light has an unlimited radius.
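
      In GLSL the two look something like this (just a sketch; dist and lightRadius assumed given):

      // Falloff from the document: reaches zero at lightRadius
      float ue4_falloff = pow(clamp(1.0 - pow(dist / lightRadius, 4.0), 0.0, 1.0), 2.0) / (dist * dist + 1.0);
      // Real inverse square falloff: never reaches zero
      float physical_falloff = 1.0 / (dist * dist);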

      If you want to read about real, physically correct lighting/shading you should look up unbiased rendering.

    3. I have a rather general question, not really related to HPL3 - since you seem to know something about lighting. I've heard that people are experimenting with real-time raytracing systems; now, I know (or guess) this is not yet suitable for interactive things like games (maybe better suited for some architectural flythroughs and such), but I keep wondering could some form of crude, light cone-based raytracing approach be used to preprocess a game level in order to generate a "map" (again, crude) of ambient lighting, based on what's visible in the environment from certain points, and interpolate from that map onto the objects as a (direction-dependent) ambient term, in order to fake a more realistic lighting? Do you have any thoughts on that?

      Also, I've seen that the new Unreal engine has support for dynamic diffuse light bouncing (like, when you shine wight light on a red wall, and then the reflected light bathes the wall-facing sides of various nearby objects in red) - do you have some idea how they might be doing this in realtime?
      Thanks.

    4. by "wight" i mean "white" :) ...

    5. This is something that I believe we will see quite a lot of at the start of this generation, before dynamic GI is used.
      The new Fox Engine used in the upcoming Metal Gear uses this; they even bake it for different times of the day and interpolate between the different probes to make it more dynamic.
      Unity also supports this feature, but they use a much simpler approach.
      HPL3 will use a version that is somewhere between Unity and Fox Engine in complexity.

      Unreal ended up not using dynamic light bouncing. It was too expensive for the next-gen consoles. Unreal Engine 4 uses baked lightmaps just like UDK, but it will support much larger lightmap sizes.

      http://docs.unity3d.com/Documentation/Manual/LightProbes.html
      https://www.youtube.com/watch?feature=player_detailpage&v=W2DlBGM8ZBc#t=3453 (start at 57 minutes in)

    6. What about the Material Model? I was wondering if you're going to use the traditional standard maps;
      Diffuse, Specular, (+normal map)
      or have you guys looked into
      Basecolor, Metallic, Roughness, Cavity. (+normal map)

      Or any combination of all of the above? Instead of the colored specular map to define metal surfaces, they use three maps, metallic, roughness and cavity.

      And in HPL2, how come you didn't add support for colored specular? Was this for performance reasons, or am I missing something? At least I think that in HPL2 the red channel is used for specular level and green for specular power?

    7. HPL3 supports
      Diffuse, Colored Specular, Roughness, Translucency (cheap SSS), Normal map and Height map.
      It also supports some additional textures for smaller specific effects.

      To simulate a metallic surface you would have to use Colored Specular and roughness to make it look correct.

      The reason HPL2 did not support colored specular was because of memory and space limitations for the deferred render targets. Greyscale specular uses 1 channel and 8 bits per pixel while colored takes up 3 channels and 24 bits.

    8. Thank you. I guess I shouldn't be surprised people have already thought of this, and that there is an established name for the approach. :D
      So, it's called light probes - now I know what to search for.
      Also, thanks for the links.

  9. Thank you for this tech post (it was about time! :D); really interesting, I learned something new today.

    P.S. To people arguing that no gamma correction looks better - it may appear to be so to you, but you don't really know what to look for in those images, or how the lighting is really modeled in a game. Once you have a good lighting model, the artists can make the scene look even better. Just look at the Soma trailer and compare it to the previous games if you have any doubts.

  10. P.P.S. Since I know you guys (and many other people) loved Telltale's The Walking Dead; you should definitely try their newer game The Wolf Among Us - from a game design perspective, a similar but more refined formula works quite well, and it's an awesome game. Do play it if you haven't already; and if you decide later to post some thoughts on the game and its design, even better!

    The only thing I wish was done somewhat differently is the decision timer system - the problem is that, due to their desire to keep the flow of the scene, I guess, your time to pick an answer/option runs out basically when the character stops speaking, so you can't really concentrate on what it is that is being said, because you have to figure out what each of the options is and what it actually means - which is kind of paradoxical because to do that, you have to listen to the conversation... But it's only a minor nuisance, rather than a serious problem.

  11. Be aware of a lot of blending problems with linear lighting. It's a real pain in the ass :) Our terrain blending fell apart after linear lighting was introduced to the engine: brighter textures have more visible weight than darker textures, so blending shifts according to the texture-to-texture contrast ratio.
    That's because the human eye is roughly logarithmic, or gamma in some approximation. So a linear gradient from black to white in sRGB space (which isn't linear in terms of the intensities you get from the monitor) looks more linear to the eye than one in linear space.
    This problem also applies to decals. If you have a semi-transparent decal, it will look more transparent on a brighter wall, and more opaque on a darker wall.

