Personal View
Question about GH2 DR
  • Got my GH2 about a week ago and have been testing it with different hacks. I'm liking it; however, the dynamic range is a bit of a downer, IMO. After testing all the different color profiles, Nostalgic seems to have the most latitude with everything set to -2. My tests were confirmed on this site. It appears that the GH2 has the least latitude of all the cameras tested. For me, getting the GH2 to bump up 3-4 stops (it's now rated at about 10 stops) to be on par with the Canons with Technicolor's color profile would be a BIG bonus. Just curious about the outlook of that happening? With that said, I want to give a big shout-out to VK, in case I came across as ungrateful.
  • 58 Replies
  • Just use a flatter gamma... like Cineform S-Log, Rspage, or 5DtoRGB flat; it's similar to the Technicolor profile (Rspage compresses even more). Then calculate the DR in Rspace gamma and tell us how much it is.
  • Sure, but it's a bit time-consuming. I was hoping more for a color profile alteration (like Technicolor for Canon) so that the camera would spit out a flat image with much higher DR.
  • @Tamir

    This is your first post, it is in the wrong category, all in caps, and it is a strange complaint.
    We recently had a troll going on about GH2 resolution; I hope that is not the case here.
    We are not a club of GH2 fanboys, but it is always best to stay constructive.

    Please show us your films, clips.
  • Did you ever try to grade a scene with true 13 steps from a Canon with Technicolor profile? It just falls apart badly.
    8 bit can't hold it, so even if Vitaliy could hack the GH2 to get that, we couldn't make much use of it in 8 bit.

    Plus, the GH2 doesn't deliver that much; even in RAW photographs there are only 11.5 stops (tested with a Xyla chart by DSC Labs). If you need 13 to 14 stops, get an Alexa. Even RED has difficulties getting that much without HDRx.

    Regarding Canon's C300, I normally don't talk about cameras I haven't tested yet. But even if it had 14 stops (it seems to have very low noise), how do we make use of them if they are delivered squeezed into 8 bits?
  • @Tamir

    You might want to have a look at this:

    It's complicated stuff. The upshot is that dynamic range is related to sensor pixel size. With a full frame sensor 14 stops is approaching the theoretical limit. A Nikon D700 (12MP) has better dynamic range than a D3X (24MP), for example, because each pixel well is bigger. 14 stops is probably only possible with a full frame sensor operating at base ISO. Smaller sensors and higher ISO levels will degrade dynamic range performance. I suspect that achieving 14 stops on a 4/3 camera is currently impossible. Same goes for any camera with a smaller sensor (like Red).

    Applying a flat rendering curve won't help because the additional image data simply isn't there.
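    As a rough sanity check on the pixel-well argument above, engineering dynamic range is often estimated as log2(full-well capacity / read noise). The electron counts below are hypothetical round numbers chosen only to illustrate the big-well vs. small-well comparison, not measured values for any real camera:

```python
import math

def dr_stops(full_well_e: float, read_noise_e: float) -> float:
    """Engineering DR in stops: log2(full-well electrons / read-noise electrons)."""
    return math.log2(full_well_e / read_noise_e)

# Hypothetical figures for illustration only:
large_pixel = dr_stops(full_well_e=90000, read_noise_e=5)  # big full-frame well
small_pixel = dr_stops(full_well_e=20000, read_noise_e=5)  # small 4/3-size well
print(round(large_pixel, 1), round(small_pixel, 1))  # ~14.1 vs ~12.0 stops
```

    With the same read noise, the larger well alone buys roughly two extra stops, which matches the intuition that well size, not megapixel count, drives DR.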

  • @cbrandin I know you were investigating film modes. Any updates on that? You think the 9.5 stops is the tops for 4/3 camera dynamic range, or is it possible to squeeze another stop (or two!?) out of the GH2?
  • It should be possible to squeeze a bit more, since the camera does better in photo mode. The question is whether it's technically possible to hack that part of the processing.
  • @qwerty123 there has been renewed discussion on P-V regarding issues with more than 9.5 stops, notably the issue present in the C300 (using 8 bits to describe 12-14 stops). So until it has been concluded that 8 bits can actually safely contain more stops, this is not really going to be a game changer.
  • @cbrandin I am no expert at the level of those who wrote the Clarkvision article, but I think their reasoning is completely false. Look at DxOMark (and there is another site): the highest DR in the photo camera world (RAW format) is the APS-C Pentax K-5, and just a little below it is the D7000, followed by the D3X. The first two are around 14 stops, that is 2 stops more than the full-frame D700. The ARRI Alexa, which has an S35/APS-C-size sensor, is widely credited with 14.5 stops.

    Sony has been investing a lot in the DR of its sensors; you can see a clear improvement with each generation: in the photo world with the Nikon D7000 and Pentax K-5, and in the video world with the S-Log F3 and the soon-to-come F65.

  • @danyyyel

    The problem with many of these tests is that signal to noise ratios are often not considered, and that results in being able to claim better DR. True, the darkest pixel vs the brightest one might be a wide spread, but if you look at details you will notice that low level details are swamped with noise. If you consider S/N ratios the D7000 comes to about 9-10 stops of usable dynamic range. Even Nikon admits that the D3x cannot resolve low level detail as well as the D3/D700 - but you have to save in 14 bit lossless mode (which is not the camera's default) to see the difference. Canon, for example, actually lowered the pixel count in the G11 because of how bad the low level detail looked with the G10 (which had a higher pixel count). My hat's off to them for that because at long last they listened to their engineers instead of simply going for a higher pixel count (as marketing people would suggest).

    It's not strictly the pixel density that needs to be considered - it's the pixel well size that actually matters. One advancement has been to connect sensor elements more efficiently (back-illuminated sensors) so that a higher percentage of the sensor space is actually sensor wells - resulting in bigger individual wells, and that helps. With a camera that is strictly video the pixel count can be much lower (2 megapixels for HD 1080) - also resulting in bigger wells.

    APS-C is a bigger sensor than 4/3 (1.5x vs 2x), so one would expect better performance.

    I realize that incremental advancements have been made. However, there is a limit to how effectively low level detail can theoretically be captured simply based on the physics of how many photons can be gathered within a given time within a certain space.

    By the way, virtually nobody supports these high DR values with JPEG because quantization gets ugly. With JPEG, DR is typically degraded 2-5 stops. H.264 is similar to JPEG when it comes to compression, so video performance is also limited by the 8 bits per channel. 8 bits can only completely correctly represent 7 stops of data (because of the Nyquist limit) - if a higher DR is rendered, gradation steps will be skipped as the total range is mapped into only 8 bits' worth of discrete steps. This is why S/N ratio is so important - if the dark detail has a lot of noise then you are effectively stuck with 6-7 bits of usable data per channel. Raise ISO too high and you lose another stop or two. It can get pretty bad.

    Of course all this can be obfuscated depending on how you develop specifications - which would explain why some cameras look better than others even with "lower" specs.

  • You could have a codec that has a 2-bit depth. Set the thresholds at: 00 = -7EV, 01 = -2EV, 10 = +2EV, and 11 = +7EV. Now, according to specifications, you would have 14 stops of DR. It's meaningless, though, because 14 stops aren't actually rendered - but technically, it's still 14 stops of DR. Also, the lowest value (00) will be swamped with noise - so really you end up with only 3 usable levels at best.
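    The 2-bit thought experiment can be written out directly. This toy quantizer uses the four representative levels from the paragraph above and shows how a spec-sheet "14 stops" collapses to four rendered values:

```python
# Toy 2-bit "codec": four representative levels spanning -7..+7 EV.
LEVELS = {-7.0: 0b00, -2.0: 0b01, 2.0: 0b10, 7.0: 0b11}

def encode(ev: float) -> int:
    """Map a scene value (in EV) to the nearest of the four code levels."""
    nearest = min(LEVELS, key=lambda lv: abs(lv - ev))
    return LEVELS[nearest]

# 14 stops of claimed range, but -4 EV and -1 EV land on the same code:
print(encode(-4.0), encode(-1.0))  # both quantize to 0b01
```

    Scene values three stops apart become indistinguishable, even though the end-to-end range on paper is 14 stops.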

    Gamma curves are used to make the mapping of DR into 8 bits as pleasant visually as possible. Use a flat curve, and things get really ugly in 8-bit space. Gamma 2.2 is pretty close to ideal from a perception standpoint, so other curves will usually look worse in the end.
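    To see why the gamma curve matters in 8-bit space, one can count how many of the 256 codes each stop below clipping receives under a flat (linear) mapping versus gamma 2.2. A quick sketch:

```python
def codes_per_stop(curve, stops: int = 8) -> list:
    """Distinct 8-bit codes spanned by each stop below clipping (1.0 = clip)."""
    counts = []
    for s in range(stops):
        hi = 2.0 ** -s       # linear value at the top of this stop
        lo = hi / 2.0
        counts.append(round(curve(hi) * 255) - round(curve(lo) * 255))
    return counts

linear = codes_per_stop(lambda x: x)              # flat, no gamma
gamma22 = codes_per_stop(lambda x: x ** (1 / 2.2))
print(linear)   # top stop hogs roughly half of all codes
print(gamma22)  # codes are spread far more evenly across the stops
```

    Linear encoding gives the brightest stop about half the codes and starves the shadows; gamma 2.2 redistributes codes toward the darker stops, which is why it looks so much better after quantization.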

  • @qwerty123

    I've been away for a while so I haven't had time to work on film modes. They are extremely complicated and elusive so I haven't been able to make much progress on that front. There is an unfortunate trade-off with film modes. If DR is increased, posterization will result because of the 8-bits of data per channel.

    As for the 9.5 stops: I suspect that for the GH2 that's pretty much the practical limit - again because of the 8 bits per channel. JPEG and H.264 images will never be able to handle the full raw range and still look good in most cases. Even my big Nikon cameras limit DR to 9 or so stops with JPEG in most cases. Raw, of course, has much higher DR; but raw is 14 bits per channel.

  • Great contribution, Chris! Please allow me two corrections:

    Any Bayer-pattern sensor will need more than 2 megapixels to resolve HDTV - about 3.2 megapixels minimum, like the Alexa, F3 or FS100.

    The representation of stops in bits has not much to do with the Shannon/Nyquist limit, but with the quantization error in A/D conversion (or later reduction). It's a similar problem, though Shannon/Nyquist deals with frequencies, like in audio sampling or in image resolution.

    BTW, your explanation also covers why the Canon C300 will have a major problem: it seems to be very low in noise and consequently high in DR, but how to squeeze that into 8 bits?

    And it explains why we have more noise in the GH2 than in FS100: smaller wells.
  • One way cameras get better DR is to use tricks like combining multiple image captures when low shutter speeds are used (this can be very effective and can greatly raise DR). This is not HDR photography, because the images are all captured the same way - rather, the images are combined to effectively counter the effects of noise. Another trick is to capture an image and then a black frame (with the shutter closed) - the black frame is used to subtract sensor noise. Unfortunately, none of this works with video.
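    Both tricks can be sketched with synthetic numbers (the signal, offset, and noise values below are invented purely for illustration): averaging repeated captures beats down random noise by roughly sqrt(N), and subtracting an averaged black frame removes the fixed offset:

```python
import random

random.seed(42)
TRUE_SIGNAL = 100.0   # hypothetical photon signal
FIXED_OFFSET = 12.0   # hypothetical fixed-pattern / thermal offset
SIGMA = 8.0           # random (temporal) noise, std dev

def capture() -> float:
    return TRUE_SIGNAL + FIXED_OFFSET + random.gauss(0, SIGMA)

def black_frame() -> float:
    # Shutter closed: same fixed offset, fresh random noise, no signal.
    return FIXED_OFFSET + random.gauss(0, SIGMA)

N = 256  # averaging N frames cuts random noise by about sqrt(N)
avg_light = sum(capture() for _ in range(N)) / N
avg_black = sum(black_frame() for _ in range(N)) / N
print(round(avg_light - avg_black, 1))  # lands close to the true signal
```

    A single capture is off by the 12-unit offset plus up to several units of noise; the averaged, black-subtracted result recovers the signal to within a fraction of a unit.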

  • The latter does work with video. CMOS has two kinds of noise: one changes statistically over time, the other is just different sensitivity of the photocells, called fixed pattern noise.

    The latter can be measured by covering the sensor, storing the values and calculating the correction in realtime. RED is doing this, and Sony too in the F3. I don't know about the GH2, but the lower-third stripe we see at high ISOs might be the result of insufficiently corrected fixed pattern noise.
  • @nomad

    You're right about the pixel count, of course - I was mistaken (stuck in the old 3 sensor days). Thanks for the correction.

    The Nyquist limit does apply because the A/D converters are 12-bit (I believe), which makes the roundoff error insignificant in this discussion. So, we are close to the simple mathematical error introduced when pixel values are mapped into the 8-bit space. From an image gradation fidelity standpoint the principle is the same.

  • The black frame trick I was alluding to is used to subtract noise that results from the sensor heating up over time - like with long shutter speeds, or constant capturing. It has to be done separately for each shot and in close proximity, and using the same capture time as the image. I can see where one could do this as long as shutter speeds are faster than 2x the frame rate. Or, maybe they can do it faster?

    I doubt the GH2 can do this because it requires using a shutter during capture - maybe the GH3 will?

  • Right, A/D conversion is even 16 bit in high-end cameras. Downsampling to 8 bit is about as bad as digitizing in 8 bit, though. The only advantage: image processing can be done before downsampling, that's what the picture profiles in our cameras do.
    So, if we can hack the picture profiles, we can adapt better to a specific scene and to the intended look before the encoder is downsampling to 8 bit. But we will never get the freedom to record the full DR and decide about the look in post. That can only be done in RAW, like REDCode or ARRIraw.

    And regarding Shannon/Nyquist limit: yes, the principle is the same, but nobody is calling it by that name, most of the literature will say quantization error. It's a kind of aliasing too, that's for sure.
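    The point about processing before the encoder downsamples can be demonstrated by counting distinct output levels. Here a square-root curve stands in for a real picture profile, and 12-bit sensor data is assumed; both choices are illustrative, not the GH2's actual pipeline:

```python
def quantize8(x: float) -> float:
    """Clamp to [0, 1] and round to one of 256 levels (8 bits)."""
    return round(max(0.0, min(1.0, x)) * 255) / 255

def lift_shadows(x: float) -> float:
    return x ** 0.5  # stand-in for a flat-ish picture profile curve

# In camera: curve runs on 12-bit data, THEN the encoder quantizes to 8 bit.
in_camera = {quantize8(lift_shadows(i / 4095)) for i in range(4096)}

# In post: footage is already 8 bit, and the curve runs afterwards.
in_post = {quantize8(lift_shadows(i / 255)) for i in range(256)}

print(len(in_camera), len(in_post))  # post-curving leaves fewer distinct levels
```

    Applying the profile before quantization uses nearly all 256 codes; applying the same curve to already-quantized 8-bit footage merges codes and can never recover the lost levels, which is exactly why hackable picture profiles matter.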
  • @VK

    Clever way to get around the 8-bit issue. Double the bitrate and increase usable DR by at least 2 stops (theoretically, even more) - pretty nice!

  • Very interesting discussion indeed! Near RT too ;-)

    You are right about the black frame to some degree. The fixed pattern noise can change with exposure time (similar to the Schwarzschild effect in analog photography) and shift with temperature. But with a camera that is not overheating massively (like the 7D does), our GH2 would be fine if it just did such a test between shots.

    When shooting RED I only need one black shading if the exterior temperature changes massively or if I go to very long exposure times (and back). The RED has more mass to slow down changes in temperature, but even GH2 wouldn't need it on every frame.

    The other part of noise changes all the time; it's the dance of the molecules and only comes to a stop at zero (Kelvin!). Unfortunately, cooling the sensor down to that value makes most of its physics change massively ;-)
  • @nomad

    I was under the impression that Nyquist/Shannon applied to any kind of analog sampling in any domain. At least that's what it means from an information theory standpoint. I guess conventional use of the term has been restricted - too bad, that just means things are made more complicated for no good reason.

    Wouldn't the black frame require the use of a shutter even if not all the time? I doubt the GH2 can fire it fast enough to not end up interrupting video. I assume the Red uses a shutter?

  • No, you just cap it ;-)

    From Wiki:
    Shannon's version of the theorem states:
    If a function x(t) contains no frequencies higher than B hertz, it is completely determined by giving its ordinates at a series of points spaced 1/(2B) seconds apart.

    So, it's about frequencies, but you can generalize the problem to any kind of sampling, that's true. Even the Heisenberg observer problem could be seen as a similar issue…

    Regarding noise: fixed pattern noise is one component of noise, but it's a simplification if one divides it into components. Actually, it's a continuum:
    Noise is tiny differences in sensitivity of the photocells. One part of it is introduced in production and can be measured and compensated.
    Another part changes very slowly with heating of the sensor and can change with exposure time or aging too. This could be compensated by calibrating from time to time, like RED or the F3 does. Even some scanners do this from time to time, asking you to keep the lid shut for a minute or so.
    A (theoretical) GHX could do this, even automatically, with the shutter closing when you switch it on (or the cap left on), doing a calibration and then coming to life. Might be annoying if you need it NOW.
    Another, relatively small fraction of noise changes faster than this but slower than frame-to-frame. This could be compensated with a fast shutter, but it won't help much. The largest part of noise is fully random from frame to frame and can't be reduced other than with complex temporal filters like Neat Video - far from realtime on an ordinary computer; only the very expensive ARRI Relativity can do this.
    BTW, conservation of this fully random type of noise makes it look so much better with our GOP1 patches, since it changes so fast it looks more organic. Longer GOPs tend to freeze this random noise over time. But even a much faster camera can't reduce this type of noise by comparing it to a black frame.
  • Well... then it looks like Panasonic could do something like this if they wanted to. I doubt it's doable with the hack, though.

    Nikon has a long-exposure NR feature where a black frame is captured and used to reduce noise - it works very well.

    I also have an old Kodak SLR/n. They had a "low ISO" feature where you could set ISO as low as 6. What they really were doing was to capture images several times at ISO 160 and integrate them into one image. The results were stunning! The feature was rather hard to use, however, and pretty restrictive (requiring very long shutter times) - but boy, the images looked great. It's a technique commonly used with film scanners as well.