Question about GH2 DR
  • 58 Replies
  • The vibrating sensor would be a temporal version of the old spatial trick of pixel-shifting in 3-chip camcorders.
  • @nomad

    right - I guess I should have specified "proportional" better... I meant a reduction related to the current number of passes completed plus 1 - potato vs. potatoe (at least in American politics :-).
  • All this gave me an idea. Most cameras have a "clean sensor" feature that vibrates the sensor. It seems to me that you could increase effective resolution by shifting the sensor between captures. It would be sort of like HDR photo techniques, except applied to effective resolution rather than DR. It could be great for landscape/architectural images, etc... Integrating the images would require some new widget, though.
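    To make the integration step concrete, here is a rough numpy sketch of what such a widget might do - purely illustrative, assuming four captures taken at half-pixel offsets (the function name and shift pattern are my own, not anything Panasonic exposes):

    ```python
    import numpy as np

    def fuse_shifted_captures(f00, f10, f01, f11):
        """Interleave four equal-exposure captures, assumed to be taken at
        half-pixel offsets (0,0), (+x/2,0), (0,+y/2) and (+x/2,+y/2),
        onto a grid with twice the linear resolution."""
        h, w = f00.shape
        hi_res = np.empty((2 * h, 2 * w), dtype=np.float64)
        hi_res[0::2, 0::2] = f00  # reference position
        hi_res[0::2, 1::2] = f10  # shifted half a pixel horizontally
        hi_res[1::2, 0::2] = f01  # shifted half a pixel vertically
        hi_res[1::2, 1::2] = f11  # shifted in both directions
        return hi_res
    ```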
  • Now some authority needs to decide who was first ;-)

    Just like Shannon, Nyquist, Kotelnikow and Küpfmüller…

    Regarding scanners: well, it follows a square-root law. Noise drops by the same factor with every doubling of the number of scans, i.e. 2, 4, 8, 16 scans and so on. AFAIK, no scanner goes beyond 16 for obvious reasons.
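    If the noise really is random, N averaged scans should lower it by roughly the square root of N - a quick illustrative check:

    ```python
    import math

    for n in (1, 2, 4, 8, 16):
        print(f"{n:2d} scans -> remaining noise x {1 / math.sqrt(n):.3f}")
    # every doubling multiplies the random noise by about 0.71
    ```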
  • "RED is doing it to some degree to extend DR with their HDRx™. They sample every cell with a shorter exposure time and then continue until a longer one is reached. You'll get two tracks, one exposed for the blacks and one for the highlights, which you can mix later in RCX or DaVinci Resolve to your liking."

    This sounds fairly similar to VK's idea.
  • Nikon film scanners have a feature supporting multi-pass scanning. Theoretically, that should reduce random noise proportionally with each pass (sensor noise, that is). I have a Nikon 4000 scanner that does this and it makes a significant difference - although it slows things down quite a bit. Also, because the sensor is moved mechanically, I assume the slight variation in sensor positioning between iterations resulting helps too.
  • Sure! Temporal oversampling. But I don't see how this can be applied to film in that manner.

    RED is doing it to some degree to extend DR with their HDRx™. They sample every cell with a shorter exposure time and then continue until a longer one is reached. You'll get two tracks, one exposed for the blacks and one for the highlights, which you can mix later in RCX or DaVinci Resolve to your liking.

    If they license this, one day a GH5, -6 or -7 might have this too ;-)
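    Just to illustrate the mixing idea - this is not RED's actual algorithm, only a toy blend assuming both tracks are already available as linear-light arrays and that the short track sits a known number of stops below the long one:

    ```python
    import numpy as np

    def blend_tracks(long_track, short_track, exposure_ratio, weight=0.5):
        """Toy HDR mix: the long track protects the blacks, the short
        track protects the highlights. exposure_ratio is how much
        brighter the long exposure was (e.g. 16 for a 4-stop gap)."""
        matched = short_track * exposure_ratio  # match brightness first
        return (1.0 - weight) * long_track + weight * matched
    ```

    In RCX or Resolve the mix is of course adjustable and far more sophisticated; this only shows why two tracks buy you extra DR.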
  • Well... then it looks like Panasonic could do something like this if they wanted to. I doubt it's doable with the hack, though.

    Nikon has a long shutter NR feature where a black frame is captured and is used to reduce noise - it works very well.

    I also have an old Kodak SLR/n. It had a "low ISO" feature where you could set ISO as low as 6. What it was really doing was capturing the image several times at ISO 160 and integrating the captures into one image. The results were stunning! The feature was rather hard to use, however, and pretty restrictive (requiring very long shutter times) - but boy, the images looked great. It's a technique commonly used with film scanners as well.
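    As far as I understand it, that "low ISO" trick boils down to averaging several identically exposed captures, which knocks down the random noise by roughly the square root of the frame count - a toy version (purely illustrative, not Kodak's actual pipeline):

    ```python
    import numpy as np

    def integrate_captures(frames):
        """Average several identically framed and exposed captures.
        Random noise drops by roughly sqrt(len(frames)); fixed pattern
        noise and subject motion are not helped at all."""
        stack = np.stack([f.astype(np.float64) for f in frames])
        return stack.mean(axis=0)
    ```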

  • Oops, sorry, double post…
  • No, you just cap it ;-)

    From Wiki:
    Shannon's version of the theorem states:
    If a function x(t) contains no frequencies higher than B hertz, it is completely determined by giving its ordinates at a series of points spaced 1/(2B) seconds apart.

    So, it's about frequencies, but you can generalize the problem to any kind of sampling, that's true. Even the Heisenberg observer problem could be seen as a similar issue…

    Regarding noise: fixed pattern noise is one component of noise, but it's a simplification if one divides it into components. Actually, it's a continuum:
    Noise comes from tiny differences in the sensitivity of the photocells. One part of it is introduced in production and can be measured and compensated.
    Another part changes very slowly with heating of the sensor, and can change with exposure time or aging too. This could be compensated by calibrating from time to time, like RED or the F3 do. Even some scanners do this occasionally, asking you to keep the lid shut for a minute or so.
    A (theoretical) GHX could do this, even automatically, with the shutter closed when you switch it on (or the cap left on): do a calibration and then come to life. Might be annoying if you need it NOW.
    Another, relatively small fraction of the noise changes faster than this but slower than frame-to-frame. This could be compensated with a fast shutter, but it won't help much. The largest part of the noise is fully random from frame to frame and can't be reduced other than with complex temporal filters like Neat Video – far from realtime on an ordinary computer; only the very expensive ARRI Relativity can do this.
    BTW, it's the preservation of this fully random type of noise that makes footage look so much better with our GOP1 patches: since it changes so fast, it looks more organic. Longer GOPs tend to freeze this random noise over time. But even a much faster camera can't reduce this type of noise by comparing it to a black frame.
  • @nomad

    I was under the impression that Nyquist/Shannon applied to any kind of analog sampling in any domain. At least that's what it means from an information theory standpoint. I guess conventional use of the term has been restricted - too bad, that just means things are made more complicated for no good reason.

    Wouldn't the black frame require the use of a shutter even if not all the time? I doubt the GH2 can fire it fast enough to not end up interrupting video. I assume the Red uses a shutter?

    Chris
  • Very interesting discussion indeed! Near RT too ;-)

    You are right about the black frame to some degree. The fixed pattern noise can change with exposure time (similar to the Schwarzschild effect in analog photography) and shift with temperature. But with a camera that is not overheating massively (like the 7D does), our GH2 would be fine if it just did such a calibration between shots.

    When shooting RED I only need a new black shading if the exterior temperature changes massively or if I go to very long exposure times (and back). The RED has more mass to slow down changes in temperature, but even the GH2 wouldn't need it on every frame.

    The other part of the noise changes all the time; it's the dance of the molecules and only comes to a stop at zero (Kelvin!). Unfortunately, cooling the sensor down to that value makes most of its physics change massively ;-)
  • @VK

    Clever way to get around the 8-bit issue. Double the bitrate and increase usable DR by at least 2 stops (theoretically, even more) - pretty nice!

    Chris
  • Right, A/D conversion is even 16 bit in high-end cameras. Downsampling to 8 bit is about as bad as digitizing in 8 bit, though. The only advantage: image processing can be done before downsampling, that's what the picture profiles in our cameras do.
    So, if we can hack the picture profiles, we can adapt better to a specific scene and to the intended look before the encoder is downsampling to 8 bit. But we will never get the freedom to record the full DR and decide about the look in post. That can only be done in RAW, like REDCode or ARRIraw.

    And regarding Shannon/Nyquist limit: yes, the principle is the same, but nobody is calling it by that name, most of the literature will say quantization error. It's a kind of aliasing too, that's for sure.
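    A quick way to see why processing before the 8-bit step matters - purely illustrative, using a plain 2.2 gamma as a stand-in for a picture profile:

    ```python
    import numpy as np

    # toy 12-bit linear ramp, normalised to 0..1
    linear = np.linspace(0.0, 1.0, 4096)

    # naive path: quantize the linear signal straight to 8 bit
    lin_coded = np.round(linear * 255) / 255

    # profile path: apply the gamma first, quantize to 8 bit, decode back
    gamma_coded = (np.round(linear ** (1 / 2.2) * 255) / 255) ** 2.2

    # compare reconstruction error in the deep shadows (bottom ~5%)
    shadows = slice(1, 205)
    print("straight 8-bit shadow error:  ", np.abs(lin_coded - linear)[shadows].mean())
    print("gamma-then-8-bit shadow error:", np.abs(gamma_coded - linear)[shadows].mean())
    ```

    The gamma-coded path keeps much finer steps in the shadows, which is exactly what the in-camera profiles exploit before the encoder throws the extra precision away.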
  • The black frame trick I was alluding to is used to subtract noise that results from the sensor heating up over time - as with long shutter speeds or constant capturing. It has to be done separately for each shot, in close temporal proximity, and with the same capture time as the image. I can see how one could do this as long as shutter speeds are faster than 2x the frame rate. Or maybe they can do it faster?

    I doubt the GH2 can do this because it requires using a shutter during capture - maybe the GH3 will?

    Chris
  • @nomad

    You're right about the pixel count, of course - I was mistaken (stuck in the old 3 sensor days). Thanks for the correction.

    The Nyquist limit does apply because the A/D converters are 12-bit (I believe), which makes their roundoff error insignificant in this discussion. So we are close to the simple mathematical error introduced when pixel values are mapped into the 8-bit space. From an image-gradation-fidelity standpoint the principle is the same.

    Chris
  • The latter does work with video, though. CMOS has two kinds of noise: one changes statistically over time, the other is just different sensitivity of the photocells, called fixed pattern noise.

    The latter can be measured by covering the sensor, storing the values and calculating the correction in realtime. RED is doing this, and Sony too in the F3. I don't know about the GH2, but the lower-third stripe we see at high ISOs might be the result of insufficiently corrected fixed pattern noise.
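    In principle the correction itself is simple - a hedged numpy sketch, assuming you can grab a few covered-sensor frames and that only the offset part of the fixed pattern is corrected (real cameras also handle per-pixel gain):

    ```python
    import numpy as np

    def calibrate_black(dark_frames):
        """Average several covered-sensor captures to estimate the
        per-pixel offset component of the fixed pattern noise."""
        return np.stack(dark_frames).astype(np.float64).mean(axis=0)

    def correct_frame(raw, black_reference):
        """Subtract the stored black reference from a live frame.
        Only the fixed part of the noise is removed; the random,
        frame-to-frame part is untouched."""
        return np.clip(raw.astype(np.float64) - black_reference, 0.0, None)
    ```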
  • One way cameras get better DR is to use tricks like combining multiple image captures when slow shutter speeds are used (this can be very effective and can greatly raise DR). This is not HDR photography because the images are all captured the same way - rather, the images are combined to effectively counter the effects of noise. Another trick is to capture an image and then a black frame (with the shutter closed) - the black frame is used to subtract sensor noise. Unfortunately, none of this works with video.

    Chris
  • Great contribution, Chris! Please allow me two corrections:

    Any Bayer-pattern sensor will need more than 2 megapixels to resolve HDTV – about 3.2 minimum, like Alexa, F3 or FS100.

    The representation of stops in bits has not much to do with the Shannon/Nyquist limit, but with the quantization error in A/D conversion (or later reduction). It's a similar problem, but Shannon/Nyquist deals with frequencies, as in audio sampling or image resolution.

    BTW, your explanation also covers why the Canon C300 will have a major problem: it seems to be very low in noise and consequently high in DR, but how to squeeze that into 8 bits?

    And it explains why we have more noise in the GH2 than in FS100: smaller wells.
  • @qwerty123

    I've been away for a while so I haven't had time to work on film modes. They are extremely complicated and elusive, so I haven't been able to make much progress on that front. There is an unfortunate trade-off with film modes: if DR is increased, posterization will result because of the 8 bits of data per channel.

    As for the 9.5 stops: I suspect that for the GH2 that's pretty much the practical limit - again because of the 8 bits per channel. JPEG and H.264 images will never be able to handle the full raw range and still look good in most cases. Even my big Nikon cameras limit DR to 9 or so stops with JPEG in most cases. Raw, of course, has much higher DR; but raw is 14 bits per channel.

    Chris
  • You could have a codec with a 2-bit depth. Set the thresholds at: 00 = -7EV, 01 = -2EV, 10 = +2EV, and 11 = +7EV. Now, according to specifications you would have 14 stops of DR. It's meaningless, though, because 14 stops aren't actually rendered - but technically, it's still 14 stops of DR. Also, the lowest value (00) will be swamped with noise - so really you end up with only 3 usable levels at best.

    Gamma curves are used to make the mapping of DR into 8 bits as pleasant visually as possible. Use a flat curve, and things get really ugly in 8-bit space. Gamma 2.2 is pretty close to ideal from a perception standpoint, so other curves will usually look worse in the end.

    Chris
  • @danyyyel

    The problem with many of these tests is that signal to noise ratios are often not considered, and that results in being able to claim better DR. True, the darkest pixel vs the brightest one might be a wide spread, but if you look at details you will notice that low level details are swamped with noise. If you consider S/N ratios the D7000 comes to about 9-10 stops of usable dynamic range. Even Nikon admits that the D3x cannot resolve low level detail as well as the D3/D700 - but you have to save in 14 bit lossless mode (which is not the camera's default) to see the difference. Canon, for example, actually lowered the pixel count in the G11 because of how bad the low level detail looked with the G10 (which had a higher pixel count). My hat's off to them for that because at long last they listened to their engineers instead of simply going for a higher pixel count (as marketing people would suggest).

    It's not strictly the pixel density that needs to be considered - it's the pixel well size that actually matters. One advancement has been to connect sensor elements more efficiently (back-illuminated sensors) so that a higher percentage of the sensor space is actually sensor wells - resulting in bigger individual wells, and that helps. With a camera that is strictly for video, the pixel count can be much lower (2 megapixels for HD 1080) - also resulting in bigger wells.

    APS-C is a bigger sensor than 4/3 (1.5x vs 2x), so one would expect better performance.

    I realize that incremental advancements have been made. However, there is a limit to how effectively low level detail can theoretically be captured simply based on the physics of how many photons can be gathered within a given time within a certain space.

    By the way, virtually nobody supports these high DR values with JPEG because quantization gets ugly. With JPEG, DR is typically degraded 2-5 stops. H.264 is similar to JPEG when it comes to compression, so video performance is also limited by the 8 bits per channel. 8 bits can only completely correctly represent 7 stops of data (because of the Nyquist limit) - if a higher DR is rendered, gradation steps will be skipped because the total range still has to be mapped into the 8 bits' worth of discrete steps. This is why S/N ratio is so important - if the dark detail has a lot of noise then you are effectively stuck with 6-7 bits of usable data per channel. Raise ISO too high and you lose another stop or two. It can get pretty bad.

    Of course all this can be obfuscated depending on how you develop specifications - which would explain why some cameras look better than others even with "lower" specs.
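    One way to see the gradation point above: if the 8 bits were used linearly (no gamma at all), the number of code values available per stop halves as you go down - purely illustrative:

    ```python
    # how many of the 256 linear code values land in each stop below clipping
    for stop in range(8):
        top = 255 / 2 ** stop           # top of this stop
        bottom = 255 / 2 ** (stop + 1)  # bottom of this stop
        print(f"stop -{stop + 1}: about {int(top) - int(bottom)} code values")
    # prints roughly 128, 64, 32, 16, 8, 4, 2, 1
    ```

    Gamma curves redistribute those codes toward the shadows, which is why the curve choice matters so much in 8 bit.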

    Chris
  • @cbrandin I am no expert at the level of those who wrote the clarkvision article, but I think their reasoning is completely false. Look at DxOMark (and there is another site as well): the highest DR in the photo camera world (RAW format) belongs to the APS-C Pentax K5, with the D7000 just a little below it, followed by the D3x. The first two are around 14 stops, that is 2 stops more than the full-frame D700. The ARRI Alexa, which has an S35/APS-C size sensor, is widely credited with 14.5 stops.

    Sony has been investing a lot in the DR of its sensors; you can see a clear improvement with each generation - in the photo world with the Nikon D7000 and Pentax K5, and in the video world with the S-Log F3 and the soon-to-come F65.

  • @qwerty123 there has been renewed discussion on P-V regarding issues with more than 9.5 stops - notably the issue present in the C300 (using 8 bits to describe 12-14 stops) - so until it has been concluded that 8 bits can actually safely contain that many stops, this is not really going to be a game changer.