420, 422, 444, rescaling and colors flame
  • @Ze_Cahue, I am guessing you don't record in 4K 8-bit and downscale to 1080p with your current equipment/workflow?

    You will most certainly see problems when working with 8-bit, and keep in mind that you are likely also working with either uncompressed or much less compressed material from your external recorder than the internal codec.

  • OK, I must be a confused person, and my computer must be crazy, because when I grade material from the good old EX1 I get much more DR to push in post if it was recorded externally in 10-bit instead of internally in 8-bit. I guess it's pure psycho from my mind : )

  • You could have a very high DR camera that only shoots 8 bit and a very low DR camera that shoots...heck....14 bit.

    It all depends :-) If you are talking about raw for high DR, you need a good sensor and a good low-noise ADC (say, a 12-bit or 14-bit one). After you go from (almost) linear raw to a nonlinear space, you can use fewer bits (by exploiting properties of human vision). Hence, yep, you can compress an originally high-DR source into 8-bit. Photo guys even make extreme HDR images that look very weird.
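The "fewer bits after a nonlinear transform" point can be sketched in a few lines. This is a minimal illustration, not any camera's actual transfer curve; the 1/2.4 exponent and the sample values are assumptions chosen only to show the shape of the idea.

```python
# Illustrative sketch (not a real camera curve): a power-law (gamma)
# transfer function spends more of the 8-bit codes on the shadows, where
# vision is most sensitive, so a wide linear range survives 8 bits.
def encode_8bit(linear, gamma=1 / 2.4):
    """Map a linear light value in [0, 1] to an 8-bit code."""
    return round(255 * linear ** gamma)

# One "stop" in the shadows still gets a clear spread of codes...
print(encode_8bit(0.01), encode_8bit(0.02))  # 37 50
# ...while the same linear step near white collapses to adjacent codes.
print(encode_8bit(0.98), encode_8bit(0.99))  # 253 254
```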

  • Wow @Ze_Cahue, you are certainly mixing apples with oranges. One has nothing directly to do with the other; they are two totally separate topics. @tosvus said it correctly: you could have a very high DR camera that only shoots 8-bit and a very low DR camera that shoots...heck....14-bit. Yes, HOW you shoot your scene helps big time. But whether 10-bit or 8-bit, you can't produce information in the highs or lows that was not there to begin with. That's not what people here are talking about. They are talking about the results of downscaling a HUGE 4K image into a 1920x1080 Full HD image.

  • with all your logic, please let me know how you can extract 10-bit dynamic range from an 8-bit source? I mean "push" dynamic range in post, I mean get juice from the clipped highlights and the black darkness

    Please understand, I can't fix your problems with school math. And this is basic, simple school math. Nothing to do with highlights and darkness :-)

    Please just read the link with the buckets analogy, it has ALL you need (except a working frontal cortex).

    8bit 4k downscaled to 1080 yields a 444 image with "virtual" (for lack of a better term) 10 bit? This virtual 10 bit is good but can be expected to be noisier than native 10 bit?

    Nothing is virtual, it is all real.

  • @brianl, the exact result will depend on the algorithm Panasonic uses when converting 10-bit to 8-bit. An ideal implementation would actually make sure the 4 pixels combined contain the actual bit representation of a single 1080p 10-bit pixel. I.e. it COULD be true 10-bit to recover. However, I'm not sure Panasonic chose to spend the processing power..

  • So tell me if I understand this right. 8bit 4k downscaled to 1080 yields a 444 image with "virtual" (for lack of a better term) 10 bit? This virtual 10 bit is good but can be expected to be noisier than native 10 bit?

  • @ze-cahue, Don't confuse bit depth with dynamic range. They are separate. You can have a camera with 8-bit and high dynamic range, or a 10-bit camera with low dynamic range. Of course, ideally you would get both.

  • @Vitaliy_Kiselev, with all your logic, please let me know how you can extract 10-bit dynamic range from an 8-bit source? I mean "push" dynamic range in post, I mean get juice from the clipped highlights and the black darkness. Please don't tell me to "light the scene properly" or "get the right exposure"; we know that even the right exposure has its limits, and some light is not 100% controlled (the sun, for example). Please allow me to understand how these extreme dark "blacks" or clipped "whites" can be pushed if they have only 8-bit depth. Where will this retrieved data come from, if the nearby pixels in the 4K source are also 8-bit darkness or 8-bit clipped white? With 10-bit data I can retrieve some over- or under-exposed parts because of the elastic dynamic range of 10-bit. Let me really know how you can do that from 8-bit. I have already read all your recommended links and found nothing about it, so please let me know how to do it.

  • @Mistas Yes, that's a good way of thinking about it. Just keep in mind that the 4 approximated pixels don't necessarily equate to a "precise" pixel in terms of the color they hit. In practice the actual number of bits will vary somewhere between 8 and 10, depending on the source material, but experts seem to suggest it is closer to 10 than 8.

  • I just wanted to point out that it will be an inferior 10-bit compared to one derived from an actual 10-bit source.

    If you had a 4K 10-bit source in an AVC file, it could be cool. But most of these new bits will be quite noisy. Yes, with an ideal noiseless sensor it is a good advantage; not so much with 4K on m43 sensors.

  • I think we're on the same page, hehe :) I just wanted to point out that it will be an inferior 10-bit compared to one derived from an actual 10-bit source, the same way a Bayer pattern yields inferior color compared to real RGB stripes. Thus the 10-bit 4:4:4 from downscaled 4K isn't necessarily better than "real" 1080p 10-bit 4:2:2, as people seem to imply in relation to the GH4.

  • If the 4k 8-bit image is properly dithered, the downscaling will lead to a smooth gradation equivalent of 10-bit.

    Logic :-) It is used in exactly the reverse situation, where you want to use limited values to represent something close to the original. One of the extreme cases is representing a grayscale image using only b/w :-)

    If you look at this in reverse - converting a high-res dithered b/w image into grayscale (and read my link; yes, the link does not say anything about colors, but it explains why DR is bigger after rescaling) - you'll understand where the additional bits come from.
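That reverse direction fits in a few lines. The 4x4 dither pattern below is made up purely for illustration; the point is only that averaging 1-bit samples yields gray levels the 1-bit format itself cannot hold.

```python
# Sketch of the reverse analogy: a 4x4 patch of a dithered 1-bit
# (black/white) image, averaged down to one pixel, produces a gray level
# that no single 1-bit sample can represent. The pattern is made up.
bw_patch = [
    [1, 0, 1, 0],
    [0, 1, 0, 0],
    [1, 0, 0, 1],
    [0, 0, 1, 0],
]
gray = sum(sum(row) for row in bw_patch) / 16  # average of 16 one-bit samples
print(gray)  # 0.375 -- a gray level conjured out of pure b/w input
```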

  • I'm sorry, I can't seem to find anything about bits in that link?

    Dithering in the same way that a printer dithers to make different shades of gray on a piece of paper. If the 4K 8-bit image is properly dithered, the downscaling will lead to a smooth gradation equivalent of 10-bit. However, since dithering can't capture the exact gradations of a source image (the raw image before data is thrown away), the resulting 10-bit gradation will be mathematical guesswork - similar to the way Red MX guesses at colors based on a Bayer pattern.
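A small sketch of why dithering matters here. The fixed 4-level offset pattern stands in for a real dither, and all numbers (on the 0-255 8-bit scale) are illustrative.

```python
# Sketch: a source level of 249.75 quantized WITHOUT dither gives four
# identical 8-bit pixels, so averaging them recovers nothing. Quantized
# WITH a fixed ordered-dither pattern, the four pixels disagree, and
# their average lands back on the true sub-code value.
source = 249.75

undithered = [round(source)] * 4                  # [250, 250, 250, 250]
dither_offsets = [-0.375, -0.125, 0.125, 0.375]   # fixed 4-level pattern
dithered = [round(source + d) for d in dither_offsets]

print(sum(undithered) / 4)  # 250.0  -- the fractional part is gone
print(sum(dithered) / 4)    # 249.75 -- recovered by averaging
```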

  • For 4k to be downsampled to something equivalent to 10-bit, it will need extensive dithering.

    What kind of dithering? Did you read the link above to understand where the 10 bits can come from?

  • For 4K 8-bit to be downsampled to something equivalent to 10-bit, it will need extensive dithering. This 10-bit will be a mathematical approximation, a bit like how Bayer sampling works. It will, in other words, be inferior in color and luma accuracy to "real 10-bit" from acquisition.

  • Let's see if I understand this correctly... for argument's sake, let's use 0-250 as the 8-bit pixel data range, and 0-1000 for 10-bit.

    In a 2K 8-bit file, if a color is captured with an actual 10-bit value of 999, it has to be converted to 250 in the output file, so some info is lost, as the stored value cannot be 249.75.

    But if the 4 nearby pixels in 4K are 250, 250, 250, 249 in 8-bit, these can be downsampled by adding the totals, which gives us 999 in a 10-bit pixel - a value otherwise not attainable in 8-bit.

    Is this reasoning correct?
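The arithmetic in that reasoning can be written down directly (using the post's rounded 0-250 / 0-1000 scales rather than the true 0-255 / 0-1023 ranges):

```python
# Sketch of the block arithmetic, on the post's rounded scales
# (0-250 for "8-bit", 0-1000 for "10-bit"). Four neighboring 8-bit
# pixels, summed, can land on values no single 8-bit pixel can carry.
block = [250, 250, 250, 249]   # a 2x2 block like the one in the example
summed = sum(block)            # behaves like one 10-bit value
print(summed)       # 999 -- between the 8-bit codes 249*4 and 250*4
print(summed / 4)   # 249.75 -- the sub-code level the block encodes
```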

  • Scaling a 4K image to 1080 will improve colour rendition to a close approximation of 4:4:4 colour sampling, but 8-bit is 8-bit and stays 8-bit. Bit depth is a measure of how many luminance levels there are between the black and white limits. Colour sampling is the resolution in the spatial plane, i.e. the X/Y axes. So we will see an improvement in colour resolution but not bit depth. Banding will still be an issue.

  • @mrbill if you reduce an image in Photoshop you are essentially doing this, as long as you don't use the "nearest neighbor" interpolation method. But even in Photoshop there are different interpolation methods, so I'm wondering if the algorithm to perform such reduction is not unique, and some are better than others (?)

  • This is not about compression - I mean it is, but not in the sense that the software doing the up-sampling (or whatever the proper term would be) must create missing information out of nothing; it just rearranges the existing color information into a new pattern with more color resolution. So the up-sampling uses the information that would be thrown away in a simple downscale (I am oversimplifying, I know), and adds +2 and +4 to Cb and Cr to make 4:4:4. Luma is 4 anyway.

    Am I even close to the logic behind this process?
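The chroma-rearrangement logic can be sketched just by comparing sample-grid sizes; UHD 3840x2160 as the "4K" frame is an assumption here.

```python
# Sketch of the chroma argument: 4K 4:2:0 stores one Cb/Cr sample per
# 2x2 block of luma samples. After a 2x downscale of luma in each
# direction, the output pixel grid matches the original chroma grid,
# so every output pixel gets its own chroma sample -- effectively 4:4:4.
luma_4k = (3840, 2160)                          # Y samples in UHD
chroma_4k = (luma_4k[0] // 2, luma_4k[1] // 2)  # Cb/Cr samples: (1920, 1080)
luma_1080 = (luma_4k[0] // 2, luma_4k[1] // 2)  # Y samples after downscale

print(chroma_4k)               # (1920, 1080)
print(chroma_4k == luma_1080)  # True: one chroma sample per output pixel
```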

  • Is anyone aware of any software that performs a 'supersampling' (to use Avid's words) function that will allow transcoding from 4K to 1080p with all the above benefits? I don't think Avid's proprietary 'frameflex' will do the job, somehow.

  • This was discussed a few years ago. The extract below is from http://www.ambarella.com/docs/1080p60.pdf (it's no longer online and I can't find a copy):

    "This creates the opportunity to use 1080p 60 4:2:0 60p at 8bits-per-sample as a unifying format for both contribution and distribution. This is the case because down conversion of 1080p60 4:2:0 at 8 bits-per-sample can deliver 4:2:2 with increased dynamic range (almost 10 bits) at both 1080i and 720p resolutions"

  • "4K420toHD444" is the new "micro43crop/dof/lens equivalent"

  • all analog signal is measured by wave. Can be square, but wave. The value of the pixel well is not continuous, giving a dynamic measure in time, interpreted by a wave in the ADC.

    It is called a flame topic, not a bullshit topic, if you noticed :-)

    The thing that you just said is very incorrect, as you do not even understand what you are talking about.

    And this

    all analog signal is measured by wave.

    Must be added to famous citations :-)

    As I said, please READ SOMETHING and THINK before posting.