Frame Rate Downscaling / Frame Averaging
  • Anyway, the basic idea is the same as image stacking: you stack 2 or more images. One of the advantages is that if you do the stacking in a high bit depth space, the bit depth of the result goes up; 8-bit images gain bit depth when the stack is calculated in a higher bit depth space.

    I commented offhand that if you're desperate enough, you can get higher bit depth by merging frames together if you have access to 50p, 60p, etc. This also works if you downscale a higher resolution to a lower one within a higher bit depth space, so for all you people downscaling footage: if you can do that in a 16-bit space and save to a high bit depth format, you'll have something to gain. I didn't speak to practicality, only that it can be done.

    When I wrote my program to count unique colours in very-high-resolution 16-bit TIFF images, I didn't see any other software that could do it, because a naive lookup array would require tremendous memory. The way I managed it was a simpler method that trades processing power for memory: sort the pixels in an array from smallest colour value to largest, then walk the array and simply don't add +1 to the unique colour count when the same colour appears more than once in a row.
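
    A minimal Python sketch of that sorting approach, assuming NumPy and the tifffile reader (the author's actual program isn't shown, so the details here are illustrative):

        import numpy as np
        import tifffile  # assumed reader; any 16-bit TIFF loader would do

        def count_unique_colours(path):
            # Load an H x W x 3 array of 16-bit samples.
            img = tifffile.imread(path).astype(np.uint64)
            # Pack R, G, B into one integer per pixel so each colour sorts as a single value.
            packed = (img[..., 0] << 32) | (img[..., 1] << 16) | img[..., 2]
            flat = np.sort(packed.ravel())  # trade CPU time for memory, as described above
            # A colour adds to the count only where it differs from its sorted neighbour.
            return 1 + int(np.count_nonzero(flat[1:] != flat[:-1]))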

    I had other reasons for doing this; it's part of my experiments with image stacking, and I've done it with various image sources: scans, JPEGs from shitty cameras with no raw, raw files, JPEGs from better cameras, and snapshots of two frames of video.

    I've been told it "can't be done" many times, because the information from one frame to the next isn't the same. Well no, that's exactly why it works; even in the worst-case scenario of a tripod-locked shot of a still life with nothing moving, it works.

    But if you're recording 50p, blending every 2 frames together simply turns the footage into 25p. The average of the two frames still contains the same amount of motion blur/movement as recording 25p at the same shutter angle, though it may be split into 2 sections across the frame instead of 1 continuous stretch of motion.
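
    A minimal sketch of that pairwise blend in Python, assuming the frames arrive as 8-bit NumPy arrays (the names are illustrative):

        import numpy as np

        def average_pairs(frames):
            # Blend every 2 consecutive 8-bit frames (e.g. 50p -> 25p) in 16-bit space.
            out = []
            for a, b in zip(frames[0::2], frames[1::2]):
                # Promote 8-bit (0..255) to 16-bit (0..65535) before averaging so the
                # in-between half-steps survive instead of being rounded away.
                a16 = a.astype(np.uint32) * 257
                b16 = b.astype(np.uint32) * 257
                out.append(((a16 + b16) // 2).astype(np.uint16))
            return out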

    Generally that shouldn't be an issue, as 180 degrees is accepted as a good general-purpose shutter angle, which leaves a gap between the movement/motion blur anyway; and if you wanted no gap, you could shoot at 360 degrees.

    Now to the screenshot. The R, G and B bit values represent how much of the scale Red uses on its own (regardless of what green and blue values it's attached to; the same goes for Green and Blue). A value of 6 bits would mean the image only has 64 differing red values in total. I made this to identify problems with some images.

    Unique Colours is the total number of colours. An 8-bit image has a maximum value of 24 bits here, but that would be very hard to achieve: the image would have to contain every combination of colour, hue, intensity and saturation, from bland to neon, and the resolution would need to be 16.7 million pixels or more.

    The Unique Colour Factor: a 1920x1080 image has 2,073,600 pixels in the frame, so a value of 50% would mean it has 1,036,800 unique colours. This percentage can be used as a measure of colour quality and separation.
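
    For concreteness, a hedged Python sketch of how those three readouts could be computed (a plain set-based count is used for brevity instead of the sort trick above; the function name is illustrative):

        import numpy as np

        def frame_metrics(img):
            # img: H x W x 3 array, uint8 or uint16.
            h, w, _ = img.shape
            # Per-channel bit value: how many bits the channel's distinct levels span.
            bits = [int(np.ceil(np.log2(len(np.unique(img[..., c]))))) for c in range(3)]
            # Unique colours: distinct (R, G, B) triplets in the frame.
            unique = len(np.unique(img.reshape(-1, 3), axis=0))
            # Unique Colour Factor: unique colours as a percentage of pixel count.
            factor = 100.0 * unique / (h * w)
            return bits, unique, factor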

    The first example is a single frame grab from some A7S (720p, 120fps) video I downloaded off the net (I was going to use my GH2, but I left it at work this afternoon). It's placed into Photoshop, converted to 16-bit, and saved as a TIFF (partly to rule out the conversion to 16-bit itself as the source of the rise in counted bit depth, and also because the program I wrote only reads 16-bit TIFFs at the time of writing).

    The second example is the first 2 frames in 16-bit mode in Photoshop, blended with the top layer's opacity at 50%, flattened, and saved as a TIFF.

    The fourth is the same but with 4 frames.

    As you can see, the more you stack, the more in-between values with higher precision than 8-bit arise; roughly 9x the unique colours here.

    It's only logical: even if the video (or a pair of photos) is a tripod shot of a still life, pixel values will vary between the two frames, especially since 8-bit has high quantisation error compared to higher bit depths, plus variable noise, micron-scale image movement, and so on. So if you have a pixel that's 241 in one frame and 240 in the next, the in-between value cannot be represented in 8 bits.
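
    The arithmetic of that example, worked through in Python (the 257 scaling to map 8-bit onto 16-bit is an assumed common convention, since 255 * 257 = 65535):

        a, b = 241, 240
        avg_8bit = (a + b) // 2                # 240: the .5 step is lost in 8-bit
        avg_16bit = (a * 257 + b * 257) // 2   # 61808: ~240.5 on the 8-bit scale, preserved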

    Unfortunately I'm working tomorrow, but that means I can get my camera back before Monday, so I can probably shoot some footage with the lowest contrast settings, see if I can induce banding in something, and post an actual video sample.

    [Attachment: count.jpg]
  • Here is a plan for you:

    1. Make the source images B&W. It'll get rid of the complete mess in your head related to colors.
    2. Understand that the BEST (not the WORST!) case for you is a static scene on a tripod.
    3. Understand that if you have two 8-bit values and add them, producing a 9-bit result (if they are large enough), there is no need to parade around a flag saying you used a 16-bit intermediary buffer (see the sketch after this list).
    4. Understand that you do not need to calculate "unique colors" or "unique shades", as it is a pointless thing. The whole thing is a basic school exercise.
    5. Understand that most modern cameras have a more advanced version of this (it also involves feature matching and frame alignment) working for stills noise reduction.
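
    As a quick sketch of point 3: the sum of two 8-bit values always fits in 9 bits, so a 16-bit intermediary buffer is nothing special.

        a, b = 255, 255       # worst case for two 8-bit values
        total = a + b         # 510, and 2**9 - 1 = 511, so 9 bits always suffice
        assert total <= 2**9 - 1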
  • If you can shoot an image of a blue sky with 8-bit banding issues and process it this way to remove the banding, then I may agree with this flawed concept. Until you can upload images proving that it works, I remain skeptical. Vitaliy is correct in his reply.

  • 8-bit cameras that overcrank usually do so at the cost of compression artifacts and resolution, so I question the usefulness of eking out an extra bit by shooting in those modes. Some pictures showing the benefit would be great in any case.

  • You seem to be stuck on the simple arithmetic of 1+1; that is the actual school exercise, and it does not work like that. My program is not doing the frame averaging, Photoshop is, and After Effects works similarly to Photoshop when you frame average. You get more than simple arithmetic because it isn't a plain (126+127)/2 = 126.5, or 32384 in 16-bit space, etc. That isn't what happens; the averaging going on is a lot more complex than that.

    A moving camera and moving subjects are fine to blend together; this is a very simple concept. You will get blur, yes, but that blur is supposed to be there. We are talking about going from, say, 50p to 25p; it is not a photo where you need to align and register frames for stacking.

    There is no mess in my head related to colours.

    For the same reason, it is also better to desaturate images in a high bit depth space than to desaturate in 8-bit, or to shoot B&W in the first place with 8-bit footage.

    In regards to B&W: these attachments have been heavily lifted with levels; they are deep shadows from GH2 1080p footage.

    If I take the original 8-bit footage and desaturate in an 8-bit space, I get 256 tones: 8 bits. Attachment 1.

    If I take the original 8-bit footage and desaturate in a 16-bit space, I get 639 tones: ~9.3 bits. Attachment 2. The sample looks like it has about twice the number of tones; twice would be 9 bits, and the sample counts at ~9.3 bits, so that checks out. This already offers a mild improvement simply from doing the black-and-white conversion in a high bit depth space.

    If I take the original 8-bit footage, merge 2 adjacent frames in a 16-bit space, and desaturate, I get 22,984 tones: ~14.5 bits. Attachment 3. That is the legitimate number of tones in the image. It may not be as accurate/good as a camera originally recording at 14.5 bits, or 12 bits for that matter, but the improvement is well beyond what 9 bits can deliver. Yes, I realise I'm going from ~24p to ~12p here, but the argument is the same:

    IF you have a high frame rate camera, you can use that frame rate to compensate for the lack of tones.
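
    A minimal Python sketch of those three measurements, assuming two consecutive 8-bit frames as NumPy arrays and a plain channel average for the desaturation (Photoshop weights the channels differently, so exact tone counts will differ):

        import numpy as np

        def tone_count(gray):
            return len(np.unique(gray))

        def demo(f0, f1):
            # f0, f1: H x W x 3 uint8 frames.
            # 1) Desaturate in 8-bit: in-between values are rounded away.
            g8 = f0.astype(np.float64).mean(axis=2).astype(np.uint8)
            # 2) Desaturate in 16-bit: channel averages land between 8-bit steps.
            g16 = (f0.astype(np.float64).mean(axis=2) * 257).astype(np.uint16)
            # 3) Merge the 2 adjacent frames in 16-bit, then desaturate.
            merged = (f0.astype(np.float64) + f1.astype(np.float64)) / 2
            gm = (merged.mean(axis=2) * 257).astype(np.uint16)
            return tone_count(g8), tone_count(g16), tone_count(gm)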

    @caveport, it's night here now, can't right now.

    [Attachments: c1.jpg, c2.jpg, c3.jpg]
  • @Athiril

    Get a good book on the algorithms used in image processing, and a good library, and understand how they work.

    For now this is not a serious discussion, because most of what you are telling is simply inaccurate; you do not understand the algorithms. I am sure you say it only because you slightly lack knowledge in this field, and you will fix the wires in your head as you start digging deeper.