Pro: AVCHD Quantization process
  • Great work, Chris. I'm glad someone is approaching this GH2 testing methodically.

    Forgive a newbie question, and don't let me distract you from your work, but what exactly is being quantized in these tests?

    A quick survey of the H.264 literature via Google suggests to me that it is the pixel data in the frequency domain. (This may not be the correct terminology; I'm trying to translate from my understanding of FFT in audio DSP work.)

    [EDIT] - I suppose you're varying this parameter (described in http://www.pixeltools.com/rate_control_paper.html):

    "In particular, the quantization parameter QP regulates how much spatial detail is saved. When QP is very small, almost all that detail is retained. As QP is increased, some of that detail is aggregated so that the bit rate drops – but at the price of some increase in distortion and some loss of quality."

    Anyway good luck with your interesting tests!

    -bruno


  • I have been testing lowering the minimum quantization value. This is not incorporated into PTool yet; Vitaliy made a special patch for me to test.

    I tested three scenarios (all in the 24H mode): Panasonic factory settings, Vitaliy's suggested 42000000 setting for 24H, and finally Vitaliy's 42000000 setting plus the minimum quantization value lowered from the default value of 20 down to 10. The test chart was an image containing gray, red, green, blue, cyan, magenta, and yellow. Each stripe was a gradient going from full saturation to white.

    Chart:
    [image: Gradation Test Chart.jpg]

    Vector Scope of factory 24H settings:
    [image: Factory-20QMin-24H.jpg]

    Vector Scope of PTool suggested 42000000 for 24H settings:
    [image: 42M-20QMin-24H.jpg]

    Vector Scope of PTool suggested 42000000 plus quantization minimum set to 10 for 24H settings:
    [image: 42M-10QMin-24H.jpg]

    It appears that a lower minimum quantization value does improve color fidelity (which means less blocking). Notice that the yellow axis shows a significant color artifact in all but the quantization=10 capture.

    These are just the static tests; I plan to do some dynamic testing later.

    [Update] The quantization minimums don't seem to be changing as much as I expected. Maybe they won't with static scenes. So, at this point I'm still analyzing exactly why the color fidelity improves with this patch.

    Chris
  • I haven't analyzed the other modes yet; I'll get on it.

    Chris
  • The problem is that StreamParser doesn't have any video stream analysis in it (it's all transport stream based), and writing all that is a very big deal. StreamEye does have that capability, and for $50. I wrote StreamParser to do what other programs couldn't; I really don't want to replicate Elecard's efforts, especially since they have dropped the price to $50. We are talking hundreds of hours of work here - I'd rather do other things.

    Chris
  • Chris, one additional thing.
    We clearly need to make StreamParser able to show how the bitrate is distributed within the AVCHD stream.
  • @cbrandin

    Thanks for the ideas, Chris.
    Did you try 1080i and 1080p at high bitrates and look at the quantization values?
    I don't see how they could stay the same, as the I-frame size also increases.
  • No, quantization happens at the macroblock level (there are about 8,000 of them in a 1080p frame). A lot has to happen when a stream starts, so I think there is a throttle at the beginning. With the GH13 we found that it was important to start slow; otherwise the camera crashed instantly.
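
    As a quick sanity check on that count (a minimal sketch; H.264 macroblocks are 16x16, and the encoder pads the 1080-line height up to 1088):

        import math

        width, height, mb = 1920, 1080, 16
        cols = math.ceil(width / mb)    # 120
        rows = math.ceil(height / mb)   # 68 (1080 padded up to 1088)
        print(cols * rows)              # 8160 macroblocks per frame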
  • I know the bare minimum about this stuff but something I read kind of spurred a thought.

    So it looks like the codec quantizes from the top to the bottom of the frame, and the first second or so of the stream seems to be lower quality/bitrate. Could that mean that the panny version of the codec is written to go through one GOP cycle to build up enough data to estimate proper encoding?
  • I'm guessing lowering the minimum quantization level would help with banding, gradation, etc., as well.

    Chris
  • On that same page you'll see explanations for "keyframe boost" and "b frame reduction". I think these are similar to the "top" and "bottom" parameters in PTool in that they affect the relative sizes of I frames and B frames (and possibly P frames, but I don't know). That would mean you have to be careful about how you set them, maintaining reasonable ratios. Maybe they should be scaled in proportion to the bitrate without changing the ratio between them, as in the sketch below.
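
    Purely illustrative (the factory bitrate and I/P/B weights here are assumptions, not PTool's actual values):

        FACTORY_BITRATE = 24_000_000                      # assumed 24H factory rate
        FACTORY_WEIGHTS = {"I": 6.0, "P": 3.0, "B": 1.0}  # hypothetical ratios

        def scaled_weights(new_bitrate):
            # scale every frame-type target by the same factor, so the
            # I:P:B ratio stays constant while the absolute sizes grow
            k = new_bitrate / FACTORY_BITRATE
            return {ftype: w * k for ftype, w in FACTORY_WEIGHTS.items()}

        print(scaled_weights(42_000_000))   # ratios unchanged, sizes x1.75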

    Chris
  • Look here:

    http://onlinehelp.avs4you.com/Appendix/AVSCodecSettings/H264AdvancedSettings/ratecontrol.aspx

    It seems that quantization limits are programmable with some codecs. Adding a parameter to PTool to adjust the minimum setting seems appropriate. Right now it appears that, in the firmware, the min value is set to 20 and the max value is set to 51 (which goes to the end of the tables).
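
    In other words (a minimal sketch of the idea, not the actual firmware code), the rate controller appears to clamp each macroblock's QP to a fixed range:

        QP_MIN, QP_MAX = 20, 51   # limits apparently hard-coded in the firmware

        def clamp_qp(requested_qp):
            # whatever QP the rate control asks for is forced into range,
            # so extra bitrate can never buy a QP below the floor of 20
            return max(QP_MIN, min(QP_MAX, requested_qp))

        print(clamp_qp(10))   # -> 20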
  • I don't know. Right now I'm trying to figure out where the extra bandwidth is being used. It's going somewhere, because the video files aren't padded with nulls - they actually contain more data.
  • "Wherever the extra bitrate bandwidth is going it's not going into higher resolution macroblocks."

    Does it make sense, then, to exploit more I-frames via the low-GOP strategy?
  • It could be going to other places - like more delta coded macroblocks vs. motion vector coded blocks, for example.
  • Maybe the higher bitrate goes into nirvana and just fills the stream with zeros?
  • I think I found something interesting about quantization. I've been looking at streams with StreamEye to see how macroblocks are being quantized. At the beginning of a stream (during the "blip"), or if low bitrates are used, the quantization values for macroblocks range from 26-51. That's to be expected, as the higher the quantization value, the lower the detail. The interesting thing is what happens once high-bitrate streams settle past the "blip" stage. With Panasonic factory firmware and the 24H cinema mode, all macroblocks are coded with a quantization value of 20. With higher bitrates (I tried up to 42000000) macroblocks are still coded with a quantization value of 20. I've never seen a value below 20 - it's as if that is a hard-coded limit. It seems to me that with higher bitrates we should see lower quantization values. Wherever the extra bitrate bandwidth is going, it's not going into higher resolution macroblocks.
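
    For scale (rough arithmetic only - the macroblock count assumes a padded 1920x1088 frame), the average bit budget per macroblock at the tested bitrate:

        bitrate = 42_000_000      # bps, the tested 24H setting
        fps = 24
        macroblocks = 8160        # 16x16 blocks in a padded 1920x1088 frame

        bits_per_frame = bitrate / fps        # 1,750,000 bits
        print(bits_per_frame / macroblocks)   # ~214 bits per macroblock, on average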

    Chris
  • Vitaliy,

    I sent you some reference codec files via email. Let me know if you get them.

    Chris
  • I meant that there are few references because offsets are used - that can make it more difficult to identify where one table starts and ends. It seems that the tables you are looking at are actually 24 tables each. Or are they i tables of 24 (or 6, if long) entries? Anyway, it means that you have to examine the code to figure the tables out, because there aren't references that separate them. Not a big deal, just more work. It can get complicated if the tables were created from arrays of structures of mixed data types, as in the sketch below.
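
    A hypothetical sketch of that kind of layout (the record size matches the i*24 indexing, but the field offset, width, and endianness are assumptions):

        import struct

        RECORD_SIZE = 24   # matches the i*24 references seen in the code

        def read_field(dump, table_start, i, offset):
            # reference pattern i*24 + offset: read one 32-bit
            # little-endian field from record i of the table
            pos = table_start + i * RECORD_SIZE + offset
            return struct.unpack_from("<I", dump, pos)[0]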
  • @cbrandin
    What do you mean?
    They are normal references, i*24, i*24+4, and similar things.
  • And the references are indirect - right? That makes things fun.
  • @cbrandin
    About the tables - you resolve them by looking at references. The way the compiler works, it is actually extremely rare for such tables to be 52 entries in size.
    The table I referenced is accessed as i*24 + offsets.

    >Is there a patch in PTool for these values (I couldn't find it)? It may be necessary to change these if the GOP length is changed in order to maintain stability, or for some other reason.

    Maybe. I just prefer doing things step by step.
    We already have too many tester patches.
  • Vitaliy,

    "For 1080p24 we have 24 (GOP=12), 60 for 1080i60 (GOP=15), 48 for 1080i50 (GOP=12)."

    Is there a patch in PTool for these values (I couldn't find it)? It may be necessary to change these if the GOP length is changed in order to maintain stability, or for some other reason.

    Chris
  • I had assumed - I suppose incorrectly - that the table was resolved (i.e., the start and end were known). This is just a dump of the memory, right? So the table could be any size - 52 entries, for example?

    Chris
  • Other interesting stuff from http://www.ficam.fr/upload/documents/AVCIntra.pdf

    [image: quant2.png]

    This could also be a table consisting of prediction mode codes.
  • @cbrandin

    I think it is not a 66-entry table.
    It could be the same 52-entry table. Look at the values.