Pro: AVCHD Quantization process
  • 131 Replies
  • We're working away at it. I've been looking at quantization scaling matrices, etc... There's still some more work to do. It will take a while longer. Hopefully, some good stuff comes out at the end...

    I'm pretty familiar with the TI offerings. We've been looking at several codecs as background information. The Panasonic codec is pretty different, so it's a bit of a challenge.

    Chris
  • @Vitaliy_Kiselev How's the progress here going? Any news from Chris? Also, I wondered if there are any chips out there sort of similar to Panasonic's encoder, like the TI stuff here, which could be of help?

    http://processors.wiki.ti.com/index.php/DM36x_H.264_encoder_FAQ
    http://processors.wiki.ti.com/index.php/DM365_Codecs_FAQ
  • We made some progress with Chris discussing all this stuff.
    It also seems we found the bug causing the occasional strange bitrate behaviour.

    Chris will make a few changes and test them, so we'll understand it even better.
  • @driftwood
    ok, thank you!
  • @Butt Make films! Driftwood Productions. I bought two GH2's (instead of going for an AF101) as B cams alongside my EX3s and 5DmkIIs, etc... With this hack, I'm just trying to understand the mechanics to get the best out of the equipment, and I'm sorry if I don't understand the deeper ins and outs of the engine. I just want to learn like everyone else! And help. Gonna donate some more dosh to Vitaliy now. I can't believe the amount of work he has to process. And with the help of some incredible guys around him. Really pleased with what some of you experienced guys have achieved so far. Sometimes I feel he would be better telling us 'drones' what to do to help - I have no problem with that. You pros can then analyse the results. :-)
  • @driftwood

    what do you do?
  • Thanks for removing the comments! An old Chinese proverb says: "Do not despise the snake for having no horns, for who is to say it will not become a dragon?"
  • @VK Ok, I think I get what you need. Incidentally, if you could announce which parts need testing/verifying and organise the testers a bit more, we could cover ground much better and quicker. Even if it means only you and a few others know what the results mean.

    PS Sorry for the misinterpretation but I want to help you get the best out of the GH2 :-)

    So my first test would be this?
    [Attachment: checkbox 1st test.png]
  • @driftwood
    Nope.
    1080p24 Settings High > 1080p24 Settings Low etc.
    You can't go wrong, as all you need to do is select an item from the dropdown list.
    @Vitaliy_Kiselev
    As in

    Test 1080p24 Settings High > Scaling T1
    Test 1080p24 Settings High > Scaling T2
    Test 1080p24 Settings High > Scaling T3
    etc...
    Then..
    Test 1080p24 Settings Low > Scaling T1 etc.. etc..
    ?
  • @driftwood
    You do not understand what you are doing. :-)
    I know that you want to help very much, so I'll try to explain your error.
    Right now all your settings are absolutely identical.
    You need to check the 1080p24 Settings High option and change it to each option (except the original) sequentially, testing each one. Check that you record in 1080p24 24H mode on the camera.
    The camera must be fixed on a tripod, aimed at a well-lit, static scene.
  • @Emul

    Thanks.
    Interesting idea.
  • A short thought:
    The "Encoder Settings" look to me like scaling factors for quantisation tables.
    720p and 1080i/p each seem to be addressed with 4 progression stages,
    to control a kind of 4-step adaptive matrix.

    The easier-to-encode progressive formats start with lower factors for temporally smooth content: 720p (2), 1080i/p (3). The lowest 1080 stage (3) can be shared by both i and p.
    (A temporally smooth picture does not differ much between 1080i and 1080p.)
    The more temporally rippled the content becomes, the more the quantisation factor has to rise,
    harder for i than for p.
    Seeing the (17) for 1080i, it becomes obvious that 1080i is harder to encode than 1080p and would need more bits. In consequence, this has to be compensated by even higher quantisation factors to get the same bitrate results.
    The last factor for 1080i is 17. Why 17?
    Looking for known H.264 restrictions and integer correlations first, I find: 17 × 3 = 51.
    That would correspond to the highest quantisation factor implemented in this encoder. And 2 × 3 = 6 would be the lowest quantisation factor, maybe a comfortable value for temporally smooth 720p content.
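
    If that reading is right, the factor-to-QP mapping is just a multiply by 3. A minimal Python sketch (the factors 2, 3 and 17 are the values quoted above; the ×3 relationship itself is only my guess):

        # Sketch: map a quantisation-table scaling factor to an H.264 QP,
        # assuming the "factor * 3 = QP" relationship guessed above.
        H264_QP_MAX = 51

        def factor_to_qp(factor):
            qp = factor * 3
            if not 0 <= qp <= H264_QP_MAX:
                raise ValueError(f"factor {factor} maps outside QP 0..{H264_QP_MAX}")
            return qp

        for label, factor in [("720p lowest", 2), ("1080i/p lowest", 3), ("1080i highest", 17)]:
            print(f"{label}: factor {factor} -> QP {factor_to_qp(factor)}")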

    Furthermore, it can now be explained why the encoder, working at 720p with the lowest quantisation factors together with comfortably higher given bitrates, sees no need to insert B-frames.
    The encoder simply does not need to save any bits here and produces P-frames only (besides the occasional I-frame at GOP boundaries).
    This can be useful for higher quality and easier encoding/decoding.

    On the other hand, forcing a 1080i encode down to a last factor of 4 instead of 17 either violates the bitrate restriction or forces too many bits,
    so the encoder freezes, or the SD card cannot keep up. That is not useful at all.

    I would leave these controls alone at first and rather concentrate on the buffer controls.

    While implementing x264 it became obvious how important a proper buffer implementation and following the HRD model were for getting broader playback compliance.
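
    To give a feel for why the buffer controls matter: the HRD is essentially a leaky-bucket model of the decoder buffer. A minimal CBR-style simulation sketch in Python (frame sizes, bitrate and buffer size are made-up illustrative numbers, not GH2 measurements):

        # The buffer fills at the stream bitrate; each picture drains its coded
        # size when it is removed. An underflow is the kind of HRD violation
        # that breaks playback on strict decoders.
        def vbv_check(frame_bits, bitrate, buffer_size, fps):
            fullness = buffer_size              # assume the buffer starts full
            per_frame_fill = bitrate / fps      # bits arriving per frame interval
            for i, bits in enumerate(frame_bits):
                if bits > fullness:
                    return f"underflow at frame {i} ({bits} > {fullness:.0f} bits)"
                fullness = min(fullness - bits + per_frame_fill, buffer_size)
            return "stream fits the buffer model"

        # A big I-frame then small P-frames, 24 fps, 24 Mbit/s, 3.5 Mbit buffer:
        frames = [3_000_000] + [800_000] * 11
        print(vbv_check(frames, bitrate=24_000_000, buffer_size=3_500_000, fps=24))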
  • Does anyone have more info on the GH2's interpretation of quantisation tables and scalers yet? E.g. in ptool, does Settings High sit at the 51 end of the range, and Settings Low at the bottom end, 1?
    So, to remind ourselves: 1 gives the best quality/worst compression, and 51 gives the best compression/worst quality.

    "...GOP Pattern 7 = GOP length 12 giving IBBP. Each video frame is divided into M macroblocks (MB) of 16 × 16 pixels (e.g., M = 99 for QCIF) for DCT based encoding. The DCT transformation is performed on a block (i.e., subdivision of a macroblock) of 8×8 pixels. We denote the quantization scale of the encoder by q. The possible values for q vary from q = 1, 2, . . . , 31 for MPEG4 and q = 1, 2, . . . , 51 for H.264/AVC..."
    Some more info in http://mre.faculty.asu.edu/VDcurve_ext_Jan05.pdf
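
    To make the macroblock arithmetic from that quote concrete, a quick Python sketch; the QCIF case reproduces the M = 99 figure, and qstep shows why QP 51 compresses so much harder than QP 1 (in H.264 the quantiser step size roughly doubles every 6 QP):

        import math

        # 16x16 macroblocks per frame; QCIF reproduces M = 99 from the quote.
        def macroblocks(width, height, mb=16):
            return math.ceil(width / mb) * math.ceil(height / mb)

        # Approximate H.264 quantiser step size: doubles every 6 QP.
        def qstep(qp):
            return 0.625 * 2 ** (qp / 6)

        for name, w, h in [("QCIF", 176, 144), ("720p", 1280, 720), ("1080p", 1920, 1080)]:
            print(f"{name}: {macroblocks(w, h)} macroblocks")
        print(f"Qstep at QP 1: {qstep(1):.2f}, at QP 51: {qstep(51):.0f}")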
  • I don't know if it would help, but there are a number of charts from http://www.dsclabs.com/ that might be useful for this kind of testing, if anyone has access to one.
  • @driftwood
    Maybe, but it is of no use to me, as I can't understand this.
    I repeat: do not touch anything, uncheck all video-related patches, and change one setting only.
  • @driftwood
    I can't understand what you are doing at all.
    You changed the GOP, varied the ISO, and a bunch of other settings.
    The results are useless.

    Most probably it is better for you to use http://www.personal-view.com/talks/discussion/335/pro-gh2-avchd-encoder-settings
  • @Vitaliy_Kiselev Is it ok to leave my GOP setting at 3 (as in Kae's settings)? All I'm going to do is leave the Quantiser table on 1080p24 Setting Low, and then work through all the Quantisation Scaling tables one at a time. Or is this pointless?
  • @driftwood
    First, do not change the GOP or anything else other than the one parameter.
    Second, fix the ISO.
    Always attach your settings or list them completely, as I can't understand anything from your tests.
  • @Vitaliy_Kiselev ok, will start tests again. (ps I was only leaving the Quantizer table at 1080p24 Setting Low and travelling through the Scaling tables one at a time with this test, i.e. leaving everything else alone. Do you prefer the GOP and bitrates to be put back to 'Suggested Values'?)

    Can someone talk to me about how the variable Quantizer scaling tables will affect the Quantizer table settings in Ptool? Doing some methodical quant testing now on 24p.
    I take it the scalers give a percentage reduction in bitrate compared to the standard Quant table, if employed?

    Been reading this to try to understand things:
    http://www.vcodex.com/files/H264_4x4_transform_whitepaper_Apr09.pdf
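
    The core of that whitepaper is the 4x4 forward integer transform W = Cf · X · CfT; a quick numpy sketch of just that stage, on an arbitrary residual block (the scaling normally folded into quantisation is left out):

        import numpy as np

        # H.264 4x4 forward integer core transform: W = Cf . X . Cf^T
        Cf = np.array([[1,  1,  1,  1],
                       [2,  1, -1, -2],
                       [1, -1, -1,  1],
                       [1, -2,  2, -1]])

        X = np.array([[58, 64, 51, 58],    # arbitrary 4x4 residual block
                      [52, 64, 56, 66],
                      [62, 63, 61, 64],
                      [59, 51, 63, 69]])

        W = Cf @ X @ Cf.T                  # coefficients before quantisation
        print(W)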

    Update: first test, roughly a minute over the same scene, using Kae's 3-GOP 24p setting with:
    (pic 1) Quantiser Table set to 1080p24 Setting Low, Q Scaling Table = Scaling T1, Auto ISO
    (pic 2) Quantiser Table set to 1080p24 Setting Low, Q Scaling Table = Scaling T1, ISO 1600
    (pic 3) Quantiser Table set to 1080p24 Setting Low, Q Scaling Table = Scaling T2, Auto ISO
    (pic 4) Quantiser Table set to 1080p24 Setting Low, Q Scaling Table = Scaling T2, ISO 1600
    [Attachments: screenshots for pics 1-4 above and Q Table low and T2 settings.txt]
  • Actually, that doesn't really make sense either. It's the test with the lower Q value that is the best. The problem is that it was only used in the top part of the very first frame. I'll have to repeat this test. Who knows, maybe my cat walked in while I was running the last test and changed the light reflected on the target slightly.

    Chris
  • Good point. Maybe the yellow came out funny for another reason - because it's lower in the frame, perhaps? One thing I have determined is that the codec sometimes runs out of steam toward the lower parts of frames. The yellow stripe is the last one in the test target.

    Chris
  • @cbrandin - Interesting observation about the eyes' ability to resolve yellow. However, if the encoder is detecting yellow solely on the basis of hue, I'd anticipate a problem with dark shades of yellow, i.e. yellowish-brown. The eye can discriminate dark shades with much more clarity than it has for bright yellow.
  • I think I figured out why the color fidelity with higher bitrates is better, even though the minimum quantization value is not lower. It's because, even though the minimum is the same, a higher proportion of the macroblocks in I-frames are coded at that minimum value. It looks like the codec is smart enough to know that yellow is the color eyes are least able to resolve - so it bails on that color first, which is why the biggest difference is in the yellows.

    As the codec goes from top to bottom in a frame, if it runs out of bandwidth, items low in the frame are sacrificed. I've seen lots of footage where mud, bad colors, macroblocking, etc... is more prevalent in the lower parts of frames.
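
    One way to check this numerically would be to dump per-macroblock QP values with a stream analyser and average them per macroblock row - if the averages climb toward the bottom rows, that confirms the squeeze. A Python sketch (the QP list is a synthetic placeholder standing in for real analyser output):

        # One 1080p frame is 120 x 68 macroblocks in raster order.
        MB_COLS, MB_ROWS = 120, 68

        # Placeholder data: QP drifting upward toward the bottom of the frame.
        qp_values = [20 + row // 10 for row in range(MB_ROWS) for _ in range(MB_COLS)]

        for row in range(0, MB_ROWS, 10):          # every 10th macroblock row
            row_qps = qp_values[row * MB_COLS:(row + 1) * MB_COLS]
            print(f"MB row {row:2d}: average QP {sum(row_qps) / MB_COLS:.1f}")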

    Chris
  • Yup, that's exactly what I'm experimenting with.

    Chris