8bit color nonsense
  • Don't listen to Philip Bloom if you ever expect to surpass him.

  • Hi @Shian In Sony Vegas

    "8-bit Performs video processing using 8-bit arithmetic and in the video (studio RGB, or 16-235) color space.

    32-bit floating point (video levels) Performs video processing using 32-bit arithmetic and in the video color space.

    32-bit floating point (full range) Performs video processing using 32-bit arithmetic and in the full-range color space."

    I'm still trying to understand the difference between video levels and full range. Any advice from your side would be appreciated.

  • Without getting too technical: it just means you have more colors available to try to accurately display the image. Those of us who remember the early days of designing pages for the internet (14.4k modems) were often confined to displaying GIFs with 256 colors, versus today's PNGs containing millions of colors. It's not too dissimilar to that. This wiki page helps demonstrate it.

    http://en.wikipedia.org/wiki/Color_depth But what working in floating point (32-bit) color gives you is basically the ability to sort of paint over the top of the given image and fill in the blanks. It's a fake way of adding more color information to your images. It doesn't really add anything, but it prevents the image from breaking down the way it would in 8-bit. You are essentially airbrushing fine detail onto a coarse image.

    Which is why you shoot flat. Take the burden off the internal engine of the camera and place it on the massive processing power of your PC, and a 32bit color engine. Everything I've done in ColorGHear is based on this.
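    A quick sketch of the color-depth point above (my own illustration, not from the thread): quantize a subtle gradient at two bit depths and count how many distinct steps survive. The few surviving 8-bit steps are what you see as banding.

```python
def quantize(value, bits):
    # Round a linear value in [0, 1] to the nearest code at the given bit depth
    levels = (1 << bits) - 1
    return round(value * levels) / levels

# A subtle gradient covering only 2% of the brightness range
gradient = [0.50 + 0.02 * i / 1000 for i in range(1001)]

# Distinct steps after quantization: a handful at 8 bits (visible banding),
# hundreds at 16 bits (smooth to the eye)
steps_8bit = len({quantize(v, 8) for v in gradient})
steps_16bit = len({quantize(v, 16) for v in gradient})
```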

  • @shian: you made me test; I see no difference (in a well lit scene) between 16bpc and 32bpc processing. On the other hand 16bpc is only very marginally faster than 32bpc float (I was making assumptions which don't turn out to be true) so 32bpc it stays.

    Agreed on the RGB444 master, which on Windows really means Uncompressed. DNxHD I'm staying away from, at least until they fix their colorspace conversion bug in the 2.3.x release.

  • Tommyboy, I use Vegas 10... What I write here is for luminance levels. The story is pretty much the same for chrominance levels.

    With 8 bits you can have 256 different combinations. We use 8 bits to represent the integers 0-255. 8-bit video levels means black is 16, white is 235, and everything in between is a shade of grey. To keep things simple, values below 16 and above 235 are invalid. 8-bit full range means black is zero and white is 255.

    When you set your Vegas project to 8-bit, the 8-bit values decoded from your input format are used as-is, with no expansion or contraction of the ranges, and then converted to 8-bit RGB for working, and converted back to your output format upon rendering with no expansion or contraction of the ranges. So when your Vegas project is set to 8-bit, if your input format is video levels (like the GH2's files are), then you are working in video levels. If your input format is full range, then you're working in full range. Output formats like MP4 and MTS are supposed to be video levels also.

    In 32-bit mode, the full range is still 0-255 as far as I know, but now you are working with decimals instead of integers. You can have values with more precision, like 20.235453. When you choose 32-bit video levels, the input is decoded and converted to RGB with no expansion or contraction of the ranges. It's essentially the same as choosing 8-bit, except you are working with more precision internally. When you choose 32-bit full range, the input is expanded out to the full range when read, and compressed down to video range when rendered: black at 16 in your input becomes 0 in the working space, and is converted back to 16 upon rendering.

    Practically speaking in Vegas, you have a few different ways of working. If your PC is fast, set the project to 32-bit full range and a gamma of 1.0. Video previews in the Vegas window and on an external computer monitor will show the correct levels. If your PC is not so fast, do your work in 8-bit mode and render in 32-bit mode. Choosing 32-bit video levels will make all of Vegas's filters behave exactly the same as they would in 8-bit mode, except of course that they work with more precision. If you prefer to work in full range, when set to 8-bit mode you'll need to put a Levels filter with the Studio RGB to Computer RGB preset at the beginning of every filter chain and another Levels filter with the Computer RGB to Studio RGB preset at the end of every filter chain, and then disable those filters when you switch to 32-bit full range mode. It's a bit complicated, and I don't suggest working that way.

    Vegas will only display the levels correctly if you have everything set correctly. When the project is set to 8-bit and your input format is using video levels, full-screen previews on a computer monitor will only be correct if you check the "Use Studio RGB" box in the Preview Device Preferences. Video previews in the Vegas window will only be correct if you add a Levels filter with the Studio RGB to Computer RGB preset at the end of the filter chain. Disable the filter when rendering in 8-bit mode. The same rules apply when the project is set to 32-bit video levels and your input uses video levels. If you set the project to 32-bit full range, previews will show the levels correctly with no need to add a Levels filter or check the "Use Studio RGB" box.

    There's more info here: http://www.glennchan.info/articles/vegas/v8color/vegas-9-levels.htm
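    The video-levels vs. full-range expansion and compression described above can be written as two simple linear maps. This is a sketch of the math only (not Vegas's actual code), using luma values:

```python
def studio_to_full(v):
    # Expand video-levels luma (black = 16, white = 235) to full range (0-255)
    return (v - 16.0) * 255.0 / (235.0 - 16.0)

def full_to_studio(v):
    # Compress full-range luma back to video levels for rendering
    return v * (235.0 - 16.0) / 255.0 + 16.0

black = studio_to_full(16)    # 0.0
white = studio_to_full(235)   # 255.0
# In a floating-point working space the round trip is lossless;
# in 8-bit integer math, rounding after each conversion loses codes.
round_trip = full_to_studio(studio_to_full(100))
```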


    shian, it makes no sense to say that working in 32-bit floating point is a reason to shoot flat. You probably wouldn't want to shoot flat if you couldn't work in 32 bits. But having the ability to work in 32 bits doesn't mean you should shoot flat. The reasons for shooting flat are separate.

  • Yep...8 bit 4:2:0 color really is nonsense in 2013...

  • "The reasons for shooting flat are separate."

    @balazer If/when you have the time, it would be greatly appreciated if you could elaborate.

    Thanks in advance.

  • My mistake was assuming everyone knew why to shoot flat. I'm trying to find the numerous threads where I (and others) have explained this, and save some typing.

    The impetus behind this thread was the assumption that shooting flat in 8 bit was a mistake. A mistake because of the "lack of color information." Which is just a wrong assumption.

    You shoot flat because there are some things you can't undo in post. You can easily add contrast. You can't take it away and have it look very nice. You can't undo the saturation bleed that comes from embedding rich colors into the image in 8-bit 4:2:0. You have ultimate control over your image in post by shooting as flat as possible, and THEN grading in 32-bit. Which goes back to what I said earlier: "Which is why you shoot flat. Take the burden off the internal engine of the camera, and place it on the massive processing power of your PC, and a 32-bit color engine. Everything I've done in ColorGHear is based on this."

  • @shian so do you use Intravenus on a regular basis now? Have you experienced any problems with wider shots with it?

  • @Mirrorkisser Yes. I love it for everything. No problems on wide shots. It's my all-around favorite. I don't know if @driftwood and @cbrandin can get any closer to film than they have with Intravenus... but if they never get any closer than this, I won't care. I love the way the footage looks.

    I just shot it side-by-side with 5D Mark II and Mark III cameras, as the C cam, and the Canon guys all snickered at me. They were gonna show me that my little camera couldn't possibly hang with them. I was brought in mainly as a joke, and I knew it. And at the rushes viewing, the producers and the director blew up at the DP, because they wished all the footage looked like mine. They wanted to know why the GH2 wasn't considered the first choice for A and B cam. I just shake my head at this point. The indie world has its head all the way up Canon's ass. In the end, I was the only one laughing.

  • @shian What is shooting flat? What is the benefit? Thanks

  • @shian +1 on Canon comment. People just trust the name. Glad I found the GH2 and personal-view.com. it's all good - we'll just keep rocking the house

  • @shian - Love that anecdote, for sure. Local DP just ditched Canon after seeing what the GH2 could do when I and my coworker were pulled in on a music video with him. Cuts stunningly with his F3.

    @matt_gh2 - In the Canon world it's Cinestyle. Here in GH2 land, most shoot with most of the settings in the picture styles set to -2. I shoot standard, -2 across, except for saturation, I believe.

  • @matt_gh2 dial down the saturation, and the contrast. Try to shoot using a profile that doesn't process (and bake-in) any film curves. Basically as close to something that resembles a raw image. Also, if you manage to be able to squeeze it into the 5 stop range I discuss in the Expose in the Zone tutorial, you get an even flatter image.

    http://www.personal-view.com/talks/discussion/3773/expose-in-the-zone-getting-the-best-image-for-grading./p1

    This is why I say shooting Intravenus is like shooting Kodachrome. Kodachrome only had a 5.6-stop dynamic range, unlike today's 14-stop film. I'm used to that from my days shooting stills. But with today's technology I don't see that as a limitation, except when shooting in available light. I'm torn between my love of ISO sensitivity going up and having to work around limitations. Having to work around them makes you better. Better sensitivity and wider DR are great to have. But I'm a fan of those who paint with light, and who paint again in post. I'm excited for the new tools. But I feel for those who will learn on them. Most will never do anything interesting with them. Like the morons who buy RED cams thinking it will instantly make them a cinematographer.

    @shian ok. Currently shooting Apoc Now Boom with settings at Smooth all -2 (saturation -1). So would that be considered flat? Also, what is the benefit of shooting flat? Thanks. Right on re the RED cam idea.

    @artiswar Thanks

  • benefit? control and flexibility in post

  • Ok. So my smooth all -2 with Boom is considered flat?

  • @shian, If my only distribution method is bluray (and I will not make any further changes to the footage in the future), do you still recommend rendering to a 444 high bitrate codec before burning to bluray (as indicated in steps 10/11 of your workflow)? The Bluray max bitrate is 54Mbps, so can't I just render to MPEG 2 around 54Mbps? Thanks for your time.

  • @balazer Actually 16bit is enough to stop banding, but in After Effects going to 32bpc enables super-white and super-black (i.e. greater than 255 and less than 0). This means one effect could push your colours past the brightest displayable white and then next effect could pull them back into the visible range. In 16 bit this would clip, but 32 bit doesn't.

    http://help.adobe.com/en_US/aftereffects/cs/using/WSB48B246A-E34D-4d3f-A0A4-B932FD3F12E6a.html
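    Here's a toy numeric illustration of that super-white headroom (hypothetical gain values, not After Effects code): the first effect pushes a pixel past white, the second pulls it back. Clamping between effects destroys the highlight detail; float headroom preserves it.

```python
def grade(value, clamp_between_effects):
    # Effect 1: a 2x exposure boost pushes 0.8 up to 1.6 (super-white)
    v = value * 2.0
    if clamp_between_effects:
        # Integer pipelines clip to the displayable range after each effect
        v = min(max(v, 0.0), 1.0)
    # Effect 2: a 0.5x gain pulls the levels back into range
    return v * 0.5

clipped = grade(0.8, clamp_between_effects=True)     # 0.5: highlight detail lost
preserved = grade(0.8, clamp_between_effects=False)  # 0.8: super-white survived
```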

  • @Peter123456 you want the cleanest image possible before going to bluray. I've done encodes from both 444 and 422 masters exported from AE with the same settings just to see, and the 444 held up better every time. But I encode in Compressor. I've tried letting AE do the compression - no good. I've tried letting Toast do it - no good. I spent a week tweaking my settings, and I get the best results from 444 ProRes to bluray. Even when letting a pro house do the encoding. They love me for walking in with a hard drive and all my assets lined up, and a ProRes 444 master.

    Sam's point is valid too. Just make sure if you are going to broadcast that you conform everything back to broadcast standards at the end, or it'll get rejected. But this is broadcast video 101, you guys already know this stuff, right? Cuz you've been dubbing digibeta masters from day one.

  • @Shian what about using Adobe media encoder VS AE? Any difference in quality? What is your recommended render settings for sending a 1080p file to vimeo?

  • @shian I'm just getting into the grading and post universe, and reading your posts and threads makes my head explode on a daily basis - but I love it! I feel like every time I read a long discussion about an area I want to stop you and say, "OK, I heard you tell me to turn the computer on, then you lost me. Can you start from there?" LMAO! Thanks for all your dedication and hard work, sir.

  • May I ask respectfully for your optimized Compressor settings?

    BTW, I second having a master in ProRes 4444 to store. You never know which new distribution format will be around when a client comes back and wants a new version. I tend to offer them very reasonable prices for keeping a master, and they tend to come back eventually. Sometimes even those who didn't want to pay me for keeping a master come back with a desperate look on their faces. Normally I have one deep in the attic ;-)

    Such jobs are the best profit per hour, much better than initial production …

  • My 2 cents here :-)

    Actually, what is a color profile for our beloved GHx? It is a signal mapping from 16 bits (direct from the sensor) to a signal of 8 bits (recorded).

    Imagine a first setting named F (flat) and a different one named NSF (not so flat).

    Imagine the same image recorded in F and in NSF. In this image you will get one physical point, named E here (of the source signal), having the same value in F and NSF. Let's say, for instance, the value of E = 43. The value of E (here 43) refers to the same source luminance in the actual image.

    0 means full black, 255 means full white.

    Profile F is flatter than NSF, which means that in profile F you give more bits of information to every point whose value is < E, compared to the same points in profile NSF.

    Considering a point P(x,y) of the initial image, let's write L(F,P(x,y)) for the luminance of P(x,y) in profile F, and L(NSF,P(x,y)) for the luminance of P(x,y) in profile NSF.

    So we have: L(F,E) = L(NSF,E) = 43.

    We have 2 cases:

    Case 1: for every P with L(F,P) < L(F,E), we have L(F,P) > L(NSF,P), so that 1/L(F,P) < 1/L(NSF,P). That means that for shadows, profile F is better in terms of precision: 1/L(F,P) is the relative precision of the signal (while mapping, or for any work in post) in profile F for the point P.

    Case 2: for every P with L(F,P) > L(F,E), we have L(F,P) < L(NSF,P), so that 1/L(F,P) > 1/L(NSF,P). Here we are in the highlights, and profile F is worse (less precise) than profile NSF. If we had a 10-bit signal, the precision would be 4 times better (smaller) and it would be fine in terms of quality. I think this is the point stressed by @FilmingArt.

    But, yes there is a but: here in case 2, L(F,P) > L(F,E), which means the highlights already get a lot of bits of information, and the precision 1/L(F,P) is quite small (that means quite good) even if greater than 1/L(NSF,P), so @FilmingArt's argument is not so relevant, as far as my analysis holds.

    Again my 2 cents here :-)
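    The case-1/case-2 argument above can be checked numerically. This is my own simplification, using a power curve as a stand-in for a "flat" profile, and counting how many 8-bit codes each profile spends on the darkest tenth of the scene:

```python
def encode(linear, gamma):
    # Map linear scene luminance in [0, 1] to an 8-bit code with a power curve.
    # A higher gamma gives a "flatter" profile that lifts the shadows.
    return round((linear ** (1.0 / gamma)) * 255)

def codes_in_shadows(gamma, cutoff=0.1, samples=20000):
    # Count the distinct 8-bit codes used for the darkest `cutoff`
    # fraction of scene luminance
    return len({encode(i / samples * cutoff, gamma) for i in range(samples + 1)})

flat = codes_in_shadows(2.2)      # flat profile: many codes spent on shadows
not_flat = codes_in_shadows(1.0)  # linear profile: far fewer shadow codes
# flat > not_flat: the flatter profile is more precise in the shadows,
# at the cost of fewer codes left over for the highlights (case 2).
```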

  • @shian thanks for your reply! I was sceptical whether very detailed wide shots might be too much for the 444cbrandin matrix in all-I mode, but I will give it another go now. Your experience sounds very encouraging!