ColorGHear [PART 2]
  • 568 Replies
  • A cool hack on any camera would be an adjustable cutoff highlight. So you could set 90% or 80% zebras up top, and a purple highlight at either 20% or 10% at the bottom.

    Any good monitor allows this :-) besides false color :-)

  • A cool hack on any camera would be an adjustable cutoff highlight. So you could set 90% or 80% zebras up top, and a purple highlight at either 20% or 10% at the bottom.

    False Color is awesome for this, but one could do rather well with just those 2 upper and lower indicators and the above reference.
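The two-indicator idea above can be sketched in a few lines. This is my own toy example, not anything from camera firmware or a monitor; the 90% and 10% defaults are simply the thresholds suggested in the post:

```python
import numpy as np

def exposure_flags(luma, high=0.90, low=0.10):
    """luma: values in 0..1. Returns (zebra_mask, shadow_mask) --
    pixels at/above the highlight cutoff and at/below the shadow cutoff."""
    luma = np.asarray(luma, dtype=float)
    return luma >= high, luma <= low

# A crushed shadow, a safe midtone, and a near-clipped highlight:
zebras, shadows = exposure_flags([0.05, 0.5, 0.95])
print(zebras.tolist())   # [False, False, True]
print(shadows.tolist())  # [True, False, False]
```

False color effectively does this with many bands at once; the sketch is just the two-band version described above.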

  • @shian Great info and thanks for digging for us!

  • So, I finally got a definitive answer as to why DSLRs place +/- 0 below 50%:

    DSLRs tend to read at the bottom of Zone V, not the center. So I asked a couple of DSLR manufacturers whether this was on purpose, to get even stop readings, and the response I got was essentially, "yes" -

    "...So you'd know where the bottom of the zone is, and then the next stop indicator gets you to VI and is easier to read. So when you hit 1 stop over you are at the bottom of VI and so on."

    So NOW you know why it's not dead center at 50%


    in camera reading.jpg
    1200 x 900 - 123K
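The bookkeeping being described is simple enough to write down. This is illustrative only (the zone numbering is the standard Zone System convention, not anything from a manufacturer): if the meter's +/- 0 marks the bottom of Zone V and each full stop spans one zone, then every whole-stop reading lands on the bottom edge of a zone.

```python
def zone_from_meter(stops_over):
    """Zone whose bottom edge the needle sits on, for a whole-stop
    meter reading. +/-0 is taken as the bottom of Zone V, per the post."""
    return 5 + stops_over

print(zone_from_meter(0))   # 5  -> bottom of Zone V
print(zone_from_meter(1))   # 6  -> one stop over, bottom of Zone VI
print(zone_from_meter(-2))  # 3  -> two stops under, bottom of Zone III
```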
  • The Zone System tutorials are live. I hope they clear things up. Let me know if I just made it worse.

    zonesystem.jpg
    399 x 465 - 102K
  • Depends on the lightness or darkness of the skin tone which is why we have gray cards. Put the gray card at 50% and everything else will fall into line. Then from there you can move things into their proper zone with light, or you can get creative. But knowing where you are within the system, and within your cam's DR allows you to break the rules in really creative ways using gels, flags, bounces, cookies, etc...

  • It makes sense. But I'm sure that in reality it is trickier. First of all, you have to know how to properly measure elements in your scene (which I am still not clear on. Light meter explanations always baffle me. I need to just get my hands on one; that might solve the confusion.) Second, you need to know how to correct properly. On-camera, that's fairly straightforward/by the numbers, but I'm sure modifying the scene by adding light introduces a whole other slew of units of measurement that have to be dealt with.

    One basic question . . . is it a general rule of thumb to put face exposure at 18% / middle grey / f5.6 zone and then to manipulate the other light in the scene to get as much of it in range as possible?
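The arithmetic behind that rule of thumb is just doubling per stop: place 18% grey at your reference, and counting stops tells you where other reflectances fall. A minimal sketch, assuming the conventional 18% value for middle grey:

```python
def reflectance(stops_from_middle_grey, middle_grey=0.18):
    """Reflectance doubles per stop over, halves per stop under."""
    return middle_grey * (2.0 ** stops_from_middle_grey)

print(round(reflectance(0), 2))   # 0.18 -> middle grey itself
print(round(reflectance(1), 2))   # 0.36 -> one stop over
print(round(reflectance(-1), 2))  # 0.09 -> one stop under
```

Skin tones vary, which is the point of the gray-card reply above: meter the card, not the face, and then place the face where you want it by counting stops.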

  • Because of this.


    Jay's explanation is an introduction to the zone system. This map is step 2.

    Learning to manipulate the zones for your own ends is the tricky part. That's the part I'll teach in part 2 of the Zone System tutorials. It takes some very careful thought at first, but like everything else it becomes second nature over time. The hardest part for me is finding a way of explaining the whole thing so that people don't get confused, and that's why it's not finished yet. My first 2 attempts were way too complex.

    thezones.jpg
    1900 x 1067 - 250K
  • @Shian thanks for the video! First explanation of the zone system that didn't leave me scratching my head. How on earth does everyone else make it sound more complex than it is?!

  • Since I am slammed with work on all fronts, and I know you GHearHeads have been wanting the Zone System tutorial, which I've yet to complete - here's my buddy, cinematographer Jay Holben, over at Panavision giving a great introductory talk about the Zone System and dynamic range.

    My tutorial will be much more in depth, but for those still struggling with it, this is a good starting point. Also there's a Zone Map in the film school section showing where different subjects fall when exposing for middle gray.

  • The new Adobe suite is going to make ColorGHear even more flexible, and may give even CGT users options beyond AE. As soon as I'm able to test it out and confirm, I'll inform you of all the cool things you can do.

  • Yes. That's why I couldn't port AE's grain killer. You can save it as a powergrade and reuse it in Resolve, but can't export it externally to other platforms.

  • @shian I tried to save the sharpening method into a LUT. When I compare it with the original node structure, it's completely different. Could it be that sharpening or softening in general cannot be applied via LUTs?

  • @shian this luminance sharpening method seems nice and simple. I think it would also fit a LUT system - maybe two or three LUTs with different intensities:

    http://liftgammagain.com/forum/index.php?threads/building-a-better-sharpen-in-resolve.278/

    As mentioned there, maybe BMD will integrate this into Resolve.
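On the earlier question about saving sharpening into a LUT: a LUT is a pointwise mapping (each output pixel depends only on that pixel's own value), while sharpening mixes a pixel with its neighbours, so no LUT of any size can reproduce it. A small demonstration of the difference, my own example rather than anything from the linked thread:

```python
import numpy as np

def apply_lut(img, lut):
    """Pointwise: each output value depends only on the input value."""
    return lut[img]

def unsharp_1d(img, amount=1.0):
    """Neighbourhood op: blur with [1,2,1]/4, add back the difference."""
    img = img.astype(float)
    blur = np.convolve(img, [0.25, 0.5, 0.25], mode="same")
    return img + amount * (img - blur)

row = np.array([0, 0, 255, 255, 0], dtype=np.uint8)  # a hard edge
identity = np.arange(256, dtype=np.uint8)

print(apply_lut(row, identity))  # edge untouched: a LUT can't see neighbours
print(unsharp_1d(row))           # edge values overshoot: spatial info was used
```

That's why the exported LUT node structure looks nothing like the original: Resolve can only bake the per-pixel tonal part of the grade into the LUT, not the spatial part.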

  • Wow, thanks @BurnetRhoades. I have a few long renders, and for a while I rendered out TIFF sequences, then rendered them as a ProRes 444 movie in Photoshop (yes, you read that right), then married the audio. I think my problem was getting it all out in one pass and just worrying whether my computer would handle it. I'll keep this in mind.

  • In the event of a crash, a partial .mov (due to crash, or monster) can be joined with a continuation .mov in exactly the same way marrying sound to picture after render works. Cue: QuickTime Pro Player

    I do essentially this when I've done a big render only to later discover some glitch in an individual frame, a scene where an effect or layer might have been accidentally left off (or on), etc. Anything that's maybe a few minutes or an hour of rendering versus many hours, or days, for the main project. I render "patches" and cut these into larger .mov renders very easily thanks to the QTPro Player.

    It's totally lossless and totally awesome to be able to do so quickly and easily, with a tiny app rather than have to fire up a full-on editor and create a new project, etc.
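If you'd rather script the same join, ffmpeg's concat demuxer with stream copy also appends .mov parts losslessly - an alternative I'm suggesting here, not the poster's QuickTime Pro workflow. A minimal sketch that just builds the list file the demuxer reads; you would then run `ffmpeg -f concat -safe 0 -i joinlist.txt -c copy joined.mov`:

```python
def concat_list(paths):
    """Build the text of an ffmpeg concat-demuxer list file:
    one "file '<path>'" line per input, in playback order."""
    return "".join("file '%s'\n" % p for p in paths)

# Main render, a re-rendered "patch", and the remainder (names are examples):
text = concat_list(["part1.mov", "patch.mov", "part2.mov"])
print(text, end="")
```

Because `-c copy` never re-encodes, the join is lossless in the same way the QT Pro cut-and-paste is.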

  • Hey @shian

    I have a basic question for you. I know you render out a ProRes 444 file then marry the audio and video tracks. What is your opinion on rendering out image sequences for big projects? Would it save time in the final render and be safer (in case of blackouts, program crashes, attacks by giant monsters, etc.)?

  • I've already got the 2-strip working - 3-strip should be easy - BUT their example looks close to a grade I did on the same footage using the cyan Tone Monster. The look they demonstrate is not extraordinarily special. If I could find time, I'd probably knock it out in a few minutes using the GHears that already exist and save it as a LUT.

    Screen Shot 2013-04-24 at 1.39.40 PM.png
    1054 x 592 - 950K
  • Hi @shian, could a 3-strip Technicolor look like the one below be integrated as a LUT into ColorGHear:

    http://www.mynahmedia.com/2013/01/3-strip-technicolor-look-in-davinci-resolve/

  • GH2 + Nikkor 24mm + Century Optics 1.33X anamorphic, 5DtoRGB transcode (grain killer), CS6: PPro + AE, Kuler, ColorGHear

  • You'd have to ask the CG users whether my tutorials were any help in that regard, cuz there are not enough Resolve tutorials yet for me to say with confidence that they will be. There are a few Resolve tutorials in the CG Pro track, with more to come, and eventually that will be the case.

    The film school deals primarily with cinematography for now.

  • Do the ColorGHear film school tutorials give you instructions on how to use Resolve? Would a novice at color grading be able to learn how to grade in Resolve, in addition to the instruction sets that are included?

  • I didn't get one for that reason. I don't really like the way it looks. But that's just me.

  • I was just curious. I'm looking to get one and just wanted to be prepared for any weird footage quirks.
