Zacuto 'Revenge of Great Camera Shootout,' featuring GH2
  • 261 Replies
  • Test after test... getting bored of these... Still, since they're using Quantum 9b, it'd be interesting to see the 'non-pixel-peeping' reaction of the Zacuto team...

  • @jrd I disagree. When I was 8 years old, my parents could look past poor image quality, but I was a very tough sell. A film had to be very old (black and white through Music Man) or recent (80s 35mm or newer, preferably Hollywood) for me not to be distracted. I commented on it immediately (within the first 60 seconds of footage) and would sometimes walk out of the room while my parents kept watching a foreign film whose image quality suffered from a lower production budget (though I might well have stayed now).

    That was when I was a kid. Kids today are regularly exposed to higher image quality in everything from TV shows to music videos, and sometimes even on the web, than the 8mm or 16mm work that used to distract me. When I saw the preview for Clerks (for example), my older stepbrother could not get me interested in it, even though the humor was a good fit for us, because I disliked the look so much as a kid. In college I went back and checked it out.

    In other words, if you expose a kid to a lot of something technically better (image, sound, graphics, etc.), they will notice the difference for the most part and react to it. As adults, we may pick and choose, or analyze and retrain, but as kids the reaction can be a lot more direct.

    Good image quality is what gives you the chance to tell your story to a wider audience. Without it, a lot of people (especially young people) will be difficult to convince to give it a chance. Just my 2 cents.

  • @jrd I completely agree. Independent creators will find out just how important all of those elements are, and they'll be better off for it. Tailor your stories to what you can accomplish. Write around your locations and the resources you have available to you.

    And I agree, @driftwood, test after test is getting annoying. Why not create something original? Instead of shooting shrubbery and arguing over bit rates, motion cadence, or noise in the red or blue channel, we should be discussing story elements, shot selection and the other elements of our creations. But as long as cameras keep improving, I'm sure many will labor over the technical aspects, and in many cases this is a great resource and service to the creative community. However, what happens when cameras start recording 12-bit uncompressed RAW images at incredible frame rates? What else will we have to talk about except the creative process of our craft?

  • @thepalalias

    Maybe we'd be better off if it were otherwise, but in the ultra-low-budget feature realm, there's no correlation between format or production value and marketplace acceptance. The biggest hits of the last 15 years were shot on consumer Hi8, DV and first-generation prosumer HD, with cheap production values in all cases. This at a time when far better formats were readily accessible, without enormous increases in budget.

    I honestly don't know what kids expect today, but being willing to watch a movie on the web or a telephone doesn't suggest high standards....

    In demanding better image quality, I think you're the exception.

  • On the one hand, I'm glad the GH series is getting more respect. But on the other hand, the reason I even got a GH1 was that they were so cheap at closeout when the GH2 came out. T2i's have never gotten that cheap, and even as cheap as other Canons have gotten now that some of them are being discontinued, they are still nowhere near that.

    Not gonna like it if the shootout causes a run on the GH2s and prices never go down. Or worse, go up.

  • @jrd Look at image quality in video games if you do not find the evidence in films (games represent an even larger financial market for kids these days). Games drove a surprising number of higher-resolution displays (computer and television), and the correlation between increasing graphic quality and the widespread initial acceptance of gaming (beyond the "solitaire market") was unmistakable. Final Fantasy VII (released in the U.S. in 1997) spent several times as much on CG cut scenes as Final Fantasy VI had just a few years before (and the project cost close to the estimated budget for the first part of the Lord of the Rings trilogy).

    New games are talked about in the gaming magazines based upon their graphics engines, even to the mass-market crowds reading things like the store magazine at EB Games. Gamers often test their graphics cards in engines the way that we might compare cameras. And outside the recent indie market, poor graphics tend to be one of the first things that gamers or reviewers will complain about (in close proximity to poor controls). The visual budgets are several times what the other departments get.

    If increasing visual quality were not such a concern to the young market, developers would not be spending ca. $14 million on the average triple-A HD game. Yes, there is an indie market growing - which also looks many times better than the indie market 10 years ago, and is ALSO often HD except on mobiles (which look several times better than they did several years ago...)

    Even Nintendo has been moving towards HD and they were one of the last holdouts.

    Marketing or not, kids care about visual quality a lot. The gaming magazines targeting a pre-teen to teen demographic in the mid-90s were already often providing glossaries of 3D CG terms (Phong vs. Gouraud shading, mip-mapping, etc.), and the annual gaming guides were keeping track of things like the maximum color range each system could display (which is why I can tell you from memory that the SNES could display more colors, and choose from a wider palette, than the Sega Genesis). From the 90s until recently, polygon performance was a big measure of system performance.

    Enough ranting: back to video. People may choose convenience over delivery-format quality, but they seem to care more about production-format quality. What are they watching on the formats you mentioned? When it is not reality subjects (which is a lot of YouTube and a different category), they tend to prefer things with higher production values. They can only watch The Blair Witch Project so many times. :)

    Oh, and the next time you think kids do not care, try getting them to watch the 90s versions of their favorite superhero show and see how long they go without commenting on the animation (which was usually lower frame rate, etc.)

  • @thepalalias What's interesting about the video game comparison is how games are driven by higher frame rates. Gamers really crave the fluidity of a consistent 60+ FPS. 24 FPS in a first-person shooter is annoyingly stuttery. The total opposite of most filmmakers' view.

    I know we all love the ethereal look of 24P, but I wonder if the young market that grew up with gaming will embrace the push for 48+ FPS in movies.

  • @jrd on the topic of tools -- yes. We were kind of already there when you had the DVX100, it got better with the HVX200, and the day that the DSLR could do 24P it was a done deal. The rest of this stuff is just for hobby/interest, at least for me.

    I knew you could do competitive work with a 35mm Adapter, it's just easier now.

    @dbp

    Higher frame rates for games and CGI movies = yes please.

    But, as an avid gamer and consumer locust, no way for motion picture, or narrative content. I'll wager money on that, and part of the reason is that there is still a very strong distinction between what is considered reality in media and what is considered narrative fiction.

    As in, there are still a lot of reality shows, and then you have movies which look like movies. When they both start looking the same (just one looks more like a History Channel re-enactment) then there will be blood.

    We'll know more after The Hobbit screens 48FPS 2D (not 3D). xxDDDDDD

  • @kholi The subject is very very interesting to me. I understand where people are coming from... it's jarring to see narrative content with such high temporal motion. I'm still open to the idea that it could be made to work, and here's why.

    To my knowledge, there hasn't been a proper high-production, well-crafted Hollywood movie shown at 60fps. Part of the reason people have negative associations with the soap-opera look, imo, is because those shows are bad! Bad acting, writing, etc. etc. Comparing that to seeing something like Goodfellas in 60p isn't the same in my eyes.

    From what I'm gathering, the other big problem is that 48-60p is a lot less forgiving in terms of costumes and sets. When I ate at Planet Hollywood, I was actually shocked at how bad some of the props looked. With the magic of film lighting, low frame rate/motion blur and softness, those flaws are hidden well. I seem to recall reading that a lot of the New Zealand stock footage was really well received at The Hobbit screening, but the actual movie footage was not. Perhaps it's because the set designers, costume designers and makeup artists are now held to a much higher standard and can't get away with the same quality of work at higher frame rates?

    This is just speculation on my part. I'm still in the 24p camp, but I'm not ready to write off 48 or 60fps for narrative fiction just yet.

  • @dbp

    I think it's just the perception of reality: we all know that Hobbitses (hehe) aren't real, so why are we trying to make them look real? There almost has to be a disconnect so that we can suspend our disbelief briefly.

    Not sure if that makes sense, but yeah.

    I guess we'll see when it's here! Not too far away.

    I definitely will watch it at least three times: 1. 24p, 2. 48p, 3. 48p 3D

  • @driftwood Yes sir, I've read almost 3 articles which are similar. Most of them are using Quantum v9b. I think they were not informed regarding the Sedna and Canis settings.

  • All the best video work I have seen is done with Quantum v9b.
    All other videos I have seen are practically just test clips...

  • I see this as an analogy to painting. Do you prefer expressionism or photo realism? Is the emotional resonance in art linear to the degree of "realness"?

  • @jrd "What that actually means is, now would-be filmmakers will discover the real excuses (and good ones!) for why they can't create "worthy projects": no resources for great writing, no resources for great acting, no resources for professional lighting, no resources for credible locations, and on it goes...."

    Then don't be a filmmaker. ANYONE could make a film with Hollywood-level resources; the point of indie films is to show that you're truly creative and a problem solver. I would argue that some of "the best" filmmakers really just have "the best" crews. But that's just Hollywood for you. Film is weird, as it's the only art form where you can not be an artist yourself, but instruct others to create something for you, and then take credit for the "artistic vision". It makes me sick.

    Anyway, let's stop making excuses altogether!

    "no resources for great writing": You're not creative then. Don't attempt to make a film. (not literally you jrd, you could be a great writer, I'm just saying "you" in the general sense.)

    "no resources for great acting": You're either a total recluse, or can't see the potential in anybody who isn't carrying a SAG card. Directing probably isn't for this person. Most cities have TONS of great talent in theater guilds and such.

    "no resources for professional lighting": You don't have an "eye" then. Get out of visual mediums. Someone with a good eye could light with Home Depot resources.

    "no resources for credible locations": Make a zombie/slasher film then! Or a drama... or a comedy! The only genres where I can see "credible" locations being needed are sci-fi and action. Use creativity and create a story that doesn't require "lavish" locations.

    Sorry. I just hate ANY excuse these days. Saying something like "I don't have a professional DP" is basically just saying "I can't make movies". You can either make movies, or you can't. You understand the visual language, or you don't. No excuses. Ever.

    ANYBODY could make a film with a great DP, oscar worthy writing, top class editor, and support crew. The only people who DESERVE these kind of resources, are the ones who don't need them.

    Reminds me of a quote... "Only the man who does not need it, is fit to inherit wealth - the man who would make his fortune no matter where he started."

    Disclaimer: I haven't made a successful film myself. So these are just observations and personal philosophy. But I like to be in the camp that believes it's not up to "other" people to make your movie. It's up to YOU! The moment you believe you NEED hollywood level resources, is the moment your film has failed.

  • My suggestion is to create a topic on 'the weaknesses of the GH2', since everyone pretty much agrees on 'working around them'. Why not make a list of problems/weaknesses with possible solutions for working around them?

    For instance: 'bad low light' => possible solutions: fast lens, neatvideo noise remover, etc.

  • @thepalalias

    I agree with everything you say about how we perceive image quality. My EP said to me once, "16mm looks like 16mm, 35 like 35, SP Betacam like SP", etc. Later, I noticed that the new DV started to look like DV. I believe we'll always demystify formats - well, almost. When cinemascope 3-perf started to be used instead of anamorphic, there was a resolution drop, but nobody really noticed or cared. And these days, Kodak Vision 3 film is so grain-free I'm a little disappointed we never gave the new breed of 16mm cameras like the Aaton A-minima a really good go.

    Personally, I'm a bit anal about image quality and I'm known as the guy who returns a DVD to the shop "because it's shot on digital" - not usually because of its image quality, but because of the quick-flick genre, where an el-cheapo producer saves money not only on shooting and post costs but is cheap all the way through: getting actors to improvise their way through scenes in return for a killer one-day-shoot salary. Most of us recognise that style of movie, which gives digital a bad name; we're lured into parting with our money by the DVD slick vaunting a couple of star names and somebody's "five star" rating, only to see continuity errors, fluffed lines and even out-takes being used, because they just never get anybody back for a re-shoot.

    Not only does that practice give digital a bad name, it also has a big effect on our associative perception, at a subconscious level, of image quality. It will be interesting to see how our tastes evolve.

  • Good idea @Bressonftw, Looking forward to you starting it. :)

  • @Bressonftw Yes...that is a good idea.

    I've tried to do the same thing (i.e color banding and work-arounds) over at Wetpixel as a continuation of the review article I wrote about the Zacuto shootout screening in Australia.

    By the way have you seen the discussion over at RED User? The conspiracy theories have started...

    http://www.reduser.net/forum/showthread.php?79526-Just-watched-first-part-of-zacuto-shootout

  • @Bressonftw

    My suggestion is to create a topic on 'the weaknesses of the GH2'

    Nah, I'd rather film a bush and marvel at the resolution. Seriously, that's a great idea. How to work within limitations is better than getting drunk on the strong points -- though it is fun to get drunk sometimes.

  • By the way have you seen the discussion over at RED User? The conspiracy theories have started...

    http://www.reduser.net/forum/showthread.php?79526-Just-watched-first-part-of-zacuto-shootout

    Those REDDites are crazier than we are. I understand there will be a public hanging at RED studio of the guy who shot the RED footage. Also, a lot of us here (me at least) whine and bitch that Zacuto gear is overkill and overpriced. Some of the REDDites say it's toyish.

  • @rajmalik Those settings were released in mid-March. I thought I read that the tests were shot in February, but I may have remembered incorrectly.

    @dbp @kholi Frame rates are an interesting question, and games are a very different case from passive media (that being a label for non-interactive media). For games, increases in frame rate are synonymous with a decrease in latency. That is a non-consideration for film frame rates, because nothing requires the audience to do something "to" the content; the interaction is either entirely internal or social. So latency is no longer a reason for higher frame rates.

    But note that in my comment about animation, I was speaking about frame rates relative to 24, 25 or 30P, where a lower frame rate would mean "shooting in 3s" or "shooting in 4s" etc., as opposed to "shooting in 2s" or at the frame rate itself. So if we were talking about 24P, shooting in 3s would result in an 8P look, etc.
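    The animation-cadence arithmetic above can be sketched in a few lines of Python (the function name is mine, purely for illustration):

```python
# "Shooting in Ns" means each drawing is held for N frames, so the
# effective rate of new drawings is the base frame rate divided by N.

def effective_rate(base_fps: float, held_frames: int) -> float:
    """New drawings per second when each drawing is held for `held_frames` frames."""
    return base_fps / held_frames

print(effective_rate(24, 2))  # shooting in 2s at 24P -> 12.0 drawings/sec
print(effective_rate(24, 3))  # shooting in 3s at 24P -> 8.0 (the "8P look")
print(effective_rate(24, 4))  # shooting in 4s at 24P -> 6.0
```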

    Now, some of the 90s cartoons had interesting and influential character designs that are more aesthetically pleasing to me than their modern equivalents (X-Men and Batman TAS, for instance), but when I watch them with friends who have only seen the newer series, they are often frustrated by the animation being low frame rate, etc. That is not to say they would prefer frame rates exceeding 30P, just that they dislike frame rates as low as 8P.

    @bressonftw Actually, I disagree. The issue with the GH2 is not low-light performance (which actually holds up quite well against a lot of similar sensor sizes and sometimes even larger ones) but, more specifically, the way it handles the lowest part of the histogram. The unpredictability of shadow exposure is troubling, in that certain scenes can result in shadow flash or flicker, even with intra codec settings.

    I think a list was already made in the April Fools thread.

  • @brianluce The irony is that Zacuto approached RED to supply their own people to do the shootout so as to do the best possible shoot and grade...but RED declined to accept.

  • "...but RED declined to accept."

    Yep. This seems to be the Red way. Decline everything, then say "it's done wrong" when someone else doesn't get acceptable results.

    David Mullen made a funny point in the thread: in the first shootout, REDusers complained that it wasn't fair because Red didn't get special treatment and a proper workflow to make it competitive with the Alexa and such. Now, when each camera gets special treatment to make it sing... and Red still didn't come out on top... they're complaining that it wasn't fair that each camera had individual lighting? WTF? These guys are nuts.

    I mean, I'd still like a Scarlet or Epic myself (for frame-rates and flexibility)... But this is getting crazy. Maybe the GH2 really does just have a better look to most audiences. Did anyone think of that? Why does the most expensive piece of gear ALWAYS have to be the best or someone is cheating?

  • Reading REDuser these days is better than going to the movies xD Time to get another popcorn and make it a large one :D

  • @bwhitz

    To admit that the GH2 looks better to audiences than the Epic is to admit that in terms of image at the final stage (projection in a theater), the camera that cost 66 times more couldn't win. This really is insanity because it goes against so many of our core beliefs. How often do we expect to get worse performance out of any product that we pay 66 times more for? It just can't be real, so we deny that it's even possible without considering otherwise. Even the most zealous GH2 fanboy isn't willing to outright say the image on his camera can beat the Epic (though I don't know after this test).

    Every measurable spec of the Epic is objectively better than the GH2's: R3D RAW vs hacked AVCHD, RAW vs 4:2:0, 18 stops of dynamic range with HDRx, 5k vs 2k, and 12-bit vs 8-bit color. Yes, the resolution advantage was compromised by downscaling, but it should be noted that the Epic gets true 4:4:4 levels of color because of this. On paper, the Epic slaughters the GH2.

    But if we break down each spec, we can see how and why the GH2 was able to compare (and win according to many viewers).

    Resolution: This is where the Epic wins hands down, but since Zacuto's test had all the cameras sized to 2k, the Epic didn't get to shine here. The GH2 resolves pretty close to 2k, so resolution was probably comparable in this test. While I realize Red didn't get to flex all its muscles in this test, I agree with Zacuto that the final output should have been 2k since that is what most people see in theaters as of 2012.

    RAW vs hacked AVCHD 8-bit color: I think the amount of extra quality gained from RAW (even with grading) is not as significant as many people think. 8-bit doesn't seem like much, but it actually is. That's 256 levels each for red, green, and blue, meaning a total of over 16 million color combinations. Out-of-camera JPEG files (including the highly regarded Olympus JPEGs) are all 8-bit. The BMC stills that people have been so meticulously grading have all been from an 8-bit JPEG. Even Blu-ray is 8-bit!
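    The color-count arithmetic above is easy to verify; a quick sketch in Python:

```python
# 8 bits per channel -> 2^8 = 256 levels each for red, green and blue.
levels_8bit = 2 ** 8
print(levels_8bit)        # 256

# Total combinations across three channels: 256^3, i.e. "over 16 million".
print(levels_8bit ** 3)   # 16777216

# For comparison, 12-bit-per-channel capture gives 4096 levels per channel.
print(2 ** 12)            # 4096
```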

    RAW vs 4:2:0: When you look at an explanation of color subsampling, you come away thinking "4:2:0 is awful! How can so much color information be tossed out like that?!" The fact is, all this happens at the pixel level, so most laymen won't be able to tell a difference. How much 4:2:0 hinders an image also depends on the kind of image. On some images, it may be impossible to distinguish between 4:2:0 and 4:4:4 without some serious pixel peeping. This would only be a problem if movie audiences had a button on their seats that froze the film and magnified the image by 2x. I must admit, though, that 4:2:0 does suffer a bit for green screening (though be careful not to blame it on that if you are using bad keying techniques).
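    To put numbers on the subsampling point, here is a rough sketch (assuming a 1920x1080 frame; the variable names are mine) of how many chroma samples 4:2:0 keeps versus 4:4:4:

```python
# In 4:4:4 every pixel carries its own pair of chroma samples; in 4:2:0
# each 2x2 block of pixels shares one pair, so each chroma plane is
# half-resolution in both dimensions.
w, h = 1920, 1080
luma = w * h                          # full-resolution luma either way

chroma_444 = 2 * w * h                # two chroma planes at full resolution
chroma_420 = 2 * (w // 2) * (h // 2)  # two chroma planes at half res each way

print(chroma_420 / chroma_444)                    # 0.25 -> a quarter of the chroma samples
print((luma + chroma_420) / (luma + chroma_444))  # 0.5 -> half the total samples
```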

    RAW vs the 175 Mbps hack: In terms of bitrate, the hacked GH2 gives each frame roughly the amount of data a still OOC JPEG gets, which is good enough to get rid of most noticeable artifacts. If the hack can render fine sensor grain, I don't think the compression is hurting the image that much. Again, most laymen (and apparently some pros) wouldn't see these artifacts in the GH2 footage.
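    A quick back-of-the-envelope check on that per-frame claim (a simplification that assumes all of the 175 Mbps is split evenly across 24 intra frames per second):

```python
# 175 megabits per second spread over 24 frames, converted to megabytes.
bitrate_bits_per_sec = 175 * 1_000_000
fps = 24

bits_per_frame = bitrate_bits_per_sec / fps
megabytes_per_frame = bits_per_frame / 8 / 1_000_000
print(round(megabytes_per_frame, 2))  # ~0.91 MB per frame, JPEG territory
```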

    14 (or 18 with HDRx) stops of dynamic range vs 8 stops: This is what the GH2 needs more of, in my opinion, but it took me a while to realize that 8 stops is actually quite good. The Epic has no advantage in this respect if the image doesn't require more than 8 stops, and that's where lighting and NDing windows come into play. In most cases, you can fit pretty much the entire image within the GH2's range. On really sunny, contrasty days, I've been able to underexpose to protect highlights and lift the shadows a decent amount thanks to the hack. Apparently, the Zacuto test had a bright window in the shot, and I suspect the cinematographer NDed it well for the GH2. Also note that just because you capture 18 stops of dynamic range doesn't mean you use it all in the shot. Actually, 18 stops (or even 13 stops) is bound to look like garbage before you grade it. It is, however, a nice get-out-of-jail-free card when you botch the exposure. Just be extra mindful of exposure with the GH2 and you'll get awesome shots. When I shot my feature on the GH1, I didn't need any extra dynamic range on most shots (I've blown out the sky a couple of times, though, doh).
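    For a sense of scale, each stop doubles the light ratio a camera can capture, so the stop counts in the paragraph above translate into scene-contrast ratios like this:

```python
# A stop is a doubling of light, so N stops of dynamic range spans a
# 2^N : 1 ratio between the brightest and darkest values captured.
for stops in (8, 13, 14, 18):
    print(stops, "stops ->", 2 ** stops, ": 1 contrast ratio")
# 8 stops -> 256:1, 14 stops -> 16384:1, 18 stops -> 262144:1
```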

    With all that said, I think the Epic is probably able to match any of the cameras in the test with the right person behind it. I also think the Epic's chance to shine would be in situations involving high dynamic range (like a forest under bright sunlight) and of course, when cinema moves to 4k+. For aspiring filmmakers, I think the hacked GH2 is better than what we deserve.