25mm lens on 2x crop sensor identical to a 50mm lens on FF?
  • Hey,

    I just thought I'd ask you guys that are smarter than me on these things.

    I wondered, as the title suggests, if a 25mm lens on a 2x crop sensor (i.e. Micro 4/3) is identical to a 50mm lens on a full frame sensor.

    I kind of assumed they'd be very similar, with identical FOV, but without quite knowing how or why, I assumed that perspective would appear different for very close and very distant objects with the 25mm lens vs the 50mm lens: something that cropping alone doesn't perfectly imitate.

    The reason I ask is that a friend is trying to set up a 3D camera in a 3D package to be equivalent to my GH2. Forgetting movie-shooting crop differences and assuming a 2x crop, I'd told him to use 50mm lens equivalents. He started to question whether the perspective looked right, and I suggested that he actually use the 25mm lens and use the camera aperture setting to simulate the 2x crop. Apparently doing so produces identical renders, something I didn't expect if there were optical differences between the two scenarios.

    If there is a difference between the two setups, what causes the difference?

    Thanks a lot!!

    Cheers
  • 29 Replies
  • Thanks Vitaliy, I'll have a good read tomorrow. On skimming through, the 2nd post seems helpful: it has a section on crop factor, but all it describes is equivalent focal length as focal length * crop factor.

    I'll read it all in more detail tomorrow, but for some reason I still think that the wider lens will have a different perspective rendering: even though cropping by the crop factor matches the framing of the equivalent lens on a full frame sensor, there will still be differences in how objects are arranged in depth in the scene between the two.

    If so, then the 3D camera settings in this package aren't correctly modelling this aspect of it. Or my gut feeling is just wrong :)
  • Lens does not define perspective. Your position and view direction define all such stuff.
    Lens just crops (see FOV) and also adds small things like aberrations, etc.
  • The fields of view will be identical: they depend only on sensor size and lens focal length. Depth of field will depend on the aperture (not the F-number).
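    For anyone who wants to check the field of view claim numerically, here's a minimal sketch; the sensor widths are nominal assumptions (17.3 mm for Micro 4/3 stills, 36 mm for full frame), not GH2-specific measurements:

        import math

        def horizontal_fov_deg(sensor_width_mm, focal_length_mm):
            """Horizontal angle of view of a rectilinear lens focused at infinity."""
            return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

        print(horizontal_fov_deg(17.3, 25))  # ~38.2 deg: 25 mm on Micro 4/3
        print(horizontal_fov_deg(36.0, 50))  # ~39.6 deg: 50 mm on full frame
        # The small mismatch is because the "2x" crop factor is only nominal
        # (36 / 17.3 is roughly 2.08).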
  • 25mm lens is always 25mm lens :)

    "25" represents focal length. The lens gives 25mm focal length angle regardless of sensor sizes.

    When the Lumix 25mm 1.4 DG m43 lens is used on an m43 sensor, its image circle covers the entire m43 sensor area. When it's used on an FF sensor, the image circle won't cover the entire sensor area, but the projected image circle still has the 25mm focal length "angle". If you move the lens further away from the sensor, the image circle will grow to cover the entire sensor, but it will never come into focus. Both positions have the same focal length angle. Confusing? :) That's also why the NEX suffers from lower corner sharpness: it has a bigger sensor and a shorter flange back distance.

    The crop ratio determines FOV. A 50mm FF lens will always have a 50mm focal length, so it's always a 50mm lens. On a 2x crop system its FOV will match that of a 100mm lens on full frame.
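    As a quick sanity check on that arithmetic (the 2x crop factor here is the nominal stills value, and this is just my own worked example):

        def ff_equivalent_focal(focal_mm, crop_factor):
            """Full-frame focal length with the same field of view."""
            return focal_mm * crop_factor

        print(ff_equivalent_focal(25, 2.0))  # 50.0 -> 25 mm on m43 frames like 50 mm on FF
        print(ff_equivalent_focal(50, 2.0))  # 100.0 -> 50 mm on a 2x crop frames like 100 mm on FF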
  • Notwithstanding all the semantics, to all intents and purposes, yes.
  • In terms of field of view, yes. In terms of depth compression, no.
  • The GH2 in video mode has a FOV equivalent to a 1.8x crop, not 2x. A 50mm on FF will, or rather should, look slightly zoomed in compared with a 25mm on the GH2. The GH2 has a very wide, oversized sensor that allows you to shoot at the extremes of the lens in 16:9 mode.
  • @ptchaw
    are you sure?
    Wouldn't a 14mm image zoomed and cropped to match, say, a 75mm be identical? (regardless of image quality, of course)
  • Interesting discussion. My wife was a studio / OB camera person for many years, and they always used lens angles as their way of looking at shots, because I think it was the most compatible between different cameras; plus there were various bits of your hand that could describe angles, like your smallest fingertip is 1 degree and the span across four fingers is 10 degrees (all with your arm fully extended).

    If you want to save the calculation, and want to find out what the lens angle really is, just put the video into Syntheyes with the camera tracking from side to side, and Syntheyes will work out what the lens angle is from the way objects in the scene move relative to each other. It's pretty accurate to within 1 degree of angle at least.

    Going back to the specific question, I wonder if the behaviour of very close objects with two different lenses but of equal focal length, relates to the distance of the object from the sensor or the distance from the front element, or something else? Actually, digging into my brain here, I think the actual distance is to the "front aperture" which is notionally somewhere in the lens near the front (it's not the same as the diaphragm). So if doing 3D with two identical focal length lenses but different otherwise, you might have difficulties with close objects as it might not be the object-to-sensor distance that's the one that needs to be the same for both lenses.
  • I have seen this discussion before and it's always the same arguments going in circles.

    Isn't there anybody with a 5D MK2 and a GH1/2 willing to set up a simple shoot (some objects with controlled lighting)? Don't mind small imperfections, it's only about the overall look.
    Put a 25mm lens on your GH1/2 with an aperture of F2.0. Put a 50mm lens on your 5D MK2 with an aperture of F2.8. Try to set up the camera at the exact same point and direction and take some photos... oh, and post those pictures here ;-)

    I think that will give a very good answer to this question.
  • @ttancredi Digitally cropping an image, akin to using a cropped sensor, will have a completely different depth compression to an image with the same field of view. It's the reason we're shooting on large sensor cameras and not 1/3"; the aesthetic is completely different. Depth compression is affected by aperture and sensor size.

    It's a rather poor example, but look at the relative size of the people in the background: http://filmschoolonline.com/images/sample_depth_compression.jpg The field of view is identical.
  • If you ask me, I hate this topic the most. :-)
    It is so common around all forums. I think we set a record by going without it.
  • @psyco +1 (if anyone shoots the test, do it with higher apertures as well so that we can have everything in focus)

    @Vitaliy_Kiselev

    I also thought this subject was finished and that I would never read or think about it again, but until I get my hands on a different-sized sensor camera, I'll have to believe what I read.

    @Ptchaw

    If you shoot 14mm and crop the image to match 50mm and compare them (using the same sensor size), the composition will be the same; that is, as far as I know, proven many, many times.

    Now, focal length with different sensor sizes is another story (or not?)

    Of course DOF will be different when comparing 25mm + GH2 and a 5D + 50mm, but I didn't think composition would be different.



  • Seriously? Is this still being discussed? :-)

    All you need to remember is that perspective is 99% determined by your distance from the subject. Focal length and sensor size then give you your desired field of view. Then focal length, focus distance and aperture will give you your DoF.

    Now, if you want to completely melt your face off, add anamorphic adapters and diopters... boom, head shot!
  • Love the fact that when switching quickly between video and picture mode you can actually see the increase in sensor area used on the GH2: i.e. the video frame becomes larger on the sides...
  • @ttancredi This is not the case; the composition will change. It may be a very subtle difference, most noticeable when there is a big difference between foreground/background objects. Changing focal length is not the same as cropping. DOF will also be perceivably shallower.
  • Sorry everyone for getting this discussion started. I'd not seen this discussed before and was, and still am, a bit baffled by it all, so I'm appreciating the reading on it. It seems like the kind of thing where an authoritative summary of how it all works would really help all of us who wonder how and why it works.

    @ptchaw do you have the link to the article that uses that depth compression photo? It'd be nice to see the context around the photo.

    Cheers everyone who's posted so far!!!
  • Matching the images of a Micro Four Thirds camera to a full frame 35-mm camera:

    Perspective is a function of camera placement and direction: put the camera in the same place and point it in the same direction, and you have the same perspective. If all of the objects in the scene are flat and in the focal plane, you can get the same perspective from any distance if your lenses are rectilinear.

    Field of view, or angle of view, is a function of sensor size and lens focal length. To make it simple, use crop factors: adventsam says it's 1.8 for the GH2 in video mode. So 25 mm on the GH2 gives the same field of view as 45 mm on a full frame camera.

    Here's the formula for the size of a circle of confusion: m * A * |S2 - S1| / S2, where:
    m is the magnification, that is, the ratio of the size of an object in the output image to its size in the scene
    A is the aperture diameter
    S2 is the distance to the object whose circle of confusion size we are calculating
    S1 is the distance to the focal plane

    If your camera placements are the same and you're focused at the same distance, S2 and S1 are the same for the two cameras. If you have the same field of view and the output images are the same size, you have the same magnification. A = f/N, where N is the F-number. So a 25-mm lens with the aperture set to f/2.0 has the same aperture diameter as a 45-mm lens with the lens set to f/3.6.

    Depth of field is a function of the sizes of the circles of confusion and your in-focus criterion: when the circles are below a certain size, they look to be in focus and making them any smaller makes no difference. If you have the same output resolution (measured in lines per image height), and the circles of confusion are the same sizes, you'll have the same depths of field. If the circles are the same size and one camera has lower resolution, the lower resolution camera will have greater depth of field. But opening the aperture on the lower resolution camera until the depths of field match won't give you the same image, because then the circles of confusion for the out-of-focus objects in the lower resolution camera's image will be larger.


    So to sum it up: to get the same image, you need the same perspective (camera placement), same field of view (matching lens focal length to the crop factor), same aperture diameter (not F-number), and if you want the depths of field to match exactly, the same resolution.
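    A small sketch of that recipe, assuming the 1.8x video-mode crop factor mentioned earlier (the function name is mine, and the result is only as good as the crop factor you feed in):

        def match_to_crop(ff_focal_mm, ff_f_number, crop_factor):
            """Focal length and F-number on the cropped sensor giving the same
            field of view and the same aperture diameter as the full-frame setup."""
            focal = ff_focal_mm / crop_factor
            aperture_diameter_mm = ff_focal_mm / ff_f_number  # A = f/N
            f_number = focal / aperture_diameter_mm           # keep A the same
            return focal, f_number

        # 45 mm at f/3.6 on full frame vs the GH2 in video mode (crop ~1.8):
        print(match_to_crop(45, 3.6, 1.8))  # -> (25.0, 2.0), i.e. 25 mm at f/2.0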
  • Nice and thorough and clear @balazer thanks!!!
  • In my more youthful years swinging a pole and Fisher boom on a film set (politics galore), the only chance you had was to take a peek at the lens, figure out the FOV, and try not to make a tit of yourself by dangling in the cutoff. Lighting and DOPs have to be persuaded by years of beer to give you a hand ;p Fun though!
  • @soundgh2 Lenses...hmm. But in terms of boom shadows, we all know that lighting causes them!

    By the way, the big Fisher studio booms would be great for small cameras, wouldn't they? Not particularly portable, mind you! Wonder if anyone's tried mounting a camera with a monitor feed on one?
  • A thousand words, many thoughts, theorycrafting till the brain hurts... and still no one can just grab their GH1/2 and 5DmkII and take some simple pictures?!

    Endless pages have been filled with this stuff, and thanks for all the information - but please just take those pictures and back your info up with them. As there are so many good arguments for all the different opinions, I don't believe any of you. Just take some pictures and show me which lens/aperture/... I have to choose on my GH1 to match a 50mm F2.8 on an FF cam.
  • Ok. So I've had a go at trying to get some actual evidence of how this all hangs together with some real photos.

    Rather than try to sort it out between the GH2 and a full frame camera, I thought it'd be simpler to do the following.

    Set up my Nikon D3 (full frame) on a tripod with a 24-70mm zoom lens
    Then, with the tripod locked, take two photos:
    1 at 24mm
    1 at 50mm

    Then digitally crop the 24mm photo by the 2.08x crop factor needed to take 24mm to an effective focal length of 50mm (a rough sketch of this crop step is at the end of this post).

    Having done this, I then lined up the 50mm shot, resized to the same smaller size as the cropped shot.

    Sadly these two images don't line up exactly, I assume due to inconsistencies in the actual focal lengths, as well as the lens extending as it zooms from 24mm to 50mm. Still, I was at least assured that the sensor was in the exact same location and at the same angle, which would have been hard with the different form factors of the GH2 and the D3.

    Here are all the shots in a Flickr album:

    http://www.flickr.com/photos/jimbobuk/sets/72157628030691063/with/6355641337/

    and here's a small version of the overlaid images with the same effective FOV, showing the small discrepancies.

    [Overlay image: DSC_0525 resized + DSC_0526 cropped]

    Have I learnt anything?! I think I can see they're close enough.

    Is that helpful @Psyco ?
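    For anyone who wants to repeat the digital crop step, here's a rough sketch of one way to do it (assuming Pillow; the file names are placeholders, not the actual files from the Flickr set):

        from PIL import Image

        crop_factor = 50 / 24  # ~2.08

        wide = Image.open("24mm_shot.jpg")  # placeholder file names
        tele = Image.open("50mm_shot.jpg")

        # Keep the centre 1/crop_factor of the 24 mm frame so its framing matches 50 mm.
        w, h = wide.size
        cw, ch = int(w / crop_factor), int(h / crop_factor)
        left, top = (w - cw) // 2, (h - ch) // 2
        wide_cropped = wide.crop((left, top, left + cw, top + ch))

        # Scale the 50 mm shot down to the same pixel size for overlaying.
        tele_small = tele.resize(wide_cropped.size)

        wide_cropped.save("24mm_cropped.jpg")
        tele_small.save("50mm_resized.jpg")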
  • I finally found it: a lens's perspective is from its entrance pupil. So to get identical perspectives, you need to match the locations and sizes of the lenses' entrance pupils. In photography we call the entrance pupil the effective aperture, and its diameter is given by f/N. Published lens specs will sometimes say where the lens's entrance pupil is. But you can see it with your eyes: just look into the front of the lens for the aperture. Where it appears to be (even with the magnification of the front elements) is the location of the entrance pupil! And if you can't see the entire circular edge of the smallest aperture stop from some angle, like when the lens is wide open, it means that the entrance pupil is smaller for that angle, which causes vignetting!

    http://toothwalker.org/optics/cop.html
    http://en.wikipedia.org/wiki/Entrance_pupil
    http://www.janrik.net/PanoPostings/NoParallaxPoint/TheoryOfTheNoParallaxPoint.pdf (read this one if you really want to go deep)