In a recent update to its popular image editing app Photoshop, Adobe has implemented a new feature called Depth Blur (currently in beta). It's designed to reduce the depth of field via AI after you've taken a shot, and that raises a burning question: will we still be using fast lenses once this feature works flawlessly (and possibly finds its way into video)?

Much of what makes an image stand out is determined by the lens it was taken with. All the light has to go through that lens. Also important, though perhaps less so in today's world, are the sensor and the image pipeline behind it. That may be a bold statement, but think about it: almost every current photo camera records very decent raw or processed images, and the same is true for video, with flat log profiles available in almost every semi-pro camera. In some ways, the sensor is more of a clinical capture device than a look-defining tool.

Lots of bokeh. Photo by JJ Ying on Unsplash

With lenses, I'd say the opposite is true: the coatings matter, zoom vs. fixed focal length matters, sharpness varies enormously between lenses, and, perhaps most importantly for some, so does the maximum aperture (or T-stop for cine glass). Really fast lenses are usually very expensive, no-compromise, high-end pieces of technology.

So does AI put an end to all this? Just shoot at f/5.6 and make it more cinematic (read: shallow depth of field) in post-processing? Adobe's Depth Blur might...
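Adobe hasn't published how Depth Blur works under the hood, but the general idea behind depth-aware blur is well understood: estimate a per-pixel depth map (typically with a monocular depth network), then blur each pixel more the farther it sits from a chosen focal plane. Here is a minimal sketch of that idea in Python with NumPy and OpenCV; the function name `depth_blur`, its parameters, and the nearest-level blending are illustrative assumptions, not Adobe's actual implementation.

```python
import numpy as np
import cv2

def depth_blur(image: np.ndarray, depth: np.ndarray,
               focal_depth: float, max_sigma: float = 8.0,
               levels: int = 6) -> np.ndarray:
    """Hypothetical depth-aware blur (illustrative, not Adobe's method).

    image:       H x W x 3 uint8 photo
    depth:       H x W float map, normalized to [0, 1]
                 (in practice, from a monocular depth estimator)
    focal_depth: depth value that should remain sharp
    """
    # Blur strength grows with distance from the focal plane.
    sigma = np.abs(depth - focal_depth) * max_sigma           # H x W

    # Precompute a stack of progressively blurred copies of the image.
    blurred = [image.astype(np.float32)]
    for i in range(1, levels):
        s = max_sigma * i / (levels - 1)
        blurred.append(cv2.GaussianBlur(image, (0, 0), s).astype(np.float32))
    stack = np.stack(blurred)                                 # levels x H x W x 3

    # For each pixel, pick the blur level closest to its desired sigma.
    idx = np.clip(np.round(sigma / max_sigma * (levels - 1)),
                  0, levels - 1).astype(int)                  # H x W
    rows, cols = np.indices(depth.shape)
    return stack[idx, rows, cols].astype(np.uint8)

# Example: keep the foreground (small depth values) sharp.
# img = cv2.imread("photo.jpg")               # hypothetical input
# dep = np.load("photo_depth.npy")            # hypothetical depth source
# out = depth_blur(img, dep, focal_depth=0.1)
```

A production implementation would presumably blend smoothly between blur levels and model real lens bokeh (aperture shape, highlight rendering) rather than a plain Gaussian, but the sketch illustrates why the depth map is the hard part: once you have reliable depth, the blur itself is straightforward.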
Published By: CineD - Tuesday, 1 June, 2021