Whether for music videos, features, or documentaries, timelapses are a well-known and widely used tool for capturing change in motion. But with random flickering, objects popping up out of nowhere, and other so-called artifacts, they are anything but flawless – well, at least they were, according to researchers from Aalto University in Finland. Together with NVIDIA, they have developed a new AI technology for massively enhanced timelapses.

There are three key elements to a visually compelling timelapse: a camera, a tripod, and a whole lot of patience. Seems easy at first sight, doesn't it? However, if you have ever captured a dynamic process yourself, you know how immensely tricky it can be. In one of our previous articles, we explained how to set the right time intervals during the shoot. But what if the goal is to observe something over days, months, or even years? The new generative model from NVIDIA could become an irreplaceable assistant. Why and how? Let's take a closer look.

Solving the flickering problem with a new deep-learning model

When it comes to creating timelapses, NVIDIA wanted to kill several birds with one stone. First, reduce the flickering that easily creeps into long timelapses, e.g. multi-year video observations. Second, control changes and clear the most annoying ones out of the timelapse: drastic weather developments, a random person walking through the frame, or a spider web suddenly covering the camera lens – all the unexpected things a filmmaker faces.

A screenshot from the smooth timelapse visualization. Image credit:...
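To get a feel for why flickering is hard, it helps to look at the classic non-AI baseline that NVIDIA's model improves upon: luminance-based deflickering, where each frame is rescaled so its average brightness follows a smoothed brightness curve across the sequence. The sketch below is a minimal, illustrative version of that baseline (the function name `deflicker` and the moving-average window are our own assumptions, not part of NVIDIA's method), written with NumPy for grayscale frames:

```python
import numpy as np

def deflicker(frames, window=5):
    """Naive luminance deflicker (illustrative baseline, not NVIDIA's model):
    scale each frame so its mean brightness follows a moving average of the
    sequence's brightness. `frames`: array of grayscale frames in [0, 1],
    shape (num_frames, height, width)."""
    frames = np.asarray(frames, dtype=np.float64)
    means = frames.mean(axis=(1, 2))                    # per-frame mean brightness
    kernel = np.ones(window) / window
    smoothed = np.convolve(means, kernel, mode="same")  # moving-average target curve
    # Edge frames see fewer neighbors, so renormalize the convolution edges.
    norm = np.convolve(np.ones_like(means), kernel, mode="same")
    smoothed /= norm
    gain = smoothed / np.maximum(means, 1e-8)           # per-frame brightness gain
    return np.clip(frames * gain[:, None, None], 0.0, 1.0)
```

This global gain correction evens out exposure jumps between frames, but it cannot remove a person walking through the shot or a spider web on the lens; those content-level changes are exactly where a generative model goes beyond simple brightness smoothing.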
Published By: CineD - Monday, 27 February, 2023