You probably didn’t miss last month’s announcement of OpenAI’s video generator Sora. It created quite a buzz, stirring both excitement and sorrow, as well as a lot of questions within the filmmaking community. One of the pressing matters that always comes up when talking about generative AI is what data developers use for model training. In a recent interview with The Wall Street Journal, OpenAI’s chief technology officer (CTO) Mira Murati didn’t want (or wasn’t able) to answer this question. She added that she wasn’t sure whether Sora was trained on YouTube videos or not. This raises an important question: what does it mean in terms of ethics and licensing? Let’s take a critical look together!

In case you did miss it: Sora is OpenAI’s text-to-video generator, which is allegedly capable of creating consistent, realistic-looking, and detailed video clips of up to 60 seconds based on simple text descriptions. It hasn’t been released to the public yet, but the published showcases have already sparked a heated discussion about the possible consequences. One assumption is that it might entirely replace stock footage. Another is that video creators will have a hard time getting camera gigs.

While I’m personally skeptical that AI can completely take over creative and cinematography jobs, there is another question that concerns me a lot more. If they used, say, YouTube videos for model training, how on earth would they be legally allowed to roll out Sora for commercial purposes? What would this...
Published by CineD – Monday, 18 March, 2024