Last week, Luma AI launched Ray3, a generative video model positioned as a creative partner that can “reason” through prompts and deliver native HDR video in ACES2065-1 EXR across 10-, 12-, and 16-bit formats. The model is live on Luma’s Dream Machine and inside Adobe’s Firefly app, targeting filmmakers, advertisers, and game developers who need production-grade outputs. Let’s dive into Luma AI Ray3.

Ray3 moves beyond one-shot prompt interpretation by evaluating its own outputs and refining results during generation. Luma frames this as a shift from slot-machine-style randomness to instruction following and temporal coherence that keeps characters, motion, and physics consistent over time.

What’s new in Luma AI Ray3

Ray3 introduces a multimodal reasoning system designed for creative tasks. The model can plan scenes, follow annotated directions, and judge whether its results match user intent, which helps maintain narrative continuity and physical plausibility across shots.

At the format level, Ray3 is the first model to generate video as true HDR in professional ACES2065-1 EXR, supporting the 10-, 12-, and 16-bit pipelines commonly used in high-end finishing. Luma adds native 1080p generation and a neural upscaler to 4K to meet broadcast and studio requirements.

Native HDR and pipeline readiness

Because Ray3 writes directly to ACES2065-1 EXR at up to 16-bit precision, colorists gain access to deeper highlight and shadow detail, plus robust round-tripping through grading and VFX. Luma also notes that SDR material, whether camera-captured or AI-generated, can be converted into generative HDR for expanded grading latitude.

Draft mode and creative controls...
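The SDR-to-HDR and bit-depth claims above can be illustrated numerically. Luma has not published its conversion method, so the following is only a minimal NumPy sketch of why deeper pipelines give more grading latitude: decode a display-referred SDR signal to linear light (assuming the standard sRGB transfer function) and compare rounding error at 10-bit versus 16-bit precision. `srgb_to_linear` and `quantize` are illustrative helpers, not part of any Luma or ACES API.

```python
import numpy as np

def srgb_to_linear(srgb):
    """Decode sRGB-encoded values (0..1) to linear light (IEC 61966-2-1)."""
    srgb = np.asarray(srgb, dtype=np.float64)
    return np.where(srgb <= 0.04045,
                    srgb / 12.92,
                    ((srgb + 0.055) / 1.055) ** 2.4)

def quantize(x, bits):
    """Round linear values to the nearest code at the given integer bit depth."""
    levels = 2 ** bits - 1
    return np.round(np.asarray(x, dtype=np.float64) * levels) / levels

# A few SDR sample values decoded to linear light: mid-gray lands near 0.21,
# and a deeper bit depth preserves it with far less rounding error.
frame = np.array([0.0, 0.5, 1.0])
linear = srgb_to_linear(frame)
err_10 = np.abs(quantize(linear, 10) - linear).max()
err_16 = np.abs(quantize(linear, 16) - linear).max()
```

On this sample, the 16-bit quantization error sits well below the 10-bit error, which is the kind of headroom colorists rely on when pushing shadows and highlights in a grade.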
Published By: CineD - Today