Runway claims its latest text-to-video model produces even more precise visuals than its predecessor. In a blog post on Monday, Runway said its Gen-4.5 model can produce "cinematic and highly realistic" results, which could make it even harder to distinguish what is real from what is AI-generated.
"Gen-4.5 achieves unprecedented physical accuracy and visual precision," Runway said in its announcement. The company adds that the new model follows prompts more faithfully, letting users create detailed scenes without sacrificing video quality. Runway claims that AI-generated objects "move with realistic weight, momentum and force," while liquids "flow with appropriate dynamics."
According to Runway, Gen-4.5 will roll out to all users gradually and offers the same speed and performance as its predecessor. The model still has limitations, however: it can struggle with object persistence and causal reasoning, meaning an effect can appear before its cause, such as a door opening before someone turns the doorknob.
Like Runway, OpenAI is stepping up its efforts to make AI-generated video look more realistic. OpenAI highlighted physics improvements with the September release of its Sora 2 text-to-video model, with Sora lead Bill Peebles saying, "You can accurately perform backflips on a paddleboard in a body of water, and all the fluid dynamics and buoyancy are accurately modeled."
Runway also claims that Gen-4.5 handles different visual styles better, producing more consistent photorealistic, stylized, and cinematic output. The startup says photorealistic video created with Gen-4.5 can be "indistinguishable from real-world footage with realistic detail and accuracy."
