On Monday, I watched OpenAI CEO Sam Altman drink from a giant box of mango juice roughly half his size. The catch: it wasn't really Altman. The juice wasn't real. He didn't really speak. It was an AI-generated deepfake.

The most disturbing part: I couldn't tell whether it was real.
OpenAI announced Sora 2, an updated AI video and audio generation system, on Tuesday, and during a briefing with reporters on Monday, employees called it a potential "ChatGPT moment for video generation." Like ChatGPT, Sora 2 is being released as a way to play with a new AI tool, and it comes with a social app that lets people create realistic videos of real people saying real things. You could say it's essentially an app full of deepfakes. Deliberately.
OpenAI believes that Sora, first announced in February 2024 and released in December, has finally reached a point of relative reliability. Bill Peebles, OpenAI's head of Sora, compared the earliest iteration of the video generation system to a "slot machine," where "you would put in a prompt and cross your fingers that what you got back resembled what you asked for." The new model, he said, "is much better in terms of how faithfully it follows users' instructions."
During the briefing, the team behind Sora 2 said they had been working on it for at least 20 months. The biggest step forward is that the system can now generate audio synchronized with the video: not just background soundscapes and sound effects, but also dialogue, and it works across a number of languages. Sora 2 is available on Sora.com, Sora 2 Pro is available to ChatGPT Pro users, and developers will get access to the API "soon."
The social app is also called "Sora" and is now available on iOS, invite-only, for users in the US and Canada. More countries will follow, and each user will receive four additional invitations to share with friends.
In its release, OpenAI said Sora 2 "brings us closer to useful world simulators." OpenAI employees told journalists the new system is also smarter about physics. Peebles said, "You can accurately do backflips on a paddleboard, and all the fluid dynamics of the board and the water are accurately modeled. It's really a step-function change in the physics intelligence this model has."
But it could also be a nightmare when it comes to deepfakes, which are already a common problem.
The accompanying Sora social media app looks a lot like TikTok, with a "For You" page and a vertical-scroll interface. But it includes a feature called "cameos," in which people can give the app permission to generate videos using their likeness. In a video that must be recorded in the iOS app, you're asked to move your head in different directions and recite a sequence of specific numbers. Once submitted, your likeness can be remixed (including in interactions with other people's likenesses) by describing the desired video and audio in a text prompt.
OpenAI employees told journalists during Monday's briefing that Sora has replaced text messages, emoji, and voice notes as one of the main ways they communicate with each other. At the briefing, they showed off fake ads, fake conversations between two people, fake news clips, and more, all created with Sora 2 and consumed by scrolling through the social media app.
Some clips were generated live, and they were terrifyingly realistic: no more six-fingered hands (at least none that I saw). Unless a video featured a fantastical subject, like the giant juice box example, an untrained eye might not be able to tell these videos were AI-generated, and if you could tell, it would probably just be based on a feeling or a vibe, something seeming off.
The Sora app lets you choose who can create a cameo with your likeness: only yourself, people you approve, mutuals, or "everyone." OpenAI employees said users are "co-owners" of these cameos and can revoke another person's ability to create them, or delete a video containing their AI-generated likeness, at any time. It's also possible to block someone in the app. Team members also said users can see drafts of cameos that others make of them before they're published, and that in the future they may change the settings so the person featured in a cameo must approve it before publication, but that's not the case yet.
In its release, OpenAI also pointed to newly launched parental controls for its products, writing that the options include turning on a "non-personalized feed, choosing whether teens can send and receive direct messages, and the option to turn off the continuous scrolling feed."
Like TikTok, the Sora app seems built to generate social media trends, with the ability to "remix" other videos. It currently generates 10-second clips, but Pro users will soon be able to generate up to 15 seconds on the web, with the same capability coming to the app later. Employees said it's possible to create longer videos, but because that's a computationally intensive task, they're still figuring out how to handle it.
For everyone, the biggest task with Sora 2 and the Sora app may be figuring out how to decide what's real. OpenAI wrote in its release that "every video made with Sora carries multiple signals that it was generated by AI," such as metadata, a moving watermark on videos downloaded from Sora.com or the Sora app, and unspecified "internal detection tools that help assess whether a specific video or audio was created with our products." (OpenAI said in the release that in some web flows for ChatGPT Pro users, "watermarks may be omitted, except when real people are featured.") Screen recording is also supposed to be impossible within the app. But workarounds seem almost inevitable if recent history is any guide, as does misinformation with the potential to spread like wildfire.
As for deepfakes of government officials, celebrities, and other public figures? "Public figures cannot be generated in Sora unless they have uploaded a cameo themselves and consented to its use," OpenAI wrote in its release. "The same applies to everyone: if you haven't uploaded a cameo, your likeness can't be used." OpenAI employees also said during the briefing that it's "impossible to generate" X-rated or "extreme" content via the platform, and that the company doesn't allow AI generation of public figures from free-text prompts. They also said the company moderates video outputs for potential policy violations and copyright issues.
But people have gotten around such guardrails before, again and again. Last year, a Microsoft engineer warned that the company's AI image generator ignored copyright and produced sexual, violent images from simple prompts. xAI's Grok recently generated nude deepfake videos of Taylor Swift with minimal prompting. And even in OpenAI's case, while employees told journalists the company is being restrictive about public figures for "this launch," they didn't seem to rule out the ability to create such videos in the future.
On Monday, The Wall Street Journal reported that OpenAI's Sora will generate copyrighted material unless rights holders "opt out" of having their work appear on the platform. When The Verge asked about the matter during Monday's briefing with OpenAI, employees seemed to dodge the questions, pointing to existing image generation policies and saying Sora would be an extension of them. They also said that some opt-outs from the image generation copyright policy would carry over, and that the company would build more controls.
