OpenAI's live GPT announcement event took place at 10:00 a.m. Pacific Time on Monday, but you can still catch up.
The company described the event as “a chance to showcase some updates to ChatGPT and GPT-4.” Meanwhile, CEO Sam Altman promoted the event with his message: “not gpt-5, not a search engine, but we’ve been hard at work on some new things we think people will love! feels like magic to me.”
As it turned out, the announcement was a new model called GPT-4o – the “o” stands for “omni” – which offers improved responsiveness to voice input as well as stronger vision capabilities.
“GPT-4o reasons across voice, text and vision,” said Mira Murati, OpenAI’s CTO, during a keynote presentation at OpenAI’s offices in San Francisco. “This is incredibly important as we look to the future of interaction between ourselves and machines.”
OpenAI also followed up Monday’s event with a number of additional demonstrations of GPT-4o’s capabilities on its YouTube channel, ranging from improving visual accessibility through Be My Eyes to two instances of the model harmonizing with each other, as well as real-time translation.
You can watch the replay on the OpenAI website or here:
