Introducing Gemini 1.5
Demis Hassabis, CEO of Google DeepMind, on behalf of the Gemini team
It’s an exciting time for AI. New advances in the field have the potential to make AI more helpful for billions of people over the coming years. Since introducing Gemini 1.0, we’ve been testing, refining and improving its capabilities.
Today we announce our next-generation model: Gemini 1.5.
Gemini 1.5 delivers dramatically improved performance. It represents a step change in our approach, building on research and engineering innovations across nearly every part of our foundation model development and infrastructure. This includes making Gemini 1.5’s training and serving more efficient with a new Mixture-of-Experts (MoE) architecture.
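The efficiency gain from an MoE architecture comes from conditional computation: a learned router sends each input token to only a small subset of expert sub-networks, so most of the model’s parameters stay idle on any given token. A minimal sketch of top-k expert routing, with toy sizes and NumPy weights chosen purely for illustration (not Gemini’s actual implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Mixture-of-Experts layer: a router picks the top-k experts per token,
# and only those experts run. All sizes here are illustrative.
D, H, NUM_EXPERTS, TOP_K = 16, 32, 8, 2

# Each "expert" is a small two-layer MLP with its own weights.
experts = [
    (rng.normal(size=(D, H)) * 0.1, rng.normal(size=(H, D)) * 0.1)
    for _ in range(NUM_EXPERTS)
]
router_w = rng.normal(size=(D, NUM_EXPERTS)) * 0.1

def moe_layer(x):
    """Route each token to its top-k experts; mix their outputs by router weight."""
    logits = x @ router_w                        # (tokens, experts)
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        top = np.argsort(logits[t])[-TOP_K:]     # indices of the k highest-scoring experts
        gate = np.exp(logits[t][top])
        gate /= gate.sum()                       # softmax over the chosen experts only
        for g, e in zip(gate, top):
            w1, w2 = experts[e]
            out[t] += g * (np.maximum(x[t] @ w1, 0.0) @ w2)  # ReLU MLP expert
    return out

tokens = rng.normal(size=(4, D))
y = moe_layer(tokens)
print(y.shape)  # (4, 16) — each token only evaluated 2 of the 8 experts
```

In a production system the routing is batched and runs across accelerators, but the core idea is the same: compute cost per token scales with the k active experts, not with the total parameter count.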
The first Gemini 1.5 model we’re releasing for early testing is Gemini 1.5 Pro. It’s a mid-size multimodal model, optimized for scaling across a wide range of tasks, and it performs at a similar level to 1.0 Ultra, our largest model to date. It also introduces a breakthrough experimental feature in long-context understanding.
Gemini 1.5 Pro comes with a standard 128,000-token context window. But starting today, a limited group of developers and enterprise customers can try it with a context window of up to 1 million tokens via AI Studio and Vertex AI in private preview.
As we roll out the full 1 million token context window, we’re actively working on optimizations to improve latency, reduce computational requirements and enhance the user experience. We can’t wait for people to try this breakthrough capability, and we share more details on future availability below.
Continued advances in our next-generation models will open up new possibilities for people, developers and enterprises to create, discover and build using AI.