Responsibility by design
Gemma was designed with our AI Principles at the forefront. To help make Gemma pre-trained models safe and reliable, we used automated techniques to filter out certain personal information and other sensitive data from training sets. Additionally, we used extensive fine-tuning and reinforcement learning from human feedback (RLHF) to align our instruction-tuned models with responsible behaviors. To understand and reduce the risk profile of Gemma models, we conducted robust evaluations, including manual red-teaming, automated adversarial testing, and assessments of model capabilities for dangerous activities. These evaluations are outlined in our Model Card.
We are also releasing a new Responsible Generative AI Toolkit together with Gemma to help developers and researchers prioritize building safe and responsible AI applications. The toolkit includes:
- Safety classification: We provide a novel methodology for building robust safety classifiers with a minimal number of examples.
- Debugging: A model debugging tool helps you investigate Gemma’s behavior and address potential issues.
- Guidance: You can access best practices for model builders, based on Google’s experience in developing and deploying large language models.
Optimized for frameworks, tools and hardware
You can fine-tune Gemma models on your own data to adapt them to specific application needs, such as summarization or retrieval-augmented generation (RAG). Gemma supports a wide variety of tools and systems:
- Multi-framework tools: Bring your favorite framework, with reference implementations for inference and fine-tuning across multi-framework Keras 3.0, native PyTorch, JAX, and Hugging Face Transformers.
- Cross-device compatibility: Gemma models run across popular device types, including laptops, desktops, IoT, mobile devices and the cloud, enabling broadly accessible AI capabilities.
- Cutting-edge hardware platforms: We have partnered with NVIDIA to optimize Gemma for NVIDIA GPUs, from the data center to the cloud to local RTX AI PCs, delivering industry-leading performance and integration with cutting-edge technology.
- Optimized for Google Cloud: Vertex AI provides a broad set of MLOps tools with a range of tuning options and one-click deployment using built-in inference optimizations. Advanced customization is available with fully managed Vertex AI tools or self-managed GKE, including deployment on cost-effective infrastructure spanning GPUs, TPUs, and CPUs from any platform.
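To make the RAG use case mentioned above concrete, here is a minimal, self-contained sketch of the retrieval step: rank a small document store against a query with bag-of-words cosine similarity, then assemble a grounded prompt. The helper names (`retrieve`, `build_prompt`) and the tiny document store are illustrative only; the actual call to a Gemma model (e.g. via Keras 3.0 or Hugging Face Transformers) is deliberately omitted.

```python
import math
import re
from collections import Counter


def tokenize(text: str) -> Counter:
    """Lowercase bag-of-words vector, ignoring punctuation."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))


def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(
        sum(v * v for v in b.values())
    )
    return dot / norm if norm else 0.0


def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Return the k documents most similar to the query."""
    q = tokenize(query)
    ranked = sorted(docs, key=lambda d: cosine_similarity(q, tokenize(d)),
                    reverse=True)
    return ranked[:k]


def build_prompt(query: str, docs: list[str]) -> str:
    """Assemble a grounded prompt from the retrieved context.

    In a real application, this prompt would be passed to a Gemma model;
    that generation call is omitted here.
    """
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"


docs = [
    "Gemma models run on laptops, desktops, IoT, mobile devices and the cloud.",
    "Vertex AI provides one-click deployment and built-in inference optimizations.",
]
print(build_prompt("Which devices can run Gemma models?", docs))
```

Production RAG systems typically replace the bag-of-words scoring with dense embeddings and a vector index, but the retrieve-then-prompt shape stays the same.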
Free research and development credits
Gemma is built for the open community of developers and researchers powering AI innovation. You can start working with Gemma today using free access on Kaggle, a free tier for Colab notebooks, and $300 in credits for first-time Google Cloud users. Researchers can also apply for Google Cloud credits of up to $500,000 in total to accelerate their projects.
Getting started
You can learn more about Gemma and access quickstart guides at ai.google.dev/gemma.
As we continue to grow the Gemma family of models, we look forward to introducing new variants for a diverse range of applications. Stay tuned for events and opportunities in the coming weeks to connect, learn, and build with Gemma.
We can’t wait to see what you create!