The recent controversy around Google’s Gemma model has once again highlighted the risks that experimental models pose for developers and the fleeting nature of model availability.
Google pulled its lightweight Gemma 3 model from AI Studio after Sen. Marsha Blackburn (R-Tenn.) accused the model of fabricating lies about her. Blackburn said the model invented news articles about her that went beyond “harmless hallucinations” and amounted to defamation.
In response, Google said in a post on X on October 31 that it would remove Gemma from AI Studio to “prevent confusion.” Gemma remains available via API.
It was also available through AI Studio, which, as the company described it, is “a developer tool (in fact, to use it you need to attest you’re a developer). We’ve now seen reports of non-developers trying to use Gemma in AI Studio and ask it factual questions. We never intended this to be a consumer tool or model, or to be used this way. To prevent this confusion, access to Gemma is no longer available on AI Studio.”
To be clear, Google has every right to remove its model from its own platform, especially once people surfaced hallucinations and falsehoods that could spread. But the episode highlights the danger of relying primarily on experimental models, and why enterprise developers must safeguard their projects before AI models are retired or pulled. It is also a reminder that technology companies like Google continue to grapple with political controversies that can shape their deployments.
VentureBeat contacted Google for additional information and was pointed to its Oct. 31 posts. We also reached out to Sen. Blackburn’s office, which reiterated her position that AI companies should “shut down [models] until you can control it.”
Experiments for developers
The Gemma model family includes, among others, a 270M-parameter version best suited for small, fast tasks that can run on devices such as smartphones and laptops. Google said Gemma models were “built specifically for the developer and research community. They are not intended to provide factual assistance or to be used by consumers.”
However, non-developers could still access Gemma because it was hosted on the web-based AI Studio platform, a more beginner-friendly space for experimenting with Google’s AI models than Vertex AI. So even if Google never intended Gemma and AI Studio to reach, say, congressional staffers, situations like this could still arise.
The episode also shows that even as models improve, they can still produce inaccurate and potentially harmful information. Enterprises must continually weigh the benefits of models like Gemma against their potential for inaccuracy.
Project continuity
Another problem is the degree of control AI companies retain over their models. The adage that nothing on the internet is truly owned still holds: if you don’t have a physical or local copy of a piece of software, you can lose access the moment the company that owns it decides to take it away. Google did not clarify to VentureBeat whether existing AI Studio projects built on Gemma are preserved.
Similarly, OpenAI users were dismayed when the company announced it would remove popular older models from ChatGPT. Even after OpenAI reversed course and restored GPT-4o to ChatGPT, users continued to press CEO Sam Altman with questions about how long models will be maintained and supported.
AI companies can and should remove their models if they cause harm. AI models, however mature, remain works in progress, constantly being developed and improved. But precisely because the models are experimental, they can easily become points of leverage for tech companies and lawmakers alike. Enterprise developers must ensure their work is backed up and portable before models are pulled from platforms.
