Google today launched AI Test Kitchen, an Android app that lets users try out experimental AI-powered systems from the company’s labs before they go into production. Starting today, interested users can complete a registration form as AI Test Kitchen gradually rolls out to small groups in the US.
As announced at the Google I/O developer conference earlier this year, AI Test Kitchen will feature rotating demos centered around cutting-edge AI technologies, all from within Google. The company emphasizes that these are not finished products; rather, they are intended to give a taste of the tech giant’s innovations while giving Google an opportunity to study how they are used.
The first set of demos in AI Test Kitchen explores the capabilities of the latest version of LaMDA (Language Model for Dialogue Applications), Google’s language model that queries the web to answer questions in a human-like manner. For example, you can name a place and LaMDA will suggest paths to explore, or share a goal and have LaMDA break it down into a list of subtasks.
Google says it has added “multiple layers” of protection to AI Test Kitchen to minimize risks associated with systems like LaMDA, such as biased and toxic outputs. As Meta’s BlenderBot 3.0 recently illustrated, even the most advanced chatbots today can quickly go off the rails, delving into conspiracy theories and offensive content when given certain prompts.
Google says AI Test Kitchen’s systems will attempt to automatically detect and filter out objectionable words or phrases that may be sexually explicit, hateful or offensive, violent or illegal, or that could reveal personal information. However, the company warns that offensive text may occasionally slip through.
“As AI technologies continue to evolve, they have the potential to unlock new experiences that support more natural human-computer interactions,” Google product manager Tris Warkentin and director of product management Josh Woodward wrote in a blog post. “We are at a point where external feedback is the next most helpful step towards improving LaMDA. If you rate each of LaMDA’s responses as nice, offensive, off-topic or misleading, we will use this data — which is not associated with your Google account — to improve and develop our future products.”
AI Test Kitchen is part of a broader, recent trend among tech giants of piloting AI technologies before releasing them into the wild. No doubt informed by snafus like Microsoft’s toxicity-spewing chatbot Tay, Google, Meta, OpenAI and others are increasingly choosing to test AI systems with small groups to make sure they behave as expected, and to improve their behavior if necessary.
For example, OpenAI released its language generation system, GPT-3, in closed beta several years ago before making it widely available. GitHub initially restricted access to Copilot, a code generation system developed in partnership with OpenAI, to select developers before making it generally available.
This approach wasn’t necessarily born out of the goodness of anyone’s heart; today’s leading technology players are well aware of the bad press that misbehaving AI can attract. By opening fresh AI systems to outside groups and attaching broad disclaimers, the strategy lets companies tout the systems’ capabilities while mitigating their more problematic elements. Time will tell whether this will be enough to avoid controversy (in the run-up to AI Test Kitchen’s launch, LaMDA made headlines for all the wrong reasons), but influential parts of Silicon Valley seem confident that it will be.