Demis Hassabis, CEO of Google DeepMind, says that reaching artificial general intelligence, or AGI (a nebulous term usually used to describe machines with human-level cleverness), will mean refining some of the emerging abilities found in Google's flagship Gemini models.
Google announced a slew of AI updates and new products during its annual I/O event in Mountain View, California. The search giant revealed upgraded versions of Gemini Flash and Gemini Pro, Google's fastest and most capable models, respectively. Hassabis said that Gemini Pro outperforms other models on LMArena, a widely used benchmark for measuring the abilities of AI models.
Hassabis showed off several experimental AI offerings that reflect a vision of artificial intelligence that goes far beyond the chat window. "The way we've ended up with today's chatbots is, I think, a transitional period," Hassabis said ahead of today's event.
Hassabis says that Gemini's emerging abilities in reasoning, agency, and world modeling could enable much more capable and proactive personal assistants, truly useful humanoid robots, and eventually AI that is as astute as any person.
At I/O, Google revealed Deep Think, a more advanced kind of simulated reasoning for the Pro model. The latest AI models can break problems down and work through them in a way that more closely resembles human reasoning than the instinctive output of standard large language models. Deep Think uses additional compute time and several undisclosed innovations to improve on this trick, says Tulsee Doshi, product lead for the Gemini models.
Google today also introduced new products that build on Gemini's ability to reason and take action. These include Mariner, an agent for the Chrome browser that can go off and do chores like shopping when given a command. Mariner will be offered as a "research preview" through a new subscription plan called Google AI Ultra, which costs a hefty $249.99 per month.
Google also showed off a more capable version of Astra, the company's experimental assistant, which can see and hear the world through a smartphone or a pair of smart glasses.
In addition to talking about the world around it, Astra can now operate a smartphone when needed, for example opening apps or searching the web to find useful information. Google showed a scenario in which a user had Astra help track down the parts needed to repair a bicycle.
Doshi adds that Gemini is being trained to better anticipate a user's needs, starting with firing off a web search when that might prove useful. Future assistants will need to be proactive without being annoying, both Doshi and Hassabis say.
Astra's abilities depend on Gemini modeling the physical world in order to understand how it works, something Hassabis says is crucial for biological intelligence. He says AI will need to keep improving its reasoning, agency, and inventiveness. "There is no possibility."
Long before AGI arrives, AI promises to change the way people search the web, which could profoundly affect Google's core business.
The company announced new efforts to adapt search to the AI era at I/O (see WIRED's I/O liveblog for everything announced today). Google will roll out an AI-centric version of search called AI Mode to everyone in the US, and will introduce an AI shopping tool that lets users upload a photo to see how an item of clothing would look on them. The company will also make AI Overviews, a service that summarizes results for Google search users, available in more countries and languages.
Shifting Timelines
Some AI researchers and pundits say that AGI may be only a few years away, though even here much depends on how you define the term. Hassabis says it may take five to 10 years for machines to master everything a human can do. "That is still quite soon in the grand scheme of things," Hassabis says.
Hassabis says that reasoning, agency, and world modeling should not only enable assistants like Astra but also give humanoid robots the brains they need to operate reliably in the messy real world.
