
The latest Google DeepMind research at ICML 2023


Exploring the safety, adaptability, and performance of artificial intelligence in the real world

Next week marks the start of the 40th International Conference on Machine Learning (ICML 2023), taking place from July 23 to 29 in Honolulu, Hawaii.

ICML brings together the artificial intelligence (AI) community to share and discuss new ideas, tools, and datasets to advance the field. From computer vision to robotics, researchers from around the world will present their latest work.

Shakir Mohamed, our Director of Science, Technology and Society, will give a talk on machine learning with a social purpose, covering healthcare and climate challenges, taking a sociotechnical view of the field, and strengthening global communities.

We are proud to support the conference as a Platinum Sponsor and to continue our long-term partnerships with LatinX in AI, Queer in AI, and Women in Machine Learning.

We will also present live demos at the conference, including AlphaFold, our advances in fusion science, and new models such as PaLM-E for robotics and Phenaki for generating video from text.

Google DeepMind researchers are presenting more than 80 new papers at ICML this year. Since many papers were submitted before Google Brain and DeepMind joined forces, papers originally submitted under a Google Brain affiliation are covered on the Google Research Blog, while this blog covers papers submitted under a DeepMind affiliation.

Artificial intelligence in a (simulated) world

The success of AI that can read, write, and create rests on foundation models – AI systems trained on massive datasets that can learn to perform many tasks. Our latest research explores how we can translate these efforts into the real world, laying the groundwork for more generally capable, embodied AI agents that better understand the dynamics of the world and opening up new possibilities for more useful AI tools.

In an oral presentation, we introduce AdA, an AI agent that can adapt to solve new problems in a simulated environment, much as humans do. In just a few minutes, AdA can take on challenging tasks: combining objects in novel ways, navigating unseen terrain, and cooperating with other players.

Similarly, we show how vision-language models can be used to help train embodied agents – for example, by describing to a robot what it is doing.

The future of reinforcement learning

To develop responsible and trustworthy AI, we need to understand the goals that underlie these systems. In reinforcement learning, one way a goal can be defined is through reward.

In an oral presentation, we aim to settle the reward hypothesis, first put forward by Richard Sutton, which states that all goals can be thought of as maximizing expected cumulative reward. We explain the precise conditions under which the hypothesis holds, and clarify which kinds of goals can – and cannot – be captured by reward in the general reinforcement learning setting.
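In standard reinforcement-learning notation, the hypothesis holds that any goal can be expressed as maximizing an expected cumulative (possibly discounted) return. A minimal sketch of that objective, under the usual Markov decision process assumptions:

```latex
% Expected discounted cumulative reward of a policy \pi,
% with reward R(s_t, a_t) and discount factor \gamma
J(\pi) = \mathbb{E}_{\pi}\!\left[ \sum_{t=0}^{\infty} \gamma^{t}\, R(s_t, a_t) \right],
\qquad 0 \le \gamma \le 1
```

The question the paper addresses is which goals admit an expression of this form in the general setting, and which do not.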

When deployed, AI systems must be robust enough to meet the demands of the real world. We consider how to better train reinforcement learning algorithms under constraints, since AI tools often need to operate within limits for safety and efficiency.
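One common way to train under constraints – a minimal illustrative sketch, not necessarily the paper's exact method – is a Lagrangian (primal-dual) formulation: the policy maximizes reward minus a penalty, while a multiplier is adjusted to keep an expected cost below a budget.

```python
# Lagrangian sketch for constrained RL (illustrative assumption):
# maximize E[reward] subject to E[cost] <= budget, via
#   L(pi, lam) = E[reward] - lam * (E[cost] - budget)

def dual_update(lam: float, avg_cost: float, budget: float, lr: float = 0.1) -> float:
    """Gradient ascent on the dual variable: lambda grows while the
    constraint is violated and shrinks (but never below 0) once satisfied."""
    return max(0.0, lam + lr * (avg_cost - budget))

# Toy usage: the measured cost exceeds the budget, so lambda rises,
# penalizing costly behavior more strongly in the next policy update.
lam = 0.0
for _ in range(5):
    lam = dual_update(lam, avg_cost=1.5, budget=1.0)
```

In practice the policy and the multiplier are updated in alternation; the multiplier effectively tunes how hard the safety constraint is enforced.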

In a study that received a 2023 ICML Outstanding Paper Award, we explore how to train models for sophisticated long-term strategy under uncertainty in imperfect-information games. We show how models can learn to play a two-player game to win, even without knowing the other player's position and possible moves.

Challenges at the frontier of artificial intelligence

People can readily learn, adapt, and make sense of the world around them. Developing advanced AI systems that generalize in human-like ways will help create AI tools we can use in our everyday lives – and help us face new challenges.

One way AI can adapt is by quickly changing its predictions in response to new information. In an oral presentation, we look at plasticity in neural networks, how it can be lost during training, and ways to prevent that loss.
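One widely used symptom of lost plasticity – an illustrative proxy, not necessarily the paper's exact metric – is the fraction of ReLU units that have gone dormant: a unit that never activates passes no gradient and can no longer adapt to new data.

```python
import numpy as np

def dormant_fraction(activations: np.ndarray, eps: float = 1e-6) -> float:
    """Fraction of ReLU units that are (near-)inactive across a batch.

    activations: array of shape (batch, units) holding post-ReLU values.
    A unit whose mean activation is ~0 contributes no gradient signal,
    a common symptom of lost plasticity in continually trained networks.
    """
    mean_act = activations.mean(axis=0)
    return float((mean_act < eps).mean())

# Toy usage: two of the four units are completely dead.
acts = np.array([[0.0, 1.2, 0.0, 0.3],
                 [0.0, 0.8, 0.0, 0.0]])
frac = dormant_fraction(acts)  # 0.5
```

Tracking a statistic like this over training makes plasticity loss visible, which is the first step toward interventions that prevent it.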

We also present research that may help explain the kind of in-context learning that emerges in large language models, by studying neural networks meta-trained on data sources whose statistics change over time, as in natural-language prediction.

In an oral presentation, we introduce a new family of recurrent neural networks (RNNs) that perform better on long-range reasoning tasks, unlocking the promise of these models for the future.

Finally, in ‘quantile credit assignment’, we propose an approach to disentangling luck from skill. By establishing a clearer relationship between actions, outcomes, and external factors, AI can better understand complex real-world environments.
