DeepMind’s Latest Research at ICML 2022

Paving the way for generalized systems with more efficient and effective AI

The thirty-ninth International Conference on Machine Learning (ICML 2022) takes place from July 17–23, 2022 at the Baltimore Convention Center in Maryland, USA, as a hybrid event.

Researchers working in artificial intelligence, data science, computer vision, computational biology, speech recognition, and many other fields will present and publish their cutting-edge work in machine learning.

In addition to sponsoring the conference and supporting workshops and social events hosted by our long-term partners LatinX in AI, Black in AI, Queer in AI, and Women in Machine Learning, our research teams are presenting 30 papers, including 17 external collaborations. Here is a brief introduction to our upcoming oral and spotlight presentations:

Efficient reinforcement learning

Making reinforcement learning (RL) algorithms more efficient is key to building generalized AI systems. This includes helping to improve their accuracy and speed, improving transfer and zero-shot learning, and reducing computational costs.

In one of our selected oral presentations, we show a new way of applying generalized policy improvement (GPI) over compositions of policies, making it even more effective at increasing agent performance. Another oral presentation proposes a new, well-grounded, and scalable way to explore efficiently without the need for exploration bonuses. In parallel, we propose a method that augments an RL agent with a memory-based retrieval process, reducing the agent's dependence on its model capacity and enabling fast and flexible reuse of past experience.
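The core GPI idea mentioned above can be illustrated in a few lines: given the Q-functions of several base policies, the GPI policy acts greedily with respect to their pointwise maximum, and is guaranteed to perform at least as well as each base policy. This is a minimal tabular sketch of the textbook GPI rule, not the method from the paper; the state, actions, and Q-tables are invented for illustration.

```python
def gpi_action(q_tables, state, actions):
    """Generalized policy improvement (GPI), tabular sketch:
    pick the action maximizing max_i Q_i(state, action) over the
    Q-tables of several base policies."""
    return max(actions,
               key=lambda a: max(q[(state, a)] for q in q_tables))

# Two toy base policies with Q-values for one state and two actions.
q1 = {("s0", 0): 1.0, ("s0", 1): 0.2}  # policy 1 prefers action 0
q2 = {("s0", 0): 0.5, ("s0", 1): 0.9}  # policy 2 prefers action 1

best = gpi_action([q1, q2], "s0", [0, 1])
```

Here the GPI policy chooses action 0, because the best value any base policy attains for action 0 (1.0) exceeds the best attainable for action 1 (0.9).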

Progress in language models

Language is a fundamental part of being human. It gives people the ability to communicate thoughts and concepts, create memories, and build mutual understanding. Studying aspects of language is key to understanding how intelligence works, both in AI systems and in humans.

Our oral presentation on unified scaling laws and our paper on retrieval both explore how we could build larger language models more efficiently. Looking at ways to build more capable language models, we introduce StreamingQA, a new dataset and benchmark that evaluates how models adapt to new knowledge and forget it over time, while our paper on narrative generation shows that current pretrained language models still struggle to produce longer texts because of short-term memory limitations.

Algorithmic reasoning

Neural algorithmic reasoning is the art of building neural networks that can perform algorithmic computations. This emerging field of research has enormous potential to help adapt known algorithms to real-world problems.

We present the CLRS Benchmark for Algorithmic Reasoning, which evaluates neural networks on a diverse set of thirty classical algorithms from the Introduction to Algorithms textbook. Similarly, we propose a general incremental learning algorithm that adapts hindsight experience replay to automated theorem proving, an important tool that helps mathematicians prove sophisticated theorems. We also present a framework for constraint-based simulation, showing how traditional simulation and numerical methods can be used in machine-learned simulators – a significant new direction for solving complex simulation problems in science and engineering.
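To give a flavor of what "evaluating networks on classical algorithms" means: benchmarks of this kind supervise a model not only on an algorithm's final output but on its intermediate states. The toy sketch below (not the actual CLRS API) records the trajectory of insertion sort, the kind of ground-truth trace a neural reasoner could be trained and evaluated against.

```python
def insertion_sort_trace(xs):
    """Run insertion sort and record the list after each outer-loop
    step. The returned trajectory is the sort of step-by-step
    supervision signal an algorithmic-reasoning benchmark can provide.
    (Toy illustration only; not the CLRS benchmark's interface.)"""
    xs = list(xs)
    trace = [list(xs)]  # include the initial state
    for i in range(1, len(xs)):
        key = xs[i]
        j = i - 1
        # Shift larger elements right to make room for `key`.
        while j >= 0 and xs[j] > key:
            xs[j + 1] = xs[j]
            j -= 1
        xs[j + 1] = key
        trace.append(list(xs))
    return trace

trajectory = insertion_sort_trace([3, 1, 2])
```

A model's predicted intermediate states can then be compared element-by-element against such a trajectory, which tests whether it has learned the algorithm's mechanism rather than just its input–output mapping.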

See the full scope of our work at ICML 2022 here.
