Saturday, April 19, 2025

Connecting DeepMind Research with Alphabet Products


In today’s Five Minutes With, we met with Gemma Jennings, Product Manager at Applied, who led a session on vision language models at AI Summit – one of the world’s largest events dedicated to artificial intelligence for business.

At DeepMind…

I am part of the Applied team that helps make DeepMind's technology available to the outside world through Alphabet and Google products and solutions such as WaveNet, Google Assistant, Maps, and Search. As a product manager, I act as a bridge between the two organizations, working closely with both teams to understand the research and how people can use it. Ultimately, we want to be able to answer the question: How can we use this technology to improve the lives of people around the world?

I’m particularly excited about our sustainability portfolio. We’ve already helped reduce the amount of energy needed to cool Google’s data centers, but there’s so much more we can do to have a bigger, transformative impact on sustainability.

Before DeepMind…

I worked at John Lewis Partnership, a British department store that has a strong sense of purpose built into its DNA. I’ve always enjoyed being part of a company with a sense of social purpose, so DeepMind’s mission of solving intelligence to advance science and benefit humanity really resonated with me. I was intrigued to learn how that ethos manifested itself in a research-driven organization, and at Google, one of the largest companies in the world. Add that to my background in experimental psychology, neuroscience, and statistics, and DeepMind ticked all the boxes.

AI Summit…

This is my first in-person conference in almost three years, so I’m really excited to meet people across the industry and learn what other organizations are working on.

I can’t wait to attend some of the quantum computing talks to learn more. It has the potential to drive the next big paradigm shift in computing power, unlocking new use cases for AI applications around the world and allowing us to work on bigger, more complex problems.

My work involves a lot of deep learning methods, and it’s always exciting to hear about the different ways people are using the technology. Currently, these types of models require training on enormous amounts of data, which can be expensive, time-consuming, and resource-intensive given the amount of computation involved. So what’s next? And what does the future of deep learning look like? These are the questions I want to answer.

I presented…

Image Recognition Using Deep Neural Networks, our recently published research on Visual Language Models (VLMs). In my talk, I discussed recent progress in combining Large Language Models (LLMs) with powerful visual representations to advance the state of the art in image recognition.

This fascinating research has so many potential applications in the real world. One day, it could act as an assistant to support learning in classrooms and informal learning settings, or help blind or visually impaired people see the world around them, changing their daily lives.
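To make the general idea behind VLMs concrete, here is a minimal, hypothetical sketch in Python (PyTorch). It is not the architecture from the talk or any published DeepMind model; every module, name, and size below is an illustrative assumption. It only shows the core pattern: a vision encoder turns an image into feature vectors, a projection maps them into the language model's embedding space, and the language model processes visual and text tokens together.

```python
# Illustrative sketch only: a toy vision-language model, not DeepMind's
# published architecture. Real models add positional encodings, causal
# masking, pretraining, and far larger components.
import torch
import torch.nn as nn

class TinyVLM(nn.Module):
    def __init__(self, vocab_size=1000, d_model=256):
        super().__init__()
        # Stand-in vision encoder: patchify the image into a grid of features.
        self.vision = nn.Sequential(
            nn.Conv2d(3, d_model, kernel_size=16, stride=16),  # (B, d, 4, 4) for 64x64 input
            nn.Flatten(2),                                     # (B, d, 16)
        )
        # Project visual features into the language model's embedding space.
        self.proj = nn.Linear(d_model, d_model)
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.lm = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, image, token_ids):
        vis = self.vision(image).transpose(1, 2)   # (B, 16, d_model) visual tokens
        vis = self.proj(vis)
        txt = self.embed(token_ids)                # (B, T, d_model) text tokens
        seq = torch.cat([vis, txt], dim=1)         # visual tokens prefix the text
        out = self.lm(seq)
        return self.head(out[:, vis.size(1):])    # predictions at text positions

model = TinyVLM()
image = torch.randn(1, 3, 64, 64)        # one 64x64 RGB image -> 16 patch tokens
tokens = torch.randint(0, 1000, (1, 8))  # 8 text tokens
logits = model(image, tokens)
print(logits.shape)                       # torch.Size([1, 8, 1000])
```

The design choice this sketch highlights is the bridge: visual features are projected into the same space as word embeddings, so the language model can attend over an image as if it were extra tokens in the prompt.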

I want people to leave the session…

With a better understanding of what happens after a research breakthrough is announced. There’s so much amazing research, but we need to think about what comes next, like what global problems can we help solve? And how can we use our research to create products and services that have purpose?

The future is bright and I can’t wait to discover new ways to use our groundbreaking research to help millions of people around the world.
