Have you ever wanted to time travel and see what your future self might look like? Now, thanks to the generative power of artificial intelligence, it is possible.
Researchers at MIT and elsewhere have created a system that allows users to engage in an online text conversation with an AI-generated simulation of their potential future selves.
Called Future You, the system aims to help young people improve their sense of future self-continuity, a psychological concept that describes how connected a person feels to their future self.
Research has shown that a stronger sense of future self-continuity can positively impact the way people make long-term decisions, from their likelihood of contributing to financial savings to their focus on achieving academic success.
Future You uses a large language model that, drawing on information provided by the user, generates a plausible virtual version of that person in their 60s. This simulated future self can answer questions about what the person’s life might look like in the future and offer advice or insights about the path they might take.
In a preliminary user study, researchers found that after approximately half an hour of interacting with Future You, people reported reduced anxiety and felt a stronger sense of connection to their future selves.
“We don’t have a time machine yet, but artificial intelligence can be a kind of virtual time machine. We can use this simulation to help people think more carefully about the consequences of the choices they make today,” says Pat Pataranutaporn, a recent Media Lab graduate who is working to advance human-AI interaction research at MIT and co-lead author of a paper on Future You.
In the paper, Pataranutaporn is joined by co-authors Kavin Winson, a researcher at KASIKORN Labs; Peggy Yin, a student at Harvard University; Auttasak Lapapirojn and Pichayoot Ouppaphan of KASIKORN Labs; and senior authors Monchai Lertsutthiwong, head of artificial intelligence research at the KASIKORN Business and Technology Group; Pattie Maes, the Germeshausen Professor of Media Arts and Sciences and head of the Fluid Interfaces group at MIT; and Hal Hershfield, professor of marketing, behavioral decision-making, and psychology at the University of California, Los Angeles. The research will be presented at the IEEE Conference on Frontiers in Education.
Realistic simulation
Research on how people conceptualize their future selves goes back at least to the 1960s. One early method for improving future self-continuity had people write letters to their future selves. More recently, researchers have used virtual reality headsets to help people visualize future versions of themselves.
However, none of these methods were very interactive, which limited the impact they could have on the user.
With the advent of generative artificial intelligence and large language models like ChatGPT, researchers saw the possibility of creating a simulated future self that could discuss someone’s actual goals and aspirations in a normal conversation.
“The system makes the simulation very realistic. Future You is much more detailed than what a person might come up with by just imagining their future self,” Maes says.
The artificial intelligence system uses the information a user provides about their current life and goals to create what the researchers call “future self-memories,” which supply the backstory the model draws on when interacting with the user.
For example, the chatbot can talk about the most important moments in the person’s future career or answer questions about how the user overcame a particular challenge. This is possible because ChatGPT was trained on extensive data of people talking about their lives, careers, and experiences, good and bad.
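For readers curious how such a pipeline might look in code, below is a minimal, hypothetical sketch of how questionnaire answers could be turned into synthetic future-self memories and folded into a chatbot persona. The helper functions, prompt wording, profile fields, and the use of the OpenAI chat API with a placeholder model name are all assumptions made for illustration; the researchers’ actual implementation may differ.

```python
# Illustrative sketch only: turning user-supplied answers into synthetic
# "future self-memories" and using them as a chatbot persona.
# Prompt wording, data layout, and model name are assumptions, not the
# Future You authors' implementation.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def build_future_self_prompt(profile: dict, memories: list[str]) -> str:
    """Combine questionnaire answers and generated memories into a persona prompt."""
    memory_text = "\n".join(f"- {m}" for m in memories if m.strip())
    return (
        f"You are {profile['name']} at age 60, speaking with your younger self "
        f"(currently {profile['age']}). Stay in character, be warm and specific, "
        f"and occasionally use phrases like 'when I was your age'.\n"
        f"Your remembered life events:\n{memory_text}\n"
        f"Remind the user this is only one possible future, not a prediction."
    )


def generate_memories(profile: dict) -> list[str]:
    """Ask the model to invent plausible future memories from the user's answers."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name for this sketch
        messages=[{
            "role": "user",
            "content": (
                "Invent five brief first-person memories of key life events, "
                f"as if this person is now 60: {profile}"
            ),
        }],
    )
    return response.choices[0].message.content.splitlines()


def chat_with_future_self(profile: dict, user_message: str) -> str:
    """One turn of conversation with the simulated future self."""
    memories = generate_memories(profile)
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": build_future_self_prompt(profile, memories)},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content
```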
Users engage with the tool in two ways, Yin says: through introspection, when they consider their life and goals as they construct their future selves, and through retrospection, when they contemplate whether the simulation reflects who they see themselves becoming.
“You can imagine Future You as a story search space. You have a chance to hear how some of your experiences, which may still be emotionally charged for you, may be metabolized over time,” she says.
To help people envision their futures, the system generates an age-progressed photo of the user. The chatbot is also designed to give vivid responses using phrases such as “when I was your age,” so the simulation feels more like an actual future version of the person.
Hershfield says the ability to take advice from an older version of yourself, rather than from a generic AI, could have a stronger positive impact on a user contemplating an uncertain future.
“These interactive, vivid elements of the platform give the user an anchor point and take something that might otherwise fuel anxious rumination and make it more concrete and productive,” he adds.
But this realism can backfire if the simulation moves in a negative direction. To prevent that, Future You cautions users that it shows only one potential version of their future self, and that they have the ability to change their lives. Providing alternative answers to the questionnaire leads to a completely different conversation.
“This is not a prediction, but rather a possibility,” says Pataranutaporn.
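As a brief, hypothetical usage of the sketch above, posing the same question to personas built from different questionnaire answers would yield different, equally possible futures, which mirrors the behavior described here; the profile values below are invented for illustration.

```python
# Hypothetical usage of the earlier sketch: the same question asked of two
# personas seeded with different answers produces different possible futures.
profile_a = {"name": "Alex", "age": 22, "goals": "become a high school biology teacher"}
profile_b = {"name": "Alex", "age": 22, "goals": "start a small software company"}

question = "What do you wish you had started doing at my age?"

print(chat_with_future_self(profile_a, question))
print(chat_with_future_self(profile_b, question))
```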
Supporting self-development
To evaluate Future You, the researchers conducted a user study with 344 people. Some users interacted with the system for 10 to 30 minutes, while others either interacted with a generic chatbot or only completed surveys.
Participants who used Future You were able to build a closer relationship with their ideal future self, based on statistical analysis of their responses. These users also reported less anxiety about the future after their interactions. Additionally, Future You users reported that the conversation was honest and that their values and beliefs seemed consistent with their simulated future identity.
“This work charts a new path by taking a well-established psychological technique for visualizing times to come, an avatar of the future self, and pairing it with cutting-edge artificial intelligence. This is exactly the kind of work academics should be focusing on as the technology for building virtual models of the self merges with large language models,” says Jeremy Bailenson, the Thomas More Storke Professor of Communication at Stanford University, who was not involved in this research.
Building on the results of this initial user study, the researchers are continuing to refine how they establish context and prime users so that their conversations help build a stronger sense of future self-continuity.
“We want to guide the user to talk about specific topics, rather than asking their future self who the next president will be,” says Pataranutaporn.
They are also adding safeguards to prevent misuse of the system. For example, one could imagine a company creating a “future you” of a potential customer who achieves some great outcome in life because they purchased a particular product.
In the future, scientists want to explore specific applications of Future You, perhaps allowing people to explore different careers or visualize how their everyday choices could impact climate change.
They are also collecting data from the Future You pilot to better understand how people use the system.
“We don’t want people to become dependent on this tool. Rather, we hope it will be a meaningful experience that helps them see themselves and the world differently and supports their self-development,” says Maes.
The researchers thank Thanawit Prasongpongchai, a designer at KBTG and visiting scientist at the Media Lab.