Predicting the future is never something I feel completely confident about. And in times of rapid and intense transformation – political, technological, cultural, and scientific – sensing what’s waiting around the next corner is as hard as ever.
At WIRED, we’re obsessed with what comes next. Our pursuit of the future most often takes the form of rigorously reported stories, in-depth videos, and interviews with the people helping to define it. We focus on stories that not only explain what lies ahead, but also help shape it.
In that spirit, we recently interviewed luminaries from across the worlds that WIRED covers — people who attended our recent Big Interview event in San Francisco — as well as students who have grown up inundated with technologies that seem increasingly to disrupt their lives and livelihoods. Unsurprisingly, the conversation centered on artificial intelligence, but it ranged into other areas of culture, technology, and politics. Think of it as a snapshot of how people think about the future today, and maybe even a rough map of where we’re headed.
Artificial intelligence everywhere, all the time
It’s clear that AI is as integrated into people’s lives as search has been since the days of AltaVista. And as with search, the use cases tend to be practical, even mundane. “I use LLMs a lot to answer any questions I have during the day,” says Angel Tramontin, a student at the Haas School of Business at the University of California, Berkeley.
Several of our respondents noted that they had used AI within the last few hours, even within the last few minutes. Anthropic cofounder and CEO Daniela Amodei has lately been using her company’s chatbot to help with child care. “Claude helped my husband and I potty train our older son,” she says. “I recently used Claude to look into symptoms my daughter was having rather than googling them.”
She’s not the only one. Wicked director Jon M. Chu turns to LLMs “just to get advice about my kids’ health, which may not be the best,” he says. “But it’s a good baseline to start with.”
AI companies themselves see health as a growth area. OpenAI announced ChatGPT Health earlier this month, revealing that “hundreds of millions of people” use the chatbot every week to answer questions about health and wellness. (ChatGPT Health adds extra privacy measures given the sensitivity of such queries.) Anthropic’s Claude for Healthcare offering targets hospitals and other health systems as customers.
Not everyone we spoke to has taken such an immersive approach. “I try not to use it at all,” says UC Berkeley student Sienna Villalobos. “When it comes to doing your own work, it’s very easy to let it give you an opinion. AI shouldn’t be giving you opinions. I think you should be able to form opinions for yourself.”
That view may be increasingly in the minority. According to a recent Pew Research study, nearly two-thirds of American teenagers have tried AI chatbots, and about 3 in 10 say they use them every day. (Given how Google Gemini is now woven into search, many more people may be using AI without realizing or even intending to.)
Ready to launch?
The pace of AI development and adoption continues despite concerns about its potential impact on mental health, the environment, and society as a whole. In this wide-open regulatory environment, companies are largely left to police themselves. So, absent guardrails from lawmakers, what questions should AI companies be asking themselves before each launch?
“What could go wrong? That’s a really good and important question that I’d like more companies to ask,” says Mike Masnick, founder of the technology and policy news site Techdirt.
