I was recently on vacation in Italy. As is common these days, I ran my itinerary by GPT-5 for sightseeing suggestions and restaurant recommendations. The bot reported that the best choice for dinner near our hotel in Rome was a short walk away, down Via Margutta. It turned out to be one of the best meals I can remember. When I got home, I asked the model how it chose this restaurant, which I hesitate to reveal here in case I want a table sometime in the future (hell, who knows if I’ll ever make it back: it’s called Babette. Call ahead for reservations.) The response was convoluted and impressive. Factors included rave reviews from locals, coverage on food blogs and in the Italian press, and the restaurant’s celebrated blend of Roman and contemporary cuisine. Oh, and the short walk.
Something was also required of me: trust. I had to believe that GPT-5 was an honest broker, choosing my restaurant without bias; that the restaurant was not being shown to me as sponsored content and did not receive a cut of my check. I could have done in-depth research myself to double-check the recommendation (I did look at the website), but the point of using AI is to bypass that friction.
This experience strengthened my faith in AI’s performance, but it also made me wonder: as companies like OpenAI grow more powerful and seek to repay their investors, will AI be susceptible to the erosion of value that seems endemic to the technology platforms we use today?
Writer and technology critic Cory Doctorow calls this erosion “enshittification.” His premise is that platforms like Google, Amazon, Facebook, and TikTok start out with the goal of pleasing users, but once those companies have beaten the competition, they deliberately become less useful in order to reap greater profits. After WIRED republished Doctorow’s pioneering 2022 essay on the phenomenon, the term entered the vernacular, largely because people found it utterly precise. The American Dialect Society selected “enshittification” as its 2023 word of the year. The concept has been cited so often that it has transcended its vulgarity and appeared in venues that don’t normally print the word. Doctorow has just published a book on the subject; the cover art is an emoji… guess which one.
If chatbots and AI agents become enshittified, it could be worse than Google Search losing its usefulness, Amazon’s results being flooded with ads, or even Facebook demoting social content in favor of rage-inducing clickbait.
Artificial intelligence is well on its way to being a constant companion, providing one-stop answers to many of our requests. People already rely on it to help interpret current events and to get advice on all kinds of shopping and even life choices. Given the enormous costs of building a full-scale AI model, it is safe to assume that only a few companies will dominate the field. They all plan to spend hundreds of billions of dollars over the next few years to refine their models and get them into the hands of as many people as possible. I would say that artificial intelligence is currently in what Doctorow calls the “good to users” stage. But the pressure to recoup those massive capital investments will be enormous, especially for companies with a locked-in user base. Those conditions, Doctorow writes, allow companies to abuse their users and business customers “to claw back all the value for themselves.”
When we imagine the enshittification of artificial intelligence, the first thing that comes to mind is advertising. The nightmare is that AI models will make recommendations based on which companies paid for placement. This isn’t happening now, but AI companies are actively exploring the advertising space. In a recent interview, OpenAI CEO Sam Altman said: “I think there’s probably some cool advertising product we can do that will be a net gain for the user and be something positive for our relationship with the user.” Meanwhile, OpenAI has just announced a deal with Walmart that will allow the retailer’s customers to shop within the ChatGPT app. I can’t imagine any conflict there! The AI search platform Perplexity has a program in which sponsored results appear as clearly labeled follow-ups. It assures users, however, that “these advertisements will not change our commitment to maintaining a trusted service that provides you with direct, unbiased answers to your questions.”
