Users of the conversational AI platform ChatGPT discovered an intriguing phenomenon over the weekend: the popular chatbot refuses to answer questions about a “David Mayer.” Asking it to do so causes it to freeze up instantly. Conspiracy theories have emerged, but a more ordinary reason lies behind this strange behavior.
Word spread quickly last weekend that the name was poison to the chatbot, with more and more people trying to trick the service into merely acknowledging the name. No luck: every attempt to get ChatGPT to spell out that particular name causes it to fail mid-name or even break off entirely.
“I’m unable to produce a response,” it says, if it says anything at all.
But what started as a one-off curiosity soon blossomed as people discovered that it isn’t just David Mayer whom ChatGPT can’t name.
The names Brian Hood, Jonathan Turley, Jonathan Zittrain, David Faber, and Guido Scorza were also found to crash the service. (No doubt more have been discovered since then, so this list is not exhaustive.)
Who are these men? And why does ChatGPT hate them so? OpenAI did not immediately respond to repeated inquiries, so we are left to piece things together as best we can.* (See update below.)
Some of these names can belong to any number of people. However, a potential thread of connection identified by ChatGPT users is that these people are public or semi-public figures who may prefer that search engines or AI models “forget” certain information.
Brian Hood, for example, stands out because, assuming he’s the same guy, I wrote about him last year. Hood, an Australian mayor, accused ChatGPT of falsely describing him as the perpetrator of a decades-old crime that he had, in fact, reported.
Although his lawyers contacted OpenAI, no lawsuit was ever filed. As he told the Sydney Morning Herald earlier this year: “The offending material was removed and they released version 4, replacing version 3.5.”
As for the most prominent owners of the remaining names: David Faber is a longtime CNBC reporter. Jonathan Turley is a lawyer and Fox News commentator who was “swatted” in late 2023 (i.e., a fake 911 call sent armed police to his home). Jonathan Zittrain is also a legal expert, one who has spoken extensively on the “right to be forgotten.” Guido Scorza sits on the board of Italy’s Data Protection Authority.
They are not exactly in the same line of work, yet neither is this a random selection. Each of these people is conceivably someone who, for whatever reason, may have formally requested that information about them online be restricted in some way.
Which brings us back to David Mayer. There is no lawyer, journalist, mayor or other prominent person with that name that anyone can find (sorry to the many respectable David Mayers).
However, there was a Professor David Mayer, who taught drama and history, specializing in connections between the late Victorian era and early cinema. Mayer died in the summer of 2023 at the age of 94. For years before that, however, the British-American academic faced a legal and online problem of having his name associated with a wanted criminal who used it as a pseudonym, to the point where he was unable to travel.
Mayer fought continuously to have his name disambiguated from that of the one-armed terrorist, even as he continued teaching into his final years.
So what conclusions can we draw from all this? Our guess is that the model has ingested or been provided with a list of people whose names require special handling. Whether for legal, safety, privacy, or other reasons, these names are likely covered by special rules, just as many other names and identities are. For example, ChatGPT may change its response when it matches a name you typed against a list of political candidates.
There are many such special rules, and every prompt goes through various forms of processing before being answered. But these post-prompt handling rules are seldom made public, except in policy announcements like “the model will not predict election results for any candidate for office.”
What likely happened is that one of these lists, which are almost certainly actively maintained or automatically updated, was somehow corrupted with faulty code or instructions that, when invoked, caused the chat agent to immediately break. To be clear, this is just our speculation based on what we’ve learned, but it wouldn’t be the first time an AI behaved oddly due to post-training guidance. (As it happens, just as I was writing this, “David Mayer” started working again for some, while the other names still caused crashes.)
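To make that hypothesis concrete, here is a minimal sketch of how a post-prompt name filter could fail in exactly this way. Everything in it is invented for illustration: OpenAI has not described its actual tooling, and the list, the function name, and the regex bug are all assumptions.

```python
import re

# Hypothetical list of flagged names; entries are invented for this example.
# A real system would presumably load and update such a list automatically,
# which is exactly where a corrupted entry could slip in.
FLAGGED_NAME_PATTERNS = [
    r"\bBrian Hood\b",
    r"\bJonathan Turley\b",
    r"\bDavid Mayer(\b",  # corrupted entry: the stray "(" makes this regex invalid
]

def apply_name_rules(draft_response: str) -> str:
    """Return a refusal if the drafted response mentions a flagged name."""
    for pattern in FLAGGED_NAME_PATTERNS:
        # re.search raises re.error when it hits the malformed pattern,
        # aborting the response instead of producing a polite refusal.
        if re.search(pattern, draft_response, re.IGNORECASE):
            return "I'm unable to produce a response."
    return draft_response

# A valid entry works as intended...
print(apply_name_rules("Brian Hood is an Australian mayor."))
# ...but any text that reaches the corrupted entry crashes the filter.
try:
    apply_name_rules("David Mayer taught drama and history.")
except re.error as exc:
    print(f"filter crashed: {exc}")
```

In this toy version, the failure is nothing more than a syntax error in a single list entry, which fits the reading below: no conspiracy required, just one bad record in an actively maintained list.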
As is usual in such cases, Hanlon’s razor applies: never attribute to malice (or conspiracy) what can be adequately explained by stupidity (or a syntax error).
All this drama is a useful reminder that not only are these AI models not magic, they are also extra-fancy autocomplete, actively monitored and interfered with by the companies that make them. Next time you think about getting facts from a chatbot, consider whether it might be better to go straight to the source instead.
Update: OpenAI confirmed on Tuesday that the name “David Mayer” had been flagged by internal privacy tools, saying in a statement that “There may be cases where ChatGPT does not share certain information about people to protect their privacy.” The company did not provide further details about the tools or process.