They confirmed that the suspect, Matthew Livelsberger, an active-duty US Army soldier, had a “possible manifesto” saved on his phone, in addition to the email to the podcaster and other letters. They also showed video of him preparing for the explosion by pouring fuel into the truck at a rest stop before driving to the hotel. He also kept a log of alleged sightings, though officials said he had no criminal record and was not under surveillance or investigation.
Las Vegas Metro Police also released slides showing questions he asked ChatGPT in the days before the explosion, including questions about explosives, how to detonate them, and whether a gunshot could set them off, as well as where he could legally buy firearms, explosives, and fireworks along his route.
Asked about the queries, OpenAI spokesperson Liz Bourgeois said:
We are saddened by this incident and committed to seeing AI tools used responsibly. Our models are designed to refuse harmful instructions and minimize harmful content. In this case, ChatGPT responded with information already publicly available on the internet and provided warnings against harmful or illegal activities. We are working with law enforcement to support their investigation.
Officials say they are still investigating the precise cause of the explosion, which they described as a deflagration that moved relatively slowly, as opposed to a high-explosive detonation, which would have propagated faster and caused far more damage. While investigators say they have not yet ruled out other possibilities, such as an electrical short, the explanation that best fits the suspect’s questions and the available evidence is that a muzzle flash ignited fuel or fireworks vapors inside the truck, which in turn set off a larger explosion of fireworks and other explosive materials.
The queries still work in ChatGPT today, though the information he asked for does not appear to be proprietary and could likely be found through most standard search methods. Still, the suspect’s use of a generative AI tool, and investigators’ ability to track those requests and present them as evidence, pushes questions about AI chatbot guardrails, safety, and privacy out of the hypothetical realm and into reality.