Las Vegas law enforcement has released critical details about its investigation nearly a week after the New Year’s Day explosion near the Trump Hotel, revealing the potential involvement of generative AI in the case.
The suspect, identified as Matthew Livelsberger, an active-duty US Army soldier, was found to have a “possible manifesto” saved on his phone, along with emails and letters related to his plans.
Video evidence revealed him pouring fuel onto a truck before driving to the explosion site, suggesting deliberate preparation.
While he had no prior criminal record, he kept a log of surveillance he believed he was under, claims officials said were unfounded.
Among the most concerning discoveries were ChatGPT logs showing the suspect had asked the AI tool specific questions about explosives, whether gunfire could detonate them, and legal options for purchasing firearms and explosive materials. These queries were made in the days before the explosion, linking his preparations to the information he sought online.
Police have also shared slides of these interactions, highlighting the ease with which the suspect accessed sensitive details through generative AI tools.
Investigators described the incident as a slow-moving deflagration rather than a high-explosive detonation, suggesting that fuel vapor and fireworks ignited by a gunshot may have caused the explosion.
While other possibilities, such as electrical issues, are being considered, the evidence aligns with the suspect’s documented inquiries and preparations.
The incident raises significant concerns about the misuse of AI tools like ChatGPT for criminal purposes and the adequacy of their guardrails.
Attempts to replicate the suspect’s queries reportedly still return results, though similar information is also accessible through standard search engines.
This case underscores the double-edged nature of AI technologies, as well as the implications for privacy and safety when such tools are used maliciously.