The man who killed himself before detonating a Tesla Cybertruck outside a Trump hotel in Las Vegas on New Year’s Day used ChatGPT to plan his attack, authorities say.
Matthew Alan Livelsberger, 37, queried ChatGPT for information about how he could put together an explosive, how fast a round would need to be fired for the explosives found in the truck to go off — not just catch fire — and what laws he would need to get around to get the materials, law enforcement officials said. “We know AI was going to change the game for all of us at some point or another, in really all of our lives,” Clark County/Las Vegas Metropolitan Police Sheriff Kevin McMahill said. “I think this is the first incident that I’m aware of on U.S. soil where ChatGPT is utilized to help an individual build a particular device.”
Artificial intelligence seems to have unleashed a new era of terror. The FBI says another New Year’s Day assailant, who rammed a truck into revelers in a deadly terrorist attack in New Orleans, wore AI-enabled Meta glasses as he plotted and carried out the assault.
An OpenAI spokesperson said the company was saddened by the Las Vegas incident and is “committed to seeing AI tools used responsibly.” A Meta spokesperson told NBC News that the company was in touch with authorities regarding the New Orleans attack.
With the rising popularity of AI tools, experts on technology and national security have sounded the alarm about potential opportunities that AI could provide to people looking to carry out terrorist attacks.
In 2021, a report by the United Nations’ Office of Counter-Terrorism, titled “Algorithms and Terrorism: The Malicious Use of Artificial Intelligence for Terrorist Purposes,” listed several ways that AI has already aided terrorist activity and could do so again. The report cites AI-enabled cyberattacks, the deployment of AI-assisted malware and ransomware to disrupt systems or hold them hostage, and AI tools that allow terrorists to crack passwords and infiltrate vital systems.