Marco Rubio, U.S. secretary of state. Photo Credit: Al Drago/Bloomberg via Getty Images

A new wave of cybercrime has emerged in which artificial intelligence (AI) is used to impersonate high-profile government officials. In a recent and alarming incident, an AI-powered voice scam targeted several high-level U.S. and foreign officials by mimicking the voice and writing style of Secretary of State Marco Rubio, who is also the Acting National Security Adviser to President Donald Trump.

The scam involved unknown perpetrators launching a targeted campaign by creating a Signal account under a display name that looked legitimate and closely resembled Rubio’s official email address. Using AI voice-cloning technology, the perpetrators convincingly mimicked Secretary Rubio’s voice in audio messages and used them to contact at least five high-ranking officials, including three foreign ministers, a U.S. governor, and a member of Congress.

The attackers employed a combination of tactics: they sent AI-generated voice messages via Signal to at least two targets and used text messages that closely matched Rubio’s writing style to invite officials into further conversations on the platform.

According to a State Department cable, the primary goal was to manipulate targets into revealing sensitive information or granting access to secure accounts by exploiting the trust attached to the Secretary’s identity.

Vulnerabilities in Government Communication Practices

This AI voice scam is not an isolated incident. The U.S. State Department noted that the Rubio impersonation bore similarities to previous attempts to mimic other senior U.S. officials, some of which were investigated by the FBI in May 2025. At the time, the FBI warned of an “ongoing malicious text and voice messaging campaign” targeting senior government officials. The State Department has since cautioned the public that any message claiming to be from a senior U.S. official should be treated with skepticism.

This incident also brought to light significant vulnerabilities in government communication practices. For instance, the Trump administration’s reliance on commercial messaging platforms like Signal for sensitive discussions has introduced new security risks.

Earlier in the year, confidential texts from a Signal group that included major U.S. security officials were leaked. Among these officials were JD Vance, the U.S. Vice President; Marco Rubio, the U.S. Secretary of State; Michael Waltz, National Security Adviser; Pete Hegseth, Secretary of Defense; John Ratcliffe, CIA Director; Tulsi Gabbard, Director of National Intelligence; Scott Bessent, Treasury Secretary; and Susie Wiles, the White House Chief of Staff.

The group discussed top-secret plans to launch drone strikes on targets across Yemen, and these detailed war plans were later made public by Jeffrey Goldberg, Editor-in-Chief of The Atlantic, who had been inadvertently added to the Signal group chat by Michael Waltz.

Cybersecurity experts have pointed out that the administration’s use of personal devices and commercial platforms to discuss critical issues creates additional attack vectors. They note that while Signal provides end-to-end encryption, it lacks the robust security protocols of dedicated and recommended government systems like SIPRNet or JWICS.

Detection and Prevention Challenges

Detecting AI-generated voice clones is becoming increasingly difficult. Current methods involve analyzing audio for vocal artifacts, inconsistencies in speech, and unusual background noises. However, as AI technology improves, these indicators are becoming less reliable.
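To make the idea concrete, here is a minimal Python sketch of one such heuristic: natural speech tends to show noticeable variation in short-term energy, while some synthetic audio can be unnaturally uniform. The function names, frame length, and threshold below are illustrative assumptions, not part of any real detection product, and a production detector would use far more sophisticated signal features.

```python
import math

def frame_energies(samples, frame_len=400):
    """Split a mono signal into fixed-size frames and compute per-frame energy."""
    return [
        sum(s * s for s in samples[i:i + frame_len]) / frame_len
        for i in range(0, len(samples) - frame_len + 1, frame_len)
    ]

def energy_variation(samples, frame_len=400):
    """Coefficient of variation (std / mean) of frame energy.

    Natural speech typically varies more frame-to-frame than overly
    uniform synthetic audio -- a rough, illustrative indicator only.
    """
    energies = frame_energies(samples, frame_len)
    mean = sum(energies) / len(energies)
    variance = sum((e - mean) ** 2 for e in energies) / len(energies)
    return math.sqrt(variance) / mean if mean else 0.0

def looks_synthetic(samples, threshold=0.2):
    """Flag audio whose frame-energy variation falls below a chosen threshold."""
    return energy_variation(samples) < threshold
```

As the article notes, indicators like this are losing reliability: modern cloning systems can reproduce natural-sounding energy variation, which is why detection increasingly relies on statistical models trained on large corpora rather than hand-written rules.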

Industry data reveals a dramatic surge in AI voice scam activity, with a reported 1,300% increase over the past year. Voice fraud prevention firms report that deepfake attacks have risen from about one per customer per month in 2023 to five and a half per customer per day by the end of 2024.

“Each tool is becoming even more, for lack of a better word, idiot proof, in terms of how easy it is to just create something,” said Vijay Balasubramaniyan, the CEO and founder of Pindrop, a company that specializes in voice fraud prevention. “It’s all just push of a button.”

The broader implications are clear: the use of AI voice cloning to impersonate high-ranking government officials poses a serious threat to national security, diplomatic relations, and public trust. It also threatens everyday people who rely heavily on social media.

As Balasubramaniyan put it, “We’re in the Wild, Wild West as far as information is concerned.”

I’m Precious Amusat, Phronews’ Content Writer. I conduct in-depth research and write on the latest developments in the tech industry, including trends in big tech, startups, cybersecurity, artificial intelligence and their global impacts. When I’m off the clock, you’ll find me cheering on women’s footy, curled up with a romance novel, or binge-watching crime thrillers.
