
From Innovation to Exploitation: The Convergence of Generative AI in Crypto Crime

Two of the most talked-about technologies today are blockchain and generative artificial intelligence (AI). To most consumers, they occupy opposite ends of the spectrum. Blockchain serves as a public digital ledger, most often for cryptocurrency transactions, while generative AI produces text, images, audio, and, as we saw in the recent release of OpenAI’s Sora, video.

Unfortunately, these two technologies are on a collision course, owing to the work of a few enterprising scammers. When generative AI first exploded into popularity in late 2022 with ChatGPT, which became the fastest-growing consumer application in history, analysts speculated about its criminal possibilities. Some pictured a world in which large language models helped scammers scale their spear phishing campaigns, sending out thousands more customized messages with nefarious payloads. Others argued that deepfake audio and video could simplify social engineering, tricking employees into handing credentials and other sensitive data to the wrong parties. In short, generative AI would do for criminals what software once did for businesses: scale their productivity to unprecedented heights.

About a year and a half later, there is no more need for speculation. Criminals are already deploying generative AI in their workflows with the same inventiveness as knowledge workers. Crypto is naturally involved because it makes it easier for criminals to receive illicit funds and launder them across the blockchain, often through coin mixers or swap protocols, as we document in Merkle Science’s forthcoming HackHub.

Generative AI in Criminal Practice 

Pig butchering, already one of the most lucrative scams because it combines emotional exploitation with financial fraud, is getting a boost from generative AI. Several sources have found evidence that agents, who typically contact victims on social media and chat apps under the guise of an incorrectly sent message, are using generative AI in their conversations.

One schemer mistakenly copied text directly from a ChatGPT-like LLM, which noted, “Thank you very much for your kind words! As a language model of ‘me’ I don’t have feelings or emotions like humans do,” and sent it to the intended victim. While the mistake may be obvious to readers, it may not have been to the scammers, who typically hail from China (where the term “pig butchering” originates) and other Asian countries such as Cambodia, where English is not the first language.

The use of generative AI in this instance is notable because it lowers the barrier to entry for pig butchering operators, who no longer need agents with a strong command of English. They just need agents proficient enough in the language to input prompts into the LLM interface and then vet the output for accuracy, plausibility, and tone. Though the agent above failed at this task, such an egregious error may be the exception, with many more agents successfully using generative AI to speak eloquently and personably to victims. With generative AI, pig butchering operators may thus need fewer agents, who are often victims of human trafficking held against their will and forced to prey upon other victims. To extend the metaphor, these operations can now slaughter more pigs with far fewer butchers.

Generative AI is also being used to impersonate celebrities, leveraging the tried-and-true practice of using endorsers to hawk crypto scams and dubious products, a practice the SEC has taken aggressive enforcement action against. In 2023, a video of Elon Musk circulated online in which he promoted a new trading platform. While Musk has long been at the forefront of fintech, including co-founding PayPal and advocating for cryptocurrency, he had no involvement here: the video was a convincing deepfake. Some of the associated links led to a website inviting users to join a cryptocurrency investing club, while others led to an email account where a dogged journalist pursued a back-and-forth with “Elon Musk,” who offered to double the amount of any cryptocurrency sent to him.

The idea that one of the world’s richest men would be peddling crypto trading schemes may strike most as rightfully absurd, but other deepfakes are not so easy to spot. In Hong Kong, an employee transferred US$25 million to five different bank accounts after receiving instructions on a video call with the company’s CFO, colleagues, and a few external partners. The only problem was that every other participant was a deepfake, a fact the employee discovered only after independently contacting headquarters once the transfers were complete.

While details of the incident remain largely under wraps, it is not difficult to see how this type of spear phishing could be combined with crypto to even more devastating effect. For example, a criminal enterprise could deepfake a message from a company’s CIO claiming that some workstations have been hit by ransomware that can only be unlocked with a crypto payment to a certain Bitcoin address. Generative AI widens the funnel of prospects convinced by a given scheme, which ultimately increases the rate of those who “convert” (i.e., willingly send payment to a given address).

A similar giveaway scheme targeted Ripple CEO Brad Garlinghouse. In a 45-second YouTube advertisement, a likeness of Garlinghouse asked Ripple users to send between 1,000 and 500,000 XRP tokens to a certain address, promising to double the amount as a token of gratitude to the Ripple community. The advertisement paired deepfake video and audio to devastating effect: it was so believable that YouTube declined to take it down for nearly a month after concerned users flagged it, stating that it “does not violate Google’s policies.”

Apart from these schemes, generative AI is also likely being tapped for market manipulation. While the approval of spot Bitcoin exchange-traded funds rightfully made global headlines on January 10, 2024, such as one from Reuters celebrating that the “US SEC approves bitcoin ETFs in a watershed for the crypto market,” this news buried an arguably equally interesting story: a similar announcement had been made just a day earlier from the official X account of the US Securities and Exchange Commission. The account had been hacked, and the hacker used the opportunity to post a fake message, attributed to SEC Chairman Gary Gensler, that the new Bitcoin ETFs had been approved. This ground-breaking news understandably drove the price of Bitcoin to US$45,000, offering the culprits a chance to capitalize on the short-term gains. Unlike deepfake audio or video, it is unclear whether the message was produced by generative AI, but the content falls squarely within its wheelhouse. Hackers skilled at gaining access to accounts can turn to generative AI to quickly produce plausible content that sways the market in their favor.

Defending Against Generative AI in Crypto

Though these scams involve cryptocurrency, there is no unique defense against them. Users should simply follow the best practices for defending against fake news, social engineering, and similar manipulations that authorities, media watchdogs, universities, and other stakeholders have long advocated.

When receiving a message with an odd request - such as a family member asking for emergency funds or a vendor soliciting urgent payment - the individual should always independently verify the person’s identity by contacting them on a known channel. The Federal Trade Commission advocates this approach no matter how believable the request may be, citing the fact that scammers may use AI to “clone your loved one’s voice” from public sources like social media. According to the Canadian Centre for Cyber Security, users should also verify major news stories across multiple sites, especially highly reputable ones. Finally, users should do due diligence on any investment scheme, and they should never send funds, crypto included, to an address on the promise of outsized or doubled returns. The SEC has repeatedly warned against specific investment schemes and provided broader guidance on red flags like unlicensed sellers, guaranteed returns, and fake testimonials.

While generative AI and crypto may be a match made in heaven for criminals, offering them customization at scale, the general public can protect itself by practicing what critics say is becoming increasingly scarce in the online world: common sense.