AI Might Be Source of PowerShell Script Used in Phishing Attack: Researchers

Security researchers have identified a phishing campaign that deploys malware using a PowerShell script bearing some hallmarks of being AI-generated.

Threat researchers at cybersecurity firm Proofpoint sounded the alarm on Wednesday over a malware attack they have observed targeting businesses in Germany. The attack, attributed to a threat actor named "TA547," uses a piece of malware called "Rhadamanthys" to steal information from targets.

This is how the attack unfolds: TA547 sends its target an e-mail masquerading as an invoice. Attached to the e-mail is a ZIP file containing a malicious LNK file. Opening the LNK file triggers a PowerShell script that, in turn, plants Rhadamanthys on the target's system.

TA547 has been known to use information-stealing malware in other financially motivated attacks, targeting organizations across Europe and the United States. What's novel in this attack is the use of Rhadamanthys, a first for TA547, as far as the researchers were aware.

More notable, however, is that the PowerShell script TA547 used to deliver the Rhadamanthys malware shows signs of having been written by a large language model (LLM).

"[W]hen deobfuscated, the second PowerShell script that was used to load Rhadamanthys contained interesting characteristics not commonly observed in code used by threat actors (or legitimate programmers)," wrote the Proofpoint researchers. "Specifically, the PowerShell script included a pound sign followed by grammatically correct and hyper specific comments above each component of the script."

Though not conclusive, these are signs commonly seen in LLM-generated code, according to the researchers. They proposed that TA547 did one of three things:

  1. Used an LLM to write the PowerShell script from scratch,
  2. Used an LLM to rewrite the script, or
  3. Lifted an LLM-generated PowerShell script from another attacker.
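The hallmark the researchers describe, a descriptive `#` comment sitting above nearly every statement, lends itself to a simple density check. The sketch below is purely illustrative and is not Proofpoint's detection method; the function name, the sample script, and the flagging threshold are all assumptions made up for this example:

```python
def comment_density(script: str) -> float:
    """Return the fraction of code lines immediately preceded by a '#' comment.

    Illustrative heuristic only (not Proofpoint's method): scripts in which
    almost every statement carries its own well-formed comment echo the
    pattern the researchers observed in the TA547 PowerShell script.
    """
    lines = [ln.strip() for ln in script.splitlines() if ln.strip()]
    code_lines = 0
    commented = 0
    for i, ln in enumerate(lines):
        if ln.startswith("#"):
            continue  # comment lines themselves are not counted as code
        code_lines += 1
        if i > 0 and lines[i - 1].startswith("#"):
            commented += 1
    return commented / code_lines if code_lines else 0.0

# Hypothetical snippet in the style the researchers described: every
# statement is preceded by a grammatically correct, hyper-specific comment.
sample = """\
# Download the payload from the remote server
$r = Invoke-WebRequest $url
# Write the payload bytes to a temporary file
Set-Content $path $r.Content
"""
print(comment_density(sample))  # 1.0 -- every statement has a comment above it
```

A high density alone proves nothing, as the researchers stress; diligent human programmers comment heavily too, so a check like this could at most flag scripts for closer review.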

The researchers did not speculate as to which LLM might have been used to generate the PowerShell script that triggered the malware. They also couldn't definitively say that the attack was LLM-based.

Nevertheless, they said, "While it is difficult to confirm whether malicious content is created via LLMs -- from malware scripts to social engineering lures -- there are characteristics of such content that [point] to machine-generated rather than human-generated information."

Whether the PowerShell script was LLM-generated had no bearing on the effectiveness of Rhadamanthys, according to the researchers, though it could be a factor in how easily such malware techniques are propagated.

"LLMs can assist threat actors in understanding more sophisticated attack chains used by other threat actors," they wrote, "enabling them to repurpose these techniques once they understand the functionality. Like LLM-generated social engineering lures, threat actors may incorporate these resources into an overall campaign."

Various generative AI vendors have expressed willingness to police their LLMs against potential misuse, though an industrywide testing and standards framework is still in the theoretical stage.

Earlier this year, OpenAI and Microsoft recounted a joint effort to identify and successfully shut down five state-sponsored attack groups for exhibiting red-flag behavior when using OpenAI's LLMs. At least one of those groups was described as using LLMs "to support tooling development, scripting [and] understanding various commodity cybersecurity tools."

About the Author

Gladys Rama (@GladysRama3) is the editorial director of Converge360.