AI and Machine Learning Could Take Hacking to a New Level

While high-tech execs issue fevered warnings about the potential dangers of next-gen intelligent machines, the current generation of smart software could already pose a serious security threat.

A team of IBM researchers demonstrated this disturbing possibility at the annual Black Hat security conference in Las Vegas this week with an in-house, proof-of-concept experiment, code-named DeepLocker, that uses artificial intelligence and machine learning (AI/ML) to bypass cybersecurity protections.

The company describes DeepLocker as "a novel class of highly targeted and evasive attacks powered by artificial intelligence," and an example of "weaponized AI," which it developed to better understand how existing AI models can be combined with current malware techniques to create a new kind of malware.

As cybercriminals begin exploring the possibilities of AI-enhanced malware, "cyber defenders must understand the mechanisms and implications of the malicious use of AI in order to stay ahead of these threats and deploy appropriate defenses," the company says.

DeepLocker combines several existing AI and malware techniques to create a highly evasive type of malware that conceals its malicious intent until it reaches a specific target. According to IBM, it achieves this level of targeted stealth by hiding its attack payload inside benign carrier applications and using a deep neural network (DNN), a multi-layered AI model, to decide when to "unlock" the payload. The payload is unlocked only when the intended target is reached.

DeepLocker draws on several classes of attributes to identify its target, including visual, audio, geolocation, and system-level features. The resulting malware is also extremely challenging to reverse engineer, because the trigger condition is encoded in the neural network's weights rather than spelled out in inspectable code.

"This class of AI-powered evasive malware conceals its intent until it reaches a specific victim," explained Marc Ph. Stoecklin, Principal Research Scientist and Manager of the Cognitive Cybersecurity Intelligence group at IBM, in a post on the Security Intelligence website. "It unleashes its malicious action as soon as the AI model identifies the target through indicators like facial recognition, geolocation and voice recognition."

This new genre of AI techniques is set to take hacking to a new level with programs that can slip past top-tier defense measures. That's the bad news; the good news, says Ilia Kolochenko, CEO of web security company High-Tech Bridge, is that the good guys are using AI/ML, too.

"We are still pretty far from AI/ML hacking technologies that can outperform the brain of a criminal hacker, Kolochenko told Pure AI in an email. "Of course, cybercriminals are already actively using machine learning and big data technologies to increase their overall effectiveness and efficiency. But, [they] will not invent any substantially new hacking techniques or something beyond a new vector of exploitation or attack as all of those can be reliably mitigated by the existing defense technologies. Moreover, many cybersecurity companies also start leveraging machine learning with a lot of success, hindering cybercrime. Therefore, I see absolutely no reason for panic today."

IBM has issued warnings about the potential security threat of AI/ML-enhanced malware before. In April, the company unveiled an open-source toolkit called the Adversarial Robustness Toolbox, which is described on GitHub as "a library dedicated to adversarial machine learning" designed "to allow rapid crafting and analysis of attacks and defense methods for machine learning models."
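
For a sense of what that toolbox enables, the sketch below shows the kind of experiment it supports: crafting adversarial inputs against a classifier and checking how many predictions flip. It is a sketch under assumptions, not canonical usage; the Adversarial Robustness Toolbox's API has changed across releases, so the exact class paths shown here (from recent versions, installed via pip install adversarial-robustness-toolbox) may differ in yours, and the tiny untrained PyTorch model is purely illustrative.

import numpy as np
import torch
import torch.nn as nn
from art.attacks.evasion import FastGradientMethod
from art.estimators.classification import PyTorchClassifier

# Toy 10-class model over 28x28 inputs, a stand-in for a real classifier.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
classifier = PyTorchClassifier(
    model=model,
    loss=nn.CrossEntropyLoss(),
    optimizer=torch.optim.Adam(model.parameters(), lr=1e-3),
    input_shape=(1, 28, 28),
    nb_classes=10,
    clip_values=(0.0, 1.0),
)

# Placeholder images standing in for a real evaluation set.
x = np.random.rand(8, 1, 28, 28).astype(np.float32)

# Fast Gradient Method: perturb inputs along the loss gradient so the
# model's predictions change while the inputs stay visually similar.
attack = FastGradientMethod(estimator=classifier, eps=0.1)
x_adv = attack.generate(x=x)

clean_preds = classifier.predict(x).argmax(axis=1)
adv_preds = classifier.predict(x_adv).argmax(axis=1)
print("predictions changed on", int((clean_preds != adv_preds).sum()), "of 8 inputs")

The library also ships defense modules, so the same workflow can be turned around to evaluate mitigations, which is the rapid crafting and analysis of "attacks and defense methods" the GitHub description refers to.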

About the Author

John K. Waters is the editor in chief of a number of Converge360.com sites, with a focus on high-end development, AI and future tech. He's been writing about cutting-edge technologies and the culture of Silicon Valley for more than two decades, and he's written more than a dozen books. He also co-scripted the documentary film Silicon Valley: A 100 Year Renaissance, which aired on PBS. He can be reached at jwaters@converge360.com.