News
        
AI Will Lead to Increase in Ransomware, UK Cybersecurity Experts Say

With the growth of AI-based technologies, cybersecurity experts are warning of a coming increase in both the volume and the complexity of attacks as cybercriminals make greater use of readily available tools.
A report by the National Cyber Security Centre (NCSC), a United Kingdom government agency, identifies ransomware as one area that stands to benefit from generative AI. According to the NCSC, available tools will allow attackers to craft more convincing phishing attacks.
"AI will primarily offer threat actors capability uplift in social engineering," read the report. "Generative AI can already be used to enable convincing interaction with victims, including the  creation of lure documents, without the translation, spelling and grammatical  mistakes that often reveal phishing. This will highly likely increase over the  next two years as models evolve and uptake increases."
The report said that, through 2025, it will become increasingly difficult for members of the public, regardless of their level of technical literacy, to distinguish legitimate emails and password reset requests from malicious ones.
The NCSC also said that gen-AI technology will help attackers better identify "high-value assets" to target with personalized phishing attacks, rather than the wide net many cybercrime rings currently cast.
  
As for new attacks and malware developed with the assistance of gen-AI, the NCSC said that, based on the tools available today, attackers will still be constrained by their own technical scope and ability. AI-created malware also depends on a repository of quality exploit data, and the NCSC said that only the most capable states hold enough malware data to train an AI model to produce new malware. However, as the technology matures and cybercriminal groups increase their spending on gen-AI, an efficient system for creating new malware may not be far off.
  "Commoditization of cybercrime capability, for example  ‘as-a-service’ business models, makes it almost certain that capable groups  will monetize AI-enabled cyber tools, making improved capability available to  anyone willing to pay."
The good news is that while cybercriminals will increasingly use gen-AI to target, refine and create attacks, the same tools can be used to combat the evolving threat. The NCSC said that organizations are making greater use of gen-AI for threat detection and to design more secure systems.
"While it is essential to focus on the risks posed by  AI, we must also seize the substantial opportunities it presents to cyber  defenders. For example, AI can improve the detection and triage of cyberattacks  and identify malicious emails and phishing campaigns, ultimately making them  easier to counteract."