9 min read

How AI Is Reshaping the Cyber Threat Landscape

Explore the double-edged sword of AI in cybersecurity. This insightful blog delves into how artificial intelligence is revolutionizing defenses while also empowering cybercriminals. Understand the dual-use dilemma of AI in the ever-evolving cyber threat landscape.

Noel Varghese
November 8, 2023
Last updated on February 3, 2024

Artificial Intelligence (AI) has emerged as a revolutionary force in the cybersecurity domain, offering both robust defense mechanisms and, paradoxically, new avenues for cybercriminals. Its dual-use nature presents a unique challenge, where the very tools designed to enhance security are also exploited for malicious purposes.

Executive Summary

What is the Threat?

  • AI, while a powerful tool for enhancing cybersecurity, also poses significant threats to the field.
  • These threats stem from the dual-use nature of AI: the same technology that can bolster cybersecurity can also be exploited by cybercriminals and state-sponsored actors, as the use cases covered below illustrate.

What can be the Impact?

  • Cyber risks including, but not limited to, infostealer malware infections, business email compromise, and an increased volume of phishing attempts.
  • Malicious actors can use AI to automate and enhance cyberattacks, making them more sophisticated and difficult to detect. This includes automated phishing campaigns, advanced malware, and AI-driven social engineering attacks.

Introduction: Testing the Waters

  • Ever since the tech boom of the 1970s, the industry has played a cat-and-mouse game between two camps: make and break. One side makes a product; on the other side is someone with the expertise and ideas to break it. Input from the adversary (the breaker) helps make an invention break-resistant, showing its creators the product's limitations and how it can be abused to the adversary's liking.
  • In recent times, the rise of LLMs (Large Language Models) and automation has helped humans process their work faster. Taking inputs from tools such as Bard and ChatGPT has become the norm.
  • Naturally, the adversary enters the picture, asking: how can the norm be broken or abused to carry out malicious objectives? This pursuit is fueled by various motivations.
  • Below, we detail several use cases in which researchers have observed AI being misused:

1. Voice Cloning

  • Want to prank a friend? Over on GitHub, there is an abundance of open-source projects that can be trained at will on a target's voice. The same capability lends itself to social engineering, which threat actors have duly abused. In recent times, adversaries have used voice cloning to imitate people in authority, fixing their targets through watchful reconnaissance.
  • Sample snippets of the target's voice are used to train the model, as the sketch below shows. Cloned voices have then been used to pressure employees of the target company into transferring exorbitant sums to adversary-controlled bank accounts. Because awareness of this crime is still low, many people remain gullible enough to fall for such scams.
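
To illustrate how low the barrier to entry is, here is a minimal sketch using the open-source Coqui TTS library and its XTTS v2 voice-cloning model, one of many such projects available on GitHub. The library choice is ours, not a specific repository from Figure 1; the file names and spoken text are illustrative placeholders.

  # A hedged sketch of open-source voice cloning; pip install TTS
  # (Coqui TTS). Model weights are downloaded on first use.
  from TTS.api import TTS

  # Load a multilingual voice-cloning model.
  tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

  # "reference.wav" stands in for a short recorded sample of the
  # target's voice; a few seconds of audio are typically enough.
  tts.tts_to_file(
      text="Hi, it's me. Please call me back when you get this.",
      speaker_wav="reference.wav",
      language="en",
      file_path="cloned_output.wav",
  )

That a convincing clone takes roughly a dozen lines of code is precisely what makes the scams described above so cheap to mount.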

  

Figure 1 - Open-source repositories that facilitate voice-cloning scams by training models on the voice of the person being impersonated

2. Malware / Ransomware Code Generation

In late November 2022, OpenAI released ChatGPT as a free research preview. Upon getting their hands on the new toy on the block, threat actors started devising ideas on how it could be exploited.

One instance was a Python script that, when deployed in a victim's environment, extracted files of certain pre-specified formats and uploaded them to a hardcoded FTP server controlled by the adversary. This led to more experiments by actors testing the limits of ChatGPT and other tools to generate the following:

  • Phishing Emails
  • Malicious Scripts
  • Deepfake Videos and Images

              

Figure 2 - Thread on an English-speaking cybercrime forum in late 2022, where a script generated by ChatGPT was presented along with a detailed outline of its functionality

Figure 3 - A deepfake video created by taking the voice of former US President Barack Obama and superimposing it over a speech being voiced by another individual

3. Malicious ChatGPT aka WormGPT


WormGPT, an offshoot of mainstream AI tools, underscores the dark side of AI in cybercrime. Designed to bypass restrictions and support illicit activities, WormGPT facilitates the generation of phishing emails and malware code, representing a significant threat to cybersecurity.


Born out of necessity, as jailbreaks, bypasses, and malicious prompts were being blocked at every turn on ChatGPT, WormGPT promised to support all manner of illicit activity, including the generation of clever phishing emails and malware code. WormGPT initially made its appearance on underground cybercrime forums, where it is available for the price of a premium subscription.

  • The models used to train WormGPT remain confidential, to prevent copycat services from replicating its capabilities and eroding its value.
  • Additionally, WormGPT promises privacy of user data, not leaking one user's prompts or generated results to others. Around this time, ChatGPT was facing privacy issues of its own, including the exposure of users' prompt histories and the ingestion of private or proprietary company information to generate results.

Figure 4 - Sales advertisement for WormGPT on an underground forum, with its features outlined

4. Usage of AI-Generated Videos to Spread Information Stealer Malware

The generative AI boom that ChatGPT came to symbolize brought a bloom of tools, Midjourney among them. Midjourney allows users to create AI-generated images from one-liner prompts, and companion tools can assemble such output into full videos. These videos can be flooded across YouTube and other video-sharing platforms in the guise of tutorials, luring gullible viewers into downloading cracked software or riskware.

In the descriptions of these videos are links to cracked versions of legitimate software, such as Adobe Photoshop, media player software, and AutoCAD, hosted on sketchy file-hosting platforms such as Mega and Mediafire.
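
As a simple illustration of the defender's side, the hedged sketch below flags links in a video description that point to file-hosting services commonly abused in these campaigns. The domain list and the sample description are illustrative assumptions, not CloudSEK detection logic.

  import re

  # Illustrative subset of file hosts abused in such lures
  # (an assumption for this sketch, not an official blocklist).
  SUSPECT_HOSTS = {"mega.nz", "mediafire.com"}

  URL_RE = re.compile(r"https?://([^/\s]+)\S*")

  def flag_suspect_links(description: str) -> list[str]:
      """Return URLs in a video description that point to suspect file hosts."""
      flagged = []
      for match in URL_RE.finditer(description):
          host = match.group(1).lower().removeprefix("www.")
          if any(host == d or host.endswith("." + d) for d in SUSPECT_HOSTS):
              flagged.append(match.group(0))
      return flagged

  # Example: a description mimicking the tutorial-video lures described above.
  desc = "FREE Adobe Photoshop! Download: https://www.mediafire.com/file/abc123"
  print(flag_suspect_links(desc))  # ['https://www.mediafire.com/file/abc123']

In practice, such a check would be one weak signal among many (account age, upload cadence, description templates), but it captures the pattern these campaigns rely on.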

Read How Threat Actors are Exploiting ChatGPT's Popularity to Spread Malware via Compromised Facebook Accounts Putting Over 500,000 People at Risk

What is Infostealer Malware?

  • Infostealer malware, also known as information-stealing malware, is a type of malicious software designed to infiltrate a computer system or network, collect sensitive information, and transmit it to dedicated infrastructure controlled by adversaries (command-and-control servers). This type of malware is specifically created to steal data such as login credentials, personal information, financial data, and other confidential or proprietary information, much of which is auto-saved in web browsers.
Figure 5 - Example of a YouTube video outlining steps to download and install a cracked version of Adobe's proprietary software, Adobe Premiere Pro, from a third-party file-hosting service

 

Figure 6 - Research on the above use case by Pavan gives a gist of the risk involved in downloading software from untrusted sources, a practice facilitated by the rapid flooding of AI-generated videos on YouTube

Read How Threat Actors Abuse AI-Generated YouTube Videos to Spread Stealer Malware

Conclusion

While AI has brought significant advancements to cybersecurity, it has also introduced new challenges and potential harm. AI-driven cyber attacks are becoming more sophisticated, making it difficult for traditional defenses to keep up. Additionally, AI can be used to automate and amplify cyber threats, enabling faster and more targeted attacks.

On the defensive side, overreliance on AI in cybersecurity can lead to vulnerabilities and false positives, potentially disrupting legitimate operations. Moreover, the scarcity of skilled professionals who can effectively manage and fine-tune AI-based security systems poses a significant challenge.

To mitigate the harm caused by AI in cybersecurity, a balanced approach is crucial. Combining AI with human expertise, regular system monitoring, and continuous improvement in threat detection and response strategies is essential to effectively defend against evolving cyber threats. Additionally, strong regulations and ethical considerations are necessary to ensure responsible AI use in cybersecurity, thereby maximizing its benefits while minimizing its risks.
