Artificial Intelligence (AI) has emerged as a revolutionary force in the cybersecurity domain, offering both robust defense mechanisms and, paradoxically, new avenues for cybercriminals. Its dual-use nature presents a unique challenge, where the very tools designed to enhance security are also exploited for malicious purposes.
Executive Summary
What is the Threat?
- AI, while a powerful tool for enhancing cybersecurity, also poses significant threats to the field.
- These threats stem from the dual-use nature of AI: the same technology that can be used to bolster cybersecurity can also be exploited by cybercriminals and state-sponsored actors, as the use cases covered below illustrate.
What is the Potential Impact?
- Cyber risks including, but not limited to, infostealer malware infections, business email compromise, and an increased volume of phishing attempts.
- Malicious actors can use AI to automate and enhance cyberattacks, making them more sophisticated and difficult to detect. This includes automated phishing campaigns, advanced malware, and AI-driven social engineering attacks.
Introduction: Testing the Waters
- Ever since the tech boom that began in the 1970s, the technology world has been a cat-and-mouse game between two sides: make and break. One side makes a product; on the other side is someone with the expertise and ideas to break it. Taking input from the adversary (the breaker) helps make an invention break-resistant, because it shows the creators its limitations and the ways the product can be abused to the adversary's liking.
- In recent times, the rise of LLMs (Large Language Models) and automation has helped humans process their work faster. Taking input from tools such as Bard and ChatGPT has become the norm.
- Naturally, the adversary enters the picture, asking: "How can the norm be broken or abused to carry out malicious requirements?" - a pursuit fueled by a variety of motivations.
- Below, we detail several use cases in which researchers have observed AI being misused:
1. Voice Cloning
- Want to prank a friend? Over on GitHub, there is an abundance of projects built on AI speech-synthesis models that can be trained at will on a target's voice. The same capability enables social engineering, and threat actors have duly abused it: in recent campaigns, adversaries have cloned the voices of people in authority, fixing their targets through watchful reconnaissance.
- Short sample snippets of the target's voice are used to train the model. The cloned voice has then been used to pressure employees of the targeted company into transferring exorbitant sums to bank accounts controlled by the adversaries. Because awareness of this crime is still low, many people remain gullible enough to fall for such scams (see the sketch below for how accessible the tooling has become).
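To illustrate how low the barrier to entry has become, below is a minimal sketch using the open-source Coqui TTS library and its XTTS v2 voice-cloning model. The library, model name, and file names here are assumptions drawn from Coqui's public documentation, not tooling recovered from any incident; a few seconds of reference audio are enough to synthesize speech in a cloned voice.

```python
# pip install TTS  -- the open-source Coqui TTS package (assumed API, per its docs)
from TTS.api import TTS

# Load a pretrained multilingual voice-cloning model (XTTS v2).
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

# Synthesize speech in the voice of the reference clip.
# "reference_clip.wav" is a hypothetical few-second sample, used with consent.
tts.tts_to_file(
    text="Hi, it's me. I need you to call me back urgently.",
    speaker_wav="reference_clip.wav",
    language="en",
    file_path="cloned_output.wav",
)
```

That a convincing clone takes roughly five lines of code is precisely why awareness, and out-of-band verification of urgent voice requests, matters.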
2. Malware / Ransomware Code Generation
In late November 2022, OpenAI released ChatGPT as a free research preview. Upon getting their hands on the new toy on the block, threat actors began devising ideas for how it could be exploited.
One instance was a Python script that, when deployed in a victim's environment, could extract files of certain pre-specified formats and upload them to a hardcoded FTP server controlled by the adversary (a defensive detection sketch follows the list below). This led to further experiments by actors testing the limits of ChatGPT and other tools to generate the following:
- Phishing Emails
- Malicious Scripts
- Deepfake Videos and Images
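Rather than reproduce the exfiltration script described above, the sketch below takes the defensive angle: that channel relies on plaintext FTP, so unexpected outbound connections to TCP port 21 are a simple, high-signal indicator on most workstations. This is a minimal sketch using the Scapy packet library; it assumes Scapy is installed and the script runs with packet-capture privileges.

```python
# pip install scapy  -- must run with capture privileges (root/Administrator)
from scapy.all import IP, sniff

def flag_ftp(pkt):
    # Each hit is an outbound FTP control-channel packet worth investigating.
    print(f"Outbound FTP traffic: {pkt[IP].src} -> {pkt[IP].dst}:21")

# The BPF filter keeps only TCP traffic destined for port 21 (plaintext FTP).
sniff(filter="tcp dst port 21", prn=flag_ftp, store=False)
```

In production this logic would live in an IDS or EDR rule rather than an ad hoc script, but the principle is the same: legacy plaintext protocols leaving the network deserve scrutiny.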
3. Malicious ChatGPT aka WormGPT
WormGPT, an offshoot of mainstream AI tools, underscores the dark side of AI in cybercrime. Designed to bypass restrictions and support illicit activities, WormGPT facilitates the generation of phishing emails and malware code, representing a significant threat to cybersecurity.
Born out of necessity, as jailbreaks, bypasses, and malicious prompts were being blocked at every turn on ChatGPT, WormGPT promised to support all illicit activities, including the generation of clever phishing emails and malware code. WormGPT first made its appearance on underground cybercrime forums, and it is available to individuals for the price of a premium subscription.
- The models used to train WormGPT remain confidential, preventing copycat services that could offer the same capabilities and erode its value.
- Additionally, WormGPT promises privacy of user data, not leaking one user's prompts or generated results to others. Around this time, ChatGPT was facing privacy issues of its own, including the exposure of users' prompt history and the ingestion of private or proprietary company information submitted to generate results.
4. Usage of AI-Generated Videos to Spread Information Stealer Malware
The success of ChatGPT brought a bloom of generative AI tools, one of which is Midjourney. Midjourney allows users to create AI-generated images and videos from one-line prompts. These videos can be flooded across YouTube and other video-sharing platforms in the guise of tutorials, luring gullible viewers into downloading cracked software or riskware.
The descriptions of these videos contain links to supposedly cracked versions of legitimate software, such as Adobe Photoshop, media player software, and AutoCAD, hosted on sketchy file-hosting platforms such as Mega and MediaFire (a simple pre-execution check for such downloads is sketched below).
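A practical precaution, wherever a download's provenance is doubtful, is to check its hash against a reputation service before running it. Below is a minimal sketch against VirusTotal's public v3 file-report endpoint; the endpoint and response fields follow VirusTotal's public documentation, and the API key and file path are placeholders.

```python
import hashlib
import sys

import requests  # pip install requests

VT_API_KEY = "YOUR_API_KEY"  # free key from a virustotal.com account

def sha256_of(path: str) -> str:
    # Hash the file in chunks so large installers don't exhaust memory.
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

file_hash = sha256_of(sys.argv[1])
resp = requests.get(
    f"https://www.virustotal.com/api/v3/files/{file_hash}",
    headers={"x-apikey": VT_API_KEY},
)
if resp.status_code == 404:
    print("Hash unknown to VirusTotal - treat the file with caution.")
else:
    stats = resp.json()["data"]["attributes"]["last_analysis_stats"]
    print(f"{stats['malicious']} malicious / {stats['suspicious']} suspicious verdicts")
```

A clean result is not proof of safety, but a nonzero malicious count on a "cracked" installer is a strong signal to delete it.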
Read How Threat Actors are Exploiting ChatGPT's Popularity to Spread Malware via Compromised Facebook Accounts Putting Over 500,000 People at Risk
What is Infostealer Malware?
- Infostealer malware, also known as information-stealing malware, is a type of malicious software designed to infiltrate a computer system or network, collect sensitive information, and transmit it to dedicated infrastructure controlled by adversaries (command-and-control servers). This type of malware is specifically created to steal data such as login credentials, personal information, financial data, and other confidential or proprietary information, much of which is auto-saved in web browsers (a self-audit sketch follows below).
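To get a sense of how much auto-saved data a single browser profile exposes, the self-audit sketch below counts the saved-login entries in Chrome's local credential database. The profile path and the `logins` table schema are assumptions based on Chrome's default layout on Windows; only origin URLs are read, and nothing is decrypted.

```python
import os
import shutil
import sqlite3
import tempfile

# Default Chrome profile path on Windows (an assumption; adjust per OS/profile).
LOGIN_DB = os.path.expanduser(
    r"~\AppData\Local\Google\Chrome\User Data\Default\Login Data"
)

# Work on a copy so we don't contend with Chrome's own lock on the database.
with tempfile.TemporaryDirectory() as tmp:
    copy_path = os.path.join(tmp, "Login Data")
    shutil.copy2(LOGIN_DB, copy_path)
    conn = sqlite3.connect(copy_path)
    rows = conn.execute("SELECT origin_url FROM logins").fetchall()
    conn.close()

print(f"{len(rows)} saved logins an infostealer on this machine could target")
```

Every entry in that count is a credential a successful infostealer infection would attempt to harvest, which is a good argument for a dedicated password manager over browser auto-save.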
Read How Threat Actors Abuse AI-Generated Youtube Videos to Spread Stealer Malware
Conclusion
While AI has brought significant advancements to cybersecurity, it has also introduced new challenges and potential for harm. AI-driven cyberattacks are becoming more sophisticated, making it difficult for traditional defenses to keep up. Additionally, AI can be used to automate and amplify cyber threats, enabling faster and more targeted attacks.
On the defensive side, overreliance on AI in cybersecurity can lead to vulnerabilities and false positives, potentially disrupting legitimate operations. Moreover, the scarcity of skilled professionals who can effectively manage and fine-tune AI-based security systems poses a significant challenge.
To mitigate the harm caused by AI in cybersecurity, a balanced approach is crucial. Combining AI with human expertise, regular system monitoring, and continuous improvement in threat detection and response strategies is essential to effectively defend against evolving cyber threats. Additionally, strong regulations and ethical considerations are necessary to ensure responsible AI use in cybersecurity, thereby maximizing its benefits while minimizing its risks.
References
- Intelligence source and information reliability - Wikipedia
- Traffic Light Protocol - Wikipedia
- Cyber criminals are exploiting popularity of ChatGPT to spread malware through hijacked Facebook accounts
- WormGPT – The Generative AI Tool Cybercriminals Are Using to Launch Business Email Compromise Attacks
- OpwnAI: Cybercriminals starting to use ChatGPT