
Examples of ChatGPT being used to develop malicious tools

Check Point Research (CPR) is seeing the first instances of cybercriminals using ChatGPT to develop malicious tools.

VoicenData Bureau

CPR is sharing three cases of recent observations to warn the public of the growing interest by cybercriminals in ChatGPT to scale and teach malicious activity.


Check Point Research (CPR) is seeing the first instances of cybercriminals using ChatGPT to develop malicious tools. In underground hacking forums, threat actors are creating infostealers and encryption tools, and facilitating fraud activity. CPR warns of the fast-growing interest in ChatGPT by cybercriminals and shares three recent cases, with screenshots, of the development and sharing of malicious tools using ChatGPT.

Case 1: Threat actor recreates malware strains for an infostealer.

Case 2: Threat actor creates multi-layer encryption tool. 


Case 3: Threat actor shows how to create Dark Web marketplace scripts for trading illegal goods using ChatGPT.


Case 1: Creating an infostealer

On December 29, 2022, a thread named “ChatGPT - Benefits of Malware” appeared on a popular underground hacking forum. The publisher of the thread disclosed that he was experimenting with ChatGPT to recreate malware strains and techniques described in research publications and write-ups about common malware. While this individual could be a tech-oriented threat actor, the posts appeared aimed at showing less technically capable cybercriminals how to utilise ChatGPT for malicious purposes, with real examples they could immediately use.


Case 2: Creating a multi-layered encryption tool

On December 21, 2022, a threat actor dubbed USDoD posted a Python script, which he emphasized was the “first script he ever created”. When another cybercriminal commented that the style of the code resembled OpenAI code, USDoD confirmed that OpenAI gave him a “nice hand to finish the script with a nice scope.” This could mean that potential cybercriminals with little to no development skills could leverage ChatGPT to develop malicious tools and become fully-fledged cybercriminals with technical capabilities. All of the aforementioned code can, of course, be used in a benign fashion. However, the script can easily be modified to encrypt someone’s machine completely without any user interaction; for example, once its script and syntax problems are fixed, the code could potentially be turned into ransomware.
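The actor's script is not reproduced in this report. Purely as a benign, hypothetical illustration of what "multi-layered encryption" means conceptually, the sketch below applies several independent one-time-pad (XOR) layers to a message using only Python's standard library; the function names are ours, not from the forum post, and this is a pedagogical toy, not production cryptography:

```python
import secrets

def xor_layer(data: bytes, key: bytes) -> bytes:
    """XOR each byte of data with the corresponding key byte (one-time-pad style)."""
    return bytes(b ^ k for b, k in zip(data, key))

def encrypt_layered(plaintext: bytes, num_layers: int = 2):
    """Apply several independent XOR layers.

    Returns the ciphertext and the list of random keys; all keys are
    needed to recover the plaintext.
    """
    keys = [secrets.token_bytes(len(plaintext)) for _ in range(num_layers)]
    ciphertext = plaintext
    for key in keys:
        ciphertext = xor_layer(ciphertext, key)
    return ciphertext, keys

def decrypt_layered(ciphertext: bytes, keys) -> bytes:
    """Peel the XOR layers off again (XOR is self-inverse)."""
    plaintext = ciphertext
    for key in reversed(keys):
        plaintext = xor_layer(plaintext, key)
    return plaintext
```

The point of the illustration is that layering is mechanically trivial: each layer is just another pass over the data with a fresh key, which is why even a novice, with AI assistance, can assemble such a tool.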

Case 3: Using ChatGPT to facilitate fraud activity

A cybercriminal showed how to create Dark Web marketplace scripts using ChatGPT. A marketplace's main role in the underground illicit economy is to provide a platform for the automated trade of illegal or stolen goods, such as stolen accounts or payment cards, malware, or even drugs and ammunition, with all payments made in cryptocurrencies.

Sergey Shykevich, Threat Intelligence Group Manager at Check Point, said: “Cybercriminals are finding ChatGPT attractive. In recent weeks, we’re seeing evidence of hackers starting to use it to write malicious code. ChatGPT has the potential to speed up the process for hackers by giving them a good starting point. Just as ChatGPT can be used for good to assist developers in writing code, it can also be used for malicious purposes. Although the tools that we analyze in this report are pretty basic, it’s only a matter of time until more sophisticated threat actors enhance the way they use AI-based tools. CPR will continue to investigate ChatGPT-related cybercrime in the weeks ahead.”

Tags: cybersecurity, cyber-attack, checkpoint, chatgpt, malicious-tools