OpenAI’s ChatGPT, an AI tool taking the internet by storm, can be used to write malicious code: CPR report

ChatGPT is a revolutionary artificial intelligence technology that is quickly gaining popularity in the fields of natural language processing, data science, and machine learning. The AI tool, developed by OpenAI, has drawn attention primarily for its ability to converse with humans more naturally than other AI systems.

Like any other technology, AI-powered ChatGPT can be used for good or for ill. Recognising the tool's potential, cybercriminals have already begun leveraging ChatGPT in their attacks.


What is ChatGPT?

ChatGPT is an AI-powered chatbot launched by OpenAI in November 2022. According to OpenAI, ChatGPT is fine-tuned from a model in the GPT-3.5 series, which finished training in early 2022.

“ChatGPT is a sibling model to InstructGPT, which is trained to follow an instruction in a prompt and provide a detailed response,” says OpenAI.

In a recently published report, cybersecurity company Check Point Research (CPR) reveals that several major underground hacking communities have already shown the first instances of cybercriminals using OpenAI to develop malicious tools.

“As we suspected, some cases clearly showed that many cybercriminals using OpenAI have no development skills,” reveals the report. 

CPR detailed several instances that indicate cybercriminals' growing interest in ChatGPT.

Case 1:

On December 29, 2022, a thread named “ChatGPT – Benefits of Malware” appeared on a popular underground hacking forum. 

According to the analysis by CPR, the publisher of the thread disclosed that he was experimenting with ChatGPT to recreate malware strains and techniques described in research publications and write-ups about common malware. 

“Our analysis of the script confirms the cybercriminal’s claims. This is indeed a basic stealer which searches for 12 common file types (such as MS Office documents, PDFs, and images) across the system. If any files of interest are found, the malware copies the files to a temporary directory, zips them, and sends them over the web. It is worth noting that the actor didn’t bother encrypting or sending the files securely so that the files might end up in the hands of 3rd parties as well,” reveals the CPR report. 

Case 2:

On December 21, 2022, a threat actor dubbed USDoD posted a Python script that he emphasised was the "first script he ever created."

When another cybercriminal commented that the style of the code resembles OpenAI code, USDoD confirmed that OpenAI gave him a “nice [helping] hand to finish the script with a nice scope.”

“Our analysis of the script verified that it is a Python script that performs cryptographic operations. To be more specific, it is a hodgepodge of different signing, encryption, and decryption functions,” the report states.
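The script USDoD posted is not public, so for readers unfamiliar with such code, here is a harmless, purely hypothetical sketch of what "signing" functions in Python can look like, using only the standard library's HMAC support (this is an illustration, not the actor's script):

```python
import hashlib
import hmac


def sign(key: bytes, message: bytes) -> str:
    """Produce an HMAC-SHA256 signature for a message."""
    return hmac.new(key, message, hashlib.sha256).hexdigest()


def verify(key: bytes, message: bytes, signature: str) -> bool:
    """Check a signature in constant time to resist timing attacks."""
    return hmac.compare_digest(sign(key, message), signature)


if __name__ == "__main__":
    key = b"demo-key"
    sig = sign(key, b"hello")
    print(verify(key, b"hello", sig))  # prints True
```

Legitimate code like this is exactly why CPR's point matters: the same building blocks that secure software can, in other hands, be assembled into ransomware-style encryption routines.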

Case 3:

The third case shared by CPR discloses a discussion with the title “Abusing ChatGPT to create Dark Web Marketplaces scripts.” In this thread, the cybercriminal demonstrates how easy it is to create a Dark Web marketplace using ChatGPT.

“The primary role of the marketplace in the illicit underground economy is to provide a platform for the automated trade of illegal or stolen goods like stolen accounts or payment cards, malware, or even drugs and ammunition, with all payments in cryptocurrencies,” explains the report.  

To illustrate, the cybercriminal published a piece of code that uses a third-party API to get up-to-date cryptocurrency (Monero, Bitcoin, and Ethereum) prices as part of the Dark Web market payment system.
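The report does not name the third-party API or publish the code, but price-lookup logic of this kind is mundane. A hypothetical sketch of the parsing step, assuming a response shaped like a typical public price API (coin names mapped to per-currency prices):

```python
import json


def parse_prices(raw_json: str) -> dict:
    """Map coin names to their USD prices from a price-API response."""
    data = json.loads(raw_json)
    return {coin: fields["usd"] for coin, fields in data.items()}


# Sample response with illustrative (not real) prices.
sample = (
    '{"bitcoin": {"usd": 16800.0}, '
    '"monero": {"usd": 150.2}, '
    '"ethereum": {"usd": 1250.5}}'
)
prices = parse_prices(sample)
print(prices["bitcoin"])
```

The triviality of this code underlines CPR's concern: nothing in a marketplace payment backend requires real development skill once an AI assistant can generate the glue.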

Beyond these cases, CPR notes that several threat actors have opened discussions focused on using ChatGPT for fraudulent schemes.

“Most of these focused on generating random art with another OpenAI technology (DALLE2) and selling them online using legitimate platforms like Etsy. In another example, the threat actor explains how to generate an e-book or short chapter for a specific topic (using ChatGPT) and sells this content online,” concludes the report. 
