Research shows that ChatGPT can be used to create malicious code

OpenAI’s ChatGPT, a large language model (LLM)-based artificial intelligence (AI) text generator, can be used to generate code for malicious tasks, according to a research note published on Tuesday by cybersecurity firm Check Point. Researchers at Check Point used ChatGPT together with Codex, another OpenAI natural language model, issuing standard English instructions to generate code that could be used to launch online scams.

The biggest problem with such AI code generators is that natural language processing (NLP) engines can lower the barrier to entry for malicious hackers. With code generators that don’t require the user to be proficient in coding, any user can piece together the logical flow of a malicious tool from information on the open web and use that logic to generate the syntax for such tools.

To illustrate the issue, Check Point showed how an AI code generator can be used to prototype the basic code for a phishing email scam, then refined further with follow-up instructions in plain English. As the researchers demonstrated, any user with malicious intent could therefore assemble an entire hacking campaign using these tools.

Sergey Shykevich, threat intelligence team manager at Check Point, says that tools like ChatGPT have “the potential to dramatically change the cyber threat landscape”.

“Hackers can also iterate on malicious code using ChatGPT and Codex. AI technologies represent another step in the dangerous evolution of increasingly sophisticated and effective cyber capabilities,” he added.

To be sure, while open source language models can also be used to create cyber defense tools, the lack of safeguards against their use to create malicious tools is alarming. Check Point notes that while ChatGPT states that using its platform to create hacking tools is “against” its policy, there are no restrictions actually preventing users from doing so.

This isn’t the first time an AI language or image generation service has shown potential for abuse. Lensa, an AI-based image editing and retouching tool by US-based Prisma, was also flagged for lacking filters on nudity and body imagery, which can allow offensive, privacy-violating images of an individual to be created without consent.

Catch all the Tech News and Live Mint updates.
