ChatGPT-generated code is often insecure

OpenAI’s large language model ChatGPT can generate code, but it often produces insecure code without alerting users to its shortcomings, according to research by computer scientists from the Université du Québec in Canada.

The researchers asked ChatGPT to generate 21 programs in five programming languages to illustrate specific security vulnerabilities such as memory corruption, denial of service, and improperly implemented cryptography.
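The paper does not reproduce the generated programs themselves, but the memory corruption class it tested is typified by the classic unchecked buffer copy below. This is a minimal illustrative C sketch, not code from the study:

    #include <stdio.h>
    #include <string.h>

    /* Attacker-controlled input is copied into a fixed-size stack
     * buffer with no length check; anything longer than 15 bytes
     * overflows `buf` and corrupts adjacent stack memory. */
    void greet(const char *name) {
        char buf[16];
        strcpy(buf, name);        /* no bounds check: overflow */
        printf("Hello, %s\n", buf);
    }

    int main(int argc, char **argv) {
        if (argc > 1)
            greet(argv[1]);       /* argv[1] may exceed 15 bytes */
        return 0;
    }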

ChatGPT produced only five secure programs out of 21 on its first attempt. Further prompting led the model to produce seven more secure programs, though these were secure only with respect to the specific vulnerability being evaluated.

The researchers found that ChatGPT failed to recognise that the code it generated was insecure and only provided useful guidance after it was prompted to remediate problems. 

Additionally, the researchers noted that ChatGPT did not assume an adversarial model of code execution and repeatedly informed them that security problems could be avoided by not feeding invalid input to the vulnerable program.
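By contrast, a version of the routine above written against an adversarial model treats every input as untrusted and checks it before use. This is a hedged sketch of the general remediation pattern, not code from the paper:

    #include <stdio.h>
    #include <string.h>

    /* Rejects missing or oversized input rather than assuming the
     * caller will only ever supply valid data. */
    int greet_checked(const char *name) {
        char buf[16];
        if (name == NULL || strlen(name) >= sizeof(buf))
            return -1;            /* reject untrusted oversized input */
        strcpy(buf, name);        /* safe: length verified above */
        printf("Hello, %s\n", buf);
        return 0;
    }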

When asked, the model would admit that the code it suggested contained critical vulnerabilities, but it did not flag these unprompted. The authors suggested this is particularly problematic because knowing which questions to ask presupposes familiarity with the specific vulnerabilities and coding techniques involved.

The researchers also observed that ChatGPT’s default response to security concerns was to recommend using only valid inputs, which is a non-starter in the real world, where attackers deliberately supply malformed input.

Furthermore, the authors noted an ethical inconsistency: ChatGPT will refuse to create attack code, yet it will readily create vulnerable code.

In related news, Google today gave Bard, its ChatGPT rival, coding skills. The update makes it possible to use Bard for code generation, debugging, and code explanation. More than 20 programming languages are supported, including C++, Go, Java, JavaScript, Python, and TypeScript.

Bard’s programming skills are still at an early stage, and Google is careful to advise that all code produced by the chatbot should be double-checked for bugs and vulnerabilities.

(Photo by Hennie Stander on Unsplash)

Author

  • Ryan Daws

    Ryan is a senior editor at TechForge Media with over a decade of experience covering the latest technology and interviewing leading industry figures. He can often be sighted at tech conferences with a strong coffee in one hand and a laptop in the other. If it's geeky, he’s probably into it. Find him on Twitter (@Gadget_Ry) or Mastodon (@gadgetry@techhub.social)
