ChatGPT finds malware in NPM and Python packages


Security scanner provider Socket also uses OpenAI's chatbot to inspect packages.

While the chatbot ChatGPT still gets some details and facts wrong, its summaries can apparently be used to find malware and security vulnerabilities. This is reported by The Register, citing the provider Socket, which sells a security scanner for Python and JavaScript packages under a freemium model. According to the company, the team found a total of 227 vulnerabilities using ChatGPT.

The result of using ChatGPT surprised Socket, said CEO Feross Aboukhadijeh: “It worked much better than expected (…) Now I’m sitting on a few hundred vulnerabilities and malware packages and we’re rushing to report them as quickly as possible.” Socket is designed to detect so-called supply chain attacks, i.e. malicious code smuggled in via project dependencies.

It is not surprising that ChatGPT is in principle suited to such work: the system is trained on large amounts of code and can therefore recognize common patterns. The vulnerabilities found, however, span numerous categories, such as “data leaks, SQL injections, hard-coded credentials, potential privilege escalation, and backdoors,” according to the report.
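To illustrate what pattern-based detection of two of these categories can look like, here is a minimal, hypothetical sketch in Python. It is not Socket's actual scanner and has nothing to do with how ChatGPT analyzes code; it merely shows the kind of "known pattern" (hard-coded credentials, naive SQL string formatting) that automated tools look for. The regexes and labels are illustrative assumptions.

```python
import re

# Toy patterns for two vulnerability classes named in the report.
# Illustrative only -- real scanners use far more robust analysis.
PATTERNS = {
    "hard-coded credential": re.compile(
        r"(password|api_key|secret|token)\s*=\s*[\"'][^\"']+[\"']",
        re.IGNORECASE,
    ),
    "possible SQL injection": re.compile(
        r"execute\(\s*[\"'].*%s.*[\"']\s*%",  # query built via % formatting
        re.IGNORECASE,
    ),
}

def scan_source(source: str) -> list[tuple[int, str]]:
    """Return (line number, finding label) pairs for suspicious lines."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for label, pattern in PATTERNS.items():
            if pattern.search(line):
                findings.append((lineno, label))
    return findings

sample = (
    'api_key = "sk-123456"\n'
    'cursor.execute("SELECT * FROM users WHERE id = %s" % user_id)'
)
print(scan_source(sample))
```

A check like this is cheap to run over many packages, but it only catches the obvious cases; the appeal of an LLM-based review is that it can flag suspicious code that matches no pre-written rule.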

Not all of these vulnerabilities have been made public, let alone fixed. However, The Register was able to verify some of the already published ones and listed several examples.

AI models can be of great help in automatically finding known patterns across hundreds of thousands of packages; manual review at that scale is simply too much effort. But relying on them alone will not be enough: an AI system will likely fail against particularly sophisticated attacks and tactics, especially ones that are not yet widespread.
