There is a scary security risk associated with ChatGPT after the latest update. Is your data at risk?

The introduction of file uploads in ChatGPT Plus is creating some unwelcome security problems.

The latest ChatGPT update, which incorporates the Code Interpreter, has heightened awareness of potential security issues. Research by security expert Johann Rehberger, followed by additional investigation from Tom’s Hardware, has revealed notable security flaws in ChatGPT. These vulnerabilities are specifically linked to the newly introduced file-upload feature.

OpenAI’s latest update to ChatGPT Plus introduced a range of new features, such as DALL-E image generation and the Code Interpreter, which enables the execution of Python code and the analysis of uploaded files. However, the code runs in a sandbox environment that is, unfortunately, vulnerable to prompt injection attacks.


For a while now, ChatGPT has been grappling with a known vulnerability: it can be manipulated into executing instructions fetched from a third-party URL. The attack entails encoding uploaded files into a URL-friendly string and sending that data to a malicious website. While the likelihood of such an attack depends on specific conditions, such as the user actively pasting a malicious URL into ChatGPT, the associated risks are worrisome. The threat could manifest in diverse scenarios, for example through a trusted website falling victim to a malicious prompt, or through social engineering tactics.

Tom’s Hardware conducted noteworthy testing to assess how exposed users are to this attack. The exploit involved creating a fabricated environment-variables file and using ChatGPT to process it and inadvertently send its contents to an external server. The exploit’s effectiveness varied across sessions, with instances where ChatGPT refused to load external pages or transmit file data, but it still underscores significant security concerns. This is particularly troubling given the AI’s ability to read and execute Linux commands and to handle user-uploaded files within a Linux-based virtual environment.
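A decoy file of the kind reportedly used in that test can be reproduced as a simple fixture. The variable names and canary values below are illustrative assumptions, not the ones Tom’s Hardware actually uploaded: the point is that the "secrets" are fabricated, so if they ever appear in an external server’s logs, the exfiltration path is confirmed without exposing anything real.

```python
import os

# Fabricated environment variables: obvious canaries, no real credentials.
FAKE_ENV = "\n".join([
    "AWS_ACCESS_KEY_ID=AKIAFAKECANARY000001",
    "AWS_SECRET_ACCESS_KEY=fake/canary/secret/value",
    "DB_PASSWORD=not-a-real-password",
])

def write_decoy(path: str) -> str:
    """Write the decoy .env file that gets uploaded to ChatGPT."""
    with open(path, "w") as f:
        f.write(FAKE_ENV + "\n")
    return path
```

Using canaries rather than live secrets is standard practice when probing a suspected exfiltration channel: the test proves the leak exists while keeping the blast radius at zero.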


According to Tom’s Hardware’s findings, this security loophole, though seemingly improbable to exploit, holds substantial significance. Ideally, ChatGPT should refuse to execute instructions originating from external web pages, yet it appears to do so. TheOrcTech sought comment from OpenAI on the matter, but as of publication, no response had been received.
