ChatGPT memory exploit left your private chat data exposed, but OpenAI fixed it
As a longtime ChatGPT user, I want AI chatbots to be very secure and private. That is, I want the contents of my chats to be protected from would-be attackers and from OpenAI itself. OpenAI can of course use chats to train future models if you allow it, but I don't.
While I have to trust that OpenAI handles the security and privacy aspects of the ChatGPT experience, I also know that other ChatGPT enthusiasts will test everything that's possible with the chatbot. In the process, they can potentially identify serious security issues.
Such is the case with security researcher Johann Rehberger, who developed a way to exploit the ChatGPT memory feature to exfiltrate user data. Rehberger crafted a prompt that planted permanent instructions in the chatbot's memory, including directions to steal all user data from new chats and send the information to a server he controlled.
That sounds scary, and it is. It's also not as dangerous as it might seem at first because there are several big twists. And before I even describe the exploit, you should also know that OpenAI has already fixed it.
The post ChatGPT memory exploit left your private chat data exposed, but OpenAI fixed it appeared first on BGR.