This post explains an attack chain for the ChatGPT macOS application. Through prompt injection from untrusted data, attackers could insert long-term p

Spyware Injection Into Your ChatGPT's Long-Term Memory (SpAIware) · Embrace The Red

submited by
Style Pass
2024-09-21 04:00:02

This post explains an attack chain for the ChatGPT macOS application. Through prompt injection from untrusted data, attackers could insert long-term persistent spyware into ChatGPT’s memory. This led to continuous data exfiltration of any information the user typed or responses received by ChatGPT, including any future chat sessions.

OpenAI implemented a mitigation for a common data exfiltration vector end of last year via a call to an API named url_safe. This API informs the client if it is safe to display a URL or an image to the user or not. And it mitigates many attacks where prompt injection attempts to render images from third-party servers to use the URL as a data exfiltration channel.

As highlighted back then, the iOS application remained vulnerable. This was because the security check (url_safe) was performed client-side. Unfortunately, and as warned in my post last December, new clients (both macOS and Android) shipped with the same vulnerability this year.

A recent feature addition in ChatGPT increased the severity of the vulnerability, specifically - OpenAI added “Memories” to ChatGPT!

Leave a Comment