Google’s NotebookML is an experimental project that was released last year. It allows users to upload files and analyze them with a large langua

Bobby Tables but with LLM Apps - Google NotebookML Data Exfiltration · Embrace The Red

submited by
Style Pass
2024-04-16 04:00:03

Google’s NotebookML is an experimental project that was released last year. It allows users to upload files and analyze them with a large language model (LLM).

However, it is vulnerable to Prompt Injection, meaning that uploaded files can manipulate the chat conversation and control what the user sees in responses.

There is currently no known solution to these kinds of attacks, so users can’t implicitly trust responses from large language model applications when untrusted data is involved. Additionally though NotebookML is also vulnerable to data exfiltration when processing untrusted data.

Besides displaying incorrect information to the user (e.g scamming, etc.) during a prompt injection attack, NotebookML can also be instructed by an attacker to automatically render hyperlinks and images which can be used as a data exfiltration channel.

Users can usually control their own data in systems, like a profile description or name. This information might later be analyzed with other system, including LLM applications that are vulnerable to adversarial examples and prompt injection, like NotebookML.

Leave a Comment