Detecting LLM-Generated Text with Binoculars


The ability to detect AI-generated text is important not only for academic integrity, but also because of misinformation, security, and copyright concerns. A new method for detecting machine-generated text, called Binoculars, achieves over 90% detection accuracy at a 0.01% false positive rate. In this notebook, I annotate key parts of the paper, explaining the mechanisms behind this new method and implementing it piece-by-piece. Code from the original paper is available here and this Jupyter Notebook is available here.

The motivation behind LLM detection is harm reduction: tracing text origins, blocking spam, and identifying fake news produced by LLMs. Preemptive detection methods attempt to "watermark" generated text, but this requires full control of the generating models, which already seems impossible. More recent work has therefore focused on post-hoc detection methods, which can be used without the cooperation of the text's author. The paper's authors suggest that post-hoc detectors fall into two main groups. The first finetunes a pretrained language model to perform binary classification (human vs. machine). Many additional techniques make this approach more effective, but every implementation requires training on text produced by the target model, which is computationally expensive and hard to keep current as new models are continually open-sourced.
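For concreteness, here is a minimal sketch of that first family of detectors: finetuning a pretrained language model as a binary human-vs-machine classifier with Hugging Face's Trainer. The model name, toy examples, and hyperparameters are placeholders of my own, not choices made in the Binoculars paper.

```python
import torch
from torch.utils.data import Dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

# Assumption: any pretrained encoder works for this illustration.
model_name = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

class ToyDetectionDataset(Dataset):
    # Tiny in-memory dataset: label 0 = human-written, 1 = machine-generated.
    def __init__(self, texts, labels):
        self.enc = tokenizer(texts, truncation=True, padding=True, return_tensors="pt")
        self.labels = labels
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, i):
        item = {k: v[i] for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[i])
        return item

# Placeholder examples; a real detector needs large labeled corpora
# drawn from the target model(s), which is exactly the costly part.
train_ds = ToyDetectionDataset(
    ["I walked to the store this morning.",
     "As an AI language model, I can certainly help with that."],
    [0, 1],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="detector", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=train_ds,
)
trainer.train()
```

The key limitation shows up directly in the sketch: the labeled machine-generated examples must come from the models you want to detect, so the classifier has to be retrained as new models appear.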

The second group uses statistical signatures of machine-generated text, aiming for zero-shot detection. This allows text from a wide range of models to be detected with little to no training data. These methods rely on measures such as perplexity, perplexity curvature, log rank, intrinsic dimensionality, and n-gram analysis. For the high-stakes application of LLM detection, the Binoculars paper argues for evaluating detectors by their performance at a low false positive rate (FPR) and on out-of-domain samples, rather than by classifier AUC alone.
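As a point of reference for these statistical approaches, the snippet below computes the simplest such signal, perplexity under a causal language model. The model name and example text are illustrative assumptions of mine; the full Binoculars score, which combines perplexity with a cross-perplexity term, is implemented piece-by-piece later in the notebook.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Assumption: any small causal LM is fine for demonstrating perplexity.
model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def perplexity(text: str) -> float:
    # Average next-token cross-entropy over the passage, exponentiated.
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    return torch.exp(out.loss).item()

print(perplexity("The quick brown fox jumps over the lazy dog."))
```

Lower perplexity means the text is more predictable to the scoring model, which is the rough intuition zero-shot detectors build on; Binoculars refines it by normalizing against a second model rather than thresholding raw perplexity.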
