Wikipedia: Large language models

Large language models (LLMs) are computer programs for natural language processing that use deep learning and neural networks, such as GPT-3. This policy covers how LLMs may and may not be used on Wikipedia to generate new text or modify existing text. Potential problems include that generated content may be biased, unverifiable, constitute original research, or violate copyrights. Because of this, LLMs should only be used for tasks in which the editor has substantial experience, and their outputs must be rigorously scrutinized for compliance with all applicable policies. Furthermore, the use of an LLM to generate or modify text must be declared in the edit summary, and in-text attribution is required for articles and drafts. Editors retain full responsibility for LLM-assisted edits.

The use of LLMs to produce encyclopedic content on Wikipedia carries various risks. This policy clarifies how key existing policies apply to LLM use on the project, i.e. the ways in which such use can conflict with them. Note that this policy applies to all uses of LLMs, regardless of whether a provider or user of an LLM claims that, due to technological advances, its output automatically complies with Wikipedia policies and guidelines.
