Preventing ‘Hallucination’ In GPT-3 And Other Complex Language Models

A defining characteristic of ‘fake news’ is that it frequently presents false information in a context of factually correct information, with the untrue data gaining perceived authority by a kind of literary osmosis – a worrying demonstration of the power of half-truths.

Sophisticated generative natural language processing (NLP) models such as GPT-3 also have a tendency to ‘hallucinate’ this kind of deceptive data. In part, this is because language models must be able to rephrase and summarize long and often labyrinthine tracts of text, without any architectural constraint that can define, encapsulate and ‘seal’ events and facts so that they are protected from the process of semantic reconstruction.

Facts are therefore not sacred to an NLP model; they can easily end up treated as ‘semantic Lego bricks’, particularly where complex grammar or arcane source material makes it difficult to separate the content of discrete entities from the surrounding language structure.
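
One common mitigation sits outside the model architecture entirely: verify generated text against its source after the fact. The sketch below is a minimal illustration of that idea rather than a method from the article; it uses a crude regex as a stand-in for proper named-entity recognition, and the example sentences are hypothetical.

```python
import re

def extract_entities(text: str) -> set:
    # Crude stand-in for a real NER pass: treat capitalized tokens
    # as candidate entities. A production check would use a proper
    # NER model instead.
    return set(re.findall(r"\b[A-Z][A-Za-z]+\b", text))

def unsupported_entities(source: str, summary: str) -> set:
    # Entities present in the generated summary but absent from the
    # source are candidate hallucinations: 'facts' the model invented
    # during semantic reconstruction.
    return extract_entities(summary) - extract_entities(source)

# Hypothetical example: 'NASA' and 'Saturn' never appear in the
# source, so a faithful summary should not introduce them.
source = "The probe reached Jupiter in July after a five-year journey."
summary = "The NASA probe reached Jupiter and Saturn in July."
print(sorted(unsupported_entities(source, summary)))  # ['NASA', 'Saturn']
```

Anything flagged this way can be routed for human review or the generation re-sampled, supplying externally the ‘seal’ around facts that the model architecture itself lacks.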