Some law professor colleagues and I are writing about whether Large Language Model creators (e.g., OpenAI, the creator of ChatGPT-4) could be sued for libel. Some recent stories allege that ChatGPT-4 does yield false and defamatory statements; Ted Rall wrote an article so alleging yesterday at the Wall Street Journal, and another site published something last Sunday about this as well (though there the apparently false statement was about a dead person, so it's not technically libel). When I asked the same questions those authors reported having asked, ChatGPT-4 gave different answers, but such variation is apparently normal for ChatGPT-4.
This morning, though, I tried this myself, and I saw not just what appear to be false accusations, but also what appear to be spurious quotes, attributed to media sources such as Reuters and the Washington Post. I appreciate that Large Language Models just combine words from sources in their training data, and perhaps this one simply assembled such words together with punctuation (quotation marks). But I would have thought that its creators would have programmed something to check its output, to confirm that anything reported in quotation marks is actually a legitimate quote. In the absence of such checks, it appears that such AI tools might produce material that is especially likely to deceive readers (as, say, a fake quote attributed to Reuters might), and especially likely to damage the reputations of the subjects of the quotes.
I quote the exchange below; I've replaced the name of the person I was asking about with "R.R." (or "R.," when it's just the last name), because I don't want to associate him in Google search results with ChatGPT-4's falsehoods. Note that I did not design my question to prompt ChatGPT-4 to give me an answer about some guilty plea: My initial question does imply that R.R. was accused of something, but that is accurate—he in fact was publicly accused (by a coauthor of mine and me, in a blog post at the Washington Post) of arranging a scheme for fraudulently obtaining court orders as a means of hiding online criticisms of his clients. I never suggested to ChatGPT-4 or to anyone else that he was prosecuted for this, much less that he pleaded guilty; to my knowledge no such prosecution or plea has taken place.