
Rhetoric LLMs and Argumentation

Submitted by
Style Pass
2024-12-12 08:00:06

Since the advent of the deep neural transformer architecture - a portfolio of techniques aimed at capturing long-distance semantic dependencies among words - large language models (LLMs) have shown unprecedented ability on a multitude of tasks. Of course, by the original purpose of the design, the main task is the mastery of human language. Argumentation is a linguistic exercise among two or more people intended to convey some sort of belief from one participant to another. LLMs' linguistic skills can be directed towards better argumentation, focusing on rhetoric and persuasive means.

Adopting neural processes in argumentative tasks is not a fresh idea. There have long been initiatives for argumentation mining - the extraction of latent parts (claim, warrant, backing, etc.) from raw text - a process similar to named entity recognition, for example. Another task where deep learning has found applicability is argument generation, which is the reverse of mining: the generation phase transforms a given argument's structure into coherent and fluent paraphrases.
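To make the NER analogy concrete, argumentation mining can be framed as labeling each sentence with an argumentative role. The sketch below is a deliberately minimal stand-in: real miners use trained models, whereas the discourse markers and labels here (`CLAIM_MARKERS`, `PREMISE_MARKERS`, `label_sentence`) are hypothetical names and a toy heuristic chosen for illustration.

```python
# Toy sketch of argumentation mining as sentence-level role labeling.
# The marker lists are an illustrative heuristic, not a real model.

CLAIM_MARKERS = ("should", "must", "ought to", "in my view")
PREMISE_MARKERS = ("because", "since", "for example", "studies show")

def label_sentence(sentence: str) -> str:
    """Assign a coarse argumentative role to one sentence."""
    s = sentence.lower()
    # Premise markers are checked first: evidential cues are
    # usually more specific than claim cues.
    if any(m in s for m in PREMISE_MARKERS):
        return "premise"
    if any(m in s for m in CLAIM_MARKERS):
        return "claim"
    return "non-argumentative"

sentences = [
    "Cities should invest in cycling infrastructure.",
    "Because bike lanes reduce traffic congestion.",
    "The meeting is on Tuesday.",
]
labels = [label_sentence(s) for s in sentences]
# labels == ["claim", "premise", "non-argumentative"]
```

A trained model would replace `label_sentence` with a classifier over sentence embeddings, but the input/output shape of the task stays the same.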

LLMs are quite effective at these tasks without a particularly laborious setup. I'm referring to their capability of solving them with minimal examples (few-shot learning) or even none at all (zero-shot learning) showing how to accomplish the required work.
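The difference between the two regimes comes down to how the prompt is assembled: zero-shot supplies only the task description, few-shot prepends a handful of worked examples. The sketch below shows one plausible prompt format; the function name, label set, and layout are assumptions for illustration, and the actual call to a model is omitted.

```python
# Sketch of zero-shot vs. few-shot prompt assembly for an
# argument-mining task. The prompt layout is illustrative.

def build_prompt(task: str, examples: list[tuple[str, str]], query: str) -> str:
    """Assemble a prompt; an empty example list yields a zero-shot prompt."""
    parts = [task]
    for text, label in examples:
        parts.append(f"Text: {text}\nLabel: {label}")
    # The trailing "Label:" invites the model to complete the answer.
    parts.append(f"Text: {query}\nLabel:")
    return "\n\n".join(parts)

task = "Classify the sentence as CLAIM or PREMISE."

zero_shot = build_prompt(task, [], "We must lower emissions.")

few_shot = build_prompt(
    task,
    [("Taxes should rise.", "CLAIM"),
     ("Because revenue funds schools.", "PREMISE")],
    "We must lower emissions.",
)
```

Either string would then be sent to an LLM; the few-shot variant usually yields more reliable labels because the examples pin down the expected output format.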
