
Has LLM killed traditional NLP?

Natural Language Processing (NLP) has had a relatively long development history. It is often broken down into smaller problems, such as text classification, Named Entity Recognition (NER), and summarization, to solve concrete challenges.

Each of these smaller challenges is solved by its own small model, and we often have to prepare a large enough training dataset for it.

For example, to use text classification to detect when a guest asks about check-in time, we need to create a list of similar questions for the intent check-in in the following format (using Rasa NLU syntax):
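A minimal sketch in Rasa's YAML training-data format; the example phrases are illustrative, and the intent is written check_in following Rasa's usual naming convention:

```yaml
version: "3.1"
nlu:
- intent: check_in
  examples: |
    - What time is check-in?
    - When can I check in?
    - Can we check in before noon?
    - Is early check-in possible?
```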

With many intents, this file grows larger and takes longer to train, and whenever we add new intents or training phrases, we must retrain the model.

With the rise of Large Language Models (LLMs) like ChatGPT, these problems can be tackled more easily. With a zero-shot prompt, we just need to put the guest's question and the list of intents into a prompt, without any examples:
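A minimal sketch of such a zero-shot prompt, here sent through the OpenAI Python SDK; the model name, intent list, and sample question are illustrative assumptions, not from the article:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical intent list and guest question, for illustration only.
intents = ["check-in", "check-out", "parking", "breakfast"]
question = "What time can I get into my room?"

# Zero-shot: the prompt lists the candidate intents but gives no examples.
prompt = (
    "Classify the hotel guest's question into exactly one of these intents: "
    + ", ".join(intents)
    + ". Reply with the intent name only.\n\n"
    + f"Question: {question}"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)  # e.g. "check-in"
```

Note that adding a new intent here means adding one more name to the list in the prompt, with no retraining step.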
