
Distilling Python functions into an LLM

Submitted by Style Pass, 2024-10-02 08:00:03

Instructions, from the Instructor library, offers a seamless way to make language models backward compatible with existing Python functions. By employing Pydantic type hints, it not only ensures compatibility but also makes it possible to fine-tune gpt-3.5-turbo to emulate those functions end-to-end.

Replicating the behavior of a Python function in a language model involves intricate data preparation. For instance, teaching a model to execute three-digit multiplication is not as trivial as implementing def f(a, b): return a * b. OpenAI's fine-tuning script, coupled with their function-calling utility, provides structured output, thereby simplifying the data collection process. It also eliminates the need to pass the schema to the model at inference time, conserving tokens.
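To make this concrete, each call to the real Python function can be serialized as a chat-format fine-tuning record: a user message carries the inputs, and the assistant message carries the structured JSON result the model should learn to produce. The helper below is purely illustrative (the exact record layout Instructor and OpenAI's tooling use may differ in its details):

```python
import json

def make_finetune_record(a: int, b: int) -> str:
    """Serialize one multiplication call as a chat-format fine-tuning example."""
    result = a * b  # ground truth comes from the real Python function
    record = {
        "messages": [
            {"role": "user", "content": f"Multiply {a} and {b}."},
            # The assistant's reply is the structured output we want the
            # fine-tuned model to emit end-to-end, with no schema in the prompt.
            {"role": "assistant", "content": json.dumps({"result": result})},
        ]
    }
    return json.dumps(record)  # one JSONL line of the training set
```

Collecting one such line per call yields a JSONL file ready for a fine-tuning job.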

By using Instructions, you can annotate a Python function that returns a Pydantic object, thereby automating dataset creation for fine-tuning. A logging handler is all that's needed to build this dataset.
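The pattern can be sketched in plain Python: a decorator wraps the annotated function, runs it, and logs each call as a training example. This is only an illustration of the idea, not Instructor's actual API, and it uses a dataclass as a hypothetical stand-in for the Pydantic model the library works with:

```python
import functools
import json
import logging
from dataclasses import asdict, dataclass

# Hypothetical stand-in for a Pydantic model describing the structured output.
@dataclass
class Multiply:
    a: int
    b: int
    result: int

def distil(fn):
    """Wrap fn so every call is also logged as a fine-tuning example."""
    logger = logging.getLogger("finetune")

    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        out = fn(*args, **kwargs)
        record = {
            "messages": [
                {"role": "user", "content": f"{fn.__name__}{args}{kwargs or ''}"},
                {"role": "assistant", "content": json.dumps(asdict(out))},
            ]
        }
        # Attaching a logging.FileHandler to this logger turns the stream
        # of records into a JSONL fine-tuning dataset.
        logger.info(json.dumps(record))
        return out

    return wrapper

@distil
def multiply(a: int, b: int) -> Multiply:
    return Multiply(a=a, b=b, result=a * b)
```

The decorated function behaves exactly as before for callers, while the dataset accumulates as a side effect of normal use.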
