
Submitted by
Style Pass
2024-06-08 23:00:07

Jockey combines the capabilities of existing Large Language Models (LLMs) with Twelve Labs' APIs using LangGraph. This allows workloads to be allocated to the appropriate foundation models for handling complex video workflows. LLMs are used to logically plan execution steps and interact with users, while video-related tasks are passed to Twelve Labs APIs, powered by video-foundation models (VFMs), to work with video natively, without the need for intermediary representations like pre-generated captions.
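The division of labor described above can be sketched in plain Python. This is an illustrative stand-in only, not Jockey's actual code: the real project wires these roles together as LangGraph nodes, and the function names and return shapes here are hypothetical. The point is the routing: the planner (an LLM in practice) decides the steps, and any video-native step is handed to a Twelve Labs-style API instead of being squeezed through text captions.

```python
# Illustrative sketch of LLM-plans / VFM-executes routing.
# All names and data shapes below are hypothetical, not Jockey's API.

def plan_steps(user_request):
    """Stand-in for the LLM planner: break a request into typed steps.

    A real planner would call an LLM; here the plan is hard-coded.
    """
    return [
        {"type": "video-search", "query": user_request},
        {"type": "respond", "text": "Here is what I found."},
    ]

def video_search(query):
    """Stand-in for a Twelve Labs API call backed by a video-foundation
    model, which searches video content directly (no captions)."""
    return [{"video_id": "vid-123", "clip": "00:10-00:25", "query": query}]

def run(user_request):
    """Route each planned step to the right model family."""
    results = []
    for step in plan_steps(user_request):
        if step["type"] == "video-search":
            # Video-native work goes to the VFM-backed API, not the LLM.
            results.extend(video_search(step["query"]))
        elif step["type"] == "respond":
            # Conversational output stays with the LLM side.
            results.append(step["text"])
    return results

print(run("find clips of the goal celebration"))
```

In the real system the plan is produced dynamically and the graph can loop back to the planner between steps; this sketch only shows the single-pass dispatch.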

which can cause issues with the langgraph-cli. In that case, you can install Docker Desktop, which makes the above a valid system command.

This is an easy, lightweight way to run an instance of Jockey in your terminal, and it is well suited to quick testing or validation during local development.

3. Currently, Jockey requires that an Index ID (and in some cases a Video ID) be supplied as part of the conversation history. You are free to modify how this is handled. An example of an initial prompt might look something like:
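A hypothetical illustration of such a prompt (the Index ID below is a placeholder, not a real identifier, and the wording is not taken from the project):

```
Use index 65a1b2c3d4e5f6a7b8c9d0e1. Find the top 3 clips where the
speaker talks about pricing, and combine them into one highlight reel.
```

Because the agent reads the Index ID out of the conversation history, it must appear before (or alongside) the first video-related request.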
