Note: The benchmarks on Image Understanding and Vision Language Understanding will be available soon on HuggingFace. Please stay tuned.

For example, to run the FlenQA_Experiment_Pipeline experiment pipeline defined in eureka_ml_insights/configs/flenqa.py using the OpenAI GPT4 1106 Preview model, you can run the following command:

python main.py --exp_config FlenQA_Experiment_Pipeline --model_config OAI_GPT4_1106_PREVIEW_CONFIG --exp_logdir gpt4_1106_preview

The results of the experiment will be saved in a directory under logs/FlenQA_Experiment_Pipeline/gpt4_1106_preview. For each experiment you run with these configurations, a new directory is created using the date and time of the experiment run. For other available experiment pipelines and model configurations, see the eureka_ml_insights/configs directory. In model_configs.py you can configure the model classes with your API keys, Key Vault URLs, endpoints, and other model-specific settings.
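As a rough illustration of the idea, one way to keep credentials out of the source tree is to have the entry in model_configs.py read them from the environment. The dictionary shape below is an assumption for the sketch, not the repository's actual configuration format; check model_configs.py for the real class and field names.

# Hypothetical sketch only -- the real config object in
# eureka_ml_insights/configs/model_configs.py may use different classes/fields.
import os

OAI_GPT4_1106_PREVIEW_CONFIG = {
    "model_name": "gpt-4-1106-preview",
    # Endpoint and key pulled from the environment (or a Key Vault URL)
    # instead of being hard-coded in the repository.
    "endpoint": os.environ.get("OPENAI_ENDPOINT", "https://api.openai.com/v1"),
    "api_key": os.environ.get("OPENAI_API_KEY"),
}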

Experiment pipelines define the sequence of components that are run to process data, run inference, and evaluate the model outputs. You can find examples of experiment pipeline configurations in the configs directory. To create a new experiment configuration, define a class that inherits from ExperimentConfig and implements the configure_pipeline method, as sketched below. In configure_pipeline you define the Pipeline config (the arrangement of Components) for your experiment. Once your class is ready, add it to the import list in configs/__init__.py.
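A minimal sketch of such a class follows. The import path and the exact configure_pipeline signature are assumptions for illustration; the most reliable approach is to copy the structure of an existing pipeline such as eureka_ml_insights/configs/flenqa.py.

# Hedged sketch: base-class import path and method signature are assumptions.
from eureka_ml_insights.configs import ExperimentConfig


class MyTask_Experiment_Pipeline(ExperimentConfig):
    """Arranges the data processing, inference, and evaluation components for MyTask."""

    def configure_pipeline(self, model_config, resume_from=None, **kwargs):
        # Build and return the pipeline config describing the component sequence.
        # Mirror an existing config (e.g. flenqa.py) for the concrete component setup.
        ...

After adding the class to the import list in configs/__init__.py, you can run it the same way as the built-in pipelines, e.g. python main.py --exp_config MyTask_Experiment_Pipeline --model_config OAI_GPT4_1106_PREVIEW_CONFIG.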
