Benchmarking Large Language Models for Bio-Image Analysis Code Generation

Submitted by
Style Pass
2024-04-26 06:00:07

In the computational age, life scientists often have to write Python code to solve bio-image analysis (BIA) problems, yet many of them have not been formally trained in programming. Code generation, or coding assistance in general, with Large Language Models (LLMs) can therefore have a clear impact on BIA. To the best of our knowledge, the quality of the code generated in this domain has not been studied. We present a quantitative benchmark to estimate the capability of LLMs to generate code for solving common BIA tasks. Our benchmark currently consists of 57 human-written prompts with corresponding reference solutions in Python, and unit tests to evaluate the functional correctness of potential solutions. We demonstrate the benchmark here and compare 15 state-of-the-art LLMs. To ensure that we cover most of our community's needs, we also outline mid- and long-term strategies for maintaining and extending the benchmark with the BIA open-source community. This work should support users in choosing an LLM and also guide LLM developers in improving the capabilities of LLMs in the BIA domain.

We added more samples from more LLMs, contributed by the new author Jean-Karim Heriche. He also contributed text documenting the extended procedures and helping to interpret the new data correctly.
