
Introducing Gretel Benchmark


Today we’re announcing the release of Gretel Benchmark, a Python library for comparing any model that generates synthetic data against a set of standardized tests that evaluate synthetic data quality, runtime, and performance on downstream machine learning use cases.

It’s easy to define custom models, so you can compare any synthetic data generation algorithm in Benchmark, not just Gretel models.

Learn more about creating your custom model in the Benchmark documentation, and make sure to install any third-party libraries your model depends on wherever you are running Benchmark.
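As a rough illustration, here’s what a custom model might look like, assuming the documented interface of a plain class exposing `train()` and `generate()` methods; the class name and the resampling logic below are purely hypothetical:

```python
import pandas as pd


class RandomSampleModel:
    """Toy 'synthesizer' that just resamples rows from its training data."""

    def train(self, source: str, **kwargs) -> None:
        # Benchmark is assumed to pass the training data as a path to a CSV file.
        self.training_df = pd.read_csv(source)

    def generate(self, **kwargs) -> pd.DataFrame:
        # Return a "synthetic" dataset with the same number of rows.
        return self.training_df.sample(
            n=len(self.training_df), replace=True
        ).reset_index(drop=True)
```

Any class with these two methods can be passed to a Benchmark comparison alongside the built-in Gretel models.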

We’ve also made it easy for you to use Gretel models in Benchmark. Here’s a nifty summary of all the available Gretel models with default configurations:
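For example, the Gretel model classes can be imported straight from the Benchmark package and used with their default configurations; the import path and the specific class names below are assumptions based on Gretel’s model lineup at the time:

```python
# A minimal sketch; import path and class names are assumptions based on
# the gretel-trainer Benchmark package.
from gretel_trainer.benchmark import (
    GretelAuto,     # picks an appropriate Gretel model based on the data
    GretelLSTM,     # sequence model for mixed tabular data
    GretelCTGAN,    # GAN-based model for high-dimensional, mostly numeric data
    GretelAmplify,  # statistical model for generating large volumes of data quickly
    GretelGPT,      # language model for natural language data
)

# Each class runs with Gretel's default configuration for that model type.
gretel_models = [GretelAuto, GretelLSTM, GretelCTGAN, GretelAmplify, GretelGPT]
```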

Benchmark allows you to compare the synthetic data quality and runtime of multiple models (whether custom or Gretel models) on multiple datasets. 
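Here’s a minimal end-to-end sketch of what a comparison run might look like, assuming a `compare` entry point that takes lists of datasets and models and returns a comparison whose results can be read as a DataFrame; the keyword names and the `results` attribute are assumptions:

```python
from gretel_trainer.benchmark import GretelAmplify, GretelLSTM, compare, make_dataset

# Wrap a local CSV as a Benchmark dataset (more on make_dataset below);
# the datatype value is an assumption.
my_dataset = make_dataset(["data/patients.csv"], datatype="tabular_mixed")

# Run every model against every dataset. RandomSampleModel is the custom
# class sketched earlier; Gretel and custom models can be mixed freely.
comparison = compare(
    datasets=[my_dataset],
    models=[GretelLSTM, GretelAmplify, RandomSampleModel],
)

# Each result row is assumed to pair one model with one dataset and report
# synthetic data quality, train/generate runtime, and status.
print(comparison.results)
```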

To use your own data in Benchmark, you can follow the instructions for `make_dataset` in the documentation or check out the Benchmark notebook.
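As a sketch of that workflow, assuming `make_dataset` accepts a list of local file paths or pandas DataFrames plus a datatype hint (the keyword and datatype values here are assumptions):

```python
import pandas as pd
from gretel_trainer.benchmark import make_dataset

# A Benchmark dataset can be created from local CSV paths...
csv_dataset = make_dataset(["data/patients.csv"], datatype="tabular_mixed")

# ...or from in-memory DataFrames you've already loaded and cleaned.
df = pd.read_csv("data/patients.csv")
df_dataset = make_dataset([df], datatype="tabular_mixed")
```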
