
MLPerf education and reproducibility workgroup

Submitted by Style Pass on 2022-09-23

MLCommons is a non-profit consortium of more than 50 companies, originally created to develop a common, reproducible, and fair benchmarking methodology for new AI and ML hardware.

MLCommons has developed LoadGen, an open-source, reusable module that efficiently and fairly measures the performance of inference systems. It generates query traffic for scenarios formulated by a diverse set of MLCommons experts to emulate the workloads seen in mobile devices, autonomous vehicles, robotics, and cloud-based setups.
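To make the scenario idea concrete, here is a minimal, self-contained sketch of what a load generator does; it is purely illustrative and does not use the real LoadGen API (the function names `fake_model`, `run_single_stream`, and `run_offline` are hypothetical, loosely mirroring two of the MLPerf inference scenarios):

```python
import time

def fake_model(sample):
    """Stand-in for a real inference call (hypothetical)."""
    time.sleep(0.001)
    return sample * 2

def run_single_stream(model, samples):
    """Issue one query at a time and record per-query latency,
    roughly mirroring a mobile-style single-stream scenario."""
    latencies = []
    for s in samples:
        start = time.perf_counter()
        model(s)
        latencies.append(time.perf_counter() - start)
    return latencies

def run_offline(model, samples):
    """Issue all queries back-to-back and report aggregate throughput,
    roughly mirroring a batch-oriented offline scenario."""
    samples = list(samples)
    start = time.perf_counter()
    for s in samples:
        model(s)
    elapsed = time.perf_counter() - start
    return len(samples) / elapsed  # samples per second
```

The real LoadGen plays a similar role, but it also controls query arrival patterns, seeds, and result logging so that different submitters measure their systems under identical traffic.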

MLCommons has also prepared several reference ML tasks, models, and datasets, covering vision, recommendation, language processing, and speech recognition, so that companies can benchmark and compare their new hardware in terms of accuracy, latency, throughput, and energy in reproducible submission rounds held twice a year.
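As a hedged sketch of how such results might be summarized, the snippet below computes the kinds of metrics these benchmarks report, accuracy, tail latency, and throughput, from a list of per-query timings; the helper names (`percentile`, `summarize`) are illustrative, not part of any MLPerf tool, and the throughput formula assumes queries were issued back-to-back:

```python
def percentile(values, pct):
    """Return the pct-th percentile of values (simple nearest-rank style)."""
    ordered = sorted(values)
    idx = min(int(pct / 100 * len(ordered)), len(ordered) - 1)
    return ordered[idx]

def summarize(latencies_s, correct, total):
    """Summarize a benchmark run: accuracy, 99th-percentile latency,
    and throughput (valid for sequentially issued queries)."""
    return {
        "accuracy": correct / total,
        "p99_latency_ms": percentile(latencies_s, 99) * 1000,
        "throughput_qps": len(latencies_s) / sum(latencies_s),
    }

report = summarize([0.010, 0.012, 0.011, 0.030], correct=3, total=4)
```

MLPerf itself pins down these definitions precisely (for example, which latency percentile each scenario must satisfy) so that results from different submitters remain comparable.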

The goal of this open education workgroup is to develop an MLPerf educational toolkit based on portable workflows with plug-and-play ML components, helping newcomers start using MLPerf benchmarks and automatically plug in their own ML tasks, models, datasets, engines, software, and hardware.
