
How Vannevar Labs cut ML inference costs by 45%


This blog is authored by Colin Putney (ML Engineer at Vannevar Labs), Shivam Dubey (Specialist SA, Containers at AWS), Apoorva Kulkarni (Sr. Specialist SA, Containers at AWS), and Rama Ponnuswami (Principal Container Specialist at AWS).

Vannevar Labs, a defense tech startup, cut machine learning (ML) inference costs by 45% using Ray and Karpenter on Amazon Elastic Kubernetes Service (Amazon EKS). The company builds advanced software and hardware to support a range of defense missions, including maritime vigilance, misinformation disruption, and nontraditional intelligence collection. Vannevar Labs uses ML to process information from its ingestion systems and to perform user-driven tasks such as search and summarization. With a diverse set of models, including fine-tuned open source models and models trained in-house, Vannevar Labs set out to optimize its ML inference workloads, aiming to improve deployment speed, scalability, and cost efficiency.

This post explores the approach, challenges, and solutions they implemented using Amazon EKS, Ray, and Karpenter, which resulted in a 45% cost reduction and significantly improved performance.
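
Before diving into the details, a minimal sketch may help orient readers unfamiliar with Ray Serve: a model is wrapped in a deployment class, and Ray schedules its replicas as pods on the EKS cluster, where Karpenter provisions matching nodes on demand. The deployment name, replica count, resource request, and summarization model below are illustrative assumptions, not Vannevar Labs' actual configuration.

# Minimal Ray Serve sketch (illustrative only): one inference deployment
# whose replicas Ray schedules on the cluster; Karpenter provisions nodes
# to satisfy the resulting pod resource requests.
from ray import serve
from starlette.requests import Request


@serve.deployment(num_replicas=2, ray_actor_options={"num_cpus": 2})
class Summarizer:
    def __init__(self):
        # Each replica loads its model once at startup.
        from transformers import pipeline
        self.model = pipeline("summarization",
                              model="sshleifer/distilbart-cnn-12-6")

    async def __call__(self, request: Request) -> dict:
        payload = await request.json()
        result = self.model(payload["text"])
        return {"summary": result[0]["summary_text"]}


app = Summarizer.bind()
# Run locally or against a Ray cluster with:
#   serve run my_module:app

In a setup like this, scaling the deployment up or down changes the pods Ray requests from Kubernetes, and Karpenter reacts by launching or retiring EC2 instances, which is the general mechanism behind the cost savings discussed in the rest of this post.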
