Here at Bauplan, we build a fast, serverless, easy-to-use data lakehouse - we're on a mission to make working with huge datasets in the cloud as easy as import your_data; do_cool_stuff(your_data). Our serverless platform takes care of all the nasty infra bits, so developers can focus on building interesting things with data rather than wrestling with Kubernetes or Spark.
At the heart of our platform, we automatically turn your files (e.g. Parquet, CSV) into Apache Iceberg tables and expose them in a simple way.
Iceberg brings sanity to data lakes with things like schema evolution, ACID transactions, and time travel, which in turn makes it possible to create lightweight data branches, so you can test out changes or experiment with production data without fear. The combination of the two is like Git for data.
To make the query side buttery smooth, we've integrated DuckDB as our execution engine. If you're not familiar with it, DuckDB is an embedded database that's been taking the data world by storm. Often dubbed the "SQLite for analytics", it lets you run OLAP queries directly on your local files, no setup required. Pretty slick.