Databricks is a great platform for data management and governance, largely thanks to Unity Catalog. But Spark as a processing engine is just OK, especially when the data is not really big. Newer engines like Polars, DataFusion, or DuckDB are better suited for such workloads and provide interesting options.
Sure, you can run whatever you want in Databricks notebooks and workflows by simply installing the library. The interesting part, however, is accessing the data stored in Unity Catalog.
For the impatient: you can find the code for both approaches in this gist on GitHub. It also includes all the imports, which I mostly omitted in the examples below for brevity. You also need to install DuckDB, e.g. with `%pip install duckdb` in a notebook cell.
`toArrow()` is a new method added in Spark 4. Spark 4 has not been released yet, but Databricks regularly backports new (unreleased) features from the open-source version into its runtime.