Since the inception of solid-state drives (SSDs), there has been a choice to make—either use SSDs for vastly superior speeds, especially with non-sequential reads and writes (“random I/O”), or use legacy spinning rust hard disk drives (HDDs) for cheaper storage that’s a bit slow for sequential I/O1 and painfully slow for random I/O.
The idea of caching frequently used data on SSDs and storing the rest on HDDs is nothing new—solid-state hybrid drives (SSHDs) embodied this idea in hardware form, while filesystems like ZFS support using SSDs as a second-level read cache (L2ARC). However, with the falling price of SSDs, this approach no longer makes sense outside of niche scenarios with very large amounts of storage. For example, I have not needed to use HDDs in my PC for many years at this point, since all my data easily fits on an SSD.
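As an aside, adding an SSD as an L2ARC device to an existing ZFS pool is a one-liner; in the sketch below, the pool name `tank` and the device path `/dev/nvme0n1` are placeholders, not anything from my setup:

```sh
# Attach an NVMe SSD as a level-2 read cache (L2ARC) to an existing pool.
# "tank" and /dev/nvme0n1 are placeholders for your pool and cache device.
zpool add tank cache /dev/nvme0n1

# Confirm the cache vdev shows up under the pool.
zpool status tank
```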
One of the scenarios in which this makes sense is for the mirrors I host at home. Oftentimes, a project will require hundreds of gigabytes of data to be mirrored just in case anyone needs it, but only a few files are frequently accessed and could be cached on SSDs for fast access2. Similarly, I run many LLMs locally with Ollama, but there are only a few I use very frequently. The frequently used ones can be cached while the rest can be loaded slowly from the HDD when needed.