I released LLM 0.17 last night, the latest version of my combined CLI tool and Python library for interacting with hundreds of different Large Language Models such as GPT-4o, Llama, Claude and Gemini.
The signature feature of 0.17 is that LLM can now be used to prompt multi-modal models, which means you can use it to send images, audio and video files to LLMs that can handle them.
Here’s an example. First, install LLM using `brew install llm`, `pipx install llm` or `uv tool install llm`, pick your favourite. If you have it installed already you may need to upgrade to 0.17, e.g. with `brew upgrade llm`.
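Now run a prompt against an image. The URL here is just a placeholder, so point it at any image you like (your default model needs to support image inputs):

```bash
llm "describe this image" \
  -a https://example.com/pelican.jpg
```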
The `-a` option stands for `--attachment`. Attachments can be specified as URLs, as paths to files on disk, or as `-` to read data piped into the tool.
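To illustrate, here is each form in turn, again with placeholder file names and URL:

```bash
# Attachment fetched from a URL
llm "describe this image" -a https://example.com/pelican.jpg

# Attachment read from a file on disk
llm "describe this image" -a pelican.jpg

# Attachment read from data piped to standard input
cat pelican.jpg | llm "describe this image" -a -
```

Run against a photo of a pelican, that prompt produced this response: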
The image features a brown pelican standing on rocky terrain near a body of water. The pelican has a distinct coloration, with dark feathers on its body and a lighter-colored head. Its long bill is characteristic of the species, and it appears to be looking out towards the water. In the background, there are boats, suggesting a marina or coastal area. The lighting indicates it may be a sunny day, enhancing the scene’s natural beauty.