LLM 0.18

From Simon Willison's Weblog
New release of LLM. The big new feature is asynchronous model support: you can now use supported models in async Python code like this:

```python
import llm

model = llm.get_async_model("gpt-4o")
async for chunk in model.prompt(
    "Five surprising names for a pet pelican"
):
    print(chunk, end="", flush=True)
```

Also new in this release: support...
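The async-iteration pattern the snippet relies on can be tried without an API key by substituting a stub async generator for the model call. The `fake_prompt` function below is purely hypothetical, not part of the LLM library; it just mirrors the shape of an async stream of response chunks:

```python
import asyncio

# Hypothetical stand-in for model.prompt(...): an async generator
# that yields response chunks one at a time, mimicking the shape
# of LLM 0.18's streaming async API.
async def fake_prompt(prompt):
    for chunk in ["Percy", ", ", "Beaky"]:
        yield chunk

async def main():
    parts = []
    # Same consumption pattern as the real API: async for over chunks.
    async for chunk in fake_prompt("Five surprising names for a pet pelican"):
        parts.append(chunk)
    return "".join(parts)

print(asyncio.run(main()))
```

Swapping `fake_prompt` for a real `llm.get_async_model(...)` model's `prompt` call is the only change needed to stream actual responses.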