Model examples demonstrate what happens at the LLM-client layer. Sources live under `examples/models/`.
| Name | Scenario | Source |
|---|---|---|
| `models_streaming_chat` | Stream a one-line explanation to a CLI chat UI — print tokens as they arrive. | src |
| `models_embedding` | Find the most similar product description for a search query, using cosine similarity. | src |
| `models_routing` | Route by intent — short greetings to a fast model, long technical questions to a bigger one. | src |
| `models_content_blocks` | Caption an image — multimodal `ContentPart::Image`. (Needs a vision model: `COGNIS_OLLAMA_MODEL=llava` or a hosted equivalent.) | src |
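To illustrate the ranking step that `models_embedding` performs, here is a minimal, dependency-free sketch of cosine similarity over embedding vectors. The vector values and product names are made up for illustration; the real example obtains its vectors from an embedder.

```rust
// Cosine similarity: dot(a, b) / (|a| * |b|).
// Returns 0.0 for zero-length vectors to avoid dividing by zero.
fn cosine_similarity(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let norm_a: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let norm_b: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    if norm_a == 0.0 || norm_b == 0.0 {
        0.0
    } else {
        dot / (norm_a * norm_b)
    }
}

fn main() {
    // Hypothetical query embedding and pre-computed product embeddings.
    let query = vec![0.1_f32, 0.9, 0.2];
    let products = vec![
        ("wireless headphones", vec![0.1_f32, 0.8, 0.3]),
        ("garden hose", vec![0.9_f32, 0.1, 0.0]),
    ];

    // Pick the product whose embedding is most similar to the query.
    let best = products
        .iter()
        .max_by(|a, b| {
            cosine_similarity(&query, &a.1)
                .partial_cmp(&cosine_similarity(&query, &b.1))
                .unwrap()
        })
        .unwrap();

    println!("best match: {}", best.0);
}
```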
## How to run
## See also

- Models and providers — `Client::from_env`, provider builders, the env-var table.
- Streaming — tokens vs structured events.
- Embeddings & stores — where the embedders feed.