

The model examples demonstrate what happens at the LLM-client layer. Their sources live under examples/models/.
| Name | Scenario | Source |
| --- | --- | --- |
| models_streaming_chat | Stream a one-line explanation to a CLI chat UI, printing tokens as they arrive. | src |
| models_embedding | Find the most similar product description for a search query using cosine similarity (see the sketch after this table). | src |
| models_routing | Route by intent: short greetings go to a fast model, long technical questions to a bigger one. | src |
| models_content_blocks | Caption an image with a multimodal ContentPart::Image. Needs a vision model: COGNIS_OLLAMA_MODEL=llava or a hosted equivalent. | src |
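The models_embedding scenario ultimately ranks candidates by cosine similarity between embedding vectors. As a reference for that ranking step only, here is a self-contained sketch with placeholder vectors; real embeddings would come from the embedder, not hard-coded literals, so this is an illustration of the math rather than the example's actual code.

```rust
/// Cosine similarity between two embedding vectors.
fn cosine_similarity(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let norm_a = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let norm_b = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    if norm_a == 0.0 || norm_b == 0.0 {
        return 0.0;
    }
    dot / (norm_a * norm_b)
}

fn main() {
    // Placeholder vectors standing in for embedder output.
    let query = vec![0.1_f32, 0.8, 0.3];
    let products = [
        ("wireless headphones", vec![0.2_f32, 0.7, 0.4]),
        ("mechanical keyboard", vec![0.9_f32, 0.1, 0.2]),
    ];

    // Keep the product whose embedding is most similar to the query.
    let mut best: Option<(&str, f32)> = None;
    for (name, embedding) in &products {
        let score = cosine_similarity(&query, embedding);
        if best.map_or(true, |(_, s)| score > s) {
            best = Some((*name, score));
        }
    }
    if let Some((name, score)) = best {
        println!("best match: {name} (cosine similarity {score:.3})");
    }
}
```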

How to run

# Text-only examples (swap the example name for models_embedding or models_routing):
COGNIS_PROVIDER=ollama COGNIS_OLLAMA_MODEL=llama3.1 \
  cargo run -p cognis-examples --example models_streaming_chat

# For the multimodal example, pull a vision model first:
ollama pull llava
COGNIS_PROVIDER=ollama COGNIS_OLLAMA_MODEL=llava \
  cargo run -p cognis-examples --example models_content_blocks
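The first command runs models_streaming_chat, whose terminal behaviour comes down to printing each token without a trailing newline and flushing stdout so text appears as it arrives. The following sketch shows only that output loop, independent of the cognis client; the hard-coded tokens stand in for chunks streamed from the provider.

```rust
use std::io::Write;
use std::{thread, time::Duration};

fn main() {
    // Hard-coded tokens standing in for chunks streamed from the provider.
    let tokens = ["Streaming ", "prints ", "each ", "token ", "as ", "it ", "arrives."];

    for token in tokens {
        // Write without a newline, then flush so the token shows up immediately.
        print!("{token}");
        std::io::stdout().flush().expect("flush stdout");
        thread::sleep(Duration::from_millis(120));
    }
    println!();
}
```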

See also

Models and providers

Client::from_env, provider builders, the env-var table.

Streaming

Tokens vs structured events.

Embeddings & stores

Where embedder output feeds into vector stores.