Cognis abstracts LLMs behind a single trait — LLMProvider — and bundles concrete clients for the major vendors. Most code touches Client, the provider-agnostic wrapper. Everything below it (request shapes, auth, streaming, tool serialization) is provider-specific and feature-gated.
What it is
Client is Runnable<Vec<Message>, Message> plus a few convenience methods. It wraps an Arc<dyn LLMProvider>, so swapping providers means changing one constructor call, not your chain.
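To make that type concrete, here is a minimal sketch of downstream code that depends only on Client being Runnable<Vec<Message>, Message>. The crate path, the Message::system / Message::human constructors, and the error handling are assumptions, not confirmed API.

```rust
use cognis::{Client, Message};

// Because Client is Runnable<Vec<Message>, Message>, callers only need
// "something that maps messages to a reply", never a concrete provider.
async fn summarize(llm: &Client, text: &str) -> Result<Message, Box<dyn std::error::Error>> {
    let messages = vec![
        Message::system("Summarize the user's text in one sentence."), // assumed constructor
        Message::human(text),                                          // assumed constructor
    ];
    Ok(llm.invoke(messages).await?)
}
```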
Two ways to construct a Client
Use Client::from_env when env vars decide the provider — by far the most common path.
Use provider builders when you need provider-specific knobs (organization id, deployment name, custom headers).
- From env (recommended)
- Builder
- Custom provider
Reads COGNIS_PROVIDER plus matching COGNIS_<PROVIDER>_* variables. See Installation → Set credentials for the full env-var table.
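A minimal sketch of the two construction paths. Client::from_env comes straight from the docs above; the AzureOpenAI builder type, its setters, and Client::new are illustrative assumptions, so check Reference → cognis-llm for the real names.

```rust
use std::sync::Arc;
use cognis::{AzureOpenAI, Client}; // AzureOpenAI path and name are assumptions

fn construct() -> Result<(Client, Client), Box<dyn std::error::Error>> {
    // Path 1: env-driven. COGNIS_PROVIDER plus matching COGNIS_<PROVIDER>_* variables.
    let from_env = Client::from_env()?;

    // Path 2: a provider builder, for provider-specific knobs.
    // `AzureOpenAI` and its setters are placeholders for the real builder.
    let provider = AzureOpenAI::builder()
        .deployment("my-gpt4o-deployment")
        .build()?;
    let from_builder = Client::new(Arc::new(provider)); // Client wraps an Arc<dyn LLMProvider>

    Ok((from_env, from_builder))
}
```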
Switching providers
The same agent, six ways. Same code; different env or different provider builder.
- OpenAI
- Anthropic
- Google
- Ollama
- Azure OpenAI
- OpenRouter
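Concretely, switching can be env-only: the Rust side stays fixed and only the environment changes. The specific variable names in the comment below are assumptions modeled on the COGNIS_<PROVIDER>_* pattern, and Message::human is an assumed constructor.

```rust
use cognis::{Client, Message};

// Run the same binary against different providers by changing only the env, e.g.
//   COGNIS_PROVIDER=anthropic COGNIS_ANTHROPIC_API_KEY=...                    (names assumed)
//   COGNIS_PROVIDER=ollama    COGNIS_OLLAMA_BASE_URL=http://localhost:11434   (names assumed)
async fn ask(question: &str) -> Result<Message, Box<dyn std::error::Error>> {
    let client = Client::from_env()?; // provider resolved here, nowhere else
    Ok(client.invoke(vec![Message::human(question)]).await?)
}
```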
What you can do with a Client
| Method | Returns | Use when |
|---|---|---|
| invoke(messages) | Message | One-shot chat — fastest. |
| stream(messages) | RunnableStream<StreamChunk> | Token-by-token streaming. |
| chat(messages, ChatOptions) | ChatResponse | Need usage, finish_reason, model in the result. |
| invoke_with_tools(messages, &[Arc<dyn Tool>]) | Message | One-shot with tools — but for full agentic loops, use AgentBuilder. |
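A sketch of the non-tool methods side by side. The ChatResponse fields follow the table above; ChatOptions::default(), stream() being fallible, the stream implementing futures::Stream, and the chunk's content field are all assumptions.

```rust
use cognis::{ChatOptions, Client, Message};
use futures::StreamExt; // assuming RunnableStream implements futures::Stream

async fn demo(client: &Client) -> Result<(), Box<dyn std::error::Error>> {
    let messages = vec![Message::human("Name three Rust web frameworks.")];

    // invoke(): one-shot, just the reply.
    let reply = client.invoke(messages.clone()).await?;
    println!("{reply:?}");

    // chat(): when you also need usage, finish_reason, and model.
    let resp = client.chat(messages.clone(), ChatOptions::default()).await?;
    println!("model={} finish={:?} usage={:?}", resp.model, resp.finish_reason, resp.usage);

    // stream(): token-by-token chunks.
    let mut stream = client.stream(messages).await?;
    while let Some(chunk) = stream.next().await {
        print!("{}", chunk?.content); // `content` field name is an assumption
    }
    Ok(())
}
```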
How it works
- Client doesn’t know the provider’s wire format. LLMProvider does. Client packages messages into a generic request and lets the provider serialize.
- Client is a Runnable. Wrap it with with_max_retries, with_timeout, with_fallback — same as anything else.
- Tool calls are normalized. Whatever the provider returns (OpenAI’s tool_calls, Anthropic’s tool_use blocks, Gemini’s functionCalls), Cognis flattens to AiMessage.tool_calls: Vec<ToolCall>.
- Streaming aggregates correctly. A streamed reply that includes a tool call switches — at the chunk level — into tool-dispatch mode without breaking the consumer.
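A small sketch of what that normalization buys you: one inspection path regardless of provider. AiMessage.tool_calls: Vec<ToolCall> is from the bullet above; the name and arguments fields on ToolCall are assumptions.

```rust
use cognis::{AiMessage, ToolCall};

// The same accessor works whatever wire format the provider used.
fn pending_calls(reply: &AiMessage) -> &[ToolCall] {
    &reply.tool_calls
}

fn log_calls(reply: &AiMessage) {
    for call in &reply.tool_calls {
        // `name` / `arguments` are assumed field names on ToolCall.
        println!("model requested `{}` with args {}", call.name, call.arguments);
    }
}
```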
Resilience patterns
Models fail. Cognis ships idiomatic recovery wrappers: with_max_retries, with_timeout, and with_fallback.
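A sketch of composing those wrappers. The combinator names come from the bullets above, but their exact signatures (a Duration for the timeout, a fallback that takes another runnable) are assumptions.

```rust
use std::time::Duration;
use cognis::{Client, Message, Runnable};

// Layer recovery around a Client exactly as you would around any other Runnable.
fn resilient(primary: Client, backup: Client) -> impl Runnable<Vec<Message>, Message> {
    primary
        .with_timeout(Duration::from_secs(30)) // bound a hung request
        .with_max_retries(3)                   // retry transient failures
        .with_fallback(backup)                 // then fail over to another provider
}
```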
See also
Tools
Give the model something to call.
Streaming
Tokens, events, and structured streams.
Structured output
Get typed structs back from the model.
Reference → cognis-llm
Full provider list and method signatures.