This guide walks you through building your first Cognis agent — a small assistant that can do arithmetic by calling a Calculator tool. By the end you’ll have a binary that runs against any of six LLM providers (OpenAI, Anthropic, Google, Ollama, Azure, OpenRouter), switching with a single environment variable.
Prerequisites
- Rust 1.75 or newer.
- An API key from a model provider, or a local Ollama install (no key required).
Step 1 — Add cognis
Cargo.toml
The openai and ollama features are enabled by default. Enable others with anthropic, google, or azure, or take everything with all-providers. OpenRouter uses the OpenAI wire format, so the openai feature already covers it — no separate flag.
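A minimal dependency entry might look like this (the version is a placeholder — pin to the latest release; the feature names come from the list above):

```toml
[dependencies]
# openai and ollama are enabled by default; list extra providers as
# features, or use all-providers to pull in everything.
cognis = { version = "*", features = ["anthropic", "google", "azure"] }
```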
Step 2 — Set credentials
Client::from_env() reads COGNIS_PROVIDER and a matching COGNIS_<PROVIDER>_API_KEY (and optionally COGNIS_<PROVIDER>_MODEL).
- OpenAI
- Anthropic
- Google
- Ollama (local)
- Azure
- OpenRouter
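For example, pointing the agent at a local Ollama install from your shell might look like this (the model name is illustrative):

```shell
# Select the provider; Client::from_env() reads this at startup.
export COGNIS_PROVIDER=ollama
# Ollama runs locally, so no API key is needed; optionally pick a model.
export COGNIS_OLLAMA_MODEL=llama3.1
echo "$COGNIS_PROVIDER"  # → ollama
```

For a hosted provider you would instead set, e.g., COGNIS_PROVIDER=openai and COGNIS_OPENAI_API_KEY to your key.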
Cognis never reads .env files. Use your shell, direnv, or envchain — see Installation for a recommended setup.
Step 3 — Write the agent
src/main.rs
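The original listing is not reproduced here; as a sketch of what it plausibly looks like, using only the names this guide mentions (Client::from_env, AgentBuilder, with_max_iterations, agent.run, AgentResponse) — the Calculator import path, with_system_prompt, with_tool, build, and the async runtime are assumptions, and exact signatures may differ:

```rust
use cognis::{AgentBuilder, Client, tools::Calculator};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Provider, key, and model all come from COGNIS_* environment variables.
    let client = Client::from_env()?;

    let agent = AgentBuilder::new(client)
        .with_system_prompt("You are a helpful assistant. Use the calculator for arithmetic.")
        .with_tool(Calculator)
        .with_max_iterations(4) // cap the tool-call round-trips
        .build()?;

    let response = agent.run("What is 23 * 17 + 4?").await?;
    println!("{}", response.content);
    Ok(())
}
```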
Run it, and the agent answers: 23 * 17 + 4 = 395.
What you just built
In about 11 lines, you ran an agent that:
- Decided when to call a tool (the calculator) and when to answer directly.
- Handled the round-trip from prompt → model → tool call → tool result → final reply.
- Works against any of six providers with the same code — flip COGNIS_PROVIDER and rerun.
- Compiled to one binary with everything you imported. No runtime, no Python, no shim.
More advanced agents use the same AgentBuilder and the same with_* chain, just with more parts wired in.
How it works
Behind the scenes, AgentBuilder compiled a small Graph<AgentState> and agent.run walked it:
- The model decided to call a tool. The system prompt told it calculator exists; the user asked an arithmetic question; the model emitted a tool call.
- The tool dispatcher ran the call. Calculator parsed the expression and returned the number.
- The model saw the result and produced a final answer. No more tool calls, so the loop terminated.
- Iteration limits kicked in if needed. with_max_iterations(4) capped the round-trips. Hit it, and the loop returns the last assistant message even if more tool calls were pending.
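The Calculator tool itself is ordinary Rust. As a rough illustration of the kind of parsing it performs — not the crate's actual implementation — here is a self-contained, precedence-aware evaluator for expressions like the one above:

```rust
// Split the input into number and operator tokens.
fn tokenize(s: &str) -> Result<Vec<String>, String> {
    let mut out = Vec::new();
    let mut chars = s.chars().peekable();
    while let Some(&c) = chars.peek() {
        if c.is_whitespace() {
            chars.next();
        } else if c.is_ascii_digit() || c == '.' {
            let mut num = String::new();
            while let Some(&d) = chars.peek() {
                if d.is_ascii_digit() || d == '.' { num.push(d); chars.next(); } else { break; }
            }
            out.push(num);
        } else if "+-*/()".contains(c) {
            out.push(c.to_string());
            chars.next();
        } else {
            return Err(format!("unexpected character: {c}"));
        }
    }
    Ok(out)
}

// expr := term (('+' | '-') term)*
fn parse_expr(t: &[String], i: &mut usize) -> Result<f64, String> {
    let mut v = parse_term(t, i)?;
    while *i < t.len() && (t[*i] == "+" || t[*i] == "-") {
        let op = t[*i].clone(); *i += 1;
        let rhs = parse_term(t, i)?;
        if op == "+" { v += rhs } else { v -= rhs }
    }
    Ok(v)
}

// term := factor (('*' | '/') factor)*
fn parse_term(t: &[String], i: &mut usize) -> Result<f64, String> {
    let mut v = parse_factor(t, i)?;
    while *i < t.len() && (t[*i] == "*" || t[*i] == "/") {
        let op = t[*i].clone(); *i += 1;
        let rhs = parse_factor(t, i)?;
        if op == "*" { v *= rhs } else { v /= rhs }
    }
    Ok(v)
}

// factor := number | '(' expr ')'
fn parse_factor(t: &[String], i: &mut usize) -> Result<f64, String> {
    if *i >= t.len() { return Err("unexpected end of input".into()); }
    if t[*i] == "(" {
        *i += 1;
        let v = parse_expr(t, i)?;
        if *i >= t.len() || t[*i] != ")" { return Err("missing ')'".into()); }
        *i += 1;
        Ok(v)
    } else {
        let v = t[*i].parse::<f64>().map_err(|e| e.to_string())?;
        *i += 1;
        Ok(v)
    }
}

fn eval(expr: &str) -> Result<f64, String> {
    let tokens = tokenize(expr)?;
    let mut pos = 0;
    let v = parse_expr(&tokens, &mut pos)?;
    if pos != tokens.len() { return Err("trailing input".into()); }
    Ok(v)
}

fn main() {
    println!("{}", eval("23 * 17 + 4").unwrap()); // prints 395
}
```

Multiplication binds tighter than addition because parse_expr defers to parse_term, which is exactly why the model's "23 * 17 + 4" comes back as 395 rather than 483.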
agent.run returns an AgentResponse, which carries content (the final string) and messages (the full transcript), so you can replay or display the reasoning.
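A hedged sketch of consuming that transcript — the field names content and messages come from above, but the message type's own shape is an assumption:

```rust
let response = agent.run("What is 23 * 17 + 4?").await?;

// The final answer as a plain string.
println!("answer: {}", response.content);

// The full prompt → tool call → tool result → reply transcript,
// useful for logging or replaying the run. (Assumes the message
// type implements Debug; the real shape may differ.)
for message in &response.messages {
    println!("{:?}", message);
}
```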
What’s next
Add memory
Make the agent remember earlier turns across calls.
Try a real tool
Define your own Tool impl with typed arguments and JSON Schema.
Switch to multi-agent
Hand off between specialized agents with Sequential, Supervisor, ParallelVote, or RoundRobin.
Run a Pattern
Full worked applications: research, code Q&A, debate, more.