

This guide walks you through building your first Cognis agent — a small assistant that can do arithmetic by calling a Calculator tool. By the end you’ll have a binary that runs against any of six LLM providers (OpenAI, Anthropic, Google, Ollama, Azure, OpenRouter), switching with a single environment variable.

Prerequisites

  • Rust 1.75 or newer.
  • An API key from a model provider, or a local Ollama install (no key required).

Step 1 — Add cognis

Cargo.toml
[dependencies]
cognis = { version = "0.3", features = ["openai", "ollama"] }
tokio = { version = "1", features = ["full"] }
The openai and ollama features come on by default. Enable others with anthropic, google, azure, or take everything with all-providers. OpenRouter uses the OpenAI wire format, so the openai feature already covers it — no separate flag.
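If you would rather not pick backends up front, the all-providers feature described above collapses the choice to one line. A sketch of that alternative Cargo.toml (same versions as above):

```toml
[dependencies]
# all-providers enables every backend feature at once, per the note above.
cognis = { version = "0.3", features = ["all-providers"] }
tokio = { version = "1", features = ["full"] }
```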

Step 2 — Set credentials

Client::from_env() reads COGNIS_PROVIDER and a matching COGNIS_<PROVIDER>_API_KEY (and optionally COGNIS_<PROVIDER>_MODEL).
export COGNIS_PROVIDER=openai
export COGNIS_OPENAI_API_KEY=sk-...
export COGNIS_OPENAI_MODEL=gpt-4o-mini   # optional
Cognis never reads .env files. Use your shell, direnv, or envchain — see Installation for a recommended setup.
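For the keyless local route from the prerequisites, the same COGNIS_<PROVIDER>_* naming pattern applies to Ollama — no API key variable is needed, and the model name llama3 below is only an example:

```shell
# Ollama runs locally, so there is no COGNIS_OLLAMA_API_KEY to set.
export COGNIS_PROVIDER=ollama
# Optional, following the COGNIS_<PROVIDER>_MODEL pattern documented above.
export COGNIS_OLLAMA_MODEL=llama3
```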

Step 3 — Write the agent

src/main.rs
use std::sync::Arc;
use cognis::prelude::*;
use cognis::{AgentBuilder, Calculator, Client};

#[tokio::main]
async fn main() -> Result<()> {
    let client = Client::from_env()?;

    let mut agent = AgentBuilder::new()
        .with_llm(client)
        .with_tool(Arc::new(Calculator::new()))
        .with_system_prompt(
            "You are a math assistant. Use the calculator tool for any \
             arithmetic. Always state the final answer.",
        )
        .with_max_iterations(4)
        .build()?;

    let resp = agent.run(Message::human("What is 23 * 17 + 4?")).await?;
    println!("{}", resp.content);
    Ok(())
}
cargo run
You should see something like 23 * 17 + 4 = 395.

What you just built

In roughly twenty lines, you ran an agent that:
  • Decided when to call a tool (the calculator) and when to answer directly.
  • Handled the round-trip from prompt → model → tool call → tool result → final reply.
  • Worked against any of six providers with the same code — flip COGNIS_PROVIDER and rerun.
  • Compiled to one binary with everything you imported. No runtime, no Python, no shim.
That’s the whole V2 surface in its smallest form. Every Pattern on this site builds on this shape — same AgentBuilder, same with_* chain, just more parts wired in.

How it works

Behind the scenes, AgentBuilder compiled a small Graph<AgentState> and agent.run walked it:
  • The model decided to call a tool. The system prompt told it calculator exists; the user asked an arithmetic question; the model emitted a tool call.
  • The tool dispatcher ran the call. Calculator parsed the expression and returned the number.
  • The model saw the result and produced a final answer. No more tool calls, so the loop terminated.
  • Iteration limits kicked in if needed. with_max_iterations(4) capped the round-trips. Hit it, and the loop returns the last assistant message even if more tool calls were pending.
The response object — AgentResponse — carries content (the final string) and messages (the full transcript), so you can replay or display the reasoning.
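The same control flow can be mimicked in plain Rust to make it concrete. Everything below is illustrative — ModelStep, fake_model, calculator, and run_loop are stand-ins for this sketch, not cognis APIs:

```rust
// Sketch of the agent loop: the "model" either requests a tool call or
// emits a final answer; the loop dispatches tool calls until a final
// answer arrives or the iteration cap is hit.

#[derive(Debug)]
enum ModelStep {
    ToolCall(String), // expression the model wants evaluated
    Final(String),    // final assistant message
}

// Stand-in for the LLM: asks for the calculator once, then answers
// using the tool result it finds in the transcript.
fn fake_model(transcript: &[String]) -> ModelStep {
    match transcript.iter().rev().find(|m| m.starts_with("tool:")) {
        Some(result) => {
            let value = result.trim_start_matches("tool:");
            ModelStep::Final(format!("23 * 17 + 4 = {value}"))
        }
        None => ModelStep::ToolCall("23 * 17 + 4".to_string()),
    }
}

// Stand-in for the Calculator tool: evaluates the one expression used here.
fn calculator(expr: &str) -> i64 {
    match expr {
        "23 * 17 + 4" => 23 * 17 + 4,
        _ => 0,
    }
}

// The loop with_max_iterations(4) caps: each iteration is one model turn.
fn run_loop(max_iterations: usize) -> String {
    let mut transcript = vec!["user:What is 23 * 17 + 4?".to_string()];
    let mut last_assistant = String::new();
    for _ in 0..max_iterations {
        match fake_model(&transcript) {
            ModelStep::ToolCall(expr) => {
                let result = calculator(&expr);
                transcript.push(format!("tool:{result}"));
            }
            ModelStep::Final(answer) => {
                last_assistant = answer;
                break; // no more tool calls, so the loop terminates
            }
        }
    }
    last_assistant
}

fn main() {
    println!("{}", run_loop(4));
}
```

Run with a cap of 4 this prints 23 * 17 + 4 = 395 after one tool round-trip; with a cap of 1 the loop exits after the tool call, before any final answer, which is the situation the iteration-limit bullet above describes.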

What’s next

Add memory

Make the agent remember earlier turns across calls.

Try a real tool

Define your own Tool impl with typed arguments and JSON Schema.

Switch to multi-agent

Hand off between specialized agents with Sequential, Supervisor, ParallelVote, or RoundRobin.

Run a Pattern

Full worked applications: research, code Q&A, debate, more.

Want to see the agent’s planning and tool calls in real time? Wire up observability — three lines and you’re streaming events to stdout, Langfuse, or your own observer.