LLMs talk in messages. Cognis models them as a closed enum so the compiler always knows which roles are possible, and a ContentPart enum so multimodal payloads ride alongside text without losing their type.
What it is
Message is an enum with four variants. Every variant carries a content string, plus role-specific fields:
- HumanMessage { content, parts } — user input, optionally multimodal.
- AiMessage { content, tool_calls, parts } — assistant reply, possibly with tool calls.
- SystemMessage { content } — system instructions.
- ToolMessage { tool_call_id, content } — the result of a tool invocation, threaded back to the matching call.
Building messages
Constructors take any impl Into<String>.
The agent loop builds Message::tool(call_id, content) automatically — you only construct these by hand if you’re driving the LLM yourself without an agent.
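As a concrete sketch of the shapes described above — a minimal hand-rolled mirror for illustration, not the actual cognis definitions (the real variants also carry parts and tool_calls fields, omitted here for brevity):

```rust
// Minimal mirror of the documented Message variants (sketch only).
#[allow(dead_code)]
#[derive(Debug)]
enum Message {
    Human { content: String },
    Ai { content: String },
    System { content: String },
    Tool { tool_call_id: String, content: String },
}

impl Message {
    // Constructors take any impl Into<String>, as the docs note.
    fn human(content: impl Into<String>) -> Self {
        Message::Human { content: content.into() }
    }
    fn system(content: impl Into<String>) -> Self {
        Message::System { content: content.into() }
    }
    fn tool(call_id: impl Into<String>, content: impl Into<String>) -> Self {
        Message::Tool { tool_call_id: call_id.into(), content: content.into() }
    }
}

fn main() {
    // A short conversation: system instructions, user input,
    // and a tool result threaded back by call id.
    let msgs = vec![
        Message::system("You are concise."),
        Message::human("What time is it in Tokyo?"),
        Message::tool("call_1", "09:00 JST"),
    ];
    println!("{} messages", msgs.len());
}
```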
Multimodal content
HumanMessage and AiMessage both carry a parts: Vec<ContentPart> for non-text payloads.
| ContentPart | Fields |
|---|---|
| Text | text: String |
| Image | source: ImageSource, mime: String |
| Audio | source: AudioSource, mime: String |
ImageSource and AudioSource are themselves enums:
- Url { url } — pass a public URL the model can fetch.
- Base64 { data } — inline payload for providers that accept it.
Shorthand constructors: ImageSource::url("..."), ImageSource::base64(data).
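Under the same caveat — a hand-rolled mirror of the documented shapes, not the crate itself — assembling a multimodal parts vector might look like:

```rust
// Sketch of the documented ImageSource and ContentPart shapes.
#[allow(dead_code)]
#[derive(Debug)]
enum ImageSource {
    Url { url: String },
    Base64 { data: String },
}

impl ImageSource {
    fn url(url: impl Into<String>) -> Self {
        ImageSource::Url { url: url.into() }
    }
    fn base64(data: impl Into<String>) -> Self {
        ImageSource::Base64 { data: data.into() }
    }
}

#[allow(dead_code)]
#[derive(Debug)]
enum ContentPart {
    Text { text: String },
    Image { source: ImageSource, mime: String },
}

fn main() {
    // A text part plus an image fetched by URL.
    let parts = vec![
        ContentPart::Text { text: "Describe this image.".into() },
        ContentPart::Image {
            source: ImageSource::url("https://example.com/cat.png"),
            mime: "image/png".into(),
        },
    ];
    println!("{} parts", parts.len());
}
```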
Provider serialization
Each provider serializes parts into its own wire format — OpenAI’s image_url blocks, Anthropic’s image blocks, Gemini’s inline_data / file_data. Cognis ships the conversions; you don’t write them.
Going the other way, ContentPart::from_openai, from_anthropic, and from_gemini parse provider responses back into parts.
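To make the wire shape concrete, here is a hand-written rendering of one part into an OpenAI-style image_url block. This is a sketch only: it flattens source to a plain url, builds JSON by string formatting to stay dependency-free, and skips escaping — the crate ships the real conversions.

```rust
// Simplified part: source flattened to a bare URL for brevity.
#[allow(dead_code)]
#[derive(Debug)]
enum ContentPart {
    Text { text: String },
    Image { url: String, mime: String },
}

// Illustrative OpenAI-style serialization (no escaping, sketch only).
fn to_openai(part: &ContentPart) -> String {
    match part {
        ContentPart::Text { text } => {
            format!(r#"{{"type":"text","text":"{}"}}"#, text)
        }
        ContentPart::Image { url, .. } => {
            format!(r#"{{"type":"image_url","image_url":{{"url":"{}"}}}}"#, url)
        }
    }
}

fn main() {
    let part = ContentPart::Image {
        url: "https://example.com/cat.png".into(),
        mime: "image/png".into(),
    };
    // Prints: {"type":"image_url","image_url":{"url":"https://example.com/cat.png"}}
    println!("{}", to_openai(&part));
}
```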
Reading messages
A few accessors exist for the common fields. ToolCall carries { id, name, arguments: Value }; the agent loop uses these to dispatch.
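A sketch of how an agent loop might dispatch on those fields. ToolCall is mirrored by hand here, with arguments held as a raw String rather than a serde_json::Value, purely to keep the example dependency-free:

```rust
// Hand-rolled mirror of the documented ToolCall shape (sketch only).
struct ToolCall {
    id: String,
    name: String,
    arguments: String, // the real field is a serde_json::Value
}

// Dispatch a call to a matching tool and return the
// (tool_call_id, content) pair that Message::tool needs.
fn dispatch(call: &ToolCall) -> (String, String) {
    let output = match call.name.as_str() {
        // Hypothetical tool: echoes its arguments back.
        "echo" => call.arguments.clone(),
        other => format!("unknown tool: {}", other),
    };
    (call.id.clone(), output)
}

fn main() {
    let call = ToolCall {
        id: "call_1".into(),
        name: "echo".into(),
        arguments: r#"{"text":"hi"}"#.into(),
    };
    let (id, content) = dispatch(&call);
    println!("{} -> {}", id, content);
}
```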
See also
- Building agents → Models — send messages through an LLM client.
- Building agents → Tools — what tool_calls and Message::tool are wired to.