LLMs talk in prose, but most of your code wants structs. Structured output is the bridge: ask the model to emit JSON that matches a schema, then parse it into a typed value.
## How it works
Three pieces collaborate:

- A struct deriving `JsonSchema`, so the parser can describe the format to the model.
- A parser — `StructuredOutputParser<T>` — that injects format instructions into the prompt and parses replies back into `T`.
- A recovery strategy — `OutputFixingParser` or `RetryParser` — for the cases when the model wanders off-format.
See `examples/v2/07_ollama_structured_output.rs` for a runnable example.
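The three pieces above can be sketched end to end without the library. In real Cognis code the struct would derive `JsonSchema` and `StructuredOutputParser<Person>` would generate the instructions and do the parsing; everything in this std-only block (the hand-written schema text, the naive field extraction, the canned model reply) is an illustration, not the crate's API.

```rust
// Illustrative stand-in for #[derive(JsonSchema)] on a target struct.
struct Person {
    name: String,
    age: u32,
}

// Stand-in for the format instructions the parser derives from the schema
// and appends to the user prompt.
fn format_instructions() -> String {
    "Reply with JSON matching: {\"name\": string, \"age\": number}".to_string()
}

// Stand-in for the parse step (a real parser would use a JSON library).
fn parse_person(raw: &str) -> Result<Person, String> {
    let name = raw
        .split("\"name\"")
        .nth(1)
        .and_then(|s| s.split('"').nth(1))
        .ok_or("missing name")?
        .to_string();
    let after = raw.split("\"age\"").nth(1).ok_or("missing age")?;
    let digits: String = after
        .chars()
        .skip_while(|c| !c.is_ascii_digit())
        .take_while(|c| c.is_ascii_digit())
        .collect();
    let age: u32 = digits.parse().map_err(|_| "bad age".to_string())?;
    Ok(Person { name, age })
}

fn main() {
    // Schema goes into the prompt text, not into the request payload.
    let prompt = format!("Extract the person from: 'Ada, 36'.\n{}", format_instructions());
    println!("{prompt}");
    // A canned reply stands in for the actual LLM call.
    let reply = r#"{"name": "Ada", "age": 36}"#;
    let person = parse_person(reply).expect("reply matched the schema");
    assert_eq!(person.name, "Ada");
    assert_eq!(person.age, 36);
}
```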
## When models drift

Smaller and instruction-tuned models sometimes produce almost-valid JSON. Two recovery wrappers are available: `OutputFixingParser` and `RetryParser`.

`OutputFixingParser` re-prompts the model with the original output and the parse error and asks it to fix the JSON, making one repair attempt by default. It is useful when the model is capable of valid JSON but slipped — a fix attempt with the error in the prompt usually succeeds.
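The repair loop is easy to picture as code. This is a minimal std-only sketch of the idea, not `OutputFixingParser` itself: the closures stand in for Cognis' model and parser types, and only the one-repair-attempt behavior is taken from the description above.

```rust
// Try to parse; on failure, make one more model call that includes the
// bad output and the parse error, then parse the repaired reply.
fn parse_with_fixing<M, P, T>(model: M, parse: P, prompt: &str) -> Result<T, String>
where
    M: Fn(&str) -> String,            // the LLM call
    P: Fn(&str) -> Result<T, String>, // the output parser
{
    let first = model(prompt);
    match parse(&first) {
        Ok(v) => Ok(v),
        Err(err) => {
            // One repair attempt: re-prompt with output and error.
            let fix_prompt = format!(
                "Your previous reply failed to parse.\nReply: {first}\nError: {err}\nReturn only corrected JSON."
            );
            parse(&model(&fix_prompt))
        }
    }
}

fn main() {
    // A flaky mock model: trailing prose first, clean JSON on repair.
    let model = |prompt: &str| -> String {
        if prompt.contains("failed to parse") {
            r#"{"ok": true}"#.to_string()
        } else {
            "Sure! {\"ok\": true} Hope that helps.".to_string()
        }
    };
    // A strict mock parser: requires the reply to be bare JSON.
    let parse = |raw: &str| -> Result<String, String> {
        let t = raw.trim();
        if t.starts_with('{') && t.ends_with('}') {
            Ok(t.to_string())
        } else {
            Err("expected a bare JSON object".to_string())
        }
    };
    let out = parse_with_fixing(model, parse, "Is it ok? Reply as JSON.").unwrap();
    assert_eq!(out, r#"{"ok": true}"#);
    println!("{out}");
}
```

Note that the repair costs a second model call; see the design notes below.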
## Other parsers

For simpler shapes, Cognis has lightweight parsers without JSON Schema:

| Parser | Output |
|---|---|
| `StringParser` | Identity — passes the message text through. |
| `BooleanParser` | Parses a yes/no or true/false answer. |
| `NumberedListParser` | Splits a numbered list into `Vec<String>`. |
| `CommaListParser` | Splits a comma-separated list into `Vec<String>`. |
| `JsonParser` / `JsonExtractor` | Best-effort JSON extraction (handles fenced code blocks). |
| `XmlParser` | Parses XML when that's what the model emits. |
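The table above is concrete enough to sketch. These std-only functions illustrate the likely behavior of three of the parsers; they are behavioral illustrations under stated assumptions, not Cognis' actual implementations.

```rust
/// CommaListParser sketch: split on commas, trim whitespace, drop empties.
fn comma_list(text: &str) -> Vec<String> {
    text.split(',')
        .map(|s| s.trim().to_string())
        .filter(|s| !s.is_empty())
        .collect()
}

/// NumberedListParser sketch: keep lines shaped like "N. item", drop the number.
fn numbered_list(text: &str) -> Vec<String> {
    text.lines()
        .filter_map(|line| {
            let (num, rest) = line.trim().split_once('.')?;
            if !num.is_empty() && num.chars().all(|c| c.is_ascii_digit()) {
                Some(rest.trim().to_string())
            } else {
                None
            }
        })
        .collect()
}

/// JsonExtractor sketch: prefer the contents of a ```json fenced block.
fn extract_json(text: &str) -> Option<String> {
    let start = text.find("```json")? + "```json".len();
    let rest = &text[start..];
    let end = rest.find("```")?;
    Some(rest[..end].trim().to_string())
}

fn main() {
    assert_eq!(comma_list("red, green, blue"), vec!["red", "green", "blue"]);
    assert_eq!(numbered_list("1. alpha\n2. beta"), vec!["alpha", "beta"]);
    assert_eq!(
        extract_json("Here you go:\n```json\n{\"a\": 1}\n```").as_deref(),
        Some("{\"a\": 1}")
    );
    println!("all parser sketches behave as expected");
}
```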
## How it composes

Parsers are `Runnable<Message, T>`, so they slot into chains; combinators such as `with_retry` work the same as anywhere else.
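To make the composition concrete, here is a std-only sketch of a parser slotting in after a model. The `Runnable` trait below is a simplified stand-in for Cognis' `Runnable<Message, T>` (one method, plain types, no `Message`); the mock model and the chaining function are assumptions made for illustration.

```rust
// Simplified stand-in for Cognis' Runnable abstraction.
trait Runnable<I, O> {
    fn invoke(&self, input: I) -> O;
}

struct MockModel;       // prompt -> raw reply
struct CommaListParser; // raw reply -> Vec<String>

impl Runnable<String, String> for MockModel {
    fn invoke(&self, prompt: String) -> String {
        let _ = prompt; // a real model would use the prompt
        "red, green, blue".to_string()
    }
}

impl Runnable<String, Vec<String>> for CommaListParser {
    fn invoke(&self, raw: String) -> Vec<String> {
        raw.split(',').map(|s| s.trim().to_string()).collect()
    }
}

// Chaining: the parser consumes the model's output.
fn chain(prompt: &str) -> Vec<String> {
    CommaListParser.invoke(MockModel.invoke(prompt.to_string()))
}

fn main() {
    let colors = chain("List three colors, comma separated.");
    assert_eq!(colors, vec!["red", "green", "blue"]);
    println!("{colors:?}");
}
```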
## Provider tips

- Anthropic and Google generally produce excellent JSON when the schema is in the system prompt.
- OpenAI's "JSON mode" / structured-output API is a tighter contract. Cognis' `StructuredOutputParser` works with any provider; if you want the provider's strict mode, build your own `Tool` definition and use the provider's structured-output route.
- Smaller Ollama models (`llama3.2:1b`, etc.) drift often. Always wrap with `OutputFixingParser` in production.
## Design notes

- Schema goes in the prompt, not in the request. The parser appends `format_instructions()` to the user prompt — that's how providers without a structured-output API still emit valid JSON.
- Fixing is a sub-call. `OutputFixingParser` makes another LLM call for the repair; budget for it.
- Errors are typed. Parse failures return `Err(CognisError::OutputParse { … })` with the raw text and the parse error — surface those so users see a useful message, not a panic.
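Surfacing the typed error might look like the following std-only sketch. The `CognisError::OutputParse` variant carrying the raw text and parse error comes from the note above, but the exact field names (`raw`, `message`) and the `parse_flag` helper are assumptions for illustration.

```rust
// Illustrative error type; field names are assumed, not taken from the crate.
#[derive(Debug, PartialEq)]
enum CognisError {
    OutputParse { raw: String, message: String },
}

// Hypothetical boolean parse, in the spirit of BooleanParser.
fn parse_flag(raw: &str) -> Result<bool, CognisError> {
    match raw.trim().to_ascii_lowercase().as_str() {
        "yes" | "true" => Ok(true),
        "no" | "false" => Ok(false),
        _ => Err(CognisError::OutputParse {
            raw: raw.to_string(),
            message: "expected yes/no or true/false".to_string(),
        }),
    }
}

fn main() {
    // Surface the raw text and error instead of panicking.
    match parse_flag("maybe") {
        Ok(v) => println!("parsed: {v}"),
        Err(CognisError::OutputParse { raw, message }) => {
            eprintln!("could not parse model output {raw:?}: {message}");
        }
    }
    assert!(parse_flag("Yes").unwrap());
    assert!(!parse_flag("false").unwrap());
}
```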
## See also

- Tools: tools also use `JsonSchema` for typed args.
- Patterns → Code Q&A: structured-output answers over a code corpus.
- Reference → cognis-core: full parser list.