
Documentation Index

Fetch the complete documentation index at: https://cognis.vasanth.xyz/llms.txt

Use this file to discover all available pages before exploring further.

LLMs talk in prose, but most of your code wants structs. Structured output is the bridge: ask the model to emit JSON that matches a schema, then parse it into a typed value.

How it works

Three pieces collaborate:
  • A struct deriving JsonSchema, so the parser can describe the format to the model.
  • A parser, StructuredOutputParser<T>, which injects format instructions into the prompt and parses replies back into T.
  • A recovery strategy, OutputFixingParser or RetryParser, for the cases when the model wanders off-format.

use cognis::prelude::*;
use cognis_core::output_parsers::{OutputParser, StructuredOutputParser};
use cognis_core::schemars::{self, JsonSchema};
use serde::Deserialize;

#[derive(Debug, Deserialize, JsonSchema)]
struct Recipe {
    title: String,
    ingredients: Vec<String>,
    steps: Vec<String>,
}

let parser: StructuredOutputParser<Recipe> = StructuredOutputParser::new();
let format_hint = OutputParser::format_instructions(&parser).unwrap_or_default();

let prompt = format!("Give me a recipe for scrambled eggs.\n\n{format_hint}");
let reply = client.invoke(vec![Message::human(prompt)]).await?;
let recipe: Recipe = parser.parse(reply.content())?;
Source: examples/v2/07_ollama_structured_output.rs.
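The mechanics are easy to see without the library: a hint describing the expected JSON shape is appended to the prompt, and the reply is parsed back into the struct. A stdlib-only sketch (format_hint and its wording are invented here; Cognis derives the real instructions from the JsonSchema):

```rust
// Illustrative only: hand-building the kind of JSON-shape hint that
// format_instructions() appends to the prompt. The wording is invented
// for this sketch; Cognis generates it from the derived JsonSchema.
fn format_hint(fields: &[(&str, &str)]) -> String {
    let body: Vec<String> = fields
        .iter()
        .map(|(name, ty)| format!("  \"{name}\": <{ty}>"))
        .collect();
    format!(
        "Respond with JSON only, matching this shape:\n{{\n{}\n}}",
        body.join(",\n")
    )
}

fn main() {
    let hint = format_hint(&[
        ("title", "string"),
        ("ingredients", "array of strings"),
        ("steps", "array of strings"),
    ]);
    // Same flow as the example above: append the hint to the user prompt.
    let prompt = format!("Give me a recipe for scrambled eggs.\n\n{hint}");
    println!("{prompt}");
}
```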

When models drift

Smaller and instruction-tuned models sometimes produce almost-valid JSON. Two recovery wrappers are available; the example below uses OutputFixingParser, which re-prompts the model with its original output and the parse error and asks it to fix the JSON. It makes one repair attempt by default.
use cognis_core::output_parsers::{OutputFixingParser, StructuredOutputParser};

let inner = StructuredOutputParser::<Recipe>::new();
let parser = OutputFixingParser::new(inner, fixer_client);
let recipe: Recipe = parser.parse(reply.content())?;
Useful when the model is capable of valid JSON but slipped — a fix attempt with the error in the prompt usually succeeds.
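The fix-once strategy itself is a small loop: parse, and on failure hand the model its own output plus the error for one repair pass. A stdlib-only sketch with closures standing in for the model and the parser (parse_with_one_fix is invented for illustration; the real wrapper makes an async LLM call):

```rust
// Illustrative only: a hand-rolled version of the fix-once strategy.
// `parse` stands in for the structured parser, `fix` for the fixer model.
fn parse_with_one_fix<T>(
    reply: &str,
    parse: impl Fn(&str) -> Result<T, String>,
    fix: impl Fn(&str, &str) -> String, // (bad output, error) -> repaired output
) -> Result<T, String> {
    match parse(reply) {
        Ok(v) => Ok(v),
        Err(e) => {
            // One repair attempt: re-prompt with the bad output and the error.
            let repaired = fix(reply, &e);
            parse(&repaired)
        }
    }
}

fn main() {
    // Toy parser: expects a bare integer.
    let parse = |s: &str| s.trim().parse::<i32>().map_err(|e| e.to_string());
    // Toy "fixer model": strips a stray trailing comma.
    let fix = |bad: &str, _err: &str| bad.trim_end_matches(',').to_string();

    let v = parse_with_one_fix("42,", parse, fix).unwrap();
    println!("{v}");
}
```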

Other parsers

For simpler shapes, Cognis has lightweight parsers without JSON Schema:
  • StringParser: identity; passes the message text through.
  • BooleanParser: parses a yes/no or true/false answer.
  • NumberedListParser: splits a numbered list into Vec<String>.
  • CommaListParser: splits a comma-separated list into Vec<String>.
  • JsonParser / JsonExtractor: best-effort JSON extraction (handles fenced code blocks).
  • XmlParser: parses XML when that's what the model emits.
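Models often wrap JSON in a fenced code block, so best-effort extraction means peeling that fence off before parsing. A stdlib-only sketch of the idea (extract_json is illustrative, not Cognis's actual JsonExtractor implementation):

```rust
// Illustrative sketch of fenced-block stripping: if the reply contains a
// ```json ... ``` (or bare ```) fence, return its body; otherwise return
// the trimmed reply as-is.
fn extract_json(reply: &str) -> &str {
    let trimmed = reply.trim();
    if let Some(start) = trimmed.find("```") {
        let after = &trimmed[start + 3..];
        // Skip an optional language tag like "json" on the fence line.
        let body_start = after.find('\n').map(|i| i + 1).unwrap_or(0);
        let body = &after[body_start..];
        if let Some(end) = body.find("```") {
            return body[..end].trim();
        }
    }
    trimmed
}

fn main() {
    let reply = "Here you go:\n```json\n{\"title\": \"Eggs\"}\n```";
    println!("{}", extract_json(reply));
}
```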

How it composes

Parsers are Runnable<Message, T>, so they slot into chains:
let chain = prompt.pipe(model).pipe(parser);
let recipe: Recipe = chain.invoke(query, cfg).await?;
Wrappers like with_retry work the same as anywhere else.
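The pipe shape can be sketched in a few lines of plain Rust (Stage, FnStage, and Pipe are invented for this sketch; Cognis's Runnable trait is async and richer):

```rust
// A toy version of the pipe pattern: each stage maps input to output,
// and pipe() chains two stages into one. Invented names, not Cognis's API.
struct Pipe<A, B>(A, B);
struct FnStage<F>(F);

trait Stage<I> {
    type Out;
    fn invoke(&self, input: I) -> Self::Out;
    fn pipe<N>(self, next: N) -> Pipe<Self, N>
    where
        Self: Sized,
        N: Stage<Self::Out>,
    {
        Pipe(self, next)
    }
}

impl<I, O, F: Fn(I) -> O> Stage<I> for FnStage<F> {
    type Out = O;
    fn invoke(&self, input: I) -> O {
        (self.0)(input)
    }
}

impl<I, A: Stage<I>, B: Stage<A::Out>> Stage<I> for Pipe<A, B> {
    type Out = B::Out;
    fn invoke(&self, input: I) -> Self::Out {
        // Feed the first stage's output into the second.
        self.1.invoke(self.0.invoke(input))
    }
}

fn main() {
    // Stand-ins for prompt -> model -> parser, as plain functions.
    let chain = FnStage(|s: &str| s.len()).pipe(FnStage(|n: usize| n * 2));
    println!("{}", chain.invoke("hello"));
}
```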

Provider tips

  • Anthropic and Google generally produce excellent JSON when the schema is in the system prompt.
  • OpenAI’s “JSON mode” / structured output API is a tighter contract. Cognis’ StructuredOutputParser works with any provider; if you want the provider’s strict mode, build your own Tool definition and use the provider’s structured-output route.
  • Smaller Ollama models (llama3.2:1b, etc.) drift often. Always wrap with OutputFixingParser in production.

Under the hood

  • Schema goes in the prompt, not in the request. The parser appends format_instructions() to the user prompt — that’s how providers without a structured-output API still emit valid JSON.
  • Fixing is a sub-call. OutputFixingParser makes another LLM call for the repair. Budget for it.
  • Errors are typed. Parse failures return Err(CognisError::OutputParse { … }) with the raw text and the parse error — surface those so users see a useful message, not a panic.
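The last point can be sketched concretely: keep the raw text and the underlying error together, and render both for the user. (OutputParseError here is illustrative; CognisError::OutputParse's exact fields may differ.)

```rust
use std::fmt;

// Illustrative error shape: carry the raw model output alongside the
// underlying parse error so callers can surface a useful message.
#[derive(Debug)]
struct OutputParseError {
    raw: String,
    message: String,
}

impl fmt::Display for OutputParseError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(
            f,
            "failed to parse model output: {} (raw: {:?})",
            self.message, self.raw
        )
    }
}

fn parse_count(reply: &str) -> Result<i64, OutputParseError> {
    reply.trim().parse().map_err(|e: std::num::ParseIntError| OutputParseError {
        raw: reply.to_string(),
        message: e.to_string(),
    })
}

fn main() {
    // Surface the typed error instead of panicking.
    match parse_count("three") {
        Ok(n) => println!("{n}"),
        Err(e) => eprintln!("{e}"),
    }
}
```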

See also

Tools

Tools also use JsonSchema for typed args.

Patterns → Code Q&A

Structured-output answers over a code corpus.

Reference → cognis-core

Full parser list.