The retrieval examples exercise each layer of a RAG pipeline. Sources live under `examples/retrieval/`.
| Name | Scenario | Source |
|---|---|---|
| `retrieval_text_splitters` | Compare `RecursiveCharSplitter`, `MarkdownSplitter`, and `SentenceSplitter` on the same Markdown blog post — see how each carves it. | src |
| `retrieval_rag_pipeline` | End-to-end RAG — split, embed, store, retrieve, answer with the LLM. The canonical flow. | src |
| `retrieval_indexing_rag` | Docs-site re-indexer — round 1 indexes 3 docs; edit one, and round 2 re-embeds only the changed doc. | src |
| `retrieval_reranking` | Vector search returns the top 10; a cross-encoder (LLM judge) reranks to the top 3 for the prompt. | src |
| `retrieval_caching_retriever` | A chat session asks the same question twice — the second call returns from cache (latency drops to ~0). | src |
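To give a feel for what the splitter comparison is doing, here is a generic recursive character splitter sketch. This is not the library's `RecursiveCharSplitter` API — the function name, separator list, and hard-cut fallback below are all illustrative assumptions:

```rust
/// Minimal recursive character splitter sketch (hypothetical, not the
/// library's real API): try separators in order, recursing until every
/// chunk fits within `max_len` bytes.
fn split_recursive(text: &str, seps: &[&str], max_len: usize) -> Vec<String> {
    if text.len() <= max_len {
        return vec![text.to_string()];
    }
    if let Some((sep, rest)) = seps.split_first() {
        // Split on the coarsest separator first, then recurse with the rest.
        text.split(sep)
            .filter(|p| !p.is_empty())
            .flat_map(|p| split_recursive(p, rest, max_len))
            .collect()
    } else {
        // No separators left: hard-cut at max_len bytes (assumes ASCII input).
        text.as_bytes()
            .chunks(max_len)
            .map(|c| String::from_utf8_lossy(c).into_owned())
            .collect()
    }
}

fn main() {
    let post = "# Title\n\nIntro paragraph.\n\n## Section\n\nBody text that runs long.";
    // Paragraphs first, then lines, then words.
    for chunk in split_recursive(post, &["\n\n", "\n", " "], 20) {
        println!("[{:>2}] {}", chunk.len(), chunk);
    }
}
```

Running the splitter examples side by side on the same post shows how the choice of separators (paragraphs vs. Markdown headings vs. sentences) changes chunk boundaries, which in turn changes what the retriever can find.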
How to run
Pick a starting point
- First time touching RAG? `retrieval_rag_pipeline` is the canonical flow.
- Re-indexing a corpus? `retrieval_indexing_rag` — incremental updates with a `RecordManager`.
- Improving retrieval quality? `retrieval_reranking` adds a cross-encoder pass.
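The incremental-update idea behind `retrieval_indexing_rag` can be sketched generically: keep a content fingerprint per document and re-embed only when it changes. The helpers below are a hypothetical stand-in, not the library's actual `RecordManager` API:

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};

/// Fingerprint a document's content (stand-in for whatever a record
/// manager would persist between indexing runs).
fn fingerprint(content: &str) -> u64 {
    let mut h = DefaultHasher::new();
    content.hash(&mut h);
    h.finish()
}

/// Return the ids of documents that are new or edited since the last run,
/// updating the stored fingerprints in place. Only these need re-embedding.
fn changed_docs(index: &mut HashMap<String, u64>, docs: &[(&str, &str)]) -> Vec<String> {
    docs.iter()
        .filter_map(|(id, content)| {
            let fp = fingerprint(content);
            match index.insert(id.to_string(), fp) {
                Some(old) if old == fp => None, // unchanged: skip
                _ => Some(id.to_string()),      // new or edited: re-embed
            }
        })
        .collect()
}

fn main() {
    let mut index = HashMap::new();
    // Round 1: all three docs are new, so all are embedded.
    let round1 = changed_docs(&mut index, &[("a", "alpha"), ("b", "beta"), ("c", "gamma")]);
    println!("round 1 embeds: {:?}", round1);
    // Round 2: only doc "b" was edited, so only it is re-embedded.
    let round2 = changed_docs(&mut index, &[("a", "alpha"), ("b", "beta v2"), ("c", "gamma")]);
    println!("round 2 re-embeds: {:?}", round2);
}
```

The same skip-if-unchanged shape is what makes round 2 of the example cheap: embedding calls scale with edits, not with corpus size.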
See also
- Building RAG: the user guide for every layer.
- Patterns → Code Q&A: a worked RAG over a Rust codebase.
- Reranking & compression: why reranking matters.