Build agents that use retrieval-augmented generation (RAG) or tool-calling patterns.

## Documentation Index
Fetch the complete documentation index at: https://docs.m4trix.dev/llms.txt
Use this file to discover all available pages before exploring further.
## Approach
- Define events — e.g. `rag-request` (query + context) and `rag-response` (answer chunks)
- Agent logic — Fetch from your vector store, build context, call the LLM with the augmented prompt
- Streaming — Emit response chunks as they arrive from the LLM
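The three steps above can be sketched as a single event handler. This is a framework-agnostic illustration, not the library's actual API: only the event names `rag-request` and `rag-response` come from the docs, while `retrieve` and `call_llm_streaming` are placeholder stubs standing in for a real vector store and a real streaming LLM client.

```python
def retrieve(query):
    # Stub: a real agent would query a vector store here.
    return ["Doc A is relevant to the query.", "Doc B adds background."]

def call_llm_streaming(prompt):
    # Stub: yield answer chunks the way a streaming LLM client would.
    for chunk in ("Answer part 1. ", "Answer part 2."):
        yield chunk

def handle_rag_request(event, emit):
    query = event["query"]
    context = "\n".join(retrieve(query))                   # fetch + build context
    prompt = f"Context:\n{context}\n\nQuestion: {query}"   # augmented prompt
    for chunk in call_llm_streaming(prompt):               # stream as chunks arrive
        emit({"type": "rag-response", "chunk": chunk})

# Drive the handler with an in-memory list standing in for the event bus.
emitted = []
handle_rag_request({"type": "rag-request", "query": "What is X?"}, emitted.append)
```

The key design point is that the handler never buffers the full answer: each chunk is emitted as a separate `rag-response` event the moment the LLM produces it.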
## Tool-Calling Pattern
For agents that call tools (e.g. search, calculator):

- Emit a `tool-request` event with the tool name and params
- Another agent (or external system) handles the tool and emits `tool-response`
- The original agent receives `tool-response` and continues
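The round trip above can be sketched as follows. Only the event names `tool-request` and `tool-response` come from the docs; the in-memory event list, the tool registry, and the calculator tool itself are illustrative stubs for whatever transport and tools your agents actually use.

```python
def calculator(params):
    # Stub tool: a real deployment might expose search, calculators, etc.
    return params["a"] + params["b"]

TOOLS = {"calculator": calculator}

def tool_handler(event, emit):
    # A separate agent (or external system) executes the requested tool...
    result = TOOLS[event["tool"]](event["params"])
    # ...and emits tool-response so the original agent can continue.
    emit({"type": "tool-response", "result": result})

# In-memory list standing in for the event bus between the two agents.
events = [{"type": "tool-request", "tool": "calculator", "params": {"a": 2, "b": 3}}]
tool_handler(events[0], events.append)
```

Because the tool runs in a separate handler, the original agent only needs to pause until a matching `tool-response` event arrives, then resume with the result.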
## Example Repos
- `core-example` — Basic streaming agent; extend with RAG by adding retrieval before the LLM call
- `open-ai-speech-to-speech-example` — Uses OpenAI Realtime API; shows integration patterns