RAG + Tools
Build agents that use retrieval-augmented generation (RAG) or tool-calling patterns.
Approach
- Define events — e.g. `rag-request` (query + context) and `rag-response` (answer chunks)
- Agent logic — fetch from your vector store, build the context, and call the LLM with the augmented prompt
- Streaming — emit response chunks as they arrive from the LLM
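The steps above can be sketched as a single handler. This is a minimal, illustrative sketch: `VectorStore`, `Document`, `fake_llm_stream`, and `handle_rag_request` are all hypothetical stand-ins, not part of any real API; a real implementation would use embeddings and a streaming LLM client.

```python
# Sketch of a rag-request handler: retrieve, augment, then stream.
# VectorStore and fake_llm_stream are placeholders for your own components.
from dataclasses import dataclass
from typing import Iterator, List


@dataclass
class Document:
    text: str


class VectorStore:
    """Toy in-memory store; real code would use embeddings + ANN search."""

    def __init__(self, docs: List[Document]):
        self.docs = docs

    def search(self, query: str, k: int = 3) -> List[Document]:
        # Rank by naive word overlap as a stand-in for vector similarity.
        words = set(query.lower().split())
        scored = sorted(
            self.docs,
            key=lambda d: len(words & set(d.text.lower().split())),
            reverse=True,
        )
        return scored[:k]


def build_prompt(query: str, docs: List[Document]) -> str:
    # "Build context" step: stitch retrieved text into the prompt.
    context = "\n".join(d.text for d in docs)
    return f"Answer using this context:\n{context}\n\nQuestion: {query}"


def fake_llm_stream(prompt: str) -> Iterator[str]:
    # Placeholder for a real streaming completion API.
    yield "chunked "
    yield "answer"


def handle_rag_request(query: str, store: VectorStore) -> Iterator[str]:
    """Handle a rag-request event: fetch context, call the LLM, stream chunks."""
    prompt = build_prompt(query, store.search(query))
    # Each chunk would be emitted as a rag-response event as it arrives.
    for chunk in fake_llm_stream(prompt):
        yield chunk
```

Swapping `fake_llm_stream` for a real streaming client is the only structural change needed; the retrieve-augment-stream shape stays the same.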
Tool-Calling Pattern
For agents that call tools (e.g. search, calculator):
1. Emit a `tool-request` event with the tool name and params.
2. Another agent (or external system) handles the tool and emits a `tool-response`.
3. The original agent receives the `tool-response` and continues.
This can be implemented with multiple agents on different channels, or with a single agent that manages tool state internally.
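A minimal sketch of the single-agent variant, with an in-process handler standing in for the other agent or channel; the event shapes, the `calculator` tool, and every function name here are illustrative assumptions, not a prescribed schema.

```python
# Sketch of the tool-calling loop. In a real deployment the request and
# response events would travel over channels between agents; here a local
# registry plays the role of the tool-handling agent.
from typing import Callable, Dict

# Registry standing in for whatever system handles tool-request events.
TOOLS: Dict[str, Callable[[dict], str]] = {
    "calculator": lambda params: str(eval(params["expr"], {"__builtins__": {}})),
}


def handle_tool_request(event: dict) -> dict:
    """Consume a tool-request event and produce the matching tool-response."""
    result = TOOLS[event["tool"]](event["params"])
    return {"type": "tool-response", "id": event["id"], "result": result}


def agent_turn(question: str) -> str:
    # 1. The agent decides it needs a tool and emits a tool-request.
    request = {
        "type": "tool-request",
        "id": "t1",
        "tool": "calculator",
        "params": {"expr": "2 + 2"},
    }
    # 2. Another agent (here, a local handler) processes the tool call.
    response = handle_tool_request(request)
    # 3. The original agent receives the tool-response and continues.
    return f"{question} -> {response['result']}"
```

The `id` field is what lets the multi-agent variant correlate a `tool-response` with the `tool-request` that caused it when events arrive out of order.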
Example Repos
- core-example — basic streaming agent; extend it with RAG by adding retrieval before the LLM call
- open-ai-speech-to-speech-example — uses the OpenAI Realtime API; shows integration patterns
Common Recipes
See Common Recipes for copyable snippets (retrieval, tool loops, etc.).