Agent Builders
Design multi-step LLM agents that reason, call tools, and stream progress.
- Planning + execution in one workflow
- Preview nodes to debug intermediate steps
- Trigger runs from Global Chat or CLI
Local-first AI workflow builder
NodeTool lets you compose text, audio, video, and automation nodes on a single canvas, run them on your machine, then ship the identical workflow to RunPod, Cloud Run, or your own infrastructure.
- Design multi-step LLM agents that reason, call tools, and stream progress.
- Index private corpora, run hybrid search, and ground every answer in sources.
- Prototype creative pipelines mixing audio, vision, video, and structured tools.
Plan, call tools, and summarize results with streaming progress updates.
Agent pattern → Realtime Agent example →

Ingest PDFs, chunk text, and answer questions grounded in citations.
RAG pattern → Chat with Docs example →

Transcribe meetings, remove silence, add subtitles, or narrate generated imagery.
Transcribe Audio example → Story to Video example →

Fetch data, transform it with AI nodes, and publish dashboards or reports.
Data Viz pipeline → Data processing pattern →

All workflows, assets, and models execute on your machine for maximum privacy.
Mix local nodes with OpenAI, Anthropic, or RunPod workers when you need extra capacity.
Run LLMs, Whisper, and diffusion models locally without shipping data to third parties. Opt into APIs only when needed.
Create once in the editor, trigger from Global Chat, expose via Mini-Apps, or call it from the Workflow API—all backed by the same graph.
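Triggering a graph over HTTP might look like the sketch below. This is a minimal illustration only: the endpoint route, payload shape, and workflow ID are hypothetical stand-ins, not NodeTool's documented Workflow API, so consult the actual API reference before wiring anything up.

```python
import json
from urllib import request

# Hypothetical route -- NodeTool's real Workflow API may differ.
NODETOOL_URL = "http://localhost:8000/api/workflows/{workflow_id}/run"

def build_run_request(workflow_id: str, params: dict) -> request.Request:
    """Build a POST request that would trigger one workflow run.

    The {"params": ...} body shape is an assumption for illustration.
    """
    body = json.dumps({"params": params}).encode("utf-8")
    return request.Request(
        NODETOOL_URL.format(workflow_id=workflow_id),
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# The request object can be sent with urllib.request.urlopen(req)
# once a NodeTool server is running locally.
req = build_run_request("summarize-doc", {"text": "Hello, world."})
```

Because the editor, Global Chat, Mini-Apps, and the API all execute the same stored graph, a caller like this only needs the workflow's ID and its input parameters.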
When you outgrow your laptop, push the same workflow to RunPod or Cloud Run. No refactoring required.
NodeTool is open-source under AGPL-3.0. Join the Discord, explore the GitHub repo, and share workflows with other builders.