Local-first AI workflow builder

Build AI workflows visually. Deploy anywhere.

NodeTool lets you compose text, audio, video, and automation nodes on a single canvas, run them on your machine, then ship the identical workflow to RunPod, Cloud Run, or your own infrastructure.

Start Here

NodeTool gives you one canvas for the whole workflow lifecycle: prototype with visual nodes, run everything on your own machine for privacy, and deploy the identical graph to the cloud when you need more capacity.

Who uses NodeTool?

Agent Builders

Design multi-step LLM agents that reason, call tools, and stream progress.

  • Planning + execution in one workflow
  • Preview nodes to debug intermediate steps
  • Trigger runs from Global Chat or CLI
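
The planning-plus-execution pattern behind these agents can be sketched in a few lines. This is a generic illustration of the pattern, not NodeTool's actual API: the tool registry, planner, and executor names here are all hypothetical stand-ins.

```python
# Illustrative plan-then-execute agent loop (not NodeTool's API).
# A planner decomposes a goal into tool calls; an executor runs each
# step and streams progress, the way a Preview node would surface it.

from typing import Callable

# Hypothetical tool registry; a real agent would let the LLM pick tools.
TOOLS: dict[str, Callable[[str], str]] = {
    "search": lambda q: f"results for '{q}'",
    "summarize": lambda text: text[:40] + "...",
}

def plan(goal: str) -> list[tuple[str, str]]:
    """Stand-in planner: a real one would ask an LLM to decompose the goal."""
    return [("search", goal), ("summarize", f"notes on {goal}")]

def execute(steps: list[tuple[str, str]]) -> list[str]:
    outputs = []
    for tool_name, arg in steps:
        result = TOOLS[tool_name](arg)
        outputs.append(result)               # intermediate step, debuggable
        print(f"[{tool_name}] -> {result}")  # streamed progress
    return outputs

outputs = execute(plan("local-first AI workflows"))
```

In the editor, each of these steps becomes a node, and Preview nodes let you inspect the intermediate outputs instead of print statements.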

Knowledge & RAG Teams

Index private corpora, run hybrid search, and ground every answer in sources.

  • Document ingestion + retrieval on one canvas
  • Built-in ChromaDB collections
  • Automations via Mini-Apps or APIs
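
Conceptually, hybrid search blends a lexical score with a semantic one and keeps the winning document as the answer's source. The toy scorers below only illustrate that blend; NodeTool's built-in ChromaDB collections handle the real indexing and embeddings.

```python
# Conceptual sketch of hybrid retrieval (keyword + vector) with source
# grounding. Toy scores only -- real pipelines use embeddings and a
# vector store such as ChromaDB.

import math

DOCS = {
    "doc1": "NodeTool runs AI workflows locally on your machine",
    "doc2": "Deploy workflows to RunPod or Cloud Run without rewrites",
}

def keyword_score(query: str, text: str) -> float:
    q, t = set(query.lower().split()), set(text.lower().split())
    return len(q & t) / max(len(q), 1)

def vector_score(query: str, text: str) -> float:
    # Stand-in for embedding cosine similarity: character-bigram overlap.
    bigrams = lambda s: {s[i:i + 2] for i in range(len(s) - 1)}
    q, t = bigrams(query.lower()), bigrams(text.lower())
    return len(q & t) / math.sqrt(len(q) * len(t))

def hybrid_search(query: str, alpha: float = 0.5) -> tuple[str, str]:
    scored = [
        (alpha * keyword_score(query, text)
         + (1 - alpha) * vector_score(query, text), doc_id)
        for doc_id, text in DOCS.items()
    ]
    _, doc_id = max(scored)
    return doc_id, DOCS[doc_id]  # answer stays grounded in its source

doc_id, source = hybrid_search("run workflows locally")
```

Because retrieval always returns the source document alongside the match, every downstream answer can cite where it came from.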

Multimodal Makers

Prototype creative pipelines mixing audio, vision, video, and structured tools.

  • Mix local diffusion with hosted APIs
  • Scriptable data prep & charting nodes
  • Deployable without rewriting code
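
To make "scriptable data prep & charting" concrete, here is the kind of aggregation a scriptable node might perform before handing data to a charting node. The record fields and chart payload shape are invented for illustration, not NodeTool's node contract.

```python
# Toy data-prep step: aggregate raw records into a chart-ready payload
# (labels + values), the shape a downstream charting node might consume.
# Field names and payload shape are illustrative assumptions.

from collections import defaultdict

records = [
    {"model": "flux", "latency_ms": 820},
    {"model": "flux", "latency_ms": 760},
    {"model": "gpt-oss", "latency_ms": 310},
]

totals: dict[str, float] = defaultdict(float)
counts: dict[str, int] = defaultdict(int)
for r in records:
    totals[r["model"]] += r["latency_ms"]
    counts[r["model"]] += 1

# Mean latency per model, sorted by label for stable chart output.
chart = {
    "labels": sorted(totals),
    "values": [totals[m] / counts[m] for m in sorted(totals)],
}
```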

Your first 10 minutes

  1. Download NodeTool — install the desktop app for macOS, Windows, or Linux.
  2. Launch and pick models — install GPT-OSS + Flux in Model Manager for fast local runs.
  3. Open the Creative Story Ideas template — inspect nodes, press Run, and watch Preview stream results.
  4. Save & run from Global Chat — trigger the workflow directly from a chat thread.
  5. Publish as a Mini-App — hand teammates a form UI powered by the same workflow.

Choose your path

Local-first or cloud-augmented

Local-only mode

All workflows, assets, and models execute on your machine for maximum privacy.

  • Use MLX, llama.cpp, Whisper, and Flux locally
  • Store assets on disk or Supabase buckets you control
  • Disable outbound traffic entirely if needed
Storage guide →

Cloud-augmented mode

Mix local nodes with OpenAI, Anthropic, or RunPod workers when you need extra capacity.

  • Configure API keys in Settings → Providers
  • Deploy the same workflow to RunPod or Cloud Run
  • Automate runs through the Workflow API or chat APIs
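
Automating a run over HTTP might look like the sketch below. The route and payload shape here are assumptions for illustration only; check the Workflow API documentation for the actual contract before wiring anything up.

```python
# Sketch of triggering a workflow run over HTTP. The /api/workflows/... route
# and the JSON payload shape are hypothetical -- consult the Workflow API
# docs for the real endpoint and fields.

import json
import urllib.request

def build_run_request(base_url: str, workflow_id: str,
                      params: dict) -> urllib.request.Request:
    payload = json.dumps({"workflow_id": workflow_id,
                          "params": params}).encode()
    return urllib.request.Request(
        f"{base_url}/api/workflows/{workflow_id}/run",  # hypothetical route
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_run_request("http://localhost:8000", "story-ideas",
                        {"topic": "sci-fi"})
# urllib.request.urlopen(req) would fire the run; omitted here.
```

The same request works whether the server is your laptop or a RunPod/Cloud Run deployment, since the graph behind the endpoint is identical.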
Models & Providers overview →

Why teams choose NodeTool

Privacy-first

Run LLMs, Whisper, and diffusion models locally without shipping data to third parties. Opt into APIs only when needed.

Single workflow, many surfaces

Create once in the editor, trigger from Global Chat, expose via Mini-Apps, or call it from the Workflow API—all backed by the same graph.

Deploy without rewrites

When you outgrow your laptop, push the same workflow to RunPod or Cloud Run. No refactoring required.

Build with the community

NodeTool is open-source under AGPL-3.0. Join the Discord, explore the GitHub repo, and share workflows with other builders.