The open creative AI workspace

Every model. Your keys. Your canvas.

One node-based canvas for making images, video, sound, and stories with AI. Wire up Flux, Qwen, Wan, Seedance, Sora, Veo, Kling, ElevenLabs, MusicGen, and the major LLMs side by side — bring your own keys, or run locally. Open source, AGPL-3.0.

NodeTool canvas

Studio or Cloud

Both editions are open source under AGPL-3.0 and built from the same codebase. Workflows are portable between them.

NodeTool Studio — desktop

Runs on macOS, Windows, and Linux. Local inference via Ollama, MLX, and GGUF on your hardware. Works offline. Files, prompts, and outputs stay on disk. Bring your keys for cloud providers when you want them.

For: local inference, offline work, GPU owners, and privacy-minded users.

Download Studio →

NodeTool Cloud — browser

Hosted, no install. Same canvas, same nodes. Bring your keys for every cloud provider — OpenAI, Anthropic, Gemini, Replicate, FAL, ElevenLabs, Hugging Face. Does not run local models.

For: zero setup, working across devices, no GPU required.

Open Cloud →

No credit markup. Cloud is managed hosting of the code in this repo. You can self-host the same Docker images and CLI any time. You pay providers directly.
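The self-hosting path above can be sketched with a plain `docker run`. This is a minimal illustration only: the image name, tag, port, environment variable, and volume path are assumptions made for the example, not the repo's documented values — check the project's README for the real ones.

```shell
# Hypothetical sketch — image name, port, and paths are assumptions,
# not documented values; consult the repo's own instructions.
docker run -d \
  --name nodetool \
  -p 8000:8000 \
  -e OPENAI_API_KEY="$OPENAI_API_KEY" \
  -v "$PWD/workflows:/data/workflows" \
  nodetool/nodetool:latest
```

The shape is the point: you supply your own provider keys as environment variables and mount a host directory so workflows persist, exactly as the bring-your-own-keys model describes.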

What you can build

Image and video

Posters, characters, scenes. Flux, Qwen, Wan, Seedance, Sora, Veo, Kling on one canvas.

Movie Posters →

Story to video

Turn a prompt into a storyboard, narrate it, animate it, score it.

Story to Video →

Sound and voice

Music, sound design, narration. ElevenLabs, MusicGen, Whisper wired into the graph.

Image to Audio Story →

Agents

Multi-step agents that plan, call tools, and drive your creative pipelines.

Realtime Agent →

More patterns — pipelines, data, RAG, email — in the Cookbook.

Get started

  1. Download NodeTool for macOS, Windows, or Linux.
  2. Open a template, press Run, and watch results stream in.
  3. Edit and iterate.

Explore

Open source

AGPL-3.0. Discord · GitHub.