NodeTool lets you combine local inference engines, downloaded checkpoints, and hosted APIs inside the same workflow. Use this page as the launchpad for everything related to model setup and provider configuration.
## What to read next
- Supported Models — Full breakdown of llama.cpp, MLX, Whisper, Flux, Stable Diffusion, and other local engines.
- Providers Guide — Configure OpenAI, Anthropic, Gemini, Groq, Together, RunPod, Fal.ai, and custom endpoints.
- Models Manager — Download queues, disk usage tracking, and quick model switching.
- Proxy & Self-Hosted Deployments — Route remote workers securely and expose GPU hosts without leaking secrets.
## Common tasks
- Install baseline models: Open the in-app Models Manager and download GPT-OSS (LLM) plus Flux (image) so default templates work offline. See Getting Started – Step 1, or the scripted download sketch after this list.
- Connect a cloud provider: Go to Settings → Providers, add your API key, and map nodes to that provider using the Models button on the node. Follow the Providers Guide; a quick key check appears after this list.
- Mix local + remote nodes: Keep sensitive preprocessing local (Whisper, ChromaDB), then fan out to hosted generation nodes. The Cookbook patterns show hybrid examples, and a minimal sketch follows this list.
- Deploy model-heavy workflows: When you need GPUs beyond your laptop, export the workflow unchanged to RunPod or Cloud Run using the Deployment Guide or Deployment Journeys.
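If you would rather script the baseline downloads than click through the Models Manager, the sketch below pulls checkpoints straight from the Hugging Face Hub. It assumes the models are published there and that your runtime can read them from the standard Hub cache; the `openai/gpt-oss-20b` and `black-forest-labs/FLUX.1-schnell` repo IDs are illustrative and may not match the exact builds the Models Manager installs.

```python
# Hedged sketch: a scripted alternative to the Models Manager UI.
# Assumes huggingface_hub is installed; repo IDs are illustrative and the
# Models Manager may store or name its downloads differently.
from huggingface_hub import snapshot_download

BASELINE_REPOS = [
    "openai/gpt-oss-20b",                # illustrative LLM checkpoint
    "black-forest-labs/FLUX.1-schnell",  # illustrative image model
]

for repo_id in BASELINE_REPOS:
    local_path = snapshot_download(repo_id=repo_id)
    print(f"{repo_id} cached at {local_path}")
```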
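Before mapping nodes to a hosted provider, it helps to confirm that the API keys are visible to the process that launches NodeTool. The snippet below is a minimal check, assuming the providers read the conventional `OPENAI_API_KEY`, `ANTHROPIC_API_KEY`, and `GEMINI_API_KEY` variables; confirm the exact names expected in Settings → Providers.

```python
# Hedged sketch: verify provider keys are set before launching NodeTool.
# The variable names below are the conventional ones used by each provider's
# SDK; whether NodeTool reads these exact names is an assumption to confirm
# in Settings → Providers.
import os

REQUIRED_KEYS = ["OPENAI_API_KEY", "ANTHROPIC_API_KEY", "GEMINI_API_KEY"]

missing = [name for name in REQUIRED_KEYS if not os.environ.get(name)]
if missing:
    print("Missing provider keys:", ", ".join(missing))
else:
    print("All provider keys are present in the environment.")
```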
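To make the hybrid pattern concrete, here is a minimal sketch of the same idea outside the NodeTool editor: transcription runs locally so the audio never leaves the machine, and only the derived text is sent to a hosted model. It assumes the openai-whisper package and the official OpenAI SDK are installed; the file name and model choices are illustrative.

```python
# Hedged sketch of the hybrid pattern: transcribe locally, generate remotely.
# Assumes openai-whisper and the OpenAI SDK; names and models are illustrative.
import whisper
from openai import OpenAI

# Local step: the audio file never leaves the machine.
local_model = whisper.load_model("base")
transcript = local_model.transcribe("meeting.mp3")["text"]

# Remote step: only the derived text is sent to the hosted provider.
client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "Summarize the transcript in five bullet points."},
        {"role": "user", "content": transcript},
    ],
)
print(response.choices[0].message.content)
```

Inside NodeTool, the equivalent split is wiring a local Whisper node into a hosted generation node and choosing the provider per node via the Models button.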
Keeping models and providers organized here ensures every workflow can run locally for privacy, then burst to the cloud only when necessary.