The nodetool CLI manages local development workflows, servers, deployments, and admin tooling. Install the project and run `nodetool --help` (or `python -m nodetool.cli --help`) to see the top-level command list. Every sub-command exposes its own `--help` flag with detailed usage.
Getting Help
- `nodetool --help`: list all top-level commands and groups.
- `nodetool <command> --help`: show command-specific options (e.g. `nodetool serve --help`).
- `nodetool <group> --help`: list sub-commands for grouped tooling (e.g. `nodetool deploy --help`).
Core Runtime Commands
nodetool agent
Runs an autonomous AI agent from start to finish using a YAML configuration file. Agents use the planning agent architecture to break down tasks, execute them iteratively, and achieve goals through tool usage.
Arguments:
- `--config FILE` (required): path to the agent YAML configuration file.
- `--prompt TEXT`: inline prompt for the agent to execute.
- `--prompt-file FILE`: load the prompt from a text file.
- `--interactive`/`-i`: start an interactive session with the agent.
Options:
- `--workspace DIR`: override the workspace directory from the config.
- `--max-iterations N`: override the maximum planning iterations from the config.
- `--output FILE`: save agent output to a file.
- `--jsonl`: output in JSONL format for automation.
- `--verbose`/`-v`: enable DEBUG-level logging.
Examples:
# Run agent with inline prompt
nodetool agent --config research-agent.yaml --prompt "Research AI trends"
# Run agent with prompt from file
nodetool agent --config code-assistant.yaml --prompt-file task.txt
# Interactive mode for multi-turn conversations
nodetool agent --config content-creator.yaml --interactive
# Save output to file
nodetool agent --config agent.yaml --prompt "Task" --output result.txt
# JSONL output for automation
nodetool agent --config agent.yaml --prompt "Task" --jsonl
Agent Configuration:
Agents are configured via YAML files that specify:
- System prompt: Instructions defining agent behavior
- Model: Primary AI model (provider and model ID)
- Planning agent: Always enabled, coordinates task execution
- Tools: Available capabilities (search, code execution, file operations)
- Parameters: Token limits, temperature, iteration limits
- Workspace: Sandboxed directory for file operations
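To make the points above concrete, a configuration file might look roughly like the sketch below. The field names here are hypothetical and for illustration only; consult the Agent CLI Documentation and `examples/agents/` for the actual schema.

```yaml
# Hypothetical agent configuration sketch; field names are illustrative,
# not the CLI's actual schema.
name: research-agent
system_prompt: |
  You are a research assistant. Summarize findings with sources.
model:
  provider: openai
  model_id: gpt-4o
tools:
  - google_search
  - browser
max_iterations: 10      # planning iteration limit
max_tokens: 4096
temperature: 0.2
workspace: ./workspace  # sandboxed directory for file operations
```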
See Agent CLI Documentation for complete configuration reference and examples/agents/ for sample configurations.
nodetool serve
Runs the FastAPI backend server for the NodeTool platform. This serves the REST API, WebSocket endpoints, and optionally static assets or app bundles.
Options:
- `--host` (default `127.0.0.1`): bind address (use `0.0.0.0` for all interfaces).
- `--port` (default `7777`): listen port.
- `--static-folder`: path to a folder containing static web assets (e.g., compiled React UI).
- `--force-fp16`: force FP16 precision for ComfyUI integrations if available (GPU optimization).
- `--reload`: enable auto-reload on file changes (development only).
- `--production`: enable production mode (stricter validation, optimizations).
- `--remote-auth`: enable remote authentication (Supabase-backed auth).
- `--verbose`/`-v`: enable DEBUG-level logging for troubleshooting.
Examples:
# Development server with auto-reload
nodetool serve --reload --verbose
# Production server with static assets
nodetool serve --production --static-folder ./web/dist --host 0.0.0.0 --port 7777
# Development with remote auth
nodetool serve --remote-auth --verbose
nodetool run
Executes a workflow by ID, file path, or JSON payload. Supports multiple modes: interactive, JSONL for automation, and stdin for programmatic execution.
Arguments:
- `WORKFLOW` (optional): workflow ID or path to a workflow JSON file.
Options:
- `--jsonl`: output raw JSONL (JSON Lines) instead of pretty-printed output. Each line is a valid JSON object representing workflow progress; useful for subprocess/automation integration.
- `--stdin`: read an entire `RunJobRequest` JSON from stdin instead of from an argument or interactive prompt.
- `--user-id` (default `1`): user ID for the workflow execution context.
- `--auth-token` (default `local_token`): authentication token for workflow execution.
- `--verbose`/`-v`: enable DEBUG-level logging.
Examples:
# Interactive mode: Run workflow by ID
nodetool run workflow_abc123
# Interactive mode: Run workflow from file
nodetool run ./my_workflow.json
# JSONL mode: Run with JSON output for parsing
nodetool run workflow_abc123 --jsonl
# Stdin mode: Run from piped JSON
cat run_request.json | nodetool run --stdin
# With custom user and auth token
nodetool run workflow_abc123 --user-id user123 --auth-token sk-token
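Because `--jsonl` emits one self-contained JSON object per stdout line, the stream is easy to consume from another process. A minimal sketch (the event fields in the sample are illustrative; inspect your actual output for the real keys):

```python
import json

def iter_events(lines):
    """Parse JSONL workflow-progress lines into dicts, skipping blank lines."""
    for line in lines:
        line = line.strip()
        if line:
            yield json.loads(line)

# Feeding the generator from a subprocess requires nodetool installed:
#   import subprocess
#   proc = subprocess.Popen(
#       ["nodetool", "run", "workflow_abc123", "--jsonl"],
#       stdout=subprocess.PIPE, text=True,
#   )
#   for event in iter_events(proc.stdout):
#       print(event)

# The parser itself works on any JSONL stream:
sample = ['{"type": "progress", "step": 1}', "", '{"type": "done"}']
events = list(iter_events(sample))
```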
nodetool worker
Starts a deployable worker process with OpenAI-compatible endpoints. This is used for running NodeTool as a backend service with chat/completion API support.
Options:
- `--host` (default `0.0.0.0`): bind address (listen on all interfaces for deployments).
- `--port` (default `7777`): listen port.
- `--remote-auth`: require Supabase-backed authentication.
- `--default-model` (default `gpt-oss:20b`): fallback model when the client doesn't specify one.
- `--provider` (default `ollama`): provider for the default model (e.g., `openai`, `anthropic`, `ollama`).
- `--tools`: comma-separated list of tools to enable (e.g., `google_search,browser`).
- `--workflow`: one or more workflow JSON files to register with the worker.
- `--verbose`/`-v`: enable DEBUG-level logging.
Examples:
# Basic worker with default Ollama model
nodetool worker
# Worker with custom model and tools
nodetool worker --default-model gpt-4 --provider openai --tools google_search,browser
# Worker with custom workflows
nodetool worker --workflow workflow1.json --workflow workflow2.json --host 0.0.0.0 --port 8080
# Deployable worker with auth
nodetool worker --remote-auth --host 0.0.0.0 --port 7777
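Since the worker exposes OpenAI-compatible endpoints, any OpenAI-style client can talk to it. A minimal standard-library sketch of the request shape (the `/v1/chat/completions` path is the conventional OpenAI route; verify it against your worker's actual URL layout):

```python
import json
from urllib import request

def build_chat_request(base_url, model, user_message):
    """Build an OpenAI-style chat-completion request aimed at the worker."""
    body = {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }
    return request.Request(
        url=base_url.rstrip("/") + "/v1/chat/completions",
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("http://localhost:7777", "gpt-oss:20b", "Hello")
# Actually sending it requires a running worker:
#   with request.urlopen(req) as resp:
#       print(json.load(resp))
```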
Chat Client
nodetool chat-client
Interactive or non-interactive client for connecting to the OpenAI API, a local NodeTool chat server, or a RunPod serverless endpoint. Supports streaming responses and multi-turn conversations.
Options:
- `--server-url`: override the default OpenAI URL to point to a local chat server or custom endpoint.
- `--runpod-endpoint`: convenience shortcut for RunPod serverless endpoint IDs (e.g., `abc123xyz`).
- `--auth-token`: HTTP authentication token for the server (falls back to the `RUNPOD_API_KEY` environment variable).
- `--message`: send a single message in non-interactive mode (no conversation loop).
- `--model` (default `gpt-4o-mini` for OpenAI): model to use (e.g., `gpt-4o`, `gpt-oss:20b`).
- `--provider`: AI provider when connecting to a local server (e.g., `openai`, `anthropic`, `ollama`).
- `--verbose`/`-v`: enable DEBUG-level logging.
Examples:
# Interactive client with OpenAI
nodetool chat-client
# Single message to OpenAI
nodetool chat-client --message "What is the capital of France?"
# Connect to local chat server
nodetool chat-client --server-url http://localhost:8080 --provider ollama --model gpt-oss:20b
# Connect to RunPod endpoint
nodetool chat-client --runpod-endpoint abc123xyz --auth-token $RUNPOD_API_KEY
# Interactive session with custom model
nodetool chat-client --server-url http://localhost:8080 --model claude-3-opus --provider anthropic
Developer Tools
nodetool mcp
Starts the NodeTool Model Context Protocol (MCP) server implementation. This enables IDE integrations (e.g., Claude Code, other MCP-compatible IDEs) to access NodeTool workflows and capabilities.
See also: Model Context Protocol
Examples:
# Start MCP server (typically auto-started by IDEs)
nodetool mcp
nodetool test-runpod
Runs an automated health and inference check against a RunPod endpoint.
Options:
- `--endpoint-id` (required): RunPod serverless endpoint ID.
- `--params`: JSON file with request parameters.
- `--timeout`: request timeout in seconds (default 60).
- `--output`: write JSON results to a file.
- `--verbose`/`-v`: enable DEBUG logs.
Examples:
nodetool test-runpod --endpoint-id YOUR_ENDPOINT_ID
nodetool test-runpod --endpoint-id YOUR_ENDPOINT_ID --params examples/test_params_basic.json --timeout 30
nodetool codegen
Regenerates DSL (Domain-Specific Language) modules from node definitions. Scans node packages and generates Python code for type-safe workflow creation.
Behavior: Completely wipes and recreates the corresponding `src/nodetool/dsl/<namespace>/` directories before writing generated files.
Examples:
# Regenerate all DSL modules
nodetool codegen
# Verbose output
nodetool codegen --verbose
Secrets Management
nodetool secrets
Manage encrypted secrets stored in the database with per-user encryption.
Subcommands:
- `secrets list`: list stored secret metadata without revealing values.
- `secrets store`: securely store or update a secret value.
nodetool secrets list
List all stored secrets for a user (values not displayed).
Options:
- `--user-id`/`-u` (default `1`): user ID to list secrets for.
- `--limit` (default `100`): maximum number of secrets to return.
Example:
nodetool secrets list --user-id user123
nodetool secrets store
Interactively store or update an encrypted secret. Prompts securely for the secret value (input masked).
Options:
- `--user-id`/`-u` (default `1`): user ID that owns the secret.
- `--description`/`-d`: optional description for the secret.
- `--force`: store without requiring confirmation of the value.
Example:
nodetool secrets store OPENAI_API_KEY --description "My OpenAI API key"
nodetool secrets store HUGGINGFACE_TOKEN --user-id user123 --force
See also: Secret Storage and Master Key
Settings & Packages
nodetool settings
Commands for viewing and editing configuration settings and secrets files.
Subcommands:
- `settings show`: display the current settings table (reads `settings.yaml` or `secrets.yaml`).
- `settings edit [--secrets]`: open the editable YAML file in `$EDITOR`.
nodetool settings show
Display all configured settings or secrets.
Options:
- `--secrets`: show secrets instead of settings.
Example:
nodetool settings show
nodetool settings show --secrets
nodetool settings edit
Open the settings or secrets file in your configured editor ($EDITOR environment variable).
Options:
- `--secrets`: edit `secrets.yaml` instead of `settings.yaml`.
Example:
nodetool settings edit
nodetool settings edit --secrets
nodetool package
Commands for managing NodeTool packages and generating documentation.
Subcommands:
- `package list`: show installed packages.
- `package list --available`: show packages available in the registry.
- `package scan`: discover nodes and update package metadata.
- `package init`: scaffold a new NodeTool package.
- `package docs`: generate Markdown documentation for package nodes.
nodetool package list
List installed packages or available packages from the registry.
Options:
- `--available`/`-a`: list available packages from the registry instead of installed packages.
Example:
nodetool package list
nodetool package list --available
nodetool package scan
Discover nodes in the current project and create/update package metadata.
Options:
- `--verbose`/`-v`: enable verbose output during scanning.
Example:
nodetool package scan --verbose
nodetool package init
Scaffold a new NodeTool package with pyproject.toml and metadata folder structure.
Example:
nodetool package init
nodetool package docs
Generate Markdown documentation for all nodes in the package.
Options:
- `--output-dir` (default `docs`): directory where documentation will be generated.
- `--compact`: generate compact documentation suitable for LLM input.
- `--verbose`/`-v`: enable verbose output.
Example:
nodetool package docs --output-dir ./docs
nodetool package docs --compact --output-dir ./llm-docs
See the Package Registry Guide for publishing and metadata details.
Administration & Deployment
nodetool admin
Maintenance utilities for model assets and caches. Manage HuggingFace and Ollama model downloads, cache inspection, and cleanup.
Subcommands:
- `admin download-hf`: download HuggingFace models locally or via a remote server.
- `admin download-ollama`: pre-pull Ollama model blobs.
- `admin scan-cache`: inspect cache usage and statistics.
- `admin delete-hf`: remove cached HuggingFace repositories.
- `admin cache-size`: report aggregate cache sizes.
nodetool admin download-hf
Download a HuggingFace model for local use or via a remote server.
Options:
- `--repo-id` (required): HuggingFace repository ID (e.g., `meta-llama/Llama-2-7b-hf`).
- `--file-path`: specific file path within the repo to download.
- `--server-url`: download via a remote server instead of locally.
- `--ignore-patterns`: glob patterns to exclude from the download.
- `--allow-patterns`: glob patterns to include in the download.
Example:
nodetool admin download-hf --repo-id meta-llama/Llama-2-7b-hf
nodetool admin download-hf --repo-id mistralai/Mistral-7B --server-url http://remote.server:7777
nodetool admin download-ollama
Pre-pull an Ollama model blob locally or via remote server.
Options:
- `--model-name` (required): Ollama model name (e.g., `llama2`, `mistral:latest`).
- `--server-url`: download via a remote server instead of locally.
Example:
nodetool admin download-ollama --model-name llama2
nodetool admin download-ollama --model-name mistral:latest --server-url http://remote.server:7777
nodetool admin scan-cache
Inspect cache directories and display usage statistics.
Options:
- `--server-url`: scan a remote server's cache instead of the local one.
Example:
nodetool admin scan-cache
nodetool admin scan-cache --server-url http://remote.server:7777
nodetool admin delete-hf
Remove a cached HuggingFace repository from local disk or remote server.
Options:
- `--repo-id` (required): repository to delete.
- `--server-url`: delete from a remote server instead of locally.
Example:
nodetool admin delete-hf --repo-id meta-llama/Llama-2-7b-hf
nodetool admin cache-size
Report aggregate cache sizes for HuggingFace and Ollama models.
Options:
- `--cache-dir`: custom cache directory path.
- `--server-url`: get remote server cache sizes.
- `--api-key`: API key for remote server authentication.
Example:
nodetool admin cache-size
nodetool admin cache-size --server-url http://remote.server:7777
nodetool deploy
Controls deployments described in deployment.yaml. Manage cloud and self-hosted deployments (RunPod, Google Cloud Run, self-hosted Docker, etc.).
Subcommands:
- `deploy init`: create a new deployment configuration.
- `deploy show`: display deployment details.
- `deploy add`: interactively add a new deployment.
- `deploy edit`: interactively edit a deployment.
- `deploy list`: list all configured deployments.
- `deploy plan`: preview pending deployment changes.
- `deploy apply`: apply deployment configuration to the target environment.
- `deploy status`: query deployment status.
- `deploy logs`: stream deployment logs.
- `deploy destroy`: tear down a deployment.
- `deploy workflows`: manage workflows on deployed instances.
- `deploy collections`: manage vector database collections on deployed instances.
nodetool deploy init
Create a new deployment.yaml configuration file.
Example:
nodetool deploy init
nodetool deploy show
Display detailed information about a deployment.
Arguments:
- `NAME`: deployment name.
Example:
nodetool deploy show my-runpod-deployment
nodetool deploy add
Interactively add a new deployment configuration.
Arguments:
- `NAME`: name for the deployment.
- `TYPE`: deployment type (e.g., `runpod`, `google-cloud-run`, `self-hosted`).
Example:
nodetool deploy add my-deployment runpod
nodetool deploy add prod-gcp google-cloud-run
nodetool deploy edit
Interactively edit an existing deployment configuration.
Arguments:
- `NAME` (optional): deployment to edit. If omitted, prompts for selection.
Example:
nodetool deploy edit my-deployment
nodetool deploy edit # Interactive selection
nodetool deploy list
List all configured deployments with their types and statuses.
Example:
nodetool deploy list
nodetool deploy plan
Preview pending deployment changes without applying them.
Arguments:
- `NAME`: deployment to plan.
Example:
nodetool deploy plan my-deployment
nodetool deploy apply
Apply deployment configuration to the target environment (create/update resources).
Arguments:
- `NAME`: deployment to apply.
Options:
- `--dry-run`: preview changes without applying them.
Example:
nodetool deploy apply my-deployment
nodetool deploy apply my-deployment --dry-run
nodetool deploy status
Query the current status of a deployment.
Arguments:
- `NAME`: deployment name.
Example:
nodetool deploy status my-deployment
nodetool deploy logs
Stream logs from a deployment.
Arguments:
- `NAME`: deployment name.
Options:
- `--service`: specific service to get logs from.
- `--follow`/`-f`: follow logs in real time.
- `--tail` (default `100`): number of previous lines to show.
Example:
nodetool deploy logs my-deployment --follow
nodetool deploy logs my-deployment --service worker --tail 50
nodetool deploy destroy
Tear down a deployment and delete resources.
Arguments:
- `NAME`: deployment to destroy.
Options:
- `--force`/`-f`: skip the confirmation prompt.
Example:
nodetool deploy destroy my-deployment
nodetool deploy destroy my-deployment --force
nodetool deploy workflows
Manage workflows on deployed instances.
Subcommands:
- `deploy workflows sync`: sync a workflow to a deployed instance.
- `deploy workflows list`: list workflows on a deployed instance.
- `deploy workflows delete`: delete a workflow from a deployed instance.
- `deploy workflows run`: run a workflow on a deployed instance.
Subcommand: deploy workflows sync
Sync a local workflow to a deployed instance, including models and assets.
Arguments:
- `DEPLOYMENT_NAME`: deployment to sync to.
- `WORKFLOW_ID`: workflow ID to sync.
Behavior: Downloads referenced models (HuggingFace, Ollama) and syncs assets automatically.
Example:
nodetool deploy workflows sync my-deployment workflow_abc123
Subcommand: deploy workflows list
List all workflows on a deployed instance.
Arguments:
- `DEPLOYMENT_NAME`: deployment name.
Example:
nodetool deploy workflows list my-deployment
Subcommand: deploy workflows delete
Delete a workflow from a deployed instance.
Arguments:
- `DEPLOYMENT_NAME`: deployment name.
- `WORKFLOW_ID`: workflow ID to delete.
Options:
- `--force`/`-f`: skip confirmation.
Example:
nodetool deploy workflows delete my-deployment workflow_abc123
nodetool deploy workflows delete my-deployment workflow_abc123 --force
Subcommand: deploy workflows run
Run a workflow on a deployed instance with custom parameters.
Arguments:
- `DEPLOYMENT_NAME`: deployment name.
- `WORKFLOW_ID`: workflow ID to run.
- `PARAMS`: workflow parameters as `key=value` pairs.
Example:
nodetool deploy workflows run my-deployment workflow_abc123 prompt="Hello" model="gpt-4"
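The trailing `key=value` pairs map naturally onto a parameter dict. A minimal sketch of that shape (illustrative only, not the CLI's actual parser):

```python
def parse_params(pairs):
    """Split CLI-style key=value strings into a parameter dict."""
    params = {}
    for pair in pairs:
        key, sep, value = pair.partition("=")
        if not sep:
            raise ValueError(f"expected key=value, got {pair!r}")
        params[key] = value
    return params

params = parse_params(["prompt=Hello", "model=gpt-4"])
```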
nodetool deploy collections
Manage vector database collections on deployed instances.
Subcommands:
- `deploy collections sync`: sync a local collection to a deployed instance.
Subcommand: deploy collections sync
Sync a local ChromaDB collection to a deployed instance.
Arguments:
- `DEPLOYMENT_NAME`: deployment name.
- `COLLECTION_NAME`: collection name to sync.
Behavior: Creates collection on remote if needed and syncs all documents, embeddings, and metadata.
Example:
nodetool deploy collections sync my-deployment my-documents
nodetool sync
Synchronize database entries with a remote NodeTool server. Push local workflows and data to remote deployments.
Subcommands:
- `sync workflow`: sync a workflow to a remote server.
nodetool sync workflow
Push a local workflow to a remote NodeTool server.
Options:
- `--id` (required): workflow ID to sync.
- `--server-url` (required): remote server base URL (e.g., `http://localhost:7777`).
Examples:
nodetool sync workflow --id workflow_abc123 --server-url http://remote.server:7777
nodetool sync workflow --id workflow_abc123 --server-url https://api.example.com
Proxy Utilities
The proxy commands manage the Docker-aware reverse proxy used in self-hosted setups. The proxy handles container lifecycle (start on demand, stop after idle timeout) and TLS/ACME certificate management.
nodetool proxy
Start the async Docker reverse proxy server.
Options:
- `--config` (required): path to the proxy configuration YAML file.
- `--host` (default `0.0.0.0`): host to bind to.
- `--port` (default `443`): port to bind to.
- `--no-tls`: disable TLS and serve HTTP only.
- `--verbose`/`-v`: enable DEBUG-level logging.
Behavior: Routes HTTP requests to Docker containers, starting them on demand and stopping them after an idle timeout. Supports Let's Encrypt ACME for automatic TLS certificate management.
Examples:
# Start proxy with HTTPS on port 443
nodetool proxy --config /etc/proxy/config.yaml
# Start proxy on HTTP port 8080
nodetool proxy --config /etc/proxy/config.yaml --port 8080 --no-tls
# Start with verbose logging
nodetool proxy --config /etc/proxy/config.yaml --verbose
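The on-demand lifecycle described above reduces to an idle-timeout check per container. A sketch of that decision logic (a hypothetical helper, not the proxy's actual implementation):

```python
import time

def should_stop(last_access, idle_timeout_s, now=None):
    """Return True when a container has been idle longer than its timeout."""
    now = time.monotonic() if now is None else now
    return (now - last_access) > idle_timeout_s

# A container last touched 600s ago with a 300s timeout is stopped;
# one touched 10s ago is kept running.
stop_old = should_stop(last_access=0, idle_timeout_s=300, now=600)
keep_new = should_stop(last_access=590, idle_timeout_s=300, now=600)
```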
nodetool proxy-daemon
Run the FastAPI proxy with ACME HTTP and HTTPS listeners concurrently. Designed for use as a background service.
Options:
- `--config` (required): path to the proxy configuration YAML file.
- `--verbose`/`-v`: enable DEBUG-level logging.
Examples:
nodetool proxy-daemon --config /etc/proxy/config.yaml
nodetool proxy-daemon --config /etc/proxy/config.yaml --verbose
nodetool proxy-status
Check the status and health of running proxy services.
Options:
- `--config` (required): path to the proxy configuration YAML file.
- `--server-url` (default `http://localhost/status`): proxy status endpoint URL.
- `--bearer-token`: authentication token (defaults to the config value).
Display: Shows a table of all managed services with status (running / stopped / not created) and last access time.
Examples:
# Check local proxy status
nodetool proxy-status --config /etc/proxy/config.yaml
# Check remote proxy status
nodetool proxy-status --config /etc/proxy/config.yaml \
--server-url https://proxy.example.com/status \
--bearer-token MY_TOKEN
nodetool proxy-validate-config
Validate proxy configuration file for errors before deployment.
Options:
- `--config` (required): path to the proxy configuration YAML file.
Behavior: Loads and validates the configuration, checking service definitions and global settings. Displays all configured services in a table.
Examples:
nodetool proxy-validate-config --config /etc/proxy/config.yaml
Utility Commands
nodetool list-gcp-options
List available Google Cloud Run configuration options for deployments.
Display: Shows available regions, CPU options, memory options, and Docker registry options for GCP deployments.
Example:
nodetool list-gcp-options
Tips
- Commands that contact remote services load `.env` files automatically via `python-dotenv`. Ensure required environment variables are present.
- Use `--verbose`/`-v` where available to enable DEBUG-level logging for troubleshooting.
- For deployment operations, ensure Docker is installed and configured with appropriate registry credentials.
- Configuration files (`deployment.yaml`, proxy configs) support environment variable substitution (e.g., `${ENV_VAR_NAME}`).
- See Environment Variables Index for a complete list of configurable variables.
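The `${ENV_VAR_NAME}` placeholder form matches Python's `string.Template` syntax, so the substitution step can be sketched as follows (illustrative; the CLI's own expansion rules, e.g. for missing variables, may differ):

```python
import os
from string import Template

def expand_env(text, env=None):
    """Replace ${VAR} placeholders in config text with environment values."""
    mapping = env if env is not None else os.environ
    return Template(text).safe_substitute(mapping)

expanded = expand_env("host: ${DB_HOST}", {"DB_HOST": "db.internal"})
```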