The nodetool CLI manages local development workflows, servers, deployments, and admin tooling. Install the project and run nodetool --help (or python -m nodetool.cli --help) to see the top-level command list. Every sub-command exposes its own --help flag with detailed usage.

Getting Help

  • nodetool --help β€” list all top-level commands and groups.
  • nodetool <command> --help β€” show command-specific options (e.g. nodetool serve --help).
  • nodetool <group> --help β€” list sub-commands for grouped tooling (e.g. nodetool deploy --help).

Core Runtime Commands

nodetool agent

Runs an autonomous AI agent from start to finish using a YAML configuration file. Agents use the planning agent architecture to break down tasks, execute them iteratively, and achieve goals through tool usage.

Arguments:

  • --config FILE (required) β€” Path to agent YAML configuration file.
  • --prompt TEXT β€” Inline prompt for the agent to execute.
  • --prompt-file FILE β€” Load prompt from a text file.
  • --interactive / -i β€” Start interactive session with the agent.

Options:

  • --workspace DIR β€” Override workspace directory from config.
  • --max-iterations N β€” Override maximum planning iterations from config.
  • --output FILE β€” Save agent output to file.
  • --jsonl β€” Output in JSONL format for automation.
  • --verbose / -v β€” Enable DEBUG-level logging.

Examples:

# Run agent with inline prompt
nodetool agent --config research-agent.yaml --prompt "Research AI trends"

# Run agent with prompt from file
nodetool agent --config code-assistant.yaml --prompt-file task.txt

# Interactive mode for multi-turn conversations
nodetool agent --config content-creator.yaml --interactive

# Save output to file
nodetool agent --config agent.yaml --prompt "Task" --output result.txt

# JSONL output for automation
nodetool agent --config agent.yaml --prompt "Task" --jsonl

Agent Configuration:

Agents are configured via YAML files that specify:

  • System prompt: Instructions defining agent behavior
  • Model: Primary AI model (provider and model ID)
  • Planning agent: Always enabled, coordinates task execution
  • Tools: Available capabilities (search, code execution, file operations)
  • Parameters: Token limits, temperature, iteration limits
  • Workspace: Sandboxed directory for file operations

See the Agent CLI Documentation for the complete configuration reference and examples/agents/ for sample configurations.
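
To make the configuration shape concrete, the sketch below writes a hypothetical config from Python and then invokes the documented CLI. The YAML keys used here (system_prompt, model, tools, max_iterations, workspace) are illustrative assumptions, not the official schema; only the nodetool agent flags (--config, --prompt, --output) come from this page, so copy a file from examples/agents/ for the real field names.

# Hypothetical sketch: generate an agent config and run it via the CLI.
# The YAML keys below are placeholders, NOT the documented schema.
import subprocess
import yaml  # pip install pyyaml

config = {
    "system_prompt": "You are a careful research assistant.",   # assumed key
    "model": {"provider": "openai", "id": "gpt-4o-mini"},        # assumed keys
    "tools": ["google_search", "browser"],                       # assumed key
    "max_iterations": 10,                                        # assumed key
    "workspace": "./workspace",                                  # assumed key
}

with open("my-agent.yaml", "w") as f:
    yaml.safe_dump(config, f)

# These flags are documented above: --config, --prompt, --output.
subprocess.run(
    ["nodetool", "agent",
     "--config", "my-agent.yaml",
     "--prompt", "Research AI trends",
     "--output", "result.txt"],
    check=True,
)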

nodetool serve

Runs the FastAPI backend server for the NodeTool platform. This serves the REST API, WebSocket endpoints, and optionally static assets or app bundles.

Options:

  • --host (default 127.0.0.1) β€” bind address (use 0.0.0.0 for all interfaces).
  • --port (default 7777) β€” listen port.
  • --static-folder β€” path to folder containing static web assets (e.g., compiled React UI).
  • --force-fp16 β€” force FP16 precision for ComfyUI integrations if available (GPU optimization).
  • --reload β€” enable auto-reload on file changes (development only).
  • --production β€” enable production mode (stricter validation, optimizations).
  • --remote-auth β€” enable remote authentication (Supabase-backed auth).
  • --verbose / -v β€” enable DEBUG-level logging for troubleshooting.

Examples:

# Development server with auto-reload
nodetool serve --reload --verbose

# Production server with static assets
nodetool serve --production --static-folder ./web/dist --host 0.0.0.0 --port 7777

# Development with remote auth
nodetool serve --remote-auth --verbose
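
As a quick sanity check after starting the server, the snippet below fetches the OpenAPI schema that FastAPI apps expose by default. This is a sketch under the assumption that the default /openapi.json route is enabled in your build; if it is not, substitute any known API route.

# Smoke test for a server started with `nodetool serve` (defaults: 127.0.0.1:7777).
# Assumes the default FastAPI /openapi.json route is enabled.
import requests  # pip install requests

BASE = "http://127.0.0.1:7777"

resp = requests.get(f"{BASE}/openapi.json", timeout=5)
resp.raise_for_status()
schema = resp.json()
print("API title:", schema.get("info", {}).get("title"))
print("Number of paths:", len(schema.get("paths", {})))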

nodetool run

Executes a workflow by ID, file path, or JSON payload. Supports multiple modes: interactive, JSONL for automation, and stdin for programmatic execution.

Arguments:

  • WORKFLOW (optional) β€” workflow ID or path to workflow JSON file.

Options:

  • --jsonl — output raw JSONL (JSON Lines) format instead of pretty-printed output. Each line is a valid JSON object representing workflow progress. Useful for subprocess/automation integration; a Python consumer sketch follows the examples below.
  • --stdin β€” read an entire RunJobRequest JSON from stdin instead of from argument or interactive prompt.
  • --user-id (default 1) β€” user ID for workflow execution context.
  • --auth-token (default local_token) β€” authentication token for workflow execution.
  • --verbose / -v β€” enable DEBUG-level logging.

Examples:

# Interactive mode: Run workflow by ID
nodetool run workflow_abc123

# Interactive mode: Run workflow from file
nodetool run ./my_workflow.json

# JSONL mode: Run with JSON output for parsing
nodetool run workflow_abc123 --jsonl

# Stdin mode: Run from piped JSON
cat run_request.json | nodetool run --stdin

# With custom user and auth token
nodetool run workflow_abc123 --user-id user123 --auth-token sk-token
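
For automation, the JSONL mode is easiest to consume from a parent process. The sketch below shells out to the documented command and parses each progress line as JSON; the event schema is not specified on this page, so it simply prints whatever arrives, and workflow_abc123 is a placeholder ID.

# Consume JSONL progress events from `nodetool run --jsonl`.
import json
import subprocess

proc = subprocess.Popen(
    ["nodetool", "run", "workflow_abc123", "--jsonl"],
    stdout=subprocess.PIPE,
    text=True,
)

for line in proc.stdout:
    line = line.strip()
    if not line:
        continue
    event = json.loads(line)   # each line is one JSON object describing progress
    print(event)

proc.wait()
print("exit code:", proc.returncode)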

nodetool worker

Starts a deployable worker process with OpenAI-compatible endpoints. This is used for running NodeTool as a backend service with chat/completion API support.

Options:

  • --host (default 0.0.0.0) β€” bind address (listen on all interfaces for deployments).
  • --port (default 7777) β€” listen port.
  • --remote-auth β€” require Supabase-backed authentication.
  • --default-model (default gpt-oss:20b) β€” fallback model when client doesn’t specify one.
  • --provider (default ollama) β€” provider for the default model (e.g., openai, anthropic, ollama).
  • --tools β€” comma-separated list of tools to enable (e.g., google_search,browser).
  • --workflow β€” one or more workflow JSON files to register with the worker.
  • --verbose / -v β€” enable DEBUG-level logging.

Examples:

# Basic worker with default Ollama model
nodetool worker

# Worker with custom model and tools
nodetool worker --default-model gpt-4 --provider openai --tools google_search,browser

# Worker with custom workflows
nodetool worker --workflow workflow1.json --workflow workflow2.json --host 0.0.0.0 --port 8080

# Deployable worker with auth
nodetool worker --remote-auth --host 0.0.0.0 --port 7777
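
Because the worker speaks an OpenAI-compatible protocol, any standard OpenAI client can talk to it. The sketch below uses the official openai Python package; the /v1 path prefix and the local_token value are assumptions, so adjust base_url and the API key to match your deployment, and pick a model consistent with --default-model.

# Chat with a running worker through its OpenAI-compatible endpoint.
from openai import OpenAI  # pip install openai

client = OpenAI(
    base_url="http://localhost:7777/v1",  # assumed path prefix; adjust if needed
    api_key="local_token",                # placeholder; use a real token with --remote-auth
)

resp = client.chat.completions.create(
    model="gpt-oss:20b",  # should match the worker's --default-model
    messages=[{"role": "user", "content": "Summarize what NodeTool workers do."}],
)
print(resp.choices[0].message.content)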

Chat Client

nodetool chat-client

Interactive or non-interactive client for connecting to the OpenAI API, a local NodeTool chat server, or a RunPod serverless endpoint. Supports streaming responses and multi-turn conversations.

Options:

  • --server-url β€” override default OpenAI URL to point to a local chat server or custom endpoint.
  • --runpod-endpoint β€” convenience shortcut for RunPod serverless endpoint IDs (e.g., abc123xyz).
  • --auth-token β€” HTTP authentication token for server (falls back to RUNPOD_API_KEY environment variable).
  • --message β€” send a single message in non-interactive mode (no conversation loop).
  • --model (default gpt-4o-mini for OpenAI) β€” model to use (e.g., gpt-4o, gpt-oss:20b).
  • --provider β€” AI provider when connecting to local server (e.g., openai, anthropic, ollama).
  • --verbose / -v β€” enable DEBUG-level logging.

Examples:

# Interactive client with OpenAI
nodetool chat-client

# Single message to OpenAI
nodetool chat-client --message "What is the capital of France?"

# Connect to local chat server
nodetool chat-client --server-url http://localhost:8080 --provider ollama --model gpt-oss:20b

# Connect to RunPod endpoint
nodetool chat-client --runpod-endpoint abc123xyz --auth-token $RUNPOD_API_KEY

# Interactive session with custom model
nodetool chat-client --server-url http://localhost:8080 --model claude-3-opus --provider anthropic

Developer Tools

nodetool mcp

Starts the NodeTool Model Context Protocol (MCP) server. This lets MCP-compatible clients and IDE integrations (e.g., Claude Code) access NodeTool workflows and capabilities.

See also: Model Context Protocol

Examples:

# Start MCP server (typically auto-started by IDEs)
nodetool mcp

nodetool test-runpod

Runs an automated health and inference check against a RunPod endpoint.

Options:

  • --endpoint-id (required) β€” RunPod serverless endpoint ID.
  • --params β€” JSON file with request parameters.
  • --timeout β€” request timeout in seconds (default 60).
  • --output β€” write JSON results to a file.
  • --verbose / -v β€” enable DEBUG logs.

Examples:

nodetool test-runpod --endpoint-id YOUR_ENDPOINT_ID
nodetool test-runpod --endpoint-id YOUR_ENDPOINT_ID --params examples/test_params_basic.json --timeout 30

nodetool codegen

Regenerates DSL (Domain-Specific Language) modules from node definitions. Scans node packages and generates Python code for type-safe workflow creation.

Behavior: Completely wipes and recreates corresponding src/nodetool/dsl/<namespace>/ directories before writing generated files.

Examples:

# Regenerate all DSL modules
nodetool codegen

# Verbose output
nodetool codegen --verbose

Secrets Management

nodetool secrets

Manage encrypted secrets stored in the database with per-user encryption.

Subcommands:

  • secrets list β€” list stored secret metadata without revealing values.
  • secrets store β€” securely store or update a secret value.

nodetool secrets list

List all stored secrets for a user (values not displayed).

Options:

  • --user-id / -u (default 1) β€” user ID to list secrets for.
  • --limit (default 100) β€” maximum number of secrets to return.

Example:

nodetool secrets list --user-id user123

nodetool secrets store

Interactively store or update an encrypted secret. Prompts securely for the secret value (input masked).

Arguments:

  • KEY — name of the secret to store (e.g., OPENAI_API_KEY).

Options:

  • --user-id / -u (default 1) β€” user ID that owns the secret.
  • --description / -d β€” optional description for the secret.
  • --force β€” store without requiring confirmation of the value.

Example:

nodetool secrets store OPENAI_API_KEY --description "My OpenAI API key"
nodetool secrets store HUGGINGFACE_TOKEN --user-id user123 --force

See also: Secret Storage and Master Key

Settings & Packages

nodetool settings

Commands for viewing and editing configuration settings and secrets files.

Subcommands:

  • settings show β€” display the current settings table (reads settings.yaml or secrets.yaml).
  • settings edit [--secrets] β€” open editable YAML file in $EDITOR.

nodetool settings show

Display all configured settings or secrets.

Options:

  • --secrets β€” show secrets instead of settings.

Example:

nodetool settings show
nodetool settings show --secrets

nodetool settings edit

Open the settings or secrets file in your configured editor ($EDITOR environment variable).

Options:

  • --secrets β€” edit secrets.yaml instead of settings.yaml.

Example:

nodetool settings edit
nodetool settings edit --secrets

nodetool package

Commands for managing NodeTool packages and generating documentation.

Subcommands:

  • package list β€” show installed packages.
  • package list --available β€” show packages available in registry.
  • package scan β€” discover nodes and update package metadata.
  • package init β€” scaffold a new NodeTool package.
  • package docs β€” generate Markdown documentation for package nodes.

nodetool package list

List installed packages or available packages from the registry.

Options:

  • --available / -a β€” list available packages from registry instead of installed packages.

Example:

nodetool package list
nodetool package list --available

nodetool package scan

Discover nodes in the current project and create/update package metadata.

Options:

  • --verbose / -v β€” enable verbose output during scanning.

Example:

nodetool package scan --verbose

nodetool package init

Scaffold a new NodeTool package with pyproject.toml and metadata folder structure.

Example:

nodetool package init

nodetool package docs

Generate Markdown documentation for all nodes in the package.

Options:

  • --output-dir (default docs) β€” directory where documentation will be generated.
  • --compact β€” generate compact documentation suitable for LLM input.
  • --verbose / -v β€” enable verbose output.

Example:

nodetool package docs --output-dir ./docs
nodetool package docs --compact --output-dir ./llm-docs

See the Package Registry Guide for publishing and metadata details.

Administration & Deployment

nodetool admin

Maintenance utilities for model assets and caches. Manage HuggingFace and Ollama model downloads, cache inspection, and cleanup.

Subcommands:

  • admin download-hf β€” download HuggingFace models locally or via remote server.
  • admin download-ollama β€” pre-pull Ollama model blobs.
  • admin scan-cache β€” inspect cache usage and statistics.
  • admin delete-hf β€” remove cached HuggingFace repositories.
  • admin cache-size β€” report aggregate cache sizes.

nodetool admin download-hf

Download a HuggingFace model for local use or via a remote server.

Options:

  • --repo-id (required) β€” HuggingFace repository ID (e.g., meta-llama/Llama-2-7b-hf).
  • --file-path β€” specific file path within the repo to download.
  • --server-url β€” download via remote server instead of locally.
  • --ignore-patterns β€” glob patterns to exclude from download.
  • --allow-patterns β€” glob patterns to include in download.

Example:

nodetool admin download-hf --repo-id meta-llama/Llama-2-7b-hf
nodetool admin download-hf --repo-id mistralai/Mistral-7B --server-url http://remote.server:7777

nodetool admin download-ollama

Pre-pull an Ollama model blob locally or via remote server.

Options:

  • --model-name (required) β€” Ollama model name (e.g., llama2, mistral:latest).
  • --server-url β€” download via remote server instead of locally.

Example:

nodetool admin download-ollama --model-name llama2
nodetool admin download-ollama --model-name mistral:latest --server-url http://remote.server:7777

nodetool admin scan-cache

Inspect cache directories and display usage statistics.

Options:

  • --server-url β€” scan remote server cache instead of local.

Example:

nodetool admin scan-cache
nodetool admin scan-cache --server-url http://remote.server:7777

nodetool admin delete-hf

Remove a cached HuggingFace repository from local disk or remote server.

Options:

  • --repo-id (required) β€” repository to delete.
  • --server-url β€” delete from remote server instead of locally.

Example:

nodetool admin delete-hf --repo-id meta-llama/Llama-2-7b-hf

nodetool admin cache-size

Report aggregate cache sizes for HuggingFace and Ollama models.

Options:

  • --cache-dir β€” custom cache directory path.
  • --server-url β€” get remote server cache sizes.
  • --api-key β€” API key for remote server authentication.

Example:

nodetool admin cache-size
nodetool admin cache-size --server-url http://remote.server:7777

nodetool deploy

Controls deployments described in deployment.yaml, covering cloud and self-hosted targets (RunPod, Google Cloud Run, self-hosted Docker, etc.).

Subcommands:

  • deploy init β€” create a new deployment configuration.
  • deploy show β€” display deployment details.
  • deploy add β€” interactively add a new deployment.
  • deploy edit β€” interactively edit a deployment.
  • deploy list β€” list all configured deployments.
  • deploy plan β€” preview pending deployment changes.
  • deploy apply β€” apply deployment configuration to target environment.
  • deploy status β€” query deployment status.
  • deploy logs β€” stream deployment logs.
  • deploy destroy β€” tear down a deployment.
  • deploy workflows β€” manage workflows on deployed instances.
  • deploy collections β€” manage vector database collections on deployed instances.

nodetool deploy init

Create a new deployment.yaml configuration file.

Example:

nodetool deploy init

nodetool deploy show

Display detailed information about a deployment.

Arguments:

  • NAME β€” deployment name.

Example:

nodetool deploy show my-runpod-deployment

nodetool deploy add

Interactively add a new deployment configuration.

Arguments:

  • NAME β€” name for the deployment.
  • TYPE β€” deployment type (e.g., runpod, google-cloud-run, self-hosted).

Example:

nodetool deploy add my-deployment runpod
nodetool deploy add prod-gcp google-cloud-run

nodetool deploy edit

Interactively edit an existing deployment configuration.

Arguments:

  • NAME (optional) β€” deployment to edit. If omitted, prompts for selection.

Example:

nodetool deploy edit my-deployment
nodetool deploy edit  # Interactive selection

nodetool deploy list

List all configured deployments with their types and statuses.

Example:

nodetool deploy list

nodetool deploy plan

Preview pending deployment changes without applying them.

Arguments:

  • NAME β€” deployment to plan.

Example:

nodetool deploy plan my-deployment

nodetool deploy apply

Apply deployment configuration to the target environment (create/update resources).

Arguments:

  • NAME β€” deployment to apply.

Options:

  • --dry-run β€” preview changes without applying.

Example:

nodetool deploy apply my-deployment
nodetool deploy apply my-deployment --dry-run

nodetool deploy status

Query the current status of a deployment.

Arguments:

  • NAME β€” deployment name.

Example:

nodetool deploy status my-deployment

nodetool deploy logs

Stream logs from a deployment.

Arguments:

  • NAME β€” deployment name.

Options:

  • --service β€” specific service to get logs from.
  • --follow / -f β€” follow logs in real-time.
  • --tail (default 100) β€” number of previous lines to show.

Example:

nodetool deploy logs my-deployment --follow
nodetool deploy logs my-deployment --service worker --tail 50

nodetool deploy destroy

Tear down a deployment and delete resources.

Arguments:

  • NAME β€” deployment to destroy.

Options:

  • --force / -f β€” skip confirmation prompt.

Example:

nodetool deploy destroy my-deployment
nodetool deploy destroy my-deployment --force

nodetool deploy workflows

Manage workflows on deployed instances.

Subcommands:

  • deploy workflows sync β€” sync a workflow to a deployed instance.
  • deploy workflows list β€” list workflows on a deployed instance.
  • deploy workflows delete β€” delete a workflow from a deployed instance.
  • deploy workflows run β€” run a workflow on a deployed instance.

Subcommand: deploy workflows sync

Sync a local workflow to a deployed instance, including models and assets.

Arguments:

  • DEPLOYMENT_NAME β€” deployment to sync to.
  • WORKFLOW_ID β€” workflow ID to sync.

Behavior: Downloads referenced models (HuggingFace, Ollama) and syncs assets automatically.

Example:

nodetool deploy workflows sync my-deployment workflow_abc123

Subcommand: deploy workflows list

List all workflows on a deployed instance.

Arguments:

  • DEPLOYMENT_NAME β€” deployment name.

Example:

nodetool deploy workflows list my-deployment

Subcommand: deploy workflows delete

Delete a workflow from a deployed instance.

Arguments:

  • DEPLOYMENT_NAME β€” deployment name.
  • WORKFLOW_ID β€” workflow ID to delete.

Options:

  • --force / -f β€” skip confirmation.

Example:

nodetool deploy workflows delete my-deployment workflow_abc123
nodetool deploy workflows delete my-deployment workflow_abc123 --force

Subcommand: deploy workflows run

Run a workflow on a deployed instance with custom parameters.

Arguments:

  • DEPLOYMENT_NAME β€” deployment name.
  • WORKFLOW_ID β€” workflow ID to run.
  • PARAMS β€” workflow parameters as key=value pairs.

Example:

nodetool deploy workflows run my-deployment workflow_abc123 prompt="Hello" model="gpt-4"

nodetool deploy collections

Manage vector database collections on deployed instances.

Subcommands:

  • deploy collections sync β€” sync a local collection to a deployed instance.

Subcommand: deploy collections sync

Sync a local ChromaDB collection to a deployed instance.

Arguments:

  • DEPLOYMENT_NAME β€” deployment name.
  • COLLECTION_NAME β€” collection name to sync.

Behavior: Creates collection on remote if needed and syncs all documents, embeddings, and metadata.

Example:

nodetool deploy collections sync my-deployment my-documents

nodetool sync

Synchronize database entries with a remote NodeTool server. Push local workflows and data to remote deployments.

Subcommands:

  • sync workflow β€” sync a workflow to a remote server.

nodetool sync workflow

Push a local workflow to a remote NodeTool server.

Options:

  • --id (required) β€” workflow ID to sync.
  • --server-url (required) β€” remote server base URL (e.g., http://localhost:7777).

Examples:

nodetool sync workflow --id workflow_abc123 --server-url http://remote.server:7777
nodetool sync workflow --id workflow_abc123 --server-url https://api.example.com

Proxy Utilities

The proxy commands manage the Docker-aware reverse proxy used in self-hosted setups. The proxy handles container lifecycle (start on demand, stop after idle timeout) and TLS/ACME certificate management.

nodetool proxy

Start the async Docker reverse proxy server.

Options:

  • --config (required) β€” path to proxy configuration YAML file.
  • --host (default 0.0.0.0) β€” host to bind to.
  • --port (default 443) β€” port to bind to.
  • --no-tls β€” disable TLS and serve HTTP only.
  • --verbose / -v β€” enable DEBUG-level logging.

Behavior: Routes HTTP requests to Docker containers, starting them on demand and stopping them after an idle timeout. Supports Let's Encrypt ACME for automatic TLS certificate management.

Examples:

# Start proxy with HTTPS on port 443
nodetool proxy --config /etc/proxy/config.yaml

# Start proxy on HTTP port 8080
nodetool proxy --config /etc/proxy/config.yaml --port 8080 --no-tls

# Start with verbose logging
nodetool proxy --config /etc/proxy/config.yaml --verbose

nodetool proxy-daemon

Run the FastAPI proxy with ACME HTTP and HTTPS listeners concurrently. Designed for use as a background service.

Options:

  • --config (required) β€” path to proxy configuration YAML file.
  • --verbose / -v β€” enable DEBUG-level logging.

Examples:

nodetool proxy-daemon --config /etc/proxy/config.yaml
nodetool proxy-daemon --config /etc/proxy/config.yaml --verbose

nodetool proxy-status

Check the status and health of running proxy services.

Options:

  • --config (required) β€” path to proxy configuration YAML file.
  • --server-url (default http://localhost/status) β€” proxy status endpoint URL.
  • --bearer-token β€” authentication token (defaults to config value).

Display: Shows a table of all managed services with status (running/stopped/not created) and last access time.

Examples:

# Check local proxy status
nodetool proxy-status --config /etc/proxy/config.yaml

# Check remote proxy status
nodetool proxy-status --config /etc/proxy/config.yaml \
  --server-url https://proxy.example.com/status \
  --bearer-token MY_TOKEN
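
If you want the same information programmatically, you can call the status endpoint that proxy-status queries (its --server-url default is http://localhost/status). The response format is not documented on this page, so the sketch below just prints whatever JSON or text comes back; the URL and token values are placeholders.

# Query the proxy status endpoint directly with a bearer token.
import requests  # pip install requests

STATUS_URL = "http://localhost/status"  # or e.g. https://proxy.example.com/status
TOKEN = "MY_TOKEN"                      # bearer token from the proxy configuration

resp = requests.get(
    STATUS_URL,
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=10,
)
resp.raise_for_status()
try:
    print(resp.json())
except ValueError:
    print(resp.text)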

nodetool proxy-validate-config

Validate proxy configuration file for errors before deployment.

Options:

  • --config (required) β€” path to proxy configuration YAML file.

Behavior: Loads and validates the configuration, checking service definitions and global settings. Displays all configured services in a table.

Examples:

nodetool proxy-validate-config --config /etc/proxy/config.yaml

Utility Commands

nodetool list-gcp-options

List available Google Cloud Run configuration options for deployments.

Display: Shows available regions, CPU options, memory options, and Docker registry options for GCP deployments.

Example:

nodetool list-gcp-options

Tips

  • Commands that contact remote services load .env files automatically via python-dotenv. Ensure required environment variables are present.
  • Use --verbose / -v where available to enable DEBUG-level logging for troubleshooting.
  • For deployment operations, ensure Docker is installed and configured with appropriate registry credentials.
  • Configuration files (deployment.yaml, proxy configs) support environment variable substitution (e.g., ${ENV_VAR_NAME}).
  • See Environment Variables Index for a complete list of configurable variables.