NodeTool supports multiple deployment targets driven by a single `deployment.yaml` configuration. The `nodetool deploy` command family builds container images, applies configuration, and manages the lifecycle of remote services across self-hosted servers, RunPod serverless, and Google Cloud Run.

## Quick Reference: What Do You Want to Do?

| I want to… | What you need |
|---|---|
| Run NodeTool on my own server | → Self-Hosted Deployment with proxy |
| Deploy to RunPod for GPU access | → RunPod Deployment with Docker + RunPod API key |
| Deploy to Google Cloud Run | → GCP Deployment with gcloud CLI |
| Use Supabase for auth/storage | → Supabase Integration |
| Set up TLS/HTTPS | → See Self-Hosted or Proxy Reference |
| Configure environment variables | → Deployment Configuration |

## Common Deployment Goals

### I want to deploy NodeTool to my own server

- Set up your configuration → create a `deployment.yaml`:

  ```bash
  nodetool deploy init
  nodetool deploy add my-server --type self-hosted
  ```

- Configure host details → edit `~/.config/nodetool/deployment.yaml` with your host, SSH user, and image settings
- Build and deploy: `nodetool deploy apply my-server`
- Verify: `nodetool deploy status my-server`
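
For orientation, a minimal self-hosted entry might look like the sketch below. The exact field names and nesting are defined by the schema in `src/nodetool/config/deployment.py`, so treat every key here as illustrative rather than authoritative:

```yaml
# Illustrative sketch only – verify field names against
# src/nodetool/config/deployment.py before use.
my-server:
  type: self-hosted
  host: 203.0.113.10
  ssh:
    user: deploy
  image:
    name: nodetool-worker
    tag: latest
```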
See Self-Hosted Deployments for full details.

### I want to run workflows on GPU via RunPod

- Get your RunPod API key from runpod.io
- Set up deployment:

  ```bash
  export RUNPOD_API_KEY="your-key"
  nodetool deploy add my-runpod --type runpod
  ```

- Configure GPU settings in `deployment.yaml` (`gpu_types`, `gpu_count`)
- Deploy: `nodetool deploy apply my-runpod`
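
A minimal sketch of the GPU settings, assuming `gpu_types` and `gpu_count` sit under the `runpod` key as described in RunPod Deployments below (GPU type string and nesting are assumptions; verify against the schema):

```yaml
# Illustrative sketch – GPU type string and nesting are assumptions.
my-runpod:
  type: runpod
  runpod:
    gpu_types: ["NVIDIA GeForce RTX 4090"]
    gpu_count: 1
```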
See RunPod Deployments for full details.

### I want to deploy to Google Cloud

- Authenticate with gcloud: `gcloud auth login`
- Set up deployment: `nodetool deploy add my-gcp --type gcp`
- Configure region, CPU/memory in `deployment.yaml`
- Deploy: `nodetool deploy apply my-gcp`
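
A minimal sketch of the region and resource settings, assuming they sit under the `gcp` key (see the key configuration fields in GCP Deployments below; names and values are illustrative):

```yaml
# Illustrative sketch – verify field names against the GCP schema.
my-gcp:
  type: gcp
  gcp:
    region: us-central1
    cpu: "2"
    memory: 4Gi
```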
See GCP Deployments for full details.

### I want to use Supabase for authentication and storage

- Create a Supabase project at supabase.com
- Create storage buckets (`assets`, `assets-temp`)
- Add to your deployment config:

  ```yaml
  env:
    SUPABASE_URL: https://your-project.supabase.co
    SUPABASE_KEY: your-service-role-key
    AUTH_PROVIDER: supabase
    ASSET_BUCKET: assets
  ```

- Deploy: `nodetool deploy apply <name>`
See Using Supabase for full details.

## Deployment Workflow

- **Initialize configuration**

  ```bash
  nodetool deploy init
  nodetool deploy add <name>
  ```

  These commands scaffold `deployment.yaml` using the schema defined in `src/nodetool/config/deployment.py`. Each entry specifies a `type` (`self-hosted`, `runpod`, or `gcp`), container image details, environment variables, and target-specific options.

- **Review & plan**

  ```bash
  nodetool deploy list
  nodetool deploy show <name>
  nodetool deploy plan <name>
  ```

  Planning validates the configuration, renders the effective proxy files (for self-hosted targets), and shows pending actions without mutating remote resources.

- **Apply & monitor**

  ```bash
  nodetool deploy apply <name>
  nodetool deploy status <name>
  nodetool deploy logs <name> [--follow]
  nodetool deploy destroy <name>
  ```

  `apply` builds/pushes container images, provisions infrastructure, updates proxy configuration, and records deployment state in the local cache (`src/nodetool/deploy/state.py`). Status and logs surface the health of the remote service.

## Deployment Configuration

`deployment.yaml` accepts the following top-level keys (see `SelfHostedDeployment`, `RunPodDeployment`, and `GCPDeployment` in `src/nodetool/config/deployment.py`):

- `type` – target platform (`self-hosted`, `runpod`, `gcp`)
- `image` – container image name/tag/registry
- `paths` – persistent storage paths (self-hosted)
- `container` – port, workflows, GPU configuration (self-hosted)
- `proxy` – proxy services (self-hosted; see the Proxy Reference)
- `runpod` / `gcp` – provider-specific compute, region, scaling, and credential options
- `env` – environment variables injected into the deployed containers

Store secrets (API keys, tokens) in `secrets.yaml` or environment variables; the deployer merges them at runtime without writing them to disk.
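
A hypothetical `secrets.yaml` might hold the provider credentials used elsewhere in this guide; the file's exact shape is not documented here, so treat the flat key-value layout and key names below as placeholders that must match whatever your deployment's `env` references:

```yaml
# Placeholder keys and layout – align these with the names your deployment expects.
RUNPOD_API_KEY: your-runpod-key
SUPABASE_KEY: your-service-role-key
```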

## RunPod Deployments

The RunPod deployer (`src/nodetool/deploy/deploy_to_runpod.py`) builds an AMD64 Docker image, pushes it to your registry, and optionally creates RunPod templates/endpoints through GraphQL.

### Requirements

- Docker (with Buildx for multi-arch builds) and registry credentials
- `RUNPOD_API_KEY` in the environment
- Optional: tuned GPU constraints (`gpu_types`, `gpu_count`, `idle_timeout`, etc.)
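
A quick pre-flight check before the first deploy, using standard Docker commands (the registry host is a placeholder for your own):

```bash
# Confirm Buildx is available, export the API key, and log in to your registry.
docker buildx version
export RUNPOD_API_KEY="your-key"
docker login registry.example.com
```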

### Key configuration fields

- `template_id` / `endpoint_id` – existing resources to update (or leave empty to create)
- `compute_type`, `gpu_types`, `gpu_count` – choose CPU/GPU fleets
- `workers_min` / `workers_max` – autoscaling bounds
- `env` – runtime settings exposed to the container
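
Put together, a RunPod entry could look roughly like the following; values and nesting are illustrative, so verify them against `RunPodDeployment` in `src/nodetool/config/deployment.py`:

```yaml
# Illustrative values – check RunPodDeployment for the exact schema.
my-runpod:
  type: runpod
  runpod:
    compute_type: gpu
    gpu_types: ["NVIDIA GeForce RTX 4090"]
    gpu_count: 1
    workers_min: 0
    workers_max: 2
  env:
    # Placeholder env var – substitute your own runtime settings.
    LOG_LEVEL: info
```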

### CLI shortcuts

- `nodetool deploy apply <name>` – orchestrates build → push → template/endpoint updates
- `nodetool deploy logs <name>` – streams RunPod logs (requires endpoint ID in deployment state)
- `nodetool deploy destroy <name>` – tears down templates/endpoints (leaves images untouched)

## Google Cloud Run Deployments

`src/nodetool/deploy/deploy_to_gcp.py` and `google_cloud_run_api.py` manage the Cloud Run flow:
- Validate gcloud authentication, project, and enabled APIs
- Build/push the container to Artifact Registry or GCR
- Deploy or update the Cloud Run service

### Requirements

- Docker and Google Cloud SDK (`gcloud`) authenticated
- `GOOGLE_CLOUD_PROJECT` (and optionally `GOOGLE_APPLICATION_CREDENTIALS`)
- Enabled services: Cloud Run, Artifact Registry or Container Registry, Cloud Build (if used)
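
These prerequisites can be confirmed with standard `gcloud` commands:

```bash
# Verify the active account and project, and that Cloud Run is enabled.
gcloud auth list
gcloud config get-value project
gcloud services list --enabled | grep run.googleapis.com
```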

### Key configuration fields

- `service_name`, `region`, `registry` – Cloud Run identifiers
- `cpu`, `memory`, `gpu_type`, `gpu_count` – resource allocation
- `min_instances`, `max_instances`, `concurrency`, `timeout` – scaling behavior
- `service_account` – runtime identity (required for private resources)
- `gcs_bucket` / `gcs_mount_path` – attach Cloud Storage volumes if needed
- `allow_unauthenticated` – set to true for public endpoints (omit to require IAM auth)
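
As a sketch, a GCP entry combining these fields might look like this (values illustrative; verify names against `GCPDeployment` in `src/nodetool/config/deployment.py`):

```yaml
# Illustrative values – check GCPDeployment for the exact schema.
my-gcp:
  type: gcp
  gcp:
    service_name: nodetool-worker
    region: us-central1
    cpu: "2"
    memory: 4Gi
    min_instances: 0
    max_instances: 3
    allow_unauthenticated: true
```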

### Operational commands

- `nodetool deploy status <name>` – shows the current Cloud Run URL and revision
- `nodetool deploy logs <name>` – tails Cloud Run logs via `gcloud`
- `nodetool deploy destroy <name>` – deletes the Cloud Run service

## Self-Hosted Deployments

Self-hosted targets pair a NodeTool worker container with the Docker-aware proxy described in Self-Hosted Deployment and the Proxy Reference. Deployment state tracks the running container ID, generated bearer tokens, and hashed proxy configuration to avoid redundant restarts.

### Quick checklist

- Populate `host`, `ssh.user`, and image fields in `deployment.yaml`
- Configure proxy services (port 80/443 by default) with TLS certificates or ACME settings
- Mount persistent volumes (workspace, caches) through the `services[].volumes` map
- Provide `worker_auth_token` or let the proxy generate one on first deploy
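
Taken together, a self-hosted entry might look like the sketch below; the proxy and volume shapes in particular are assumptions, so check them against `SelfHostedDeployment` and the Proxy Reference (`worker_auth_token` is omitted here so the proxy generates one):

```yaml
# Illustrative sketch – verify every field against SelfHostedDeployment.
my-server:
  type: self-hosted
  host: 203.0.113.10
  ssh:
    user: deploy
  image:
    name: nodetool-worker
    tag: latest
  proxy:
    services:
      - domain: nodetool.example.com
        port: 443
        volumes:
          /srv/nodetool/workspace: /workspace
```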

Apply with `nodetool deploy apply <name>`; the deployer copies proxy files, restarts containers when configuration changes, and runs health checks before reporting success.

## Using Supabase
Supabase can provide both authentication and object storage in your deployment.
1) Configure environment variables in your deployment target:

```bash
SUPABASE_URL=https://your-project.supabase.co
SUPABASE_KEY=your-service-role-key
ASSET_BUCKET=assets
# Optional for temporary assets
ASSET_TEMP_BUCKET=assets-temp
# Select authentication provider (none|local|static|supabase)
AUTH_PROVIDER=supabase
```

2) Create the buckets in Supabase Storage (e.g. `assets`, `assets-temp`).
- For direct public URLs, set the buckets to public in the Supabase dashboard.
- For private buckets, extend the adapter to sign URLs or front with a proxy that performs signing.
3) Deploy and verify:

- Logs should show "Using Supabase storage for asset storage".
- Run a workflow that saves an image/dataframe and confirm links resolve under `…/storage/v1/object/public/<bucket>/…`.
- If using Supabase auth (`AUTH_PROVIDER=supabase`), send `Authorization: Bearer <supabase_jwt>`.
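
The public-URL check in the second item can be scripted with a plain `curl` probe; the project host, bucket, and object path below are placeholders:

```bash
# A public bucket should answer 200 for a stored object.
curl -I "https://your-project.supabase.co/storage/v1/object/public/assets/<object-path>"
```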

Notes:

- If S3 variables are set alongside Supabase, NodeTool prefers Supabase when `SUPABASE_URL`/`SUPABASE_KEY` are present.
- For local development without Supabase/S3, NodeTool uses the filesystem backend.

## Related Documentation

- Self-Hosted Deployment – proxy architecture and container layout
- Proxy Reference – on-demand routing, TLS, and command usage
- CLI Reference – command summaries
- Configuration Guide – environment, settings, and secret management
- Storage Guide – persistent storage options for deployments