API Families and Why They Exist
NodeTool exposes three closely related API surfaces:
- Editor API (NodeTool application / desktop)
  - Served by `nodetool serve` (`@nodetool/websocket-server.ts`).
  - Used by the NodeTool desktop app and local web UI to manage workflows, assets, jobs, and settings.
  - Acts as the control plane for authoring and debugging; includes dev-only endpoints such as the terminal WebSocket and debug tooling.
  - Intended to run on a trusted local machine, not as a public internet API.
- Server API (deployable instance)
  - Served by `nodetool serve --mode private` (`@nodetool/websocket-http-api.ts`).
  - Provides a stable, hardened runtime surface for external clients: OpenAI-compatible chat, workflow execution, admin and storage routes, and health checks.
  - Designed for self-hosted, RunPod, Cloud Run, and other remote deployments; all non-health endpoints sit behind Bearer auth and TLS.
- Chat Server API (chat-only runtime)
  - Served by `nodetool chat-server` (`@nodetool/chat-server.ts`).
  - Minimal OpenAI-compatible `/v1/chat/completions` and `/v1/models`, plus `/health`, for environments where you only need chat, not workflows or admin routes.
This split exists because:
- The desktop/editor needs full control over local resources and rich debug features, while deployed servers must not expose those capabilities.
- The server API is a small, stable contract you can safely integrate against and deploy widely; the editor API can evolve with the UI and internal architecture.
- Separating control plane (Editor API) from data plane (Server/Chat server) makes scaling, security hardening, and multi-environment deployments simpler.
Unified Endpoint Matrix
The table below summarizes key endpoints across the three surfaces. For detailed schemas, see Chat API and Workflow API.
| Surface | Area | Path / Prefix | Method / Protocol | Auth | Streaming | Notes |
|---|---|---|---|---|---|---|
| Editor, Server, Chat | Models | `/v1/models` | GET | Bearer when `AUTH_PROVIDER` enforces it | no | OpenAI-compatible model listing |
| Editor, Server, Chat | Chat | `/v1/chat/completions` | POST | Bearer when `AUTH_PROVIDER` enforces it | SSE when `"stream": true` | OpenAI-compatible chat; SSE or single JSON |
| Editor | Workflows | `/api/workflows` | GET | Depends on `AUTH_PROVIDER` | no | List workflows for the local app |
| Server | Workflows | `/workflows` | GET | Depends on `AUTH_PROVIDER` | no | List workflows on a server instance |
| Server | Workflows | `/workflows/{id}/run` | POST | Depends on `AUTH_PROVIDER` | no | Run a workflow once, return final outputs |
| Server | Workflows | `/workflows/{id}/run/stream` | POST (SSE) | Depends on `AUTH_PROVIDER` | yes (SSE, server → client) | Stream workflow progress and results |
| Editor | Chat WS | `/chat` | WebSocket | Bearer header or `api_key` query when enforced | yes | Bidirectional chat, tools, and workflow triggering |
| Editor | Jobs WS | `/predict` | WebSocket | Bearer header or `api_key` query when enforced | yes | Workflow/job execution and reconnection |
| Editor | Updates | `/updates` | WebSocket | Follows global auth settings | yes | System and job updates stream |
| Editor (dev-only) | Terminal | `/terminal` | WebSocket | Same as `/chat` and `/predict` (when enabled) | yes | Host terminal access; gated by `NODETOOL_ENABLE_TERMINAL_WS` |
| Server | Health | `/health` | GET | none | no | JSON server health (public) |
| Server | Ping | `/ping` | GET | none | no | JSON ping with timestamp (public) |
| Editor, Chat | Health | `/health` | GET | none | no | Basic liveness; string or JSON |
| Server | Storage | `/admin/storage/*` | HEAD/GET/PUT/DELETE | Bearer when enforced | streaming for GET | Admin asset/temp storage (full CRUD) |
| Server | Storage | `/storage/*` | HEAD/GET | none or proxy-protected | streaming for GET | Public read-only asset/temp access |
When `AUTH_PROVIDER` is `local` or `none`, editor and server endpoints accept requests without a token for convenience. When it is `static` or `supabase`, include `Authorization: Bearer <token>` on every request except `/health` and `/ping`.
Authentication and Headers
NodeTool uses Bearer token authentication. The behavior depends on your AUTH_PROVIDER setting:
| AUTH_PROVIDER | Token Required? | Use Case |
|---|---|---|
| `local` / `none` | No | Local development, desktop app |
| `static` | Yes; use the configured static token | Simple deployments with a shared secret |
| `supabase` | Yes; use a Supabase JWT | Production deployments with user management |
How to include credentials
- HTTP requests: `Authorization: Bearer <token>` header on all non-public routes
- WebSocket (Editor API): `Authorization: Bearer <token>` header (preferred) or `api_key` query parameter
- SSE streams: `Authorization: Bearer <token>` and `Accept: text/event-stream`

Local development: when running locally with the default config (`AUTH_PROVIDER=local`), no token is needed. You can omit the `Authorization` header entirely.
See Authentication for full token handling rules.
Streaming Behavior
- `/v1/chat/completions` uses OpenAI-style SSE when `stream` is true; otherwise it returns a single JSON response.
- Editor WebSockets: `/predict` streams workflow/job events until completion or cancellation; `/chat` streams chat tokens, tool calls, and agent/workflow events.
- Server SSE: `/workflows/{id}/run/stream` sends job update and output events, then a final `[DONE]`.
- Server storage routes stream file contents for large assets.
Headless Mode: Running Workflows via CLI/API
NodeTool can run entirely without the UI—perfect for automation, CI/CD pipelines, and programmatic integrations. This section shows how to execute workflows from the command line or via HTTP requests.
Quick Start: Run a Workflow via cURL
```bash
# Run a workflow and get results (non-streaming)
curl -X POST "http://localhost:7777/api/workflows/YOUR_WORKFLOW_ID/run" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -d '{
    "params": {
      "prompt": "A cyberpunk cityscape at sunset",
      "style": "photorealistic"
    }
  }'
```
Response:
```json
{
  "output": {
    "image": {
      "type": "image",
      "uri": "http://localhost:7777/storage/assets/abc123.png"
    },
    "caption": "Generated image of a cyberpunk cityscape..."
  }
}
```
Streaming Workflow Execution
For long-running workflows, use streaming to get real-time progress updates:
```bash
# Stream workflow execution (SSE)
curl -X POST "http://localhost:7777/workflows/YOUR_WORKFLOW_ID/run/stream" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -H "Accept: text/event-stream" \
  -d '{
    "params": {
      "prompt": "Analyze this document and extract key points"
    }
  }'
```
Streaming response (Server-Sent Events):
```
data: {"type": "job_update", "status": "running", "job_id": "job_123"}

data: {"type": "node_update", "node_id": "node_1", "node_name": "Agent", "status": "running"}

data: {"type": "node_progress", "node_id": "node_1", "progress": 50, "total": 100}

data: {"type": "node_update", "node_id": "node_1", "node_name": "Agent", "status": "completed"}

data: {"type": "job_update", "status": "completed", "result": {"output": "..."}}

data: [DONE]
```
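A client can turn these frames into Python objects with a few lines; a sketch of a generic `data:`-line parser (compatible with `requests.iter_lines()`; the fields inside each event depend on your workflow):

```python
import json


def iter_sse_events(lines):
    """Yield parsed JSON events from an iterable of SSE lines.

    Accepts str or bytes lines, skips blank keep-alive lines, and
    stops at the [DONE] sentinel.
    """
    for line in lines:
        if isinstance(line, bytes):
            line = line.decode("utf-8")
        if not line.startswith("data: "):
            continue  # blank line or comment between events
        payload = line[len("data: "):]
        if payload == "[DONE]":
            return
        yield json.loads(payload)
```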
Chat API (OpenAI-Compatible)
NodeTool exposes OpenAI-compatible endpoints, so you can use standard OpenAI clients:
```bash
# Simple chat completion
curl -X POST "http://localhost:7777/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -d '{
    "model": "gpt-4",
    "messages": [
      {"role": "user", "content": "Explain quantum computing in simple terms"}
    ]
  }'
```
Response:
```json
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1699000000,
  "model": "gpt-4",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Quantum computing uses quantum mechanics..."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 10,
    "completion_tokens": 150,
    "total_tokens": 160
  }
}
```
Streaming Chat
```bash
# Streaming chat (prints tokens as they arrive)
curl -X POST "http://localhost:7777/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -d '{
    "model": "gpt-4",
    "messages": [
      {"role": "user", "content": "Write a haiku about programming"}
    ],
    "stream": true
  }'
```
Streaming response:
```
data: {"id":"chatcmpl-123","choices":[{"delta":{"role":"assistant"},"index":0}]}

data: {"id":"chatcmpl-123","choices":[{"delta":{"content":"Code"},"index":0}]}

data: {"id":"chatcmpl-123","choices":[{"delta":{"content":" flows"},"index":0}]}

data: {"id":"chatcmpl-123","choices":[{"delta":{"content":" like"},"index":0}]}

data: [DONE]
```
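Reassembling the reply is a matter of concatenating the `delta.content` fragments; a minimal accumulator sketch (chunk shape follows the sample frames above):

```python
def accumulate_deltas(chunks) -> str:
    """Join streamed chat-completion deltas into the full message.

    Each chunk is a parsed SSE payload of the shape shown above:
    {"choices": [{"delta": {"content": "..."}, "index": 0}]}.
    Role-only and empty deltas are skipped.
    """
    parts = []
    for chunk in chunks:
        choices = chunk.get("choices") or []
        if not choices:
            continue
        content = choices[0].get("delta", {}).get("content")
        if content:
            parts.append(content)
    return "".join(parts)
```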
List Available Models
```bash
curl "http://localhost:7777/v1/models" \
  -H "Authorization: Bearer YOUR_TOKEN"
```
Response:
```json
{
  "object": "list",
  "data": [
    {"id": "gpt-4", "object": "model", "owned_by": "openai"},
    {"id": "gpt-3.5-turbo", "object": "model", "owned_by": "openai"},
    {"id": "claude-3-opus", "object": "model", "owned_by": "anthropic"},
    {"id": "gpt-oss:20b", "object": "model", "owned_by": "ollama"}
  ]
}
```
List Workflows
```bash
# List all workflows (Editor API)
curl "http://localhost:7777/api/workflows" \
  -H "Authorization: Bearer YOUR_TOKEN"

# List workflows on a deployed server
curl "http://your-server:7777/workflows" \
  -H "Authorization: Bearer YOUR_TOKEN"
```
Health Check
```bash
# Check if server is running (no auth required)
curl "http://localhost:7777/health"
```
Response:
```json
{"status": "healthy"}
```
CLI Workflow Execution
You can also run workflows directly from the command line:
```bash
# Run workflow by ID
nodetool run workflow_abc123

# Run workflow from file
nodetool run ./my_workflow.json

# Run with JSONL output (for automation)
nodetool run workflow_abc123 --jsonl

# Run with parameters from stdin
echo '{"workflow_id": "abc123", "params": {"prompt": "test"}}' | nodetool run --stdin
```
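For automation, the `--jsonl` output can be consumed line by line from a subprocess; a sketch (assumes the `nodetool` CLI is on PATH; the fields emitted per event line are not specified here, so inspect them for your workflow):

```python
import json
import subprocess


def parse_jsonl(lines):
    """Yield parsed objects from an iterable of JSONL lines, skipping blanks."""
    for line in lines:
        line = line.strip()
        if line:
            yield json.loads(line)


def run_workflow_events(workflow_id: str):
    """Run a workflow headlessly via the CLI and yield its JSONL events."""
    proc = subprocess.Popen(
        ["nodetool", "run", workflow_id, "--jsonl"],
        stdout=subprocess.PIPE,
        text=True,
    )
    try:
        yield from parse_jsonl(proc.stdout)
    finally:
        proc.wait()
```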
TypeScript / Node.js Client Example
```typescript
const BASE_URL = 'http://localhost:7777';
const TOKEN = 'your_token_here';

// Run a workflow
async function runWorkflow(workflowId: string, params: Record<string, unknown>) {
  const response = await fetch(`${BASE_URL}/api/workflows/${workflowId}/run`, {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${TOKEN}`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({ params })
  });
  return response.json();
}

// Stream workflow execution
async function streamWorkflow(workflowId: string, params: Record<string, unknown>) {
  const response = await fetch(`${BASE_URL}/workflows/${workflowId}/run/stream`, {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${TOKEN}`,
      'Content-Type': 'application/json',
      'Accept': 'text/event-stream'
    },
    body: JSON.stringify({ params })
  });

  const reader = response.body!.getReader();
  const decoder = new TextDecoder();
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    // stream: true keeps multi-byte characters intact across chunks
    const lines = decoder.decode(value, { stream: true }).split('\n');
    for (const line of lines) {
      if (line.startsWith('data: ') && line !== 'data: [DONE]') {
        const event = JSON.parse(line.slice(6));
        console.log('Event:', event.type, event.status);
        if (event.status === 'completed') {
          return event.result;
        }
      }
    }
  }
}

// Using OpenAI SDK (works with NodeTool!)
import OpenAI from 'openai';

const openai = new OpenAI({
  apiKey: TOKEN,
  baseURL: `${BASE_URL}/v1`
});

const completion = await openai.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'Hello!' }]
});
console.log(completion.choices[0].message.content);
```
Python Client Example
```python
import requests
import json

BASE_URL = "http://localhost:7777"
TOKEN = "your_token_here"  # Not needed for local development

HEADERS = {
    "Authorization": f"Bearer {TOKEN}",
    "Content-Type": "application/json",
}

# List workflows
workflows = requests.get(f"{BASE_URL}/api/workflows", headers=HEADERS).json()

# Run a workflow
result = requests.post(
    f"{BASE_URL}/api/workflows/{workflows[0]['id']}/run",
    headers=HEADERS,
    json={"params": {"prompt": "A sunset over mountains"}},
).json()
print("Output:", result["output"])

# Stream a workflow execution
response = requests.post(
    f"{BASE_URL}/workflows/{workflows[0]['id']}/run/stream",
    headers={**HEADERS, "Accept": "text/event-stream"},
    json={"params": {"prompt": "Analyze this text"}},
    stream=True,
)
for line in response.iter_lines():
    if line and line.startswith(b"data: ") and line != b"data: [DONE]":
        event = json.loads(line[6:])
        print(f"Event: {event['type']} - {event.get('status', '')}")

# Use with OpenAI Python SDK (works with NodeTool!)
from openai import OpenAI

client = OpenAI(api_key=TOKEN, base_url=f"{BASE_URL}/v1")
completion = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(completion.choices[0].message.content)
```
Finding Your Workflow ID
To run a workflow via API, you need its ID. Here’s how to find it:
- From the UI: open a workflow in the editor; the ID appears in the browser URL bar.
- From the API: call `GET /api/workflows` (Editor) or `GET /workflows` (Server) to list all workflows with their IDs.
- From the CLI: run `nodetool list workflows`.
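Programmatically, a listing response can be reduced to a name-to-ID lookup; a small sketch (the `id` and `name` fields are assumptions about the listing payload; adjust to what your instance actually returns):

```python
def workflow_index(workflows) -> dict:
    """Map workflow name -> id from a workflow listing response.

    Falls back to the id itself when an entry has no name.
    """
    return {w.get("name") or w["id"]: w["id"] for w in workflows}
```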
Error Handling
API errors return standard HTTP status codes with JSON error bodies:
```json
{
  "error": {
    "message": "Workflow not found: invalid_id",
    "type": "not_found",
    "code": 404
  }
}
```
| Status Code | Meaning | Common Causes |
|---|---|---|
| 400 | Bad Request | Invalid parameters, malformed JSON |
| 401 | Unauthorized | Missing or invalid token |
| 403 | Forbidden | Token lacks permission |
| 404 | Not Found | Workflow/resource doesn’t exist |
| 422 | Validation Error | Parameter validation failed |
| 500 | Internal Error | Server-side error |
| 503 | Service Unavailable | Server overloaded or starting up |
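Clients can render the error envelope above as a readable one-liner; a sketch (degrades to the bare status when the body is not in the documented shape):

```python
import json


def format_api_error(status: int, body: str) -> str:
    """Format an HTTP status plus the JSON error envelope above as a
    single readable message; fall back gracefully for non-JSON bodies."""
    try:
        err = json.loads(body)["error"]
        return f"{status} {err.get('type', 'error')}: {err.get('message', '')}"
    except (ValueError, KeyError, TypeError):
        return f"HTTP {status}"
```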
Related Guides
- Chat API — OpenAI-compatible request/response schema and WebSocket usage.
- Workflow API — Editor vs Server workflow paths and streaming.
- API Server Overview — Editor API architecture and modules.
- Deployment Guide — How servers are built and exposed.
- Chat Server — Minimal chat-only deployments.
- CLI Reference — Commands for `serve`, `server`, and `chat-server`.