NodeTool supports AI models from multiple providers, ranging from cutting-edge proprietary models to open-source alternatives. This guide covers the supported models and their capabilities.

All providers are accessible through generic nodes (TextToImage, Agent, RealtimeAgent, etc.). Switching providers does not require editing the workflow structure.
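To make this concrete, here is a purely illustrative sketch (not NodeTool's actual workflow schema; the field names are hypothetical): switching from a local to a cloud provider only changes the provider and model settings of a generic node, while the node type and its connections stay the same.

```python
# Purely illustrative -- NOT NodeTool's real workflow schema.
# The point: switching providers only swaps provider/model settings
# on a generic node; the graph structure is untouched.

text_to_image_local = {
    "type": "nodetool.image.TextToImage",    # generic node type
    "provider": "huggingface",                # hypothetical field name
    "model": "black-forest-labs/FLUX.1-dev",
    "inputs": {"prompt": "a lighthouse at dusk"},
}

text_to_image_cloud = {
    "type": "nodetool.image.TextToImage",    # same node type, same connections
    "provider": "openai",                     # only provider/model change
    "model": "gpt-image-1",
    "inputs": {"prompt": "a lighthouse at dusk"},
}
```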

Local Inference Engines

NodeTool provides comprehensive local model support with 1,655+ models across multiple frameworks.

For provider-based local inference (Ollama, vLLM), please refer to the Providers documentation.

llama.cpp & GGUF Format

llama.cpp is a highly optimized C/C++ inference library that enables efficient LLM inference on CPU and GPU hardware using the GGUF format. It supports 1.5-bit through 8-bit integer quantization for significantly reduced memory usage.

Models: Supports 300+ GGUF quantized models including Qwen, Llama, Gemma, DeepSeek, and GPT variants.
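Outside of NodeTool's node graph, the same GGUF files can be exercised directly with the llama-cpp-python bindings. A minimal sketch, assuming llama-cpp-python is installed and a quantized GGUF file has already been downloaded (the model path is a placeholder):

```python
# Minimal llama-cpp-python sketch: load a quantized GGUF model and chat with it.
from llama_cpp import Llama

llm = Llama(
    model_path="models/qwen2.5-7b-instruct-q4_k_m.gguf",  # placeholder path
    n_ctx=4096,        # context window size
    n_gpu_layers=-1,   # offload all layers to the GPU if one is available
)

result = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain GGUF quantization in one sentence."}],
    max_tokens=128,
)
print(result["choices"][0]["message"]["content"])
```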

MLX Framework (Apple Silicon)

MLX is Apple's open-source machine learning framework specifically optimized for Apple Silicon's unified memory architecture. It enables efficient on-device AI for Mac users.

Capabilities:

  • LLMs: Native optimization for Llama, Qwen, Mistral, and others.
  • Vision: Multimodal models and FastVLM support.
  • Image Gen: FLUX models ported to MLX for faster generation.
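As a reference for what MLX execution looks like outside NodeTool, the mlx-lm package loads and runs such models in a few lines. A minimal sketch, assuming mlx-lm is installed on an Apple Silicon Mac (the checkpoint ID is just one example of the many MLX-community conversions on the Hub):

```python
# Minimal mlx-lm sketch: run a 4-bit quantized LLM natively on Apple Silicon.
from mlx_lm import load, generate

# Any MLX-converted checkpoint works here; this one is only an example.
model, tokenizer = load("mlx-community/Mistral-7B-Instruct-v0.3-4bit")

text = generate(
    model,
    tokenizer,
    prompt="Summarize why unified memory helps on-device inference.",
    max_tokens=128,
)
print(text)
```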

Nunchaku (NVIDIA GPU)

Nunchaku is a high-performance inference engine specifically designed for 4-bit diffusion models on NVIDIA GPUs. It implements SVDQuant to maintain visual fidelity while reducing memory usage by 3.6x compared to BF16 models. It is ideal for running large diffusion models (like FLUX.1) on consumer NVIDIA GPUs.
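The sketch below follows the pattern documented by the upstream nunchaku project for diffusers: a 4-bit SVDQuant transformer is swapped into an otherwise standard FluxPipeline. The class and checkpoint names are taken from that project's examples and may have changed between releases, so treat them as assumptions rather than a verified NodeTool code path.

```python
# Hedged sketch of the nunchaku + diffusers pattern on an NVIDIA GPU:
# a 4-bit SVDQuant transformer replaces the standard FLUX transformer.
# Class and checkpoint names follow upstream examples and may differ by version.
import torch
from diffusers import FluxPipeline
from nunchaku import NunchakuFluxTransformer2dModel  # assumed class name

transformer = NunchakuFluxTransformer2dModel.from_pretrained(
    "mit-han-lab/svdq-int4-flux.1-dev"  # assumed 4-bit SVDQuant checkpoint
)
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    transformer=transformer,          # only the transformer is replaced
    torch_dtype=torch.bfloat16,
).to("cuda")

image = pipe("a watercolor fox in a snowy forest", num_inference_steps=28).images[0]
image.save("fox.png")
```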

HuggingFace Transformers

Transformers is the standard library for working with state-of-the-art ML models across text, vision, audio, and multimodal tasks. It provides access to the HuggingFace Hub with over 500,000 pre-trained models and supports automatic device detection (GPU/Apple Silicon/CPU).
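The device-detection behavior mirrors what a few lines of plain transformers code would do: prefer CUDA, then Apple's MPS backend, then fall back to CPU. A minimal sketch (the model ID is just one well-known Hub checkpoint):

```python
# Minimal transformers sketch with the same device-detection idea:
# prefer CUDA, then Apple Silicon (MPS), then fall back to CPU.
import torch
from transformers import pipeline

if torch.cuda.is_available():
    device = "cuda"
elif torch.backends.mps.is_available():
    device = "mps"
else:
    device = "cpu"

classifier = pipeline(
    "text-classification",  # any Hub task follows the same pattern
    model="distilbert-base-uncased-finetuned-sst-2-english",
    device=device,
)
print(classifier("NodeTool makes local inference painless."))
```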

Comparison Matrix

| Framework | Throughput | Memory Efficiency | Ease of Use | Best Hardware | Use Case |
|---|---|---|---|---|---|
| llama.cpp | Medium | Excellent | Medium | CPU, GPU | Quantized models, edge devices |
| MLX | Good | Excellent | Good | Apple Silicon | Mac, iOS, privacy |
| Nunchaku | Excellent | Excellent | Medium | NVIDIA GPU | High-performance diffusion |
| Transformers | Medium | Good | Excellent | Any | Research, flexibility |

Supported Model Types

NodeTool supports a wide range of model types across different domains. Below is an overview of the supported types and their available execution variants.

Variants Key

  • Full Precision: Standard execution using HuggingFace Transformers/Diffusers (supports CUDA, MPS, CPU).
  • MLX: Optimized execution for Apple Silicon (M-series chips).
  • Nunchaku: High-performance 4-bit quantization for NVIDIA GPUs.

Image Generation

| Model Type | Description | Variants |
|---|---|---|
| Flux | State-of-the-art text-to-image generation | ✅ Full Precision, ✅ MLX, ✅ Nunchaku |
| Flux Fill | Inpainting/Outpainting for Flux | ✅ Full Precision, ✅ MLX |
| Flux Depth | Depth-guided generation | ✅ Full Precision, ✅ MLX |
| Flux Redux | Image variation and mixing | ✅ Full Precision, ✅ MLX |
| Flux Kontext | Context-aware generation | ✅ Full Precision, ✅ MLX |
| Stable Diffusion XL | SDXL base and refiner models | ✅ Full Precision, ✅ Nunchaku |
| Stable Diffusion 3 | Latest Stable Diffusion architecture | ✅ Full Precision |
| Stable Diffusion | SD 1.5, 2.1, and variants | ✅ Full Precision |
| Qwen Image | Qwen-based text-to-image | ✅ Full Precision, ✅ MLX, ✅ Nunchaku |
| Qwen Image Edit | Instruction-based image editing | ✅ Full Precision, ✅ MLX |
| ControlNet | Structural guidance (Canny, Depth, etc.) | ✅ Full Precision, ✅ MLX (Flux) |
| Text to Image | Generic text-to-image models | ✅ Full Precision |
| Image to Image | Image transformation models | ✅ Full Precision |
| Inpainting | Mask-based image editing | ✅ Full Precision |

Vision & Video

| Model Type | Description | Variants |
|---|---|---|
| Image Text to Text | Vision-Language Models (VLM) | ✅ Full Precision, ✅ MLX (Qwen2-VL) |
| Visual QA | Visual Question Answering | ✅ Full Precision |
| Document QA | Document understanding and QA | ✅ Full Precision |
| OCR | Optical Character Recognition (GOT-OCR, etc.) | ✅ Full Precision |
| Depth Estimation | Monocular depth estimation | ✅ Full Precision |
| Image Classification | Categorize images | ✅ Full Precision |
| Object Detection | Detect objects in images | ✅ Full Precision |
| Image Segmentation | Pixel-level segmentation | ✅ Full Precision |
| Zero-Shot Detection | Open-vocabulary detection | ✅ Full Precision |
| Mask Generation | Segment Anything (SAM) variants | ✅ Full Precision |
| Video Classification | Categorize video content | ✅ Full Precision |
| Text to Video | Generate video from text | ✅ Full Precision |
| Image to Video | Animate images | ✅ Full Precision |
| Text to 3D | Generate 3D assets from text | ✅ Full Precision |
| Image to 3D | Generate 3D assets from images | ✅ Full Precision |

Natural Language Processing

| Model Type | Description | Variants |
|---|---|---|
| Text Generation | LLMs (Llama, Qwen, Mistral, etc.) | ✅ Full Precision, ✅ MLX |
| Text to Text | T5, BART, and seq2seq models | ✅ Full Precision |
| Summarization | Text summarization | ✅ Full Precision |
| Translation | Machine translation | ✅ Full Precision |
| Question Answering | Extractive QA | ✅ Full Precision |
| Text Classification | Sentiment analysis, etc. | ✅ Full Precision |
| Token Classification | NER, POS tagging | ✅ Full Precision |
| Zero-Shot Classification | Open-vocabulary classification | ✅ Full Precision |
| Sentence Similarity | Semantic similarity / embeddings | ✅ Full Precision |
| Reranker | Search result reranking | ✅ Full Precision |
| Feature Extraction | General embeddings | ✅ Full Precision |
| Fill Mask | BERT-style masked language modeling | ✅ Full Precision |

Audio

| Model Type | Description | Variants |
|---|---|---|
| Text to Speech | Generate speech from text | ✅ Full Precision, ✅ MLX |
| Speech Recognition | ASR (Whisper, etc.) | ✅ Full Precision, ✅ MLX |
| Audio Classification | Categorize audio events | ✅ Full Precision |
| Voice Activity | VAD (Silero, etc.) | ✅ Full Precision |
| Audio to Audio | Voice conversion, enhancement | ✅ Full Precision |

Components & Adapters

| Model Type | Description | Variants |
|---|---|---|
| LoRA | Low-Rank Adaptation weights | ✅ Full Precision (SD, SDXL, Qwen) |
| IP Adapter | Image Prompt Adapters | ✅ Full Precision |
| VAE | Variational Autoencoders | ✅ Full Precision |
| CLIP | Text/Image Encoders | ✅ Full Precision |
| T5 Encoder | Text Encoders for diffusion | ✅ Full Precision |
| RealESRGAN | Image Upscaling | ✅ Full Precision |

Cloud-Based State-of-the-Art Models

In addition to local models, NodeTool provides access to cutting-edge cloud-based models through provider integrations. These models offer the latest capabilities in video, image, and audio generation.

Video Generation (Cloud)

| Model | Provider | Key Features | Resolution | Max Duration |
|---|---|---|---|---|
| Sora 2 Pro | OpenAI | Realistic motion, refined physics, native audio | 1080p | 15s |
| Veo 3.1 | Google | Realistic motion, multi-image refs, synced audio | 1080p | Extended |
| Grok Imagine | xAI | Multimodal T2V/I2V with coherent motion | 1080p | Short clips |
| Wan 2.6 | Alibaba | Multi-shot, stable characters, affordable | 1080p | Variable |
| Hailuo 2.3 | MiniMax | Expressive characters, complex lighting | 1080p+ | Variable |
| Kling 2.6 | Kling | Synced speech & effects, audio-visual coherence | 1080p | Variable |

Access via: nodetool.video.TextToVideo, nodetool.video.ImageToVideo nodes

Image Generation (Cloud)

| Model | Provider | Key Features | Output Quality |
|---|---|---|---|
| FLUX.2 | Black Forest Labs | Photoreal, multi-reference consistency, accurate text | High |
| Nano Banana Pro | Google | 2K native, 4K scaling, enhanced text & characters | Very High |

Access via: nodetool.image.TextToImage node

Advantages of Cloud Models

  • Latest Technology: Access to newest architectures and training data
  • No Local Resources: Run on any hardware without GPU requirements
  • Instant Availability: No download or installation needed
  • Continuous Updates: Models improve without local updates

Considerations

  • API Costs: Per-generation pricing varies by provider
  • Internet Required: Cannot run offline
  • Data Privacy: Content is processed on provider servers
  • Rate Limits: Subject to provider API quotas

Cost-Effective Alternative: kie.ai

All the cloud models listed above are available through kie.ai, an AI provider aggregator that:

  • Offers unified access to multiple providers through a single API
  • Often provides competitive or lower pricing than upstream providers
  • Simplifies API key management (one key for all models)
  • Enables easy cost comparison and optimization across providers

Important: Some models (xAI Grok Imagine, Alibaba Wan 2.6, Kling 2.6) currently require kie.ai or a similar aggregator, because NodeTool does not register API keys for these providers directly. OpenAI Sora 2 Pro, Google Veo 3.1, and MiniMax Hailuo 2.3 are supported through direct provider integrations.

An aggregator is particularly useful for workflows that combine state-of-the-art models from several providers.

For detailed provider configuration and usage, see the Providers Guide.