Glossary
Reference for Stacknet terminology
Network
aISP (AI Service Provider) — A node operator who connects a node key to provide compute (inference, generation, tool execution) to the Stacknet network. aISPs earn paperwork (proof for compensation) proportional to the tasks they execute. Any machine can become an aISP by running the stacknet aISP app and registering at least one node key.
Node — A participant in the Stacknet network running the aISP app.
Coprocessor — A modular system registered with the Stacknet framework. Each coprocessor handles a specific domain: AI inference, payment settlement, storage (IPFS/Rack), MetaChain attestation, arena benchmarking, agent orchestration, and more. The coprocessor framework supports 12+ types with independent lifecycle, configuration, and task routing.
XRouter — The intelligent routing system on Stacknet. XRouter scores nodes by model availability (HOT/COLD), GPU load, reputation, and status to select the optimal executor for each task.
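As a rough illustration of the routing described above, here is a minimal scoring sketch. The field names, weights, and scoring formula are illustrative assumptions, not the actual XRouter algorithm.

```python
# Hypothetical sketch of XRouter-style node scoring. Weights and fields
# are assumptions for illustration, not the real Stacknet scoring rule.
from dataclasses import dataclass

@dataclass
class Node:
    node_id: str
    model_hot: bool      # HOT = model already loaded; COLD = must load first
    gpu_load: float      # 0.0 (idle) .. 1.0 (saturated)
    reputation: float    # 0.0 .. 1.0
    online: bool

def score(node: Node) -> float:
    """Higher is better; offline nodes are disqualified outright."""
    if not node.online:
        return float("-inf")
    availability = 1.0 if node.model_hot else 0.3   # prefer HOT models
    return availability * node.reputation * (1.0 - node.gpu_load)

def select_executor(nodes: list[Node]) -> Node:
    return max(nodes, key=score)
```

A HOT node with moderate load will typically beat a COLD node with a slightly better reputation, which matches the intuition that avoiding a model load dominates small reputation differences.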
Confidential Computing — Optional end-to-end, hardware-level encrypted inference in which neither the gateway nor the operator can ever view the prompt or response. Uses NVIDIA TEE (Trusted Execution Environment) attestation.
Keys
Node Key — A cryptographic credential that grants an operator the right to participate in the Stacknet network. Node keys can be generated via payment (credit card or crypto), carry a token balance that depletes with usage, and accumulate paperwork (USD compensation) for tasks executed.
API Key — A stack-scoped authentication credential (prefix sk_) for programmatic access to Stacknet APIs. Created per stack with read or write permissions.
Economy
Paper — The USD-denominated compensation earned by node operators for executing tasks on the network. Accumulated during payment settlement, with cash transferred directly to a bank or crypto account of the user's choice.
Paperwork — The cumulative USD value earned by a node key from serving inference, generation, and tool execution tasks. When a node key is filed, its paperwork is included in the SHA-256 snapshot submitted as a MetaProof attestation, creating a tamper-proof record of the operator’s earnings.
Pile — The pile represents the total outstanding paperwork to be filed across the network.
File Paperwork — The process of closing a node key and submitting its final settlement for paper.
Tokens — Tokens are created through tokenization: breaking raw data down and refining it into distinct units digestible by an AI model. Just as a software compiler translates human language into binary code a computer can execute, tokenization interprets human language for AI programs so they can prepare responses. Tokens are the internal unit of account for metering usage on Stacknet.
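To make the text-to-units mapping concrete, here is a toy tokenizer. Real models use learned subword vocabularies (e.g. BPE) rather than a whitespace split; this sketch only illustrates how text becomes a sequence of metered token IDs.

```python
# Toy tokenizer: maps words to integer IDs from a fixed vocabulary.
# Real Stacknet models use learned subword tokenizers; this only
# illustrates the text -> token-IDs idea behind usage metering.
def tokenize(text: str, vocab: dict[str, int]) -> list[int]:
    unk = vocab.get("<unk>", 0)            # fallback ID for unknown words
    return [vocab.get(word, unk) for word in text.lower().split()]

vocab = {"<unk>": 0, "hello": 1, "world": 2}
ids = tokenize("Hello world", vocab)       # two tokens would be metered
```

The length of `ids` is what a metering system would bill against the key's token balance.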
Context Window — The maximum number of tokens a model can process in a single request (prompt + response). Varies by model — e.g., 128K+ for the preview layer, 1M for the magma layer.
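A client might guard a request against a layer's window like the sketch below. The window table matches the figures above, but the 4-characters-per-token heuristic is a rough assumption; real clients should count tokens with the model's actual tokenizer.

```python
# Check whether prompt + expected response fits a layer's context window.
# The chars-per-token heuristic is a crude illustrative assumption.
CONTEXT_WINDOWS = {"preview": 128_000, "magma": 1_000_000}  # in tokens

def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)   # rough: ~4 characters per token

def fits(layer: str, prompt: str, max_output_tokens: int) -> bool:
    window = CONTEXT_WINDOWS[layer]
    return estimate_tokens(prompt) + max_output_tokens <= window
```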
Content
Agent — A tool that uses AI to perform a series of multistep tasks beyond what a more basic AI chatbot does, such as writing code, creating documents, booking tickets, or managing a social account.
Chain of Thought — Breaking down a problem into smaller, intermediate steps to improve the quality of the end result. It usually takes longer to get an answer, but the answer is more likely to be correct, especially in a logic or coding context. Reasoning models are optimized for chain-of-thought reasoning thanks to reinforcement learning.
Prompt — The input text, images, or instructions sent to a model layer for inference. Stacknet routes prompts through the XRouter to the best available node based on models, capabilities, and load.
Prompt Caching — Automatic caching of prompt prefixes to reduce latency and cost on repeated requests. When a prompt shares a prefix with a recently processed request, the cached KV state is reused.
Prompt Contract — Atomic transaction records on Stacknet. They ensure bounded execution: guaranteeing the sender has the balance beforehand, and correctly paying operators, service providers, and attributed (injected) sources upon verifiable completion.
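The balance-then-pay flow can be sketched as below. The 80/20 fee split and the participant roles are illustrative assumptions; only the check-balance-first, pay-on-completion shape comes from the definition above.

```python
# Sketch of a prompt contract's bounded-execution settlement: verify the
# sender's balance covers the cost, then split payment between the
# operator and an attributed source. The split is an assumption.
def settle_prompt_contract(balances: dict[str, int], sender: str,
                           operator: str, source: str, cost: int) -> bool:
    if balances.get(sender, 0) < cost:
        return False                       # bounded: never run unfunded work
    balances[sender] -= cost
    operator_share = cost * 80 // 100      # assumed 80/20 split
    balances[operator] = balances.get(operator, 0) + operator_share
    balances[source] = balances.get(source, 0) + cost - operator_share
    return True
```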
Skill — A reusable, composable AI capability registered on Stacknet. Skills are specifications containing name, description, input schema, and execution parameters. Skills can be text generation, image generation, code execution, or multi-step agent workflows. Registration costs tokens (metered via the skill bridge).
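A skill specification might look like the following. The exact schema Stacknet uses is not documented here, so the field names, layer name, and parameter values are assumptions matching the components listed above.

```python
# Illustrative shape of a skill specification: name, description, input
# schema, and execution parameters. Field names are assumptions.
from dataclasses import dataclass, field

@dataclass
class Skill:
    name: str
    description: str
    input_schema: dict                         # JSON-Schema-style inputs
    execution_params: dict = field(default_factory=dict)

summarize = Skill(
    name="summarize-text",
    description="Condense a document into a short summary.",
    input_schema={"type": "object",
                  "properties": {"text": {"type": "string"}}},
    execution_params={"layer": "preview", "max_output_tokens": 256},
)
```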
Tensor — A large binary artifact (model weights, embeddings, datasets) registered on Stacknet. Tensors are content-addressed and can be referenced by skills during execution.
Looop — A recurring execution of a skill or prompt on a configurable interval running in a network sandbox. Looops enable periodic tasks like monitoring, data collection, content generation, or scheduled agent runs.
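The recurring-execution idea reduces to a loop over a task on a fixed interval. A real looop runs in a network sandbox with durability guarantees; this local sketch is an illustrative assumption that only shows the scheduling shape.

```python
import time

# Minimal looop-style scheduler sketch: execute a task repeatedly on a
# fixed interval and collect its results. A real looop runs remotely in
# a sandbox; this only illustrates the recurring-execution idea.
def run_looop(task, interval_seconds: float, iterations: int) -> list:
    results = []
    for _ in range(iterations):
        results.append(task())         # e.g. a skill or prompt execution
        time.sleep(interval_seconds)   # wait out the configured interval
    return results
```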
Models
Model — An AI model available for inference on Stacknet. Models are registered in the canonical model registry with pricing (per-million input/output tokens, per-megapixel, per-MB video/audio), VRAM requirements, backend configs, and capability sets.
Model Layer — A named abstraction over one or more underlying models. Layers map to specific models based on the stack’s configuration. Layers enable model upgrades without client changes and support per-stack customization.
MoM (Mixture of Models) — An ensemble inference strategy that routes a prompt to multiple models simultaneously, then synthesizes the best response. The duce layer uses MoM with cheap-then-escalate routing: fast models answer first, and if confidence is low, the response is escalated to a more capable model.
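The cheap-then-escalate routing can be sketched as a two-stage fallback. The model callables, the confidence signal, and the 0.8 threshold are illustrative assumptions; the glossary only specifies the fast-first, escalate-on-low-confidence ordering.

```python
# Cheap-then-escalate sketch: try a fast model first and only fall back
# to a stronger one when confidence is low. Threshold is an assumption.
from typing import Callable

Answer = tuple[str, float]   # (response text, confidence in 0..1)

def mom_route(prompt: str,
              cheap: Callable[[str], Answer],
              strong: Callable[[str], Answer],
              threshold: float = 0.8) -> str:
    text, confidence = cheap(prompt)
    if confidence >= threshold:
        return text            # fast path: cheap model was confident
    text, _ = strong(prompt)   # escalate to the more capable model
    return text
```

Most prompts take the fast path, so the ensemble's average cost stays close to the cheap model's while hard prompts still get the stronger model's answer.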
Infrastructure
Sandbox — An isolated execution environment for running user code safely. Sandboxes include file system, command execution, and preview capabilities.
Lode — A binary serialization language used for model layers, skills and compact state encoding. Lode files (.lode) define schema versions for model metadata, enabling efficient over-the-wire transmission of registry state between nodes.
MetaProof — A tamper-proof cryptographic attestation. Stacknet uses MetaProofs for node key close settlements: the key’s final state (token balance, paperwork, user ID) is SHA-256 hashed to 32 bytes, then proved and verified in a single transaction for settlement.
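The 32-byte snapshot step can be sketched as below. The field names and the canonical JSON encoding are assumptions; only the SHA-256-to-32-bytes part is stated in the definition above.

```python
import hashlib
import json

# Sketch of the MetaProof snapshot: serialize the key's final state
# deterministically and SHA-256 hash it to 32 bytes. Field names and
# the JSON encoding are illustrative assumptions.
def snapshot(token_balance: int, paperwork_usd: float, user_id: str) -> bytes:
    state = json.dumps(
        {"token_balance": token_balance,
         "paperwork_usd": paperwork_usd,
         "user_id": user_id},
        sort_keys=True, separators=(",", ":"),   # canonical, stable encoding
    ).encode()
    return hashlib.sha256(state).digest()        # exactly 32 bytes
```

Because the encoding is deterministic, the same final state always hashes to the same digest, which is what makes the attestation verifiable after the fact.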
MetaStream — Durable agent-oracled data streams.
TEE (Trusted Execution Environment) — A secure area inside a main processor that guarantees the protection of code and data. It provides hardware-level isolation, ensuring that sensitive operations run in a secure environment that cannot be accessed by the operating system, hypervisor, or other applications.
Tools — External capabilities available to AI models during inference via function calling. Stacknet supports 116+ tools at the network level across categories: web search, code execution, file management, blockchain operations, music analysis, image generation, research agents, and crypto analysis.
Context — The accumulated state of a conversation or task execution session, including message history, tool results, and system prompts.