Constants System Architecture

This document provides a comprehensive overview of the Constants platform architecture, explaining how the system transforms scripts and prompts into governed, sandboxed Tools with run history, artifacts, agent integration, and multi-tenant organization support.

Executive Summary

Constants solves the “Terminal Wall” problem — valuable logic trapped behind environments, dependencies, credentials, CLI syntax, and operational edge cases. The platform transforms any script, prompt, or GitHub repository into a governed Tool with:
  • Verified Execution: Runs real code in isolated sandboxes, not approximations
  • Observable + Replayable: Full run history with logs, artifacts, and reruns
  • Agent-Ready: Callable via MCP, REST API, and Slack with the same contract and governance as the human UI
  • Multi-Tenant: Organizations with roles, quotas, shared credentials, and team management
  • AI-Powered Discovery: Scan GitHub repos to automatically discover and create tools

Core Primitives

| Primitive | Description |
| --- | --- |
| Tool (Worker) | Spec + entrypoint — a standardized execution unit with frontend UI and backend logic |
| Run | Inputs, status, logs, timing, cost, and output artifacts |
| Artifacts | Outputs from runs — files, tables, reports, generated content |
| Credentials | Encrypted secrets with scoped bindings, resolved per-tool at runtime |
| Organization | Multi-tenant workspace with roles, quotas, and shared resources |
| Interface | Same Tool exposed as UI, REST API, MCP endpoint, and Slack bot |

System Architecture Overview

The platform follows a layered architecture with a separate agent worker process:
       Humans                 Agents                  Code               Slack
         │                      │                      │                   │
         ▼                      ▼                      ▼                   ▼
┌─────────────────────────────────────────────────────────────────────────────────┐
│                              Client Layer                                       │
│   ┌──────────────┐  ┌──────────────┐  ┌──────────────┐  ┌──────────────┐       │
│   │   Web UI     │  │   MCP        │  │   REST API   │  │   Slack Bot  │       │
│   │   (React)    │  │   (JSON-RPC) │  │   (V1)       │  │   (Events)   │       │
│   └──────┬───────┘  └──────┬───────┘  └──────┬───────┘  └──────┬───────┘       │
└──────────┼─────────────────┼─────────────────┼─────────────────┼────────────────┘
           │                 │                 │                 │
           ▼                 ▼                 ▼                 ▼
┌─────────────────────────────────────────────────────────────────────────────────┐
│                    API Layer (Next.js App Router, ~78 routes)                    │
│   ┌────────────┐ ┌────────────┐ ┌────────────┐ ┌────────────┐ ┌──────────────┐ │
│   │  Workers   │ │  Agent     │ │  GitHub    │ │  Orgs &    │ │  Proxy       │ │
│   │  CRUD+Run  │ │  Chat      │ │  Scan      │ │  Auth      │ │  APIs        │ │
│   └─────┬──────┘ └─────┬──────┘ └─────┬──────┘ └─────┬──────┘ └──────┬───────┘ │
└─────────┼──────────────┼──────────────┼──────────────┼───────────────┼──────────┘
          │              │              │              │               │
          ▼              ▼              ▼              ▼               ▼
┌─────────────────────────────────────────────────────────────────────────────────┐
│                            Services Layer                                        │
│   ┌────────────┐ ┌────────────┐ ┌────────────┐ ┌────────────┐ ┌──────────────┐ │
│   │  Worker    │ │  Agent     │ │  Credential│ │  Skills    │ │  Generation  │ │
│   │  Service   │ │  Worker    │ │  Vault     │ │  Loader    │ │  Pipeline    │ │
│   └─────┬──────┘ └─────┬──────┘ └─────┬──────┘ └─────┬──────┘ └──────┬───────┘ │
└─────────┼──────────────┼──────────────┼──────────────┼───────────────┼──────────┘
          │              │              │              │               │
          ▼              ▼              ▼              ▼               ▼
┌─────────────────────────────────────────────────────────────────────────────────┐
│                          Infrastructure Layer                                    │
│   ┌────────────┐ ┌────────────┐ ┌────────────┐ ┌────────────┐ ┌──────────────┐ │
│   │ PostgreSQL │ │ E2B        │ │ Supabase   │ │ Anthropic  │ │ External     │ │
│   │ (Drizzle)  │ │ Sandboxes  │ │ Auth+Store │ │ Claude API │ │ APIs         │ │
│   └────────────┘ └────────────┘ └────────────┘ └────────────┘ └──────────────┘ │
└─────────────────────────────────────────────────────────────────────────────────┘

┌─────────────────────────────────────────────────────────────────────────────────┐
│  Agent Worker Process (standalone, polls DB)                                     │
│  Claude Agent SDK → MCP tool execution → event streaming → result persistence   │
└─────────────────────────────────────────────────────────────────────────────────┘

Worker Generation Pipeline

Workers are generated through a Claude Agent SDK pipeline running inside an E2B sandbox. The process is orchestrated by WorkerGenerationService and executed by runner-claude.js.

Phase 1: Interface Schema Generation

The first phase uses a Claude agent to analyze the user’s request and define the contract between frontend and backend.

Input: User prompt, optional project files (uploaded or from GitHub), optional existing credentials

Output: WorkerInterfaceSchema containing:
  • inputSchema: Typed field definitions (string, number, file, directory, array) with validation
  • outputSchema: Output field definitions (string, number, file, json) for result rendering
  • requiredCredentials: Cloud or custom credentials needed at runtime
  • computeTier: Resource allocation (cpu-2, cpu-4, cpu-8, gpu)
  • pythonDependencies: Additional packages beyond the base template
  • title and summary: Human-readable metadata
  • actions: Interactive follow-up actions (for stateful tools)
The interface schema is the single source of truth — the same contract governs the web UI, MCP tool definitions, REST API, and Slack execution.

Project Exploration: When project files are provided (via upload or GitHub import), the Claude agent explores the codebase autonomously using Read, Grep, and Glob tools to identify entry points, extract schemas, and map required credentials.

Prompt: src/lib/generation/prompts/interface-schema.md

Phase 2: Parallel Component Generation

Once the interface schema is defined, frontend and backend are generated concurrently using forkSession from the same Claude session context:

Frontend Generation

Generates a self-contained HTML/CSS/JS tool UI:
  • HTML Structure: Form inputs matching the inputSchema, including file pickers and credential selectors
  • CSS Styling: Consistent design system with responsive layout
  • JavaScript Logic: Form validation, file handling, result display
  • Built-in components: Access to ConstantsUI library (CredentialPicker, DataTable, Chart, FilePicker, FileDownload, JsonViewer, StatCards)
The frontend runs in an iframe and communicates with the parent frame via postMessage:
  • Sends WORKER_RUN with collected input data
  • Receives WORKER_RESULT with execution output
  • Delegates file uploads to parent via WORKER_UPLOAD
Prompt: src/lib/generation/prompts/frontend.md

Backend Generation

Generates Python code that:
  • Reads input as JSON from stdin
  • Processes data using available skills (via from constants_utils import *)
  • Prints output as JSON to stdout
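A backend honoring this stdin/stdout contract can be sketched in a few lines. The field names (query, results, processedCount) are illustrative, borrowed from the interface-schema example later in this document:

```python
import json
import sys


def run(payload: dict) -> dict:
    """Hypothetical backend body: split the query and report a count.

    Real generated workers would call skills here, e.g.
    `from constants_utils import *`.
    """
    terms = payload["query"].split()
    return {"results": terms, "processedCount": len(terms)}


def main() -> None:
    # Contract: input JSON arrives on stdin, output JSON goes to stdout.
    print(json.dumps(run(json.load(sys.stdin))))
```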
The Claude agent generates backend code with access to:
  • Skill documentation: Relevant skills loaded based on credential requirements and detected keywords
  • Project files: If provided, the agent can read and analyze existing code
  • Interface schema: Ensures generated code adheres to the input/output contract
Prompt: src/lib/generation/prompts/backend.md

Phase 3: Validation & Persistence

After generation completes:
  1. Frontend compiles JSX via esbuild (if used)
  2. Components are validated for correctness
  3. Results submitted to /api/workers/[id]/result
  4. Worker persisted with status “running”
  5. Conversation history updated

Edit Mode

Existing workers can be modified via conversation. Edit mode (IS_EDIT=1) gives the Claude agent access to the existing interface schema, backend code, and frontend code as editable files, with full Read/Write/Edit/Bash tools.

Security Model

The sandbox never holds raw API keys:
  • Receives a signed worker token (createWorkerRunToken)
  • Anthropic calls routed through /api/proxy/claude-sdk with token validation
  • Token scoped to specific user/org/worker context

Agent System

Architecture

The agent system uses a split app/worker model to handle long-running LLM + tool execution loops outside of serverless timeouts.

Web App (/api/agent/chat): Handles authentication, creates conversation/message records, enqueues an agent_run in the database, and returns immediately.

Agent Worker (src/agent-worker/index.ts): A standalone Node.js process that polls the agent_runs table, claims pending runs, and executes them using the Claude Agent SDK. Supports up to 5 concurrent runs.

Provider Abstraction

src/lib/agent/provider.ts defines a provider-agnostic AgentProvider interface that returns AsyncIterable<AgentStreamEvent>. The Claude implementation (src/lib/agent/providers/claude-agent-sdk.ts) uses:
  • Model: claude-sonnet-4-6
  • Max turns: 25
  • Conversation history embedded as XML blocks in the system prompt
  • Each MCP tool mapped to a Zod-shaped SDK tool

Tool Execution

Agents use the same tool surface as MCP:
  • listUserTools discovers available workers
  • executeWorkerTool runs workers in E2B sandboxes
  • Results formatted via format-result.ts

File Handling

Chat attachments can be auto-mapped to worker file input fields:
  • MIME type matching against accept patterns in the interface schema
  • Files stored via addWorkerFile + blob storage
  • If required file fields are missing, tool execution returns needsInput with missingFields metadata

Caller Types

| Caller | Source | Behavior |
| --- | --- | --- |
| agent | Agent chat | Full multi-turn conversation with tool use |
| mcp | MCP endpoint | Single tool execution, result on agent_runs |
| ui | Web UI | Single tool execution triggered from UI |
| slack | Slack bot | Agent run with Slack thread history as context |

Event Streaming

Agent events are persisted to the agent_events table and consumed by the client via SSE for real-time streaming of text deltas, tool calls, and completion status.

Sandbox Execution Model

E2B Sandbox Architecture

All backend code runs in isolated E2B sandboxes.

Base Template: Custom constants-sandbox template with pre-installed packages:
  • Python 3.11+ with scientific computing stack (pandas, numpy, scipy)
  • Cloud SDKs (google-cloud-*, boto3)
  • Media tools (ffmpeg, Pillow)
  • Build tools (gcloud, gsutil)
Compute Tiers:
| Tier | Use Case |
| --- | --- |
| cpu-2 | Light operations, simple scripts |
| cpu-4 | Moderate computation (default) |
| cpu-8 | Heavy computation, video processing |
| gpu | ML inference, image generation |

Execution Flow

  1. Sandbox initialization: Create isolated sandbox with the specified compute tier
  2. Skills installation: Bundle skill utilities as constants_utils package
  3. File setup:
    • User uploads → /home/user/inputs/{fieldName}/
    • Output directory → /home/user/outputs/
    • Project files → /home/user/project/
    • Input data → /home/user/input.json
  4. Environment configuration:
    • Platform API URL and signed worker token
    • Cloud credentials (written to files, env vars set)
    • Storage configuration
  5. Code execution: python -u main.py < /home/user/input.json
  6. Result extraction:
    • Parse JSON from stdout
    • Collect output files from /home/user/outputs/
    • Record uploaded files
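Step 6 depends on pulling a JSON document out of whatever the worker printed. A tolerant sketch, assuming the final non-empty stdout line carries the result (a simplification of the platform's actual extraction):

```python
import json


def extract_result(stdout: str) -> dict:
    """Parse a worker's stdout, ignoring log lines printed before the JSON.

    Assumes the last non-empty line is the JSON result object.
    """
    lines = [ln for ln in stdout.strip().splitlines() if ln.strip()]
    if not lines:
        raise ValueError("worker produced no output")
    return json.loads(lines[-1])
```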

Security Boundaries

Network isolation: The sandbox cannot access internal networks or other sandboxes.

Credential protection: Raw API keys never reach the sandbox. Instead:
  • Sandbox receives a signed worker token
  • Token authorizes calls to platform proxy APIs
  • Proxy validates token and makes actual API calls server-side
Resource limits: Configurable timeouts and resource quotas per compute tier

Skills System

Architecture

Skills are pre-built Python utilities that provide common capabilities to worker backends. The system uses dynamic discovery and on-demand loading.

Available Skills

| Skill | Key Functions | Description |
| --- | --- | --- |
| search | google_search() | Web and image search |
| scrape | scrape_website() | Web page scraping |
| llm | ask_llm(), extract_json() | LLM API access (Claude, GPT) |
| gcp-storage | get_gcs_client() | Google Cloud Storage |
| gcp-firestore | get_firestore_client() | Firestore operations |
| gcp-bigquery | get_bigquery_client() | BigQuery queries |
| gcp-spanner | get_spanner_client() | Cloud Spanner operations |
| aws-s3 | get_s3_client() | S3 operations |
| files | upload_file() | Content-addressed file uploads |
| image | generate_image() | Image generation |
| asr | transcribe_audio() | Speech-to-text |
| tts | text_to_speech() | Text-to-speech |
| pdf | Various | PDF processing |
| media | Various | FFmpeg video/audio processing |
| linkedin | Various | LinkedIn data operations |
| youtube-download | Various | YouTube video/audio download |

Loading Strategy

Three-level loading:
  1. Manifest level: Brief summary of all skills always included in generation prompts
  2. Documentation level: Full SKILL.md loaded based on:
    • Credential requirements matching interface schema
    • Trigger keywords detected in user request
    • Services identified from project analysis
  3. Shared references: Authentication guides (_shared/GCP_AUTH.md, _shared/AWS_AUTH.md) loaded when relevant skills are selected
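The level-2 selection can be sketched as a filter over the skill manifest; the manifest shape used here is an assumption for illustration:

```python
def select_skills(manifest: dict[str, dict], required_credentials: list[str],
                  prompt: str) -> set[str]:
    """Pick which full SKILL.md docs to load for a generation request.

    `manifest` maps skill name -> {"credentials": [...], "keywords": [...]}.
    """
    text = prompt.lower()
    selected = set()
    for name, meta in manifest.items():
        # Credential requirements in the interface schema win first...
        if any(c in required_credentials for c in meta.get("credentials", [])):
            selected.add(name)
        # ...then trigger keywords detected in the user request.
        elif any(kw in text for kw in meta.get("keywords", [])):
            selected.add(name)
    return selected
```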

Runtime Installation

At execution time, the skill loader:
  1. Discovers all Python files from skill directories
  2. Parses module-level function exports
  3. Generates __init__.py with all exports
  4. Writes to /home/user/constants_utils/ in sandbox
  5. Worker code imports: from constants_utils import *
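Steps 2 and 3 amount to static analysis plus code generation. A minimal sketch using Python's ast module (helper names are hypothetical):

```python
import ast


def public_functions(source: str) -> list[str]:
    """List module-level function names a generated __init__.py would re-export."""
    tree = ast.parse(source)
    return [
        node.name
        for node in tree.body
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef))
        and not node.name.startswith("_")  # skip private helpers
    ]


def render_init(modules: dict[str, str]) -> str:
    """Render an __init__.py importing every public function from each skill module."""
    lines = []
    for module, source in sorted(modules.items()):
        names = public_functions(source)
        if names:
            lines.append(f"from .{module} import {', '.join(sorted(names))}")
    return "\n".join(lines) + "\n"
```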

Credentials System

Architecture

The credentials system provides an encrypted vault with scoped bindings for secure secret management.

Encryption: AES-256-GCM with a 32-byte key (CREDENTIAL_ENCRYPTION_KEY). Values are encrypted at rest; metadata (e.g., GCP project ID, AWS key prefix) is stored unencrypted for UI display.

Ownership: Each credential belongs to either a user or an organization (ownerType).

Binding & Resolution

When a tool runs, its requiredCredentials are resolved through a priority chain:
  1. Organization binding — Credential explicitly bound to the tool at org level
  2. Personal binding — Credential bound by the running user
  3. Auto-match — If exactly one stored credential matches the required type
  4. Runtime paste — User pastes a credential value at run time
  5. Unresolved — Warning logged, field left empty

Dual Channel Output

  • Cloud credentials (GCP service account, AWS access key, Azure): Written to files and environment variables in the sandbox
  • Custom credentials (API keys, tokens): Merged into input.json under a credentials key
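Splitting resolved credentials across the two channels might look like this sketch; the kind discriminator and env_var field are illustrative assumptions:

```python
def build_sandbox_inputs(input_data: dict, resolved: dict) -> tuple[dict, dict]:
    """Route each resolved credential to its delivery channel.

    Cloud credentials become environment variables (and, in the real system,
    files); custom ones are merged into input.json under a `credentials` key.
    """
    env: dict = {}
    custom: dict = {}
    for key, cred in resolved.items():
        if cred["kind"] == "cloud":
            env[cred["env_var"]] = cred["value"]  # e.g. GOOGLE_APPLICATION_CREDENTIALS
        else:
            custom[key] = cred["value"]
    payload = dict(input_data)
    if custom:
        payload["credentials"] = custom
    return payload, env
```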

MCP Server Integration

Protocol Implementation

Constants implements the Model Context Protocol (MCP) to expose workers as tools for AI agents.

Supported methods:
  • initialize — Returns server capabilities
  • tools/list — Lists available workers as tools (owned + shared)
  • tools/call — Executes a worker tool
  • ping — Keep-alive
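A stripped-down dispatcher for these four methods, with authentication, access checks, and run logging omitted:

```python
def handle_mcp(request: dict, tools: dict) -> dict:
    """Dispatch the four supported JSON-RPC methods.

    `tools` maps tool name -> {"summary": str, "run": callable}; the shape
    is an assumption for this sketch.
    """
    method = request.get("method")
    rid = request.get("id")
    if method == "initialize":
        result = {"capabilities": {"tools": {}}}
    elif method == "tools/list":
        result = {"tools": [{"name": name, "description": tool["summary"]}
                            for name, tool in tools.items()]}
    elif method == "tools/call":
        params = request["params"]
        result = {"content": tools[params["name"]]["run"](params.get("arguments", {}))}
    elif method == "ping":
        result = {}
    else:
        return {"jsonrpc": "2.0", "id": rid,
                "error": {"code": -32601, "message": "Method not found"}}
    return {"jsonrpc": "2.0", "id": rid, "result": result}
```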

Tool Conversion

Workers are converted to MCP tools with:
  • Sanitized tool name (title + ID suffix)
  • Description from worker summary
  • Input schema derived from the worker’s interface schema
  • File fields excluded (not supported in MCP context)

Execution Flow

  1. Agent sends tools/call with tool name and arguments
  2. MCP server authenticates via API key
  3. Worker identified by tool name suffix
  4. Access verified (ownership or sharing)
  5. Executed in E2B sandbox (same path as UI)
  6. Run logged with triggeredBy: "agent", triggerSource: "mcp"
  7. Result returned to agent

GitHub Scanning

Architecture

Repository scanning uses a Claude Agent SDK pipeline running in an E2B sandbox to discover tool candidates:

Scan Flow

  1. Initiation: User triggers scan via /api/github/scan with repo URL
  2. Sandbox setup: E2B sandbox created with scanner-runner.js and repo contents (zip download)
  3. AI analysis: Claude agent with read-only tools (Read, Grep, Glob) explores the codebase
  4. Candidate discovery: Agent outputs structured JSON with candidates (name, description, category, score)
  5. Progress streaming: Events posted to scan_events table via callbacks
  6. Review: User reviews candidates in the UI (ScanProgressBanner, ConnectCodebaseView)
  7. Bulk creation: Selected candidates created as workers via WorkerGenerationService

Security

Same proxy + token pattern as generation:
  • Sandbox receives a scan token (createScanToken)
  • Anthropic calls routed through /api/proxy/claude-sdk
  • Token scoped to specific user/org/scan context

Slack Integration

Architecture

The Slack bot enables tool discovery and execution from Slack.

OAuth flow: /api/slack/connect → Slack OAuth → /api/slack/callback → store slackConnections with bot token and API key.

Event handling (/api/slack/events/route.ts):
  1. Verify Slack signature (HMAC-SHA256, 5-minute timestamp skew)
  2. Handle url_verification challenge
  3. On app_mention: resolve connection, post “Working on it…” message, collect thread history (last 50 messages), create agent_run with Slack callback metadata
  4. Agent worker processes the run, updates the Slack message with results via Block Kit formatting

Queue-Based Processing

The Slack HTTP handler stays fast, acknowledging within Slack's 3-second window. Heavy work runs in the agent worker process, which:
  • Claims the Slack-origin agent_run
  • Runs Claude Agent SDK with thread history as context
  • Posts results back to Slack via chat.update or postMessage

Interactive Sessions

Architecture

Interactive workers maintain a running E2B sandbox for stateful, multi-step interactions.

Session tracking: The worker_sessions table tracks active sandboxes per user/worker with status and lastActiveAt.

HTML assembly (src/lib/interactive-base/): The interactive UI is assembled from:
  • React + ReactDOM UMD bundles
  • ConstantsUI component bundle (DataTable, Chart, JsonViewer, FilePicker, etc.)
  • Base HTML/CSS/JS shell with loading overlay and error handling
  • LLM-generated tool JS and CSS
Follow-up actions: POST /api/workers/[id]/interact sends actions to the same running sandbox, with log streaming from server.log

Organizations

Architecture

Multi-tenant workspace system for team collaboration:
  • Personal org: Auto-created for each user on first API call
  • Team org: Created explicitly with name, slug, avatar
  • Roles: owner, admin, member, viewer — enforced via withOrgAccess middleware
  • Quotas: Per-org limits on LLM, search, scrape, image, ASR, TTS usage with tier-based defaults and custom overrides
  • Invite flow: Email-based invites with pending status, auto-accepted on login
The x-organization-id header is used across API routes to scope operations to the correct organization context.

V1 REST API

Architecture

External REST API for programmatic tool access, authenticated via API keys with scoped permissions:
| Endpoint | Method | Scope | Description |
| --- | --- | --- | --- |
| /v1/tools | GET | mcp:read | List available tools with schemas |
| /v1/run/[toolName] | POST | mcp:execute | Execute a tool (waits up to 50s for result) |
| /v1/skill/[toolName] | GET | mcp:read | Download skill documentation as markdown |
| /v1/skills/search | GET | | Search available skills |
| /v1/upload | POST | | Upload files for tool execution |
The V1 API uses the same execution path as MCP (callerType: "mcp"), so one worker implementation serves UI, MCP, V1, and Slack.
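A client-side sketch of assembling the run call; the header names and bearer auth scheme are assumptions to verify against the actual API reference:

```python
def build_run_request(base_url: str, api_key: str, tool_name: str,
                      arguments: dict) -> dict:
    """Assemble the HTTP call for POST /v1/run/[toolName].

    Assumes an API key with mcp:execute scope passed as a bearer token.
    """
    return {
        "method": "POST",
        "url": f"{base_url}/v1/run/{tool_name}",
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "json": arguments,  # tool inputs per the worker's inputSchema
    }
```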

Frontend/Backend Communication

The Interface Schema Contract

The WorkerInterfaceSchema (defined in src/types/worker-interface.ts using Zod) serves as the definitive contract:
```js
{
  inputSchema: {
    type: "object",
    properties: [
      { name: "query", type: "string", description: "Search query", accept: null, multiple: null },
      { name: "videoFile", type: "file", description: "Video to process", accept: [".mp4", ".mov"], multiple: false }
    ],
    required: ["query"]
  },
  outputSchema: {
    type: "object",
    properties: [
      { name: "results", type: "json", description: "Search results" },
      { name: "processedCount", type: "number", description: "Number processed" }
    ]
  },
  requiredCredentials: [
    { key: "gcp_service_account", label: "GCP Service Account", type: "textarea", instructions: "..." }
  ],
  computeTier: "cpu-4",
  pythonDependencies: { packages: ["custom-lib==1.0"] }
}
```

postMessage Protocol

Frontend → Platform (WORKER_RUN):

```js
{
  type: 'WORKER_RUN',
  payload: {
    query: "search term",
    videoFile: "/path/injected/by/platform",
    credentials: { gcp_service_account: "..." }
  }
}
```

Platform → Frontend (WORKER_RESULT):

```js
{
  type: 'WORKER_RESULT',
  payload: {
    results: [...],
    processedCount: 42
  }
}
```

Data Flow Examples

Worker Creation

User Prompt (+ optional files/GitHub URL)
        ▼
Phase 1: Interface Schema Generation (Claude Agent SDK)
        ▼
Phase 2: Parallel Frontend + Backend Generation (forkSession)
        ▼
Phase 3: Validation, Compilation, Persistence
        ▼
Worker Ready (status: "running")

Worker Execution (UI)

Frontend iframe: WORKER_RUN postMessage
        ▼
API: /api/workers/[id]/run
        ▼
Resolve credentials (vault bindings + auto-match)
        ▼
Download input files from blob storage
        ▼
E2B Sandbox: execute python with skills + credentials
        ▼
Extract results + upload output files
        ▼
Database: log run, store outputs
        ▼
Frontend iframe: WORKER_RESULT postMessage

Worker Execution (Agent / MCP / V1)

Agent/MCP/V1: tool call with arguments
        ▼
Authenticate (API key or agent context)
        ▼
Resolve worker by tool name
        ▼
Verify access (ownership, sharing, or featured)
        ▼
Resolve credentials + execute in E2B Sandbox
        ▼
Log run (triggeredBy, triggerSource, callerType)
        ▼
Return result to caller

GitHub Repo Scan

User: connect GitHub + trigger scan
        ▼
Create repoScans record + E2B sandbox
        ▼
scanner-runner.js: Claude Agent SDK explores repo (read-only)
        ▼
Stream progress events → scan_events table
        ▼
Output: candidate list with names, descriptions, scores
        ▼
User reviews candidates in UI
        ▼
Bulk create: WorkerGenerationService per selected candidate

Security Boundaries

  1. Sandbox isolation: No access to host system, controlled network, fresh environment per run
  2. Proxy pattern: Raw API keys never reach sandboxes — signed tokens + platform proxy for all external calls
  3. Credential encryption: AES-256-GCM at rest, scoped bindings, resolution chain with audit trail
  4. API key security: Hashed storage, scoped permissions (mcp:read, mcp:execute), revocation support
  5. Access control: Ownership + sharing + organization role verification on every operation
  6. Audit trail: All runs logged with full context (caller type, trigger source, credential sources, input summary)
  7. Slack verification: HMAC-SHA256 signature verification with timestamp skew protection
  8. Scan tokens: Scoped, short-lived tokens for GitHub scan sandbox operations

Database Schema

Table Groups

Core: workers, workerConversations, workerFiles, workerRunLogs, runLogLines, workerEvents, runFiles, workerData, workerSessions, notifications

Organizations: organizations, organizationMembers, organizationQuotas

Credentials: credentials (encrypted vault), workerCredentialBindings

Agent: agentConversations, agentMessages, agentRuns, agentEvents

GitHub: githubConnections, githubRepoHistory, repoScans, scanCandidates, scanEvents

Slack: slackConnections

Access & Usage: apiKeys, workerShares, apiUsage, userPreferences

Storage: fileBlobs (content-addressed, SHA256 hash as primary key)