Constants System Architecture
This document provides a comprehensive overview of the Constants platform architecture, explaining how the system transforms scripts and prompts into governed, sandboxed Tools with run history, artifacts, agent integration, and multi-tenant organization support.
Executive Summary
Constants solves the “Terminal Wall” problem — valuable logic trapped behind environments, dependencies, credentials, CLI syntax, and operational edge cases. The platform transforms any script, prompt, or GitHub repository into a governed Tool with:
- Verified Execution: Runs real code in isolated sandboxes, not approximations
- Observable + Replayable: Full run history with logs, artifacts, and reruns
- Agent-Ready: Callable via MCP, REST API, and Slack with the same contract and governance as the human UI
- Multi-Tenant: Organizations with roles, quotas, shared credentials, and team management
- AI-Powered Discovery: Scan GitHub repos to automatically discover and create tools
Core Primitives
| Primitive | Description |
|---|---|
| Tool (Worker) | Spec + entrypoint — a standardized execution unit with frontend UI and backend logic |
| Run | Inputs, status, logs, timing, cost, and output artifacts |
| Artifacts | Outputs from runs — files, tables, reports, generated content |
| Credentials | Encrypted secrets with scoped bindings, resolved per-tool at runtime |
| Organization | Multi-tenant workspace with roles, quotas, and shared resources |
| Interface | Same Tool exposed as UI, REST API, MCP endpoint, and Slack bot |
System Architecture Overview
The platform follows a layered architecture with a separate agent worker process.
Worker Generation Pipeline
Workers are generated through a Claude Agent SDK pipeline running inside an E2B sandbox. The process is orchestrated by WorkerGenerationService and executed by runner-claude.js.
Phase 1: Interface Schema Generation
The first phase uses a Claude agent to analyze the user’s request and define the contract between frontend and backend:
Input: User prompt, optional project files (uploaded or from GitHub), optional existing credentials
Output: WorkerInterfaceSchema containing:
- inputSchema: Typed field definitions (string, number, file, directory, array) with validation
- outputSchema: Output field definitions (string, number, file, json) for result rendering
- requiredCredentials: Cloud or custom credentials needed at runtime
- computeTier: Resource allocation (cpu-2, cpu-4, cpu-8, gpu)
- pythonDependencies: Additional packages beyond the base template
- title and summary: Human-readable metadata
- actions: Interactive follow-up actions (for stateful tools)
src/lib/generation/prompts/interface-schema.md
Phase 2: Parallel Component Generation
Once the interface schema is defined, frontend and backend are generated concurrently using forkSession from the same Claude session context:
Frontend Generation
Generates a self-contained HTML/CSS/JS tool UI:
- HTML Structure: Form inputs matching the inputSchema, including file pickers and credential selectors
- CSS Styling: Consistent design system with responsive layout
- JavaScript Logic: Form validation, file handling, result display
- Built-in components: Access to the ConstantsUI library (CredentialPicker, DataTable, Chart, FilePicker, FileDownload, JsonViewer, StatCards)
- Sends WORKER_RUN with collected input data
- Receives WORKER_RESULT with execution output
- Delegates file uploads to the parent via WORKER_UPLOAD
src/lib/generation/prompts/frontend.md
Backend Generation
Generates Python code that:
- Reads input as JSON from stdin
- Processes data using available skills (via from constants_utils import *)
- Prints output as JSON to stdout
Context provided to the agent:
- Skill documentation: Relevant skills loaded based on credential requirements and detected keywords
- Project files: If provided, the agent can read and analyze existing code
- Interface schema: Ensures generated code adheres to the input/output contract
src/lib/generation/prompts/backend.md
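The stdin/stdout contract above can be sketched as a minimal main.py. This is an illustrative stand-in, not generated output: the greeting logic is hypothetical, and the real worker would call skill functions instead.

```python
import json
import sys

def run(payload: dict) -> dict:
    # Hypothetical worker logic; a real backend calls skills here, e.g.:
    #   from constants_utils import *   # installed into the sandbox at runtime
    name = payload.get("name", "world")
    return {"greeting": f"Hello, {name}!"}

def main(stdin=sys.stdin, stdout=sys.stdout) -> None:
    # The platform invokes the script as: python -u main.py < /home/user/input.json
    payload = json.load(stdin)        # read typed inputs as JSON from stdin
    json.dump(run(payload), stdout)   # print output fields as JSON to stdout
```

Separating `run` from `main` keeps the JSON plumbing at the edges, which is also how the interface schema contract stays testable.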
Phase 3: Validation & Persistence
After generation completes:
- Frontend compiles JSX via esbuild (if used)
- Components are validated for correctness
- Results submitted to /api/workers/[id]/result
- Worker persisted with status “running”
- Conversation history updated
Edit Mode
Existing workers can be modified via conversation. Edit mode (IS_EDIT=1) gives the Claude agent access to the existing interface schema, backend code, and frontend code as editable files, with full Read/Write/Edit/Bash tools.
Security Model
The sandbox never holds raw API keys:
- Receives a signed worker token (createWorkerRunToken)
- Anthropic calls routed through /api/proxy/claude-sdk with token validation
- Token scoped to specific user/org/worker context
Agent System
Architecture
The agent system uses a split app/worker model to handle long-running LLM + tool execution loops outside of serverless timeouts.
Web App (/api/agent/chat): Handles authentication, creates conversation/message records, enqueues an agent_run in the database, and returns immediately.
Agent Worker (src/agent-worker/index.ts): Standalone Node.js process that polls the agent_runs table, claims pending runs, and executes them using the Claude Agent SDK. Supports up to 5 concurrent runs.
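The claim step of that polling loop can be sketched like this. The real worker is Node.js; this Python/SQLite version only illustrates the pattern, and the column names are assumptions.

```python
import sqlite3

MAX_CONCURRENT = 5  # matches the worker's concurrency cap

def claim_next_run(db: sqlite3.Connection, worker_id: str):
    """Atomically claim one pending agent_run (illustrative schema).

    The UPDATE is guarded by `status = 'pending'`, so two pollers racing for
    the same row cannot both win the claim.
    """
    with db:  # run the select+update inside one transaction
        row = db.execute(
            "SELECT id FROM agent_runs WHERE status = 'pending' ORDER BY id LIMIT 1"
        ).fetchone()
        if row is None:
            return None  # nothing to do; poller sleeps and retries
        cur = db.execute(
            "UPDATE agent_runs SET status = 'claimed', claimed_by = ? "
            "WHERE id = ? AND status = 'pending'",
            (worker_id, row[0]),
        )
        return row[0] if cur.rowcount == 1 else None
```

A run that is claimed but never completed would need a lease/timeout mechanism on top of this, which is omitted here.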
Provider Abstraction
src/lib/agent/provider.ts defines a provider-agnostic AgentProvider interface that returns AsyncIterable<AgentStreamEvent>. The Claude implementation (src/lib/agent/providers/claude-agent-sdk.ts) uses:
- Model: claude-sonnet-4-6
- Max turns: 25
- Conversation history embedded as XML blocks in the system prompt
- Each MCP tool mapped to a Zod-shaped SDK tool
Tool Execution
Agents use the same tool surface as MCP:
- listUserTools discovers available workers
- executeWorkerTool runs workers in E2B sandboxes
- Results formatted via format-result.ts
File Handling
Chat attachments can be auto-mapped to worker file input fields:
- MIME type matching against accept patterns in the interface schema
- Files stored via addWorkerFile + blob storage
- If required file fields are missing, tool execution returns needsInput with missingFields metadata
Caller Types
| Caller | Source | Behavior |
|---|---|---|
| agent | Agent chat | Full multi-turn conversation with tool use |
| mcp | MCP endpoint | Single tool execution, result on agent_runs |
| ui | Web UI | Single tool execution triggered from UI |
| slack | Slack bot | Agent run with Slack thread history as context |
Event Streaming
Agent events are persisted to the agent_events table and consumed by the client via SSE for real-time streaming of text deltas, tool calls, and completion status.
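The client-side consumption can be sketched with a minimal SSE parser. The event names in the test are illustrative only; the actual event vocabulary lives in the agent worker.

```python
def parse_sse(stream: str) -> list[tuple[str, str]]:
    """Parse a Server-Sent Events text stream into (event, data) pairs.

    Follows the SSE framing rules: `event:` sets the event name, `data:`
    lines accumulate, and a blank line terminates one event.
    """
    events = []
    event_type, data_lines = "message", []
    for line in stream.splitlines():
        if line.startswith("event:"):
            event_type = line[len("event:"):].strip()
        elif line.startswith("data:"):
            data_lines.append(line[len("data:"):].strip())
        elif line == "" and data_lines:  # blank line ends the current event
            events.append((event_type, "\n".join(data_lines)))
            event_type, data_lines = "message", []
    return events
```

A real client would feed this incrementally from an EventSource or chunked fetch rather than from a complete string.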
Sandbox Execution Model
E2B Sandbox Architecture
All backend code runs in isolated E2B sandboxes.
Base Template: Custom constants-sandbox template with pre-installed packages:
- Python 3.11+ with scientific computing stack (pandas, numpy, scipy)
- Cloud SDKs (google-cloud-*, boto3)
- Media tools (ffmpeg, Pillow)
- Cloud CLIs (gcloud, gsutil)
Compute Tiers
| Tier | Use Case |
|---|---|
| cpu-2 | Light operations, simple scripts |
| cpu-4 | Moderate computation (default) |
| cpu-8 | Heavy computation, video processing |
| gpu | ML inference, image generation |
Execution Flow
- Sandbox initialization: Create isolated sandbox with the specified compute tier
- Skills installation: Bundle skill utilities as constants_utils package
- File setup:
  - User uploads → /home/user/inputs/{fieldName}/
  - Output directory → /home/user/outputs/
  - Project files → /home/user/project/
  - Input data → /home/user/input.json
- Environment configuration:
  - Platform API URL and signed worker token
  - Cloud credentials (written to files, env vars set)
  - Storage configuration
- Code execution: python -u main.py < /home/user/input.json
- Result extraction:
  - Parse JSON from stdout
  - Collect output files from /home/user/outputs/
  - Record uploaded files
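The execute-and-extract steps of this flow can be simulated locally with a subprocess. This is a local stand-in for the E2B sandbox, useful only to illustrate the stdin/stdout/outputs contract.

```python
import json
import subprocess
import sys
import tempfile
from pathlib import Path

def simulate_run(worker_code: str, input_data: dict) -> tuple[dict, list[str]]:
    """Run worker code the way the sandbox does: JSON piped to stdin,
    JSON parsed from stdout, output files collected from outputs/."""
    with tempfile.TemporaryDirectory() as tmp:
        home = Path(tmp)
        (home / "outputs").mkdir()
        (home / "main.py").write_text(worker_code)
        (home / "input.json").write_text(json.dumps(input_data))
        with open(home / "input.json") as stdin:
            proc = subprocess.run(
                [sys.executable, "-u", "main.py"],
                cwd=home, stdin=stdin, capture_output=True, text=True, check=True,
            )
        output = json.loads(proc.stdout)  # result extraction: parse stdout JSON
        files = sorted(p.name for p in (home / "outputs").iterdir())
        return output, files
```

The real platform additionally streams logs, records run metadata, and uploads the collected files to blob storage.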
Security Boundaries
Network isolation: Sandbox cannot access internal networks or other sandboxes
Credential protection: Raw API keys never reach the sandbox. Instead:
- Sandbox receives a signed worker token
- Token authorizes calls to platform proxy APIs
- Proxy validates token and makes actual API calls server-side
Skills System
Architecture
Skills are pre-built Python utilities that provide common capabilities to worker backends. The system uses dynamic discovery and on-demand loading.
Available Skills
| Skill | Key Functions | Description |
|---|---|---|
| search | google_search() | Web and image search |
| scrape | scrape_website() | Web page scraping |
| llm | ask_llm(), extract_json() | LLM API access (Claude, GPT) |
| gcp-storage | get_gcs_client() | Google Cloud Storage |
| gcp-firestore | get_firestore_client() | Firestore operations |
| gcp-bigquery | get_bigquery_client() | BigQuery queries |
| gcp-spanner | get_spanner_client() | Cloud Spanner operations |
| aws-s3 | get_s3_client() | S3 operations |
| files | upload_file() | Content-addressed file uploads |
| image | generate_image() | Image generation |
| asr | transcribe_audio() | Speech-to-text |
| tts | text_to_speech() | Text-to-speech |
| | Various | PDF processing |
| media | Various | FFmpeg video/audio processing |
| | Various | LinkedIn data operations |
| youtube-download | Various | YouTube video/audio download |
Loading Strategy
Three-level loading:
- Manifest level: Brief summary of all skills always included in generation prompts
- Documentation level: Full SKILL.md loaded based on:
  - Credential requirements matching interface schema
  - Trigger keywords detected in user request
  - Services identified from project analysis
- Shared references: Authentication guides (_shared/GCP_AUTH.md, _shared/AWS_AUTH.md) loaded when relevant skills are selected
Runtime Installation
At execution time, the skill loader:
- Discovers all Python files from skill directories
- Parses module-level function exports
- Generates __init__.py with all exports
- Writes to /home/user/constants_utils/ in the sandbox
- Worker code imports: from constants_utils import *
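The export-parsing and __init__.py generation steps can be sketched with the ast module. The underscore-prefix filter is an assumption about what counts as a private helper.

```python
import ast

def module_exports(source: str) -> list[str]:
    """Collect module-level function names, skipping underscore-prefixed helpers."""
    tree = ast.parse(source)
    return [node.name for node in tree.body
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef))
            and not node.name.startswith("_")]

def build_init(skill_sources: dict[str, str]) -> str:
    """Generate an __init__.py re-exporting every skill function.

    Maps module name -> source text; in the real loader the sources come
    from the discovered skill directories.
    """
    lines = []
    for module, source in sorted(skill_sources.items()):
        names = module_exports(source)
        if names:
            lines.append(f"from .{module} import {', '.join(names)}")
    return "\n".join(lines) + "\n"
```

Generating the re-exports at runtime is what makes `from constants_utils import *` pick up every skill without the worker knowing module names.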
Credentials System
Architecture
The credentials system provides an encrypted vault with scoped bindings for secure secret management.
Encryption: AES-256-GCM with a 32-byte key (CREDENTIAL_ENCRYPTION_KEY). Values are encrypted at rest; metadata (e.g., GCP project ID, AWS key prefix) is stored unencrypted for UI display.
Ownership: Each credential belongs to either a user or an organization (ownerType).
Binding & Resolution
When a tool runs, its requiredCredentials are resolved through a priority chain:
- Organization binding — Credential explicitly bound to the tool at org level
- Personal binding — Credential bound by the running user
- Auto-match — If exactly one stored credential matches the required type
- Runtime paste — User pastes a credential value at run time
- Unresolved — Warning logged, field left empty
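The priority chain can be expressed as a small resolver. The data shapes here are illustrative; the real resolver works against the credentials and workerCredentialBindings tables.

```python
def resolve_credential(required_type: str, *, org_bindings: dict,
                       user_bindings: dict, stored: list[dict], pasted: dict):
    """Walk the resolution chain for one required credential type,
    returning (credential, source) where source records which rule fired."""
    if required_type in org_bindings:          # 1. organization binding
        return org_bindings[required_type], "org-binding"
    if required_type in user_bindings:         # 2. personal binding
        return user_bindings[required_type], "personal-binding"
    matches = [c for c in stored if c["type"] == required_type]
    if len(matches) == 1:                      # 3. unambiguous auto-match only
        return matches[0]["id"], "auto-match"
    if required_type in pasted:                # 4. runtime paste
        return pasted[required_type], "runtime"
    return None, "unresolved"                  # 5. warn, leave field empty
```

Returning the winning rule alongside the value is what makes the credential-source audit trail possible.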
Dual Channel Output
- Cloud credentials (GCP service account, AWS access key, Azure): Written to files and environment variables in the sandbox
- Custom credentials (API keys, tokens): Merged into input.json under a credentials key
MCP Server Integration
Protocol Implementation
Constants implements the Model Context Protocol (MCP) to expose workers as tools for AI agents.
Supported methods:
- initialize — Returns server capabilities
- tools/list — Lists available workers as tools (owned + shared)
- tools/call — Executes a worker tool
- ping — Keep-alive
Tool Conversion
Workers are converted to MCP tools with:
- Sanitized tool name (title + ID suffix)
- Description from worker summary
- Input schema derived from the worker’s interface schema
- File fields excluded (not supported in MCP context)
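The conversion rules can be sketched as one function. The worker dict field names, the exact sanitization regex, and the 8-character ID suffix are assumptions for illustration.

```python
import re

def worker_to_mcp_tool(worker: dict) -> dict:
    """Convert a worker record into an MCP tool descriptor:
    sanitized name, summary as description, input schema minus file fields."""
    base = re.sub(r"[^a-zA-Z0-9_]+", "_", worker["title"]).strip("_").lower()
    name = f"{base}_{worker['id'][:8]}"  # sanitized title + ID suffix
    properties = {
        f["name"]: {"type": f["type"], "description": f.get("description", "")}
        for f in worker["interfaceSchema"]["inputSchema"]
        if f["type"] not in ("file", "directory")  # file fields excluded in MCP
    }
    return {
        "name": name,
        "description": worker.get("summary", ""),
        "inputSchema": {"type": "object", "properties": properties},
    }
```

Keeping the ID suffix in the name lets the server recover the worker on tools/call even when two tools share a title.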
Execution Flow
- Agent sends tools/call with tool name and arguments
- MCP server authenticates via API key
- Worker identified by tool name suffix
- Access verified (ownership or sharing)
- Executed in E2B sandbox (same path as UI)
- Run logged with triggeredBy: "agent", triggerSource: "mcp"
- Result returned to agent
GitHub Scanning
Architecture
Repository scanning uses a Claude Agent SDK pipeline running in an E2B sandbox to discover tool candidates.
Scan Flow
- Initiation: User triggers scan via /api/github/scan with repo URL
- Sandbox setup: E2B sandbox created with scanner-runner.js and repo contents (zip download)
- AI analysis: Claude agent with read-only tools (Read, Grep, Glob) explores the codebase
- Candidate discovery: Agent outputs structured JSON with candidates (name, description, category, score)
- Progress streaming: Events posted to scan_events table via callbacks
- Review: User reviews candidates in the UI (ScanProgressBanner, ConnectCodebaseView)
- Bulk creation: Selected candidates created as workers via WorkerGenerationService
Security
Same proxy + token pattern as generation:
- Sandbox receives a scan token (createScanToken)
- Anthropic calls routed through /api/proxy/claude-sdk
- Token scoped to specific user/org/scan context
Slack Integration
Architecture
The Slack bot enables tool discovery and execution from Slack.
OAuth flow: /api/slack/connect → Slack OAuth → /api/slack/callback → store slackConnections with bot token and API key
Event handling (/api/slack/events/route.ts):
- Verify Slack signature (HMAC-SHA256, 5-minute timestamp skew)
- Handle the url_verification challenge
- On app_mention: resolve the connection, post a “Working on it…” message, collect thread history (last 50 messages), create an agent_run with Slack callback metadata
- Agent worker processes the run and updates the Slack message with results via Block Kit formatting
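The signature check in step 1 follows Slack's documented v0 signing scheme, so it can be shown concretely:

```python
import hashlib
import hmac
import time

def verify_slack_signature(signing_secret: str, timestamp: str, body: str,
                           signature: str, max_skew: int = 300) -> bool:
    """Verify Slack's v0 request signature with a 5-minute skew window.

    Signs "v0:{timestamp}:{body}" with HMAC-SHA256 and compares the result
    constant-time against the X-Slack-Signature header value.
    """
    if abs(time.time() - int(timestamp)) > max_skew:
        return False  # reject replays outside the timestamp skew window
    basestring = f"v0:{timestamp}:{body}".encode()
    expected = "v0=" + hmac.new(signing_secret.encode(), basestring,
                                hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)
```

The constant-time comparison matters: a plain `==` would leak signature bytes through timing.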
Queue-Based Processing
The Slack HTTP handler stays fast (acknowledges within 3 seconds). Heavy work runs in the agent worker process, which:
- Claims the Slack-origin agent_run
- Runs the Claude Agent SDK with thread history as context
- Posts results back to Slack via chat.update or postMessage
Interactive Sessions
Architecture
Interactive workers maintain a running E2B sandbox for stateful, multi-step interactions.
Session tracking: worker_sessions table tracks active sandboxes per user/worker with status and lastActiveAt
HTML assembly (src/lib/interactive-base/): The interactive UI is assembled from:
- React + ReactDOM UMD bundles
- ConstantsUI component bundle (DataTable, Chart, JsonViewer, FilePicker, etc.)
- Base HTML/CSS/JS shell with loading overlay and error handling
- LLM-generated tool JS and CSS
POST /api/workers/[id]/interact sends actions to the same running sandbox, with log streaming from server.log
Organizations
Architecture
Multi-tenant workspace system for team collaboration:
- Personal org: Auto-created for each user on first API call
- Team org: Created explicitly with name, slug, avatar
- Roles: owner, admin, member, viewer — enforced via withOrgAccess middleware
- Quotas: Per-org limits on LLM, search, scrape, image, ASR, TTS usage with tier-based defaults and custom overrides
- Invite flow: Email-based invites with pending status, auto-accepted on login
The x-organization-id header is used across API routes to scope operations to the correct organization context.
V1 REST API
Architecture
External REST API for programmatic tool access, authenticated via API keys with scoped permissions:
| Endpoint | Method | Scope | Description |
|---|---|---|---|
| /v1/tools | GET | mcp:read | List available tools with schemas |
| /v1/run/[toolName] | POST | mcp:execute | Execute a tool (waits up to 50s for result) |
| /v1/skill/[toolName] | GET | mcp:read | Download skill documentation as markdown |
| /v1/skills/search | GET | — | Search available skills |
| /v1/upload | POST | — | Upload files for tool execution |
V1 executions reuse the same worker execution path as MCP (callerType: "mcp"), so one worker implementation serves UI, MCP, V1, and Slack.
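A minimal client for the run endpoint can be sketched by building the request. The base URL is a placeholder and the Bearer auth header and JSON body shape are assumptions about the V1 API.

```python
import json
from urllib import request

API_BASE = "https://constants.example"  # placeholder host (assumption)

def build_run_request(tool_name: str, api_key: str, arguments: dict) -> request.Request:
    """Build (but do not send) a POST /v1/run/[toolName] request.

    Real callers should set a generous timeout when sending, since the
    endpoint waits up to 50s for a result.
    """
    return request.Request(
        url=f"{API_BASE}/v1/run/{tool_name}",
        data=json.dumps({"arguments": arguments}).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",  # key needs the mcp:execute scope
            "Content-Type": "application/json",
        },
        method="POST",
    )
```

Sending would be `urllib.request.urlopen(req, timeout=60)`; the builder is separated out so the request shape is testable without network access.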
Frontend/Backend Communication
The Interface Schema Contract
The WorkerInterfaceSchema (defined in src/types/worker-interface.ts using Zod) serves as the definitive contract.
postMessage Protocol
Frontend → Platform: WORKER_RUN
Data Flow Examples
Worker Creation
Worker Execution (UI)
Worker Execution (Agent / MCP / V1)
GitHub Repo Scan
Security Boundaries
- Sandbox isolation: No access to host system, controlled network, fresh environment per run
- Proxy pattern: Raw API keys never reach sandboxes — signed tokens + platform proxy for all external calls
- Credential encryption: AES-256-GCM at rest, scoped bindings, resolution chain with audit trail
- API key security: Hashed storage, scoped permissions (mcp:read, mcp:execute), revocation support
- Access control: Ownership + sharing + organization role verification on every operation
- Audit trail: All runs logged with full context (caller type, trigger source, credential sources, input summary)
- Slack verification: HMAC-SHA256 signature verification with timestamp skew protection
- Scan tokens: Scoped, short-lived tokens for GitHub scan sandbox operations
Database Schema
Table Groups
Core: workers, workerConversations, workerFiles, workerRunLogs, runLogLines, workerEvents, runFiles, workerData, workerSessions, notifications
Organizations: organizations, organizationMembers, organizationQuotas
Credentials: credentials (encrypted vault), workerCredentialBindings
Agent: agentConversations, agentMessages, agentRuns, agentEvents
GitHub: githubConnections, githubRepoHistory, repoScans, scanCandidates, scanEvents
Slack: slackConnections
Access & Usage: apiKeys, workerShares, apiUsage, userPreferences
Storage: fileBlobs (content-addressed, SHA256 hash as primary key)