[HopX by Bunnyshell]

BLAZING-FAST SANDBOXES FOR AI AGENTS

Spin up isolated Linux micro-VMs in milliseconds. Unlimited runtime, secure by default. SDKs for Python, JS/TS, Go, .NET, Java, PHP.

⚡ Millisecond startup · 🛡️ Linux micro-VMs · ♾️ Unlimited runtime

Install the HopX SDK

pip install hopx-ai

Create and use a sandbox

from hopx_ai import Sandbox

# Create sandbox
sandbox = Sandbox.create(
  template="code-interpreter"
)

# Execute code
result = sandbox.run_code(
  "print('Hello, World!')"
)
print(result.stdout)

# Cleanup
sandbox.kill()
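Since sandboxes bill while running, it helps to guarantee `kill()` runs even when your code raises. A minimal context-manager sketch of that pattern; a `FakeSandbox` stub stands in for the real `Sandbox`, whose `create`/`run_code`/`kill` calls hit the HopX service:

```python
from contextlib import contextmanager

@contextmanager
def sandbox_session(create):
    """Create a sandbox and guarantee kill() runs, even on error."""
    sandbox = create()
    try:
        yield sandbox
    finally:
        sandbox.kill()

# Stand-in for a live HopX Sandbox so the pattern runs offline.
class FakeSandbox:
    def __init__(self):
        self.killed = False
    def run_code(self, code):
        # The real SDK returns an execution result with stdout.
        return type("Result", (), {"stdout": "Hello, World!\n"})()
    def kill(self):
        self.killed = True

with sandbox_session(FakeSandbox) as sb:
    result = sb.run_code("print('Hello, World!')")
print(sb.killed)  # True: the sandbox was released on exit
```

With the real SDK, `sandbox_session(lambda: Sandbox.create(template="code-interpreter"))` gives the same guarantee.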
SPEED

Startup in ~100ms

Sandboxes launch from prebuilt snapshots, allowing near-instant cold starts instead of seconds or minutes.

SECURITY

Isolated at the VM Level

Firecracker microVMs provide hardware-level security and kernel isolation β€” far beyond containers or serverless functions.

STABILITY

Run Continuously

No execution time limits. Keep agents, notebooks, or jobs running for hours, days, or weeks β€” with full state persistence.

Simple, Powerful SDK

Connect to sandboxes with clean APIs in your favorite language

Execute Code

Run Python, JavaScript, and more with rich output capture

Stream Output

Real-time streaming of code execution output via WebSocket

File Operations

Upload, download, watch files with full filesystem access

Commands

Execute shell commands and capture stdout/stderr

Processes

Start, monitor, and manage long-running background processes

Templates

List available templates and create custom sandboxes

Desktop Automation

Control desktop environments and automate GUI interactions

Metrics

Monitor CPU, memory, network, and disk in real time

sandbox-example.js
import { Sandbox } from '@hopx-ai/sdk';

const sandbox = await Sandbox.create({
  template: 'code-interpreter',
  apiKey: process.env.HOPX_API_KEY
});

// Execute Python code
const result = await sandbox.runCode(`
import sys
print(f"Python {sys.version}")
print("Hello from HopX!")
`);

console.log(result.stdout);
// Output: Python 3.11...
//         Hello from HopX!

await sandbox.kill();
MCP Server

Give your AI assistant superpowers with secure, isolated code execution

The HopX MCP server enables Claude, Cursor, and other AI assistants to execute Python, JavaScript, Bash, and Go code in blazing-fast (~100 ms startup), isolated cloud sandboxes.

Quick Install
$ uvx hopx-mcp


Configuration file location:
.cursor/mcp.json in your project or workspace
Replace your-api-key-here with your actual API key from hopx.ai.
{
  "mcpServers": {
    "hopx-sandbox": {
      "command": "uvx",
      "args": ["hopx-mcp"],
      "env": {
        "HOPX_API_KEY": "your-api-key-here"
      }
    }
  }
}

Quick Start:

  1. Get your free API key from console.hopx.ai
  2. Add the configuration to your IDE's MCP settings file
  3. Replace your-api-key-here with your actual API key
  4. Restart your IDE/assistant and start executing code securely!

Execute Code

Python, JavaScript, Bash, Go in isolated sandboxes

Pre-installed Libraries

pandas, numpy, matplotlib ready to use

Auto-cleanup

Sandboxes are destroyed automatically after use, so there's no manual cleanup

BUILT FOR AI AGENTS, LLM EXECUTION, AND SECURE MCP INFRASTRUCTURE

Run Untrusted Code

Safely execute user-submitted or LLM-generated code in isolated microVMs.

See docs
• ~100ms cold start (Firecracker microVMs)
• Full Linux with file/exec/PTY access, no host exposure
• Simple APIs: execute, execute/rich, commands/stream

Run AI Agents

Launch agents that write and execute code in dedicated runtimes.

See agent API
• Multi-language execution + real-time WebSocket streaming
• Persistent state via IPython kernel and filesystem
• Per-agent isolation with controllable background processes

Run Data Analysis Notebooks

Spin up Jupyter notebooks with ML libraries preinstalled.

Jupyter guide
• Persistent IPython + rich outputs (matplotlib/plotly/pandas)
• File upload/download and result caching
• Easy port exposure to access Jupyter securely

Run Deep Research Agents

Autonomous agents that iterate, gather data, and analyze continuously.

Research pattern
• Long-running processes with no arbitrary timeouts
• File watching, logging, and system metrics endpoints
• Fast relaunch/snapshot flows (~100ms)

Run Autonomous AI Workflows

Multi-step pipelines that need code execution and tools.

Workflow examples
• execute/background + API-level orchestration
• Result caching and idempotent restarts
• Fine-grained resource/timeout controls per step

Run Computer Use Agents

Agents that control a full cloud desktop (GUI, browser, apps).

Desktop automation API
• Desktop automation: mouse, keyboard, windows, clipboard
• VNC/noVNC streaming, screenshots, and screen recording
• Optional browser automation (Firefox/Chromium)

Run Background Automations

Workers, schedulers, and recurring jobs that must keep running.

Background jobs
• Background processes with list/kill/status APIs
• Live logs via WebSocket; health & metrics endpoints
• Safe self-update to restart/upgrade the agent

Run Reinforcement Learning

Train/evaluate RL agents in isolated, repeatable environments.

RL template
• Continuous execution with on-disk persistence
• Stream metrics, videos, and artifacts in real time
• Strong isolation for reproducible experiments

Run Secure MCP Servers

Host MCP servers/tools in a controlled, isolated perimeter.

MCP runner guide
• Per-server network and filesystem isolation
• Safe port exposure for tools and endpoints
• Rapid startup (~100ms) with persistent sessions

Run Long-Running Jobs

Jobs that need hours or days without forced shutdowns.

Long-running jobs
• No artificial runtime limits
• Process management + large artifact downloads
• VM-level stability with near-native performance

Run Multi-Agent Mesh

Coordinate specialized agents across a mesh of isolated micro-VMs, orchestrated via LangGraph or AutoGen with fine-grained resource and permission control.

Mesh guide
• One sandbox per agent (micro-VM) with strict security policies: network scope, file sandboxing, and port-to-port messaging/WebSocket.
• Orchestrator-friendly: compatible with LangGraph/AutoGen; snapshots for branching/rollback; scheduling & webhooks for long-running tasks.
• Safe scaling: parallel execution with ~100 ms cold start, per-agent limits (CPU/RAM/IO), metrics, audit logs, and automatic cleanup.
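One-sandbox-per-agent fan-out is straightforward to sketch with a thread pool: each task gets its own sandbox and tears it down when finished. A `FakeSandbox` stub stands in for the real `Sandbox.create` call, which requires the HopX service:

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-in for hopx_ai.Sandbox; the real class exposes
# create()/run_code()/kill() against the HopX service.
class FakeSandbox:
    @classmethod
    def create(cls, template="code-interpreter"):
        return cls()
    def run_code(self, code):
        return f"ran: {code}"
    def kill(self):
        pass

def run_agent_task(code):
    # One isolated sandbox per agent task, always cleaned up.
    sandbox = FakeSandbox.create(template="code-interpreter")
    try:
        return sandbox.run_code(code)
    finally:
        sandbox.kill()

tasks = ["print(1)", "print(2)", "print(3)"]
with ThreadPoolExecutor(max_workers=3) as pool:
    # Executor.map preserves input order in its results.
    results = list(pool.map(run_agent_task, tasks))
print(results)  # ['ran: print(1)', 'ran: print(2)', 'ran: print(3)']
```

An orchestrator like LangGraph or AutoGen would replace the bare thread pool, but the per-task create/kill lifecycle stays the same.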

Only Pay When Your Code Is Running.

HopX lets you launch fully isolated sandboxes in milliseconds and shut them down instantly when you're done. Build, test, and run agents without waste.

Compute: $0.00001400 per vCPU-second

Memory: $0.00000450 per GiB-second

Storage: $0.00000003 per GiB-second
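With per-second rates, the cost of a run is simple arithmetic. A quick illustration using the rates above (the workload sizes are made-up examples):

```python
# Per-second rates from the pricing table above.
VCPU_RATE = 0.00001400   # $ per vCPU per second
MEM_RATE = 0.00000450    # $ per GiB of memory per second
DISK_RATE = 0.00000003   # $ per GiB of storage per second

def sandbox_cost(vcpus, mem_gib, disk_gib, seconds):
    """Total cost in dollars for one sandbox run."""
    per_second = vcpus * VCPU_RATE + mem_gib * MEM_RATE + disk_gib * DISK_RATE
    return per_second * seconds

# Example: 2 vCPUs, 4 GiB RAM, 10 GiB disk, running for one hour.
cost = sandbox_cost(vcpus=2, mem_gib=4, disk_gib=10, seconds=3600)
print(f"${cost:.4f}")  # $0.1667
```

Because billing stops the moment a sandbox is killed, short-lived runs cost fractions of a cent.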

Elastic compute for AI-driven workloads.

Launch thousands of isolated sandboxes on demand β€” optimized for speed, cost, and security. Scale up in milliseconds, scale down to zero automatically.

Get started with $200 in free credits.

Frequently Asked Questions

Everything you need to know about HopX sandboxes


Deploy your first agent runtime today

Whether you're building agents, MCP servers, or LLM infrastructure, Bunnyshell gives you the runtime: Firecracker-powered sandboxes with full control and zero lock-in, starting in under 100 ms.

No credit card required β€’ Free $200 credits β€’ Cancel anytime