
HopX vs Docker vs AWS Lambda: Choosing the Right Execution Environment

Use Cases · Alex Oprisan · 9 min read


When building AI agents that execute code, you need an execution environment. The three most common options are Docker containers, AWS Lambda, and microVM-based sandboxes like HopX.

Each has its place. This guide helps you choose the right one.

Quick Comparison

| Feature | Docker | AWS Lambda | HopX |
| --- | --- | --- | --- |
| Isolation | Process-level | MicroVM | MicroVM |
| Cold Start | 500ms - 2s | 1-5s | ~100ms |
| Max Duration | Unlimited | 15 minutes | Unlimited |
| Persistent FS | Yes | No | Yes |
| Custom Packages | Build time | Layers (250MB limit) | Runtime or template |
| Network Access | Full control | Configurable | Full with controls |
| Pricing Model | Self-hosted | Per-invocation | Per-second |
| Best For | Long-running services | Event-driven functions | AI agent code execution |

Docker Containers

Docker is the industry standard for packaging and deploying applications. It uses OS-level virtualization to run isolated processes.

Pros

  • Mature ecosystem - Vast library of pre-built images
  • Developer familiarity - Most developers know Docker
  • Full control - You manage everything
  • No duration limits - Run as long as needed
  • Persistent storage - Volumes survive restarts

Cons

  • Not a security boundary - Containers share the host kernel
  • Container escapes - Runtime and kernel CVEs surface regularly (e.g., the 2019 runc escape, CVE-2019-5736)
  • Infrastructure overhead - You manage orchestration, scaling, updates
  • Cold start for new containers - 500ms-2s typically
  • Resource management - Manual configuration of limits

When to Use Docker

✅ Running trusted code that you wrote
✅ Long-running services (web servers, APIs)
✅ Development environments
✅ CI/CD pipelines

❌ Executing untrusted or LLM-generated code
❌ Multi-tenant workloads requiring isolation
❌ Security-critical applications

Docker Example

```dockerfile
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "main.py"]
```

```bash
docker build -t my-agent .
docker run --rm my-agent
```

AWS Lambda

Lambda is AWS's serverless compute service. Code runs in microVMs managed by AWS.

Pros

  • True isolation - Each function runs in a dedicated microVM
  • Zero infrastructure - AWS handles everything
  • Auto-scaling - From 0 to thousands of concurrent executions
  • Pay-per-use - Only pay when code runs
  • Integrated with AWS - Easy access to S3, DynamoDB, etc.

Cons

  • Cold starts - 1-5 seconds for new instances
  • 15-minute limit - Long tasks must be split
  • No persistent filesystem - /tmp is cleared between invocations
  • 250MB package limit - Layers help but are complex
  • Vendor lock-in - AWS-specific patterns
  • Expensive at scale - Warm functions still cost money

When to Use Lambda

✅ Event-driven workloads (webhooks, queue processing)
✅ Infrequent, short-duration tasks
✅ AWS-centric architectures
✅ Batch processing

❌ Real-time AI agents (cold starts too slow)
❌ Long-running computations (15-min limit)
❌ Tasks requiring persistent state
❌ Heavy package dependencies

Lambda Example

```python
# handler.py
def lambda_handler(event, context):
    code = event.get('code', '')
    # Execute code (but with all Lambda limitations)
    exec(code)  # Don't do this in production!
    return {'statusCode': 200}
```

```yaml
# serverless.yml
functions:
  executor:
    handler: handler.lambda_handler
    timeout: 900  # Max 15 minutes
    memorySize: 1024
```

HopX Sandboxes

HopX provides microVM-based sandboxes optimized for AI workloads. Each sandbox is an isolated Linux VM with its own kernel.

Pros

  • True isolation - Hardware-level separation via microVMs
  • 100ms cold starts - Fast enough for real-time AI
  • No duration limits - Run for hours if needed
  • Persistent filesystem - Files survive between calls
  • Runtime package installation - pip install anything
  • Full Linux environment - Root access, any tool
  • Simple SDK - Python and JavaScript

Cons

  • Newer platform - Less ecosystem than Docker/Lambda
  • Requires API key - Not self-hosted
  • Cost for idle sandboxes - Pay while running (pause to save)

When to Use HopX

✅ AI agent code execution
✅ Running LLM-generated code safely
✅ Multi-tenant SaaS with code execution
✅ Data analysis and notebook workloads
✅ Browser automation and desktop testing
✅ Long-running agent tasks

❌ Simple web application hosting
❌ Event-driven queue processing
❌ Extremely high-frequency, low-latency calls

HopX Example

```python
from hopx_ai import Sandbox

# Create isolated sandbox
with Sandbox.create(template="code-interpreter") as sandbox:
    # Install any package at runtime
    sandbox.commands.run("pip install pandas matplotlib")

    # Execute untrusted code safely
    result = sandbox.run_code("""
import pandas as pd
df = pd.DataFrame({'x': [1,2,3], 'y': [4,5,6]})
print(df.describe())
""")
    print(result.stdout)
```

Real-World Scenario Comparisons

Scenario 1: AI Coding Assistant

You're building a coding assistant that executes user code to help debug.

| Aspect | Docker | Lambda | HopX |
| --- | --- | --- | --- |
| User runs `import os; os.system('rm -rf /')` | 🔴 Deletes container files, potential escape | 🟡 Limited damage, 15-min max | 🟢 Contained, sandbox destroyed after |
| User runs 30-minute ML training | 🟢 Works | 🔴 Timeout after 15 min | 🟢 Works |
| User needs custom packages | 🟡 Rebuild image | 🔴 Redeploy with layers | 🟢 pip install at runtime |
| Cold start for new user | 🟡 1-2s | 🔴 1-5s | 🟢 ~100ms |

Winner: HopX - Built for this exact use case.

Scenario 2: Webhook Processing

You receive webhooks and need to process them quickly.

| Aspect | Docker | Lambda | HopX |
| --- | --- | --- | --- |
| Scale to 1000 concurrent | 🟡 Need K8s/ECS | 🟢 Automatic | 🟢 Automatic |
| Cost at low volume | 🔴 Always running | 🟢 Pay per invocation | 🟡 Pay per second |
| Integration with AWS | 🟡 Manual setup | 🟢 Native | 🟡 Via API |
| Execution time (50ms avg) | 🟢 Fast | 🟢 Fast | 🟢 Fast |

Winner: Lambda - Designed for event-driven, short tasks.

Scenario 3: Long-Running Data Pipeline

You have a data pipeline that runs for 2 hours processing large datasets.

| Aspect | Docker | Lambda | HopX |
| --- | --- | --- | --- |
| 2-hour runtime | 🟢 Works | 🔴 Impossible | 🟢 Works |
| Large package dependencies | 🟢 Any size | 🔴 250MB limit | 🟢 Any size |
| Persistent intermediate files | 🟢 Volumes | 🔴 No persistence | 🟢 Sandbox FS |
| Cost optimization | 🟡 Manual scaling | 🔴 N/A | 🟢 Pause when idle |

Winner: Docker/HopX - Lambda can't handle this.

Scenario 4: Multi-Tenant SaaS

You're building a SaaS where each customer can run custom code.

| Aspect | Docker | Lambda | HopX |
| --- | --- | --- | --- |
| Tenant isolation | 🔴 Weak (shared kernel) | 🟢 Strong (microVM) | 🟢 Strong (microVM) |
| Noisy neighbor protection | 🟡 Requires careful config | 🟢 Automatic | 🟢 Automatic |
| Custom environments per tenant | 🟡 Image per tenant | 🔴 Complex | 🟢 Template per tenant |
| Compliance requirements | 🔴 Hard to prove isolation | 🟢 AWS attestation | 🟢 Hardware isolation |

Winner: HopX/Lambda - Docker lacks sufficient isolation for multi-tenant.

Cost Comparison

Let's compare costs for a typical AI agent workload: 10,000 executions/day, 30 seconds average, 1 vCPU, 1GB RAM.
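As a sanity check, the monthly figures below follow from straightforward arithmetic on the quoted rates (EC2 on-demand pricing, Lambda's GB-second rate, and the per-vCPU-second and per-GB-second rates this post uses for HopX):

```python
# Workload: 10,000 executions/day, 30s average, 1 vCPU, 1 GB RAM
executions = 10_000 * 30           # 300,000 executions/month
busy_seconds = executions * 30     # 9,000,000 compute-seconds/month

# Docker on a c5.large at $0.085/hour, running 24/7 (before spare capacity)
docker = 0.085 * 24 * 30           # ≈ $61.20

# Lambda: $0.0000166667 per GB-second + $0.20 per million requests
lambda_cost = busy_seconds * 1 * 0.0000166667 + executions / 1_000_000 * 0.20

# HopX: $0.000014 per vCPU-second + $0.0000045 per GB-second
hopx = busy_seconds * 0.000014 + busy_seconds * 0.0000045
```

Note that the Docker figure is for a single always-on instance; the Lambda and HopX figures only charge for the 9 million seconds of actual work.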

Docker (self-hosted on AWS EC2)

```text
c5.large (2 vCPU, 4GB): $0.085/hour
Monthly: $0.085 × 24 × 30 = $61.20
+ Reserved capacity for spikes: ~$100/month
Total: ~$160/month
```

But you're paying for idle time and managing infrastructure.

AWS Lambda

```text
10,000 executions × 30 days = 300,000/month
Duration: 300,000 × 30s = 9,000,000 GB-seconds
Cost: 9,000,000 × $0.0000166667 = $150/month
+ Requests: 300,000 × $0.20/million = $0.06
Total: ~$150/month
```

But cold starts hurt UX, and 15-minute limit is restrictive.

HopX

```text
Compute: 9,000,000 vCPU-seconds × $0.000014 = $126
Memory: 9,000,000 GB-seconds × $0.0000045 = $40.50
Total: ~$167/month
```

But you get 100ms cold starts, no duration limits, and full Linux environment.

Cost verdict: All three are competitive. Choose based on features, not cost.

Decision Framework

Use this flowchart to choose:

```text
Is the code trusted (you wrote it)?
├─ Yes → Docker (full control, mature ecosystem)
└─ No → Continue...

Is the code LLM-generated or user-submitted?
├─ Yes → Need strong isolation
│   ├─ Tasks under 15 minutes? → Lambda is an option
│   └─ Longer tasks or real-time? → HopX
└─ No → Depends on requirements

Do you need sub-second cold starts?
├─ Yes → HopX (~100ms)
└─ No → Lambda (1-5s) is acceptable

Do you need persistent filesystem?
├─ Yes → Docker or HopX
└─ No → Lambda works

Are you already deep in AWS ecosystem?
├─ Yes → Lambda for integration benefits
└─ No → Evaluate based on other factors
```
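The flowchart can also be sketched as a small helper function. The parameter names and the priority order are this post's reading of the chart, not an official API:

```python
def recommend(trusted: bool, under_15_min: bool,
              needs_fast_cold_start: bool, needs_persistent_fs: bool) -> str:
    """Map the decision flowchart above onto a single recommendation."""
    if trusted:
        return "Docker"   # full control, mature ecosystem
    # Untrusted (LLM-generated or user-submitted) code needs strong isolation
    if needs_fast_cold_start or not under_15_min or needs_persistent_fs:
        return "HopX"     # real-time, long-running, or stateful work
    return "Lambda"       # short, event-driven, isolated via microVM
```

For example, an AI coding assistant (untrusted code, real-time responses) lands on HopX, while a trusted internal service lands on Docker.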

Hybrid Approaches

You don't have to choose just one. Many teams use:

  1. Docker for their main application (web servers, APIs)
  2. Lambda for event processing (webhooks, queues)
  3. HopX for AI agent code execution
```python
from fastapi import FastAPI
from pydantic import BaseModel
from hopx_ai import Sandbox

app = FastAPI()

class CodeRequest(BaseModel):
    code: str

# Your main app (Docker/K8s)
@app.post("/execute")
async def execute_code(request: CodeRequest):
    # Delegate unsafe execution to HopX
    with Sandbox.create(template="code-interpreter") as sandbox:
        result = sandbox.run_code(request.code)
        return {"output": result.stdout}

# Meanwhile, Lambda handles webhooks
# HopX handles AI agent tasks
```

Conclusion

| Use Case | Recommendation |
| --- | --- |
| Web applications | Docker |
| Event-driven functions | Lambda |
| AI agent code execution | HopX |
| Running untrusted code | HopX |
| Multi-tenant SaaS | HopX or Lambda |
| Long-running computations | Docker or HopX |
| Data pipelines | Docker (complex) or HopX (simple) |

The right choice depends on your specific requirements. For AI agents that execute code, HopX provides the best combination of security, speed, and flexibility.


Ready to try HopX? Sign up for free and get $200 in credits.