Custom Templates: Pre-bake Your Perfect Environment
Every time you create a sandbox, you need specific tools, libraries, and configurations. Installing them on each run wastes time and resources. Custom templates solve this by pre-baking everything into a reusable image.
This guide shows you how to create, build, and use custom templates that start in milliseconds with everything ready.
Why Custom Templates?
Without custom templates:
- Install dependencies on every sandbox creation
- Download packages from the internet repeatedly
- Configure tools each time
- Cold start: 30+ seconds
With custom templates:
- All dependencies pre-installed
- Configuration already applied
- Files and data included
- Cold start: ~100ms
Template Basics
A HopX template is essentially a Docker image optimized for sandbox execution. You define what goes into the template, and HopX handles the rest.
Available Base Templates
HopX provides these base templates:
| Template | Description | Use Case |
|---|---|---|
| code-interpreter | Python 3.13 with scientific packages | Data analysis, AI/ML |
| desktop | Full desktop with browsers | Browser automation, RPA |
| node | Node.js 20+ environment | JavaScript execution |
| base | Minimal Linux | Custom builds |
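Any of these names can be passed when creating a sandbox. A minimal sketch, assuming the same `Sandbox.create(template=...)` and `sandbox.commands.run(...)` calls used with custom templates later in this guide:

```python
from hopx import Sandbox

# Start from the built-in code-interpreter base template
sandbox = Sandbox.create(template="code-interpreter")

# The scientific stack is already installed, so this import succeeds immediately
result = sandbox.commands.run("python -c 'import numpy; print(numpy.__version__)'")
print(result.stdout)
```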
Creating Your First Custom Template
Method 1: Build from Dockerfile
The most flexible approach is using a Dockerfile:
```python
from hopx import Template

# Create template from Dockerfile
template = Template.build(
    name="my-ml-template",
    dockerfile="""
FROM python:3.13-slim

# Install system dependencies
RUN apt-get update && apt-get install -y \\
    build-essential \\
    git \\
    curl \\
    && rm -rf /var/lib/apt/lists/*

# Install Python packages
RUN pip install --no-cache-dir \\
    numpy==1.26.0 \\
    pandas==2.1.0 \\
    scikit-learn==1.3.0 \\
    torch==2.1.0 \\
    transformers==4.35.0 \\
    matplotlib==3.8.0 \\
    seaborn==0.13.0

# Set up working directory
WORKDIR /app

# Pre-download a model (optional but saves time)
RUN python -c "from transformers import AutoTokenizer; AutoTokenizer.from_pretrained('bert-base-uncased')"
"""
)

print(f"Template built: {template.name}")
print(f"Template ID: {template.id}")
```
Method 2: Build from Base Image
Start from an existing image and add your customizations:
```python
from hopx import Template

template = Template.build(
    name="data-science-template",
    base_image="python:3.13-slim",
    commands=[
        "apt-get update && apt-get install -y build-essential",
        "pip install numpy pandas scikit-learn matplotlib jupyter",
        "mkdir -p /data /notebooks"
    ]
)
```
Method 3: Extend HopX Templates
Build on top of existing HopX templates:
```python
from hopx import Template

# Extend the code-interpreter template
template = Template().from_code_interpreter_image("3.13")
template.add_commands([
    "pip install openai anthropic langchain",
    "pip install chromadb faiss-cpu"
])
template.build(name="ai-agent-template")
```
Using Custom Templates
Once built, use your template just like built-in ones:
```python
from hopx import Sandbox

# Create sandbox from custom template
sandbox = Sandbox.create(template="my-ml-template")

# All packages are already installed!
result = sandbox.commands.run("""
python -c "
import torch
import transformers
print(f'PyTorch: {torch.__version__}')
print(f'Transformers: {transformers.__version__}')
print('GPU available:', torch.cuda.is_available())
"
""")
print(result.stdout)
```
Advanced Template Patterns
Adding Files and Data
Include files in your template:
```python
from hopx import Template

template = Template.build(
    name="with-data-template",
    dockerfile="""
FROM python:3.13-slim
WORKDIR /app

# Copy local files into the image
COPY requirements.txt .
RUN pip install -r requirements.txt

# Include data files
COPY data/ /data/
COPY config/ /config/
""",
    context_files={
        "requirements.txt": "numpy\npandas\nscikit-learn",
        "data/sample.csv": "id,value\n1,100\n2,200",
        "config/settings.json": '{"debug": false, "threads": 4}'
    }
)
```
Environment Variables
Set default environment variables:
```python
from hopx import Sandbox, Template

template = Template.build(
    name="configured-template",
    dockerfile="""
FROM python:3.13-slim

ENV PYTHONUNBUFFERED=1
ENV MODEL_PATH=/models
ENV LOG_LEVEL=INFO

# Your setup...
"""
)

# Variables are available in the sandbox
sandbox = Sandbox.create(template="configured-template")
result = sandbox.commands.run("echo $MODEL_PATH")
print(result.stdout)  # /models
```
Multi-Stage Builds for Smaller Images
Keep images small by using multi-stage builds:
```python
from hopx import Template

template = Template.build(
    name="optimized-template",
    dockerfile="""
# Build stage - compile dependencies
FROM python:3.13 AS builder
RUN pip install --target=/deps numpy pandas scikit-learn

# Runtime stage - minimal image
FROM python:3.13-slim
COPY --from=builder /deps /usr/local/lib/python3.13/site-packages/
WORKDIR /app
"""
)
```
Pre-trained Models
Include ML models in your template:
```python
from hopx import Sandbox, Template

template = Template.build(
    name="llm-template",
    dockerfile="""
FROM python:3.13-slim

RUN pip install transformers torch sentence-transformers

# Pre-download the model during build
RUN python -c "from sentence_transformers import SentenceTransformer; SentenceTransformer('all-MiniLM-L6-v2').save('/models/all-MiniLM-L6-v2')"

ENV MODEL_PATH=/models
"""
)

# Models are ready instantly
sandbox = Sandbox.create(template="llm-template")
result = sandbox.commands.run("""
python -c "
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('/models/all-MiniLM-L6-v2')
embedding = model.encode('Hello world')
print(f'Embedding shape: {embedding.shape}')
"
""")
print(result.stdout)
```
Templates for Specific Frameworks
LangChain Agent Template
```python
from hopx import Template

langchain_template = Template.build(
    name="langchain-agent",
    dockerfile="""
FROM python:3.13-slim

RUN pip install --no-cache-dir \\
    langchain==0.1.0 \\
    langchain-openai \\
    langchain-community \\
    chromadb \\
    faiss-cpu \\
    pypdf \\
    tiktoken

WORKDIR /app
"""
)
```
FastAPI Microservice Template
```python
from hopx import Template

fastapi_template = Template.build(
    name="fastapi-service",
    dockerfile="""
FROM python:3.13-slim

RUN pip install --no-cache-dir \\
    fastapi \\
    uvicorn[standard] \\
    pydantic \\
    httpx \\
    python-multipart

WORKDIR /app
EXPOSE 8000
"""
)
```
Data Pipeline Template
```python
from hopx import Template

pipeline_template = Template.build(
    name="data-pipeline",
    dockerfile="""
FROM python:3.13-slim

RUN apt-get update && apt-get install -y \\
    postgresql-client \\
    && rm -rf /var/lib/apt/lists/*

RUN pip install --no-cache-dir \\
    pandas \\
    sqlalchemy \\
    psycopg2-binary \\
    pyarrow \\
    duckdb \\
    polars

WORKDIR /app
"""
)
```
Template Management
Listing Templates
```python
from hopx import Template

# List all templates
templates = Template.list()
for t in templates:
    print(f"{t.name}: {t.id}")
```
Updating Templates
When you need to update a template:
```python
from hopx import Template

# Rebuild with the same name
template = Template.build(
    name="my-template",  # Same name = update
    dockerfile="""
FROM python:3.13-slim
# Updated dependencies
RUN pip install numpy==1.27.0  # New version
"""
)
```
Deleting Templates
```python
from hopx import Template

Template.delete("old-template-name")
```
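List and delete combine naturally for periodic cleanup. A sketch, assuming the names returned by `Template.list()` are accepted by `Template.delete()` and that deprecated templates share a hypothetical `old-` prefix:

```python
from hopx import Template

# Remove every template whose name marks it as deprecated
for t in Template.list():
    if t.name.startswith("old-"):
        Template.delete(t.name)
```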
Minimize Image Layers
```dockerfile
# Bad - many layers
RUN apt-get update
RUN apt-get install -y curl
RUN apt-get install -y git
RUN rm -rf /var/lib/apt/lists/*

# Good - single layer
RUN apt-get update && apt-get install -y \
    curl \
    git \
    && rm -rf /var/lib/apt/lists/*
```
Use .dockerignore
```python
template = Template.build(
    name="my-template",
    dockerfile="...",
    context_files={...},
    dockerignore="""
*.pyc
__pycache__
.git
.env
node_modules
"""
)
```
Order Commands by Frequency of Change
```dockerfile
# Least likely to change first
FROM python:3.13-slim
RUN apt-get update && apt-get install -y build-essential

# More stable dependencies
RUN pip install numpy pandas

# Frequently changing dependencies last
RUN pip install my-custom-package==1.2.3
```
Template Versioning
Use version tags for production templates:
```python
from hopx import Template
from datetime import datetime

version = datetime.now().strftime("%Y%m%d")

template = Template.build(
    name=f"production-template-v{version}",
    dockerfile="""..."""
)

# Keep track of versions
# production-template-v20251122
# production-template-v20251123
```
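Production code can then pin a sandbox to an exact build rather than a moving name. A sketch, assuming the dated naming scheme above:

```python
from hopx import Sandbox

# Pin to a known-good template build instead of whatever was built last
sandbox = Sandbox.create(template="production-template-v20251122")
```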
Troubleshooting
Build Failures
```python
from hopx import Template, BuildError  # assuming BuildError is exported by the hopx package

try:
    template = Template.build(name="test", dockerfile="...")
except BuildError as e:
    print(f"Build failed: {e}")
    print(f"Build logs: {e.logs}")
```
Image Too Large
Check what's consuming space:
```dockerfile
# Add this to debug
RUN du -sh /* 2>/dev/null | sort -hr | head -20
```
Missing Dependencies at Runtime
Verify installation during build:
```dockerfile
RUN pip install my-package && python -c "import my_package"
```
Best Practices Summary
- Start minimal - Only include what you need
- Version dependencies - Pin specific versions
- Pre-download models - Don't download at runtime
- Use multi-stage builds - Keep final image small
- Test locally first - Build and test before deploying (see the smoke-test sketch after this list)
- Document your templates - Add comments explaining choices
- Version templates - Use tags for production
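As a concrete version of the "test locally first" item, here is a minimal smoke test, assuming the `Template` and `Sandbox` APIs shown earlier and a hypothetical `smoke-test-template` name:

```python
from hopx import Sandbox, Template

# Build a candidate template, then prove it works before relying on it
template = Template.build(
    name="smoke-test-template",
    dockerfile="""
FROM python:3.13-slim
RUN pip install --no-cache-dir numpy pandas
"""
)

sandbox = Sandbox.create(template="smoke-test-template")
result = sandbox.commands.run("python -c 'import numpy, pandas; print(\"ok\")'")
assert "ok" in result.stdout, result.stdout
```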
Conclusion
Custom templates are the key to fast, reliable sandboxes. By pre-baking your environment:
- Faster starts - 100ms vs 30+ seconds
- Consistent environments - Same setup every time
- Lower costs - Less runtime computation
- Better reliability - No network dependency issues
Start building your custom templates today and transform your sandbox workflow.