Pricing

Pay per compute-second. No idle charges. Free tier for self-hosted.

Free: $0 (self-hosted, unlimited)
  • Run on your own hardware
  • Unlimited sandboxes
  • All templates included
  • Custom templates
  • No telemetry

Enterprise: Custom pricing (volume discounts)
  • Dedicated bare metal
  • Custom SLAs
  • Volume discounts
  • Private deployment
  • Dedicated support

How billing works

You're billed for compute time only. The meter starts when a sandbox enters the ready state and stops when it is destroyed.

Metric      Unit                  Price
vCPU time   per vCPU-second       $0.000011
Memory      included with vCPU    --
Storage     session-ephemeral     --
Network     egress                Free (fair use)

Example costs

Workload                     Config            Duration   Cost
Claude Code session          2 vCPU, 2 GiB     30 min     $0.040
CI pipeline                  2 vCPU, 512 MiB   5 min      $0.007
Data analysis                2 vCPU, 512 MiB   2 min      $0.003
Free tier (200 vCPU-hr/mo)   2 vCPU            100 hr     $0.000
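Each figure above is straight multiplication: vCPUs × wall-clock seconds × rate. A minimal sketch in Python, using the $0.000011/vCPU-second rate from the pricing table (the helper name is ours, not part of the SDK):

```python
VCPU_SECOND_RATE = 0.000011  # USD per vCPU-second, from the pricing table

def session_cost(vcpus: int, seconds: float) -> float:
    """Cost of one sandbox session: vCPUs x wall-clock seconds x rate."""
    return vcpus * seconds * VCPU_SECOND_RATE

# Reproduce the example rows above
print(f"Claude Code, 2 vCPU, 30 min: ${session_cost(2, 30 * 60):.3f}")  # $0.040
print(f"CI pipeline, 2 vCPU, 5 min:  ${session_cost(2, 5 * 60):.3f}")   # $0.007
```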

Free tier details

The Pro plan includes 200 free vCPU-hours per month. That's equivalent to:

  • 200 thirty-minute Claude Code sessions (2 vCPU each)
  • 1,200 five-minute CI runs (2 vCPU each)
  • Any combination that totals 720,000 vCPU-seconds

Unused free hours do not roll over. If you exceed the free tier, you're billed at $0.000011/vCPU-second for the overage.
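Overage billing is linear past the free allowance. A quick sketch of the monthly math, with the constants taken from the text above (the function name is ours):

```python
FREE_VCPU_SECONDS = 200 * 3600  # 200 free vCPU-hours = 720,000 vCPU-seconds
OVERAGE_RATE = 0.000011         # USD per vCPU-second beyond the free tier

def monthly_charge(vcpu_seconds_used: float) -> float:
    """Only usage beyond the free allowance is billed; unused hours don't roll over."""
    overage = max(0.0, vcpu_seconds_used - FREE_VCPU_SECONDS)
    return overage * OVERAGE_RATE

print(monthly_charge(500_000))            # under the free tier, so 0.0
print(f"${monthly_charge(900_000):.2f}")  # 180,000 s of overage: $1.98
```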

Self-hosted

upstream is open source. Run it on your own hardware with zero cost and zero limits. Requirements:

  • Linux host with KVM support
  • Firecracker v1.14.1+
  • containerd
  • At least 4 GiB RAM (for the controller + one sandbox)

Self-hosted deployments get access to the same templates, the same API, and the same SDK. The only difference is you manage the metal.
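The requirements above can be sanity-checked locally before installing. A rough pre-flight sketch in Python (this helper is ours, not part of the upstream tooling, and it only checks presence, not the Firecracker version):

```python
import os
import shutil

def host_checks() -> dict[str, bool]:
    """Rough pre-flight for a self-hosted node: KVM, firecracker, containerd, RAM."""
    page = os.sysconf("SC_PAGE_SIZE")
    pages = os.sysconf("SC_PHYS_PAGES")
    total_ram_gib = page * pages / 2**30
    return {
        "kvm": os.path.exists("/dev/kvm"),                       # Linux host with KVM
        "firecracker": shutil.which("firecracker") is not None,  # binary on PATH
        "containerd": shutil.which("containerd") is not None,    # binary on PATH
        "ram_4gib": total_ram_gib >= 4.0,                        # controller + one sandbox
    }

for name, ok in host_checks().items():
    print(f"{name}: {'ok' if ok else 'MISSING'}")
```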

Track your usage

from upstream import Upstream

client = Upstream(api_key="sk-...")
usage = client.usage()
print(f"Sessions: {usage.total_sessions}")
print(f"Compute: {usage.total_vcpu_seconds:.0f} vCPU-seconds")

Or via the API:

$ curl -X POST https://api.upstream.build/upstream.platform.v1.Platform/GetUsage \
  -H "Authorization: Bearer sk-..." \
  -H "Content-Type: application/json" \
  -d '{"since": "2026-04-01T00:00:00Z"}'