Deployment Brief
Cloud-native production reference for adversaryai.io
Architecture Posture
Separate the experience layer from the operational core.
The public site and product app run at the edge on Vercel. The Rust API and private ML lane run on Fly.io. Managed Postgres and Redis services remove the need for self-hosted database and cluster infrastructure in production.
Edge Experience
Next.js marketing and product projects on Vercel with separate domains and deploy pipelines.
Core API
Rust Axum app on Fly for analytics, identity, routing, and operational state.
ML Service
Internal-only service on Fly private networking so score execution stays behind the API boundary.
Data Plane
Supabase-managed Postgres plus a Redis layer for caching and coordination in production.
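The ML boundary above can be sketched as follows. Fly private networking resolves `.internal` hostnames only inside the organization's network, so the scoring endpoint never needs a public hostname. The app name `adversaryai-ml`, port `8000`, and the `/score` path are illustrative assumptions, not confirmed values:

```shell
# From inside the Fly API machine: call the ML service over Fly's
# private network. ".internal" names resolve only within the org,
# so score inference stays behind the API boundary.
# App name, port, and path below are assumptions for illustration.
curl -s "http://adversaryai-ml.internal:8000/score" \
  -H "Content-Type: application/json" \
  -d '{"input": "sample"}'
```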
Deployment Runbook
Four steps to keep the stack aligned
01 / Build
Vercel preview deployments validate the marketing site and the product launcher before promotion.
02 / Promote
Vercel production release updates the root, www, and app surfaces independently when needed.
03 / Deploy API
Fly deploy pushes the Rust API and private ML lane while preserving separate scaling controls.
04 / Verify
Health, analytics, and browser checks confirm the marketing site, launcher, and command surface stay aligned.
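The four steps above can be sketched with the Vercel and Fly CLIs. App names and the health-check path are illustrative assumptions; each command is run from the relevant project directory:

```shell
# 01 / Build: create preview deployments for each Vercel project.
vercel deploy                      # run in the marketing project
vercel deploy                      # run in the product app project

# 02 / Promote: push each surface to production independently.
vercel deploy --prod

# 03 / Deploy API: release the Rust API and the private ML lane as
# separate Fly apps so each keeps its own scaling controls.
fly deploy --app adversaryai-api   # app names are assumptions
fly deploy --app adversaryai-ml

# 04 / Verify: confirm the public surfaces respond.
curl -fsS https://adversaryai.io > /dev/null && echo "marketing ok"
curl -fsS https://app.adversaryai.io > /dev/null && echo "app ok"
curl -fsS https://api.adversaryai.io/health   # /health is an assumption
```

Keeping the API and ML service as separate Fly apps is what preserves the independent scaling controls noted in step 03.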
Domain Map
Public routing and service ownership
Hostname              Role                     Target
adversaryai.io        Marketing / root domain  Vercel marketing project
www.adversaryai.io    Marketing mirror         Vercel marketing project
app.adversaryai.io    Product application      Vercel product project
api.adversaryai.io    Public API hostname      Fly.io Rust API
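One way to spot-check that each public hostname resolves to the intended platform (a verification sketch, not part of the deploy pipeline — Vercel-hosted records typically point at Vercel's edge, e.g. `cname.vercel-dns.com`, while the API hostname should resolve to Fly.io):

```shell
# Print the first DNS answer for each public hostname.
for host in adversaryai.io www.adversaryai.io \
            app.adversaryai.io api.adversaryai.io; do
  printf '%-22s ' "$host"
  dig +short "$host" | head -1
done
```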
Environment Controls
Production values that matter most
Variable                  Surface             Purpose
NEXT_PUBLIC_API_BASE_URL  Vercel product app  Browser-facing API target for analytics, auth, and live command data.
DATABASE_URL              Fly API             Primary production Postgres connection string.
REDIS_URL                 Fly API             Low-latency cache and pub/sub channel for operational state.
JWT_SECRET                Fly API             Session and token signing for backend identity controls.
ML_SERVICE_URL            Fly API             Internal private-network target for score inference requests.
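A sketch of how these values could be set with the platform CLIs. Values, the Fly app name, and the ML port are placeholders; backend values live as Fly secrets so they are injected at runtime rather than baked into images:

```shell
# Vercel product app: browser-facing API base URL.
# The CLI prompts for the value (e.g. https://api.adversaryai.io).
vercel env add NEXT_PUBLIC_API_BASE_URL production

# Fly API: backend values set as runtime secrets.
# App name, connection strings, and ML port are placeholders.
fly secrets set --app adversaryai-api \
  DATABASE_URL="postgres://..." \
  REDIS_URL="redis://..." \
  JWT_SECRET="$(openssl rand -hex 32)" \
  ML_SERVICE_URL="http://adversaryai-ml.internal:8000"
```

Note that only the `NEXT_PUBLIC_`-prefixed variable is exposed to the browser; the Fly secrets never leave the backend.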