```
██╗ ██████╗ █████╗ ██████╗ ████████╗███████╗███████╗████████╗
██║ ██╔═══██╗██╔══██╗██╔══██╗ ╚══██╔══╝██╔════╝██╔════╝╚══██╔══╝
██║ ██║ ██║███████║██║ ██║ ██║ █████╗ ███████╗ ██║
██║ ██║ ██║██╔══██║██║ ██║ ██║ ██╔══╝ ╚════██║ ██║
███████╗╚██████╔╝██║ ██║██████╔╝ ██║ ███████╗███████║ ██║
╚══════╝ ╚═════╝ ╚═╝ ╚═╝╚═════╝ ╚═╝ ╚══════╝╚══════╝ ╚═╝
```
Autonomous performance monitoring with 1,000 concurrent workers, real-time browser dashboard, and AI-driven API optimization.
🚀 Fire 5,000 requests across 1,000 virtual users simultaneously.
📊 Watch live metrics update in your browser in real time.
🤖 Auto-detect slow endpoints and optimize them — autonomously.
## Table of Contents

- Overview
- Features
- Architecture
- Project Structure
- Quick Start
- Configuration
- API Endpoints
- Dashboard
- Agent System
- Performance Thresholds
- Tech Stack
## Overview

Load Testing Engine is a full-stack autonomous performance-testing platform built on Next.js 16. It combines:
- A high-throughput traffic generator using Node.js Worker Threads (1,000 parallel virtual users)
- A real-time browser dashboard with live sparkline charts and request feeds
- An autonomous editor agent that detects slow endpoints and applies Prisma optimizations
- A master orchestrator that runs the full loop — test → analyze → fix → retest — up to 5 cycles
Everything is observable through a beautiful dark-mode UI at localhost:3000.
## Features

| Feature | Description |
|---|---|
| ⚡ 1,000 Concurrent Workers | Node.js Worker Threads — true parallelism, not async queuing |
| 📡 5,000 Requests per Run | Configurable count, weighted across 3 endpoints |
| 📊 Real-time Dashboard | Live sparklines, latency bars, status code histograms |
| 🔄 Auto-polling UI | Dashboard refreshes every 1.5s during active tests |
| 🤖 Editor Agent | Detects endpoints exceeding 200ms → applies source-level fixes |
| 🔁 Retry Loop | Reruns up to 5× until all endpoints pass the threshold |
| 🗂 Prisma + PostgreSQL | Indexed models for User, Post, Product |
| 📄 Metrics Files | Full request log + summary report written to disk |
## Architecture

```
┌─────────────────────────────────────────────────────────────┐
│                    ORCHESTRATOR (master)                    │
│                   testing/orchestrator.ts                   │
└────────────────┬───────────────────────────┬────────────────┘
                 │                           │
        ┌────────▼──────────┐      ┌─────────▼─────────┐
        │  TERMINAL AGENT   │      │   EDITOR AGENT    │
        │ traffic-generator │      │performance-monitor│
        │ ───────────────── │      │ ───────────────── │
        │ Spawns 1,000      │      │ Reads metrics file│
        │ Worker Threads    │      │ Finds slow > 200ms│
        │ Fires HTTP reqs   │      │ Rewrites Prisma   │
        │ Writes metrics.txt│      │query optimizations│
        └────────┬──────────┘      └─────────┬─────────┘
                 │                           │
        ┌────────▼──────────┐                │
        │ WORKER THREADS    │                │
        │ worker-thread.ts  │◄───────────────┘
        │ ───────────────── │   (retriggers after fix)
        │ 1,000 × virtual   │
        │ users in parallel │
        │ /api/users    40% │
        │ /api/posts    35% │
        │ /api/products 25% │
        └────────┬──────────┘
                 │
        ┌────────▼──────────┐
        │    NEXT.JS API    │
        │ ───────────────── │
        │ /api/users        │
        │ /api/posts        │
        │ /api/products     │
        │ /api/metrics      │◄──── Dashboard polls here
        │ /api/load/start   │◄──── Dashboard triggers here
        └───────────────────┘
```
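The Terminal Agent spawns its 1,000 workers in waves rather than all at once, so it never exceeds the OS limit. That pattern boils down to the batching helper below — a minimal sketch with hypothetical names, not the project's actual spawning code (which lives in `testing/traffic-generator.ts` and uses Worker Threads):

```typescript
// Run `total` tasks with at most `batchSize` in flight at any moment.
// Hypothetical helper illustrating the batched-spawning pattern.
async function runInBatches<T>(
  total: number,
  batchSize: number,
  task: (i: number) => Promise<T>,
): Promise<T[]> {
  const results: T[] = [];
  for (let start = 0; start < total; start += batchSize) {
    const batch: Promise<T>[] = [];
    for (let i = start; i < Math.min(start + batchSize, total); i++) {
      batch.push(task(i)); // launch one wave of workers
    }
    results.push(...(await Promise.all(batch))); // drain before the next wave
  }
  return results;
}
```

With `total = 1000` and `batchSize = 50`, at most 50 tasks are ever in flight at once.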
## Project Structure

```
load-testing-engine/
│
├── app/                          # Next.js App Router
│   ├── api/
│   │   ├── users/route.ts        # GET /api/users — indexed Prisma query
│   │   ├── posts/route.ts        # GET /api/posts — filtered + cached
│   │   ├── products/route.ts     # GET /api/products — category + price filter
│   │   ├── metrics/route.ts      # GET /api/metrics — parses metrics file → JSON
│   │   └── load/start/route.ts   # POST /api/load/start — triggers traffic gen
│   ├── globals.css               # Design system (dark, glassmorphism tokens)
│   ├── layout.tsx                # Root layout + SEO metadata
│   └── page.tsx                  # ⭐ Live real-time dashboard
│
├── testing/                      # Agent system
│   ├── orchestrator.ts           # 🤖 Master controller — runs full loop
│   ├── traffic-generator.ts      # 📡 Terminal Agent — 1,000 worker spawner
│   ├── worker-thread.ts          # 🧵 Worker Thread — single virtual user
│   └── performance-monitor.ts    # 👁 Editor Agent — detects + fixes slow routes
│
├── prisma/
│   └── schema.prisma             # User, Post, Product models with indexes
│
├── lib/
│   └── prisma.ts                 # Singleton PrismaClient
│
├── prisma.config.ts              # Prisma v7 config (reads DATABASE_URL)
├── .env                          # Environment variables
└── package.json
```
## Quick Start

### 1. Clone & install

```bash
git clone <your-repo-url>
cd load-testing-engine
npm install
```

### 2. Configure the environment

Copy the template and fill in your database credentials:

```bash
cp .env.example .env   # or edit .env directly
```

```env
# .env
# Supabase / PostgreSQL connection string
# Format: postgresql://user:password@host:port/database
DATABASE_URL=postgresql://postgres.YOUR_PROJECT_REF:YOUR_PASSWORD@aws-0-us-east-1.pooler.supabase.com:6543/postgres?pgbouncer=true&connection_limit=1

# Optional: Supabase client keys
NEXT_PUBLIC_SUPABASE_URL=https://YOUR_PROJECT_REF.supabase.co
NEXT_PUBLIC_SUPABASE_PUBLISHABLE_KEY=sb_publishable_...
```

> 💡 Getting your Supabase `DATABASE_URL`:
> Dashboard → Project Settings → Database → Connection string → Transaction mode (port 6543)

### 3. Migrate the database

```bash
npx prisma migrate dev --name init
```

This creates the User, Post, and Product tables with the following indexes:

```sql
-- Auto-created by Prisma schema
CREATE INDEX ON "User"("email");
CREATE INDEX ON "Post"("authorId", "published");
CREATE INDEX ON "Product"("category", "price");
```

### 4. Start the dev server

```bash
npm run dev
```

Open http://localhost:3000 — you'll see the live dashboard.

### 5. Run a load test

**Option A — From the browser:** Click **▶ Run Load Test** in the dashboard header.

**Option B — From the terminal:**

```bash
# Traffic generator only (1,000 workers, 5,000 requests)
npm run load:run

# Full autonomous loop (test → analyze → fix → retest)
npm run load:all
```

## Configuration

All tuning knobs live at the top of `testing/traffic-generator.ts`:
```ts
const CONFIG = {
  BASE_URL: 'http://localhost:3000',   // Target server
  TOTAL_USERS: 1000,                   // Concurrent virtual users
  REQUESTS_PER_USER: 5,                // Requests each user fires
  CONCURRENCY_BATCH: 50,               // Max workers spawned at once
  SLOW_THRESHOLD_MS: 200,              // Flag anything above this
  ENDPOINTS: [
    { method: 'GET', path: '/api/users', weight: 40 },    // 40% traffic share
    { method: 'GET', path: '/api/posts', weight: 35 },
    { method: 'GET', path: '/api/products', weight: 25 },
  ],
}
```

| Parameter | Default | Description |
|---|---|---|
| `TOTAL_USERS` | `1000` | Number of parallel Worker Threads |
| `REQUESTS_PER_USER` | `5` | Requests each worker sends |
| `CONCURRENCY_BATCH` | `50` | Workers spawned per batch (OS limit guard) |
| `SLOW_THRESHOLD_MS` | `200` | Latency above this triggers the editor agent |
| `weights` | `40/35/25` | Traffic distribution across endpoints |
## API Endpoints

| Method | Path | Description | Cache |
|---|---|---|---|
| GET | `/api/users` | All users (indexed by email) | No |
| GET | `/api/posts` | Published posts filtered by `authorId` | 60s |
| GET | `/api/products` | Products filtered by category + price | No |

| Method | Path | Description |
|---|---|---|
| GET | `/api/metrics` | Parses `performance-metrics.txt` → JSON summary |
| POST | `/api/load/start` | Spawns traffic generator as background process |
| GET | `/api/load/start` | Returns `{ running: bool, pid: number }` |
## Dashboard

The live dashboard at http://localhost:3000 shows:
```
┌─────────────┬─────────────┬─────────────┬─────────────┬─────────────┐
│  Total Reqs │ Avg Latency │ P95 Latency │ P99 Latency │  Slow Reqs  │
│    5,000    │    143ms    │    287ms    │    412ms    │     23%     │
└─────────────┴─────────────┴─────────────┴─────────────┴─────────────┘

⏱ Avg Latency — Live          📡 Request Rate — Live
╔══════════════════════╗      ╔══════════════════════╗
║    sparkline chart   ║      ║    sparkline chart   ║
╚══════════════════════╝      ╚══════════════════════╝

⚡ Endpoint Performance         🔢 HTTP Status Codes
/api/users    avg ████░  142ms  200 ██████████████ 4,823 (96%)
/api/posts    avg █████░ 156ms  500 █                177 (4%)
/api/products avg ███░   118ms

📋 Live Request Feed
200  /api/users  142ms  22:30:11
200  /api/posts  167ms  22:30:11
```
The dashboard auto-polls `/api/metrics` every 1.5 seconds while a test is running and stops polling when the run completes.
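Under the hood that is just: fetch, render, sleep, repeat until the payload reports completion. A framework-free sketch — the `Metrics` shape and `pollMetrics` helper are assumptions for illustration, not the actual `/api/metrics` schema (the real UI is a React component in `app/page.tsx`):

```typescript
// Assumed (hypothetical) shape of the /api/metrics payload.
type Metrics = { complete: boolean; totalRequests: number; avgLatencyMs: number };

// Poll until the run reports itself complete, invoking onUpdate per sample.
async function pollMetrics(
  fetchOnce: () => Promise<Metrics>,
  onUpdate: (m: Metrics) => void,
  intervalMs = 1500,
): Promise<Metrics> {
  for (;;) {
    const m = await fetchOnce();
    onUpdate(m); // re-render charts with the fresh snapshot
    if (m.complete) return m;
    await new Promise((r) => setTimeout(r, intervalMs));
  }
}
```

In the browser, `fetchOnce` would be something like `() => fetch('/api/metrics').then((r) => r.json())`.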
## Agent System

### 📡 Terminal Agent — `testing/traffic-generator.ts`

- Spawns 1,000 Worker Threads in batches of 50
- Each thread acts as an independent virtual user
- Sends HTTP requests weighted by traffic share
- Writes every response to `performance-metrics.txt`

### 👁 Editor Agent — `testing/performance-monitor.ts`

- Reads `performance-metrics.txt` after each run
- Identifies endpoints with avg or P95 latency > 200ms
- Opens the corresponding `route.ts` source file
- Applies targeted Prisma optimizations:
  - Adds `select` field projection (avoids `SELECT *`)
  - Adds `take` limits to prevent full table scans
  - Adds cache headers (`Cache-Control: s-maxage`)
- Saves the modified file
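For a flavor of what a source-level fix can look like, here is a toy transform that caps unbounded `findMany` calls with a `take` limit. It is illustrative only — the real editor agent's rewrites live in `testing/performance-monitor.ts` and are more involved than a single regex:

```typescript
// Toy sketch of one editor-agent fix: add `take` to findMany calls that
// don't already have one. (Naive: a field merely containing the substring
// "take" in the first argument would be skipped too.)
function addTakeLimit(source: string, limit = 100): string {
  return source.replace(
    /findMany\(\{(?![^}]*take)/g,
    `findMany({ take: ${limit},`,
  );
}
```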
### 🤖 Orchestrator — `testing/orchestrator.ts`

```
Run traffic generator
        ↓
Check metrics — any endpoint above threshold?
   ↓ YES                    ↓ NO
Run editor agent            Write final report ✅
   ↓
Re-run traffic generator
   ↓
Repeat up to 5 cycles
```
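Stripped to its control flow, the loop above looks like this (hypothetical signatures — the real implementation is `testing/orchestrator.ts`):

```typescript
type RunResult = { slowEndpoints: string[] };

// Test → analyze → fix → retest, up to maxCycles times.
async function orchestrate(
  runLoadTest: () => Promise<RunResult>,
  applyFixes: (slow: string[]) => Promise<void>,
  maxCycles = 5,
): Promise<{ cycles: number; passed: boolean }> {
  for (let cycle = 1; cycle <= maxCycles; cycle++) {
    const { slowEndpoints } = await runLoadTest();
    if (slowEndpoints.length === 0) return { cycles: cycle, passed: true };
    if (cycle < maxCycles) await applyFixes(slowEndpoints); // editor agent
  }
  return { cycles: maxCycles, passed: false }; // budget exhausted
}
```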
### 📄 Metrics Files

| File | Contents |
|---|---|
| `performance-metrics.txt` | Every request: timestamp, user, endpoint, status, latency |
| `performance-report.txt` | Baseline vs final comparison, pass/fail per endpoint |
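The metrics file is plain text, one request per line. Assuming a whitespace-separated layout like `<timestamp> <user> <method> <path> <status> <latency>ms` — a guess for illustration; the authoritative format is whatever `testing/traffic-generator.ts` writes — a parser sketch:

```typescript
// Hypothetical record shape for one line of performance-metrics.txt.
interface RequestRecord {
  timestamp: string;
  user: string;
  method: string;
  path: string;
  status: number;
  latencyMs: number;
}

function parseMetricsLine(line: string): RequestRecord {
  const [timestamp, user, method, path, status, latency] = line.trim().split(/\s+/);
  return {
    timestamp,
    user,
    method,
    path,
    status: Number(status),
    latencyMs: parseInt(latency, 10), // "142ms" → 142
  };
}
```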
## Performance Thresholds

| Metric | Target | Status |
|---|---|---|
| Avg latency | < 200ms | ✅ green / red |
| P95 latency | < 200ms | ✅ green / red |
| Error rate | < 1% | HTTP 5xx tracked |
| Slow requests | < 10% | Shown as % in dashboard |
The editor agent triggers when any endpoint's avg or P95 exceeds 200ms.
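Nearest-rank percentiles make that check concrete. A sketch of how avg and P95 can be computed from raw latencies (the project may compute percentiles differently):

```typescript
// Nearest-rank percentile: the value at rank ceil(p/100 * n) in sorted order.
function percentile(latencies: number[], p: number): number {
  const sorted = [...latencies].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}

// An endpoint passes when both its average and P95 sit under the threshold.
function passesThresholds(latencies: number[], thresholdMs = 200): boolean {
  const avg = latencies.reduce((sum, v) => sum + v, 0) / latencies.length;
  return avg < thresholdMs && percentile(latencies, 95) < thresholdMs;
}
```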
## Tech Stack

| Layer | Technology |
|---|---|
| Framework | Next.js 16 (App Router, Turbopack) |
| Language | TypeScript 5 (strict, ESM) |
| Database | PostgreSQL via Supabase |
| ORM | Prisma 7 (with prisma.config.ts) |
| Concurrency | Node.js Worker Threads (native) |
| UI | React 19 + Vanilla CSS (glassmorphism) |
| Fonts | Inter + JetBrains Mono (Google Fonts) |
| Charts | Canvas 2D API (zero dependencies) |
| Runtime | Node.js 20+ with tsx for TypeScript execution |
## Scripts

```bash
npm run dev           # Start Next.js dev server
npm run build         # Build production bundle
npm run start         # Start production server
npm run lint          # Run ESLint
npm run load:run      # Terminal Agent only (traffic generator)
npm run load:monitor  # Editor Agent only (performance monitor)
npm run load:all      # Full autonomous loop (recommended)
```

## Roadmap

- WebSocket-based real-time streaming (replace polling)
- Historical run comparison view
- Custom endpoint configuration via UI
- CSV/JSON export of metrics
- Docker Compose setup for CI integration
- Configurable concurrency ramp-up curves
Built by Sameer Abrar, Flexcrit Inc.
Autonomous load testing — fire it and watch it optimize itself.