Run locally. Run in CI. Same bare metal every time.
Bencher is the first continuous benchmarking platform to run your existing benchmarks on the exact same bare metal both locally and in CI. It tracks results over time and fails the PR when there's a performance regression.
Built for teams where performance matters:
- Databases
- Compilers
- Browsers
- Runtimes
- Networking stacks
- Cryptographic libraries
- Operating systems
Most teams start on GitHub Actions runners with a benchmark comparison script. That approach breaks down when noisy shared CI runners hide real performance regressions.
- Typical CI runners: >30% variance
- Bencher Bare Metal runners: <2% variance
When a number moves, it means something.
Benchmark for free → Bare Metal Quickstart
Used by the teams behind Google Sedpack, Microsoft CCF, GitLab Git, Mozilla Neqo, Rustls, Servo, and Diesel.
Shipping a performance regression is expensive:
- A database regression on a hot path pages someone at 2am
- A compiler regression silently makes every downstream build slower
- A browser engine ships a 10% paint regression and users notice
- A crypto library that adds a microsecond to a handshake breaks an SLA
Without trustworthy benchmarks in the PR workflow, you find out when users do.
Local benchmarks aren't reproducible. Every check means stopping work to pull the baseline branch and wait on a comparison. Most engineers skip it.
CI runners are shared and noisy. Noisy benchmarks train engineers to ignore alerts. Performance regressions silently ship.
If you can't tell a real regression from noise, the results are worthless. So teams stop looking.
- Run: Run your benchmarks locally or in CI using the exact same bare metal runners and your favorite benchmarking tools. The bencher CLI orchestrates running your benchmarks on bare metal and stores the results.
- Track: Track the results of your benchmarks over time. Monitor, query, and graph the results using the Bencher web console based on the source branch, testbed, benchmark, and measure.
- Catch: Catch performance regressions locally or in CI using the exact same bare metal hardware. Bencher uses state of the art, customizable analytics to detect performance regressions before they merge.
For the same reasons that unit tests are run to prevent feature regressions, benchmarks should be run with Bencher to prevent performance regressions. Performance bugs are bugs!
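The Run → Track → Catch loop above can be sketched with the CLI. This is a minimal, hedged example: the project slug and token are placeholders, and `bencher mock` is Bencher's dummy benchmark standing in for your real harness.

```shell
# Placeholders: swap in your own project slug, API token, and benchmark command.
export BENCHER_PROJECT=my-project-slug
export BENCHER_API_TOKEN=my-api-token

# Run: execute the benchmark command and publish its results.
# Catch: --error-on-alert makes the command exit non-zero if an Alert fires,
# which is what fails the PR in CI.
bencher run --error-on-alert "bencher mock"
```

Track happens server-side: each run is stored against its branch and testbed, so the web console can graph the history without any extra steps on your end.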
| Branch | 254/merge |
| Testbed | ubuntu-latest |
🚨 1 ALERT: Threshold Boundary Limit exceeded!
| Benchmark | Measure Units | View | Benchmark Result (Result Δ%) | Upper Boundary (Limit %) |
|---|---|---|---|---|
| Adapter::Json | Latency microseconds (µs) | 📈 plot 🚨 alert 🚷 threshold | 3.45 (+1.52%) | 3.36 (102.48%) |
Click to view all benchmark results
| Benchmark | View | Latency microseconds (µs) (Result Δ%) | Upper Boundary microseconds (µs) (Limit %) |
|---|---|---|---|
| Adapter::Json | 📈 view plot 🚨 view alert 🚷 view threshold | 3.45 (+1.52%) | 3.36 (102.48%) |
| Adapter::Magic (JSON) | 📈 view plot 🚷 view threshold | 3.43 (+0.69%) | 3.60 (95.40%) |
| Adapter::Magic (Rust) | 📈 view plot 🚷 view threshold | 22.10 (-0.83%) | 24.73 (89.33%) |
| Adapter::Rust | 📈 view plot 🚷 view threshold | 2.31 (-2.76%) | 2.50 (92.21%) |
| Adapter::RustBench | 📈 view plot 🚷 view threshold | 2.30 (-3.11%) | 2.50 (91.87%) |
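To make the table columns concrete, here is the arithmetic behind the first row, assuming "Limit %" is the result expressed as a percentage of the upper boundary and "Result Δ%" is the change versus the baseline. The published figures are computed from unrounded stored values, so they differ slightly from this two-decimal reconstruction.

```shell
# Adapter::Json row: result 3.45 µs, upper boundary 3.36 µs (as displayed).
# A Limit % above 100 means the result crossed the boundary, so an Alert fires.
awk 'BEGIN { printf "Limit %% = %.2f\n", 100 * 3.45 / 3.36 }'
# → Limit % = 102.68
```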
Bencher is a suite of bare metal continuous benchmarking tools:
- bencher CLI: run benchmarks and publish results
- Bencher API Server: store, query, and alert on results
- Bencher Console: web UI for tracking and graphing
- Bencher Bare Metal runner: dedicated hardware for noise-free benchmarks
The best place to start is the Bare Metal Quickstart.
For on-prem deployments, check out the Bencher Self-Hosted Quickstart.
- Tutorial
- How To
- Explanation
- Reference
🌐 Also available in:
- {...} JSON
- #️⃣ C#
- ➕ C++
- 🎯 Dart
- 🕳 Go
- ☕️ Java
- 🕸 JavaScript
- 🐍 Python
- ♦️ Ruby
- 🦀 Rust
- libtest bench
- Criterion
- Iai
- Gungraun (formerly Iai-Callgrind)
- ❯_ Shell
👉 For more details see the explanation of benchmark harness adapters.
Don't see your harness? Open an issue →
- Microsoft CCF
- Google Sedpack
- GitLab Git
- Servo
- Mozilla Neqo
- GreptimeDB
- Diesel
- clap
- Rustls
👉 Check out all public projects.
> Bencher is like CodeCov for performance metrics.

> I think I'm in heaven. Now that I'm starting to see graphs of performance over time automatically from tests I'm running in CI. It's like this whole branch of errors can be caught and noticed sooner.

> 95% of the time I don't want to think about my benchmarks. But when I need to, Bencher ensures that I have the detailed historical record waiting there for me. It's fire-and-forget.

> I've been looking for a public service like Bencher for about 10 years.
- Bencher Self-Hosted: Deploy Bencher on your own infrastructure. Bare metal, Docker, or Kubernetes. Full control, no data leaving your environment. Deploy in 60 seconds →
- Bencher Cloud: Zero infrastructure to manage. On-demand bare metal runners, billed by the minute. Pay for your benchmark runs, not idle servers. Benchmark for free →
Install the Bencher CLI using the GitHub Action, and use it for continuous benchmarking in your project.
```yaml
name: Continuous Benchmarking with Bencher
on:
  push:
    branches: main
jobs:
  benchmark_with_bencher:
    name: Benchmark with Bencher
    runs-on: ubuntu-latest
    env:
      BENCHER_PROJECT: my-project-slug
      BENCHER_API_TOKEN: ${{ secrets.BENCHER_API_TOKEN }}
    steps:
      - uses: actions/checkout@v6
      - uses: bencherdev/bencher@main
      - run: bencher run "bencher mock"
```

Supported Operating Systems:
- Linux (x86_64 & ARM64)
- macOS (x86_64 & ARM64)
- Windows (x86_64 & ARM64)
👉 For more details see the explanation of how to use GitHub Actions.
Add BENCHER_API_TOKEN to your Repository secrets (e.g. Repo -> Settings -> Secrets and variables -> Actions -> New repository secret). You can find your API tokens by running bencher token list my-user-slug or view them in the Bencher Console.
You can set the bencher run CLI subcommand to error if an Alert is generated with the --error-on-alert flag.

```shell
bencher run --error-on-alert "bencher mock"
```

👉 For more details see the explanation of bencher run.
You can set the bencher run CLI subcommand to comment on a PR with the --github-actions argument.
```shell
bencher run --github-actions "${{ secrets.GITHUB_TOKEN }}" "bencher mock"
```

👉 For more details see the explanation of bencher run.
👉 See the example PR comment above.
There is also an optional version argument to specify an exact version of the Bencher CLI to use.
Otherwise, it will default to using the latest CLI version.
```yaml
- uses: bencherdev/bencher@main
  with:
    version: 0.6.2
```

Specify an exact version if using Bencher Self-Hosted. Do not specify an exact version if using Bencher Cloud, as there are still occasional breaking changes.
The easiest way to contribute is to open this repo as a Dev Container in VSCode by simply clicking one of the buttons below. Everything you need will already be there! Once set up, both the UI and API should be built, running, and seeded at localhost:3000 and localhost:61016 respectively. To make any changes to the UI or API though, you will have to exit the startup process and restart the UI and API yourself.
For additional information on contributing, see the Development Getting Started guide.
There is also a pre-built image from CI available for each branch: ghcr.io/bencherdev/bencher-dev-container
All content that resides under any directory or feature named "plus" is licensed under the Bencher Plus License.
All other content is licensed under the Apache License, Version 2.0 or MIT License at your discretion.
Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in Bencher by you, as defined in the Apache-2.0 license, shall be licensed as above, without any additional terms or conditions.