Biased benchmarks
We ran performance tests; the results are in the dashboard below.
Take them with a grain of salt, of course.
[Interactive benchmark dashboard: Drizzle v0.28.1 vs Prisma v5.1.1, E-commerce workload, PostgreSQL, medium traffic (1000 VUs), micro database size. Test machine: Lenovo M720q, Intel Core i3-9100T, 32GB DDR4 2666MHz RAM, Linux 5.15.0-86-generic x86_64. The dashboard replays avg latency, reqs/sec, and avg CPU load for both ORMs.]

How it works

Drizzle was originally designed to be a thin layer on top of SQL that introduces minimal runtime overhead, and by adding Prepared Statements and Relational Queries we’ve smashed it. It’s now fast, has exceptional DX, and has no n+1 problem for relational queries.
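To make the n+1 problem concrete: it's the pattern where fetching a list of rows triggers one extra query per row. A schematic illustration with a hypothetical query counter (not Drizzle's API):

```typescript
// Schematic illustration of the n+1 query pattern.
// fetchPosts/fetchAuthor are stand-ins for database queries, not Drizzle's API.
type Post = { id: number; authorId: number };

let queryCount = 0;
const allPosts: Post[] = [
  { id: 1, authorId: 10 },
  { id: 2, authorId: 11 },
];

function fetchPosts(): Post[] {
  queryCount++; // one query for the list
  return allPosts;
}

function fetchAuthor(id: number): { id: number } {
  queryCount++; // one query PER row -- this is the "+n" part
  return { id };
}

// Naive approach: 1 query for posts, then n queries for authors
const withAuthors = fetchPosts().map((p) => ({
  ...p,
  author: fetchAuthor(p.authorId),
}));
// queryCount is now 3 (1 + n) for n = 2 posts.
// A relational query instead joins or batches, issuing a single statement.
```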

But how fast is it? Is it Drizzle that’s fast, or the SQL underneath? And what should we measure?

What is a meaningful benchmark? We’ve spent quite some time running synthetic benchmarks with mitata, testing everything first in one runtime and then in separate containers so there’s no GC cross-influence. The community built their own benchmarks and helped us locate bottlenecks in Relational Queries performance and row reads, and make them really fast and efficient.

We’ve tested different SQL dialects across all the competitors, and while we were crazy fast (in some cases 100+ times faster than Prisma with SQLite), we only wanted to share benchmarks that are meaningful for businesses and developers.

From a business perspective, request round trip is the most important metric when it comes to server-side performance. While you can influence network latency with services like Cloudflare Argo, on the server side it usually comes down to database queries.

We’ve composed a test case with ~370k records in a PostgreSQL database and generated production-like E-commerce traffic benchmarks over 1Gbps ethernet to eliminate any discrepancies. On a Lenovo M720q, Drizzle handles 4.6k reqs/s while maintaining a ~100ms p95 latency.
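The ~100ms p95 figure means 95% of request round trips completed within ~100ms. A minimal sketch of how such a percentile is computed from raw latency samples (the nearest-rank method and the synthetic data below are illustrative, not the benchmark's actual aggregation code):

```typescript
// Sketch: nearest-rank percentile over latency samples (illustrative only).
function percentile(samples: number[], p: number): number {
  const sorted = [...samples].sort((a, b) => a - b);
  // index of the smallest value such that p% of samples are <= it
  const idx = Math.min(
    sorted.length - 1,
    Math.ceil((p / 100) * sorted.length) - 1,
  );
  return sorted[idx];
}

// 100 synthetic round-trip latencies in ms: 1, 2, ..., 100
const latencies = Array.from({ length: 100 }, (_, i) => i + 1);
const p95 = percentile(latencies, 95); // 95 for this synthetic data
```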

We ran our benchmarks on two separate machines so that the observer does not influence the results. For the database we use a PostgreSQL instance with 42MB of E-commerce data (~370k records).
The K6 benchmarking instance runs on a MacBook Air and makes 1M prepared requests over 1Gbps ethernet to the Lenovo M720q with an Intel Core i3-9100T and 32GB of RAM.


To run your own tests, follow the instructions below!

Prepare test machine

  1. Spin up a docker container with PostgreSQL using the pnpm start:docker command. You can configure the desired database port in the ./src/docker.ts file:
...
}

const desiredPostgresPort = 5432; // change here
main();
  2. Update DATABASE_URL with the allocated database port in the .env file:
DATABASE_URL="postgres://postgres:postgres@localhost:5432/postgres"
  3. Seed your database with test data using the pnpm start:seed command; you can change the size of the database in the ./src/seed.ts file:
...
}

main("micro"); // nano | micro
  4. Make sure you have Node version 18 or above installed; you can use the nvm use 18 command.
  5. Start the Drizzle/Prisma server:
## Drizzle
pnpm start:drizzle

## Prisma
pnpm prepare:prisma
pnpm start:prisma
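Before seeding, it can help to sanity-check that the port inside DATABASE_URL matches the desiredPostgresPort you configured in ./src/docker.ts. A small illustrative check (not part of the benchmark repo):

```typescript
// Illustrative sanity check (not part of the benchmark repo):
// confirm the port inside DATABASE_URL matches the configured docker port.
const desiredPostgresPort = 5432; // same value as in ./src/docker.ts
const databaseUrl = "postgres://postgres:postgres@localhost:5432/postgres";

// The WHATWG URL parser handles the authority part of postgres:// URLs too
const parsed = new URL(databaseUrl);
const portMatches = Number(parsed.port) === desiredPostgresPort;
```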

Prepare testing machine

  1. Generate a list of HTTP requests with pnpm start:generate. It will output the list of HTTP requests to be run against the tested server to ./data/requests.json
  2. Install the k6 load tester
  3. Configure the tested server URL in the ./k6.js file:
// const host = `http://192.168.31.144:3000`; // drizzle
const host = `http://192.168.31.144:3001`; // prisma
  4. Run the tests with k6 run bench.js 🚀
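The generate step above produces the request list in ./data/requests.json. A minimal sketch of what such a generator could look like (the route shape and output format are assumptions, not the repo's actual pnpm start:generate script):

```typescript
// Hypothetical request-list generator; the real pnpm start:generate
// script and its output shape may differ.
type BenchRequest = { method: "GET"; path: string };

function generateRequests(productIds: number[]): BenchRequest[] {
  return productIds.map((id) => ({ method: "GET", path: `/products/${id}` }));
}

const requests = generateRequests([1, 2, 3]);
const json = JSON.stringify(requests, null, 2);
// In the real script this list would be written out, e.g.:
// fs.writeFileSync("./data/requests.json", json);
```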