Real Production Postgres and MySQL Case Studies (2026)

Five named production users per database. Each case links to a primary source. No fabricated numbers.

Postgres in production

Instagram (Meta)

2010s-present

Instagram has run PostgreSQL at scale since its early days, accessed through the Django ORM. Their engineering blog covers sharding across thousands of logical shards mapped to Postgres schemas, connection pooling with PgBouncer, and the challenges of scaling Postgres to billions of media objects. It remains one of the canonical large-scale Postgres deployments, predating the current wave of managed Postgres services.

Scale: Billions of media objects, hundreds of millions of users  |  Source: instagram-engineering.com
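Instagram's widely cited "Sharding & IDs at Instagram" post describes generating 64-bit IDs inside Postgres from a millisecond timestamp, a logical shard ID, and a per-shard sequence, so the owning shard is recoverable from the ID itself. A minimal Python sketch of that bit layout; the original is a PL/pgSQL function, and the epoch constant here is illustrative, not Instagram's actual value:

```python
import time

# Layout described in the post: 41 bits of milliseconds since a custom
# epoch, 13 bits of logical shard ID, 10 bits of per-shard sequence.
CUSTOM_EPOCH_MS = 1_300_000_000_000  # illustrative epoch (early 2011)

def make_id(shard_id: int, sequence: int, now_ms: int) -> int:
    elapsed = now_ms - CUSTOM_EPOCH_MS
    return (elapsed << 23) | ((shard_id % 8192) << 10) | (sequence % 1024)

def shard_of(record_id: int) -> int:
    # The shard is embedded in bits 10..22 -- no lookup table needed
    # to route a read for an existing row.
    return (record_id >> 10) & 0x1FFF

rid = make_id(1341, 5678, int(time.time() * 1000))
print(shard_of(rid))  # -> 1341
```

Because IDs are time-ordered in the high bits, they also sort roughly by creation time, which the post calls out as a design goal.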

GitLab

2010s-present

GitLab runs PostgreSQL as the primary datastore for GitLab.com and for self-managed installations. Their engineering blog and public runbooks document Postgres high availability with Patroni, connection pooling with PgBouncer, and the decomposition of the main database into separate main and ci clusters to relieve write pressure. (GitHub, by contrast, is a canonical MySQL deployment: gh-ost is its MySQL online schema migration tool.)

Scale: Millions of users and projects on GitLab.com  |  Source: about.gitlab.com/blog
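PgBouncer comes up repeatedly in the Postgres entries here because Postgres uses a process per connection, so large fleets put a pooler in front. A minimal pgbouncer.ini sketch showing transaction pooling; host, database name, and sizing values are illustrative, not any company's actual configuration:

```ini
[databases]
; route clients connecting to "app" to the real Postgres server
app = host=10.0.0.5 port=5432 dbname=app

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
; transaction pooling: a server connection is held only for the
; duration of one transaction, so thousands of client connections
; can share a small pool of real Postgres backends
pool_mode = transaction
max_client_conn = 10000
default_pool_size = 20
```

The trade-off with pool_mode = transaction is that session-level state (prepared statements, SET commands, advisory locks) cannot be relied on across transactions, which is why these migrations are documented as nontrivial.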

Notion

2021-2022

Notion published a detailed post in 2021 on their approach to sharding Postgres as their dataset grew past what a single instance could handle. They implemented 480 logical shards on Postgres, using a custom sharding layer and PgBouncer for connection management. Their post is one of the most-cited real-world Postgres scaling case studies.

Scale: Hundreds of TB, 480 logical shards  |  Source: notion.so/blog/sharding-postgres-at-notion
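Notion's post describes partitioning by workspace ID, with 480 logical shards spread over a much smaller set of physical databases so that logical shards can be moved during later resharding. A Python sketch of that two-level routing; the hash function and the host count are assumptions for illustration, not Notion's exact implementation:

```python
import hashlib

LOGICAL_SHARDS = 480
PHYSICAL_HOSTS = 32  # assumed physical-host count for this sketch

def logical_shard(workspace_id: str) -> int:
    # Stable hash of the partition key (avoid Python's builtin hash(),
    # which is randomized per process).
    digest = hashlib.md5(workspace_id.encode()).hexdigest()
    return int(digest, 16) % LOGICAL_SHARDS

def physical_host(shard: int) -> int:
    # Contiguous ranges of logical shards live on each physical host
    # (480 / 32 = 15 per host), so a whole range can later be moved to
    # a new host without rehashing every workspace.
    return shard // (LOGICAL_SHARDS // PHYSICAL_HOSTS)

shard = logical_shard("workspace-123e4567")
assert 0 <= shard < LOGICAL_SHARDS
assert 0 <= physical_host(shard) < PHYSICAL_HOSTS
```

The point of over-provisioning logical shards is exactly this indirection: resharding becomes a remapping of logical shards to hosts rather than a change to the partition function every client depends on.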

Heap

2017-2020

Heap built its analytics product on Postgres, ingesting billions of user events. As their dataset grew beyond what a manually managed Postgres fleet could handle, they migrated to Citus (distributed Postgres) to achieve horizontal scale-out without leaving the Postgres ecosystem. Their blog documents the Citus migration in detail, including the challenges of re-sharding live data.

Scale: Billions of events, petabyte-scale analytics  |  Source: heap.io/blog

Discord

2018-present

Discord's storage layer has evolved through multiple phases documented in their engineering blog. While they moved message storage to Cassandra and then ScyllaDB for scale, their relational services (guilds, users, permissions, settings) run on Postgres. Discord is a strong example of using the right tool per workload rather than a single database for everything.

Scale: Trillions of messages total; relational data on Postgres  |  Source: discord.com/blog/engineering

MySQL in production

Booking.com

2000s-present

Booking.com runs one of the largest MySQL deployments in the world, with a dedicated database infrastructure team. They have published extensively on MySQL replication, operational tooling, and the decisions behind staying on MySQL at their scale. Their engineering blog covers topics like MySQL Group Replication at scale, Percona Server usage, and their custom tooling for MySQL management.

Scale: Hundreds of millions of accommodation reservations  |  Source: dev.booking.com / engineering.booking.com

Shopify

2010s-present

Shopify runs MySQL with Vitess for sharded OLTP across their e-commerce platform. Shopify Engineering has published on their migration to Vitess, connection pooling with ProxySQL, and operational challenges at scale. They are one of the most prominent Vitess deployments outside of the Google/YouTube ecosystem.

Scale: Millions of merchants, billions of orders  |  Source: shopify.engineering
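Shopify, Slack, and YouTube all reach horizontal scale through Vitess, which shards MySQL behind a routing layer configured by a per-keyspace VSchema. A minimal illustrative VSchema for one sharded table, following Vitess's documented format; the table and column names are made up for this sketch:

```json
{
  "sharded": true,
  "vindexes": {
    "hash": { "type": "hash" }
  },
  "tables": {
    "orders": {
      "column_vindexes": [
        { "column": "customer_id", "name": "hash" }
      ]
    }
  }
}
```

The vindex tells Vitess's query router which shard owns each row, which is how these deployments keep the application speaking ordinary SQL while the data is spread across many MySQL instances.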

Slack

2019-present

Slack adopted Vitess (MySQL) for their primary database infrastructure to handle chat data at scale. Slack Engineering has documented their Vitess adoption, the challenges of migrating to a sharded architecture, and operational lessons. Vitess allowed Slack to horizontally scale MySQL without changing the application's SQL interface.

Scale: Billions of messages, tens of millions of daily active users  |  Source: slack.engineering

YouTube

2010-present

YouTube is the origin of Vitess. The project began at YouTube as a sharding and query-routing proxy built to scale MySQL under YouTube's workload, and was open-sourced in 2012. Today Vitess is a graduated CNCF project and a de facto standard for sharding MySQL at scale.

Scale: Billions of videos, hundreds of millions of users  |  Source: vitess.io / youtube engineering

Etsy

2000s-present

Etsy has run MySQL with Percona Server for their marketplace. Their engineering blog (Code as Craft) has documented MySQL operational practices, schema migration tooling, and the replication setup supporting the marketplace at scale. Etsy is a strong example of a large-scale MySQL deployment built on Percona tooling rather than Oracle's MySQL Enterprise offerings.

Scale: Hundreds of millions of listings  |  Source: etsy.com/codeascraft

What this teaches your team

Both databases run at massive scale. The choice is not about which is “better” in the abstract. Instagram and Booking.com are both at enormous scale; one chose Postgres, the other MySQL. The difference is workload profile, ecosystem fit, and accumulated operational expertise. Choose based on your specific context, not on whose logo is more impressive in this list.
