Self-Hosting vs Managed Databases: A Practical Cost & Control Comparison

Compare self-hosted and managed database services across cost, control, performance, and operational overhead. Includes real pricing breakdowns for PostgreSQL, ClickHouse, and Redis.

February 17, 2026

The debate between self-hosting and using a managed database service rarely has a clean answer. It depends on your team size, budget, traffic profile, and how much operational risk you are willing to accept. Most guides on this topic lean heavily toward one side. This one tries to give you the full picture so you can make the call that actually fits your situation.

What "Managed" and "Self-Hosted" Really Mean

A managed database (AWS RDS, ClickHouse Cloud, AWS ElastiCache, PlanetScale, Supabase, etc.) means a third party handles provisioning, backups, patching, replication, and failover. You pay a premium for that convenience, often a significant one.

Self-hosting means you provision your own servers (cloud VMs or bare metal), install and configure the database software, set up replication, handle backups, and respond when something breaks. You pay less per compute hour but you absorb the operational work.

Neither approach is universally better. The honest framing is: managed services sell you back your time, and self-hosting lets you keep your money, provided you actually have the time and skills to spend.


The Real Trade-offs

Cost

The most common argument for self-hosting is cost. It holds up at meaningful scale, but the gap is often smaller than people expect when you account for the full picture on both sides.

Managed services charge for compute, storage, IOPS, read replicas, cross-region backups, and data transfer. These line items compound. A modest RDS PostgreSQL instance with Multi-AZ, 500 GB storage, and moderate IOPS can easily run $400–600/month before you touch bandwidth or cross-region replication.
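To see how these line items compound, here is a minimal cost-model sketch. The figures are illustrative placeholders in the spirit of the estimate above, not quoted AWS prices:

```python
# Sketch of how managed-database line items compound into a monthly bill.
# All dollar figures are illustrative placeholders, not AWS list prices.

def monthly_total(line_items: dict[str, float]) -> float:
    """Sum per-item monthly charges into one bill."""
    return sum(line_items.values())

rds_estimate = {
    "compute (Multi-AZ instance)": 350.00,
    "storage (500 GB gp3)": 57.50,
    "provisioned IOPS": 36.00,
    "automated backups": 25.00,
    "data transfer out": 18.00,
}

total = monthly_total(rds_estimate)
print(f"estimated monthly bill: ${total:,.2f}")
print(f"annualized: ${total * 12:,.2f}")
```

No single item looks alarming on its own; the sum is what lands in the $400–600 band.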

Self-hosting has different costs: the infrastructure itself (typically much cheaper), plus your time and the cost of getting things wrong. A misconfigured backup job or a missed failover test is not a line item on your invoice, but it has a real cost when something goes wrong at 2am.

Control

Self-hosting gives you full control over configuration, extensions, versions, and upgrade timing. You can tune shared_buffers, run custom PostgreSQL extensions that managed services do not support, or stay on an older version while you validate a migration.

Managed services abstract away configuration, which is convenient until you need to change something the provider has not exposed. Extension support on RDS, for example, is limited to an approved list. If you need timescaledb or a less common extension that is not on that list, you may be stuck.

Performance

The performance story is nuanced. Managed services typically run on shared infrastructure with network-attached storage, which adds latency. If you provision dedicated VMs with local NVMe storage on Hetzner or similar providers, you can get substantially better disk I/O for the same budget.

On the other hand, managed services at the high end (Aurora, AlloyDB) are engineered specifically for their storage architecture and can outperform naive self-hosted setups. If you are not tuning your self-hosted deployment carefully, you may not capture the performance advantage.

Operational Overhead

This is where managed services genuinely win for small teams. Automated backups, point-in-time recovery, automatic failover, and patch management are real features that take real engineering time to replicate. If you have a team of two and your core product is not database infrastructure, paying for managed services is often the right call.

As teams grow and infrastructure costs become a meaningful budget line, the calculus shifts.


Cost Breakdowns by Database

The following comparisons use realistic 2026 pricing. Managed service prices are representative and will vary based on region, reserved vs. on-demand pricing, and specific configuration choices.

PostgreSQL: AWS RDS vs. Self-Hosted on Hetzner

A production PostgreSQL setup requires high availability: at minimum a primary with one standby, automated failover, and reliable backups. On RDS, Multi-AZ handles this. On self-hosted infrastructure, Patroni with etcd provides the same capability.
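The Patroni-plus-etcd arrangement mentioned above is driven by a per-node configuration file. A minimal sketch of one node's patroni.yml might look like the following; all addresses, paths, and credentials here are placeholders, and a real deployment needs more settings (TLS, pg_hba rules, tuning parameters):

```yaml
# Illustrative patroni.yml fragment for one node of a 3-node cluster.
# Addresses, data_dir, and credentials are placeholders.
scope: pg-cluster
name: node1

etcd3:
  hosts: 10.0.0.1:2379,10.0.0.2:2379,10.0.0.3:2379

restapi:
  listen: 0.0.0.0:8008
  connect_address: 10.0.0.1:8008

bootstrap:
  dcs:
    ttl: 30
    loop_wait: 10
    postgresql:
      use_pg_rewind: true

postgresql:
  listen: 0.0.0.0:5432
  connect_address: 10.0.0.1:5432
  data_dir: /var/lib/postgresql/16/main
  authentication:
    replication:
      username: replicator
      password: change-me
```

Each node runs the same file with its own name and addresses; Patroni uses etcd to elect a leader and orchestrate failover among them.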

Component | AWS RDS (Multi-AZ, db.r6g.large) | Hetzner 3-node Patroni Cluster
Compute | $185/mo (on-demand) | $45/mo (3x CCX23, 8 vCPU / 32 GB each)
Storage (500 GB gp3) | $57.50/mo | $0 (included with server)
Provisioned IOPS (3,000) | $36/mo | $0
Automated backups (7 days) | $25/mo | $10/mo (object storage)
Data transfer out (100 GB) | $9/mo | $0 (included in traffic)
Monitoring / tooling | $0 (CloudWatch included) | $0 (Prometheus + Grafana)
Total | ~$312/mo | ~$55/mo
Annualized | ~$3,744/yr | ~$660/yr
Savings | | ~$3,084/yr

The RDS number climbs further if you add read replicas, increase IOPS, or enable Performance Insights. At 10,000 provisioned IOPS, your storage bill alone exceeds $300/month.

The Hetzner estimate assumes dedicated-vCPU cloud instances (the CCX line). If your workload justifies larger instances, costs scale proportionally, but the ratio typically holds.

ClickHouse: ClickHouse Cloud vs. Self-Hosted on Hetzner

ClickHouse is where the cost gap becomes genuinely striking. ClickHouse Cloud pricing is consumption-based, which sounds attractive but scales steeply with query volume and storage.

Component | ClickHouse Cloud (Production Tier) | Hetzner 3-node ClickHouse Cluster
Compute (baseline) | $400–800/mo (3 nodes, 8 vCPU / 32 GB) | $45/mo (3x CCX23)
Storage (1 TB) | $23/mo | $10/mo (object storage for backups)
Query compute overages | $0.20 per million queries, variable | $0
Backup storage | Included at base tier | $5/mo
Data transfer | Variable, $0.01–0.08/GB | $0 (traffic included)
Total (moderate usage) | ~$500–900/mo | ~$60/mo
Total (heavy usage) | ~$1,500–2,500+/mo | ~$100/mo
Annualized (moderate) | ~$7,200/yr | ~$720/yr
Savings (moderate) | | ~$6,480/yr

The ClickHouse Cloud pricing model is particularly sensitive to query patterns. A single poorly optimized query or a spike in analytical workload can generate unexpected charges. Self-hosting removes this variable entirely.
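The sensitivity to query volume is easy to see with a quick calculation. This sketch uses the $0.20-per-million-queries figure from the comparison table above, which is a representative rate for this article, not an official ClickHouse Cloud price:

```python
# How a consumption-based overage rate reacts to query volume.
# The rate is the representative figure from the comparison table,
# not an official ClickHouse Cloud price.

RATE_PER_MILLION = 0.20  # dollars per million queries (illustrative)

def overage_cost(queries_per_day: int, days: int = 30) -> float:
    """Monthly overage charge for a given daily query volume."""
    total_queries = queries_per_day * days
    return total_queries / 1_000_000 * RATE_PER_MILLION

for qpd in (1_000_000, 10_000_000, 100_000_000):
    print(f"{qpd:>11,} queries/day -> ${overage_cost(qpd):,.2f}/mo")
```

Each 10x jump in query volume is a 10x jump in the overage line, which is exactly the variance that makes consumption billing hard to budget against.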

Redis: AWS ElastiCache vs. Self-Hosted on Hetzner

Redis is often treated as a secondary concern, but ElastiCache pricing for a production configuration is surprisingly high relative to the modest resources it provides.

Component | AWS ElastiCache (cache.r6g.large, Multi-AZ) | Hetzner 3-node Redis Sentinel
Primary + replica compute | $210/mo (on-demand) | $18/mo (3x CPX21, 3 vCPU / 8 GB each)
Backup storage | $0.085/GB-mo | $2/mo
Data transfer out (50 GB) | $4.50/mo | $0
Total | ~$215/mo | ~$20/mo
Annualized | ~$2,580/yr | ~$240/yr
Savings | | ~$2,340/yr

For teams running all three — PostgreSQL, ClickHouse, and Redis — the combined managed cost can easily reach $1,000–4,000/month. The self-hosted equivalent on Hetzner runs closer to $135–220/month.


Hidden Costs on Both Sides

Hidden Costs of Managed Services

IOPS charges. On AWS, provisioned IOPS for gp3 volumes are billed separately. A busy PostgreSQL instance needing 5,000+ IOPS can add $100–200/month to your storage bill alone.

Egress fees. Transferring data out of AWS or GCP incurs egress charges. At $0.09/GB (AWS standard), a data-intensive application pulling 1 TB/month pays $90 in transfer fees on top of everything else.

Read replicas. Each read replica is billed at roughly the same rate as the primary. Two read replicas triple your compute cost.

Vendor lock-in. Migrating away from a managed service later involves significant engineering effort. This is a real cost even if it does not appear on a monthly invoice.

Opaque scaling. Autoscaling on consumption-based platforms (ClickHouse Cloud, PlanetScale) can generate bills that are hard to predict until the invoice arrives.

Hidden Costs of Self-Hosting

Engineering time. Setting up a production-grade PostgreSQL cluster correctly (Patroni configuration, etcd, HAProxy, pgBouncer, and automated backups) takes meaningful time. If you bill engineering time at $150/hour, even 20 hours of setup represents $3,000. Ongoing maintenance adds more.
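One way to weigh this cost is a simple breakeven calculation against the recurring savings. This sketch reuses the figures from this article (the $3,084/yr PostgreSQL savings and the 20-hour setup estimate); it deliberately ignores ongoing maintenance time, which pushes breakeven further out:

```python
# Breakeven sketch: upfront engineering time vs. recurring savings.
# Figures are taken from this article's estimates and are illustrative.

setup_hours = 20
hourly_rate = 150           # $/hour
monthly_savings = 3084 / 12  # ~$257/mo, from the PostgreSQL comparison

setup_cost = setup_hours * hourly_rate  # $3,000
breakeven_months = setup_cost / monthly_savings

print(f"setup cost: ${setup_cost:,}")
print(f"breakeven after ~{breakeven_months:.1f} months")
```

On these numbers the setup cost pays for itself in under a year, but the margin is thin enough that ongoing maintenance hours genuinely matter to the comparison.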

Expertise. Misconfiguring replication, failing to test failover, or not validating that backups are actually restorable are risks. The cost of a data loss incident vastly exceeds any infrastructure savings.

Incident response. When a managed service has an outage, you open a support ticket. When your self-hosted cluster has an incident, you are debugging it. On-call burden is a real labor cost.

Keeping up with versions. Security patches and version upgrades require planned maintenance windows. Managed services handle this (sometimes less gracefully than you would like, but they handle it).


Decision Framework

Choose managed if:

  • Your team has fewer than five engineers and database infrastructure is not your core competency.
  • You are in early-stage product development and iteration speed matters more than infrastructure cost.
  • Your data has compliance requirements that your managed service already satisfies (SOC 2, HIPAA, etc.) and you would need to obtain those certifications independently for self-hosted infrastructure.
  • You need immediate, enterprise-grade support SLAs and do not have senior database engineers on staff.
  • Your monthly infrastructure spend is below $500. At this scale, the absolute savings from self-hosting are small relative to the operational complexity introduced.
  • You have already had a data incident and do not yet have the tooling and processes to prevent the next one.

Choose self-hosted if:

  • Your database infrastructure costs are a meaningful budget line item (roughly $1,000/month or more).
  • You have at least one engineer who is comfortable with Linux system administration and the specific database technology you are running.
  • You require configuration options, extensions, or versions that your managed service does not support.
  • Your data residency requirements or network topology make a specific cloud provider impractical.
  • You need predictable costs. Consumption-based managed billing introduces variance that can be difficult to budget against.
  • You are running workloads where co-located compute and database instances on the same bare-metal or dedicated hosts would provide meaningful latency or throughput improvements.
  • Your team already maintains other self-hosted infrastructure and adding database clusters fits your existing operational model.

How sshploy Helps

The main argument against self-hosting is the operational overhead. sshploy is designed to address that directly.

sshploy provides production-tested Ansible playbooks for deploying PostgreSQL (via Patroni), ClickHouse, and Redis clusters to any server you can reach over SSH. The deployment process handles:

  • Full cluster provisioning: all nodes configured in a single run
  • Replication setup: streaming replication for PostgreSQL, distributed replication for ClickHouse, Sentinel configuration for Redis
  • High availability: automatic failover with Patroni + etcd, HAProxy for connection routing
  • Connection pooling: pgBouncer for PostgreSQL
  • Monitoring: Prometheus exporters and pre-built Grafana dashboards for each database type
  • Backup configuration: automated backups to object storage (S3-compatible)
  • Firewall rules: Docker-aware UFW configuration to prevent accidental exposure

The result is that the "20 hours of setup" estimate above becomes closer to 20 minutes. You still need to understand what you are running and how to respond when something needs attention, but the barrier to getting a correct initial deployment is substantially lower.

sshploy works with any cloud provider or bare-metal host that accepts SSH connections. Hetzner, OVH, Vultr, DigitalOcean, Linode, and on-premises hardware all work. You are not locked into a specific provider.

The operational burden that remains after deployment is real but manageable: monitoring your dashboards, validating backups periodically, and handling version upgrades. For most teams, this is achievable with existing engineering capacity once the initial setup complexity is removed.


FAQ

Is self-hosting actually reliable enough for production?

Yes, when configured correctly. Patroni-managed PostgreSQL clusters power production workloads at companies ranging from early-stage startups to large enterprises. ClickHouse clusters handle petabyte-scale analytics workloads at organizations like Cloudflare and Yandex. The question is not whether self-hosted databases can be reliable — they can — but whether your team has the setup and operational practices to achieve that reliability. The risk is in the initial configuration and ongoing discipline, not in the technology itself.

What happens when a self-hosted node goes down at 3am?

With proper high-availability configuration, not much. Patroni handles automatic failover in 10–30 seconds. Redis Sentinel promotes a replica automatically. Your application reconnects through HAProxy or your connection pooler. You get an alert, review the situation in the morning, and replace the failed node when convenient. The failure handling is automated; your involvement is the review and remediation afterward.
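The application-side part of this story is a reconnect loop. A minimal sketch of retrying with capped exponential backoff and jitter looks like the following; `connect` here is a placeholder for your driver's connect call (psycopg, redis-py, etc.), and the timing parameters are illustrative:

```python
import random
import time

# Retry a connection during a 10-30 second failover window using
# capped exponential backoff with full jitter. `connect` is a
# placeholder for a real driver's connect call.

def connect_with_backoff(connect, max_attempts=8, base_delay=0.5, cap=10.0):
    for attempt in range(max_attempts):
        try:
            return connect()
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise
            # Full jitter keeps many clients from reconnecting in lockstep.
            delay = random.uniform(0, min(cap, base_delay * 2 ** attempt))
            time.sleep(delay)
```

Many drivers and poolers offer equivalent retry behavior out of the box; the point is that with HAProxy routing to the new primary, the application only needs to tolerate a brief window of failed connections.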

Do I need dedicated database engineers to self-host?

Not necessarily, but you need engineers who are comfortable with Linux administration and willing to learn the operational basics of the database they are running. A generalist backend engineer can successfully operate a Patroni cluster with the right tooling and documentation. Where you genuinely need specialist expertise is in performance tuning at very high scale or diagnosing unusual replication or corruption scenarios.

What about compliance? Does self-hosting make audits harder?

It depends on the compliance framework. For SOC 2 Type II, self-hosted infrastructure is perfectly acceptable, but you need to demonstrate appropriate controls: access logging, encryption at rest and in transit, backup validation, change management, and incident response procedures. Managed services sometimes simplify evidence collection because the provider has already obtained their own compliance certifications. Self-hosting requires you to document and demonstrate those controls yourself, which is additional work but not an obstacle for most common frameworks.

Can I migrate from a managed service to self-hosted later?

Yes, though the complexity varies by database. PostgreSQL migration typically involves pg_dump/pg_restore or logical replication with minimal downtime. ClickHouse has built-in remote table functions that make data migration straightforward. Redis migration is usually simple given that Redis datasets are often smaller and more tolerant of brief downtime. The tooling exists to migrate off managed services, but it requires planning and testing. Starting self-hosted and moving to managed is generally easier than the reverse, since managed services sometimes use proprietary storage formats or extensions that complicate extraction.

Ready to deploy?

Skip the manual setup. sshploy handles the entire deployment for you.

Deploy PostgreSQL with Patroni