AWS RDS vs Self-Hosted PostgreSQL: Full Cost Breakdown
Compare AWS RDS and self-hosted PostgreSQL with real pricing at every scale. Covers total cost of ownership, performance differences, and when each option makes sense.
AWS RDS is the default choice for PostgreSQL hosting. It shows up first in every tutorial, it integrates cleanly with the rest of the AWS ecosystem, and it removes the operational burden of running a database yourself. For many teams, that is exactly the right trade-off.
But RDS pricing is not straightforward. The headline instance cost is only the starting point. Storage, IOPS, backups, data transfer, read replicas, and Multi-AZ deployments all add separate line items that compound quickly. At moderate scale, many teams discover they are paying 3-6x what the same workload would cost on dedicated or bare-metal infrastructure.
This guide compares AWS RDS and self-hosted PostgreSQL with real numbers at three different scale points. The goal is not to argue that one option is universally better -- it is to give you the actual cost data and trade-offs so you can make the right call for your situation.
What AWS RDS Gives You
RDS PostgreSQL is a managed service. AWS handles:
- Provisioning and patching. Instance creation, minor version upgrades, and OS-level security patches.
- Automated backups. Daily snapshots with configurable retention (up to 35 days) and continuous WAL archiving for point-in-time recovery.
- Multi-AZ failover. A synchronous standby in a different availability zone. If the primary fails, AWS promotes the standby automatically. Failover typically takes 60-120 seconds.
- Monitoring. CloudWatch metrics for CPU, memory, disk, IOPS, connections, and replication lag. Performance Insights provides query-level analysis (free tier for 7-day retention, paid beyond that).
- Encryption. At-rest encryption via KMS and in-transit encryption via SSL.
The pricing model has multiple dimensions:
- Instance hours -- billed per hour based on instance class (e.g., db.r6g.large, db.r6g.xlarge).
- Storage -- billed per GB-month for gp3, io1, or io2 volumes.
- Provisioned IOPS -- gp3 includes 3,000 baseline IOPS; additional IOPS are billed separately. io1/io2 volumes charge per provisioned IOPS.
- Backup storage -- free up to the size of your provisioned storage; billed per GB-month beyond that.
- Data transfer -- inbound is free; outbound to the internet or other regions is billed per GB.
- Multi-AZ -- roughly doubles the instance cost.
- Read replicas -- each replica is billed at the same rate as a standalone instance.
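These dimensions combine into a bill roughly like this. A minimal sketch, using the per-unit rates quoted later in this guide and treating everything as on-demand; treat the rates as illustrative, not authoritative AWS quotes:

```python
# Rough monthly cost model for the RDS pricing dimensions above.
# Rates are the US East figures cited in this guide -- verify against
# the AWS pricing page for your region before budgeting.

def estimate_rds_monthly(
    instance_hourly_rate: float,  # on-demand $/hour for the instance class
    multi_az: bool,               # Multi-AZ roughly doubles instance cost
    read_replicas: int,           # each replica is billed as a full instance
    storage_gb: float,            # gp3 storage at ~$0.115/GB-month
    extra_iops: int,              # IOPS above the 3,000 gp3 baseline, $0.006/IOPS-month
    backup_overage_gb: float,     # backup storage beyond the free tier, $0.095/GB-month
    egress_gb: float,             # data transfer out at $0.09/GB
) -> float:
    hours = 730  # average hours in a month
    instance = instance_hourly_rate * hours * (2 if multi_az else 1)
    replicas = instance_hourly_rate * hours * read_replicas
    variable = (storage_gb * 0.115 + extra_iops * 0.006
                + backup_overage_gb * 0.095 + egress_gb * 0.09)
    return round(instance + replicas + variable, 2)
```

Even this simplified model shows how quickly the non-instance line items stack on top of the headline price.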
What Self-Hosted PostgreSQL Requires
Self-hosting means you provision servers, install PostgreSQL, and manage the operational stack yourself. For production use, a bare PostgreSQL install is not sufficient. You need:
- High availability. Patroni with etcd handles leader election and automatic failover. Without it, a primary failure requires manual intervention.
- Connection pooling. PgBouncer multiplexes application connections across a smaller pool of actual PostgreSQL connections, preventing connection exhaustion under load.
- Load balancing. HAProxy routes read traffic to replicas and write traffic to the primary, with health checks against Patroni's REST API for automatic failover detection.
- Backups. pgBackRest provides full, differential, and incremental backups with WAL archiving for point-in-time recovery to S3-compatible object storage.
- Monitoring. Prometheus with the postgres_exporter and Grafana dashboards for visibility into query performance, replication lag, and resource utilization.
The infrastructure cost for self-hosting is significantly lower than RDS, but the setup complexity is real. Configuring Patroni, etcd, HAProxy, PgBouncer, and pgBackRest correctly from scratch takes 2-5 days for an experienced engineer, longer if you are learning the components as you go.
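The role-based routing described above can be sketched against Patroni's REST API, which answers HTTP 200 on /primary only from the current leader and on /replica only from a healthy replica. The default port 8008 is the usual Patroni convention; the node list and this minimal logic are illustrative, not a replacement for HAProxy's own health checks:

```python
# Minimal sketch of the role-based health check HAProxy performs against
# Patroni's REST API. Patroni returns HTTP 200 from /primary only on the
# current leader, and 200 from /replica only on a healthy streaming
# replica; anything else means "do not route here".

from http.client import HTTPConnection

def check_role(host: str, endpoint: str, port: int = 8008, timeout: float = 2.0) -> bool:
    """Return True if this Patroni node answers 200 for the given role endpoint."""
    try:
        conn = HTTPConnection(host, port, timeout=timeout)
        conn.request("GET", endpoint)
        ok = conn.getresponse().status == 200
        conn.close()
        return ok
    except OSError:
        return False  # unreachable or refusing nodes are treated as unhealthy

def route(nodes, writes=True):
    """Pick hosts eligible for write traffic (/primary) or read traffic (/replica)."""
    endpoint = "/primary" if writes else "/replica"
    return [n for n in nodes if check_role(n, endpoint)]
```

This is the same contract HAProxy relies on: after a failover, the promoted node starts answering 200 on /primary and traffic follows automatically.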
Detailed Cost Comparison
The following tables use current 2026 pricing. AWS prices are on-demand, US East (N. Virginia) region. Self-hosted prices use Hetzner dedicated servers, which represent the best cost-to-performance ratio for database workloads due to direct-attached NVMe storage and inclusive bandwidth.
Starter Tier: ~$200/month RDS Workload
A small production database. Single primary with Multi-AZ standby, 100 GB storage, moderate query load. Typical for an early-stage SaaS product with a few thousand users.
| Line item | AWS RDS (db.r6g.large, Multi-AZ) | Self-hosted (Hetzner AX42 x2) |
|---|---|---|
| Instance / server cost | $370/mo (Multi-AZ doubles the $185 single-AZ price) | $98/mo (2x AX42: 8-core Ryzen, 64 GB RAM, 2x 512 GB NVMe each at $49/mo) |
| Storage (100 GB gp3) | $11.50/mo | $0 (included: 2x 512 GB NVMe per server) |
| Baseline IOPS (3,000) | $0 (included with gp3) | $0 |
| Additional IOPS (to 6,000) | $18/mo ($0.006/IOP above 3,000) | $0 (NVMe delivers 100k+ IOPS natively) |
| Automated backups (100 GB, 7 days) | $0 (within free tier) | $5/mo (Hetzner Storage Box or S3-compatible) |
| Data transfer out (50 GB) | $4.50/mo ($0.09/GB) | $0 (20 TB included) |
| Monitoring | $0 (CloudWatch basic) | $0 (Prometheus + Grafana, self-hosted) |
| Monthly total | $404 | $103 |
| Annual total | $4,848 | $1,236 |
| Annual savings | -- | $3,612 (74%) |
Note the Multi-AZ reality: RDS Multi-AZ effectively doubles the instance cost because AWS runs a synchronous standby in a second AZ. The self-hosted setup uses two dedicated servers -- one primary, one replica -- with Patroni handling automatic failover. The self-hosted servers each have more RAM (64 GB vs. 16 GB), more CPU cores (8 vs. 2), and faster storage (local NVMe vs. network-attached EBS).
Growth Tier: ~$1,000/month RDS Workload
A growing SaaS product. Higher query volume, larger dataset, one read replica added for reporting queries. 500 GB storage with elevated IOPS requirements.
| Line item | AWS RDS (db.r6g.xlarge, Multi-AZ + 1 read replica) | Self-hosted (Hetzner AX52 x3) |
|---|---|---|
| Primary instance (Multi-AZ) | $740/mo | -- |
| Read replica (single-AZ) | $370/mo | -- |
| Server cost (3-node cluster) | -- | $195/mo (3x AX52: 12-core Ryzen, 128 GB RAM, 2x 1 TB NVMe each at $65/mo) |
| Storage (500 GB gp3, primary) | $57.50/mo | $0 (included) |
| Storage (500 GB gp3, replica) | $57.50/mo | $0 (included) |
| Provisioned IOPS (6,000 additional per volume) | $72/mo (2 volumes x $36) | $0 |
| Backup storage (500 GB, 14 days retention beyond free tier) | $28.75/mo | $10/mo (object storage) |
| Data transfer out (200 GB) | $18/mo | $0 (included) |
| Performance Insights (30-day retention) | $24.39/mo | $0 (Grafana dashboards) |
| Monthly total | $1,368 | $205 |
| Annual total | $16,416 | $2,460 |
| Annual savings | -- | $13,956 (85%) |
At this scale, the self-hosted cluster has three nodes (one primary, two replicas), each with 128 GB RAM and local NVMe. The RDS setup has one primary with Multi-AZ standby and one read replica. Despite being substantially cheaper, the self-hosted cluster has more total compute, more RAM, and faster storage.
Scale Tier: ~$5,000/month RDS Workload
A high-traffic application. Large dataset, high IOPS requirements, multiple read replicas, significant data transfer. 2 TB storage with demanding write throughput.
| Line item | AWS RDS (db.r6g.4xlarge, Multi-AZ + 3 read replicas) | Self-hosted (Hetzner AX102 x5) |
|---|---|---|
| Primary instance (Multi-AZ) | $2,920/mo | -- |
| Read replicas (3x single-AZ) | $4,380/mo (3x $1,460) | -- |
| Server cost (5-node cluster) | -- | $695/mo (5x AX102: 16-core Ryzen, 128 GB RAM, 2x 1.92 TB NVMe each at $139/mo) |
| Storage (2 TB gp3, primary) | $230/mo | $0 (included) |
| Storage (2 TB gp3, per replica, x3) | $690/mo | $0 (included) |
| Provisioned IOPS (12,000 additional, 4 volumes) | $288/mo | $0 |
| Backup storage (2 TB, 30 days, ~4 TB total) | $230/mo | $30/mo (object storage) |
| Data transfer out (1 TB) | $90/mo | $0 (included) |
| Performance Insights (all instances) | $97.56/mo | $0 |
| Monthly total | $8,926 | $725 |
| Annual total | $107,112 | $8,700 |
| Annual savings | -- | $98,412 (92%) |
At scale, the cost ratio becomes extreme. The RDS setup costs over 12x more than the self-hosted equivalent while delivering less total compute capacity. The five Hetzner AX102 servers provide 80 CPU cores, 640 GB RAM, and 19.2 TB of NVMe storage. The RDS setup provides 64 vCPUs (16 per instance across 4 instances), 512 GB RAM, and network-attached storage with provisioned IOPS that still cannot match local NVMe throughput.
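The totals in all three tables can be reproduced directly from their line items:

```python
# Reproduce the monthly totals and annual savings from the three tables above.
tiers = {
    # name: ([RDS line items $/mo], [self-hosted line items $/mo])
    "starter": ([370, 11.50, 18, 4.50],                          [98, 5]),
    "growth":  ([740, 370, 57.50, 57.50, 72, 28.75, 18, 24.39], [195, 10]),
    "scale":   ([2920, 4380, 230, 690, 288, 230, 90, 97.56],    [695, 30]),
}

def summarize(rds_items, self_items):
    rds_mo, self_mo = round(sum(rds_items)), round(sum(self_items))
    return rds_mo, self_mo, (rds_mo - self_mo) * 12  # monthly totals, annual savings

for name, (rds, self_hosted) in tiers.items():
    print(name, summarize(rds, self_hosted))
```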
Hidden RDS Costs Most Teams Miss
The tables above already include the major line items, but several RDS costs are easy to overlook until you see them on the bill.
IOPS Charges
gp3 volumes include 3,000 baseline IOPS. Any PostgreSQL workload with moderate write activity or active analytical queries will exceed this. Provisioning additional IOPS costs $0.006 per IOPS per month. At 10,000 provisioned IOPS, that is $42/month per volume -- and Multi-AZ means you are paying for two volumes.
io1 and io2 volumes are worse: $0.064-$0.080 per provisioned IOPS per month. A 20,000 IOPS io1 volume costs $1,280/month in IOPS alone, before storage.
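The IOPS math is easy to script from the rates above (using the low end of the io1/io2 range):

```python
# Provisioned IOPS charges from the rates quoted above.
GP3_BASELINE = 3000  # IOPS included with every gp3 volume
GP3_RATE = 0.006     # $/IOPS-month above the baseline
IO1_RATE = 0.064     # $/IOPS-month, low end of the quoted io1/io2 range

def gp3_iops_cost(provisioned_iops: int, volumes: int = 1) -> float:
    """Monthly charge for gp3 IOPS above the 3,000 baseline."""
    extra = max(0, provisioned_iops - GP3_BASELINE)
    return round(extra * GP3_RATE * volumes, 2)

def io1_iops_cost(provisioned_iops: int) -> float:
    """Monthly charge for io1 IOPS -- there is no free baseline."""
    return round(provisioned_iops * IO1_RATE, 2)

print(gp3_iops_cost(10_000))            # single volume at 10k IOPS
print(gp3_iops_cost(10_000, volumes=2)) # Multi-AZ pays for the second volume too
print(io1_iops_cost(20_000))            # io1 IOPS charge before any storage cost
```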
Data Transfer
AWS charges $0.09/GB for data transfer out to the internet (first 10 TB/month). Cross-AZ transfer between RDS and your EC2 instances in different AZs costs $0.01/GB in each direction. Cross-region replication for disaster recovery adds $0.02/GB.
These charges are individually small but accumulate with data-heavy applications. A reporting pipeline that exports 500 GB/month pays $45 in transfer fees alone.
Snapshot and Backup Storage
RDS provides free backup storage equal to your provisioned database storage. Beyond that, backup storage costs $0.095/GB-month. With a 1 TB database and 30-day retention (storing multiple full snapshots plus incrementals), total backup storage can reach 3-5x the database size; after the 1 TB free allowance, the 2-4 TB overage adds $190-$380/month.
Read Replica Costs
Each read replica is billed as a separate instance at the full on-demand rate. There is no discount for replicas. Adding two read replicas triples your compute cost. This is one of the most common sources of RDS bill shock -- teams add replicas to handle read load and are surprised when the bill jumps accordingly.
Reserved Instances Save Less Than You Think
AWS offers 1-year and 3-year reserved instance pricing that reduces costs by 30-60%. This helps, but it requires upfront commitment and does not apply to storage, IOPS, transfer, or backup charges. Even with 3-year all-upfront reserved pricing, the self-hosted cost advantage remains substantial at every scale point.
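To see why reserved pricing helps less than the headline discount suggests, apply it to the growth-tier numbers. The 40% figure here is an assumed discount within the advertised 30-60% range, and only the instance-hour line items are eligible:

```python
# Reserved instances discount only the instance-hour line items.
# Figures are from the growth-tier table; 40% is an assumed discount
# within the 30-60% range AWS advertises.
instance_hours = 740 + 370  # Multi-AZ primary + read replica
other_charges = 57.50 + 57.50 + 72 + 28.75 + 18 + 24.39  # storage, IOPS, backup, transfer, PI

on_demand_total = instance_hours + other_charges
reserved_total = instance_hours * (1 - 0.40) + other_charges

print(round(on_demand_total))  # monthly total on demand
print(round(reserved_total))   # monthly total with the assumed 40% RI discount
print(round(100 * (1 - reserved_total / on_demand_total)))  # effective % saved
```

A 40% instance discount shrinks to roughly a 32% reduction in the total bill, and the result is still several times the self-hosted cost for the same tier.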
Performance Comparison: Bare Metal vs. Virtualized
Performance is not just about cost. The architecture of the underlying storage has a direct impact on PostgreSQL workload characteristics.
Storage Latency
RDS uses network-attached EBS volumes. Even gp3 volumes have higher latency (typically 1-4 ms) than direct-attached NVMe (0.1-0.2 ms). This matters for:
- Write-heavy workloads. WAL writes, checkpoint flushes, and autovacuum operations are all sensitive to storage latency.
- Index scans on large tables. When data is not cached in shared_buffers, each page read incurs the full storage round-trip.
- Connection storms. Under high concurrency, storage latency compounds as processes wait for I/O.
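A back-of-the-envelope way to see the impact: treat each synchronous commit as waiting on one WAL fsync at the quoted storage latency. Real PostgreSQL batches WAL writes across sessions (group commit), so this is a per-session floor rather than total cluster throughput:

```python
# Upper bound on single-session synchronous commit rate, assuming each
# commit waits for one WAL fsync at the given storage latency. PostgreSQL's
# group commit raises aggregate throughput well above this, but the
# per-session latency floor still shapes transaction response times.

def max_commits_per_sec(fsync_latency_ms: float) -> int:
    return int(1000 / fsync_latency_ms)

for name, latency_ms in [("EBS gp3, mid-range (2 ms)", 2.0),
                         ("local NVMe, mid-range (0.15 ms)", 0.15)]:
    print(f"{name}: ~{max_commits_per_sec(latency_ms)} commits/s per session")
```

At a mid-range 2 ms EBS latency a session tops out around 500 commits/s, versus several thousand on local NVMe -- an order-of-magnitude gap before any other factor is considered.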
CPU and Memory
RDS instances share the underlying hypervisor with other tenants. While AWS provides consistent baseline performance for most instance types, noisy-neighbor effects can cause occasional performance variability.
Dedicated servers provide the full CPU and memory without virtualization overhead. This is a 5-15% performance advantage before considering storage, and it is consistent -- no variability from other tenants.
Practical Impact
For a typical OLTP workload, the combination of local NVMe and bare-metal CPU translates to roughly 2-4x better throughput (transactions per second) at equivalent hardware specs. For analytical queries scanning large tables, the difference can be even larger due to the storage throughput advantage.
This is not a theoretical concern. Teams running pgbench or real application benchmarks consistently report that the same PostgreSQL configuration produces significantly higher TPS on dedicated hardware with local NVMe compared to equivalent-spec RDS instances. The storage architecture is the primary bottleneck -- PostgreSQL is heavily I/O-bound for most production workloads, and network-attached storage introduces latency at exactly the layer where it matters most.
One area where RDS can close the gap is Aurora PostgreSQL, which uses a purpose-built distributed storage layer that reduces write amplification. Aurora's storage is faster than standard EBS for write-heavy workloads, but it comes at a higher price point and with deeper vendor lock-in.
Feature Comparison
| Feature | AWS RDS | Self-Hosted (Patroni Stack) |
|---|---|---|
| Automated failover | Yes (Multi-AZ, 60-120s) | Yes (Patroni, 10-30s) |
| Point-in-time recovery | Yes (up to 35 days) | Yes (pgBackRest, configurable retention) |
| Connection pooling | Not included (use application-side or RDS Proxy at extra cost) | PgBouncer (included in deployment) |
| Read/write splitting | Application-level or RDS Proxy | HAProxy with automatic health checks |
| Monitoring | CloudWatch + Performance Insights | Prometheus + Grafana (full customization) |
| Extension support | Limited to AWS-approved list (~80 extensions) | Full (install any extension, including pg_cron, timescaledb, pgvector, etc.) |
| Major version upgrades | In-place upgrade (requires downtime window) | Rolling upgrade via Patroni switchover (minimal downtime) |
| Custom PostgreSQL builds | Not possible | Fully supported |
| Network isolation | VPC, security groups | Firewall rules, private networking |
| Encryption at rest | Yes (KMS) | Yes (LUKS or filesystem-level) |
| Cross-region replication | Read replicas in other regions | Streaming replication to any location |
| Vendor lock-in | Moderate (proprietary tooling, IAM integration) | None |
| Setup time | Minutes | Hours (manual) or minutes (with sshploy) |
| Operational overhead | Low | Moderate (monitoring, upgrades, backup validation) |
A few items worth highlighting. RDS connection pooling requires RDS Proxy, which is a separate service billed at $0.015 per vCPU per hour. For a db.r6g.xlarge (4 vCPU), that adds $43.80/month. The self-hosted stack includes PgBouncer at no additional cost.
RDS failover time (60-120 seconds) is slower than Patroni (10-30 seconds) because AWS replaces the underlying instance rather than simply promoting the standby. For applications sensitive to availability, the faster Patroni failover is a meaningful advantage.
Extension support is a hard constraint. If your application requires an extension not on the AWS-approved list, RDS is not an option regardless of other considerations.
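Before committing either way, it is worth checking your extension requirements explicitly. A small sketch working from the output of `SELECT name FROM pg_available_extensions` on a candidate server -- the required list here is illustrative, and note that pgvector registers in the catalog under the name `vector`:

```python
# Given the output of `SELECT name FROM pg_available_extensions` on a
# target server (RDS or self-hosted), report which required extensions
# are missing. The REQUIRED set is an illustrative example.

REQUIRED = {"pg_cron", "timescaledb", "vector"}  # "vector" is pgvector's catalog name

def missing_extensions(available_names):
    return sorted(REQUIRED - set(available_names))

# A hypothetical server's extension list, for illustration:
server_available = {"plpgsql", "pg_stat_statements", "vector"}
print(missing_extensions(server_available))  # -> ['pg_cron', 'timescaledb']
```

If the missing list is non-empty on RDS, the decision is made for you.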
Decision Framework: When to Choose Each Option
RDS is the better choice when:
- Your team has 1-3 engineers and nobody has meaningful Linux or PostgreSQL operations experience.
- Your monthly database spend is under $400. At this scale, the absolute savings from self-hosting (~$200-300/month) may not justify the operational responsibility.
- You are in an early product phase where iteration speed matters more than infrastructure cost. The hours spent on database operations could be spent on product development.
- You have strict compliance requirements (HIPAA, FedRAMP) and your compliance team has already validated AWS RDS. Self-hosting is possible under these frameworks but requires you to document and maintain your own controls.
- You are deeply integrated into the AWS ecosystem (Lambda, SQS, EventBridge) and the convenience of IAM-based authentication and VPC-native connectivity provides real productivity benefits.
Self-hosted is the better choice when:
- Your database costs exceed $500/month on RDS and are growing. The savings compound quickly and fund engineering capacity.
- You need PostgreSQL extensions or configurations that RDS does not support.
- You require predictable costs. RDS billing has too many variable components for reliable budgeting at scale.
- Your team has at least one engineer comfortable with Linux administration.
- You want faster failover (10-30 seconds vs. 60-120 seconds).
- You are running multi-region or hybrid deployments where a cloud-agnostic approach is valuable.
- Storage performance matters. Local NVMe dramatically outperforms EBS for write-heavy or scan-heavy workloads.
Migration Path: RDS to Self-Hosted PostgreSQL
Moving off RDS is straightforward but requires planning. Many teams assume the migration is risky or complex, but PostgreSQL's built-in replication tools make it manageable. The typical approach:
1. Provision your target cluster. Set up a Patroni cluster on your target infrastructure with the same PostgreSQL major version as your RDS instance. Ensure all extensions your application uses are installed and configured. Run your schema migrations against the new cluster to verify compatibility.
2. Initial data migration. Use pg_dump / pg_restore for databases under 100 GB. For larger databases, set up logical replication from RDS to your self-hosted primary -- RDS supports logical replication as a publisher since PostgreSQL 10. This allows continuous replication while you prepare the cutover. Logical replication keeps the target cluster in sync with RDS in near real-time, which minimizes your cutover window.
3. Test thoroughly. Run your application against the self-hosted cluster in a staging environment. Validate query performance, connection pooling behavior through PgBouncer, read/write splitting through HAProxy, and failover scenarios with Patroni. Run your full test suite. Benchmark critical queries to ensure performance meets or exceeds what you had on RDS.
4. Cutover. For logical replication setups, the cutover window is minimal: stop writes to RDS, wait for replication to catch up (seconds), update your application's connection string to point to the HAProxy endpoint, and resume. Total downtime can be under 60 seconds with preparation. For added safety, keep the RDS instance running in read-only mode for a few days as a rollback option.
5. Decommission RDS. After confirming the self-hosted cluster is stable (give it at least a week of production traffic), delete the RDS instance and snapshots. Cancel any reserved instance commitments that are eligible.
Key considerations: verify that all extensions you use on RDS are installed on the self-hosted cluster. Check that your backup and monitoring are functioning before cutover. Test the failover procedure on your new cluster at least once before it handles production traffic. If you are using RDS-specific features like IAM database authentication, you will need to switch to standard PostgreSQL authentication (password-based or certificate-based).
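The logical-replication setup in steps 2 and 4 boils down to two SQL statements. This sketch generates them with hypothetical host, database, and object names; on the RDS side you also need rds.logical_replication set to 1 in the parameter group and a user with the rds_replication role:

```python
# Generate the logical-replication DDL for the migration. The publication
# and subscription names, host, database, and user below are hypothetical
# placeholders -- substitute your own.

def publication_sql(tables=None):
    """Run on the RDS source (PostgreSQL 10+)."""
    scope = "FOR ALL TABLES" if tables is None else "FOR TABLE " + ", ".join(tables)
    return f"CREATE PUBLICATION migration_pub {scope};"

def subscription_sql(source_host, dbname, user):
    """Run on the self-hosted target after the schema has been loaded."""
    conninfo = f"host={source_host} dbname={dbname} user={user}"
    return (f"CREATE SUBSCRIPTION migration_sub "
            f"CONNECTION '{conninfo}' PUBLICATION migration_pub;")

print(publication_sql())
print(subscription_sql("source.example.rds.amazonaws.com", "app", "repl_user"))
```

Once the subscription reports caught-up, the cutover in step 4 is a matter of stopping writes, letting the last changes flow through, and repointing the connection string.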
How sshploy Helps
The operational overhead of self-hosting is the primary counterargument to the cost savings. Setting up Patroni, etcd, HAProxy, PgBouncer, pgBackRest, firewall rules, and monitoring from scratch is a multi-day project that requires familiarity with each component.
sshploy eliminates the setup complexity. You provide your servers (any provider that accepts SSH -- Hetzner, OVH, Vultr, bare metal, or existing cloud VMs), configure the cluster topology, and sshploy runs production-tested Ansible playbooks that deploy the entire stack: PostgreSQL with Patroni for HA, PgBouncer for connection pooling, HAProxy for read/write splitting, pgBackRest for backups to S3-compatible storage, and Prometheus exporters with Grafana dashboards for monitoring. What would take days of manual configuration runs in minutes, and the result is a correctly configured, production-grade cluster -- not a tutorial setup that needs hardening.
You retain full SSH access and root control over your servers. There is no proprietary agent, no vendor lock-in, and no ongoing dependency on sshploy to operate your cluster. The output is standard open-source infrastructure that you own completely.
FAQ
Is the self-hosted cost comparison fair? Hetzner and AWS are not the same thing.
The comparison uses Hetzner because it represents the actual cost of running PostgreSQL on dedicated hardware. AWS EC2 bare metal instances exist but are priced at a premium that makes the comparison less interesting (an i3.metal instance costs ~$3,500/month). The point is that PostgreSQL does not require the AWS ecosystem to run reliably. If your application servers are already on AWS, you can still run your database on Hetzner or another provider and connect over a VPN or direct peering. Many teams do exactly this for cost reasons.
Can self-hosted PostgreSQL really match RDS reliability?
Yes. Patroni-managed PostgreSQL clusters run in production at companies of every size, from startups to enterprises handling millions of transactions per day. Patroni's automatic failover is actually faster than RDS Multi-AZ failover (10-30 seconds vs. 60-120 seconds). The reliability depends on your configuration and operational practices -- testing failover, validating backups, monitoring replication lag -- not on whether the infrastructure is managed by a third party.
What about RDS Aurora? Is it worth the premium over standard RDS?
Aurora PostgreSQL uses a different storage architecture (distributed, log-structured) that provides better write throughput and faster replication than standard RDS. It costs roughly 20% more than standard RDS. Aurora makes sense if you need the specific performance characteristics of its storage layer and are committed to AWS. It does not change the fundamental cost comparison with self-hosted -- Aurora pricing is still several times higher than dedicated hardware, and its proprietary storage layer increases lock-in.
How much engineering time should I budget for ongoing self-hosted operations?
After initial setup, plan for 2-4 hours per month for routine operations: reviewing monitoring dashboards, validating backup integrity, applying minor version updates, and occasional capacity planning. Major version upgrades (once per year) take 4-8 hours of planning and execution. This is a meaningful but manageable commitment for a team with basic infrastructure experience. If your database spend on RDS exceeds $1,000/month, the cost savings more than cover this time at any reasonable engineering hourly rate.
What if something goes catastrophically wrong with a self-hosted cluster?
The same thing that happens with any production system: you restore from backups. pgBackRest with WAL archiving provides point-in-time recovery to any second within your retention window. The recovery process is well-documented and deterministic. The real question is whether your backups are tested -- run a restore drill quarterly. Beyond data recovery, Patroni handles the most common failure scenario (node failure) automatically. For scenarios that Patroni cannot handle (storage corruption, accidental data deletion), your backup strategy is your safety net regardless of whether you use RDS or self-host.
Ready to deploy?
Skip the manual setup. sshploy handles the entire deployment for you.
Deploy PostgreSQL with Patroni
Related guides
PostgreSQL High Availability with Patroni: Step-by-Step Guide
Set up a production-ready PostgreSQL HA cluster with Patroni, etcd, PgBouncer, and HAProxy. Covers architecture, automatic failover, connection pooling, and backup strategies.
Self-Hosting vs Managed Databases: A Practical Cost & Control Comparison
Compare self-hosted and managed database services across cost, control, performance, and operational overhead. Includes real pricing breakdowns for PostgreSQL, ClickHouse, and Redis.