
Sharded, replicated analytics clusters
Deploy production-ready ClickHouse clusters with configurable shards and replicas. Includes ClickHouse Keeper for coordination, CHProxy for load balancing, and optional S3 backups with tiered storage.
Configure your cluster topology with flexible sharding and replication. Each shard can have multiple replicas for high availability and data redundancy.
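For illustration, this is a minimal sketch of what a 2-shard, 2-replica topology looks like in ClickHouse's remote_servers configuration. The cluster and host names here are placeholders, not the recipe's actual generated output:

    <clickhouse>
      <remote_servers>
        <analytics_cluster>            <!-- placeholder cluster name -->
          <shard>
            <internal_replication>true</internal_replication>
            <replica><host>ch-s1-r1</host><port>9000</port></replica>
            <replica><host>ch-s1-r2</host><port>9000</port></replica>
          </shard>
          <shard>
            <internal_replication>true</internal_replication>
            <replica><host>ch-s2-r1</host><port>9000</port></replica>
            <replica><host>ch-s2-r2</host><port>9000</port></replica>
          </shard>
        </analytics_cluster>
      </remote_servers>
    </clickhouse>

With internal_replication enabled, Distributed tables write to one replica per shard and the ReplicatedMergeTree engine propagates the data to the other replicas in that shard.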
ClickHouse Keeper provides distributed coordination for your cluster without external dependencies like ZooKeeper.
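As a rough sketch of how such a Keeper ensemble is defined (hostnames, ports, and paths below are illustrative defaults, not the recipe's exact layout), each Keeper node carries a keeper_server section listing all Raft members:

    <clickhouse>
      <keeper_server>
        <tcp_port>9181</tcp_port>
        <server_id>1</server_id>       <!-- unique per Keeper node -->
        <log_storage_path>/var/lib/clickhouse/coordination/log</log_storage_path>
        <snapshot_storage_path>/var/lib/clickhouse/coordination/snapshots</snapshot_storage_path>
        <raft_configuration>
          <server><id>1</id><hostname>keeper-1</hostname><port>9234</port></server>
          <server><id>2</id><hostname>keeper-2</hostname><port>9234</port></server>
          <server><id>3</id><hostname>keeper-3</hostname><port>9234</port></server>
        </raft_configuration>
      </keeper_server>
    </clickhouse>

The ClickHouse servers then point their zookeeper section at the Keeper nodes on port 9181, so replication and DDL coordination work exactly as they would with ZooKeeper.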
CHProxy load balances queries across the healthy nodes of your ClickHouse cluster and routes them according to per-user rules, so applications connect to a single endpoint instead of individual replicas.
Integrated backup and restore with S3-compatible storage for reliable disaster recovery.
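Independent of the tooling the recipe ships, the underlying capability can be illustrated with ClickHouse's native BACKUP and RESTORE statements against an S3 endpoint (the bucket URL and credentials below are placeholders):

    -- full backup of a database to S3-compatible storage
    BACKUP DATABASE analytics
        TO S3('https://s3.example.com/my-bucket/backups/analytics', 'ACCESS_KEY_ID', 'SECRET_KEY');

    -- restore from the same location on a fresh cluster
    RESTORE DATABASE analytics
        FROM S3('https://s3.example.com/my-bucket/backups/analytics', 'ACCESS_KEY_ID', 'SECRET_KEY');

Incremental backups are possible by passing a base_backup setting that points at an earlier backup location.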
Optimize costs with automatic data tiering between hot and cold storage. Define a cold storage location (such as an S3 bucket) in a storage policy, and ClickHouse moves older parts there automatically according to your TTL rules.
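A minimal sketch of such a storage policy, assuming an S3-backed cold disk named s3_cold and a policy named hot_cold (endpoint and credentials are placeholders):

    <clickhouse>
      <storage_configuration>
        <disks>
          <s3_cold>
            <type>s3</type>
            <endpoint>https://s3.example.com/my-bucket/clickhouse-cold/</endpoint>
            <access_key_id>ACCESS_KEY_ID</access_key_id>
            <secret_access_key>SECRET_KEY</secret_access_key>
          </s3_cold>
        </disks>
        <policies>
          <hot_cold>
            <volumes>
              <hot><disk>default</disk></hot>    <!-- local disk for recent data -->
              <cold><disk>s3_cold</disk></cold>  <!-- S3 for aged-out data -->
            </volumes>
          </hot_cold>
        </policies>
      </storage_configuration>
    </clickhouse>

A table opts in with SETTINGS storage_policy = 'hot_cold' and a clause such as TTL event_date + INTERVAL 90 DAY TO VOLUME 'cold', after which background merges move expired parts to the cold volume.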
Each ClickHouse node includes a restore_replica.sh script that can recover a failed replica by streaming metadata and access control data from a healthy peer in the same shard. The script automatically stops the local container, transfers the necessary files via SSH, sets proper ownership, creates a force_restore_data flag, and restarts ClickHouse. This allows you to quickly restore a replica without manual intervention or complex recovery procedures.
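The shipped restore_replica.sh may differ in detail, but the flow described above looks roughly like the following sketch, which assumes Docker Compose, SSH access between nodes, default ClickHouse data paths, and the official image's clickhouse user (uid/gid 101):

    #!/usr/bin/env bash
    # Sketch of the replica-restore flow; service name, peer host, and paths
    # are assumptions, not the recipe's exact script.
    set -euo pipefail

    HEALTHY_PEER="$1"                 # hostname of a healthy replica in the same shard
    CH_DATA=/var/lib/clickhouse

    # 1. Stop the local ClickHouse container
    docker compose stop clickhouse

    # 2. Copy table metadata and access-control data from the healthy peer (rsync over SSH)
    rsync -a "${HEALTHY_PEER}:${CH_DATA}/metadata/" "${CH_DATA}/metadata/"
    rsync -a "${HEALTHY_PEER}:${CH_DATA}/access/"   "${CH_DATA}/access/"

    # 3. Fix ownership and set the flag that tells ClickHouse to re-fetch data from replicas
    chown -R 101:101 "${CH_DATA}/metadata" "${CH_DATA}/access"
    mkdir -p "${CH_DATA}/flags"
    touch "${CH_DATA}/flags/force_restore_data"
    chown -R 101:101 "${CH_DATA}/flags"

    # 4. Restart ClickHouse; replicated tables stream their parts back from the peer
    docker compose start clickhouse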
Customize your ClickHouse configuration without editing raw XML files. Our visual config editor layers your customizations on top of battle-tested defaults.
FAQ
Common questions about this deployment recipe.
Pay once, get free updates for a year. After that, pay again to continue receiving updates.
50 spots remaining
One-time payment • Free updates for 1 year
After 1 year, pay again to continue receiving updates
Explore our other production-ready database clusters.

Deploy production-ready PostgreSQL clusters with Patroni for automatic failover, etcd for coordination, PgBouncer for connection pooling, and optional HAProxy for read/write splitting. Includes pgBackRest for backups and support for cross-region read replicas.

Deploy Redis clusters with Sentinel for automatic failover and HAProxy for master routing. Support for cross-region replicas enables faster reads and geo-distributed caching.