PG Horizon
v1.0.0 · Free to use · Released 29 April 2026

pgpipe: PostgreSQL change-data-capture, 57% faster than native.

pgpipe streams row and schema changes from one PostgreSQL database to another in real time — with strict transaction ordering, automatic DDL sync, a built-in web dashboard, dead-letter queue, and Prometheus metrics. Built for analytics replicas, disaster recovery, and zero-downtime database moves.

Free to use for everyone in v1. No registration, no email wall.

benchmark
$ pgbench -c 10 -T 20 source_db
$ pgpipe start -c pgpipe.yaml

┌───────────────────────────────────────────┐
│  Throughput                               │
├───────────────────────────────────────────┤
│  pgpipe          8,836  events/sec        │
│  Native PG       5,643  events/sec        │
├───────────────────────────────────────────┤
│  Replication lag                          │
├───────────────────────────────────────────┤
│  pgpipe          0.001 s   strict mode    │
│  Native PG       0.012 s                  │
├───────────────────────────────────────────┤
│  Drain time (after backlog)               │
├───────────────────────────────────────────┤
│  pgpipe          3 s                      │
│  Native PG       5 s                      │
└───────────────────────────────────────────┘

→ pgpipe is 1.57× faster, with strict
  per-transaction ordering preserved.
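The speedup figure follows directly from the throughput numbers in the table:

```shell
# 8,836 events/sec (pgpipe) vs 5,643 events/sec (native logical replication)
awk 'BEGIN { printf "%.2f\n", 8836 / 5643 }'
# prints 1.57
```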
What's in the box

Built for production. Free in v1.

Everything PostgreSQL's native logical replication is missing — observability, schema sync, error handling, and a UI — without giving up the ordering guarantees that matter.

Extreme throughput

8,836 events/sec with strict per-transaction ordering — 1.57× faster than native PostgreSQL logical replication, with the same consistency guarantees.

Full DDL replication

Detects and applies 12 schema-change types (ADD/DROP/ALTER COLUMN, CREATE/DROP INDEX, RENAME, …) via event triggers — no app downtime to add a column.
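pgpipe's own trigger definitions are internal to the tool, but the underlying Postgres mechanism is standard. A minimal sketch of a DDL-capturing event trigger (function and trigger names here are illustrative, not pgpipe's):

```sql
-- Illustrative only: a function that reports each DDL command,
-- and an event trigger that fires it after every DDL statement.
CREATE FUNCTION log_ddl() RETURNS event_trigger
LANGUAGE plpgsql AS $$
DECLARE
  cmd record;
BEGIN
  FOR cmd IN SELECT * FROM pg_event_trigger_ddl_commands() LOOP
    RAISE NOTICE 'DDL captured: % on %', cmd.command_tag, cmd.object_identity;
  END LOOP;
END;
$$;

CREATE EVENT TRIGGER capture_ddl
  ON ddl_command_end
  EXECUTE FUNCTION log_ddl();
```

A replication tool records these events into a queue instead of raising notices, so schema changes travel through the same ordered stream as row changes.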

Web dashboard & setup wizard

Five-step first-run flow, live table management, real-time lag and error visibility — no YAML wrestling required.

Strict or parallel modes

Single-writer strict ordering for financial workloads, or parallel apply for analytics throughput. Pick per pipeline.
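Assuming the mode is chosen in the pipeline config (the key name below is a guess, not taken from the docs above), the per-pipeline choice might look like:

```yaml
# Hypothetical key names -- check pgpipe's reference config for the real ones.
pipeline:
  mode: strict      # single-writer, per-transaction ordering (financial workloads)
  # mode: parallel  # parallel apply for analytics throughput
```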

Dead-letter queue

Failed events land in a DLQ with full context — inspect, fix, and replay via REST API. You never silently lose data.
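The endpoints below are a hypothetical sketch of what inspect-and-replay over REST could look like; consult the actual API reference for the real paths.

```shell
# Hypothetical endpoints -- illustrative only.
curl -s http://localhost:8080/api/dlq                    # list failed events with full context
curl -s -X POST http://localhost:8080/api/dlq/42/replay  # replay event 42 after fixing the cause
```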

Prometheus & Kubernetes-ready

15 metrics out of the box, liveness/readiness probes, JWT auth, TLS, graceful shutdown. Drop into your existing platform.
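The probe paths below follow common conventions but are assumptions, not confirmed endpoints; a Kubernetes deployment fragment might look like:

```yaml
# Probe sketch -- /healthz and /readyz are assumed paths, port 8080 per the quickstart.
livenessProbe:
  httpGet: { path: /healthz, port: 8080 }
readinessProbe:
  httpGet: { path: /readyz, port: 8080 }
```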

Architecture

A proper streaming pipeline.

pgpipe decodes the WAL via the pgoutput v2 protocol, snapshots tables in parallel on first run, then streams changes with batching, backpressure, and crash-safe checkpoints.

STEP 1

Source PG

Logical replication slot + publication. Standard Postgres ≥ 12.

STEP 2

Decoder

pgoutput v2 with streaming. Handles long-running transactions.

STEP 3

Pipeline

Batch → backpressure → DLQ on failure. Schema changes preserved.

STEP 4

Applier

Strict-ordered SendBatch pipelining. Sliding-window checkpoint.

STEP 5

Destination PG

Same schema (or remapped) — including DDL and indexes.
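If pgpipe doesn't provision them for you, Step 1's publication and slot can be created with standard Postgres commands (names are illustrative; the source must run with `wal_level = logical`):

```sql
-- Run on the source database.
CREATE PUBLICATION pgpipe_pub FOR TABLE public.users;
SELECT pg_create_logical_replication_slot('pgpipe_slot', 'pgoutput');
```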

Quickstart

From zero to replicating in under a minute.

The fastest way to try pgpipe is with Docker Compose. The dashboard opens at http://localhost:8080 with a setup wizard.

Try it with Docker

bash
# Bring up source + destination + pgpipe
docker compose up -d --build

# Open the dashboard
open http://localhost:8080

# Insert a row in source — watch it land in dest
docker compose exec source-db psql -U postgres source_db \
  -c "INSERT INTO public.users (name, email)
       VALUES ('Alice', 'alice@example.com');"

docker compose exec dest-db psql -U postgres dest_db \
  -c "SELECT * FROM public.users;"
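If you're assembling your own stack rather than using the repo's compose file, a minimal sketch might look like this (the image name and environment keys are assumptions):

```yaml
# Hypothetical compose sketch -- the repo's own docker-compose.yml is authoritative.
services:
  source-db:
    image: postgres:16
    command: ["postgres", "-c", "wal_level=logical"]
    environment: { POSTGRES_DB: source_db, POSTGRES_PASSWORD: postgres }
  dest-db:
    image: postgres:16
    environment: { POSTGRES_DB: dest_db, POSTGRES_PASSWORD: postgres }
  pgpipe:
    image: pgpipe/pgpipe   # assumed image name
    ports: ["8080:8080"]
    depends_on: [source-db, dest-db]
```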

Or run the binary directly

pgpipe.yaml
source:
  host: "source-db.example.com"
  database: "myapp"
  user: "pgpipe"
  password: "secret"
  tables:
    - schema: "public"
      name: "users"

destination:
  host: "dest-db.example.com"
  database: "myapp_replica"
  user: "pgpipe"
  password: "secret"

# Then:  pgpipe start -c pgpipe.yaml
Who pgpipe is for

Anywhere you need PostgreSQL changes somewhere else.

Analytics replicas

Keep a reporting database in sync without taxing the primary. Schema changes flow through automatically.

Disaster recovery

Continuously replicated standby in another region or provider, ready for failover.

Zero-downtime moves

Migrate between providers, versions, or clouds with a strict-ordered cutover and verifiable consistency.

Multi-tenant isolation

Replicate to a different schema name on the destination — useful for blue-green and per-tenant warehouses.
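Assuming the remap is expressed in the destination config (the key name is a guess), a sketch:

```yaml
# Hypothetical key -- check the reference config for the real name.
destination:
  schema_mapping:
    public: tenant_a   # write source schema "public" into destination schema "tenant_a"
```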

Distributed coherence

Stream Postgres changes to keep caches, search indexes, or microservices in sync (attach your own consumer to the WAL stream).

Compliance archives

Continuous replication into a retention-only, append-friendly destination with full DDL history.

Need a hand?

We built pgpipe — and we run it for clients.

If you'd rather not operate the pipeline yourself, PG Horizon can deploy, monitor, and support pgpipe in your environment as part of our managed services.