When building change data capture (CDC) pipelines with PostgreSQL, selecting the right output plugin determines how database changes get formatted and delivered—whether you're replicating to another PostgreSQL instance, streaming to Kafka, or delivering changes to a webhook. Understanding these plugins is crucial for implementing efficient real-time data synchronization across systems.
This guide examines the different PostgreSQL logical decoding output plugins, their technical characteristics, and practical implementation considerations for modern data architectures.
To understand how different output plugins work, let's start with a concrete scenario. We'll set up logical replication, make a change, and see how different plugins format that change.
First, check whether logical replication is enabled by inspecting your WAL level:
SHOW wal_level;
If it's not set to 'logical', enable it: managed providers (AWS RDS, Google Cloud SQL, Azure) expose this through their own parameter groups or flags, while self-hosted servers can change it directly, as shown below.
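A minimal sketch for self-hosted servers (managed services typically reject ALTER SYSTEM and require their own parameter configuration instead):
ALTER SYSTEM SET wal_level = logical;
-- wal_level is read at server start, so restart PostgreSQL
-- before the new value takes effect.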
-- 1. Create a test table
CREATE TABLE users (
  id SERIAL PRIMARY KEY,
  name VARCHAR(255) NOT NULL,
  email VARCHAR(255) NOT NULL,
  created_at TIMESTAMP DEFAULT NOW()
);
-- 2. Set replica identity for complete change tracking
ALTER TABLE users REPLICA IDENTITY FULL;
-- 3. Insert initial data
INSERT INTO users (name, email) VALUES ('John Doe', 'john@example.com');
-- 4. Create a publication
CREATE PUBLICATION user_changes FOR ALL TABLES;
-- 5. Create logical replication slot with test_decoding plugin
SELECT pg_create_logical_replication_slot('test_slot', 'test_decoding');
-- The change we'll observe across different plugins
UPDATE users SET name = 'John Smith' WHERE id = 1;
-- View the formatted output
SELECT lsn, xid, data FROM pg_logical_slot_peek_changes('test_slot', NULL, NULL);
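Note that pg_logical_slot_peek_changes leaves the slot's read position untouched, so you can re-run it freely while experimenting. When you're done with the walkthrough, consume the changes and clean up:
-- get_changes returns the same rows but advances the slot
SELECT lsn, xid, data FROM pg_logical_slot_get_changes('test_slot', NULL, NULL);
-- Drop the slot when finished; an abandoned slot forces PostgreSQL
-- to retain WAL indefinitely
SELECT pg_drop_replication_slot('test_slot');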
This simple update produces dramatically different output depending on your chosen plugin, a choice that directly shapes integration complexity and any automated data sync between applications.
Understanding the data flow helps optimize your real-time data synchronization architecture.
PostgreSQL's WAL stores low-level binary records, not human-readable messages:
# Simplified WAL record structure
WAL Record: LSN 0/1A2B3C4
- Relation OID: 16384 (internal table identifier)
- Transaction ID: 12345
- Operation: UPDATE
- Block/offset: physical storage location
- Old tuple: [binary data for old row]
- New tuple: [binary data for new row]
The WAL contains only internal identifiers and binary data—no table names, column names, or readable values.
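Logical decoding resolves those internal identifiers through the system catalogs before any output plugin sees them. You can perform the same lookup by hand; for example, with the relation OID from the record above:
-- Map a relation OID to its schema-qualified table name
SELECT n.nspname AS schema_name, c.relname AS table_name
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.oid = 16384;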
This architecture allows PostgreSQL to support many output formats without changing the underlying WAL format or storing duplicate information. The core database only needs to log changes once in the WAL, and then any number of output plugins can decode those logs and present the data in JSON, SQL, binary, etc., as needed. [1]
The decoding process is the same regardless of plugin: logical decoding reads the WAL, resolves internal OIDs to table and column names via the system catalogs, and hands every output plugin the same core information about the change; what differs is how they output it. The test_decoding plugin formats this as human-readable text, wal2json converts it to JSON, and pgoutput encodes it in PostgreSQL's binary logical replication protocol. [1]
Each plugin receives identical decoded information:
Relation: public.users
Operation: UPDATE
New tuple: {id: 1, name: "John Smith", email: "john@example.com"}
Old tuple: {name: "John Doe"}
This standardized input enables consistent behavior across different output formats while supporting diverse integration requirements.
PostgreSQL ships with two logical decoding plugins out of the box. These don't require any additional installations—they're ready to use on any Postgres 10+ server. [1]
pgoutput is PostgreSQL's default plugin for logical replication. If you're using the built-in publish/subscribe system, you're already using this plugin behind the scenes. [1]
Sample output (conceptual representation):
BEGIN LSN: 0/1A2B3C4
TABLE: public.users
UPDATE: id[integer]=1 name[text]='John Smith' (old: 'John Doe') email[text]='john@example.com'
COMMIT LSN: 0/1A2B3C4
The actual output uses a binary protocol requiring specialized parsing tools.
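You can still inspect a pgoutput slot from SQL, but only through the binary variant of the peek function, and the plugin requires its protocol options. A sketch, reusing the user_changes publication created earlier:
-- pgoutput produces binary output, so the *_binary_changes variant is required
SELECT pg_create_logical_replication_slot('pgoutput_slot', 'pgoutput');
SELECT lsn, xid, data
FROM pg_logical_slot_peek_binary_changes(
  'pgoutput_slot', NULL, NULL,
  'proto_version', '1',
  'publication_names', 'user_changes');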
Technical characteristics:
- Compact binary wire format, efficient but not human-readable
- Built into core PostgreSQL (10+), no extension installation needed
- Powers the native publish/subscribe system and respects publications
- Consumers must implement PostgreSQL's logical replication protocol to parse it
test_decoding is PostgreSQL's example plugin, primarily useful for understanding logical decoding mechanics and debugging.
Sample output:
BEGIN 12345
table public.users: UPDATE: old-key: id[integer]:1 name[character varying]:'John Doe' email[character varying]:'john@example.com' new-tuple: id[integer]:1 name[character varying]:'John Smith' email[character varying]:'john@example.com' created_at[timestamp without time zone]:'2024-01-15 10:30:00'
COMMIT 12345
Technical characteristics:
- Plain-text output that is easy to read directly in psql
- Ships with PostgreSQL as a contrib module
- Intended for testing and debugging, not production pipelines
- Unstructured format, so machine parsing is brittle
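test_decoding also accepts options that adjust its text output, which helps when eyeballing changes in psql (this assumes the test_slot created earlier still exists):
-- Suppress transaction IDs and skip transactions that made no changes
SELECT data FROM pg_logical_slot_peek_changes(
  'test_slot', NULL, NULL,
  'include-xids', '0',
  'skip-empty-xacts', '1');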
The wal2json extension allows streaming all changes in a database to a consumer, formatted as JSON. [2]
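To reproduce the sample below, create a slot with the wal2json plugin (assuming the extension is installed on the server, which most managed providers handle for you) and peek at it as before:
SELECT pg_create_logical_replication_slot('json_slot', 'wal2json');
-- wal2json options tune the output; include-timestamp adds commit times
SELECT data FROM pg_logical_slot_peek_changes(
  'json_slot', NULL, NULL, 'include-timestamp', 'true');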
Sample output:
{
  "change": [{
    "kind": "update",
    "schema": "public",
    "table": "users",
    "columnnames": ["id", "name", "email", "created_at"],
    "columnvalues": [1, "John Smith", "john@example.com", "2024-01-15 10:30:00"],
    "oldkeys": {
      "keynames": ["id", "name", "email", "created_at"],
      "keyvalues": [1, "John Doe", "john@example.com", "2024-01-15 10:30:00"]
    }
  }]
}
Technical characteristics:
- JSON output that virtually any consumer can parse without special tooling
- Preinstalled on major managed services, including AWS RDS, Google Cloud SQL, and Azure
- Configurable via options (format version, timestamps, per-table filtering)
- More verbose than binary formats, so serialization overhead is higher
The decoderbufs plugin uses Protocol Buffers for efficient binary serialization, targeting high-throughput scenarios.
Sample output (conceptual protobuf structure):
RowMessage {
  transaction_id: 12345
  table: "public.users"
  op: UPDATE
  new_tuple: {
    columns: [
      {name: "id", type: INTEGER, value: 1},
      {name: "name", type: TEXT, value: "John Smith"},
      {name: "email", type: TEXT, value: "john@example.com"}
    ]
  }
  old_tuple: {
    columns: [
      {name: "name", type: TEXT, value: "John Doe"}
    ]
  }
}
Technical characteristics:
- Compact Protocol Buffers encoding with low serialization overhead
- Maintained under the Debezium project; must be compiled and installed on the server
- Rarely available on managed services for that reason
- Best suited to high-throughput pipelines with protobuf-aware consumers
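Creating a decoderbufs slot looks like the others, assuming the shared library has been compiled and installed on the server. Like pgoutput, it emits binary data, so the binary peek variant applies:
SELECT pg_create_logical_replication_slot('protobuf_slot', 'decoderbufs');
-- The returned data column holds serialized protobuf messages
SELECT lsn, xid, data
FROM pg_logical_slot_peek_binary_changes('protobuf_slot', NULL, NULL);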
Your choice depends on environment constraints and performance requirements:
Managed Services (AWS RDS, Google Cloud SQL, Azure): Choose pgoutput for Postgres-to-Postgres replication or wal2json for external integrations. [1]
Self-hosted environments: You have full flexibility. Consider decoderbufs for high-performance scenarios or stick with pgoutput for simplicity. [1]
For high-volume real-time data synchronization scenarios, favor a binary plugin: pgoutput for simplicity and universal availability, or decoderbufs where throughput requirements justify its installation effort.
While logical decoding plugins provide the foundation for change data capture, building production-ready consumers requires significant engineering investment. Organizations implementing automated data sync between applications increasingly choose purpose-built platforms that eliminate this complexity.
Traditional logical decoding implementation challenges:
- Managing replication slots so a stalled consumer doesn't force unbounded WAL retention (see the monitoring query below)
- Handling reconnects, failover, and exactly-once delivery guarantees in the consumer
- Coping with schema changes, since logical decoding does not emit DDL
- Parsing and transforming each plugin's output format for every downstream system
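Slot management is the most common operational trap: an inactive consumer silently forces the server to retain WAL until disk fills. A monitoring query along these lines catches it early:
-- Report how much WAL each replication slot is holding back
SELECT slot_name, plugin, active,
       pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), confirmed_flush_lsn)) AS retained_wal
FROM pg_replication_slots;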
Modern data synchronization platforms like Stacksync address these challenges by providing:
- Managed replication slots, WAL retention, and failover handling
- Built-in parsing of plugin output, with no custom consumer code to maintain
- No-code configuration with automated reliability and monitoring
This eliminates the need for custom logical decoding consumer development while providing comprehensive bi-directional sync capabilities across CRMs, ERPs, and databases.
The output plugin format isn't necessarily what your application consumes. For example:
- A CDC connector such as Debezium uses pgoutput to receive binary data from PostgreSQL
- The connector converts each change into JSON events for downstream systems such as Kafka
- Your application consumes those events without ever parsing the raw pgoutput format
This architecture separation allows PostgreSQL to maintain efficient internal formats while supporting diverse integration requirements through automated data sync between applications.
Ready to move beyond custom logical decoding implementations? Modern platforms like Stacksync provide enterprise-grade real-time data synchronization without the complexity of building and maintaining custom logical decoding consumers. Experience bi-directional sync across 200+ systems with no-code configuration and automated reliability.