
PostgreSQL Keyset Pagination vs Offset: Cursor-Based Guide for High-Performance Queries

Why PostgreSQL keyset pagination is better than offset: cursor-based pagination benchmarks, implementation patterns, and when to use each approach.


Traditional offset-based pagination creates a critical performance bottleneck that undermines enterprise applications and data synchronization platforms. When systems require efficient traversal of large datasets — whether for real-time data synchronization, API endpoints, or operational reporting — offset pagination's linear performance degradation becomes a technical liability.

Cursor-based pagination in PostgreSQL eliminates these limitations through compound cursor implementation that delivers consistent performance regardless of dataset size. Understanding PostgreSQL keyset pagination vs offset performance is essential for any team building APIs or sync pipelines on large tables. This approach forms the technical foundation for reliable, high-performance Postgres pagination in modern enterprise systems.

| Dimension | Offset Pagination | Keyset (Cursor-Based) Pagination |
| --- | --- | --- |
| Performance model | O(n) — degrades linearly with page depth | O(1) — constant time regardless of depth |
| Page 50,000 latency | 87 ms (unacceptable in production) | Sub-millisecond with composite index |
| Data consistency | Rows can be skipped or duplicated on insert/delete | Stable — position based on actual column values |
| Random page access | Supported (OFFSET n) | Not supported — sequential traversal only |
| Index requirement | Basic single-column index sufficient | Composite index matching keyset column order |
| Best for | Small datasets, admin UIs with page numbers | APIs, data sync, infinite scroll, large tables |

Key Takeaways

Offset pagination degrades to O(n) at depth — keyset pagination maintains O(1) regardless of table size.

Offset can miss or duplicate rows when data changes during traversal; keyset cursors are immune to this.

Keyset requires a composite index and sequential access — the trade-off is worth it for any table above a few thousand rows.

PostgreSQL Keyset Pagination vs Offset: The Performance Gap

Linear Degradation Creates Operational Issues

While an index can help Postgres find a starting point quickly, it still needs to walk through the index to count off the offset rows. The operation is still O(n). [1]

Real performance measurements demonstrate the problem:

```sql
-- Page 1 (fast)
SELECT * FROM users ORDER BY created_at DESC LIMIT 20 OFFSET 0;
-- Time: 468 microseconds

-- Page 50 (slower)
SELECT * FROM users ORDER BY created_at DESC LIMIT 20 OFFSET 1000;
-- Time: 1 millisecond

-- Page 50,000 (unacceptable)
SELECT * FROM users ORDER BY created_at DESC LIMIT 20 OFFSET 1000000;
-- Time: 87 milliseconds
```

While offset pagination is widely used, cursor-based pagination offers significant performance advantages: my tests show a 17x speedup when paginating through a million-record dataset in PostgreSQL. [2]

This performance degradation compounds in production environments where systems must process millions of records for data synchronization, batch processing, or large-scale operations. It is precisely why PostgreSQL keyset pagination is better than offset for any workload beyond a few thousand rows.
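For a static table, both approaches return the same rows; the difference is how much work the engine does to reach them. The sketch below checks that equivalence using Python's built-in sqlite3 as a stand-in engine (SQLite supports the same row-value comparison syntax as PostgreSQL, so the SQL carries over to a psycopg connection unchanged); the table and column names are illustrative.

```python
# Sanity check: on a static table, keyset and offset produce the same page.
# The offset query must count off 200 rows first; the keyset query seeks
# directly past the last row of the previous page.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, created_at TEXT)")
conn.executemany(
    "INSERT INTO users VALUES (?, ?)",
    [(i, f"2025-01-01T{i:06d}") for i in range(1, 1001)],
)

# Page 11 via offset (skip 10 pages of 20 rows).
offset_page = conn.execute(
    "SELECT created_at, id FROM users "
    "ORDER BY created_at DESC, id DESC LIMIT 20 OFFSET 200"
).fetchall()

# Page 11 via keyset: seek past the last row of page 10.
last_seen = conn.execute(
    "SELECT created_at, id FROM users "
    "ORDER BY created_at DESC, id DESC LIMIT 1 OFFSET 199"
).fetchone()
keyset_page = conn.execute(
    "SELECT created_at, id FROM users "
    "WHERE (created_at, id) < (?, ?) "
    "ORDER BY created_at DESC, id DESC LIMIT 20",
    last_seen,
).fetchall()
```

The results are identical; only the access pattern differs, which is where the latency gap at deep pages comes from.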

Data Consistency Problems

Beyond performance, offset pagination creates correctness issues that compromise data integrity:

Performance aside, offset pagination has a correctness problem, specifically with tables where rows can be deleted. The row that was previously at position 21 is now at position 20, so it now belongs to page 1. It will not appear in the result set for page 2. The user never sees the row that moved from 21 to 20. [1]

For systems requiring guaranteed data processing—such as automated data sync between applications—these consistency problems create unacceptable operational risks.

Keyset Pagination: The Technical Solution

Compound Cursor Implementation

Cursor-based pagination in PostgreSQL works through compound cursors that create stable, efficient traversal:

```sql
SELECT * FROM users
WHERE (updated_at, id) > ('2025-01-01', 123)
ORDER BY updated_at ASC, id ASC
LIMIT 20;
```

The compound cursor ensures three critical properties:

  1. Unique positioning: Every row has a deterministic position in the sort order
  2. Stable ordering: Position changes only when the row's sort field values change
  3. Insertion/deletion immunity: New or removed rows don't affect existing traversal

Traversal Implementation

```sql
-- Initial query (first page)
SELECT * FROM users
ORDER BY updated_at DESC, id DESC
LIMIT 20;

-- Subsequent pages (forward traversal)
SELECT * FROM users
WHERE (updated_at, id) < (last_updated_at, last_id)
ORDER BY updated_at DESC, id DESC
LIMIT 20;

-- Reverse traversal
SELECT * FROM users
WHERE (updated_at, id) > (first_updated_at, first_id)
ORDER BY updated_at ASC, id ASC
LIMIT 20;
```

The key requirement: every keyset column must appear, in the same order, in both the row-value comparison of the WHERE clause and the ORDER BY clause.
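The forward-traversal pattern above can be sketched end to end. This example uses Python's built-in sqlite3 as a stand-in engine — SQLite also supports PostgreSQL's row-value comparison syntax — so swapping in a PostgreSQL driver such as psycopg leaves the SQL unchanged; table and column names are illustrative.

```python
# Minimal keyset traversal sketch: each call returns one page plus the
# cursor needed to fetch the next one.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, updated_at TEXT)")
conn.executemany(
    "INSERT INTO users (id, updated_at) VALUES (?, ?)",
    [(i, f"2025-01-{(i % 28) + 1:02d}") for i in range(1, 101)],
)

def fetch_page(cursor=None, page_size=20):
    """Return (rows, next_cursor); next_cursor is None when exhausted."""
    if cursor is None:
        rows = conn.execute(
            "SELECT updated_at, id FROM users "
            "ORDER BY updated_at DESC, id DESC LIMIT ?",
            (page_size,),
        ).fetchall()
    else:
        rows = conn.execute(
            "SELECT updated_at, id FROM users "
            "WHERE (updated_at, id) < (?, ?) "
            "ORDER BY updated_at DESC, id DESC LIMIT ?",
            (*cursor, page_size),
        ).fetchall()
    next_cursor = rows[-1] if len(rows) == page_size else None
    return rows, next_cursor

# Walk every page; each row appears exactly once.
seen, cursor = [], None
while True:
    page, cursor = fetch_page(cursor)
    seen.extend(page)
    if cursor is None:
        break
```

Because the cursor is the last row's actual column values, concurrent inserts and deletes elsewhere in the table cannot shift the traversal position.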

Performance Characteristics and Optimization

Measured Performance Improvements

Indeed, I have switched from offset pagination to keyset pagination and it is much faster and far better suited for my infinite scrolling list. Query times went from 5s to 500ms. [3]

This approach—called seek method or keyset pagination—solves the problem of drifting results as illustrated above and is even faster than offset. [4]

Index Requirements for Optimal Performance

Keyset pagination requires proper indexing to achieve optimal performance:

```sql
-- For a (timestamp, id) keyset
CREATE INDEX idx_users_updated_at_id ON users (updated_at DESC, id DESC);
```

You'll want to add a composite index that is compatible with the columns and the order of your keyset. [5]

Without proper indexing, keyset pagination loses its performance advantages. The index must match both the columns and the sort order of your keyset.
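A quick way to confirm the index is actually being used is to inspect the query plan. This is a minimal sketch using SQLite's EXPLAIN QUERY PLAN as a stand-in; on PostgreSQL you would run EXPLAIN (ANALYZE) and look for an Index Scan node on the composite index.

```python
# Check that the keyset query's plan references the composite index
# rather than a full scan plus an explicit sort step.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, updated_at TEXT)")
conn.execute(
    "CREATE INDEX idx_users_updated_at_id ON users (updated_at DESC, id DESC)"
)

plan = conn.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT * FROM users WHERE (updated_at, id) < (?, ?) "
    "ORDER BY updated_at DESC, id DESC LIMIT 20",
    ("2025-01-15", 123),
).fetchall()
plan_text = " ".join(row[-1] for row in plan)  # the detail column
```

If the plan shows a sequential scan followed by a sort instead, the index definition does not match the keyset's columns or sort directions.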

Production Implementation Patterns

Handling Non-Unique Sort Fields

To fix this, we use a composite key (multiple columns) for pagination, ensuring a unique ordering sequence. This approach prevents duplicate records from appearing in multiple pages, as each query fetches results based on a specific starting point. [6]

```sql
-- Compound keyset with a non-unique timestamp
-- (forward traversal in DESC order uses <, matching the sort direction)
SELECT * FROM orders
WHERE (created_date, id) < ('2025-01-15', 12345)
ORDER BY created_date DESC, id DESC
LIMIT 50;
```

Managing Concurrent Updates

As I've written about previously, stable ordering in Postgres breaks down at the tail (around recently committed rows). There are no locks on Postgres sequences, so Postgres sequences can commit out-of-order. And timestamps are no better; when setting updated_at to now(), you can have a row with an older timestamp commit after a row with a newer timestamp. [1]

For mission-critical applications, establish cursor boundaries:

```sql
-- Capture the boundary before pagination begins
SELECT updated_at, id FROM users
ORDER BY updated_at DESC, id DESC
LIMIT 1;
```

Then paginate from the beginning up to this boundary to avoid seeing rows twice during traversal.
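The boundary technique can be sketched as follows, again with sqlite3 standing in for a PostgreSQL driver (the SQL is identical under PostgreSQL); the sentinel starting cursor and the table contents are illustrative.

```python
# Snapshot a boundary cursor first, then paginate ascending only up to it,
# so rows committed mid-traversal are never picked up by a later page.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, updated_at TEXT)")
conn.executemany(
    "INSERT INTO users VALUES (?, ?)",
    [(i, "2025-01-01") for i in range(1, 51)],
)

# 1. Capture the newest (updated_at, id) before traversal starts.
boundary = conn.execute(
    "SELECT updated_at, id FROM users ORDER BY updated_at DESC, id DESC LIMIT 1"
).fetchone()

# A row committed after the snapshot falls outside the boundary.
conn.execute("INSERT INTO users VALUES (999, '2025-02-01')")

# 2. Paginate ascending, bounded above by the snapshot.
seen, cursor = [], ("", 0)  # sentinel sorting below all real values
while True:
    rows = conn.execute(
        "SELECT updated_at, id FROM users "
        "WHERE (updated_at, id) > (?, ?) AND (updated_at, id) <= (?, ?) "
        "ORDER BY updated_at ASC, id ASC LIMIT 20",
        (*cursor, *boundary),
    ).fetchall()
    if not rows:
        break
    seen.extend(rows)
    cursor = rows[-1]
```

The late-committed row is simply deferred to the next traversal run, which starts from a fresh boundary snapshot.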

Real-World Applications

API Design with Keyset Cursors

```json
{
  "items": [...],
  "next_cursor": "eyJ1cGRhdGVkX2F0IjoiMjAyNS0wMS0xNSIsImlkIjoxMjN9",
  "has_more": true
}
```

Please note that keyset pagination affects the whole technology stack up to the JavaScript running in the browser doing AJAX for infinite scrolling: instead of passing a simple page number to the server, you must pass a full keyset (often multiple columns) down to the server. [4]
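A common way to keep the API surface simple is to serialize the keyset values into one opaque token, as in the `next_cursor` field above. The sketch below assumes a JSON-plus-base64url encoding; the field names are illustrative, not a fixed contract.

```python
# Encode/decode an opaque cursor token wrapping the (updated_at, id) keyset.
# The client echoes the token back verbatim; only the server interprets it.
import base64
import json

def encode_cursor(updated_at: str, id_: int) -> str:
    payload = json.dumps({"updated_at": updated_at, "id": id_})
    return base64.urlsafe_b64encode(payload.encode()).decode()

def decode_cursor(token: str):
    payload = json.loads(base64.urlsafe_b64decode(token.encode()))
    return payload["updated_at"], payload["id"]

token = encode_cursor("2025-01-15", 123)
```

Treating the cursor as opaque lets the server change the keyset columns later without breaking clients, since they never parse the token themselves.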

Database Synchronization Platforms

Modern bi-directional sync tools leverage keyset pagination for reliable data synchronization. For example, Stacksync implements keyset cursors throughout its real-time synchronization engine to maintain consistent performance when synchronizing millions of records between CRMs, databases, and enterprise systems. This approach ensures that automated data sync between applications maintains sub-second performance regardless of dataset size.

The keyset implementation enables Stacksync to provide guaranteed data consistency and effortless scalability—critical requirements for enterprise database synchronization platforms.

Large-Scale Data Processing

```sql
-- ETL processing with keyset pagination
SELECT * FROM source_table
WHERE (batch_timestamp, record_id) > (last_processed_timestamp, last_processed_id)
ORDER BY batch_timestamp ASC, record_id ASC
LIMIT 10000;
```

This pattern enables consistent processing performance for large-scale data operations without the degradation characteristic of offset-based approaches.
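In application code, this ETL loop naturally becomes a batch generator whose cursor doubles as a resumable checkpoint. A minimal sketch, using sqlite3 as a stand-in engine (the SQL is identical under PostgreSQL) with an illustrative table and a small batch size:

```python
# Batch generator over a keyset: each batch resumes from the last processed
# (batch_timestamp, record_id) pair, so a crash can restart mid-stream.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE source_table (record_id INTEGER PRIMARY KEY, batch_timestamp TEXT)"
)
conn.executemany(
    "INSERT INTO source_table VALUES (?, ?)",
    [(i, "2025-01-01") for i in range(1, 26)],
)

def batches(batch_size=10):
    cursor = ("", 0)  # sentinel sorting below all real values
    while True:
        rows = conn.execute(
            "SELECT batch_timestamp, record_id FROM source_table "
            "WHERE (batch_timestamp, record_id) > (?, ?) "
            "ORDER BY batch_timestamp ASC, record_id ASC LIMIT ?",
            (*cursor, batch_size),
        ).fetchall()
        if not rows:
            return
        yield rows
        cursor = rows[-1]  # checkpoint: persist this to resume after a crash

processed = [row for batch in batches() for row in batch]
```

Persisting the cursor after each committed batch gives exactly-once processing over the keyset without any offset bookkeeping.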

When to Choose Keyset Pagination

Optimal Use Cases

It's safe to say that keyset pagination is most applicable for the average site with ordered linear access. [7]

Choose keyset pagination for:

  • Real-time feeds: Activity logs, social media timelines
  • API endpoints: Where consistent performance is critical
  • Data synchronization: ETL processes, database replication
  • Infinite scroll interfaces: Mobile and web applications
  • Large dataset traversal: Any scenario requiring reliable performance at scale

PostgreSQL Keyset Pagination vs Offset: Performance Trade-offs

Even with all these considerations, keyset pagination is still extremely important as it's a much more performant way of paginating data. It's particularly well suited to scenarios where you don't need random access to numbered pages. [8]

Keyset pagination provides:

  • Advantages: Consistent performance, data integrity, scale reliability
  • Limitations: No random page access, requires compound cursors for non-unique fields

| Use Case | Recommended Approach | Reason |
| --- | --- | --- |
| Real-time activity feeds | Keyset pagination | New rows inserted frequently — offset positions become unreliable |
| API endpoints at scale | Keyset pagination | Consistent sub-millisecond latency regardless of cursor depth |
| Data sync and ETL pipelines | Keyset pagination | Guarantees no rows are skipped or processed twice |
| Infinite scroll interfaces | Keyset pagination | Sequential forward traversal maps naturally to cursor tokens |
| Admin UI with numbered pages | Offset pagination | Random page access required — keyset cannot jump to page N |
| Small tables (< 10k rows) | Either | Performance difference is negligible at low row counts |

Key Takeaways

Keyset pagination wins for any workload that traverses data sequentially — APIs, feeds, sync pipelines.

Offset pagination is the right choice only when users need to jump to a specific numbered page.

For PostgreSQL tables growing beyond 10k rows, switching to cursor-based pagination is a low-effort, high-impact optimization.

Implementation Best Practices

Index Design Strategy

```sql
-- Match the index to the keyset order
CREATE INDEX idx_keyset_optimized
ON table_name (sort_field_1 DESC, sort_field_2 ASC, unique_field ASC);
```

Error Handling and Monitoring

Implement cursor validation and performance monitoring:

```sql
-- Confirm the cursor row still exists before resuming traversal
SELECT COUNT(*) FROM table_name
WHERE (timestamp_field, id_field) = (cursor_timestamp, cursor_id);
```

Framework Integration

The hall of fame of frameworks that do support keyset pagination is constantly growing. If you are maintaining a framework that is somehow involved with pagination, I ask you, I urge you, I beg you, to build in native support for keyset pagination too. [4]

Modern applications should implement keyset pagination natively rather than retrofitting offset-based systems.

Build Scalable Database Performance

Keyset cursor pagination delivers the consistent performance and data integrity that enterprise applications require when processing large datasets. Whether you're building real-time data synchronization, API endpoints, or large-scale ETL processes, keyset pagination eliminates the performance bottlenecks that undermine offset-based approaches.

For organizations implementing database synchronization across multiple systems, these performance patterns become critical infrastructure. Stacksync leverages keyset cursor pagination throughout its real-time synchronization engine to maintain sub-second performance when synchronizing millions of records between CRMs, databases, and enterprise applications. This architecture ensures reliable data consistency and effortless scalability—essential requirements for production data integration platforms.

Ready to eliminate pagination bottlenecks in your data synchronization workflows? Explore how Stacksync works to deliver consistent, scalable data synchronization across your business systems with keyset cursors and other performance optimizations.

Citations

FAQs
What is keyset cursor pagination in PostgreSQL?
Keyset cursor pagination uses compound cursors to create stable, efficient data traversal by positioning queries based on specific column values rather than row offsets. This approach uses WHERE clauses with tuple comparisons like (updated_at, id) > (last_timestamp, last_id) to fetch the next page of results. Keyset pagination delivers consistent O(1) performance regardless of dataset size, making it essential for high-performance database operations in enterprise systems and data synchronization platforms.
How does keyset pagination differ from offset pagination?
Offset pagination counts rows to skip (OFFSET 1000) which creates O(n) linear performance degradation as pages get deeper, while keyset pagination uses WHERE clauses with actual column values for O(1) consistent performance. Offset pagination also suffers from data consistency issues when rows are deleted or inserted, causing duplicate or missed records. Keyset pagination eliminates these problems by using stable cursor positions based on actual data values rather than row positions.
Why is keyset pagination faster than offset pagination?
Keyset pagination achieves constant-time performance because queries use indexed WHERE clauses to position directly at the next result set, while offset pagination must count through all previous rows. Real-world benchmarks show keyset pagination providing 17x speedup over offset when paginating through million-record datasets. The performance gap widens dramatically at deeper pages, with offset queries taking 87 milliseconds at page 50,000 compared to keyset queries maintaining sub-millisecond performance regardless of page depth.
When should you use keyset pagination instead of offset?
Use keyset pagination for real-time feeds, API endpoints requiring consistent performance, data synchronization processes, infinite scroll interfaces, and any scenario traversing large datasets where performance matters. Keyset pagination excels when you don't need random page access and can process data sequentially. Choose offset pagination only when random page navigation is essential, understanding that performance will degrade significantly at deeper pages.
How do you implement keyset pagination in PostgreSQL?
Implement keyset pagination using compound cursors with WHERE clauses comparing tuples: SELECT * FROM table WHERE (sort_field, id) > (last_value, last_id) ORDER BY sort_field ASC, id ASC LIMIT 20. Always include all columns in both WHERE and ORDER BY clauses, and create composite indexes matching your keyset order. For production systems handling large-scale data synchronization, platforms like Stacksync implement keyset cursors natively to maintain consistent performance when processing millions of records across enterprise systems.
