Traditional offset-based pagination creates a critical performance bottleneck that undermines enterprise applications and data synchronization platforms. When systems require efficient traversal of large datasets — whether for real-time data synchronization, API endpoints, or operational reporting — offset pagination's linear performance degradation becomes a technical liability.
Cursor-based pagination in PostgreSQL eliminates these limitations through compound cursor implementation that delivers consistent performance regardless of dataset size. Understanding PostgreSQL keyset pagination vs offset performance is essential for any team building APIs or sync pipelines on large tables. This approach forms the technical foundation for reliable, high-performance Postgres pagination in modern enterprise systems.
| Dimension | Offset Pagination | Keyset (Cursor-Based) Pagination |
|---|---|---|
| Performance model | O(n) — degrades linearly with page depth | O(1) — constant time regardless of depth |
| Page 50,000 latency | 87 ms (unacceptable in production) | Sub-millisecond with composite index |
| Data consistency | Rows can be skipped or duplicated on insert/delete | Stable — position based on actual column values |
| Random page access | Supported (OFFSET n) | Not supported — sequential traversal only |
| Index requirement | Basic single-column index sufficient | Composite index matching keyset column order |
| Best for | Small datasets, admin UIs with page numbers | APIs, data sync, infinite scroll, large tables |
- Offset pagination degrades to O(n) at depth; keyset pagination maintains O(1) regardless of table size.
- Offset can miss or duplicate rows when data changes during traversal; keyset cursors are immune to this.
- Keyset requires a composite index and sequential access; the trade-off is worth it for any table above a few thousand rows.
While an index can help Postgres find a starting point quickly, it still needs to walk through the index to count off the offset rows. The operation is still O(n). [1]
Real performance measurements demonstrate the problem:
```sql
-- Page 1 (fast)
SELECT * FROM users ORDER BY created_at DESC LIMIT 20 OFFSET 0;
-- Time: 468 microseconds

-- Page 50 (slower)
SELECT * FROM users ORDER BY created_at DESC LIMIT 20 OFFSET 1000;
-- Time: 1 millisecond

-- Page 50,000 (unacceptable)
SELECT * FROM users ORDER BY created_at DESC LIMIT 20 OFFSET 1000000;
-- Time: 87 milliseconds
```
While offset pagination is widely used, cursor-based pagination offers significant performance advantages - my tests show a 17x speedup when paginating through a million-record dataset in PostgreSQL. [2]
This performance degradation compounds in production environments where systems must process millions of records for data synchronization, batch processing, or large-scale operations. It is precisely why PostgreSQL keyset pagination is better than offset for any workload beyond a few thousand rows.
Beyond performance, offset pagination creates correctness issues that compromise data integrity:
Performance aside, offset pagination has a correctness problem, specifically with tables where rows can be deleted. The row that was previously at position 21 is now at position 20, so now belongs to page 1. It will not appear in the result set for page 2. The user never sees the row that moved from 21 to 20. [1]
For systems requiring guaranteed data processing—such as automated data sync between applications—these consistency problems create unacceptable operational risks.
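The skipped-row failure mode is easy to reproduce in miniature. Here is a minimal Python sketch (hypothetical data, simulating `OFFSET`/`LIMIT` over an in-memory list) showing how a delete between page fetches silently drops a row:

```python
# Simulate OFFSET/LIMIT pagination over a result set that changes mid-traversal.
rows = list(range(1, 41))  # row ids 1..40, page size 20

page_1 = rows[0:20]        # OFFSET 0  LIMIT 20 -> ids 1..20

# A row on page 1 is deleted between the two page fetches.
rows.remove(5)

page_2 = rows[20:40]       # OFFSET 20 LIMIT 20 -> ids 22..40

seen = set(page_1) | set(page_2)
missed = set(range(1, 41)) - {5} - seen
print(missed)              # row 21 shifted onto "page 1" and was never returned
```

Row 21 slides into position 20 after the delete, so it belongs to neither fetched page. A keyset cursor anchored on row 20's actual column values would still return it.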
Cursor-based pagination in PostgreSQL works through compound cursors that create stable, efficient traversal:
```sql
SELECT * FROM users
WHERE (updated_at, id) > ('2025-01-01', 123)
ORDER BY updated_at ASC, id ASC
LIMIT 20;
```
The compound cursor supports three query patterns that together cover the full traversal lifecycle: the initial page, forward pagination, and reverse pagination:
```sql
-- Initial query (first page)
SELECT * FROM users
ORDER BY updated_at DESC, id DESC
LIMIT 20;

-- Subsequent pages (forward traversal)
SELECT * FROM users
WHERE (updated_at, id) < (last_updated_at, last_id)
ORDER BY updated_at DESC, id DESC
LIMIT 20;

-- Reverse traversal
SELECT * FROM users
WHERE (updated_at, id) > (first_updated_at, first_id)
ORDER BY updated_at ASC, id ASC
LIMIT 20;
```
The key requirement: the WHERE clause and the ORDER BY clause must reference the same keyset columns, in the same order and sort direction.
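PostgreSQL's row-value comparison `(a, b) < (x, y)` is lexicographic, which is what makes the compound cursor correct even when the first column is non-unique. Python tuples compare the same way, so the expansion is easy to sanity-check (an illustrative check, not a proof):

```python
from itertools import product

# SQL row-value semantics: (a, b) < (x, y) expands to
#   a < x OR (a = x AND b < y)
def row_lt(a, b, x, y):
    return a < x or (a == x and b < y)

# Exhaustively compare against Python's lexicographic tuple ordering.
vals = range(3)
assert all(
    ((a, b) < (x, y)) == row_lt(a, b, x, y)
    for a, b, x, y in product(vals, repeat=4)
)
print("row-value comparison matches lexicographic ordering")
```

The expanded form is also useful on databases that lack native row-value comparison support.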
Indeed, I have switched from offset pagination to keyset pagination and it is much faster and far better suited for my infinite scrolling list. Query times went from 5s to 500ms. [3]
This approach—called seek method or keyset pagination—solves the problem of drifting results as illustrated above and is even faster than offset. [4]
Keyset pagination requires proper indexing to achieve optimal performance:
```sql
-- For a timestamp + id keyset, sorted descending
CREATE INDEX idx_users_updated_at_id ON users (updated_at DESC, id DESC);
```
You'll want to add a composite index that is compatible with the columns and the order of your keyset. [5]
Without proper indexing, keyset pagination loses its performance advantages. The index must match both the columns and the sort order of your keyset.
To fix this, we use a composite key (multiple columns) for pagination, ensuring a unique ordering sequence: This approach prevents duplicate records from appearing in multiple pages, as each query fetches results based on a specific starting point. [6]
```sql
-- Compound keyset with a non-unique timestamp column.
-- Note the < operator: forward traversal through a DESC ordering
-- seeks rows that sort after the cursor, i.e. smaller row values.
SELECT * FROM orders
WHERE (created_date, id) < ('2025-01-15', 12345)
ORDER BY created_date DESC, id DESC
LIMIT 50;
```
As I've written about previously, stable ordering in Postgres breaks down at the tail (around recently committed rows). There are no locks on Postgres sequences, so Postgres sequences can commit out-of-order. And timestamps are no better; when setting updated_at to now(), you can have a row with an older timestamp commit after a row with a newer timestamp. [1]
For mission-critical applications, establish cursor boundaries:
```sql
-- Capture the boundary row before pagination starts
SELECT updated_at, id FROM users
ORDER BY updated_at DESC, id DESC
LIMIT 1;
```
Then paginate from the beginning up to this boundary to avoid seeing rows twice during traversal.
APIs typically return the keyset to the client as an opaque, base64-encoded cursor token alongside the results:

```json
{
  "items": [...],
  "next_cursor": "eyJ1cGRhdGVkX2F0IjoiMjAyNS0wMS0xNSIsImlkIjoxMjN9",
  "has_more": true
}
```
Please note that keyset pagination affects the whole technology stack up to the JavaScript running in the browser doing AJAX for infinite scrolling: instead of passing a simple page number to the server, you must pass a full keyset (often multiple columns) down to the server. [4]
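The opaque cursor token is typically just the keyset values serialized and base64-encoded. A minimal sketch of encoding and decoding (the field names `updated_at` and `id` match the examples above; the exact wire format is an application choice):

```python
import base64
import json

def encode_cursor(keyset: dict) -> str:
    """Serialize the keyset to JSON and base64-encode it into an opaque token."""
    raw = json.dumps(keyset, separators=(",", ":")).encode()
    return base64.b64encode(raw).decode()

def decode_cursor(token: str) -> dict:
    """Reverse of encode_cursor: base64-decode and parse the JSON keyset."""
    return json.loads(base64.b64decode(token))

token = encode_cursor({"updated_at": "2025-01-15", "id": 123})
print(token)                 # eyJ1cGRhdGVkX2F0IjoiMjAyNS0wMS0xNSIsImlkIjoxMjN9
print(decode_cursor(token))  # {'updated_at': '2025-01-15', 'id': 123}
```

Treating the token as opaque lets the server change the keyset columns later without breaking clients.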
Modern bi-directional sync tools leverage keyset pagination for reliable data synchronization. For example, Stacksync implements keyset cursors throughout its real-time synchronization engine to maintain consistent performance when synchronizing millions of records between CRMs, databases, and enterprise systems. This approach ensures that automated data sync between applications maintains sub-second performance regardless of dataset size.
The keyset implementation enables Stacksync to provide guaranteed data consistency and effortless scalability—critical requirements for enterprise database synchronization platforms.
```sql
-- ETL processing with keyset pagination
SELECT * FROM source_table
WHERE (batch_timestamp, record_id) > (last_processed_timestamp, last_processed_id)
ORDER BY batch_timestamp ASC, record_id ASC
LIMIT 10000;
```
This pattern enables consistent processing performance for large-scale data operations without the degradation characteristic of offset-based approaches.
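The batch loop can be sketched end to end. This example uses SQLite (Python standard library) as a stand-in for PostgreSQL and spells out the row-value predicate as `a > x OR (a = x AND b > y)` for portability; the table and column names mirror the ETL query above but the data is illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE source_table (record_id INTEGER, batch_timestamp TEXT)")
conn.executemany(
    "INSERT INTO source_table VALUES (?, ?)",
    [(i, f"2025-01-{(i % 28) + 1:02d}") for i in range(1, 101)],
)

def fetch_batch(cursor_ts, cursor_id, limit=10):
    # Keyset predicate (batch_timestamp, record_id) > (cursor_ts, cursor_id),
    # expanded for engines without row-value comparison support.
    return conn.execute(
        """SELECT batch_timestamp, record_id FROM source_table
           WHERE batch_timestamp > ? OR (batch_timestamp = ? AND record_id > ?)
           ORDER BY batch_timestamp ASC, record_id ASC
           LIMIT ?""",
        (cursor_ts, cursor_ts, cursor_id, limit),
    ).fetchall()

seen, cursor = [], ("", 0)  # start before the smallest possible keyset
while batch := fetch_batch(*cursor):
    seen.extend(batch)
    cursor = batch[-1]      # last row's keyset becomes the next cursor

# Every row is processed exactly once, in keyset order.
assert len(seen) == 100 and len(set(seen)) == 100
print("processed", len(seen), "rows")
```

The loop terminates when a fetch returns no rows, and resuming after a crash only requires persisting the last cursor tuple.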
It's safe to say that keyset pagination is most applicable for the average site with ordered linear access. [7]
Choose keyset pagination for API endpoints, data sync and ETL pipelines, infinite scroll interfaces, and any large table that is traversed sequentially.
Even with all these considerations, keyset pagination is still extremely important as it's a much more performant way of paginating data. It's particularly well suited to scenarios where you don't need random access to numbered pages. [8]
Keyset pagination provides constant-time page fetches, stable ordering while data changes, and guaranteed consistency during traversal. The table below summarizes when each approach fits:
| Use Case | Recommended Approach | Reason |
|---|---|---|
| Real-time activity feeds | Keyset pagination | New rows inserted frequently — offset positions become unreliable |
| API endpoints at scale | Keyset pagination | Consistent sub-millisecond latency regardless of cursor depth |
| Data sync and ETL pipelines | Keyset pagination | Guarantees no rows are skipped or processed twice |
| Infinite scroll interfaces | Keyset pagination | Sequential forward traversal maps naturally to cursor tokens |
| Admin UI with numbered pages | Offset pagination | Random page access required — keyset cannot jump to page N |
| Small tables (< 10k rows) | Either | Performance difference is negligible at low row counts |
Keyset pagination wins for any workload that traverses data sequentially — APIs, feeds, sync pipelines.
Offset pagination is the right choice only when users need to jump to a specific numbered page.
For PostgreSQL tables growing beyond 10k rows, switching to cursor-based pagination is a low-effort, high-impact optimization.
```sql
-- Match the index to the keyset columns and sort directions
CREATE INDEX idx_keyset_optimized
ON table_name (sort_field_1 DESC, sort_field_2 ASC, unique_field ASC);
```
Implement cursor validation and performance monitoring:
```sql
-- Validate that the cursor row still exists before processing
SELECT COUNT(*) FROM table_name
WHERE (timestamp_field, id_field) = (cursor_timestamp, cursor_id);
```
The hall of fame of frameworks that do support keyset pagination is constantly growing: If you are maintaining a framework that is somehow involved with pagination, I ask you, I urge you, I beg you, to build in native support for keyset pagination too. [4]
Modern applications should implement keyset pagination natively rather than retrofitting offset-based systems.
Keyset cursor pagination delivers the consistent performance and data integrity that enterprise applications require when processing large datasets. Whether you're building real-time data synchronization, API endpoints, or large-scale ETL processes, keyset pagination eliminates the performance bottlenecks that undermine offset-based approaches.
For organizations implementing database synchronization across multiple systems, these performance patterns become critical infrastructure. Stacksync leverages keyset cursor pagination throughout its real-time synchronization engine to maintain sub-second performance when synchronizing millions of records between CRMs, databases, and enterprise applications. This architecture ensures reliable data consistency and effortless scalability—essential requirements for production data integration platforms.
Ready to eliminate pagination bottlenecks in your data synchronization workflows? Explore how Stacksync works to deliver consistent, scalable data synchronization across your business systems with keyset cursors and other performance optimizations.