Traditional offset-based pagination creates a critical performance bottleneck that undermines enterprise applications and data synchronization platforms. When systems require efficient traversal of large datasets—whether for real-time data synchronization, API endpoints, or operational reporting—offset pagination's linear performance degradation becomes a technical liability.
Keyset cursor pagination eliminates these limitations through a compound cursor implementation that delivers consistent performance regardless of dataset size. This approach forms the technical foundation for reliable, high-performance database operations in modern enterprise systems.
While an index can help Postgres find a starting point quickly, it still needs to walk through the index to count off the offset rows. The operation is still O(n). [1]
Real performance measurements demonstrate the problem:
-- Page 1 (fast)
SELECT * FROM users ORDER BY created_at DESC LIMIT 20 OFFSET 0;
-- Time: 468 microseconds
-- Page 50 (slower)
SELECT * FROM users ORDER BY created_at DESC LIMIT 20 OFFSET 1000;
-- Time: 1 millisecond
-- Page 50000 (unacceptable)
SELECT * FROM users ORDER BY created_at DESC LIMIT 20 OFFSET 1000000;
-- Time: 87 milliseconds
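To see where the time goes, inspect the plan of a deep-offset query (a minimal sketch reusing the users table above): the scan feeding the Limit node actually reads roughly offset + limit rows, and the first million are simply discarded.

-- Observe the offset cost in the execution plan
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM users
ORDER BY created_at DESC
LIMIT 20 OFFSET 1000000;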
While offset pagination is widely used, cursor-based pagination offers significant performance advantages - my tests show a 17x speedup when paginating through a million-record dataset in PostgreSQL. [2]
This performance degradation compounds in production environments where systems must process millions of records for data synchronization, batch processing, or large-scale operations.
Beyond performance, offset pagination creates correctness issues that compromise data integrity:
Performance aside, offset pagination has a correctness problem, specifically with tables where rows can be deleted. If a row on page 1 is deleted after that page is fetched, the row that was previously at position 21 is now at position 20, so now belongs to page 1. It will not appear in the result set for page 2. The user never sees the row that moved from 21 to 20. [1]
For systems requiring guaranteed data processing—such as automated data sync between applications—these consistency problems create unacceptable operational risks.
Keyset pagination uses compound cursors to create stable, efficient traversal:
SELECT * FROM users
WHERE (updated_at, id) > ('2025-01-01', 123)
ORDER BY updated_at ASC, id ASC
LIMIT 20;
In practice the compound cursor pattern breaks down into three query forms: the initial page, forward traversal, and reverse traversal.
-- Initial query (first page)
SELECT * FROM users
ORDER BY updated_at DESC, id DESC
LIMIT 20;
-- Subsequent pages (forward traversal)
SELECT * FROM users
WHERE (updated_at, id) < (last_updated_at, last_id)
ORDER BY updated_at DESC, id DESC
LIMIT 20;
-- Reverse traversal
SELECT * FROM users
WHERE (updated_at, id) > (first_updated_at, first_id)
ORDER BY updated_at ASC, id ASC
LIMIT 20;
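One practical note on the reverse-traversal query: because its ORDER BY is flipped, the rows come back oldest-first. A common refinement (a sketch, not part of the queries above) is to re-sort the fetched page back into display order:

-- Fetch the previous page, then restore the normal display order
SELECT * FROM (
    SELECT * FROM users
    WHERE (updated_at, id) > (first_updated_at, first_id)
    ORDER BY updated_at ASC, id ASC
    LIMIT 20
) previous_page
ORDER BY updated_at DESC, id DESC;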
The key requirement: always include all columns in both the WHERE clause and the ORDER BY clause.
Indeed, I have switched from offset pagination to keyset pagination and it is much faster and far better suited for my infinite scrolling list. Query times went from 5s to 500ms. [3]
This approach—called seek method or keyset pagination—solves the problem of drifting results as illustrated above and is even faster than offset. [4]
Keyset pagination requires proper indexing to achieve optimal performance:
-- For timestamp + id keyset
CREATE INDEX idx_users_updated_at_id
ON users (updated_at DESC, id DESC);
You'll want to add a composite index that is compatible with the columns and the order of your keyset. [5]
Without proper indexing, keyset pagination loses its performance advantages. The index must match both the columns and the sort order of your keyset.
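A quick way to confirm the match is to check the plan of a representative keyset query (a sketch reusing the users table and index above); it should show an index scan, while a Sort node indicates the index and keyset do not line up.

-- Verify the keyset query is served by idx_users_updated_at_id
EXPLAIN
SELECT * FROM users
WHERE (updated_at, id) < ('2025-01-01', 123)
ORDER BY updated_at DESC, id DESC
LIMIT 20;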
To fix this, we use a composite key (multiple columns) for pagination, ensuring a unique ordering sequence. This approach prevents duplicate records from appearing in multiple pages, as each query fetches results based on a specific starting point. [6]
-- Compound keyset with non-unique timestamp
SELECT * FROM orders
WHERE (created_date, id) < ('2025-01-15', 12345)
ORDER BY created_date DESC, id DESC
LIMIT 50;
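Following the indexing rule above, a matching composite index for this keyset would look like this (hypothetical index name):

CREATE INDEX idx_orders_created_date_id
ON orders (created_date DESC, id DESC);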
As I've written about previously, stable ordering in Postgres breaks down at the tail (around recently committed rows). There are no locks on Postgres sequences, so Postgres sequences can commit out-of-order. And timestamps are no better; when setting updated_at to now(), you can have a row with an older timestamp commit after a row with a newer timestamp. [1]
For mission-critical applications, establish cursor boundaries:
-- Capture boundary before pagination
SELECT updated_at, id
FROM users
ORDER BY updated_at DESC, id DESC
LIMIT 1;
Then paginate from the beginning up to this boundary. Because updated_at is set to now() whenever a row changes, a row modified mid-traversal moves beyond the boundary rather than reappearing later in the scan, so it is never seen twice during a single pass.
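A sketch of how the boundary is applied on each page; boundary_updated_at and boundary_id are the values captured above, and last_updated_at and last_id are the keyset of the previous page:

-- Traverse oldest-to-newest, but never past the boundary captured at the start
SELECT * FROM users
WHERE (updated_at, id) > (last_updated_at, last_id)
  AND (updated_at, id) <= (boundary_updated_at, boundary_id)
ORDER BY updated_at ASC, id ASC
LIMIT 20;

At the API layer, the current keyset is typically handed back to the client as an opaque cursor token alongside the page of results: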
{
"items": [...],
"next_cursor": "eyJ1cGRhdGVkX2F0IjoiMjAyNS0wMS0xNSIsImlkIjoxMjN9",
"has_more": true
}
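The next_cursor above is simply the base64-encoded JSON keyset. Here is a minimal sketch of encoding and decoding such a token directly in Postgres; most applications do this in the service layer instead:

-- Encode a keyset into an opaque cursor token
SELECT encode(
           convert_to(jsonb_build_object('updated_at', '2025-01-15', 'id', 123)::text, 'UTF8'),
           'base64') AS next_cursor;

-- Decode an incoming token back into keyset values for the next query
SELECT payload ->> 'updated_at' AS updated_at,
       (payload ->> 'id')::bigint AS id
FROM (
    SELECT convert_from(
               decode('eyJ1cGRhdGVkX2F0IjoiMjAyNS0wMS0xNSIsImlkIjoxMjN9', 'base64'),
               'UTF8')::jsonb AS payload
) AS cursor_token;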
Please note that keyset pagination affects the whole technology stack up to the JavaScript running in the browser doing AJAX for infinite scrolling: instead of passing a simple page number to the server, you must pass a full keyset (often multiple columns) down to the server. [4]
Modern bi-directional sync tools leverage keyset pagination for reliable data synchronization. For example, Stacksync implements keyset cursors throughout its real-time synchronization engine to maintain consistent performance when synchronizing millions of records between CRMs, databases, and enterprise systems. This approach ensures that automated data sync between applications maintains sub-second performance regardless of dataset size.
The keyset implementation enables Stacksync to provide guaranteed data consistency and effortless scalability—critical requirements for enterprise database synchronization platforms.
-- ETL processing with keyset pagination
SELECT * FROM source_table
WHERE (batch_timestamp, record_id) > (last_processed_timestamp, last_processed_id)
ORDER BY batch_timestamp ASC, record_id ASC
LIMIT 10000;
This pattern enables consistent processing performance for large-scale data operations without the degradation characteristic of offset-based approaches.
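A sketch of the driving loop, shown here as a PL/pgSQL block for illustration; table and column names follow the query above, and the actual processing step is elided:

-- Walk source_table in keyset order, 10,000 rows at a time
DO $$
DECLARE
    last_ts    timestamptz := '-infinity';
    last_id    bigint      := 0;
    batch_size integer;
BEGIN
    LOOP
        -- Remember the keyset of the last row in the current batch
        WITH batch AS (
            SELECT batch_timestamp, record_id
            FROM source_table
            WHERE (batch_timestamp, record_id) > (last_ts, last_id)
            ORDER BY batch_timestamp ASC, record_id ASC
            LIMIT 10000
        )
        SELECT count(*) OVER (), batch_timestamp, record_id
        INTO batch_size, last_ts, last_id
        FROM batch
        ORDER BY batch_timestamp DESC, record_id DESC
        LIMIT 1;

        EXIT WHEN batch_size IS NULL;  -- no rows left to process
        -- ... hand this batch to the processing step here ...
    END LOOP;
END $$;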
It's safe to say that keyset pagination is most applicable for the average site with ordered linear access. [7]
Choose keyset pagination for ordered, linear access patterns: infinite scrolling interfaces, data synchronization and ETL pipelines, API endpoints that stream large result sets, and batch processing over large tables.
Even with all these problematic aspects, keyset pagination is still extremely important as it's a much more performant way of paginating data. It's particularly well suited to scenarios where you don't need random access to numbered pages. [8]
Keyset pagination provides consistent query times at any depth and stable results even as rows are inserted or deleted during traversal. To realize those benefits, match the index to the keyset order:
-- Match index to keyset order
CREATE INDEX idx_keyset_optimized
ON table_name (sort_field_1 DESC, sort_field_2 ASC, unique_field ASC);
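Note that when sort directions are mixed, as in this index, a single row-value comparison such as (a, b, c) < (...) no longer expresses the keyset; the predicate has to be expanded column by column. A sketch with placeholder last_* values:

-- Keyset predicate equivalent to ORDER BY sort_field_1 DESC, sort_field_2 ASC, unique_field ASC
SELECT * FROM table_name
WHERE sort_field_1 < last_1
   OR (sort_field_1 = last_1 AND sort_field_2 > last_2)
   OR (sort_field_1 = last_1 AND sort_field_2 = last_2 AND unique_field > last_unique)
ORDER BY sort_field_1 DESC, sort_field_2 ASC, unique_field ASC
LIMIT 20;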
Implement cursor validation and performance monitoring:
-- Validate cursor before processing
SELECT COUNT(*) FROM table_name
WHERE (timestamp_field, id_field) = (cursor_timestamp, cursor_id);
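For the monitoring half, one option is the pg_stat_statements extension, which tracks per-statement timing; this is a sketch, the extension must be enabled, and the column names shown are those of PostgreSQL 13 and later:

-- Surface the slowest pagination queries on average
SELECT query, calls, mean_exec_time
FROM pg_stat_statements
WHERE query ILIKE '%ORDER BY%'
  AND query ILIKE '%LIMIT%'
ORDER BY mean_exec_time DESC
LIMIT 10;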
The hall of fame of frameworks that do support keyset pagination is constantly growing. If you are maintaining a framework that is somehow involved with pagination, I ask you, I urge you, I beg you, to build in native support for keyset pagination too. [4]
Modern applications should implement keyset pagination natively rather than retrofitting offset-based systems.
Keyset cursor pagination represents the technical standard for high-performance database traversal in production systems. By eliminating offset pagination's fundamental limitations, keyset cursors enable applications to maintain consistent performance and data integrity regardless of scale—essential capabilities for modern enterprise systems and data synchronization platforms.