Paginating records across large datasets in a web application seems like an easy problem, but it can be surprisingly tough to scale. The two main pagination strategies are offset/limit and cursors.
The offset/limit approach is by far the most common. It works by skipping a certain number of records and limiting the result set to one page's worth.
In addition to being easy to implement, it has the nice advantage that pages are directly addressable. For example, if you want to navigate straight to page 20, you can, because the offset is trivial to calculate from the page number.
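A minimal sketch of offset/limit pagination, using an in-memory SQLite table (the `users` schema and `PAGE_SIZE` here are hypothetical, for illustration only):

```python
import sqlite3

# Hypothetical demo table with 100 rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany(
    "INSERT INTO users (name) VALUES (?)",
    [(f"user{i}",) for i in range(1, 101)],
)

PAGE_SIZE = 10

def fetch_page(page):
    # Directly addressable: the offset is computed straight from the
    # page number, so any page can be requested without walking to it.
    offset = (page - 1) * PAGE_SIZE
    return conn.execute(
        "SELECT id, name FROM users ORDER BY id LIMIT ? OFFSET ?",
        (PAGE_SIZE, offset),
    ).fetchall()

print(fetch_page(3)[0])  # first row of page 3 -> (21, 'user21')
```

Note that a stable `ORDER BY` is required; without it, the database is free to return rows in any order and pages can overlap or skip records.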
There is a major drawback though, and it lurks in the way databases handle offsets. OFFSET tells the database to discard the first N rows that a query returns, but the database still has to read those rows from disk first.
This doesn't matter much if you're discarding 100 rows, but if you're discarding 100,000 rows the database is doing a lot of work just to throw away the results.