apollo-client: Query doesn't return empty entries from cached array (offset pagination) any more

We noticed that our implementation of offset/limit-based pagination broke. This used to work with earlier Apollo Client 3.0 versions (beta/rc) but doesn’t work in 3.0.2.

Intended outcome: Custom type policies should be able to merge pagination results. useQuery should return the array including the empty entries so that the component can find the data it needs to display a page based on an offset and limit.

Actual outcome: The cache contains the array with its empty entries intact, but useQuery returns the array with the empty items removed.

Versions: Apollo Client 3.0.2, React 16.13.1
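
For reference, a minimal sketch of the kind of setup in question, using the offsetLimitPagination helper that comes up later in the thread; the field and query names (rows, Rows) are illustrative, not our real schema:

import { ApolloClient, InMemoryCache, gql, useQuery } from "@apollo/client";
import { offsetLimitPagination } from "@apollo/client/utilities";

const cache = new InMemoryCache({
    typePolicies: {
        Query: {
            fields: {
                // Merge each fetched page into one array at the right offsets.
                rows: offsetLimitPagination(),
            },
        },
    },
});

const client = new ApolloClient({ uri: "/graphql", cache });

const ROWS_QUERY = gql`
    query Rows($offset: Int!, $limit: Int!) {
        rows(offset: $offset, limit: $limit) {
            id
            name
        }
    }
`;

// Inside a component: the hook should return the merged array so the page
// can be sliced out of it by absolute offset/limit.
function useRowsPage(offset: number, limit: number) {
    const { data, fetchMore } = useQuery(ROWS_QUERY, { variables: { offset, limit } });
    const page = data?.rows.slice(offset, offset + limit);
    return { page, fetchMore };
}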

About this issue

  • Original URL
  • State: open
  • Created 4 years ago
  • Reactions: 1
  • Comments: 15 (6 by maintainers)

Most upvoted comments

Hi @hwillson, it is still a problem with the latest version. It would require an intentional update to reintroduce support for storing sparse arrays in the Apollo cache. The current implementation uses an array filter operation that reduces a sparse array to a non-sparse array before storing.

The downside to no longer supporting sparse arrays, as previously noted, is that it forces clients to pad their arrays with potentially millions of nulls before storing (imagine the case where only the first and thousandth page of data had been fetched, for example).
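
To make the filtering and the padding concrete (plain array semantics, not Apollo-specific code; a page size of 10 is just for illustration):

// Two pages fetched: offset 0 and offset 9990, page size 10.
const cached: (number | null)[] = [];
for (let i = 0; i < 10; ++i) cached[i] = i;             // first page
for (let i = 9990; i < 10000; ++i) cached[i] = i;       // thousandth page

// A filter pass like the one described above drops the holes entirely,
// so every later item loses its offset:
const collapsed = cached.filter(() => true);
console.log(collapsed.length);                          // 20

// To preserve offsets without sparse-array support, the client has to pad
// the 9980 positions in between with explicit nulls before storing:
for (let i = 0; i < cached.length; ++i) {
    if (!(i in cached)) cached[i] = null;
}
console.log(cached.length);                             // 10000, mostly nulls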

I just ran into this issue again while attempting to reconfigure pagination in my app. Respectfully, the provided offsetLimitPagination utility and its documentation should be removed if they are not going to be updated, as the utility simply does not work as the docs describe and has not worked for almost 3 years. It is quite frustrating to follow documentation exactly and get different results - something that over time leads to lost confidence in Apollo overall.

The provided offsetLimitPagination policy no longer works properly as it creates sparse arrays that are now automatically filtered: [0, 1, 2, <3 empty items>, 6] becomes [0, 1, 2, 6].

I fixed this in our code by writing explicit null placeholders into the cache in the merge function and adding a field read function:

import { TypePolicy } from "@apollo/client";

/**
 * Create a TypePolicy for offset/limit based pagination.
 */
function makeOffsetPaginationPolicy(): TypePolicy {
    return {
        fields: {
            rows: {
                keyArgs: false,
                merge(existing: any[] | undefined, incoming: any[], params) {
                    const offset = params.variables?.offset ?? 0;
                    // Copy the existing entries so the cached array is never mutated in place.
                    const merged = existing ? existing.slice(0) : [];
                    // Insert the incoming elements in the right places, according to args.
                    const newEnd = offset + incoming.length;
                    const oldEnd = existing?.length ?? 0;
                    for (let i = Math.min(offset, oldEnd); i < newEnd; ++i) {
                        if (i >= offset) {
                            merged[i] = incoming[i - offset];
                        } else if (i >= oldEnd) {
                            // Pad the gap between the old end and the new offset with
                            // explicit null placeholders instead of array holes.
                            merged[i] = null;
                        }
                    }
                    return merged;
                },
                read(existing: any[] | undefined) {
                    // Return the whole padded array; the component slices out its page.
                    return existing ?? [];
                },
            },
        },
    };
}
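
For completeness, this is roughly how it gets registered (assuming the paginated rows field lives on the root Query type):

import { InMemoryCache } from "@apollo/client";

const cache = new InMemoryCache({
    typePolicies: {
        Query: makeOffsetPaginationPolicy(),
    },
});

// The component then slices its page out of the full padded array, e.g.:
// const page = data?.rows.slice(offset, offset + limit);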

@benjamn @hwillson I think this should be clarified in the docs: you still have to patch the read function to make sure it doesn’t blow away the empty slots in the array. Additionally, you can’t map over sparse arrays like this:

slice.map(friend => canRead(friend) ? friend : null);

map will leave the empty slots in the resulting array, but it does not call the callback function on them. To actually turn the empty slots into null, you have to use a for loop:

for (let i = 0; i < slice.length; ++i) {
  slice[i] = canRead(slice[i]) ? slice[i] : null;
}

Further notes: the fix suggested by @benjamn will not work in the case of sparse arrays (including the officially provided offsetLimitPagination util), because the callback of a map function is not invoked for a sparse array “hole” (https://remysharp.com/2018/06/26/an-adventure-in-sparse-arrays).
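
You can see the difference directly in a console; Array.from is one alternative that does visit the holes, if you prefer it over the for loop:

const slice = [0, 1, 2, , , , 6];       // length 7, three holes

slice.map(x => x ?? null);              // [0, 1, 2, <3 empty items>, 6] -- holes skipped but kept
slice.filter(() => true);               // [0, 1, 2, 6] -- holes dropped entirely

// Array.from calls its mapping function for holes too (they come through as
// undefined), so it turns them into explicit nulls in a single pass:
Array.from(slice, x => x ?? null);      // [0, 1, 2, null, null, null, 6]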

@cbergmiller’s workaround does work, but it suffers from the padding problem I mentioned in my previous comment, so it still feels like a workaround rather than an acceptable solution.