apollo-client: [3.0] - pagination example does not work as intended

Intended outcome:

I’m trying to implement the pagination example with the new cache implementation.

Actual outcome:

Using the pagination example based on #5677 doesn’t work as expected.

merge(existing: any[], incoming: any[], { args }) {
  const merged = existing ? existing.slice(0) : [];
  // Insert the incoming elements in the right places, according to args.
  for (let i = args.offset; i < args.offset + args.limit; ++i) {
    merged[i] = incoming[i - args.offset];
  }
  return merged;
},
read(existing: any[], { args }) {
  // If we read the field before any data has been written to the
  // cache, this function will return undefined, which correctly
  // indicates that the field is missing.
  return existing && existing.slice(
    args.offset,
    args.offset + args.limit,
  );
},

In the merge function, instead of doing

for (let i = args.offset; i < args.offset + args.limit; ++i)

I had to do

for (let i = args.offset; i < Math.min(args.offset + incoming.length, args.offset + args.limit); ++i)

since the incoming length can be less than the limit, in which case the original loop writes undefined items into the array.
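
Putting it together, a corrected merge in the example’s own style looks like this (a sketch):

merge(existing: any[], incoming: any[], { args }: { args: any }) {
  const merged = existing ? existing.slice(0) : [];
  // Stop at the end of what the server actually returned, so we never
  // write undefined entries past the incoming page.
  const end = Math.min(args.offset + incoming.length, args.offset + args.limit);
  for (let i = args.offset; i < end; ++i) {
    merged[i] = incoming[i - args.offset];
  }
  return merged;
},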

In the read function, if the existing array has data and we slice at a larger offset, it returns an empty array rather than undefined, so the query is not sent to the server.

Instead of doing

return existing && existing.slice(
  args.offset,
  args.offset + args.limit,
);

I did

const data =
  existing && existing.slice(args.offset, args.offset + args.limit);
return data && data.length === 0 ? undefined : data;
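
The complete read function with that change (again a sketch, in the example’s style):

read(existing: any[], { args }: { args: any }) {
  const data =
    existing && existing.slice(args.offset, args.offset + args.limit);
  // An empty slice means the requested page isn't cached yet; returning
  // undefined instead of [] marks the field as missing, so the query
  // goes to the server.
  return data && data.length === 0 ? undefined : data;
},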

Also, the types in the example fail as written.

Instead of

merge(existing: any[], incoming: any[], { args }) {
...
}
read(existing: any[], { args }) {
...
}

I replaced them with

merge(existing: any, incoming: any, { args }: { args: any }) {
...
}
read(existing: any, { args }: { args: any }) {
...
}
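
A stricter alternative (my suggestion, not from the original example) is to lean on the FieldPolicy type that @apollo/client exports; once the policy is typed, the options parameter is inferred, so { args } needs no explicit annotation:

import { FieldPolicy } from "@apollo/client";

// Hypothetical typed helper; TItem stands in for whatever the list holds.
export function offsetLimitPolicy<TItem = any>(): FieldPolicy<TItem[]> {
  return {
    merge(existing, incoming, { args }) {
      const offset = args?.offset ?? 0;
      const limit = args?.limit ?? incoming.length;
      const merged = existing ? existing.slice(0) : [];
      // Cap at incoming.length so short pages don't pad undefined items.
      const end = offset + Math.min(limit, incoming.length);
      for (let i = offset; i < end; ++i) {
        merged[i] = incoming[i - offset];
      }
      return merged;
    },
    read(existing, { args }) {
      if (!existing) return undefined;
      const offset = args?.offset ?? 0;
      const limit = args?.limit ?? existing.length;
      const page = existing.slice(offset, offset + limit);
      // Treat an empty page as a cache miss so the query hits the server.
      return page.length === 0 ? undefined : page;
    },
  };
}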

Versions

 System:
    OS: macOS 10.15.2
  Binaries:
    Node: 12.12.0 - /usr/local/bin/node
    Yarn: 1.19.1 - /usr/local/bin/yarn
    npm: 6.11.3 - /usr/local/bin/npm
  Browsers:
    Chrome: 79.0.3945.130
    Safari: 13.0.4
  npmPackages:
    @apollo/client: ^3.0.0-beta.30 => 3.0.0-beta.30
    @apollo/link-context: ^2.0.0-beta.3 => 2.0.0-beta.3
    @apollo/link-error: ^2.0.0-beta.3 => 2.0.0-beta.3

About this issue

  • State: closed
  • Created 4 years ago
  • Comments: 15 (5 by maintainers)

Most upvoted comments

With the latest updates of AC, this seems to be working fine now. I just wanted to leave a working example of how to handle numbered pagination, where you can delete and insert items in the list without having to refetch queries from the server when it isn’t necessary. Of course, this works hand in hand with the cache.modify and cache.evict functions when doing insert and delete mutations.
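
For example, a delete mutation can evict the removed entity and garbage-collect the cache; the canRead filters in the policy below then drop the dangling reference (a sketch; DELETE_USER and the response shape are placeholders):

import { useMutation } from "@apollo/client";

const [deleteUser] = useMutation(DELETE_USER, {
  update(cache, { data }) {
    // Evict the deleted entity from the normalized cache...
    cache.evict({
      id: cache.identify({ __typename: "users", id: data?.deleteUser?.id }),
    });
    // ...then garbage-collect anything left unreachable.
    cache.gc();
  },
});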

import { InMemoryCache } from "@apollo/client";
import { offsetLimitPaginatedField } from "./pagination";

const cache = new InMemoryCache({
  typePolicies: {
    Query: {
      fields: {
        users: offsetLimitPaginatedField(),
        user(existing, { args, toReference }) {
          return existing || toReference({ __typename: "users", id: args?.id });
        },
      },
    },
  },
});

pagination.ts


import { FieldFunctionOptions } from "@apollo/client";

type Items = {
  nodes: any[];
  totalCount: number;
};

export function offsetLimitPaginatedField() {
  return {
    keyArgs: ["where"],
    merge(
      existing: Items,
      incoming: Items,
      { args, canRead, readField }: FieldFunctionOptions
    ) {
      const offset = args?.offset ?? -1;
      const limit = args?.limit ?? -1;

      // Insert the incoming elements in the right places, according to args.
      if (offset >= 0 && limit > 0) {
        // filter dangling references from the existing items
        const merged = existing?.nodes?.length
          ? existing.nodes.slice(0).filter(canRead)
          : [];
        // we need the offset difference of the existing array and what was requested,
        // since existing can already contain newly inserted items that may be present in the incoming
        const offsetDiff = Math.max(merged?.length - offset, 0);
        const end = offset + Math.min(limit, incoming.nodes.length);
        for (let i = offset; i < end; ++i) {
          const node = incoming.nodes[i - offset];

          // Check whether the node is already present in the array. This can
          // happen when a new object is added at the top of the list: when
          // requesting the next page, the server sends back ids that are
          // already cached.
          const duplicate = merged.find(
            (m: any) => readField("id", m) === readField("id", node)
          );

          if (!duplicate) {
            merged[i + offsetDiff] = node;
          }
        }
        // we filter for empty spots in case the incoming contained existing items.
        // This could happen if items were inserted at the top of the list
        const nodes = merged.filter((m: any) => m);
        return {
          ...incoming,
          nodes,
        };
      }
      return incoming;
    },
    read(existing: any, { args, canRead }: FieldFunctionOptions) {
      const offset = args?.offset ?? -1;
      const limit = args?.limit ?? -1;

      if (offset < 0 && limit < 0) {
        return existing;
      }

      // If we read the field before any data has been written to the
      // cache, this function will return undefined, which correctly
      // indicates that the field is missing.
      const nodes =
        existing?.nodes?.length && offset >= 0 && limit > 0
          ? existing.nodes
              // we filter for empty spots because it's likely we have padded spots with nothing in them.
              // also filter objects that are no longer valid references (removed from the cache)
              .filter(canRead)
          : existing?.nodes;
      // we have to filter first in order to slice only valid references and prevent a
      // server roundtrip if length doesn't cover the items per page
      const page = nodes?.length ? nodes.slice(offset, offset + limit) : nodes;
      // if the total items read from the cache is less than the number requested,
      // but the total items is greater, it means that we don't have enough items cached,
      // so in order to request the items from the server instead, we return undefined
      // for this to work we need to know the total count of all items
      const itemsPerPage = args?.limit || 0;
      if (page?.length < itemsPerPage && nodes?.length < existing?.totalCount) {
        return undefined;
      }

      if (nodes?.length) {
        return {
          ...existing,
          nodes: page,
        };
      }
    },
  };
}

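For illustration, here is how a component might page through this field; the query shape and variable names are my assumptions, not part of the comment above:

import { gql, useQuery } from "@apollo/client";

// Hypothetical query; where/offset/limit line up with keyArgs and the
// args read by the field policy above.
const USERS = gql`
  query Users($where: UserFilter, $offset: Int!, $limit: Int!) {
    users(where: $where, offset: $offset, limit: $limit) {
      nodes {
        id
        name
      }
      totalCount
    }
  }
`;

function useUsersPage(page: number, pageSize: number) {
  // Cached pages are served by read() without a network request;
  // missing pages return undefined there, which triggers a fetch.
  return useQuery(USERS, {
    variables: { offset: page * pageSize, limit: pageSize },
  });
}
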
I agree this is an area that we need to handle better in the documentation. Let us know if the updates in #6429 do not answer your questions (or the questions you think other developers might have). Thanks again for pointing out the issues with the original pagination examples!

@benjamn I know you guys are busy with the release of the upcoming v3, but can we get back to this at some point, at least in the docs? We use a lot of tables in an admin dashboard where we can edit/remove/add rows, and we still have to refetch after every CRUD operation. I thought the pagination was going to be fixed with the new AC cache, but this is still happening. I think it is a really common use case, and it would be great for the docs to include a recommended way to implement this sort of pagination.

Thanks for the great work!

@benjamn you can see the repro of the first issue here: https://codesandbox.io/s/nostalgic-pike-94wxh