apollo-client: fetchMore is not using the cache
When using pagination and calling fetchMore, the cache is not used to read data.
Even though the data is already in the cache and the field has both 'read' and 'merge' policies implemented, fetchMore does not call the typePolicy 'read' function and goes directly to the network. The fetchPolicy is ignored as well: even if you set it to 'cache-only', fetchMore still issues network requests.
The setup is essentially the example from the documentation:
const FeedData = ({ type = "PUBLIC" }) => {
  const [limit, setLimit] = useState(10);
  const { loading, data, fetchMore } = useQuery(FEED_QUERY, {
    variables: {
      type: type.toUpperCase(),
      offset: 0,
      limit,
    },
  });

  if (loading) return <Loading/>;

  return (
    <Feed
      entries={data.feed || []}
      onLoadMore={() => {
        const currentLength = data.feed.length;
        fetchMore({
          variables: {
            offset: currentLength,
            limit: 10,
          },
        }).then(fetchMoreResult => {
          // Update variables.limit for the original query to include
          // the newly added feed items.
          setLimit(currentLength + fetchMoreResult.data.feed.length);
        });
      }}
    />
  );
};
Sometimes, with a continuous-scroll implementation, you scroll through 500 items, which loads 500 items into the cache; then the component gets unmounted and mounted again. At that point you do not want all 500 items to render at once, because the rendering could be slow. I want to show only the initial 10 items again and fetchMore from the cache (or from the network if the data is not cached yet) as you scroll. So in the typePolicy I have something like:
import { offsetLimitPagination } from '@apollo/client/utilities';

const policy = {
  Query: {
    fields: {
      feed: {
        keyArgs: ['type'],
        // Reuse the merge function from offsetLimitPagination.
        merge: offsetLimitPagination().merge,
        read(existing, { args }) {
          if (!args || !existing || !(args.limit >= 0)) {
            return existing;
          }
          const offset = args.offset ?? 0;
          if (existing.length >= args.limit + offset) {
            return existing.slice(offset, offset + args.limit);
          }
          // Not enough data in the cache yet: return undefined so the
          // query goes to the network.
        },
      },
    },
  },
};
But 'read' is never called when using fetchMore; it always makes a network call, even though all the data for the first 500 items is already in the cache.
Apollo Client version 3.3
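For context, a policy object like the one above is registered on the cache roughly as follows (a minimal sketch; the endpoint URI is a placeholder):

import { ApolloClient, InMemoryCache } from "@apollo/client";

// Register the type policy defined above on the cache.
const client = new ApolloClient({
  uri: "https://example.com/graphql", // placeholder endpoint
  cache: new InMemoryCache({ typePolicies: policy }),
});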
About this issue
- Original URL
- State: open
- Created 4 years ago
- Reactions: 5
- Comments: 18 (4 by maintainers)
Unfortunately there is still no update, at least in the code: https://github.com/apollographql/apollo-client/blob/fea2bab4e2c50ee96374ea27eb7b52358ccb59ed/src/core/ObservableQuery.ts#L431
Since I'm using React I'm not able to switch to setVariables as @m4riok suggests and as stated in the docs: https://github.com/apollographql/apollo-client/blob/fea2bab4e2c50ee96374ea27eb7b52358ccb59ed/docs/source/pagination/offset-based.mdx#L153
I tried one build with this hardcoded value removed and it worked like a charm with the cache-first policy (for my use case at least, and without regression testing of course). It would really be nice if no-cache could be introduced as a default value that we can manually override, instead of simply being hardcoded. The updateQuery referenced in the comment (line 429) is optional, so it would even make sense to check whether or not updateQuery is set and only apply no-cache if it's really necessary, wouldn't it?
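From the call site, the behavior being requested would amount to something like the following. Note that this fetchPolicy option on fetchMore is hypothetical (it does not exist in Apollo Client 3.3), and this is not the suggested diff referenced in the edit below:

fetchMore({
  variables: { offset: currentLength, limit: 10 },
  // Hypothetical option: let the caller relax the hardcoded "no-cache"
  // so already-cached pages are read without a network round trip.
  fetchPolicy: "cache-first",
});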
Edit: Suggested change of line 431 in ObservableQuery.ts:
@awlevin I also had this problem. My current solution is to generate the typePolicies from the schema, so new arguments on fields are picked up in the type policies automatically. But of course a negated keyArgs would be nice, something like excludedKeysArgs. The field key would then be constructed from the included argument names, sorted by argument name (see the sketch below).
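A hedged sketch of that schema-driven generation, assuming the schema is available as SDL and that offset and limit are the pagination arguments to exclude (the helper and argument names are illustrative, not from the actual implementation):

import { buildSchema } from "graphql";

// Give every Query field a keyArgs list made of its non-pagination
// argument names, sorted alphabetically.
function generateTypePolicies(sdl, paginationArgs = ["offset", "limit"]) {
  const schema = buildSchema(sdl);
  const fields = {};
  for (const [name, field] of Object.entries(schema.getQueryType().getFields())) {
    const keyArgs = field.args
      .map((arg) => arg.name)
      .filter((argName) => !paginationArgs.includes(argName))
      .sort();
    fields[name] = { keyArgs: keyArgs.length ? keyArgs : false };
  }
  return { Query: { fields } };
}

The generated policies can then be combined with per-field merge and read functions like the ones shown earlier in this issue.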
@robertsmit If you have some way to detect the event of navigating back to the component, you should be able to call setLimit(10) to reset the window.
The fetchMore method sends a separate request that always has a fetch policy of no-cache, which is why it doesn't try to read from the cache first.

@benjamn I have a related question about offsetLimitPagination. We're implementing an infinite scrolling list where mutations can happen on items in the list (think: editing a comment in a news feed). Using offsetLimitPagination helps tremendously with reactively updating the UI after mutations and with automating the feed concatenation, but it feels super hard to maintain.
For example, to be a little more concrete: when I forgot a particular key, fetching the next page of the feed would cause other pieces of UI to concatenate results where I wouldn't want them to (i.e. a section with a limit of 6 items sits next to a feed; when the feed calls fetchMore, all of a sudden both areas get the next page of data unless I backtrack and add all relevant keyArgs for both queries).
So if I want an infinite scrolling list, do I now have to add keyArgs for every other combination of arguments used by posts queries throughout our repo? It would be great if I could use this typePolicy on a specific instance of useQuery, or if this global one had an option to just consolidate anything with offset (kind of like the inverse of how I think it currently works). Maybe I'm misunderstanding something here altogether though.
Edit: I think someone else encountered this here, for what that's worth.
@dentuzhik Sure, let me try to explain it with words first.
We had a situation where we needed to load a list of 60 products in chunks of 12. On reaching the end of the first 12 products we would use fetchMore with an intersection observer to load the next 12. On reaching the end of 60 products, there was a load-more button which starts the next 60 products in chunks of 12. So at first we passed fetchMore down to the component as a prop (if I remember correctly), which handled the request, as well as to the button.
To get to a solution, we ended up refactoring everything a bit. With fetchMore we had already passed the variables offset and amount to the other components. But instead of calling fetchMore we now used useLazyQuery within them. The query itself was available through a regular import. As you know, useLazyQuery does not execute immediately but returns a tuple with a query function. So the intersection observer simply calls this query function instead of fetchMore.
I'm still not super happy with the solution, but we needed to meet a deadline and it met our requirements. If it's still unclear I can create a code sandbox with both approaches, just let me know.
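A minimal sketch of the approach described above, assuming a PRODUCTS_QUERY document and a renderProduct prop (both placeholders, not the original code):

import { useEffect, useRef } from "react";
import { useLazyQuery } from "@apollo/client";
import { PRODUCTS_QUERY } from "./queries"; // placeholder import

function ProductChunk({ offset, amount, renderProduct }) {
  const [loadChunk, { data }] = useLazyQuery(PRODUCTS_QUERY, {
    variables: { offset, amount },
  });

  const ref = useRef(null);
  useEffect(() => {
    // Fire the query only when this chunk scrolls into view, instead of
    // calling fetchMore on the parent query.
    const observer = new IntersectionObserver(([entry]) => {
      if (entry.isIntersecting) loadChunk();
    });
    if (ref.current) observer.observe(ref.current);
    return () => observer.disconnect();
  }, [loadChunk]);

  return <div ref={ref}>{(data?.products ?? []).map(renderProduct)}</div>;
}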
@iteratus You mention in the comment on your closed PR that you found a workaround using useLazyQuery. If it's not too much work, can you share what that workaround was?
Hello @benjamn,
Please confirm whether, to date, fetchMore still sends a separate request that always has a fetch policy of no-cache. The documentation still presents fetchMore as the recommended way to implement pagination, but if this is the case it sort of defeats the purpose of having a cache in the first place.
I had to use setVariables on the ObservableQuery to get it to attempt reading from the cache first, which is only a footnote in the documentation. If fetchMore still behaves this way, is there any recommended way other than setVariables to trigger the read function in my typePolicies and make the ObservableQuery fire with new values?
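For reference, a minimal sketch of that setVariables approach, using client.watchQuery directly (FEED_QUERY and its variables come from the issue description; renderFeed is a placeholder):

// Watch the query through the cache so the feed field's read function runs.
const observable = client.watchQuery({
  query: FEED_QUERY,
  variables: { type: "PUBLIC", offset: 0, limit: 10 },
  fetchPolicy: "cache-first",
});

observable.subscribe(({ data }) => renderFeed(data.feed));

// Unlike fetchMore (which always uses no-cache), setVariables re-evaluates
// the query with the new variables, so cached items are returned without a
// network request whenever the read function can satisfy them.
function loadMore(offset, limit) {
  return observable.setVariables({ type: "PUBLIC", offset, limit });
}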
Though I'm still open to something like nonKeyArgs, I realized after writing my previous comment that you can currently provide a custom keyArgs function to implement whatever behavior you want:
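(The snippet from that comment is not reproduced above.) As a rough sketch of what such a custom keyArgs function could look like, keying the field by every argument except the pagination arguments, sorted by name (the helper name is illustrative):

// Illustrative helper: build a keyArgs function that ignores the given
// arguments and keys the field by the remaining ones, sorted by name.
function excludingKeyArgs(...excluded) {
  return (args) => {
    if (!args) return false;
    const names = Object.keys(args)
      .filter((name) => !excluded.includes(name))
      .sort();
    if (!names.length) return false;
    return names.map((name) => `${name}:${JSON.stringify(args[name])}`).join(";");
  };
}

// e.g. feed: { keyArgs: excludingKeyArgs("offset", "limit"), merge, read }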
@robertsmit I might call it nonKeyArgs for brevity, but I like the idea! Your other idea about allowing fetchMore to take a non-default options.fetchPolicy (rather than always using no-cache) is interesting too.

I have added some code for clarification.