next.js: [Bug] next/link is not calling the server for subsequent navigations on dynamic routes

Describe the feature you’d like to request

The documentation says the conditions for hard navigation are:

  • When navigating between dynamic segments
  • When navigating between two different group layouts (e.g. from (groupA)/layout to (groupB)/layout)

I’d like to suggest also using hard navigation for segments marked with dynamic='force-dynamic', for segments using dynamic functions, and even for segments using fetch with cache: 'no-store'.

The docs say that using these configurations is like using getServerSideProps() in the pages directory, but navigation does not behave the same way, which is quite confusing.

Use cases for this feature could be these:

Describe the solution you’d like

The solution I propose is to use hard navigation in these cases:

  • When navigating to a page marked with dynamic='force-dynamic', next should always do a hard navigation

  • When navigating to a page using dynamic functions headers() and cookies(), next should always do a hard navigation

  • When navigating to a page using fetch with cache: 'no-store', next should always do a hard navigation, or at least next should always refetch the data

  • When navigating to a page using either fetch with next: { revalidate: n_seconds } or export const revalidate = n_seconds, next should only do a hard navigation once n_seconds have elapsed.

The last two could be tricky; if they are not feasible, at least add a paragraph to the docs explaining why, and perhaps recommend the first two approaches instead.
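For reference, a minimal sketch of the segment configurations mentioned above (the route and URL are hypothetical, for illustration only):

```typescript
// app/items/page.tsx (hypothetical route, for illustration only)

// Option discussed above: mark the whole segment dynamic...
export const dynamic = 'force-dynamic';
// ...or revalidate on a timer instead:
// export const revalidate = 60;

export default async function ItemsPage() {
  // ...or opt a single request out of the data cache:
  const res = await fetch('https://example.com/api/items', { cache: 'no-store' });
  const items: string[] = await res.json();
  return (
    <ul>
      {items.map((item) => (
        <li key={item}>{item}</li>
      ))}
    </ul>
  );
}
```

All three of these make the page dynamic on the server; the issue here is that none of them currently affects the client-side router cache.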

Describe alternatives you’ve considered

Updates

  • In next@13.3.1, using <Link prefetch={false} /> seems to fix the problem: https://stackblitz.com/edit/vercel-next-js-n1tqpr?file=app%2Flayout.tsx . But it disables link prefetching and results in slower navigations.

  • As of next@13.4.0 the client-side cache has been reworked: dynamic pages are now cached on the client with a 30-second timer, so your server is re-called every 30 seconds. One catch: this only applies if you navigate to a different page after that time; if you do a very fast back & forth to the same page, the timer resets and waits another 30 seconds. Disabling prefetch no longer solves the problem: it disables prefetching, but once the page has been navigated to, the 30-second rule still applies. If you want the old behaviour, you'd have to downgrade to next@13.3.1 for now, though you would lose support for Server Actions & the revalidate primitives. Alternatively, you can avoid the Link component and use regular a tags if you still want the new features of Next.

However, there is work in progress on both allowing prefetching on hover (with <Link>) and allowing the stale time of the client-side cache to be configured. You can read more in the PR about the client-side rework:

Follow ups

  • we may add another API to control the cache TTL at the page level
  • a way to opt-in for prefetch on hover even with prefetch={false}

This would allow setting a page TTL of 0 to call the server on each and every navigation and always have fresh data.
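To make the TTL idea concrete, here is a minimal sketch (my own illustration, not Next.js internals) of how a TTL-gated cache lookup behaves; a TTL of 0 means every navigation goes back to the server:

```typescript
// Illustrative sketch of a TTL-gated cache read (not Next.js internals).
interface CacheEntry<T> {
  value: T;
  storedAt: number; // timestamp in ms when the entry was cached
}

// Returns the cached value only while it is still fresh; undefined means
// the caller must refetch from the server.
function readFresh<T>(
  entry: CacheEntry<T> | undefined,
  ttlMs: number,
  now: number,
): T | undefined {
  // ttlMs === 0 means never reuse: every navigation hits the server.
  if (!entry || now - entry.storedAt >= ttlMs) return undefined;
  return entry.value;
}
```

With `ttlMs = 30_000` this models the current 30-second behavior; a configurable `pageTTL` would let developers dial it down to 0 for always-fresh pages.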

NEXT-1352

About this issue

  • Original URL
  • State: closed
  • Created 2 years ago
  • Reactions: 138
  • Comments: 297 (131 by maintainers)


Most upvoted comments

Hey folks, wanted to share an update.

We really appreciate everyone’s feedback here. I spent today reading through every comment here and categorized them all in a document to further understand how we can improve.

We are evaluating the 30s client-side router cache behavior and ways to configure this value.

I’m going to post a new issue covering what the expected behavior is today, how you can configure and change those defaults, and then a set of questions you can answer that will help us determine the ideal expected behavior.

Thank you!

PS: The comments are marked outdated just to keep track of what has been read/tracked; they're not actually outdated, I just had to pick one of the options.

This seems to be a real issue that needs to be considered by the Next.js team. It has a lot of implications for components that rely on cookie-based authentication before rendering, and it is definitely going to make it very difficult to migrate some apps relying on getServerSideProps.

As some people have mentioned before, the docs are inaccurate in saying that using 'force-dynamic' is equivalent to getServerSideProps():

https://beta.nextjs.org/docs/api-reference/segment-config#dynamic:~:text='force-dynamic'%3A Force,fetchCache %3D 'force-no-store'

All we have so far are some workarounds:

In my opinion, none of these solutions is good enough; they are not as clean as they should be. I agree with the people who think the Next.js team should update the conditions for soft navigation, or add some kind of configuration to server/client components so that the server cache takes precedence over the client cache.

Maybe consider any server component with export const dynamic = 'force-dynamic' or export const revalidate = 0; to use Hard navigation only.

I have been enjoying the app directory a lot, but I'll keep checking back, since I really need this behavior to be updated.

Hey everyone, I spent most of today writing down the individual cases that were highlighted in the feedback. I’m about halfway through the topics currently, expecting to wrap that part up on Monday.

I’ll definitely share the new issue here when it’s opened 👍

Is the next team working on this issue?

Although we would like feedback on if this 30 second limit is actually too long or too short in practice. If you have a need for this to be 0 or 5 second, please elaborate why so that we can understand this UX because I haven’t yet seen a UX where this is better as part of the navigation. I’m sure this might be wrong but to ensure that we solve it the right way, we need to understand the best UX here so an example would be great. Keep in mind that the back/forward button also keeps the stale data around for even longer for this same reason - just like built-in browser behavior.

I think there’s a difference intuitively between how people expect browser back/forward buttons to behave, and when clicking links to navigate. The primary use-case of “Back” I imagine is recovering some historic state… e.g. looking at a list of items, going into detail of one, then going back to same list to click on the next. In that situation, the cached state is generally useful and expected.

Whereas the action of clicking a link/button, even if it’s to a page that has been visited recently, seems to me quite a clear request that the user wants the very latest state of that information.

While there are plenty of situations where the data is not particularly time critical and caching can be a useful optimisation, I can think of examples across all kinds of verticals where a user may be clicking fast between different pages, ending up in an A->B->A navigation loop much faster than 30 seconds. E.g.:

  • E-commerce: Clicking between different products/categories, waiting for new products to be released, or checking that high demand items haven’t sold out.
  • Social networks: Clicking between different “posts” and wanting to see the very latest comments/reactions to each.
  • Gaming: Seeing the latest state of the game / inputs from other players.

Perhaps I am biased coming at this from a technical angle, but I do think over the past 20 or so years people have become used to these behaviours:

  • Type in a url -> see latest page content
  • Sit on a page -> page stays static, unless site supports real-time updates
  • Refresh a page -> see latest page content
  • Back/forward -> navigate between cached previous pages, if supported (but often breaks on things like form flows)
  • Click a link -> see latest page content

It would be nice for the choice of framework to be completely transparent to the user, not have a certain class of website which is changing these intuitive behaviours (quicker than usual to navigate between pages, but the freshness of data can no longer be trusted, and the user has to keep resorting to browser refresh button because they don’t know about this arbitrary 30 second caching rule). Fair enough if it’s the default behaviour if Next.js’s main target isn’t fast-paced websites, but it would be nice if there’s an easy path to getting these intuitive behaviours back for the many cases where freshness of data is a higher priority than marginal performance increases.

We’re reworking the client-side router caching behavior. Will share more soon.

Thank you for the greatly improved and very thorough updated documentation about Next.js caching, it’s very helpful!

However, I will add my voice to others’ to say that the current excessive client-side Router Cache behaviour is a design mistake that must be fixed.

The fact the client router forces caching of data from server components for at least 30 seconds is utterly surprising. I doubt any web developers would expect it to work this way, and the guaranteed outcome of surprising and unexpected caching behaviour is a lot of nasty bugs.

I urge the team behind Next.js to please rethink, and fix, the forced client-side caching – sooner rather than later, to save us all a lot of trouble and confusion.

In the meantime, it isn’t good enough to mention this surprising behaviour and the fact “[i]t’s not possible to opt out of the Router Cache” in only one place, almost 3,000 words into the very dense 4.4k word document Caching in Next.js. It reminds me of other information supposedly put on display.

For as long as the client-side router cache continues to ignore in common usage scenarios any of the server-side caching directives available to us – like dynamic = 'force-dynamic' and revalidate = 0 – the documentation should include warning notices wherever these directives are mentioned. Or at least a hint that the Router Cache can greatly affect overall caching, with a direct link to the relevant section.

The new App Router & React Server Components technology is exciting and we are keen to benefit from its many improvements, but for us this unwanted caching behaviour and the high cost of the available work-arounds – in complexity, riskiness, and potentially even fees for too many revalidations – means we simply cannot use it for the most important parts of our cart-based ticket-selling site.

Having the mandatory 30s cache is not good enough IMO. We should have more control.

Spent all of yesterday on this still, close to finishing up the writeup, needs a review from @sebmarkbage later today.

Please be patient, I’m working on it, demanding an update is not constructive.

Thanks for your feedback!

To provide an update here: I’m discussing the right caching behavior for links / refresh with Sebastian still, definitely listening to your feedback and we understand this particular case. A part of this will be how it integrates with mutations.

So it’s possible that this is a signal that Next.js does not plan to provide hard navigation for these cases…

Please don’t read things into “unit tests were added”, you can always ask me. I’ve been working on adding unit tests to the router to ensure that how it works right now is covered and others on the team are able to ramp up on working on the new router, especially as it’s affected by concurrent features (e.g. double-invokes etc) having these tests is incredibly important.

This is very concerning as I think this is a huge and obvious flaw in the specified/documented behavior and AFAICT there is no way to fix it without some kind of breaking change. I hope it gets some attention before declaring app dir stable. Does anyone else feel like maintainers aren’t aware of this issue and should be? Does anyone have any idea how to get their attention?

Please be patient. There’s a ton of different reports, including this one being duplicately reported across issues/discussions. We’ve been focused on ensuring all changes that would affect the code you’re writing are landed, for example replacing head.js with the new metadata API, route handlers which replace api routes, many bugfixes to CSS handling, ensuring edge cases in the new router don’t make applications hang, static generation / ISR improvements, MDX support, and more. A benefit of this is that now the majority of features and the way you write them has been landed, allowing us to focus on bugfixes and refining behaviors further. There’s still some features coming, notably mutations, parallel routes, and interception.

I just hit this issue in a project while doing something I feel is pretty common: a simple CRUD feature, with a page showing a list of items, a page showing item details, a page to edit an item, and a page to add a new one.

If the display pages (list and details) were done via RSCs, then after adding or editing an item and moving to the list or detail page, the data was automatically stale. Those aren't the same pages as the ones that performed the mutations, so the router.refresh() tricks require double rendering. Making an edit and then looking at a detail page that stays stale for 30 seconds is… incredibly jarring o.O Even a browser refresh doesn't always return correct data. Only a browser hard refresh does. Phew.

@Fredkiss3 thanks for the code, you saved the day! But indeed, a solution like @zenflow suggests would be ideal.

To add to the discussion, I think this is only a problem if a link leading to such a dynamic page is prefetched, which is unfortunately the default. The other workaround I found was to disable link prefetching, i.e. if I do:

<Link href="/random" prefetch={false}>Click me</Link>

the above does seem to bypass the cache, as my random-number page returns a different number every time, but I also think the whole page reloads on link click as a result, so that's suboptimal. This may be a separate bug, as the Link now behaves like a plain a tag with no extra behavior added.

Another workaround I found is to not use Link at all, but instead use useRouter and call push(href) directly. I wrapped it in a simple component and have started using it instead of Link when linking to dynamic pages; it seems to work just fine without doing any caching or prefetching.

'use client';

import { useRouter } from 'next/navigation';

export default function DynamicLink({href, children}) {
  const router = useRouter();

  return (
    <a
      href={href}
      onClick={(e) => {
        // prevent the default full-page load and use the client router instead
        e.preventDefault();
        router.push(href);
      }}
    >
      {children}
    </a>
  );
}

I think this is the best solution I found so far.

It would be helpful if someone has a prod app example that they can show where this is bad UX.

But I would never release this kind of problem to prod! My examples are things I want to do, but I can’t.

I want to have the server inform the client of how long they can cache my content, and I want the client router to respect that.

Consider a simple example of notifications. My client side notification badge polls the server and informs the user that they have a new notification. The user clicks the badge to go to the notification page but doesn’t see a new notification because they were just there 25 seconds ago and it’s still cached. And now it won’t be refreshed in 5 seconds, but 30 more seconds!

If a browser got response headers telling it not to cache a response, and it did anyway, giving the user a stale interface, wouldn’t you consider that a browser bug? It’s not doing what it is told. I think the client router is the same. It’s not doing what it is told.

Yes there are workarounds, but I shouldn’t have to think of that. The experience of devs so far says that the current behavior is confusing and not what they expect, leading to more FUD. IMO.

Hey folks! We have two new docs pages going very in-depth on fetching, caching, and revalidating:

Please read through these, as I suspect it will answer a lot of questions raised here 😄

I don’t understand why a framework would force a 30-second client route cache without a way to opt out.

I think this comment: #42991 (comment) needs to be pinned or moved up into the main description, because there are some hacks to make it work if you really want the app directory.

I don’t think this makes Next.js unusable, since the app directory isn’t a requirement to use it. Just stick to the pages directory until this is sorted out.

I am also not a fan of providing a cache with a seemingly arbitrary 30s timer and no way to opt out, as it’s very unintuitive to me.

It just feels a little odd to use Next with the old style pages directory when you see Vercel directing its energies to the app router. Usually devs follow the evolution of frameworks.

It would be good to get an indication from Vercel - is this issue going to be fixed or not? Just a signal, to help people decide if they will adopt NextJS or not in new projects.

@timneutkens @sebmarkbage

What is the current status of this issue? Do you have any ongoing RFC or plans to resolve it?

Or are the docs definitive, and you have decided it will stay that way? If so, can we hear the reasoning behind this decision? Many users in this issue are affected by it.

This is probably the most relevant part:

It’s not possible to opt out of the Router Cache.

You can opt out of prefetching by setting the prefetch prop of the <Link> component to false. However, this will still temporarily store the route segments for 30s to allow instant navigation between nested segments, such as tab bars, or back and forward navigation. Visited routes will still be cached.

https://nextjs.org/docs/app/building-your-application/caching#opting-out-3

The solution for this issue still remains open.

I believe this proposal is to fix these issues:

I am strongly in favor of changing/fixing the rules for when soft vs hard navigation happens, to support dynamic data on pages that have no dynamic segments. You can see from the docs that soft navigation (i.e. the client-side cache) will always be used in this case (https://beta.nextjs.org/docs/routing/linking-and-navigating#conditions-for-soft-navigation) and there’s no way to override it. There’s effectively no way to bust the client-side cache for these pages except a manual router.refresh().

At the same time, I’m against the exact proposal. I’d prefer to keep the rules for client-side caching (hard vs soft navigation) separate/independent from the rules for server-side caching (dynamic='force-dynamic', cache: 'no-store', etc.). There seems to be some confusion around the fact that we are dealing with two separate caches, and the proposed solution would definitely add to that, as well as make it more difficult to learn and reason about how Next.js is supposed to behave in any given situation.

@leerob Can we have an option like export const navigation = 'hard' to force hard navigation on pages where it’s needed?

Documenting flawed behavior is not a good fix. 😃 Client-side caching must respect server-side caching headers. Any other behavior will continually confuse developers. IMO.

I don’t even see anyone from Next.js commenting on this

@sebmarkbage has commented on this issue multiple times, so have I.

Is the next team working on this issue?

Sebastian is evaluating the feedback after the latest post, but he’s out of office this week and next week. Based on that round of feedback we also found that a significant number of the cases mentioned need more specific documentation and examples rather than behavior changes, so we’ll be working on those as well.

Using "next": "13.4.7", and the behavior is still as described in the original issue’s Updates:

In next@13.4.0 the client side cache has been reworked, dynamic pages are now cached in the client with a timer of 30 seconds, so every 30 seconds your server is recalled …

We’ll have to resort to some sloppy workarounds for now. But, man, has this issue not been discussed enough to still be open after 8 months?

I was experimenting with this behavior, and it seems that you can force Next.js to always do a hard navigation if you pass a query string to the Link component.
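The query-string trick can be wrapped in a tiny helper (bustCache is my own name for it, not a Next.js API):

```typescript
// Append a unique query parameter so every navigation produces a distinct
// URL, which the client-side router treats as an uncached entry.
function bustCache(href: string): string {
  const sep = href.includes('?') ? '&' : '?';
  return `${href}${sep}_ts=${Date.now()}`;
}

// Usage with next/link: <Link href={bustCache('/items')}>Items</Link>
```

Note that this pollutes the URL and history with throwaway parameters, so it is very much a workaround rather than a fix.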

Uhhh. That’s a nice workaround. And so easy 😉

But to comment on the overall topic: I think this is a major issue for Next.js. The expectation from reading the docs is totally different from the actual behavior. Even without reading the docs, I expect a force-dynamic / revalidated component to re-fetch data (which it would do if next/navigation didn’t cache it, if I am correct here).

Nobody will understand why a browser refresh works as expected with respect to revalidation but a user navigation won’t. I am strongly with @Josehower on the statement: “Maybe consider any server component with export const dynamic = 'force-dynamic' or export const revalidate = 0; to use Hard navigation only.”

It should be trivial to solve code-wise, so it must be an architectural decision holding it back. I get that sometimes you have to stick to your decisions, but in this case it has to change! For content-focused websites, this feature might be nice. For a dashboard or most kinds of apps, it is a deal breaker. I am on the verge of starting a new customer project that falls into the app category, and given the current implementation I cannot use Next.js. Or at least not the app router; the pages router does not have this behavior, right? Then again, I really like the direction of Next.js and the app router, and it is a real bummer that such a “small thing” is an absolute dealbreaker. Please let us opt out, pretty please!


There needs to be an opt-out. For interactive web applications, especially CRM-like behavior, data needs to always be up to date. If it is not, the user can perform incorrect actions. So for me, I don’t want any cache, as it can create an aggravating experience for my users.

Here is another real world experience:

My application displays iot data and the user jumps between different levels of a dashboard. If they jump between levels and the data isn’t consistent then they complain to me that the data isn’t accurate because they saw it didn’t match. This has happened before due to caching that lasted less than a minute. Adopting app router would cause my users to lose confidence in the data that I am showing them. A sacrifice I cannot afford.

My Opinion

It is my opinion that frameworks should not force caching on the developer. Caching can be really complicated to get right. Unless you have thousands of users hitting your web pages, there is no need to introduce caching and its probable bugs into a codebase, and I believe many Next.js users fall into this category.

@sebmarkbage Another example is the Vercel dashboard (or any kind of dynamic dashboard): how would you implement it with the Next.js app router caching mechanism now?

Take simply the deployments list page for a project: it needs to always be up-to-date with the latest data, especially in a bigger project with multiple deployments per day, hour, or even minute. With stale data, you lose this flexibility.

If we talk simply about auth state: if you disabled or removed a user from a team, do you still want them to have access to the project on navigation?

Granted, this is all done client-side today, and I suppose you use SWR to specify a cache time for faster navigation on certain requests (I think the user is cached?), but I can also see that the Vercel dashboard always makes a request to get the latest deployments when I navigate to the deployments page.

Here is another suggestion (amidst the hundreds already on this thread): since you want to provide the best UX for the user, why not take the example of SWR or react-query and revalidate the page data in the background? You could even provide a useRefetching hook to give the developer the option to show a fallback while the route is revalidating. Granted, this may not be the UX every developer would choose, so you could also add a simple option to block before showing the new page; that would be more aligned with how Suspense works today.
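The background-revalidation idea above can be sketched as a stale-while-revalidate cache (a simplified illustration of what SWR/react-query do, not Next.js code):

```typescript
// Minimal stale-while-revalidate sketch: serve the cached value immediately
// and refresh it in the background so the next read sees fresh data.
type Fetcher<T> = () => Promise<T>;

class SwrCache<T> {
  private store = new Map<string, T>();

  async get(key: string, fetcher: Fetcher<T>): Promise<T> {
    const cached = this.store.get(key);
    if (cached !== undefined) {
      // Serve stale data now; revalidate in the background.
      void fetcher()
        .then((fresh) => this.store.set(key, fresh))
        .catch(() => {});
      return cached;
    }
    // Cache miss: fetch, store, and return fresh data.
    const fresh = await fetcher();
    this.store.set(key, fresh);
    return fresh;
  }
}
```

Applied to navigation, this would mean the router shows the cached page instantly but the data is already being refreshed, instead of staying stale for a fixed 30 seconds.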

The only thing needed is to give the developer all the tools to choose the UX they want, whether it be worse or better; it is not the framework’s place to make these decisions. I think the spirit of Server Components is to give the developer control over how much workload to execute on the server versus the client, and this one decision to block the heuristics seems to go totally against that.

Please listen to us when more than 50 people are telling you this is just not the ideal UX and is just confusing. There is no need to over-cache, especially when Next.js gives us primitives like the data cache, which makes fetch requests really cheap to run.

Although we would like feedback on if this 30 second limit is actually too long or too short in practice. If you have a need for this to be 0 or 5 second, please elaborate why so that we can understand this UX because I haven’t yet seen a UX where this is better as part of the navigation. I’m sure this might be wrong but to ensure that we solve it the right way, we need to understand the best UX here so an example would be great. Keep in mind that the back/forward button also keeps the stale data around for even longer for this same reason - just like built-in browser behavior.

Hey,

I’m really struggling to understand why you would need elaboration on why we need a non-cached page. Some of us are building highly dynamic frontends for social networks and forums. Are you suggesting Next.js is not the right fit for this kind of application?

Thanks

For dynamic pages without revalidate, the client-side cache should be opt-in and default to 0, since that is the behavior on the server.

This would be 100% the behavior I would expect as a developer.

I concur that there needs to be much more control over caching here. Using RSCs without Server Actions is a DX nightmare. There should at least be a way to easily revalidate routes without having to use Server Actions. There really needs to be an opt-out option or a better way to do this. Even then, having to manually invalidate multiple routes when data changes is a very poor experience.

RTFM is a poor response to this issue that warrants more discussion.

Hey everyone, thanks for your patience, took me quite a bit of time to write down all these behaviors but I think it will be helpful to start with a shared understanding of what the current behaviors are and how you can affect them using mutations.

Here is the new discussion, which includes a thorough explanation of mutations and other parts that relate to mutations, including how partial rendering works under the hood and expectations around back/forward navigations as well as how to achieve useSWR / react-query like behavior. At the end of the post are a couple of questions around the desired behavior from this issue around reducing the 30 seconds to zero.

I kindly request that you first read the entire post before posting a comment, this is to avoid misconceptions about how the router functions in particular cases, which should help with writing a constructive reply on the discussion:

The discussion

This behavior and the inability to truly control it is extremely surprising. The fact that this issue has been open since November and it’s not fully resolved is worrisome. With all the exciting and incredible things Vercel is doing with React and Next.js, surely this can’t be the problem they can’t solve?

@hubertlepicki I think your solution is just better 🤩, and I will use it for now until this issue is fixed. To add to it, I modified the code into a more complete TypeScript version:

'use client';

import { useRouter } from 'next/navigation';
import { forwardRef } from 'react';

const DynamicLink = forwardRef<
  HTMLAnchorElement,
  Omit<React.HTMLProps<HTMLAnchorElement>, 'ref'>
>(({ href, children, ...props }, ref) => {
  const router = useRouter();

  return (
    <a
      {...props}
      ref={ref}
      href={href}
      onClick={(e) => {
        e.preventDefault();
        // href is optional in HTMLProps, so guard before pushing
        if (href) router.push(href);
      }}
    >
      {children}
    </a>
  );
});

export default DynamicLink;

True, adding a new option might add some confusion in the sense that it’s more cognitive overhead (more to learn from the docs and think about for every page), and I think we already have enough of that with the app/ dir, so I definitely want to keep it as simple as possible. I have given it some more thought…

So the Conditions for Soft Navigation (new in the app/ dir) are causing our problem, inaccurately trying to guess whether hard or soft navigation is appropriate:

On navigation, Next.js will use soft navigation if the route you are navigating to has been prefetched, and either doesn’t include dynamic segments or has the same dynamic parameters as the current route.

For example, …

Why do these conditions even involve whether the page has dynamic segments or not? Does it make sense?

  • (our problem) For pages without dynamic segments, we may or may not want to skip frontend cache, since these pages might also have dynamic data which we want/need to be fresh.
  • (also) For pages with dynamic segments, we may or may not want to have frontend cache, since these pages might also have data which we don’t need to be so fresh, and we can take advantage of client side cache to reduce requests to server.

The ideal solution IMO:

  • soft vs hard navigation applied to pages regardless of dynamic segments or their params
  • static pages always use soft navigation (already the case, it looks like, judging by the docs)
  • dynamic pages default to hard navigation & can opt in to using soft navigation (i.e. client side cache) via export const navigation = 'soft' or some option like that

What do you think? There would be a new option, but you wouldn’t have to use it or worry about it unless you wanted to optimize performance and reduce requests to the server for a dynamic route, using a feature that’s new in the app/ dir.
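Under this proposal, opting a dynamic page back into the client-side cache might look like the following (export const navigation is the hypothetical option suggested above, not an existing Next.js API; the route and URL are also made up):

```typescript
// app/products/[id]/page.tsx (hypothetical)
// 'navigation' is the proposed option from this comment, NOT a real Next.js export.
export const navigation = 'soft'; // opt this dynamic page in to the client-side cache

export default async function ProductPage({ params }: { params: { id: string } }) {
  const res = await fetch(`https://example.com/api/products/${params.id}`);
  const product: { name: string } = await res.json();
  return <h1>{product.name}</h1>;
}
```

Pages without the export would default to hard navigation, so nothing would silently serve stale data.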

I don’t know if the Next.js development team realizes the trouble this issue has caused for users. I believe that almost everyone who uses Next.js seriously will encounter problems with caching and routing. The Next.js team’s response to this issue surprised me a bit. Next.js has always been my favorite front-end framework, but now I’m wavering.

I think the bottom line is that if a page.tsx says export const dynamic = 'force-dynamic'; then it must be honored by the client-side router and not cached. Now, if it can cache all the non-dynamic container layouts and only refresh the page.tsx vdom, that’s ideal. But as a developer I shouldn’t have to force-refresh something I specifically said was dynamic every time.

other cached pages will not be invalidated

router.refresh() on canary invalidates the entire router cache.

For now using router.refresh() is fine as @Fredkiss3 said.
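A common shape for that workaround is a small client-side hook that runs a mutation and then calls router.refresh() (useRefreshAfter is a hypothetical helper name; router.refresh() itself is the real next/navigation API):

```typescript
'use client';

import { useRouter } from 'next/navigation';

// Run a mutation, then re-fetch the current route's server components so the
// UI reflects the change instead of the stale client-side cache.
export function useRefreshAfter() {
  const router = useRouter();
  return async (mutate: () => Promise<void>) => {
    await mutate();
    router.refresh();
  };
}

// Usage sketch (the /api/items endpoint is hypothetical):
// const refreshAfter = useRefreshAfter();
// await refreshAfter(async () => {
//   await fetch('/api/items', { method: 'POST', body: JSON.stringify(newItem) });
// });
```

The downside, as noted above, is the double rendering: the page renders from cache first, then again after the refresh.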

As a more elaborate workaround that avoids creating weird patterns in the code, I just patched Next.js to disable soft navigation completely:

https://github.com/upleveled/next-13-app-dir-demo-project/blob/main/patches/next%2B13.1.4.patch

diff --git a/node_modules/next/dist/client/components/reducer.js b/node_modules/next/dist/client/components/reducer.js
index 951f016..947ce1f 100644
--- a/node_modules/next/dist/client/components/reducer.js
+++ b/node_modules/next/dist/client/components/reducer.js
@@ -317,6 +317,9 @@ function fillLazyItemsTillLeafWithHead(newCache, existingCache, routerState, hea
     return tree;
 }
 function shouldHardNavigate(flightSegmentPath, flightRouterState, treePatch) {
+    // disable soft navigation to solve issues with server side dynamic segments
+    // https://github.com/vercel/next.js/issues/42991
+    return true;
     const [segment, parallelRoutes] = flightRouterState;
     // TODO-APP: Check if `as` can be replaced.
     const [currentSegment, parallelRouteKey] = flightSegmentPath;

The steps to use it are:

  1. install patch-package
  2. add the script "postinstall": "patch-package"
  3. Update /node_modules/next/dist/client/components/reducer.js as shown in the patch link
  4. run yarn patch-package next

After this, your app should work with hard navigation only.
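For steps 1 and 2, the relevant package.json entries might look like this (a sketch; the patch-package version shown is an assumption, use whatever is current for your project):

```json
{
  "scripts": {
    "postinstall": "patch-package"
  },
  "devDependencies": {
    "patch-package": "^6.5.1"
  }
}
```

The postinstall hook re-applies the patch automatically after every install, so teammates and CI get the patched reducer without manual steps.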

@Fredkiss3

I was hoping that it would be different if there were enough people wanting this change, and there are: not only people participating in this issue, but also people outside on Twitter & YouTube.

And in this issue, the core team has been responding and seems to be listening at least, and there is also the fact that this issue is prioritized (per the label linear: next).

I know, and me too, honestly. But they are answering all the related questions with “Show me your problem and I will show you how to do it (with a hacky trick)” or “Docs are updated”. I still can’t understand how the team considers a 30s cache normal.

Cheers!

Client-side cache invalidation is not sufficient! If user A edits a record, and user B navigates back to it but it’s in their cache, B won’t see A’s updates. The client-side cache must respect the server’s instructions about caching the page. If the server says force-dynamic, then client side should never cache the page. Devs will put this on pages that they know may change and must always reflect current values. If this isn’t the intent of force-dynamic, then something new needs to be introduced to allow the client to respect the server’s wishes.

@sebmarkbage Another example is the Vercel dashboard (or any kind of dynamic dashboard); how would you implement it with the Next app router caching mechanism now?

Can confirm this. I’m creating a shopping app, and when I updated an item from the dashboard, that item was not updated on the /dashboard, /list, or /items/[id] pages. That is quite confusing.

It would be nice if there were an option for caching behavior, like opting in when needed.

if anything, routes should opt-in to soft navigation, not default to it.

I’m strongly in favor of making soft navigation opt-in.

  • Perf optimizations that have drawbacks/caveats in area of data validity should be opt-in.
  • 30 seconds looks like an arbitrary threshold. 0 seconds on the other hand is absolute and not arbitrary.

This issue is starting to get more attention. 27k people have watched the video linked below that is addressing this issue. https://www.youtube.com/watch?v=25yjSzl6PsQ

The discussion in this issue is too long to understand it all, but it seems to be a mixture of several issues.

  1. A discussion of a bug that prevents fetch from occurring when the developer expects it to
  2. A methodology to avoid the bug
  3. Performance principles that Next.js should follow while avoiding the bug

@sebmarkbage Even without examples of bad UX in the production environment, this issue would not have been discussed this much if this were not a bug for many people. To me, the most important thing is to get the bug fixed, and I recognize that performance is the next priority. Can’t we first clarify when the “when you should fetch” is for the developer, address it (or explain it in the documentation), and then move on to performance optimization?

Hello everyone, it seems the issue has been prioritised, which means it is acknowledged and considered an issue to be worked on, as you can see in the label added:

(screenshot of the issue’s labels)

So long as this label is present, the issue will not be closed even if there is no new activity for a month. So I would like to ask you all to be patient.

Thanks

@sebmarkbage

I think this is the main source of confusion, because if you’re testing caching behavior, you’re probably doing this very quickly, whereas a real user wouldn’t hit this as an issue. It’s the programmer’s expectation that this should work instantly that causes the confusion. Although we would like feedback on whether this 30-second limit is actually too long or too short in practice.

Thank you for explaining the reasoning. From my perspective, the current behaviour is super unintuitive for a developer; it makes no sense to me.

I understand your logic of wanting to preserve instant loads for tabbed behaviour by default but give us the option to opt out as developers.

The fact that the server side and client side caches behave differently and aren’t properly documented as such is painful. They should behave (and be controlled) in the same manner or be clearly delineated in their documentation and controls.

Two examples where I’m hitting this pain point:

  1. A dashboard has a tab that allows a user to mutate a data point. When they click to other tabs in the dashboard which reference the data point (pretty much instantly after making the mutation in many cases) they expect to see the result, but instead get stale data.
  2. Clicking into a task on a todo list, making a change (e.g. adding a child task or changing the todo’s status) and then navigating back to the task list overview which includes summaries of the underlying tasks and being presented with stale data.

I disagree that the user expects to wait 30 seconds in these instances before navigating again. Typically they want to make a change and move on with the latest data when the app is action-oriented. They do not expect to have to manually reload the page after making a change. If that were the case, I just wouldn’t use Next.

In both cases, I’m also getting around it using router.refresh() but it feels clunky and high overhead. It also means I can’t use server components for the links.

What I personally would like is one of the following:

  1. Reuse the revalidate export for the client cache as well as the server cache as @Fredkiss3 suggests
  2. Have revalidatePath allow you to invalidate the client cache for a path you aren’t currently on, as well as your current path
  3. Allow an option in the link tag that allows you to specify that you want zero server/client side caching for a particular link.

This is better, but TBH I wish I could set 30 seconds -> 0 seconds. With the update, we still sometimes get soft navigation for dynamic routes.

yeah this seems to break the basic ux of the web. even if I disable prefetch, go to that page, visit another, and then come back in < 30 seconds, I’m looking at stale data with no way (for the user) to refresh it besides reloading the entire browser. if I click on a URL of a page I’m already on – I see stale data. both of those situations should hard refresh my data just by loading a URL in my browser. if it was a server side app, it would.

prefetching isn’t worth it if you just end up with stale data in the end. if anything, routes should opt-in to soft navigation, not default to it.

Yes! If the user quickly flips around looking for their updated data, they are just extending the time before they actually see it!

I just wanted to point out another flaw (IMO): even the 30s is not really correct, based on my tests.

For example if you keep interacting with a dashboard every 20 seconds, the data would remain stale for as long as you interact. You really need to wait 30s without hitting the route for it to hit the server.

In other words, it’s acting as if it were debounced when it should be throttled.
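To make that distinction concrete, here is a small TypeScript sketch (illustrative only; the names and cache shape are invented, not Next.js internals) of the two expiry models:

```typescript
// Two ways a 30-second client cache entry could expire.
type Entry = { fetchedAt: number; lastAccessedAt: number };

const TTL_MS = 30_000;

// "Debounced": the stale window is measured from the last access,
// so every visit pushes the expiry further out.
function isFreshDebounced(e: Entry, now: number): boolean {
  return now - e.lastAccessedAt < TTL_MS;
}

// "Throttled": the stale window is measured from when the data was
// fetched, so the server is hit at most once per TTL regardless of visits.
function isFreshThrottled(e: Entry, now: number): boolean {
  return now - e.fetchedAt < TTL_MS;
}

// A user re-visiting every 20 seconds never gets fresh data under the
// debounced model, but does under the throttled one.
const entry: Entry = { fetchedAt: 0, lastAccessedAt: 40_000 };
console.log(isFreshDebounced(entry, 50_000)); // true  -> stale data served again
console.log(isFreshThrottled(entry, 50_000)); // false -> server would be recalled
```

Under the debounced model, the dashboard described above stays stale for as long as the user keeps interacting, which matches the behavior reported in this comment.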

But I’m definitely vouching for no browser cache by default, or at least an opt-out option. Otherwise you’re basically saying Next can’t be used for admin dashboards.

Separate Server and Client Cache

From my perspective, the Next.js server-side caching and client-side caching should be configured separately. I feel trying to share the same configuration is more likely to lead to unexpected scenarios for the developer, and having the granularity would ensure flexibility for all scenarios.

This approach also aligns with what most web developers are used to with the server providing its response (whether cached or not) and the browser having a separate local cache. And just as a server can send cache-control headers to the client, we should be able to send the Next.js equivalent from the server layouts/pages.

For example, if your website is displaying custom data for each user in your server-side responses, you’re probably not going to be using server-side caching. However, you might be quite comfortable allowing each user to have a cached version of their individual page on the client-side for an extended period.

Client Cache Variables

I would love to see something like a clientCache variable that can be set (1) globally, and (2) in layouts and pages, with these types of options:

  • none = no client caching for segment
  • 0-999s = use the cache with expiry in seconds (from server response timestamp, not client-side last page access timestamp)
  • session = for the full duration the browser session is open

It would also be great to have an additional option like cacheRefresh that controls how the website is updated when a cache expires:

  • foreground = the current page loading behaviour that waits for the fresh server response showing any loading.js etc. until the page renders
  • background = a hybrid approach where the stale cache is rendered immediately for an instant response, and in the background does a type of router.refresh([segment path]) to refresh the current segment/page to reflect any updates in the fresh cache

We do a background refresh like this in our current website on plain old React, and find it works really nicely for pages that may update regularly but not dramatically so no jumpy pages or CLS issues.

router.refresh([segment path])

Also, just to add to @Fredkiss3’s suggestion above (https://github.com/vercel/next.js/issues/42991#issuecomment-1623245028): a path argument in router.refresh() would be a great addition to avoid unnecessarily reloading higher-level segments. Particularly in the current absence of built-in client caching configuration, it would allow developers to implement custom client caching solutions without the heavy cost of a full router.refresh() every time.

Thank you for your insights; however, this is not ideal, for multiple reasons:

  1. This is not easy to grok for beginners or people new to the App Router; the assumption is that the revalidate export has some kind of effect on the client-side cache, and for users who don’t want their data to be stale this can be quite annoying. It breaks the general assumption.
  2. Full reloads are bad for UX; if we wanted that, we would use regular anchor tags and not get the benefit of prefetching.
  3. I have a case in our company where we have really short sessions (one hour) for security reasons. If there is a delay at the end of the session, the user will be able to see stale data and still be able to do some actions where it shouldn’t be the case.
  4. router.refresh() is not ideal, as it wipes all the client-side cache, and on navigation it cannot be implemented correctly without doing a double fetch.
  5. I have a side project (a todo app) where I wanted to introduce filters with links, and I wanted the UI to show the correct filtered data (finished/unfinished items) when the user clicks on the corresponding link. But since Next caches the data and doesn’t issue a prefetch before 30 seconds, the user would toggle an item and see the UI update for the current route, but not for the other routes. I had to patch Next with the snippet I shared in an earlier comment to force it to do what I want. If you’re interested, the project is here: https://GitHub.com/fredkiss3/todo-app-beautiful-ux
  6. Someone wanted to use Next as a frontend for Sourcegraph. He wanted the code search to always hit the server to get the latest data, since it searches on GitHub and some data may have been pushed in the span of a search. Because of Next’s aggressive caching, he gave up and finally used SvelteKit. You can see in his repo that he made multiple experiments, each with a post-mortem of the result: https://github.com/isker/neogrok/issues/2

Sorry if this comment is quite long, but what we can see here is that this caching is not ideal. I understand your reasoning about it, but it breaks expectations. I am not the only one; this thread is just an example, but you can see my point: https://twitter.com/tkdodo/status/1660928385554456576?s=46

What I suggest:

  • Reuse the revalidate export to set both the client side and server side cache time, this would align with everyone’s expectations, and be really intuitive (which is the ideal way IMHO). For dynamic pages without revalidate, the client side cache should be opt-in, and set to 0 by default, since it is the same on the server.
  • Or add a way to configure the client side cache, which is not intuitive but you can explain the reasoning in the docs

React Query has the same default cache time for queries (30s) but allows for revalidation in the background; you could do the same if the user needs to switch back and forth faster than the time it takes for the network request for the new page to finish. React Query also allows the stale time to be configured to be longer or shorter. With that, they give control to the users of the library, and I think that’s what makes it so much more powerful.

I think soft navigation and prefetching are the default because they help make the UI feel fast and make navigations instant (if prefetching is successful), that’s a good thing in my opinion.

The only drawback here is the client side cache semantics with the 30 secs timer.

I think the best UX would be to allow developers to manually override that value (either to a higher or lower value), so that they can choose which level of stale content they can accept, for highly dynamic pages you could go with 0 as the value and for pages that don’t need highly dynamic values, a higher value would be best suited for them. This is kinda like how react-query works today.

For sure, conceptually there are two caches, but to the developer who doesn’t understand Next.js RSC that deeply, it does not seem like it. I’ve read the docs more than 3 or 4 times, yet I didn’t see any mention of the two caches as separate; there are dynamic rendering, dynamic segments, and dynamic functions, which are very confusing. And for the longest time I thought they were all equivalent or similar.

As for adding a new option, this would confuse people more, I think. Maybe it is the best for performance, but DX-wise it is not ideal. What’s more, it would be difficult to wrap our heads around how it would play with the rest of the options (dynamic rendering, dynamic segments & functions).

But IMO, when Next.js says in its docs that export const dynamic="force-dynamic" behaves like gSSP in the pages directory, I expect it to behave like it and re-render on page navigation; if not, then the docs should spell out that caveat and give us a way to make it behave exactly like gSSP.

Thanks for getting involved, @timneutkens! This surely made a lot of people very happy. Can you share the link to the new issue here when it’s ready?

@Apestein

I’m pretty sure that is not the case: the pricing specified on that page is for data cache revalidations (revalidateTag); using revalidatePath does not cost you anything.

Please don’t assume bad faith out of people who work day and night to provide a great framework used by many people.

Let’s be courteous here.

I dont even see one of nextjs commenting on this

@sebmarkbage has commented on this issue multiple times, so have I.

Is the next team working on this issue?

Sebastian is evaluating the feedback after the latest post but he’s out of office this week and next week. Based on that round of feedback we also found that a significant amount of cases mentioned need more specific documentation and examples instead of changing behavior, so we’ll be working on those as well.

My bad, you are right. Hopefully the Next.js team will address this and give us direction on how to achieve SPA-like behavior where it renders each time we navigate, until we need to cache some routes.

Hey everyone, why do you think this is going to be fixed? They already state that this is the expected behavior and updated the docs. Also, the core team’s responses are very clear that this is not a bug, and that we are doing it wrong by expecting the opposite. This seems so broken; you can’t build an app with the app directory!

I’m pretty sure that is not the case: the pricing specified on that page is for data cache revalidations (revalidateTag); using revalidatePath does not cost you anything.

@Fredkiss3 I’m late for the discussion, and I don’t know if anyone has brought this up yet, but revalidatePath is just a revalidateTag wrapper

https://github.com/vercel/next.js/blob/e127c51327ee9191098fb7b73c681db934505dcc/packages/next/src/server/web/spec-extension/revalidate-path.ts#L1-L5

Whatever pricing applied to revalidateTag is applied to revalidatePath as well.

For further evidence, I don’t use any revalidateTag in my apps, but revalidatePath in my one-user apps (I’m the only user) has got me to 121 revalidations in the last 30 days. It is a force-dynamic page.

The apps are for my personal use only, and I have a Pro account, so I’m still quite safe. But imagine what would happen if I weren’t financially stable enough to pay for a Pro subscription, or what would happen if my apps had, say, 10 users. 10 is considered disappointing for apps, but it’s already enough to surpass the revalidation limit for paid Pro accounts. Once again, no revalidateTag calls here; everything is revalidatePath, but either way it’s just revalidateTag behind the scenes.

Vercel seriously needs to fix this. Either on Next.js side, or on the pricing side, or preferably both. I don’t think the cache is usable for even hobby/personal applications if the current pricing is kept as-is (just see my app above: one user already at 121 revalidations > the free limit of 100).

To try to justify the 100-revalidation limit, I’ve been telling myself that the cache should be used for static data that doesn’t change often; like your own blog app that you revalidate whenever you make changes to the post – you shouldn’t need more than 100 changes per month. Dynamic data, like data related to users, should remain dynamic and shouldn’t be cached. But the lack of ways to bypass the 30-second invalidation period here, without using revalidate* functions, just literally throws that idea out of the window.

There are a number of work-around hacks that might be sufficient, depending on your site and your data.

But they are all hacks.

Vercel seems to be forcing caching with no way to opt-out to push people to pay

Vercel seems to be very smart and savvy, and this would be a very stupid way to make a small amount of money. I assume they have reasons for doing it the way it is now, but I also think they got it wrong and the voice of the user community will cause them to reconsider, if they aren’t already.

Hey folks! We have two new docs pages going very in-depth on fetching, caching, and revalidating:

Please read through these, as I suspect it will answer a lot of questions raised here 😄

We want the Router Cache to be opt-in; we don’t want it as the default behavior. Remove the 30s caching: 5 seconds is a better default for a quick back button, but 30s seems too long. Or just give us control to opt in and choose how many seconds to use.

@LuisMSoares @beykansen Please read https://github.com/vercel/next.js/issues/42991#issuecomment-1637665305

Sebastian is evaluating the feedback after the latest post but he’s out of office this week and next week. Based on that round of feedback we also found that a significant amount of cases mentioned need more specific documentation and examples instead of changing behavior, so we’ll be working on those as well.

Things are in the work 🙃

Currently having this same issue.

In my case I have two route groups

  • (auth)
  • (protected)

Each route group with its own layout (server components)

On the (auth) route group layout, a fetch is made for the user’s data, and if the user’s data is successfully returned, we route the user out of the current (auth) page to the (protected) route group - /dashboard, and vice versa for the (protected) route group.

Now when I use the browser’s back and forward navigation, or that of useRouter, I’m still able to access the route that shouldn’t be accessible; only a full page reload solves this.

I think the right caching heuristics depend on the kind of app you are creating. If you are creating a mostly static app (blog, news app, documentation, etc.), you want your default to be static; I think you can even tolerate some stale data with these kinds of apps. But if you are creating a mostly dynamic app (a dashboard like Vercel’s, or a forum open to the public like GitHub), most of the time you want to always have fresh data, especially if it involves authentication.

For dynamic apps, I would suggest caching work the way @QzCurious described in his comment https://github.com/vercel/next.js/issues/42991#issuecomment-1622172049 . When you create dashboard-like apps in SPA land (with Vite or CRA), you usually manage the stale time of your data with react-query or swr (for example), and you usually refresh only the data needed when doing mutations. It would be better for Next to work the same way, and give the user total control over the revalidate time, which should be respected by the client-side router.

If you think router.refresh() is too radical, you could change the API to accept a path to refresh, like router.refresh("/<path>"), so that it only refreshes the pages and layouts in that sub-route on the client.

I’m having the same issue here. I have a detail page where you can take an action that changes the state of an item, then return to that detail page and see the old status instead of the new one.

This is my production case, currently implemented with “pages”, on https://orange.pl, which is built on Next.js. We have a selling process (in fact multiple processes) which are orchestrated by a backend-driven state machine. As a frontend application, we never know where the user can go, which steps he or she can visit multiple times, or whether the state on a given step will be the same. Given that, we have cases where a user can navigate a few steps in a few seconds. There is a hard requirement to always present up-to-date data.

I believe that this case might be quite common in e-commerce area.

@sebmarkbage

There’s one design principle that pulls against this though which is the ability for future improvements. Which either means that it’s harder to upgrade in the future, or that you add some constraints earlier around the principles of the design.

It is known that cache invalidation is one of the harder problems to deal with in computer science, and the Next team introduced a cache in the App Router that can be hard to understand. In my opinion, this is more difficult to evolve today than having no cache at all would have been. The absence of a cache would have naturally led to a different set of considerations, perhaps focusing on how users can achieve the same functionality as getSSP from pages with the added benefits of caching and prefetching. Going backwards (starting with the cache and tweaking it) can be much more difficult, as you have to rearchitect your cache to allow for a more granular approach.

@jeengbe you can subscribe to the status changes on this issue using the “Customize” link in the notifications area to only subscribe to “issue closed” or “issue reopened” events. (the issue will be closed once there is a solution)

(screenshot of the notification “Customize” options)

Are you guys also seeing an issue where router.refresh on the current page will not invalidate the router cache and subsequent soft navigations still show cached pages? Is that related to this? In my mind the better fix for this issue is making router.refresh reliably invalidate the router cache… when someone logs out or mutates data that should update on other pages, I should just be able to router.refresh and totally clear out the router cache. Then next can prefetch pages again, etc to make nav fast. If I understand, that’s what the docs recommend and that aligns with my intuition about how a mutation should be handled.

The current behavior I observe is that router.refresh only invalidates the current page segments. So if I mutate data / log out, the soft navigations have stale data (e.g. I visit /profile to see my “likes”, then go to /detail-page/xyz where I “like” something, then go back to /profile where I see my “likes”, but the list is missing the new one).

As a workaround I’ve replaced Links with DynamicLinks as described in this comment but it makes the navigation much slower than it could be with prefetching and loading states.

@fprl

Hey everyone, why do you think this is going to be fixed? They already state that this is the expected behavior and updated the docs.

I was hoping that it would be different if there were enough people wanting this change, and there are: not only people participating in this issue, but also people outside on Twitter & YouTube.

And in this issue, the core team has been responding and seems to be listening at least, and there is also the fact that this issue is prioritized (per the label linear: next).

It’s worth noting that the current invalidation method (router.refresh()) breaks existing features like Parallel Routes and Route Interception.

This issue is not getting enough attention: https://github.com/vercel/next.js/issues/51714

I think I’ve managed to solve the problem. Use this component instead of Link.

//link-button.tsx
"use client" 
import { useRouter } from "next/navigation"

export function LinkButton({
  children,
  href,
}: {
  children: React.ReactNode
  href: string
}) {
  const router = useRouter()
  return (
    <button
      onClick={() => {
        router.push(href)
        router.refresh()
      }}
    >
      {children}
    </button>
  )
}

This will not cause the flicker unlike calling router.refresh() from useEffect.

a path argument in router.refresh() would be a great addition to avoid unnecessarily reloading higher level segments

Absolutely. When I have a component with a server-side search that adds a query param, I know that only the results of that component need to be refreshed. The whole page, all the way back to the root layout, doesn’t need to re-run, especially if there are RSC API calls in those layouts that will re-run unnecessarily.
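A path-scoped refresh could conceptually invalidate only the entries at or below one segment. A hypothetical sketch (router.refresh() accepts no such argument today; the flat Map keyed by route path is invented purely for illustration):

```typescript
// Hypothetical: drop only the cached entries at or below a segment path,
// leaving sibling routes' cache intact.
function invalidateSubtree(cache: Map<string, unknown>, path: string): void {
  for (const key of [...cache.keys()]) {
    if (key === path || key.startsWith(path + "/")) {
      cache.delete(key);
    }
  }
}
```

With something like this, refreshing a search-results segment would not force the root layout’s RSC calls to re-run.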

server-side caching and client-side caching should be configured separately

I think the use cases for this are minimal, but real. I can imagine a long per-user client-side cache, while the server is never cached. But by default, without that, I think the client-side cache should never extend beyond the server-side cache. Server should always be able to inform client about how long it can cache something.

Imagine a browser that ignored http cache headers and kept everything for 5 minutes. It would absolutely be considered a bug!

I’m just finding this bug, and it seems serious and a source of much confusion to new users! RSC pages marked as dynamic, where the cache-control header says DO NOT CACHE, should not be cached by the client-side router. Demo: https://demystifying-rsc.vercel.app/test/math-random/1/ Source: https://github.com/matt-kruse/demystifying-rsc/blob/main/app/test/math-random/1/page.js

Sorry @jeengbe 😅 but you still can unsubscribe and keep coming back to this issue regularly.

I can’t do anything about it, but I worry that if this issue gets changed into a discussion it might be lost in the sea of other discussions.

@sebmarkbage

Although we would like feedback on whether this 30-second limit is actually too long or too short in practice.

IMO too long:

  1. The user made a mistake and wants to immediately go back.
  2. A shopping process where you’re on page A (e.g. a product page), click a link to select something on page B, and automatically get redirected back to page A.
  3. Tabs where each tab contains trading data represented on a chart that is generated server-side.
  4. The “View Tweet” analytics button on Twitter.
  5. A telco process where you have a multistep flow, and on each page you can go to an “upsell” page that is 100% controlled by a backend process, so you must show up-to-date data.

Keep in mind that the back/forward button also keeps the stale data around for even longer for this same reason - just like built-in browser behavior.

I am not convinced that the back button is holding any data. In fact, on my page I can see that every time I click the back/forward button, gSSP is executed. Example: on orange.pl, open the top menu “Oferta” -> click “nowy numer”, then go back/forward.

We’re working on better docs for this but let me try to clear up about how this is intended to work.

There is a server-side cache of fetch() which can be controlled with the revalidate option either on the fetch itself or per layout/route. It doesn’t directly control the client-side cache but the main thing that matters is whether a route is fully static - doesn’t use any cookies() or headers() and all revalidate options are higher than 0.

In previous versions, before stable, the use of dynamic params mattered, but that doesn’t matter at all in the latest Next.js, so everything at the beginning of this thread is irrelevant; you can no longer change the behavior by using dynamic params or not.

The main thing that controls the client-side cache is the prefetch option on Link. You can specify this as either true or false explicitly. It controls whether or not you accept a stale result.

  • prefetch={true}: This means that it’s cached. The target URL is allowed to be prefetched and show a stale result. When navigating away from it and back again, it’ll reuse the same stale result. If you’re away from the page for a long time, it can prefetch again to get fresher data, but you can always navigate to it instantly, at the cost of seeing stale data.
  • prefetch={false}: This means that it’s not cached (mostly). However, many users and sites end up with situations like tab bars where users switch between one page to another page - for example to compare to values. Blocking on reloading these would cause the user to have to wait for the load when switching between pages like this - causing them to lose their place. Therefore we keep these for a brief period (30 seconds) even when uncached to allow for this UX pattern. Therefore relying on explicitly switching tabs or links to force reloads isn’t a good UX pattern. It’s not an expectation. We’ve found that user expectation is to reload the page or use a pull-to-refresh or that the data is always live using another mechanism such as polling. Therefore, there’s no opt-out of this 30 second fast-switch mechanism.

I think this is the main source of confusion, because if you’re testing caching behavior, you’re probably doing this very quickly, whereas a real user wouldn’t hit this as an issue. It’s the programmer’s expectation that this should work instantly that causes the confusion. Although we would like feedback on whether this 30-second limit is actually too long or too short in practice. If you have a need for this to be 0 or 5 seconds, please elaborate why, so that we can understand this UX, because I haven’t yet seen a UX where this is better as part of the navigation. I’m sure this might be wrong, but to ensure that we solve it the right way, we need to understand the best UX here, so an example would be great. Keep in mind that the back/forward button also keeps the stale data around for even longer for this same reason - just like built-in browser behavior.

If you don’t specify a prefetch option, we’ll infer a default by the page type. Since static pages are inherently somewhat stale and also cheap to load, we default to prefetch={true} for static page to default to best perf. For dynamic pages, we default to prefetch={false} to default to best cost savings.

There are two additional manual ways to control the cache:

  • router.refresh(): If you want to manually refresh the cache when visiting a page, you can call router.refresh() in a useEffect. This gives you stale data immediately for quick navigation and then refreshes it while the user is looking at it. You can also use this in a timer to periodically refresh in the background. This ensures that you see fresh data very shortly after navigating.
  • router.prefetch(): This lets you disable prefetching on the Links but still control exactly when you want to trigger it.
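The router.refresh() pattern described above can be sketched as a small helper. This is a minimal sketch: the helper name, the shape of the `router` argument (anything exposing refresh()), and the 30-second default are assumptions for illustration, not an official API.

```typescript
// Sketch: refresh once on mount, then periodically in the background.
// `router` is whatever next/navigation's useRouter() returns; only refresh() is used here.
function startPeriodicRefresh(
  router: { refresh: () => void },
  intervalMs = 30_000
): () => void {
  router.refresh(); // show fresh data shortly after navigation
  const id = setInterval(() => router.refresh(), intervalMs);
  return () => clearInterval(id); // cleanup function, suitable as a useEffect return value
}
```

In a client component this would typically be wired up as `useEffect(() => startPeriodicRefresh(router), [router])`.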

We’re running into this issue as well. The soft navigation behavior makes building any interactive app using the app directory tough. If a user:

  • navigates from page A to page B
  • does an action
  • navigates back to page A within 30 seconds

Page A is guaranteed to be stale. This is obviously a really common flow.

Any updates or workarounds here? We want to use the app directory but have spent nearly a full day trying to figure out how to manage mutations and caching in the app directory.

In my opinion, client side cache should be aligned with server side cache to ensure predictable behavior when navigating, no matter if it’s soft or hard navigation.

Case: revalidate time provided

The lifetime of this prefetched data in the background should be the same as the revalidation time the developer provided for the page or fetch request; after that, a fetch should be retriggered if the link hasn’t been clicked yet.

Case: no-cache

If no-cache is provided then prefetch should never occur (or happen only on hover/focus) as the developer expects the page or component to be rendered at request time.

Case: force-cache (default):

Then current behavior is fine unless invalidated by the developer.

This would also be on a per fetch call basis to ensure the granularity intended by the RSC.

Here’s a new patch-package patch for next@13.1.6, since José’s previous patch is broken now:

patches/next+13.1.6.patch

diff --git a/node_modules/next/dist/client/components/router-reducer/should-hard-navigate.js b/node_modules/next/dist/client/components/router-reducer/should-hard-navigate.js
index 150a5fd..8ccb52f 100644
--- a/node_modules/next/dist/client/components/router-reducer/should-hard-navigate.js
+++ b/node_modules/next/dist/client/components/router-reducer/should-hard-navigate.js
@@ -5,6 +5,7 @@ Object.defineProperty(exports, "__esModule", {
 exports.shouldHardNavigate = shouldHardNavigate;
 var _matchSegments = require("../match-segments");
 function shouldHardNavigate(flightSegmentPath, flightRouterState) {
+    return true; // Disable soft navigation for always-fresh data https://github.com/vercel/next.js/issues/42991#issuecomment-1413404961
     const [segment, parallelRoutes] = flightRouterState;
     // TODO-APP: Check if `as` can be replaced.
     const [currentSegment, parallelRouteKey] = flightSegmentPath;

Also in our example repo here: https://github.com/upleveled/next-js-example-winter-2023-vienna-austria/blob/d99ffd14608aa41ee082009259d657aad7e3f34a/patches/next%2B13.1.7-canary.7.patch

Since 13.1.2, navigation with URL and query params is completely broken for me in production mode (not dev). The very first SSR page /discussions/1?page=1 will load correctly, but navigating to /discussions/1?page=2 will reuse the browser cache from /discussions/1?page=1. It looks like the query params are completely ignored on further pages with the same segments but different query params. Even if I hard reload the page /discussions/1?page=2, it is still the browser cache for /discussions/1?page=1 which is displayed, and no calls to the SSR server are made.

I downgraded back to 13.1.1.

Refactoring the components to pass the data to a “use client” component that uses react-query’s initialData (so it’s SSRed and then updated on the client) is likely a lot easier and more cost-effective than the hacks above until they fix it, imo. You also save a page refresh, so it’s likely better UX.
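That workaround might look roughly like this. This is a sketch, assuming @tanstack/react-query is installed; the component name, the /api/likes endpoint, and the data shape are made up for illustration.

```typescript
"use client";
import { useQuery } from "@tanstack/react-query";

// The server component fetches once for SSR and passes the result down as
// `initialData`; react-query then refetches on the client, so navigations
// always converge to fresh data.
export function Likes({ initialData }: { initialData: { count: number } }) {
  const { data } = useQuery({
    queryKey: ["likes"],
    queryFn: () => fetch("/api/likes").then((res) => res.json()), // hypothetical endpoint
    initialData, // SSR data from the server component
  });
  return <span>{data.count}</span>;
}
```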

As a more elaborate workaround that avoids creating weird patterns in the code, I just patched Next.js to disable soft navigation completely:

https://github.com/upleveled/next-13-app-dir-demo-project/blob/main/patches/next%2B13.1.4.patch

diff --git a/node_modules/next/dist/client/components/reducer.js b/node_modules/next/dist/client/components/reducer.js
index 951f016..947ce1f 100644
--- a/node_modules/next/dist/client/components/reducer.js
+++ b/node_modules/next/dist/client/components/reducer.js
@@ -317,6 +317,9 @@ function fillLazyItemsTillLeafWithHead(newCache, existingCache, routerState, hea
     return tree;
 }
 function shouldHardNavigate(flightSegmentPath, flightRouterState, treePatch) {
+    // disable soft navigation to solve issues with server side dynamic segments
+    // https://github.com/vercel/next.js/issues/42991
+    return true;
     const [segment, parallelRoutes] = flightRouterState;
     // TODO-APP: Check if `as` can be replaced.
     const [currentSegment, parallelRouteKey] = flightSegmentPath;

The steps to use it are:

  1. install patch-package
  2. add the script "postinstall": "patch-package"
  3. Update /node_modules/next/dist/client/components/reducer.js as shown in the patch link
  4. run yarn patch-package next

After this, your app should work with hard navigation only.
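For reference, step 2 above corresponds to a package.json fragment like this (only the relevant script is shown):

```json
{
  "scripts": {
    "postinstall": "patch-package"
  }
}
```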

This patch generously suggested by @Josehower didn’t work for me with 13.4.13, so I put together a new one. It appears that patching getPrefetchEntryCacheStatus so it always returns "stale" will make every link visit fetch new data.

diff --git a/node_modules/next/dist/client/components/router-reducer/get-prefetch-cache-entry-status.js b/node_modules/next/dist/client/components/router-reducer/get-prefetch-cache-entry-status.js
index b611280..d2f4f31 100644
--- a/node_modules/next/dist/client/components/router-reducer/get-prefetch-cache-entry-status.js
+++ b/node_modules/next/dist/client/components/router-reducer/get-prefetch-cache-entry-status.js
@@ -30,6 +30,7 @@ var PrefetchCacheEntryStatus;
     PrefetchCacheEntryStatus["stale"] = "stale";
 })(PrefetchCacheEntryStatus || (PrefetchCacheEntryStatus = {}));
 function getPrefetchEntryCacheStatus(param) {
+    return "stale";
     let { kind , prefetchTime , lastUsedTime  } = param;
     // if the cache entry was prefetched or read less than 30s ago, then we want to re-use it
     if (Date.now() < (lastUsedTime != null ? lastUsedTime : prefetchTime) + THIRTY_SECONDS) {

Same steps as the original comment but the file to edit is dist/client/components/router-reducer/get-prefetch-cache-entry-status.js.

I have spent all of 15 minutes poking at my app since throwing this in there and it’s working so far. YMMV. Please share your experiences. I’m so surprised this is necessary. This is what I’ll be going with.

Vercel seems to be forcing caching with no way to opt-out to push people to pay. 100 revalidations/month. 0.10 USD per 1000 revalidations. https://vercel.com/docs/infrastructure/data-cache/limits-and-pricing

Reading further into the discussion of this issue, I understood that several people have the same issue I’m facing: I’m trying to use revalidateTag and revalidatePath outside of a Server Action.

In a real world example of an e-commerce using RSC:

/product/[productId]/page.tsx

import React from "react";
import { unstable_cache } from "next/cache";

type Props = {
  params: { productId: number };
};

async function getData(productId: number) {
  const productRevalidateTag = `PRODUCT-REVALIDATE-TAG:${productId}`;

  const product = await unstable_cache(
    async () => {
      const product = await fetchProductFromDb(productId); // hypothetical helper standing in for a real database query

      return {
        ...product,
      };
    },
    [productRevalidateTag],
    { tags: [productRevalidateTag] }
  )();

  return product;
}

export default async function page({ params }: Props) {
  const product = await getData(params.productId);

  return (
    <div>{product.price}</div>
  );
}

export const revalidate = 0;
export const dynamic = 'force-dynamic';

/api/product/price/route.ts

import { revalidateTag, revalidatePath } from "next/cache";
import { NextResponse } from "next/server";

export async function PATCH(request: Request) {
  // `productId` has to come from somewhere, e.g. the request body (hypothetical shape)
  const { productId } = await request.json();

  // update product price...

  revalidateTag(`PRODUCT-REVALIDATE-TAG:${productId}`);
  revalidatePath(`/product/${productId}`);
  
  return NextResponse.json({ revalidated: true });
}

revalidatePath / revalidateTag / revalidate = 0 / dynamic = 'force-dynamic' do not revalidate the page on the client side, whether you use unstable_cache or not. Four functions/options whose names say they will revalidate and make the page dynamic, and NONE of them revalidates the client side.

The biggest problem here is that if a customer is browsing the e-commerce site or using back/forward browser navigation while the price of the product is updated, the customer will not see the update.

The only way to revalidate the client side in this case is using useEffect(() => { router.refresh() }, []), but I believe that, both for the developer and for performance, this is not a good way to resolve it.

@masterbater

They removed the 30s timer?

No, but using revalidateTag, revalidatePath, cookies.set or cookies.delete inside a Server Action does immediately invalidate all of the client-side cache.

It is mentioned in the docs: https://nextjs.org/docs/app/building-your-application/caching#invalidation-1

The timer still takes effect if you navigate without doing a mutation.

We know that; we use it as a workaround, but we still notice the cached page before it updates.

The Next.js team should consider this a top priority. This is really becoming a big issue; lots of people have already covered and encountered the caching problem. The Router Cache’s 30s timer should be opt-in. The app router fundamentally has a better organization and layout. I don’t want to go back to the pages router.

It looks like server actions can do it, but they’re in alpha (and what if for whatever reason I’m not using them?).

Server actions with revalidatePath can mostly do it but it is still stale on popstate navigation (back/forward browser buttons).

Upgrade to the latest canary version of Next; the back/forward navigation showing stale data was a bug and it was fixed.

Echoing that it needs an opt out.

use case (I think it was mentioned above but in a different context).

I have a table. The table has a button to add a new item, which brings you to a new page. You add the new item and redirect back to the table’s page. Boom, the table is out of date.

A workaround is to use the server component just for initial data, then do client-side fetching (with SWR or react-query or whatever). That will always work, and is arguably a better user experience. But what if, for simplicity of implementation, I just want the simpler, fully server-side flow? I literally can’t do it.

The best you can do is a router refresh on the table which will show stale data, or worse double fetch (server and client) if the 30 seconds has elapsed. It looks like server actions can do it, but they’re in alpha (and what if for whatever reason I’m not using them?).

The 30s seems pretty arbitrary too. Like, what if I feel 15 seconds is better? I can’t change it?

Does revalidatePath only work in server actions? Can it be used in a regular route handler and have the same effect at least?

Is there any update about this?

There is a Linear ticket assigned to this issue, so it’s been acknowledged by the team and it’s surely somewhere in their pipeline, subject to other priority work items.

If the following statement is fundamentally wrong about how Next.js works, please downvote this to let me (and others) know it is irrelevant.

I would expect the navigation to cause all data up to the root layout to be updated, just like an MPA, but with only the data sent to the client.

Retrieving all data should be the worst case. It would be great if the router on the server side knew which parts should be sent, based on revalidate, and only re-ran those parts. For example:

| data to be fetched/re-run | revalidate | after 20s | after 55s | after 80s | after 110s |
| ------------------------- | ---------- | --------- | --------- | --------- | ---------- |
| root layout               | 50         | x         | o         | o         | o          |
| layout                    | 60         | x         | x         | o         | o          |
| page                      | 30         | x         | o         | o         | o          |
| RSC                       | 100        | x         | x         | x         | o          |

(for RSC, it might need to re-run for props change)
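The proposed per-segment check could be sketched as a simple elapsed-time comparison. This is an illustrative sketch of the idea, not anything the router actually implements; the function and segment names are made up.

```typescript
// A segment is re-run only when its revalidate window (in seconds) has
// elapsed since it was last fetched; fresher segments are reused.
function segmentsToRerun(
  revalidateSeconds: Record<string, number>,
  elapsedSeconds: number
): string[] {
  return Object.entries(revalidateSeconds)
    .filter(([, revalidate]) => elapsedSeconds > revalidate)
    .map(([segment]) => segment);
}
```

With the table’s values, after 55 seconds only the root layout (revalidate 50) and the page (revalidate 30) would be re-run; the layout (60) and the RSC (100) could still be reused.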

I’m assuming that the client only signals a navigation event to the server. It’s the server’s job to do the re-run. And finally, the client can somehow update the corresponding parts based on the structure of the data.

After all, I thought it would be good if navigation supported revalidatePath and revalidateTag directly to invalidate the cache immediately. So whenever I click a <Link />, it’s guaranteed that the data is fresh.

Yea, I thought that this bug had already been fixed so I was confused about so many hitting issues, but it makes perfect sense that without manual revalidation being possible that you’d reach for shorter cache timeout in lieu of that.

In all my coding interactions with libraries and frameworks I think I’m looking for two key things: well-reasoned and sensible default behaviours, and the ability to tweak that behaviour when I have a bespoke need outside of the well-researched norm.

Agreed. This is a good base to consider. It’s not always possible to express everything using defaults, so having new concepts to express the different types of UX is important.

There’s one design principle that pulls against this though which is the ability for future improvements. Which either means that it’s harder to upgrade in the future, or that you add some constraints earlier around the principles of the design. We expect to add even more optimizations and intermediate caches in the future - both server and client. As long as the semantics allow for it, it can be a pretty much automatic optimization in future upgrades. E.g. how ISR is an automatic optimization over cached fetches. However, if there’s manual controls for each mechanism specifically it might not work out and the controls can end up mutually exclusive.

That doesn’t eliminate the option to add more control but if the control is fine grained control over the exact mechanisms that exist today then that doesn’t directly port to other caching mechanisms. My annoying prodding is about looking to understand a higher level concept or principle that we can build multiple caches on top. This is the tension against granularity but I want to find something that makes sense for the desired UX.

This is not good enough, and the workaround is inconsistent. We need to be able to reliably declare whether a page should make a fetch call every time it’s navigated to via link/soft navigation. I have a page set up and have tried exporting revalidate = 0 and fetchCache = "force-no-store", in combination with link prefetch={false}, and no matter what you do you can’t get the page to reliably fetch every time.

I would say this is breaking because if you have some app with any reasonable expectation of fresh data you lose all the benefits of a modern react app.

Tested in 13.4.3 and still no change in behavior.

I have even added revalidatePath to the mutation and that does not change anything.

EDIT: I must also confirm that downgrading to Next v13.3.1 behaves as expected, meaning when you click a next/link it will reliably make the server-side call, and the only configuration necessary is export const revalidate = 0; // segment option (fetchCache = "force-no-store" works just as well).

However, if you click forward/back on your mouse/browser instead of the next/link, the page will show the original cached data (from when you first navigated to the page). This may be appropriate for another discussion, but looking forward to when this issue is solved, it would be nice if Next supported a first-class way of managing the cache with browser history navigation.

A PR about rewriting the cache handling has been merged and will come into the next canary soon (link: https://github.com/vercel/next.js/pull/48383).

The new logic is this :

  • all navigations (prefetched/unprefetched) are cached for a maximum of 30s from the time it was last accessed or created (in this order).

  • in addition to this, the App Router will cache differently depending on the prefetch prop passed to a <Link> component:

    • prefetch={undefined}/default behaviour:

      • the router will prefetch the full page for static pages/partially for dynamic pages
      • if accessed within 30s, it will use the cache
      • after that, if accessed within 5 mins, it will re-fetch and suspend below the nearest loading.js
      • after those 5 mins, it will re-fetch the full content (with a new loading.js boundary)
    • prefetch={false}:

      • the router will not prefetch anything
      • if accessed within 30s again, it will re-use the page
      • after that, it will re-fetch fully
    • prefetch={true}

      • this will prefetch the full content of your page, dynamic or static
      • if accessed within 5 mins, it will re-use the page
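The rules above can be modeled roughly like this. The status names and the function itself are illustrative, not the router’s actual internals; `kind` mirrors the prefetch prop ("auto" = default, "none" = prefetch={false}, "full" = prefetch={true}).

```typescript
type EntryStatus = "reusable" | "refetch-below-loading" | "full-refetch";

// Rough model of the next@13.4.0 cache rules described above.
function cacheStatus(
  kind: "auto" | "none" | "full",
  msSinceLastAccess: number
): EntryStatus {
  const THIRTY_SECONDS = 30_000;
  const FIVE_MINUTES = 5 * 60_000;
  if (msSinceLastAccess < THIRTY_SECONDS) return "reusable";
  if (kind === "auto" && msSinceLastAccess < FIVE_MINUTES) return "refetch-below-loading";
  if (kind === "full" && msSinceLastAccess < FIVE_MINUTES) return "reusable";
  return "full-refetch";
}
```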

TLDR :

  • for dynamic pages, there will be a gap of at least about 30 seconds before the server is called again, with one condition: this only applies if you navigate to a different page after that time; if you do very fast back & forth to the same page, the timer resets and the server is only called after another 30 seconds.
  • They also added that, in the future, they might add another API to manually specify this timer.

from the PR :

  • we may add another API to control the cache TTL at the page level
  • a way to opt-in for prefetch on hover even with prefetch={false}

This could help some people, as 30 seconds seems like a good trade-off between server load and user experience. I don’t know exactly why this exact value was chosen; react-query has a similar default for staleTime, so I trust them on this value.

For people who want highly dynamic navigations, the best solution for now IMO is still to add prefetch={false} to the Link component: it will not cache the data between navigations and will always show you fresh values.

When the new canary comes, I will try it and post an update.

In the meantime, if someone needs a workaround to fully clear the Next.js client-side cache, you can use this one from @clerkinc : https://github.com/clerkinc/javascript/blob/712c8ea792693a335d9bf39c28e550216cb71bcb/packages/nextjs/src/client/invalidateNextRouterCache.ts

Hi all,

currently the patch updated by @karlhorky here https://github.com/vercel/next.js/issues/42991#issuecomment-1413404961 is broken since 13.1.7-canary.18.

I have updated the patch so that it only forces a refresh for next/link navigations, which is a bit less invasive: functions such as router.push() still use plain soft navigation, since there we have the router.replace() + router.refresh() trick.

the updated patch for 13.2.3 is:

patches/next+13.2.3.patch

diff --git a/node_modules/next/dist/client/components/layout-router.js b/node_modules/next/dist/client/components/layout-router.js
index 9b60a45..dd0639d 100644
--- a/node_modules/next/dist/client/components/layout-router.js
+++ b/node_modules/next/dist/client/components/layout-router.js
@@ -317,6 +317,7 @@ function HandleRedirect({ redirect  }) {
     const router = (0, _navigation).useRouter();
     (0, _react).useEffect(()=>{
         router.replace(redirect, {});
+        router.refresh()
     }, [
         redirect,
         router
diff --git a/node_modules/next/dist/client/link.js b/node_modules/next/dist/client/link.js
index d15ce7f..369e036 100644
--- a/node_modules/next/dist/client/link.js
+++ b/node_modules/next/dist/client/link.js
@@ -83,6 +83,7 @@ function linkClicked(e, router, href, as, replace, shallow, scroll, locale, isAp
     if (isAppRouter) {
         // @ts-expect-error startTransition exists.
         _react.default.startTransition(navigate);
+        router.refresh()
     } else {
         navigate();
     }


Are you guys also seeing an issue where router.refresh on the current page will not invalidate the router cache and subsequent soft navigations still show cached pages? Is that related to this?

Not sure, this sounds similar but not 100% related.

As far as I understand, this particular issue (#42991) is about next/link and router.push() selecting soft navigation for pages which have opted in to dynamic rendering using dynamic = 'force-dynamic' or revalidate = 0. This results in the page showing stale cached data, instead of running the logic in the async function component again and showing fresh data.

I described it also in the Next.js 13 app directory feedback discussion:


In my mind the better fix for this issue is making router.refresh reliably invalidate the router cache… when someone logs out or mutates data that should update on other pages, I should just be able to router.refresh and totally clear out the router cache. Then next can prefetch pages again, etc to make nav fast. If I understand, that’s what the docs recommend and that aligns with my intuition about how a mutation should be handled.

The current behavior I observe is that router.refresh only invalidates the current page’s segments. So if I mutate data / log out, the soft navigations have stale data (e.g. I visit /profile to see my “likes”, then go to /detail-page/xyz where I “like” something, then back to /profile where I see my “likes” but the new one is missing).

@psugihara so you’re proposing this is related to mutations, eg. the “The Next.js team is working on a new RFC for mutating data in Next.js” statement that appears currently in the beta docs.

I can follow what you’re saying, and it seems like maybe you’re asking for this RFC to be released, with a proper solution for cache invalidation on other pages when something changes that will affect multiple pages. This would indeed be great, and would probably provide a solution for a lot of these use cases. (not all of them, still unanswered are time-based invalidations and other cache invalidations not based on a mutation - eg. as mentioned below by @Fredkiss3)

However, until that happens, if the Next.js team could provide an interim solution to disable soft navigation automatically for dynamic pages, that would be nice.

Was experimenting with this behavior and it seems that you can force next to always do a hard navigation if you pass a querystring to the link component.

Updated the example by just adding a querystring, and it seems to work : https://stackblitz.com/edit/nextjs-smotka?file=app%2Fpage.tsx,app%2Fnested%2Fpage.tsx

This has the advantage that the page is prefetched on the first render and always refetched on every navigation.
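A minimal sketch of the hack as a pure helper; the `t` param name and `Date.now()` are just one way to make the URL unique on every render, and the helper name is made up.

```typescript
// Appending an ever-changing search param makes each navigation look like a
// new URL to the router cache, so the server is called every time.
// `now` is injectable to keep the helper deterministic for testing.
function freshHref(href: string, now: () => number = Date.now): string {
  const separator = href.includes("?") ? "&" : "?";
  return `${href}${separator}t=${now()}`;
}
```

It would then be used as `<Link href={freshHref('/nested')}>Nested</Link>`.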

One other thing I noticed is that without this hack, when the page has been prefetched, it will load on the first navigation and Next will scroll to the top of the page, but on subsequent navigations it doesn’t. When you introduce this hack, it will always scroll to the top [video example down here].

https://user-images.githubusercontent.com/38298743/211924001-51f2ec85-0647-4b76-a924-f6f28de1d90e.mov

Warning THIS IS DEFINITELY A BUG

The solution is revalidatePath; they want us to use it. The only problem is you only get 100 on the hobby plan.

I think this comment: https://github.com/vercel/next.js/issues/42991#issuecomment-1382024752 needs to be pinned or moved up into the main description, because there are some hacks to make it work if you really want the app directory.

I don’t think this makes Next.js unusable, since the app directory isn’t a requirement to use it. Just stick to the pages directory until this is sorted out.

I am also not a fan of providing a cache with a seemingly random 30s timer and no way to opt out, as it’s very unintuitive to me.

@Apestein

This will not cause the flicker unlike calling router.refresh() from useEffect.

I presume that back/forward browser navigation will not take advantage of the router.refresh() inside the button; when the data is updated after having navigated with the LinkButton, the old data still persists 😕.

Just combine it with router.refresh() in useEffect ¯\_(ツ)_/¯

If the update happens in a different session or directly in the database, this cache will keep showing stale data and there is no way to opt out. I am aware of router.refresh() or using ‘a’ instead of ‘Link’. My question is: why can’t this be an opt-out configuration?

It’s not a bug, nor a limitation.

The client-side router does not have any direct relation with Route Handlers; it cannot invalidate itself on navigation just like that.

This is explained here in the docs : https://nextjs.org/docs/app/building-your-application/caching#data-cache-and-client-side-router-cache

  • Revalidating the Data Cache in a Route Handler will not immediately invalidate the Router Cache as the Route Handler isn’t tied to a specific route. This means the Router Cache will continue to serve the previous payload until a hard refresh, or until the automatic invalidation period has elapsed.
  • To immediately invalidate the Data Cache and Router cache, you can use revalidatePath or revalidateTag in a Server Action.

Using revalidateTag/revalidatePath inside a server action IS the way to clear the client side router cache.

The use case for using them inside a Route Handler is to manually revalidate static or ISR pages (built either with export const revalidate = <number> or fetch('https://...', { next: { tags: ['tag-1'] } })); this is more like on-demand revalidation of ISR pages in the pages router.
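A minimal sketch of the Server Action pattern described above; the action name, arguments, and elided database write are illustrative, not from the thread.

```typescript
"use server";

import { revalidatePath, revalidateTag } from "next/cache";

// Revalidating from inside a Server Action clears both the server-side Data
// Cache (via the tag) and the client-side Router Cache for the path, so the
// user sees their own mutation immediately on the next navigation.
export async function updatePrice(productId: string, price: number) {
  // ...write the new price to the database...
  revalidateTag(`PRODUCT-REVALIDATE-TAG:${productId}`);
  revalidatePath(`/product/${productId}`);
}
```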

The back/forward browser navigation was fixed in the latest canary version; I’ve personally tested that.

I think there are probably a lot more people with deeper technical feedback here, which is probably what is being asked for - so I’ll leave that to them. Hope my message is not considered too spammy.

But two things from an end user perspective, coming from the perspective of teaching beginner programmers in our web development bootcamp:

  1. There is a user expectation (not only for end users of websites, but also for programmers, and magnified for beginners*) that navigating around server-side pages that fetch / query data will show fresh data. This is currently the most challenging expectation to work against. Maybe it’s correct that this user expectation should change in total over time, but my feeling is that it’s probably not there yet, and taking this on may be like boiling the sea**.
  2. When setting up other frameworks, this extra caching behavior does not happen, which leads to the blame being placed on Next.js or React

* This feeling can be magnified for beginners because they are used to their own code being the problem, so their assumption that the underlying system is not the cause can be strong.

** Caveat: overcoming this challenge may be more achievable if there are enough documented, paved-cowpath, “pit of success” patterns for invalidating stale data, like “invalidating paths and tags from actions” and “patterns for querying databases for fresh data”, and these become more of the default, so that Next.js makes it hard to run into overly-eager caching behavior (and a beginner setting up a project naïvely and doing some database queries in RSCs will not run into it by default).

By the way, super appreciate the openness to continue discussing with the community to come to a workable solution ❤️

Hmm, this issue still hasn’t been resolved even after the release of version 13.4.5. I believe this issue is crucial for many applications that require the latest data when working with “soft navigation”. Additionally, it would be beneficial to have some options such as:

// at -> /app/.../page.tsx
export const navigation = "hard" | "soft";

or having a built-in component specifically designed for server-only soft navigation, which could look like this:

import { DynamicLink } from 'next/navigation' 

export default function Page() {
  return (
    <div>
      Page
      <DynamicLink href={href}>ABC</DynamicLink>
    </div>
  );
}


As I stated above, in my opinion the browser cache should be in sync with the server cache, or we should at least get the chance to control the behavior of both; honestly, I think this generates a lot of confusion.

Current behavior goes against the RSC goal. Ideally each component revalidates its data independently with the server at the specified time.

What revalidatePath/Tag lets you do is ensure that the right thing is fresh after mutation (assuming the bug above is fixed). So you always see your own mutations - while the 30 second cache lets you see other people’s mutations relatively quickly.

I thought not being able to revalidate non current routes was intentional and didn’t realise it was a bug. Honestly, without that bug I suspect this issue would have generated far less noise.

The framing in the above quote makes a ton of sense to me. Manual cache busting for your own mutations and time delayed busting for other peoples mutations sounds like the right plan.

In all my coding interactions with libraries and frameworks I think I’m looking for two key things:

  • Well reasoned and sensible default behaviours
  • The ability to tweak that behaviour when I have a bespoke need outside of the well researched norm

In this case, I’d defer to Vercel’s research into what the optimal cache time should be for the majority of use cases. I’m certain you’d know better than me.

However, I’d ask that you respect the need for as much granularity as possible. Thinking more on my post above, I initially said I’d like one of three options, but on reflection I’d actually like them all to be possible, as they serve different use cases. And I have an ask for a fourth:

  1. Let us choose to override the cache time through whatever method you choose (I like Fred’s suggestion of being able to set this at the layout or individual route level). This serves the need of displaying “other people’s” mutations at a time interval that makes sense for each specific developer’s app. I personally don’t mind if this includes a framework-decided minimum if essential, e.g. 5 secs
  2. Allow us to bust the cache programmatically through a manual revalidation for routes other than the current one. This allows you to account for the user’s own mutations, and it sounds like this bug will be fixed, woop.
  3. Allow us to specify that when a route is accessed through a Next Link tag with a given option, the cache will be busted. This may well be used infrequently, but there are definitely cases where you know it’s essential that a given route be fresh, no ifs, buts or maybes.
  4. My extra request: I want to be able to use revalidateTag where I want to refresh a data source and not a whole page. But I want to be able to use it on requests that don’t use fetch, for example a direct db query. I don’t see why the tag-setting option can’t be built into cache() as well as the custom fetch function.

Having all of these would mean I have the right tool for every job and i’m empowered to take the correct approach for my app. Most of the time, I’ll rely on the sensible defaults, but I’m not forced to use clunky hacks or workarounds for those instances where I’m not building with the majority.

Thanks for listening!

The revalidatePath not invalidating the client cache for paths that you’re not currently on is indeed a bug.

It’s also a bug that having a lower revalidate time doesn’t set the client cache to a lower value. However, caching is still supposed to be capped at the high end even if revalidate time is higher because at some point you need to start seeing manual revalidation.

The main thing about the whole design is that you're supposed to call revalidate, which will revalidate server caches as well as the client caches. The more you can use manual revalidation, the more you can have longer and more effective caches. That's also why the revalidate time isn't a foolproof signal for longer values, since the design is that you're supposed to be able to cache things for longer as long as you can revalidate them. What revalidatePath/Tag lets you do is ensure that the right thing is fresh after mutation (assuming the bug above is fixed). So you always see your own mutations - while the 30 second cache lets you see other people's mutations relatively quickly.

If there was an option to not cache for 30 seconds, what would you expect that to do when navigating between small pages within a layout? It wouldn't refetch the layout in that case due to sub-tree navigation. Layouts are cached between page navigations. Is what you really want to always refetch the whole page on every navigation? Otherwise you can't rely on it as a design pattern anyway, since part of the page is stale, so what's the point?

I spent a little bit of time but I was able to put together a patch, and tested it with the latest stable version (next@13.4.4):

diff --git a/node_modules/next/dist/client/components/router-reducer/get-prefetch-cache-entry-status.js b/node_modules/next/dist/client/components/router-reducer/get-prefetch-cache-entry-status.js
index b611280..009dcc2 100644
--- a/node_modules/next/dist/client/components/router-reducer/get-prefetch-cache-entry-status.js
+++ b/node_modules/next/dist/client/components/router-reducer/get-prefetch-cache-entry-status.js
@@ -21,7 +21,7 @@ _export(exports, {
     }
 });
 const FIVE_MINUTES = 5 * 60 * 1000;
-const THIRTY_SECONDS = 30 * 1000;
+const ONE_SECOND = 1 * 1000;
 var PrefetchCacheEntryStatus;
 (function(PrefetchCacheEntryStatus) {
     PrefetchCacheEntryStatus["fresh"] = "fresh";
@@ -32,7 +32,8 @@ var PrefetchCacheEntryStatus;
 function getPrefetchEntryCacheStatus(param) {
     let { kind , prefetchTime , lastUsedTime  } = param;
     // if the cache entry was prefetched or read less than 30s ago, then we want to re-use it
-    if (Date.now() < (lastUsedTime != null ? lastUsedTime : prefetchTime) + THIRTY_SECONDS) {
+    // FIXME: TEMPORARY PATCH : reduced the time to only 1s
+    if (Date.now() < (lastUsedTime != null ? lastUsedTime : prefetchTime) + ONE_SECOND) {
         return lastUsedTime ? "reusable" : "fresh";
     }
     // if the cache entry was prefetched less than 5 mins ago, then we want to re-use only the loading state
diff --git a/node_modules/next/dist/esm/client/components/router-reducer/get-prefetch-cache-entry-status.js b/node_modules/next/dist/esm/client/components/router-reducer/get-prefetch-cache-entry-status.js
index 7156f75..438e828 100644
--- a/node_modules/next/dist/esm/client/components/router-reducer/get-prefetch-cache-entry-status.js
+++ b/node_modules/next/dist/esm/client/components/router-reducer/get-prefetch-cache-entry-status.js
@@ -1,5 +1,5 @@
 const FIVE_MINUTES = 5 * 60 * 1000;
-const THIRTY_SECONDS = 30 * 1000;
+const ONE_SECOND = 1 * 1000;
 export var PrefetchCacheEntryStatus;
 (function(PrefetchCacheEntryStatus) {
     PrefetchCacheEntryStatus["fresh"] = "fresh";
@@ -10,7 +10,8 @@ export var PrefetchCacheEntryStatus;
 export function getPrefetchEntryCacheStatus(param) {
     let { kind , prefetchTime , lastUsedTime  } = param;
     // if the cache entry was prefetched or read less than 30s ago, then we want to re-use it
-    if (Date.now() < (lastUsedTime != null ? lastUsedTime : prefetchTime) + THIRTY_SECONDS) {
+     // FIXME: TEMPORARY PATCH : reduced the time to only 1s
+    if (Date.now() < (lastUsedTime != null ? lastUsedTime : prefetchTime) + ONE_SECOND) {
         return lastUsedTime ? "reusable" : "fresh";
     }
     // if the cache entry was prefetched less than 5 mins ago, then we want to re-use only the loading state
diff --git a/node_modules/next/dist/client/components/router-reducer/reducers/navigate-reducer.js b/node_modules/next/dist/client/components/router-reducer/reducers/navigate-reducer.js
index bafc1c1..18bd713 100644
--- a/node_modules/next/dist/client/components/router-reducer/reducers/navigate-reducer.js
+++ b/node_modules/next/dist/client/components/router-reducer/reducers/navigate-reducer.js
@@ -203,7 +203,7 @@ function navigateReducer(state, action) {
                 return handleExternalUrl(state, mutable, href, pendingPush);
             }
             let applied = (0, _applyflightdata.applyFlightData)(currentCache, cache, flightDataPath, prefetchValues.kind === "auto" && prefetchEntryCacheStatus === _getprefetchcacheentrystatus.PrefetchCacheEntryStatus.reusable);
-            if (!applied && prefetchEntryCacheStatus === _getprefetchcacheentrystatus.PrefetchCacheEntryStatus.stale) {
+            if (prefetchEntryCacheStatus === _getprefetchcacheentrystatus.PrefetchCacheEntryStatus.stale) {
                 applied = addRefetchToLeafSegments(cache, currentCache, flightSegmentPath, treePatch, // eslint-disable-next-line no-loop-func
                 ()=>(0, _fetchserverresponse.fetchServerResponse)(url, currentTree, state.nextUrl));
             }
diff --git a/node_modules/next/dist/esm/client/components/router-reducer/reducers/navigate-reducer.js b/node_modules/next/dist/esm/client/components/router-reducer/reducers/navigate-reducer.js
index dc29fc2..1d456bc 100644
--- a/node_modules/next/dist/esm/client/components/router-reducer/reducers/navigate-reducer.js
+++ b/node_modules/next/dist/esm/client/components/router-reducer/reducers/navigate-reducer.js
@@ -181,7 +181,7 @@ export function navigateReducer(state, action) {
                 return handleExternalUrl(state, mutable, href, pendingPush);
             }
             let applied = applyFlightData(currentCache, cache, flightDataPath, prefetchValues.kind === "auto" && prefetchEntryCacheStatus === PrefetchCacheEntryStatus.reusable);
-            if (!applied && prefetchEntryCacheStatus === PrefetchCacheEntryStatus.stale) {
+            if (prefetchEntryCacheStatus === PrefetchCacheEntryStatus.stale) {
                 applied = addRefetchToLeafSegments(cache, currentCache, flightSegmentPath, treePatch, // eslint-disable-next-line no-loop-func
                 ()=>fetchServerResponse(url, currentTree, state.nextUrl));
             }

The steps to use it are:

  1. Install next@13.4.4
  2. Install patch-package
  3. Add the script "postinstall": "patch-package" to your package.json file
  4. Create a file patches/next+13.4.4.patch at the same level as your package.json and copy the content of the patch above into it
  5. Run npx patch-package

With this I decreased the cache time to 1 second, because when I used 0 seconds it caused an infinite reload with redirects.

I tested it on my end and it seemed to work, but I can't guarantee that it will work perfectly on your end, so use this at your own risk 🙏.
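For reference, the logic this patch touches can be sketched in a self-contained way. This is a simplification that ignores the prefetch kind, and the names are illustrative, not Next.js internals:

```typescript
// Simplified sketch of the prefetch cache entry status decision.
type CacheStatus = "fresh" | "reusable" | "stale" | "expired";

const THIRTY_SECONDS = 30 * 1000;
const FIVE_MINUTES = 5 * 60 * 1000;

function getStatus(
  prefetchTime: number,
  lastUsedTime: number | null,
  now: number
): CacheStatus {
  // Prefetched or read less than 30s ago: the cached payload is re-used.
  // This is the window the patch shrinks to 1 second.
  if (now < (lastUsedTime ?? prefetchTime) + THIRTY_SECONDS) {
    return lastUsedTime ? "reusable" : "fresh";
  }
  // Prefetched less than 5 minutes ago: only the loading state is re-used
  // while the data is refetched.
  if (now < prefetchTime + FIVE_MINUTES) {
    return "stale";
  }
  return "expired";
}
```

The second change in the patch makes the "stale" branch always refetch the leaf segments, instead of doing so only when the prefetched data could not be applied.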

I’d be okay with that. On top of that, you should be able to opt out of prefetching at a page level, for pages in which stale data are unacceptable or useless. Configuring that on <Link tags seems backwards, more distributed, and more brittle.

Hi @timneutkens , we’ve been following this issue closely and upgraded Next.js to get the new client-side router caching behaviour you mentioned above.

We’ve identified what seems like quite a serious regression, which appears to have been introduced in version 13.3.2-canary.2 (though we suspect it may have originated in canary.0; we could not test this as it does not seem to be published on npm). This issue is causing quite regular occurrences of pages becoming permanently stuck in their loading states. We’ve narrowed down the steps to exactly this:

  • Project setup:

    • Production site deployed to Vercel
    • All pages are Server Components which fetch their data over network requests
    • loading.js in app/ and app/items/
  • Steps:

    • Open the website on a page without dynamic segments (/items)
    • Wait 30 seconds
    • Click link to a page that has dynamic segments (/items/123)
  • Outcome:

    • New page gets stuck on infinite loading state
    • Nothing in the console error log and no stalled network requests

This can be worked around by adding prefetch={false}, clicking the links before 30 seconds have passed, or downgrading to the Next.js v13.3.1 stable release or earlier.

I think there are too many unresolved bugs for the team to focus on this specific one; it seems like they are working on shipping features right now.

We have to be patient and not harass the team.

And if after a long time, this one is not resolved (and is closed automatically), we can still create a new one, by providing all the context needed.

Looks like Tim wrote some tests for the shouldHardNavigate function to verify that it will soft navigate if the segments match:

https://github.com/vercel/next.js/pull/45303/files

So it’s possible that this is a signal that Next.js does not plan to provide hard navigation for these cases…

So maybe there will indeed need to be a way for all use cases to be supported by the changes provided in the future Mutations RFC

@markitosgv I’ve seen your example, but I don’t understand what you don’t understand…? It definitely looks buggy to me.

Sorry, the issue is navigating to /dashboard/[team-x]/detail links; the docs say that a hard navigation will occur, but it seems that it is always a soft navigation too

If you don’t want to patch next, I’m using this workaround and it’s working well so far.

startTransition(() => {
  router.push('/some-route');
  router.refresh();
});

Thanks. It kind of does the trick as a quick-and-dirty temporary “fix”. It renders the page twice on first page load (in production) and multiple times in dev. I just have to cross my fingers for a proper fix from the Next team in the future.

@Fredkiss3 You can check it under “Usage” in your Vercel dashboard

See screenshot

image

@joulev

I’ve been using revalidatePath extensively for a while and it seems to work fine for me at least as I am on the hobby plan.

The reason I’ve talked about revalidatePath not costing you anything is that if it doesn’t revalidate the data cache (fetch cache), then it should be ok. (I think), since there will not be any cached fetch to revalidate.

For further evidence, I don’t use any revalidateTag in my apps, but revalidatePath in my one-user apps (I’m the only user) has got me to 121 revalidations in the last 30 days. It is a force-dynamic page.

  • Question: where did you check the total revalidations you ran in your Vercel dashboard? I would like to check; maybe I haven’t run that many revalidations.

@Apestein

This will not cause the flicker unlike calling router.refresh() from useEffect.

I presume that back/forward browser navigation will not take advantage of the router.refresh() inside the button; when the data is updated after having navigated with the LinkButton, the old data still persists 😕.

I always felt “safe” using Next as every feature had a logical implementation, and I felt like I understood what went on under the hood. Now I feel as though there are magical caches and fetch hijacks by default, and I can’t trust that some page/route/component isn’t holding stale data.

should not be required to avoid the overly-eager caching

I like eager caching and pushing as much to SSG as possible. That’s the mental shift devs need to make - unless you explicitly use a known-dynamic feature, or you tell NextJS that you want to be dynamic, it’s going to optimize and cache the hell out of it. There is a heavy bias to run-time performance, which is great, IMO. It just needs to calm down when I tell it to. 😉

If there was an option, what would that do to layouts? One of the issues mentioned for router.refresh() is that it’s too brute-force because it invalidates other layouts.

One of the issues we’re trying very hard to avoid is refetching data in layouts during navigation, since that adds a lot of extra data. Especially with Server Components, since the idea is that you’d move to do more in those - but that costs more than sending it to the client if you’re going to refetch all the time.

Additionally, we’re not finished in optimizing this. We plan on adding smarter fetching that does even less fetching in the future. E.g. for subpages like tooltips and pagination and perhaps even automatic ones.

I think the intuitive expectation here is that maybe just the page.tsx level down would be refetched upon any navigation. However, for a search param maybe it’s not so obvious that we can optimize that to only refetch a subtree. Which is likely what you want in that case.

So we’d either need to refetch the whole page upon navigation (same as router.refresh()) or make up some kind of scope for it.

Is it true that only refetching page.tsx down would be sufficient or is that even too much in the search params case?

I had a problem updating data (fetched with fetch) in a component when visiting a page via links, RandomPage <-> HomePage (back/forward). I want to share my solution using router.refresh()

page.tsx | layout.tsx:

import { SetDynamicRoute } from './SetDynamicRoute';

export default async function Page() {
  const { rnd } = await getRandom();

  return (
    <div>
      {/* Insert this in any page or layout whose route should be refreshed,
          i.e. receive fresh data on every visit. */}
      <SetDynamicRoute />
      <h1>{rnd}</h1>
    </div>
  );
}

async function getRandom() {
  return await fetch('http://localhost/random', { cache: 'no-cache' }).then((res) => res.json());
}

SetDynamicRoute.tsx:

'use client';

import { useEffect } from 'react';
import { useRouter } from 'next/navigation';

export function SetDynamicRoute() {
  const router = useRouter();

  useEffect(() => {
    router.refresh();
  }, [router]);

  return <></>;
}

Hello everyone, there have been some updates that help with this issue. As of next@13.4.6:

  • updating cookies in server actions (with cookies.set() or cookies.delete()) does invalidate all the client side cache (PR)
  • using revalidatePath() or revalidateTag() in a server action also invalidates all the client side cache (PR)

That may help people who get stale data after performing some actions. However, the root issue stays the same, and you still get stale data for 30 seconds if you are not using server actions.
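The 30-second window, including the timer reset on every re-use, can be modelled with a toy cache. This is an illustration only, not Next.js internals; ToyRouterCache is a made-up name:

```typescript
// Toy model of the client-side router cache's 30s reuse window.
const TTL_MS = 30 * 1000;

class ToyRouterCache {
  private entries = new Map<string, { data: string; lastUsedTime: number }>();

  // Returns the cached payload when the entry was used within the last 30s;
  // otherwise "refetches" (here: stores whatever the server currently returns).
  get(path: string, now: number, serverData: string): string {
    const entry = this.entries.get(path);
    if (entry && now < entry.lastUsedTime + TTL_MS) {
      entry.lastUsedTime = now; // every re-use resets the timer
      return entry.data;
    }
    this.entries.set(path, { data: serverData, lastUsedTime: now });
    return serverData;
  }
}
```

This also shows why fast back-and-forth navigation keeps serving stale data: each visit resets lastUsedTime, so the 30 seconds never elapse.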

@karlhorky @Xexr I think there is a function in next/cache that exists for that, but right now it is still behind an unstable flag: the function is unstable_cache. I don’t think there are docs for it yet, but you can guess the usage from the TypeScript interface.

@dragidavid I have a similar scenario to yours. At first, like you, I thought this behavior was very disturbing, but some parts of it are really helpful in the app. Of course, a reload is not ideal, but you could use the revalidatePath function after the mutation. It is a more suitable approach and probably solves your problem. My implementation looks like this:

Screenshot 2023-05-21 at 00 47 19

I tried that actually but didn’t do much, as many of the folks here, I tried the export const revalidate = 0; approach as well as the force-dynamic. I moved this direct prisma query into a fetch request and set the cache to no-store. Nothing worked so far. I was still getting the cached post when navigating back and forth.

I might be doing something wrong, not sure at this point 😅

This is my page FYI

import { notFound } from "next/navigation";

import Editor from "components/Editor";

import { prisma } from "lib/prisma";

async function getPost(id: string) {
  return await prisma.post.findUnique({
    where: {
      id,
    },
  });
}

export default async function Page({ params }: { params: { id: string } }) {
  const post = await getPost(params.id);

  if (!post) {
    notFound();
  }

  return (
    <Editor
      post={post}
    />
  );
}

I just came across this issue as well. I have a dynamic page under app/[id]/page.tsx where I get a post from the database with that id. If I end up modifying that post, like simply changing the title or something and then navigating back to the / with the browser back button and then clicking on the post again, the cached post shows up, not the one with the updated title.

Obviously a reload fixed the issue, but that’s not ideal.

Really annoying.

I suppose there will be a config that would allow you to do something like this:

export const pageTTL = 0
export const revalidate = 0

export default async function AdminPage() {
    // your page code...
}

so you could still allow prefetches (on hover or by default, for example) for fast navigations, but every subsequent navigation would call the server.

Note: this is what I suppose it would look like; I’m not privy to the Next.js or Vercel teams’ plans, so the official API may be totally different.

I already reproduced it based on the steps you provided, thanks for offering though 🙏

Thanks @timBm10c, having a look!

This is better, but TBH I wish I could set 30 seconds -> 0 seconds.

It seems that this is being considered.

Not sure if this is the same issue as @karlhorky is describing above, but I basically have a “data leak” because dynamic server-side pages are not correctly being re-rendered.

Let’s say I have this page: app/profile/page.tsx

export default async function Profile() {
  const session = getCurrentSession();
  const profile = await getProfileById(session?.user?.profileId);
  
  return <div>{profile.name}</div>;
}

export const revalidate = 0;
export const dynamic = "force-dynamic";

When I open this profile page in browser 1 with session 1, I get the name of the profile associated with session 1. After opening this page in another browser, logged in with session 2, I can STILL get the name of the session 1 profile in there, and after a couple of refreshes, then the data switches to the correct one.

This is a big issue as this will cause data leaks and people getting data that is not theirs.

I am not using any fetch calls in my page, only database retrievals using Prisma, so I guess Next picks this up as static data and does not refresh the page correctly on navigation/refresh? (This does not only happen after following a Link component, but also after doing a browser refresh!)

Adding a hard refresh as suggested here:

const HardRefresh = () => {
  const router = useRouter();
  useEffect(() => {
    router.refresh();
    // eslint-disable-next-line react-hooks/exhaustive-deps
  }, []);
  return null;
};

export default HardRefresh;

Also does not seem to solve the issue. It IS more stable, but I can see multiple requests happening in both my browsers, and from time to time I still end up with mixed session data. This is a really big issue, as I have multiple pages accessing session-based data that should never be shared (this is the basics of authentication, so… 😄).

Any suggestions I can try or things I can look into to solve this issue?

---- edit ----

Might be useful to know that the response headers for the page are perfectly fine and state NOT to cache the response, but it seems to be the Next.js server that is actually caching it and returning stale data:

HTTP/1.1 200 OK
Vary: RSC, Next-Router-State-Tree, Next-Router-Prefetch, Accept-Encoding
Cache-Control: no-store, must-revalidate
X-Powered-By: Next.js
Content-Type: text/html; charset=utf-8
Content-Encoding: gzip
Date: Tue, 11 Apr 2023 09:11:19 GMT
Connection: keep-alive
Keep-Alive: timeout=5
Transfer-Encoding: chunked

I also tested this on my Netlify deploy, and the cached hydration data from the last session that logged in is returned, even though the response headers show no sign of caching:

age: 5
cache-control: no-cache
content-encoding: br
content-type: text/x-component
date: Tue, 11 Apr 2023 09:22:23 GMT
server: Netlify
strict-transport-security: max-age=31536000
vary: RSC,Next-Router-State-Tree,Next-Router-Prefetch,Accept-Encoding
x-nf-render-mode: ssr
x-nf-request-id: 01GXQSZ62G65K78WSGP5143E85
x-powered-by: Next.js

Yes. So here is what is working now on canary!

After the mutation you should call router.refresh(), and on the page where you need the freshly mutated data you fetch like this:

const response = await fetch('/api/something', {
  cache: 'no-store',
});

Hope this helps. 😃

Ok, I used <Link prefetch={false} /> for the dynamic pages and it seems to fix the problem: https://stackblitz.com/edit/vercel-next-js-n1tqpr?file=app%2Flayout.tsx

I think this is the current (kinda) official solution: for links you need to add prefetch={false}, and router.push always does a hard navigation.

It can be very useful to always have fresh data for navigation, even with mutations.

I have a use case at my company where we have an app (the app is not built with Next.js, but bear with me) that has a short-lived JWT (about 1 hour). We want our users to be logged out after one hour without any mutation; the app is a dashboard, so it can be left in a tab without refreshing for a long time.

If every navigation can get fresh data, we won’t have any problem with stale data and sessions.

Issue still happening on v13.1.3 and in 13.1.4-canary.0

stackblitz.com/edit/github-6fhvxk?file=src%2Fapp%2Fpage.jsx

@Josehower @Fredkiss3 same here! I want to force server-side fetching again on some dynamic pages when the user is navigating. I think it is normal behaviour to get “fresh” pages when users are navigating between pages

I’m on the latest Next.js version (v13.1.3) and the workaround using <a> and router.push isn’t working.

Right, router.push() is a soft navigation too; you may need to use

router.replace("/new-page");
router.refresh();

Any news yet about this feature?

Using a patch of the underlying code might be the best option for folks looking to avoid revalidatePath. It might seem heavy-handed but it’s easy to disable if/when this is ever improved.

The problem is that it still requires revalidatePath/revalidateTag, which means money if you host it on Vercel. And I don’t see a way to invalidate without revalidatePath (money) or router.refresh() (not suitable for all cases; what if I don’t want to refresh the entire page but only one particular server component?)

I have a workaround that seems to work well. It uses a container component and an abstracted Link component (creatively called ExtendedLink in my demo) that triggers a server action to revalidatePath after someone browses away from the page. It is opt-in – you can mix no-cache and default-cache pages by using different Link components. It is described here. I whipped this up quickly, it can probably be refined, but it’s doing ok for me so far.

I’ve had so many bad experiences with parallel routes that I just don’t bother to use them anymore

Just tested: revalidatePath/Tag are working in Server Actions. If you’re not using Server Actions you must use router.refresh() or switch to a client component. Edit: I found out you only get 100 revalidations on the hobby tier. I feel it’s very scummy to have a feature that you can’t opt out of and then force people to pay to use it.

@viniciusbitt

What confuses me the most is the revalidatePath and revalidateTag which by name, says it revalidates, but the client-side cache does not revalidate

That’s exactly what it does. It clears the client-side router cache: for the current page it refreshes it, returning new data, and for other pages it will trigger loading.tsx boundaries.

If you don’t see it, upgrade to the latest canary version (npm i next@canary), it works there.

That video is good. This issue will prevent me and I’m sure many others from adopting /app until it’s addressed. I’m assuming they are having internal discussions now if not actually making changes.

Well, for now it seems there’s no specific way to do this using the available next features.

I ended up doing a full refresh using window.location.href for actions where I want to invalidate the server-side cache.

So for example;

  • A user logs in successfully: I set window.location.href to make a full request to the server. This way, all previously cached pages are invalidated, and the pages the user visits while logged in are cached instead, making navigation easy for the user.

PS: I also use client-side data fetching with SWR to keep the data on the user profile fresh… this way, the entire page is served from the cache, but the data will always be up to date because of SWR

  • If there’s a logout request, or if my useUser hook ever returns an unauthenticated error, I also set window.location.href

the problem is when you want not only /dashboard/user but also /product/[id], /profile, etc.

And the first quote still stands true:

but you’d have to manually validate the pathname and throw 404 in your route

since you don’t want the user to access the dashboard through /anything/user, right?

I got prefetch={false} working again, but it had to be done in tandem with running the app from a dynamic route. Think about how the next-intl library works by prepending routes with /en/ for English and /fr/ for French.

This would mean the app structure would look like this now, using the locale dynamic prop as an example:

app
  [locale]
    layout.tsx   <--- root layout
    page.tsx     <--- root page

And routes would be accessed like so:

/en/
/en/about-us

I use the next-intl library in a lot of my projects anyway, so it’s not a big deal to do it this way, but you don’t have to use the library to get this working, and you obviously don’t need to use [locale] as the dynamic param either.

It does have the unfortunate effect of making your URLs less than desirable, but it’s just another day, another workaround until there’s an actual solution.

Edit:

The dynamic prop [locale] can be anything because it’s just a param, so if you’re running a dashboard, for example, you can just access the root URL on /dashboard

@matt-kruse

Consider a simple example of notifications. My client side notification badge polls the server and informs the user that they have a new notification. The user clicks the badge to go to the notification page but doesn’t see a new notification because they were just there 25 seconds ago and it’s still cached. And now it won’t be refreshed in 5 seconds, but 30 more seconds!

@sebmarkbage If you want a production example, you could take the GitHub web app, or you could look at this app: https://grafikart.fr (made with PHP). Every time the user navigates to the notifications page, it is always refetched on the server. With the second one you’d believe the app is an SPA, but it is a totally server-side rendered app that just uses Turbolinks for navigations; it can be as fast as an SPA, and the UX here is better since you always have the latest notifications.

It would be helpful if someone has a prod app example that they can show where this is bad UX. It can help explain someone the details of how a potential solution should work. Not a theoretical demo. That doesn’t have enough detail to allow for flexibility in the solution.

@Fredkiss3 Yes it is, I was using 13.3 and it wasted a lot of time. On 13.4 it is working fine. Thanks

I am also facing this issue. I suggest that when the cache for a route has been revalidated by revalidatePath, subsequent navigation to that route should not be a soft navigation but a hard navigation. This BUG needs to be fixed.

@Fredkiss3 oh my bad >.>

@ShueiYang it does not; this is a suggestion for how it could work and an answer to @sebmarkbage’s comment: https://github.com/vercel/next.js/issues/42991#issuecomment-1622068226

@EugeneMeles thank you for sharing your solution.

@Fredkiss3 sadly I can’t use fetch in this case as I am using a Node package that wraps the API calls, so I would have to use cache(). Will check out unstable_cache!

I’m doing this for now… not sure if it will be helpful to anyone (or if this is a major anti-pattern / problem):

Essentially I’m tracking client navigations with usePathname, and if I see you do a client-side navigation, I just call router.refresh() (which ends up giving a kind of SWR-like experience).

image

Oho @karlhorky, when did they add that 👀 Thanks for pointing it out! Happy discussing with you too

With the client-side cache semantics of 30 seconds, layouts are also not revalidated on navigation (I think?), so this one is expected. And I suppose critical data is not supposed to be fetched in the layout, right?

So it wouldn’t change that much if the cache were set to a lower time.

I don’t know if it works today, but what if the user could specify the revalidate export on the parent layout and have the layout’s client-side and server-side caches respect this stale time? If devs want to customise this time for one-off pages inside the layout, or for a group of other pages, they can do so in individual pages or in layouts.

If you take the case of Remix, with their nested-routes approach, each navigation inside a subgroup (or layout) of pages only refetches the server data for that subgroup, and navigating to another subgroup refetches the data for all of that subgroup. You could do the same, but with respect to the revalidate export, with a default of 0 if not specified (or 30 seconds if you want) and a big disclaimer in the docs as to why this time was chosen.
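That subtree refetching can be illustrated with a tiny sketch (a hypothetical helper, not Remix or Next.js code):

```typescript
// Given the segments of the current and target routes, only the segments
// below the deepest shared layout would be refetched on navigation.
function segmentsToRefetch(from: string[], to: string[]): string[] {
  let shared = 0;
  while (
    shared < from.length &&
    shared < to.length &&
    from[shared] === to[shared]
  ) {
    shared++;
  }
  return to.slice(shared);
}
```

For example, navigating from /dashboard/settings to /dashboard/billing would only refetch the billing segment, while navigating to a different subgroup refetches everything below the root.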

In our use case, we have a cart page that gets cached client-side when the user visits it for the first time; then when they navigate away -> add a new product -> go to the cart page again, they don’t see their newly added items. We are working around it with a router.refresh(), but it slows things down and the UX is not great.

I wonder if we could get a router.revalidateTag or some way to bust the cache a bit more granularly on the client.

I recreated the same problem in my repo: https://github.com/ricardasjak/next-prevent-cache

I have to confirm that downgrading to Next v13.3.1 actually solved this issue.

Hi @Fredkiss3 - Thanks for the suggestion, although a double fetch, especially at scale, won’t work either.

Agree there are likely many things in the works, and will probably need to wait and see.

Thanks for staying on top of it.

My bad! I meant prefetch=false 😃

Thank you @Fredkiss3, I actually tried that; the problem is my component is now refreshing twice, so it’s making two fetches to the API. Will try a downgrade as you suggested in a previous comment.

The behavior described is not possible at the moment. Would be awesome though 😉

@Fredkiss3 THANK YOU!

I downgraded to 13.3.1 and this is exactly the behavior we want. We’d prefer correct and slower over stale and faster. It would be great to have similar behavior re-enabled in future versions. If I understand what is going on correctly, it would be great to have the option to disable the page-level cache.

@grantmagdanz in versions prior to next@13.3.2, you could just use a <Link prefetch={false}>; Next would not issue a prefetch of your page and you would not see stale data. The only downside is that navigations aren’t as fast as possible. If that’s okay with you, you could downgrade to next@13.3.1 for now.

In the new version, <Link prefetch={false}> does not prefetch the data, but it reuses the cache the same way it does with prefetching enabled. I suppose this is so that prefetching is the recommended way to do things, and you would opt out of prefetching only if you want to avoid too many requests to your server (if you have many links, for example).

However, there are things in the works for both allowing prefetching on hover (with <Link>) and setting the stale time of the client-side cache. You can read it in the PR:

Follow ups

  • we may add another API to control the cache TTL at the page level
  • a way to opt-in for prefetch on hover even with prefetch={false}

Related: https://github.com/vercel/next.js/discussions/49708

I appreciate the way it was stated there: we expect cookies() and headers() to opt us into dynamic rendering (I would also call for this to mean hard navigation), given that components cannot be statically rendered when they access values (cookies and headers) that may change from one request to another.
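As an illustration (hypothetical page; the cookie name is made up), merely reading a request-scoped value is enough to opt a route into dynamic rendering:

```tsx
// app/profile/page.tsx (hypothetical)
import { cookies } from "next/headers";

export default async function ProfilePage() {
  // Reading cookies() makes this route dynamically rendered, because the
  // value can differ on every request.
  const theme = cookies().get("theme")?.value ?? "light";
  return <p>Current theme: {theme}</p>;
}
```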

@Fredkiss3 that will only get the current page… other cached pages will not be invalidated

I tried to check whether the window.next object contains the router.sdc and router.sbc properties, and it does not contain them, at least in the Next app router.

And I think using the code you provided is the same as calling router.refresh().
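For comparison, router.refresh() re-runs the server components for the current route without losing client-side state (a sketch; the component name is made up):

```tsx
"use client";

import { useRouter } from "next/navigation";

// Hypothetical client component that forces a refetch of the current route.
export function RefreshButton() {
  const router = useRouter();
  return <button onClick={() => router.refresh()}>Refresh data</button>;
}
```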

Interesting… and what do you think about editing the post to be more general anyway? I guess there are situations where users would want to:

  1. Link to /dynamic (not a dynamic segment, because it does not use square brackets in the directory name)
  2. Use <Link>
  3. Not disable the prefetching behavior
  4. Not see stale data on navigation to this link

What I mean here is that A) it seems like the prefetch prop doesn’t really solve the issue completely, and B) this issue is more general than just dynamic segments. In my opinion, the title and description should reflect that this issue does not just affect dynamic segments; it affects all dynamic routes.

OK, update: it does a hard navigation when navigating to a dynamic segment (/path/[dynamic]); I’ve tested it on the latest canary:

https://stackblitz.com/edit/vercel-next-js-n1tqpr?file=app%2Fpage.tsx,app%2F[dynamic]%2Fpage.tsx,app%2Flayout.tsx,app%2Fdynamic%2Fpage.tsx

The only downside is that it does not do a hard navigation when using a static segment marked with dynamic = "force-dynamic".
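That is, a page like the following (hypothetical file; the API URL is a placeholder) is still soft-navigated to, even though it is rendered dynamically on the server:

```tsx
// app/feed/page.tsx (hypothetical) — a static segment, no square brackets
export const dynamic = "force-dynamic";

export default async function FeedPage() {
  const res = await fetch("https://example.com/api/feed", {
    cache: "no-store",
  });
  const items: string[] = await res.json();
  return (
    <ul>
      {items.map((item) => (
        <li key={item}>{item}</li>
      ))}
    </ul>
  );
}
```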

Seems like there will be a solution for invalidating the Next.js Cache for different routes programmatically in one of the next versions of Next.js:

Next up, we plan to implement programmatic updates to specific paths or cache tags.

https://vercel.com/blog/vercel-cache-api-nextjs-cache#:~:text=Next up%2C we plan to implement programmatic updates to specific paths or cache tags.

We have to be patient and not harass the team.

Yeah, I agree. That would be ineffective and not appreciated. I was wondering if there’s any other way to bring this to their attention. But I suppose it may just not be a priority.

So it’s possible that this is a signal that Next.js does not plan to provide hard navigation for these cases…

I feel it might be different if maintainers were aware of this issue, but I don’t think they are? There’s been no comment from any of them here or the other duplicate issues that I’m aware of.

This is very concerning as I think this is a huge and obvious flaw in the specified/documented behavior and AFAICT there is no way to fix it without some kind of breaking change. I hope it gets some attention before declaring app dir stable.

Does anyone else feel like maintainers aren’t aware of this issue and should be? Does anyone have any idea how to get their attention?

So maybe all of these use cases will indeed need to be supported by the changes provided in the upcoming Mutations RFC.

I’m doubtful of that, since I assume that change wouldn’t (or shouldn’t) affect behavior when we’re not doing mutations, or when a mutation happened in a different tab or was made by another user. For some pages we just want a navigation to always get fresh data. I think support for this would need to be a separate change.

What about disabling it only for dynamic pages?

Yes, I plan to make a minimal example, first to verify that nothing in my code is triggering this behaviour, and second to report a bug properly if that is the case. I just need to find a bit of free time now.

[Edit] I have created a bug report for this issue that might not be related but pretty critical I believe #45026

@njarraud hm, this seems like a significant bug, maybe it’s worth reporting this separately as a new issue, with a link back to this issue…