ethers.js: Using provider.on with v6 errors as filters 'expire'

Ethers Version

6.4.0

Search Terms

rpc provider.on

Describe the Problem

The implementation of event subscription with provider.on seems to have changed between v5 and v6. In v5, requests to the provider were for logs within a given block range; in v6 we call eth_getFilterChanges against a saved filter definition.
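
For reference, a minimal sketch of the subscription pattern in question (the provider, address, and topics here are illustrative, not the original code):

import { AlchemyProvider } from "ethers";

const provider = new AlchemyProvider("mainnet", process.env.ALCHEMY_API_KEY);

// In v6 this installs a filter via eth_newFilter and then polls
// eth_getFilterChanges with the returned filter id.
provider.on({ address: "0x0000000000000000000000000000000000000000", topics: [] }, (log) => {
  console.log("log received", log);
});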

This works well for a while, but it looks like saved filters expire, and eventually you get this:

@TODO Error: could not coalesce error (error={ "code": -32000, "message": "filter not found" }, code=UNKNOWN_ERROR, version=6.4.0)
    at makeError (file:///Users/foo/Documents/GitHub/crowsnest/node_modules/ethers/lib.esm/utils/errors.js:116:21)
    at AlchemyProvider.getRpcError (file:///Users/foo/Documents/GitHub/crowsnest/node_modules/ethers/lib.esm/providers/provider-jsonrpc.js:628:16)
    at file:///Users/foo/Documents/GitHub/crowsnest/node_modules/ethers/lib.esm/providers/provider-jsonrpc.js:247:52
    at runMicrotasks (<anonymous>)
    at processTicksAndRejections (node:internal/process/task_queues:96:5) {
  code: 'UNKNOWN_ERROR',
  error: { code: -32000, message: 'filter not found' }
}

An example of what this looks like on the provider side: [screenshot: Screenshot 2023-05-31 at 1 58 59 PM]

Code Snippet

No response

Contract ABI

No response

Errors

No response

Environment

node.js (v12 or newer)

Environment (Other)

No response

About this issue

  • Original URL
  • State: open
  • Created a year ago
  • Reactions: 2
  • Comments: 21 (4 by maintainers)

Most upvoted comments

Hi @AndreMiras and @kevinday,

⚠️ Warning ⚠️ this isn’t the most elegant solution… 🤣

To handle this I override all logging, then look for "filter not found" errors and maintain a count of them. When the count goes over a threshold I exit the process. I am running this under pm2, which catches the process exit and spins up a new process. That creates a new instance of the listener and I'm up and running again. You could use the same approach without the exit, attaching a new listener instead.

This is the code:

// Track how many 'filter not found' errors we have seen and the threshold
// at which we give up and let pm2 restart the process (threshold value is illustrative)
let filterNotFoundErrorCount = 0;
const maxFilterErrorCount = 5;

// Save the original console.log
const originalConsoleLog = console.log;
// Override console.log
console.log = function(...args) {
  // Call the original console.log with the arguments
  originalConsoleLog.apply(console, args);
  const logMessage = args.join(' ');
  if (logMessage.includes('filter not found')) {
    filterNotFoundErrorCount += 1;
    console.error("Filter not found: " + filterNotFoundErrorCount);
    if (filterNotFoundErrorCount > maxFilterErrorCount) {
      console.error("Filter not found count exceeded. Restarting....");
      process.exit(1); // This is fatal, we need to exit and restart
    }
  }
};

Like I said, I’m not sure this is the right answer (pretty confident it isn’t lol), but I have been running with this for months and it’s worked perfectly for my use case.

@AndreMiras I'm dealing with the exact same thing right now as well. I also tried contract.on(...).catch(), which doesn't seem to catch the error either.

@omnus, how did you manage to work around the issue? I've had no luck with provider.on(), nor with try/catch around contract.on() and provider.on(). My idea was indeed to catch this error and resubscribe to the event, but I can't seem to catch it. If I can't get that to work, I will probably implement a keep-alive that resets the filter if no event has been received for a while. The other solution would be to manually re-implement the event subscription, consuming the eth_newFilter method and doing the polling myself.
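
A rough sketch of that keep-alive idea (untested; the RPC URL, contract address, ABI, event name, and timeouts are all placeholders):

import { Contract, JsonRpcProvider } from "ethers";

const provider = new JsonRpcProvider(process.env.RPC_URL);
const contract = new Contract(process.env.CONTRACT_ADDRESS, ABI, provider); // ABI is a placeholder

const STALE_AFTER_MS = 5 * 60 * 1000; // treat 5 minutes of silence as a possibly expired filter
let lastEventAt = Date.now();

async function attach() {
  await contract.on("Transfer", (...args) => {
    lastEventAt = Date.now();
    console.log("event received", args);
  });
}

await attach();

// Periodically tear the listener down and re-attach it, which makes ethers
// install a fresh filter on the backend.
setInterval(async () => {
  if (Date.now() - lastEventAt > STALE_AFTER_MS) {
    await contract.removeAllListeners("Transfer");
    await attach();
    lastEventAt = Date.now();
  }
}, 60 * 1000);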

I had the same error and it disappeared after I set polling: true, as below:

this.Provider = new ethers.JsonRpcProvider(fr, undefined, { staticNetwork: ethers.Network.from(connInfo.ChainId), polling: true });

Auto-recovery is planned and being worked on now.

It just needs to be tested on various backends which have different ways of “hanging up”.

But it is 100% planned and coming soon.

Interesting. It would be nice if, on expiration, it used getLogs to backfill the missed range and then resubscribed the filter.
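
Something along those lines might look roughly like this (a sketch only; provider, filter, lastSeenBlock, and handler are assumed to be tracked by the application):

// Sketch of manual recovery: backfill the gap with getLogs, then resubscribe.
async function recover(provider, filter, lastSeenBlock, handler) {
  // Fetch anything missed while the old filter was dead...
  const missed = await provider.getLogs({
    ...filter,
    fromBlock: lastSeenBlock + 1,
    toBlock: "latest",
  });
  missed.forEach(handler);

  // ...then attach again, which creates a fresh filter on the backend.
  await provider.on(filter, handler);
}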