gaia: Node randomly stops syncing, after restart it's fine (for some time)

Summary of Bug

I’m running a Cosmos node and occasionally (now at least once a day) it just stops syncing. In the logs I can see errors like:

2:11PM ERR Connection failed @ sendRoutine conn={"Logger":{}} err="pong timeout" module=p2p peer={"id":"5dc6a28f2caff8e61c47c1c9b658e7b1ea5fbfd9","ip":"5.9.42.116","port":26656}

and

2:11PM ERR Stopping peer for error err=EOF module=p2p peer={"Data":{},"Logger":{}}

It doesn’t recover by itself; the only way to get it syncing again is to restart it (the container).

EDIT: a restart doesn’t always help immediately; I get the same connection logs.

I also just tried with a newly downloaded addrbook.json.

Version

v7.1.0

Steps to Reproduce

I’m just running a node with gaiad start --x-crisis-skip-assert-invariants


For Admin Use

  • Not duplicate issue
  • Appropriate labels applied
  • Appropriate contributors tagged
  • Contributor assigned/self-assigned
  • Is a spike necessary to map out how the issue should be approached?

About this issue

  • Original URL
  • State: closed
  • Created 2 years ago
  • Comments: 26 (7 by maintainers)

Most upvoted comments

We have the same problem when running chihuahuad (based on the Cosmos SDK). The problem occurs for us only when the REST API is enabled and some application tries to download all accounts via the paginated endpoint "cosmos/auth/v1beta1/accounts". At that moment we can see this output in our node logs:

    May 04 09:01:23 chihuahua chihuahuad[674]: 9:01AM ERR Connection failed @ sendRoutine conn={"Logger":{}} err="pong timeout" module=p2p peer={"id":"28c227d31064e4bacb366055d796f0c3064c1db0","ip":"149.202.72.186","port":26613}
    May 04 09:01:26 chihuahua chihuahuad[674]: 9:01AM INF service stop impl={"Logger":{}} module=p2p msg={} peer={"id":"28c227d31064e4bacb366055d796f0c3064c1db0","ip":"149.202.72.186","port":26613}
    May 04 09:01:27 chihuahua chihuahuad[674]: 9:01AM ERR Stopping peer for error err="pong timeout" module=p2p peer={"Data":{},"Logger":{}}
    May 04 09:01:30 chihuahua chihuahuad[674]: 9:01AM INF service stop impl={"Data":{},"Logger":{}} module=p2p msg={} peer={"id":"28c227d31064e4bacb366055d796f0c3064c1db0","ip":"149.202.72.186","port":26613}
    May 04 09:01:44 chihuahua systemd[1]: node.service: Main process exited, code=killed, status=9/KILL
    May 04 09:01:44 chihuahua systemd[1]: node.service: Failed with result 'signal'.

Disabling the API fully solves the problem. Upscaling the VPS from 4 cores / 8 GB RAM to 16 cores / 64 GB RAM does not solve it. This issue affects all Cosmos SDK projects. It seems the issue may be closed.
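Based on that report, here is a minimal sketch (not from the thread) of how such a paginated crawl of the accounts endpoint could be driven to generate load. It assumes the REST API is enabled in app.toml and listening on the default 127.0.0.1:1317; adjust both as needed.

    # Rough load generator: walks the paginated /cosmos/auth/v1beta1/accounts
    # endpoint described above. Assumes the REST API listens on 127.0.0.1:1317.
    import requests

    BASE_URL = "http://127.0.0.1:1317"          # assumed REST listen address
    ENDPOINT = "/cosmos/auth/v1beta1/accounts"

    next_key = None
    total = 0
    while True:
        params = {"pagination.limit": "500"}
        if next_key:
            params["pagination.key"] = next_key  # opaque key returned by the previous page
        resp = requests.get(BASE_URL + ENDPOINT, params=params, timeout=30)
        resp.raise_for_status()
        body = resp.json()
        total += len(body.get("accounts", []))
        next_key = body.get("pagination", {}).get("next_key")
        print(f"fetched {total} accounts so far, next page: {bool(next_key)}")
        if not next_key:
            break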

@bb4L this is the conclusion we ended up with: there’s an interplay between network traffic and node performance. This is a Tendermint / Comet level issue that we think has been addressed in versions after v8. Currently, v8 / v9 are not supported in production, only for archive-related issues, therefore closing this issue. For future versions, we will ask the Comet team to include longer-term tests with heavy RPC / REST loads to confirm that there is no regression and that the performance characteristics are understood.

I can assure you that I have noticed this same issue on other Cosmos SDK chains (Secret and Terra2) several times. This is not gaia-specific; there is something else upstream. It started happening a couple of months ago. I am sorry I have not been able to narrow it down beyond the timeframe and the chains on which we’ve seen this exact issue happening.

Is there a way to trigger the problem? It seems like one way to reproduce the problem is by increasing the RPC (REST/gRPC) load on the node. Without that kind of pressure, are there other means to trigger this issue?

Can’t tell, since it’s happening without me doing anything… / without a high RPC load.

@Daniel1984 we discussed this a bit in the Telegram channel and @MSalopek did some investigation. @nddeluca @bb4L it would be good to check the number of incoming REST/gRPC calls and which endpoints they use, and to see whether pagination limits have a beneficial effect, as @MSalopek noted below.

Reporting that after disabling REST and gRPC the node functions as expected without hiccups.
The best advice I can give is to set up a load balancer/proxy (such as nginx or Cloudflare) and add rate limiting in front of your production nodes. It's known that an RPC server can be brought down with expensive queries - that is not necessarily a gaia issue; it's possible on all cosmos-sdk based networks.
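For reference, a minimal nginx rate-limiting sketch along those lines (the listen port, the upstream address 127.0.0.1:1317 and the limits are illustrative assumptions, not values from this thread):

    # Throttle REST traffic before it reaches the node.
    # limit_req_zone belongs in the http {} context.
    limit_req_zone $binary_remote_addr zone=rest_api:10m rate=5r/s;

    server {
        listen 80;

        location / {
            limit_req zone=rest_api burst=10 nodelay;  # allow short bursts, reject sustained floods
            proxy_pass http://127.0.0.1:1317;          # assumed REST API listen address
        }
    }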

@adizere could you also replicate with some heavy calls to the rest endpoints and see how the performance is impacted?

For me the effect also shows up on nodes that aren’t used by applications (so it can’t be only a load-related issue).

Do you monitor your node? Wondering if the problem might be under-resourcing, i.e., the virtual machine on which your node is running might be unable to keep up with the network. It would be good to check what the CPU/memory profile looks like to eliminate that potential root cause!
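As a rough way to capture that profile, a small sketch using Python's psutil could sample CPU and resident memory of the node process over time (the process name "gaiad" is an assumption; adjust to your setup):

    # Rough resource sampler for the node process; assumes it is named "gaiad".
    import psutil

    proc = next(p for p in psutil.process_iter(["name"]) if p.info["name"] == "gaiad")
    while True:
        cpu = proc.cpu_percent(interval=5)              # % of one core, averaged over 5s
        rss_mib = proc.memory_info().rss / (1024 ** 2)  # resident set size in MiB
        print(f"cpu={cpu:.1f}% rss={rss_mib:.0f}MiB threads={proc.num_threads()}")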

cpu / memory looks fine on my instance(s)

@mmulji-ic thanks for the information, let me know if you need something from my side

@adizere

Hi, can someone provide a minimal way to reproduce this issue? We’d be glad to look into this, but we need that first. The config.toml with peers, gaia version, etc. Many thanks!

  • gaia version: 7.1.0, 8.0.1 as well as 9.0.0 (as written in the issue/other comments)

  • config.toml has no peers set (at least mine hasn’t); output of cat config.toml | grep peer:

    # If true, query the ABCI app on connecting to a new peer
    filter_peers = false
    # Address to advertise to peers for them to dial
    persistent_peers = ""
    # Maximum number of inbound peers
    max_num_inbound_peers = 40
    # Maximum number of outbound peers to connect to, excluding persistent peers
    max_num_outbound_peers = 10
    unconditional_peer_ids = ""
    # Maximum pause when redialing a persistent peer (if zero, exponential backoff is used)
    persistent_peers_max_dial_period = "0s"
    # Set true to enable the peer-exchange reactor
    # peers. If another node asks it for addresses, it responds and disconnects.
    # Does not work if the peer-exchange reactor is disabled.
    # Comma separated list of peer IDs to keep private (will not be gossiped to other peers)
    private_peer_ids = ""
    # Toggle to disable guard against peers connecting from the same ip.
    # Maximum size of a batch of transactions to send to a peer
    # snapshot from peers instead of fetching and replaying historical blocks. Requires some peers in
    # peer (default: 1 minute).
    peer_gossip_sleep_duration = "100ms"
    peer_query_maj23_sleep_duration = "2s"
    
  • Minimal way to reproduce: I guess just try to run a node 🤷🏽‍♂️

@adizere would you recommend doing a tendermint debug dump?

We also ran into this issue – our node would never make it 24 hours without halting syncing.

We resolved it by switching to rocksdb, using the address book at https://polkachu.com/addrbooks/cosmos, and increasing the number of outbound peers to 200. The node has now been stable for 4+ days.
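For reference, the knobs mentioned in that comment live in config.toml; a sketch with values mirroring that report (not a recommendation, and the rocksdb backend requires a gaiad binary built with rocksdb support):

    # config.toml; the downloaded addrbook.json goes into the same config/ directory
    db_backend = "rocksdb"

    [p2p]
    # Maximum number of outbound peers to connect to, excluding persistent peers
    max_num_outbound_peers = 200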