go-spacemesh: Report of Critical Bug in Spacemesh v1.2.8 - Inability for Nodes Syncing via Local Network to Receive Rewards
Dear Spacemesh Team,
I am writing to report a critical bug I encountered while running Spacemesh v1.2.8. It affects nodes that sync via a local network: the node's logs show rewards being earned, but those rewards never appear on the chain or in the account balance.
Bug Description:
While running a node that syncs via a local network, the system's logs indicate that rewards have been obtained. However, no corresponding record or reward is visible on-chain or in the account balance.

Bug Manifestation:
- The locally syncing node logs reward acquisition correctly.
- No corresponding reward information appears on the chain.
- The account balance does not reflect any rewards received.

Steps to Reproduce the Bug:
1. Run Spacemesh v1.2.8.
2. Configure the node to synchronize via a local network.
3. After a period of operation, the logs show rewards acquired.
4. Despite the log entries, there is no visible record of the rewards on the chain or in the account balance.

Additional Information:
I attempted multiple syncs and node restarts, but the issue persists. I have verified that my node settings are correct and that no irregular actions were taken.

Expected Resolution:
I kindly request that your team investigate and resolve this bug. Accurate recording of rewards is crucial for users syncing via a local network. I am willing to provide further information or run any tests needed to resolve this issue.
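To help compare what the node logged against what actually landed on-chain, the reward counts can be pulled out of the executor's "optimistically executed block" log entries. This is only a sketch; the log file path `spacemesh.log` is an assumption, and the `"rewards"` field name follows the log excerpt included below.

```shell
# Extract the "rewards" count from each "optimistically executed block"
# entry in the node log (spacemesh.log is a hypothetical path).
grep 'optimistically executed block' spacemesh.log \
  | grep -o '"rewards": [0-9]*' \
  | grep -o '[0-9]\+$'
```

A non-zero count here, combined with an unchanged account balance, would reproduce the discrepancy described above.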
Please let me know if there is any additional information required or if further testing is necessary to pinpoint and resolve this bug. Your team’s assistance and support are greatly appreciated.
Thank you for taking the time to review this email, and I look forward to your response.
Sincerely,
node logs:
2023-12-15T08:12:18.793Z INFO c6d00.proposalBuilder proposal eligibilities for an epoch {"node_id": "c6d0075ddb273968baa24f686668ef5d47136ae1a653bdedad2b37b5db9049c4", "module": "proposalBuilder", "epoch": 11, "beacon": "0xa12acad4", "atx": "4ce5ad8713", "weight": 298464, "prev": 0, "slots": 2, "eligible": 2, "eligible by layer": [{"layer": 44633, "slots": 1}, {"layer": 46845, "slots": 1}], "name": "proposalBuilder"}
2023-12-16T07:25:04.139Z INFO c6d00.conState received mempool txs {"node_id": "c6d0075ddb273968baa24f686668ef5d47136ae1a653bdedad2b37b5db9049c4", "module": "conState", "layer_id": 44633, "num_accounts": 0, "name": "conState"}
2023-12-16T07:25:05.781Z INFO c6d00.proposalBuilder proposal created {"node_id": "c6d0075ddb273968baa24f686668ef5d47136ae1a653bdedad2b37b5db9049c4", "module": "proposalBuilder", "proposal_id": "9030c9dbb2", "transactions": 0, "mesh_hash": "1b1fe7723e", "ballot_id": "2d9bc5eeed", "layer_id": 44633, "epoch_id": 11, "smesher": "c6d0075ddb273968baa24f686668ef5d47136ae1a653bdedad2b37b5db9049c4", "opinion hash": "1b1fe7723e", "base_ballot": "ab28a7b2b6", "support": 1, "against": 0, "abstain": 0, "atx_id": "4ce5ad8713", "ref_ballot": "0000000000", "active set hash": "0848703629", "beacon": "a12acad4", "votes": {"base": "ab28a7b2b6", "support": [{"id": "e4092bb19c", "layer": 44632, "height": 84399}], "against": [], "abstain": []}, "latency": {"data": "3.245017909s", "tortoise": "41.694309ms", "txs": "5.18565ms", "hash": "30.361797ms", "publish": "1.611940677s", "total": "4.934200342s"}, "name": "proposalBuilder"}
2023-12-16T07:26:50.717Z INFO c6d00.executor optimistically executed block {"node_id": "c6d0075ddb273968baa24f686668ef5d47136ae1a653bdedad2b37b5db9049c4", "module": "executor", "lid": 44633, "block": "2e49b371c7", "state_hash": "0xa7419a1f23c5dd0f75543499aec63f5a19ffa05cfefc2222406490fccdb6eb87", "duration": "1.008054682s", "count": 2, "skipped": 0, "rewards": 44, "name": "executor"}
2023-12-16T07:26:50.847Z INFO c6d00.blockGenerator generated block (optimistic) {"node_id": "c6d0075ddb273968baa24f686668ef5d47136ae1a653bdedad2b37b5db9049c4", "module": "blockGenerator", "layer_id": 44633, "block_id": "2e49b371c7", "name": "blockGenerator"}
2023-12-16T07:26:57.004Z INFO c6d00.blockCert generating certificate {"node_id": "c6d0075ddb273968baa24f686668ef5d47136ae1a653bdedad2b37b5db9049c4", "module": "blockCert", "requestId": "7cdf3c74-e1e8-44fa-a767-5c047d6f7e1d", "layer_id": 44633, "block_id": "2e49b371c7", "eligibility_count": 101, "num_msg": 101, "name": "blockCert"}
spacemesh log: { "timestamp": "2023-12-16T07:25:05.782055424Z", "help": "Node published proposal. Rewards will be received once proposal is included in the block.", "proposal": { "layer": 44633, "proposal": "kDDJ27KMkvFJ+QxNtZpr781SdbE=" } }
About this issue
- State: open
- Created 6 months ago
- Reactions: 1
- Comments: 15 (5 by maintainers)
Dear Spacemesh Team,
I hope this message finds you well. I am writing to report a discrepancy between my node's observed port connections and what the official documentation describes.
Based on the official documentation, I did not expect certain port connections to be established. However, upon checking, I found a persistent established connection on port 7513 of the node, indicating two-way communication on that port. Surprisingly, this communication succeeds, contrary to what the documentation suggests. Here is the output showing the unexpected connections:
ss -npO4 | rg spacemesh | rg 6000
tcp ESTAB 0 0 127.0.0.1:7513 127.0.0.1:6000 users:(("go-spacemesh",pid=39165,fd=11))
tcp ESTAB 0 0 127.0.0.1:6000 127.0.0.1:7513 users:(("go-spacemesh",pid=39202,fd=47))
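For reference, the same check can be run over a saved snapshot of the `ss -npO4` output, printing each go-spacemesh connection as a local-port to peer-port pair. The snapshot filename `ss-snapshot.txt` is a hypothetical path used only for this sketch.

```shell
# Print local -> peer port pairs for every established go-spacemesh
# connection captured in a saved `ss -npO4` snapshot (hypothetical file).
# Column 5 is the local address, column 6 is the peer address.
grep go-spacemesh ss-snapshot.txt \
  | awk '{split($5, l, ":"); split($6, p, ":"); print l[2] " -> " p[2]}'
```

Run against the output above, this shows the symmetric 7513/6000 pair, i.e. both processes hold an established connection to each other.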
This interaction on port 7513 seems to be facilitating successful synchronization between nodes without any apparent issues.
I am reaching out to bring this discrepancy to your attention. Is the observed connectivity on port 7513 expected behavior, or does it signify a deviation from the standard setup outlined in the documentation? I would greatly appreciate any clarification or guidance on this matter.
If any additional information, tests, or adjustments to configurations are required from my end, please let me know. Your prompt assistance and insights into this matter would be immensely helpful.
Thank you for your attention and support in addressing this discrepancy.
@pigmej
Dear Spacemesh Team,
Thank you for your prompt response. I have gathered feedback from other users experiencing the same issue, so this is not an isolated incident. The problem persisted in previous versions and continues in version 1.2.10. My public node connection count is 24, and the configuration for one of my three public nodes, each on a different IP in the local network, is as follows:
Public Node Configuration:
"p2p": { "disable-reuseport": false, "p2p-disable-legacy-discovery": true, "autoscale-peers": true, "bootnodes": [ // bootnode list here ], "direct": [ // direct connections list here "/ip4/0.0.0.0/tcp/6002/p2p/12D3KooWGRy3zzi3L78FXrPDGuMfhYJE4WCiiRz7LHdQLNdJaHHN", "/ip4/0.0.0.0/tcp/6001/p2p/12D3KooWGebdETd67qCV82iT1Cop5ZVG3EfyPa8m9mFbnY3a6qbv" ............... ] }Private Node Configuration:
"p2p": { "listen": "/ip4/0.0.0.0/tcp/6004", "disable-reuseport": false, "p2p-disable-legacy-discovery": true, "autoscale-peers": true, "disable-dht": true, "direct": [ "/ip4/192.168.1.23/tcp/7510/p2p/12D3KooWSqyS5UMQJTdNzsMn6iFBi86vgmuZsyF7u3V4C2AhcV66", "/ip4/192.168.1.23/tcp/7511/p2p/12D3KooWJdTW12gcyL3jbpH1gd65rQJSADpDELKYsK4AKM8AFH9p", "/ip4/127.0.0.1/tcp/7510/p2p/12D3KooWNBBcJNzMFjtkXKBmekvqy9izBZduy7E8tqnXnc1om9hZ" ], "bootnodes": [], "min-peers": 3, "low-peers": 10, "high-peers": 20, "inbound-fraction": 1, "outbound-fraction": 0.5 }I can confirm that each Node ID in the configurations provided is unique to avoid any conflicts within the network.I believe these details regarding node configurations might be helpful in the investigation. If there’s any additional information or specific tests required from my end to assist in resolving this issue, please let me know. I remain enthusiastic about contributing to the resolution process and eagerly await further guidance from your team.
Thank you for your continued attention and support.