sPRUNED: c-lightning does not get funding updates
This is probably not a pure sPRUNED issue, but maybe you already have some experience with it. I run a sPRUNED Bitcoin node with c-lightning 0.6, and at first glance everything looks fine. Configuration as follows: https://gist.github.com/Stadicus/a05c3c5ac6a63cdcfe1aae2b77f17cba
The issue, however, is that c-lightning does not get funding updates. The first send was received by c-lightning, but nothing after that. On-chain transactions to the newaddr address are not shown, and incoming channels are visible, but stuck with status CHANNELD_AWAITING_LOCKIN: "They've confirmed funding, we haven't yet."
In other words, my c-lightning never received the on-chain funding transaction info.
Is there something special to consider with this setup?
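To narrow down where the funding information gets lost, here is a minimal sketch (not from this thread) that queries spruned directly for the chain tip and the channel funding output, assuming spruned exposes its bitcoind-compatible JSON-RPC on 127.0.0.1:8332 and that the credentials, txid and vout below are placeholders to be replaced with your own values:

```python
# Sketch: ask spruned's bitcoind-compatible JSON-RPC whether it can see the
# channel funding output at all. Endpoint and credentials are placeholders.
import requests

RPC_URL = "http://127.0.0.1:8332"          # assumed spruned RPC endpoint
RPC_AUTH = ("rpcuser", "rpcpassword")       # placeholder credentials

def rpc(method, *params):
    """Send a single JSON-RPC call and return the 'result' field."""
    payload = {"jsonrpc": "1.0", "id": "check", "method": method,
               "params": list(params)}
    resp = requests.post(RPC_URL, json=payload, auth=RPC_AUTH, timeout=30)
    body = resp.json()
    if body.get("error"):
        raise RuntimeError(body["error"])
    return body["result"]

# Height spruned believes it has synced to.
print("tip height:", rpc("getblockcount"))

# Funding transaction of the stuck channel (placeholders, take them from
# lightning-cli listpeers / listfunds).
funding_txid = "<funding txid>"
funding_vout = 0
print("funding output:", rpc("gettxout", funding_txid, funding_vout))
```

If gettxout returns null for an output that a block explorer shows as confirmed and unspent, spruned itself is missing the data; if it returns the output, the problem sits between spruned and c-lightning.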
About this issue
- State: closed
- Created 6 years ago
- Comments: 28 (25 by maintainers)
Oh, great. Once you establish a reliable connection pool it should go even better. Hopefully… :p
Glad to hear 😃 To be honest I'm beginning to be skeptical about the bootstrap feature, especially on low-end devices. It probably makes sense to set it to a very low value by default.
Please check out 0.0.2a8.
@Stadicus gave me a lot of useful info, thanks for the logs.
Looks like at some point the peers he's connected to refuse to answer a specific InvItem request: the one for block 000000000000000000204c6219796604394a43bb765bdd25a39e9eb7aa7d2cbb fails on every connected peer. [I really have to investigate why this block, and not the one before or after, is so hard to fetch.]
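A quick way to reproduce this (hypothetical, not something from the logs) is to request that exact block through spruned's bitcoind-style RPC and see whether the call hangs or errors out; endpoint and credentials below are placeholders:

```python
# Hypothetical repro: fetch the problematic block via spruned's RPC.
# A timeout (or a generic -1 error) suggests no connected peer served it.
import requests

BLOCK_HASH = "000000000000000000204c6219796604394a43bb765bdd25a39e9eb7aa7d2cbb"
payload = {"jsonrpc": "1.0", "id": "repro", "method": "getblock",
           "params": [BLOCK_HASH]}

try:
    resp = requests.post("http://127.0.0.1:8332", json=payload,
                         auth=("rpcuser", "rpcpassword"), timeout=60)
    print(resp.json())
except requests.exceptions.Timeout:
    print("getblock timed out: no connected peer answered the request")
```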
The previous peers implementation was naive: spruned didn't add new peers in the pure p2p way, but instead used only the ones available from the DNS seeds.
The new release, 0.0.2a8, implements the getaddr method on the p2p layer, so spruned now collects, persists and uses new peers once discovered. I've also reduced timeouts and re-implemented the peer ban on too many failures, which had disappeared at some point 😦.
Would you please try it and see if things go better?
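The idea, roughly, looks like the toy sketch below. This is not spruned's actual code; names, thresholds and the storage format are made up purely to illustrate "collect and persist peers learned via getaddr, and ban a peer after too many failures":

```python
# Toy illustration of a persisted peer pool with ban-on-failure.
import json
import time
from pathlib import Path

class PeerPool:
    def __init__(self, storage=Path("peers.json"), max_failures=3, ban_seconds=3600):
        self.storage = storage
        self.max_failures = max_failures
        self.ban_seconds = ban_seconds
        # peer address -> {"failures": int, "banned_until": float}
        self.peers = json.loads(storage.read_text()) if storage.exists() else {}

    def add_discovered(self, addresses):
        """Addresses learned from a peer's addr response to our getaddr."""
        for addr in addresses:
            self.peers.setdefault(addr, {"failures": 0, "banned_until": 0.0})
        self._persist()

    def record_failure(self, addr):
        """Ban a peer once it has failed too many requests (e.g. block fetch timeouts)."""
        info = self.peers.setdefault(addr, {"failures": 0, "banned_until": 0.0})
        info["failures"] += 1
        if info["failures"] >= self.max_failures:
            info["banned_until"] = time.time() + self.ban_seconds
            info["failures"] = 0
        self._persist()

    def usable(self):
        """Peers that are not currently banned."""
        now = time.time()
        return [a for a, i in self.peers.items() if i["banned_until"] < now]

    def _persist(self):
        self.storage.write_text(json.dumps(self.peers))
```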
As far as I know, the dev-rescan-outputs command uses the gettxout API, which is Electrum-based in spruned, while normal usage is backed by the getblock API, which is p2p-network-based. This may (it's a long shot, just a guess) lead to inconsistencies.
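One way to look for such an inconsistency is to cross-check the two paths for the same output. This is only a sketch, assuming spruned's RPC answers gettxout, getblockcount, getblockhash and getblock (with a tx list, as bitcoind does at default verbosity); endpoint, credentials, txid and vout are placeholders:

```python
# Sketch of a cross-check between the Electrum-backed path (gettxout) and the
# p2p-backed path (getblock) for a confirmed, unspent output.
import requests

RPC_URL = "http://127.0.0.1:8332"
RPC_AUTH = ("rpcuser", "rpcpassword")

def rpc(method, *params):
    payload = {"jsonrpc": "1.0", "id": "xcheck", "method": method,
               "params": list(params)}
    body = requests.post(RPC_URL, json=payload, auth=RPC_AUTH, timeout=60).json()
    if body.get("error"):
        raise RuntimeError(body["error"])
    return body["result"]

txid, vout = "<txid of a confirmed, unspent output>", 0

utxo = rpc("gettxout", txid, vout)                        # Electrum-backed answer
if utxo is None:
    print("gettxout does not know this output")
else:
    tip = rpc("getblockcount")
    height = tip - utxo["confirmations"] + 1              # block that confirmed it
    block = rpc("getblock", rpc("getblockhash", height))  # p2p-backed answer
    agree = txid in block.get("tx", [])
    print("backends agree" if agree else "backends disagree", "at height", height)
```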
Also, I don't run the daemons as services (clightning & spruned with systemd scripts), so when one crashes, it stays down and I see it. If you, instead, are using something that restarts them when they stop, this may explain how you end up in this state. The usage is very intensive, and maybe at some point clightning is requesting data from spruned while it is still syncing (because both of them restarted) and… mmhm, weird stuff happens.
My advice is to:
- increase the spruned block cache as much as you can (1-2 GB should be nice to speed up rescans; actually, "as much as you can" is the best option)
- download the latest spruned version released tonight on PyPI, which wraps a more consistent bitcoind API emulation (with error code -1 on generic failures) and should reduce errors in third-party apps (clightning, in this case) and hence rescans.