multi-scrobbler: Last.fm API calls hang forever after some time

Hello,

I’m running multi-scrobbler in a Dockerized environment, with Last.fm as the source and Maloja as the target.

When I boot up the app it syncs the backlog successfully, but the Last.fm source gets stuck at “polling” after a while.

[screenshot]

In this screenshot, I had restarted the app while I was playing a song, and my Last.fm account was visible as “scrobbling”.

After a while, it looked like this:

[screenshot]

Then it gets stuck at polling.

As you can see in the first screenshot, when I try to restart the source, it retries and logs: [Sources] [Lastfm - myLastFm] Could not stop polling! Or polling signal was lost :(

[screenshot]

In this state, when I restart the container, it recognizes the backlog and syncs it accordingly, then gets stuck at polling again.

I’m accessing the instance at http://ip:port, in case that matters.

So I believe my issue is with polling, and the debug log hasn’t helped me figure it out.

Here’s my debug log, in case it helps: https://0x0.st/HDuS.log

What am I missing? Could you please help?

Thanks in advance!

Edit: After a while, these log lines keep appearing, but nothing is caught or scrobbled, even though I’ve been listening to music constantly during this period:

2024-01-31T00:53:53+03:00 verbose : [Heartbeat] [Clients] Checked Dead letter queue for 1 clients.
2024-01-31T00:53:53+03:00 verbose : [Heartbeat] [Clients] Starting check...
2024-01-31T00:53:53+03:00 verbose : [Heartbeat] [Sources] Checked 1 sources for restart signals.
2024-01-31T00:53:53+03:00 verbose : [Heartbeat] [Sources] Starting check...

About this issue

  • State: closed
  • Created 5 months ago
  • Reactions: 1
  • Comments: 20 (8 by maintainers)


Most upvoted comments

Lastfm timeout fix released in 0.6.5

On second thought, I agree; in a product I would not implement this. I did think about injecting arguments into the npm script (something like node build/index.js $extraParams, where the extra params would be --dns-result-order=ipv4first), but you’re right, it would add unnecessary complexity.
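For reference, the in-code equivalent of that flag would be roughly the sketch below (assuming a Node version that has dns.setDefaultResultOrder, i.e. 16.4+); it is only an illustration of the idea, not something that is actually in MS:

    // Sketch: prefer IPv4 results for DNS lookups at process startup.
    // Roughly equivalent to launching node with --dns-result-order=ipv4first.
    import dns from 'node:dns';

    dns.setDefaultResultOrder('ipv4first');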

I’m also not sure it would even work. musl ignores basic settings like this in low-level calls, requiring IPv6 to be completely disabled in the container in order to get around it. There are countless other similar symptoms rooted in other parts of musl (https://github.com/nodejs/docker-node/issues/602, https://github.com/gliderlabs/docker-alpine/issues/539, https://github.com/FoxxMD/multi-scrobbler/issues/88), to the point of it being acknowledged as a known caveat.

If that is the case, then indeed it would be futile.

I’m going to call this particular issue (Last.fm requests not having timeouts) solved, and address the DNS problem by changing distro bases eventually. If you don’t mind, I’ll ping you when I get around to building those images, since you have a reproducible environment for this and can test reliably.

Yup, since we found the caveat, I believe my original issue can be marked as solved.

Also, sure! Feel free to ping me anytime; I’ll be more than happy to test and help when needed.

Also, thank you for staying with me, actually digging through the log files, and keeping up with this 🙏

Until a new release comes out, I’ll stick with the experimental tag, with IPv6 disabled at the host kernel level.

Please feel free to close the issue as you see fit for your planning.

Glad to hear this is helping resolve the issue.

this is a high quality VPS

If you have the ability to run MS from source, you could try that and see if there is a difference in the frequency of timeouts. Other users in the past have had DNS issues that may or may not have been caused by the OS used in the Docker image, Alpine.

whatever you did in develop tag earlier…I believe it consumes the history as true source instead?

The changes in develop actually shouldn’t have done anything to help with this issue. The history vs. player stuff is purely presentational: MS was already only using LFM history and the player was superficial, but now it’s indicated in the UI and logs instead of happening behind the scenes.

every time I switched my tag to another one, the script asked for re-authentication.

This would occur when using Docker if you are only using ENV-based configuration and have not mounted a volume into the container to persist the created credentials.
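For example, a minimal compose sketch; I’m assuming /config as the in-container config directory here, so adjust the host path (and keep your existing ENV vars, ports, etc.) to match your setup:

    services:
      multi-scrobbler:
        image: foxxmd/multi-scrobbler
        volumes:
          # persist generated credentials/config between container recreations
          - ./ms-config:/config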

Ahh, just after I posted this I got an error 😢 (I’ll definitely increase my retry limit lol). However, since it’s just the status, I believe it would not cause any scrobbles to be missed, right?

Yes, it should not miss any scrobbles, since polling for the source was restarted. You can increase the number of request retries and/or poll retries using file-based configs (either config.json or lastfm.json) via maxPollRetries and maxRequestRetries.
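For example, a minimal sketch of what that could look like in lastfm.json; the retry values here are arbitrary, and the exact nesting may differ from your config, so check the configuration docs for where these options live:

    {
      "name": "myLastFm",
      "maxPollRetries": 5,
      "maxRequestRetries": 3
    }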