datadog-agent: Error while processing transaction: error while sending transaction

I was able to build the agent from master. After running ./bin/agent/agent start -c ./bin/agent/dist/ I'm getting the following weird error message:

2018-03-30 19:57:38 CEST | ERROR | (worker.go:135 in process) | Error while processing transaction: error while sending transaction, rescheduling it: Post https://6-0-0-app.agent.datadoghq.com/api/v1/check_run?api_key=*************************54d58: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
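To isolate this kind of failure, a minimal connectivity probe can help. The sketch below is my own, not part of the agent: it issues the same style of POST to the intake URL from the log, with an assumed 20-second timeout and a REDACTED placeholder for the API key. Any HTTP response, even a 403, proves the endpoint is reachable; a Client.Timeout error reproduces the problem outside the agent.

    package main

    import (
        "fmt"
        "net/http"
        "strings"
        "time"
    )

    func main() {
        // Intake URL taken from the error log; replace REDACTED with a real key.
        url := "https://6-0-0-app.agent.datadoghq.com/api/v1/check_run?api_key=REDACTED"

        // 20s is an assumed timeout for this probe, not the agent's own setting.
        client := &http.Client{Timeout: 20 * time.Second}

        resp, err := client.Post(url, "application/json", strings.NewReader("[]"))
        if err != nil {
            // A "Client.Timeout exceeded" here reproduces the worker.go error.
            fmt.Println("intake unreachable:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("intake reachable, status:", resp.Status)
    }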

There is also no connection visible on the dashboard. One other oddity: the agent reports Agent 6.0.0 - Commit: - Serialization version: even though master is on 6.1.1.

Can anyone help debug what is going on?

About this issue

  • State: closed
  • Created 6 years ago
  • Reactions: 8
  • Comments: 37 (8 by maintainers)

Most upvoted comments

I also face the same issue:

2021-05-11 22:51:00 UTC | PROCESS | ERROR | (pkg/forwarder/worker.go:178 in process) | Error while processing transaction: error while sending transaction, rescheduling it: Post "https://orchestrator.datadoghq.com/api/v1/orchestrator": EOF

@ocervell We’re seeing this in our Kubernetes clusters as well. Were you able to fix this? We get the following error, which originates in worker.go, not domain_forwarder.go. If other users are seeing similar logs, this may be a problem with the retry routine itself (or at least with how users have it configured).

2021-05-17 17:55:25 UTC | PROCESS | ERROR | (pkg/forwarder/worker.go:174 in process) | Too many errors for endpoint 'https://process.datadoghq.com/api/v1/container': retrying later
2021-05-17 17:55:36 UTC | PROCESS | ERROR | (pkg/forwarder/domain_forwarder.go:133 in retryTransactions) | Dropped 3 transactions in this retry attempt: 0 for exceeding the retry queue payloads size limit of 15728640, 3 because the workers are too busy
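For readers parsing that domain_forwarder.go line: the two counters separate transactions dropped for exceeding the retry-queue payload budget (15728640 bytes, i.e. 15 MiB) from transactions dropped because no worker was free to take them. The toy model below is a sketch of the reported accounting, not the actual pkg/forwarder code; with zero free workers it reproduces the counts from the log.

    package main

    import "fmt"

    // transaction is a stand-in for a forwarder transaction; only its
    // payload size matters for this sketch.
    type transaction struct{ size int }

    // requeue re-adds failed transactions to a bounded retry queue, counting
    // drops the same way the log line does: over the payload budget, or no
    // free worker to accept the retry.
    func requeue(failed []transaction, queued, budget, freeWorkers int) (kept, droppedSize, droppedBusy int) {
        for _, t := range failed {
            switch {
            case queued+t.size > budget:
                droppedSize++ // "exceeding the retry queue payloads size limit"
            case freeWorkers == 0:
                droppedBusy++ // "the workers are too busy"
            default:
                queued += t.size
                freeWorkers--
                kept++
            }
        }
        return
    }

    func main() {
        // Three 6 MiB payloads against a 15 MiB budget with zero free workers
        // yields the log's counts: 0 dropped for size, 3 for busy workers.
        failed := []transaction{{6 << 20}, {6 << 20}, {6 << 20}}
        kept, bySize, byBusy := requeue(failed, 0, 15<<20, 0)
        fmt.Printf("requeued=%d droppedSize=%d droppedBusy=%d\n", kept, bySize, byBusy)
    }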

As I mentioned, this has become a bit of a “collector” issue for a bunch of different problems with similar error messages. I think the internal tracking will be sufficient, and there’s no need to re-open this issue.

@joaquin386 @ocervell Were you able to fix this? I’m getting the same errors on ECS (EC2).

Edit: This was fixed. I had set proxy environment variables, which caused the timeouts.
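That resolution is consistent with how Go HTTP clients behave: the standard transport resolves proxies from HTTP_PROXY/HTTPS_PROXY/NO_PROXY via http.ProxyFromEnvironment, so a stale or wrong proxy variable silently reroutes intake traffic and shows up as the timeouts above. The sketch below (assuming standard environment-based resolution; the agent also has its own proxy settings in datadog.yaml) prints which proxy, if any, a request to the intake would use.

    package main

    import (
        "fmt"
        "net/http"
    )

    func main() {
        // Build a request shaped like the forwarder's POST; the URL here is
        // illustrative, not the exact endpoint from the logs above.
        req, err := http.NewRequest("POST", "https://app.datadoghq.com/api/v1/check_run", nil)
        if err != nil {
            panic(err)
        }

        // ProxyFromEnvironment consults HTTP_PROXY/HTTPS_PROXY/NO_PROXY, the
        // same variables the commenter had set.
        proxyURL, err := http.ProxyFromEnvironment(req)
        if err != nil {
            fmt.Println("proxy configuration error:", err)
            return
        }
        if proxyURL != nil {
            fmt.Println("requests to the intake would go through:", proxyURL)
        } else {
            fmt.Println("no proxy configured for this request")
        }
    }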