concourse: Clean up noisy 'failed-to-transition' log during volume GC

Bug Report

Nov 03 20:09:38 somewhere.io docker[3779]: {"timestamp":"1509739778.442883492","source":"atc","message":"atc.volume-collector.run.orphaned-volumes.mark-created-as-destroying.failed-to-transition","log_level":2,"data":{"error":"volume cannot be destroyed as children are present","session":"26.14766.1.3","volume":"480df3bf-97f8-4b3b-6b15-464eea436eb5","worker":"primaryworker"}}

The work dir is btrfs-based.
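
Since the children-present case is expected whenever child volumes are still in use, one way to quiet this log would be for the volume collector to treat that specific error as a benign skip (to be retried on a later GC tick) rather than as an error-level event. The following is a minimal, self-contained Go sketch of that idea; the names errVolumeHasChildren, markCreatedAsDestroying, and collectOrphanedVolume are illustrative placeholders, not the actual atc identifiers.

    package main

    import (
        "errors"
        "fmt"
    )

    // errVolumeHasChildren is a stand-in for the condition in the log above
    // ("volume cannot be destroyed as children are present"); the real error
    // value inside the atc may be named and detected differently.
    var errVolumeHasChildren = errors.New("volume cannot be destroyed as children are present")

    // markCreatedAsDestroying stands in for the real state transition; here it
    // always reports the children-present condition so the logging path runs.
    func markCreatedAsDestroying(handle string) error {
        return errVolumeHasChildren
    }

    // collectOrphanedVolume logs the children-present case quietly and keeps
    // error-level logging for anything unexpected.
    func collectOrphanedVolume(handle string) {
        err := markCreatedAsDestroying(handle)
        if err == nil {
            return
        }

        if errors.Is(err, errVolumeHasChildren) {
            // Expected while child volumes are in use: the parent will be
            // retried on a later GC tick, so this is debug noise at most.
            fmt.Printf("debug: skipping volume %s: %v\n", handle, err)
            return
        }

        // Anything else is a genuine failure worth surfacing loudly.
        fmt.Printf("error: failed-to-transition: volume %s: %v\n", handle, err)
    }

    func main() {
        collectOrphanedVolume("480df3bf-97f8-4b3b-6b15-464eea436eb5")
    }

In the real collector the distinction would of course be made against the concrete error value returned by the database layer; the sketch only illustrates splitting the logging path.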

The following can also be handy:

  • Concourse version: 3.6.0
  • Deployment type (BOSH/Docker/binary): binary
  • Infrastructure/IaaS:
  • Browser (if applicable):
  • Did this used to work? yes

About this issue

  • Original URL
  • State: closed
  • Created 7 years ago
  • Reactions: 5
  • Comments: 16 (8 by maintainers)

Most upvoted comments

Also seeing this on our 3.8.0 k8s deployment:

{"timestamp":"1513025322.749217510","source":"atc","message":"atc.volume-collector.run.orphaned-volumes.mark-created-as-destroying.failed-to-transition","log_level":2,"data":{"error":"volume cannot be destroyed as children are present","session":"64.436.1.27","volume":"0d000b92-4edf-488e-67c6-c58f6d3dd6c3","worker":"concourse-worker-2"}}

Any update on this issue? I’m trying to diagnose volume inflation across the 8 workers in our Concourse instance: the volume count never goes down and keeps climbing on every worker, so I’m constantly having to retire workers and nuke the directory they use on the filesystem. I’m now digging through the database rows to see if I can find any correlation.
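
For poking at the rows directly, a summary of volume counts per worker and state is a reasonable starting point. Below is a small Go sketch using database/sql against the Concourse Postgres database; the connection string is an example, and the volumes table column names (worker_name, state) are assumptions that may differ between Concourse versions.

    package main

    import (
        "database/sql"
        "fmt"
        "log"

        _ "github.com/lib/pq" // Postgres driver; Concourse stores its state in Postgres
    )

    func main() {
        // Example connection string; point this at your own atc database.
        db, err := sql.Open("postgres", "postgres://concourse:password@localhost/atc?sslmode=disable")
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        // worker_name and state are assumed column names on the volumes table.
        rows, err := db.Query(`
            SELECT worker_name, state, COUNT(*)
            FROM volumes
            GROUP BY worker_name, state
            ORDER BY worker_name, state`)
        if err != nil {
            log.Fatal(err)
        }
        defer rows.Close()

        for rows.Next() {
            var worker, state string
            var count int
            if err := rows.Scan(&worker, &state, &count); err != nil {
                log.Fatal(err)
            }
            fmt.Printf("%s\t%s\t%d\n", worker, state, count)
        }
        if err := rows.Err(); err != nil {
            log.Fatal(err)
        }
    }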

Any relation to this? https://github.com/concourse/concourse/issues/1419

@xtremerui we’re on 3.9 and see log messages like https://github.com/concourse/concourse/issues/1780#issuecomment-350853575 every 30 seconds. We’ll let you know once we upgrade.