kops: Protokube fails to mount volumes because they are already mounted

  1. What kops version are you running? The command kops version will display this information.
kops version
Version 1.8.1 (git-94ef202)
  2. What Kubernetes version are you running? kubectl version will print the version if a cluster is running or provide the Kubernetes version specified as a kops flag.
kubectl version
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.7", GitCommit:"b30876a5539f09684ff9fde266fda10b37738c9c", GitTreeState:"clean", BuildDate:"2018-01-16T21:59:57Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
  3. What cloud provider are you using?

AWS

  4. What commands did you run? What is the simplest way to reproduce this issue?

kops create -f cluster.yaml

  5. What happened after the commands executed?

The cluster starts to provision but fails while Protokube is setting up volumes. The full log is:

protokube version 0.1                                                                                                                                                            
I0327 20:34:09.057938   17871 aws_volume.go:72] AWS API Request: ec2metadata/GetMetadata                                                                                         
I0327 20:34:09.060970   17871 aws_volume.go:72] AWS API Request: ec2metadata/GetMetadata                                                                                         
I0327 20:34:09.061784   17871 aws_volume.go:72] AWS API Request: ec2metadata/GetMetadata                                                                                         
I0327 20:34:09.064735   17871 aws_volume.go:72] AWS API Request: ec2/DescribeInstances                                                                                           
I0327 20:34:09.164567   17871 aws_volume.go:72] AWS API Request: ec2/DescribeVolumes                                                                                             
I0327 20:34:09.164590   17871 dnscontroller.go:101] starting DNS controller                                                                                                      
I0327 20:34:09.164631   17871 dnscache.go:75] querying all DNS zones (no cached results)                                                                                         
I0327 20:34:09.164780   17871 route53.go:50] AWS request: route53 ListHostedZones                                                                                                
I0327 20:34:09.294527   17871 volume_mounter.go:254] Trying to mount master volume: "vol-0da35d47f283ecf0f"                                                                      
I0327 20:34:09.294705   17871 aws_volume.go:72] AWS API Request: ec2/AttachVolume                                                                                                
I0327 20:34:09.616691   17871 aws_volume.go:396] AttachVolume request returned {                                                                                                 
  AttachTime: 2018-03-27 20:34:09.6 +0000 UTC,                                                                                                                                   
  Device: "/dev/xvdu",                                                                                                                                                           
  InstanceId: "i-0975b8438670d96ad",                                                                                                                                             
  State: "attaching",                                                                                                                                                            
  VolumeId: "vol-0da35d47f283ecf0f"                                                                                                                                              
}                                                                                                                                                                                
I0327 20:34:09.616892   17871 aws_volume.go:72] AWS API Request: ec2/DescribeVolumes                                                                                             
I0327 20:34:09.729976   17871 volume_mounter.go:254] Trying to mount master volume: "vol-0a6aa23bbe82ef00d"                                                                      
I0327 20:34:09.730102   17871 aws_volume.go:72] AWS API Request: ec2/AttachVolume                                                                                                
I0327 20:34:10.052057   17871 aws_volume.go:396] AttachVolume request returned {                                                                                                 
  AttachTime: 2018-03-27 20:34:10.04 +0000 UTC,                                                                                                                                  
  Device: "/dev/xvdv",                                                                                                                                                           
  InstanceId: "i-0975b8438670d96ad",                                                                                                                                             
  State: "attaching",                                                                                                                                                            
  VolumeId: "vol-0a6aa23bbe82ef00d"                                                                                                                                              
}                                                                                                                                                                                
I0327 20:34:10.052295   17871 aws_volume.go:72] AWS API Request: ec2/DescribeVolumes                                                                                             
I0327 20:34:10.137236   17871 volume_mounter.go:273] Currently attached volumes: [{"ID":"vol-0da35d47f283ecf0f","LocalDevice":"/dev/xvdu","AttachedTo":"","Mountpoint":"","Status":"available","Info":{"Description":"vol-0da35d47f283ecf0f","EtcdClusters":[{"clusterKey":"main","nodeName":"a","nodeNames":["a","c","d"]}]}} {"ID":"vol-0a6aa23bbe82ef00d","LocalDevice":"/dev/xvdv","AttachedTo":"","Mountpoint":"","Status":"available","Info":{"Description":"vol-0a6aa23bbe82ef00d","EtcdClusters":[{"clusterKey":"events","nodeName":"a","nodeNames":["a","c","d"]}]}}]
I0327 20:34:10.137353   17871 volume_mounter.go:58] Master volume "vol-0da35d47f283ecf0f" is attached at "/dev/xvdu"                                                             
I0327 20:34:10.137409   17871 volume_mounter.go:72] Doing safe-format-and-mount of /dev/xvdu to /mnt/master-vol-0da35d47f283ecf0f                                                
I0327 20:34:10.137433   17871 aws_volume.go:318] nvme path not found "/rootfs/dev/disk/by-id/nvme-Amazon_Elastic_Block_Store_vol0da35d47f283ecf0f"                               
I0327 20:34:10.137444   17871 volume_mounter.go:107] Waiting for volume "vol-0da35d47f283ecf0f" to be attached                                                                   
I0327 20:34:11.137695   17871 aws_volume.go:318] nvme path not found "/rootfs/dev/disk/by-id/nvme-Amazon_Elastic_Block_Store_vol0da35d47f283ecf0f"                               
I0327 20:34:11.137724   17871 volume_mounter.go:107] Waiting for volume "vol-0da35d47f283ecf0f" to be attached                                                                   
I0327 20:34:12.137963   17871 aws_volume.go:318] nvme path not found "/rootfs/dev/disk/by-id/nvme-Amazon_Elastic_Block_Store_vol0da35d47f283ecf0f"                               
I0327 20:34:12.137995   17871 volume_mounter.go:107] Waiting for volume "vol-0da35d47f283ecf0f" to be attached                                                                   
I0327 20:34:13.138238   17871 aws_volume.go:318] nvme path not found "/rootfs/dev/disk/by-id/nvme-Amazon_Elastic_Block_Store_vol0da35d47f283ecf0f"                               
I0327 20:34:13.138268   17871 volume_mounter.go:107] Waiting for volume "vol-0da35d47f283ecf0f" to be attached                                                                   
I0327 20:34:14.138491   17871 aws_volume.go:318] nvme path not found "/rootfs/dev/disk/by-id/nvme-Amazon_Elastic_Block_Store_vol0da35d47f283ecf0f"                               
I0327 20:34:14.138520   17871 volume_mounter.go:107] Waiting for volume "vol-0da35d47f283ecf0f" to be attached
I0327 20:34:15.138760   17871 aws_volume.go:318] nvme path not found "/rootfs/dev/disk/by-id/nvme-Amazon_Elastic_Block_Store_vol0da35d47f283ecf0f"
I0327 20:34:15.138789   17871 volume_mounter.go:107] Waiting for volume "vol-0da35d47f283ecf0f" to be attached                                                                   
I0327 20:34:16.139009   17871 aws_volume.go:318] nvme path not found "/rootfs/dev/disk/by-id/nvme-Amazon_Elastic_Block_Store_vol0da35d47f283ecf0f"
I0327 20:34:16.139479   17871 volume_mounter.go:107] Waiting for volume "vol-0da35d47f283ecf0f" to be attached
I0327 20:34:17.139703   17871 aws_volume.go:318] nvme path not found "/rootfs/dev/disk/by-id/nvme-Amazon_Elastic_Block_Store_vol0da35d47f283ecf0f"
I0327 20:34:17.139732   17871 volume_mounter.go:107] Waiting for volume "vol-0da35d47f283ecf0f" to be attached
I0327 20:34:18.139975   17871 aws_volume.go:318] nvme path not found "/rootfs/dev/disk/by-id/nvme-Amazon_Elastic_Block_Store_vol0da35d47f283ecf0f"
I0327 20:34:18.140004   17871 volume_mounter.go:107] Waiting for volume "vol-0da35d47f283ecf0f" to be attached
I0327 20:34:19.140221   17871 aws_volume.go:318] nvme path not found "/rootfs/dev/disk/by-id/nvme-Amazon_Elastic_Block_Store_vol0da35d47f283ecf0f"
I0327 20:34:19.140249   17871 volume_mounter.go:107] Waiting for volume "vol-0da35d47f283ecf0f" to be attached
I0327 20:34:20.140492   17871 aws_volume.go:318] nvme path not found "/rootfs/dev/disk/by-id/nvme-Amazon_Elastic_Block_Store_vol0da35d47f283ecf0f"
I0327 20:34:20.140520   17871 volume_mounter.go:107] Waiting for volume "vol-0da35d47f283ecf0f" to be attached
I0327 20:34:21.140885   17871 volume_mounter.go:110] Found volume "vol-0da35d47f283ecf0f" mounted at device "/dev/xvdu"
I0327 20:34:21.142975   17871 volume_mounter.go:147] Creating mount directory "/rootfs/mnt/master-vol-0da35d47f283ecf0f"
I0327 20:34:21.143147   17871 volume_mounter.go:152] Mounting device "/dev/xvdu" on "/mnt/master-vol-0da35d47f283ecf0f"
I0327 20:34:21.143165   17871 mount_linux.go:366] Checking for issues with fsck on disk: /dev/xvdu
I0327 20:34:21.213260   17871 mount_linux.go:379] `fsck` error fsck from util-linux 2.23.2
fsck.ext2: Bad magic number in super-block while trying to open /dev/xvdu
/dev/xvdu:
The superblock could not be read or does not describe a correct ext2
filesystem.  If the device is valid and it really contains an ext2
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
    e2fsck -b 8193 <device>

I0327 20:34:21.213303   17871 mount_linux.go:384] Attempting to mount disk:  /dev/xvdu /mnt/master-vol-0da35d47f283ecf0f
I0327 20:34:21.343827   17871 mount_linux.go:430] Attempting to determine if disk "/dev/xvdu" is formatted using lsblk with args: ([-n -o FSTYPE /dev/xvdu])
I0327 20:34:21.351350   17871 mount_linux.go:433] Output: "\n"
I0327 20:34:21.351379   17871 mount_linux.go:404] Disk "/dev/xvdu" appears to be unformatted, attempting to format as type: "ext4" with options: [-F /dev/xvdu]
I0327 20:34:22.648419   17871 mount_linux.go:408] Disk successfully formatted (mkfs): ext4 - /dev/xvdu /mnt/master-vol-0da35d47f283ecf0f
I0327 20:34:22.681840   17871 mount_linux.go:168] Detected OS with systemd
W0327 20:34:22.682674   17871 volume_mounter.go:76] unable to mount master volume: "device already mounted at /rootfs/mnt/master-vol-0da35d47f283ecf0f, but is /dev/xvdu and we want /rootfs/dev/xvdu"
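The warning on the last line comes from a sanity check: Protokube runs in a container with the host filesystem mounted at /rootfs, so it expects the mounted device to appear under the chroot-prefixed path. A minimal shell sketch of that comparison, using the paths from this log (the real check is Go code inside Protokube's volume mounter):

```shell
device="/dev/xvdu"            # device the existing mount record reports
expected="/rootfs/dev/xvdu"   # chroot-prefixed device path Protokube wants to see
if [ "$device" != "$expected" ]; then
  echo "device already mounted, but is ${device} and we want ${expected}"
fi
```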
  6. What did you expect to happen?

I expected Protokube to remount the partitions and continue bootstrapping the cluster.

I used this exact same config about two weeks ago to provision a cluster, and that went fine; I'm not sure what changed, if anything.

  7. Please provide your cluster manifest. Execute kops get --name my.example.com -oyaml to display your cluster manifest. You may want to remove your cluster name and other sensitive information.

  8. Please run the commands with the most verbose logging by adding the -v 10 flag. Paste the logs into this report, or in a gist and provide the gist link here.

  9. Anything else we need to know?

If I go into the container and manually umount and remount the path as Protokube expects, it works and finishes bootstrapping the cluster.
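A sketch of that manual workaround, using the mount point and device from the log above. The container lookup and device letter are assumptions; adjust them for your instance:

```shell
# Derive the mount point Protokube uses for this etcd volume.
vol="vol-0da35d47f283ecf0f"
mnt="/rootfs/mnt/master-${vol}"
echo "$mnt"

# Then, inside the protokube container (hypothetical container lookup),
# unmount and remount from the chroot-prefixed device path it expects:
#   docker exec $(docker ps -qf "ancestor=protokube") bash -c \
#     "umount ${mnt} && mount /rootfs/dev/xvdu ${mnt}"
```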

About this issue

  • State: closed
  • Created 6 years ago
  • Reactions: 11
  • Comments: 40 (9 by maintainers)

Most upvoted comments

For everybody who looks for Waiting for volume "vol-......." to be attached (like I was): this might have to do with m5 (or other 5th-generation) AWS instances, as mentioned in #4844 or k8s #49926. Reverting to m4 instances might do the trick.
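For reference, the "nvme path not found" lines earlier in this report show how Protokube derives the NVMe by-id symlink it probes: the volume ID with its dash removed, appended to a fixed prefix. A sketch of that derivation (string manipulation only; whether the symlink actually exists depends on the instance generation):

```shell
vol="vol-0da35d47f283ecf0f"
# On Nitro/5th-gen instances, EBS volumes appear as NVMe devices; the
# by-id symlink name embeds the volume ID without its dash.
byid="/rootfs/dev/disk/by-id/nvme-Amazon_Elastic_Block_Store_$(echo "$vol" | tr -d '-')"
echo "$byid"
```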

This is still an issue, as far as I can tell.

We have seen the same, or a very similar, issue. kops 1.9, kube 1.9.3, Debian Stretch base image:

May 12 11:04:24 ip-172-22-39-194 docker[4410]: I0512 11:04:24.000871    4441 volume_mounter.go:255] Trying to mount master volume: "vol-0fa0e257fc6380a0a"
May 12 11:04:24 ip-172-22-39-194 docker[4410]:   VolumeId: "vol-0fa0e257fc6380a0a"
May 12 11:04:24 ip-172-22-39-194 docker[4410]: I0512 11:04:24.303251    4441 volume_mounter.go:274] Currently attached volumes: [{"ID":"vol-01ed0d8ca320e4288","LocalDevice":"/dev/xvdu","AttachedTo":"","Mountpoint":"","Status":"available","Info":{"Description":"vol-01ed0d8ca320e4288","EtcdClusters":[{"clusterKey":"main","nodeName":"eu-west-1a","nodeNames":["eu-west-1a","eu-west-1b","eu-west-1c"]}]}} {"ID":"vol-0fa0e257fc6380a0a","LocalDevice":"/dev/xvdv","AttachedTo":"","Mountpoint":"","Status":"available","Info":{"Description":"vol-0fa0e257fc6380a0a","EtcdClusters":[{"clusterKey":"events","nodeName":"eu-west-1a","nodeNames":["eu-west-1a","eu-west-1b","eu-west-1c"]}]}}]
May 12 11:04:25 ip-172-22-39-194 docker[4410]: I0512 11:04:25.485833    4441 volume_mounter.go:58] Master volume "vol-0fa0e257fc6380a0a" is attached at "/dev/xvdv"
May 12 11:04:25 ip-172-22-39-194 docker[4410]: I0512 11:04:25.485844    4441 volume_mounter.go:72] Doing safe-format-and-mount of /dev/xvdv to /mnt/master-vol-0fa0e257fc6380a0a
May 12 11:04:25 ip-172-22-39-194 docker[4410]: I0512 11:04:25.485874    4441 volume_mounter.go:107] Waiting for volume "vol-0fa0e257fc6380a0a" to be attached
May 12 11:04:26 ip-172-22-39-194 docker[4410]: I0512 11:04:26.486055    4441 volume_mounter.go:107] Waiting for volume "vol-0fa0e257fc6380a0a" to be attached
...

The "Waiting for volume" messages continued until after a reboot. At the second boot the AWS volume is already attached and things work. This is the etcd-events volume; there are no Waiting messages for etcd-main, which seems to have attached successfully. It looks like the volume attached successfully, but something failed to either accept or notice it.

Believed fixed by #4849, i.e. will be in 1.9.0-beta.1

@ctbarrett we've experienced exactly the same issue with the latest stable CoreOS. The following one-liner does the trick, but we would prefer a proper fix: docker exec $(docker ps -qf "ancestor=protokube:1.8.0") bash -c 'umount /rootfs/mnt/*'

Ran across this today on one of our clusters:

  • kops 1.13.0
  • kubernetes 1.12.10
  • m4 instances

Switching to Debian Stretch did not solve the issue for us. Protokube still tried to mount volumes that were already mounted.