rexray: rexray/s3fs:0.11.2 could not create /data directory in s3.

Summary

rexray/s3fs:0.11.2 could not create /data directory in s3.

Bug Reports

I am using rexray/s3fs as a Docker plugin. When I tried 0.11.1, it worked well, but the latest version of rexray/s3fs does not. I checked the version: it is 0.11.2, updated 3 days ago. With this version the plugin creates the S3 bucket, but it cannot create /data in the bucket and cannot remount it.
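As a possible workaround (a sketch only, not something verified in this report), the plugin tag can be pinned to the last version that worked instead of latest; the region, endpoint, and key values are the same placeholders used in the call sequence below.

$ docker plugin install rexray/s3fs:0.11.1 \
>   --grant-all-permissions \
>   S3FS_REGION=ap-southeast-1 \
>   S3FS_ENDPOINT=s3.ap-southeast-1.amazonaws.com \
>   S3FS_ACCESSKEY=access \
>   S3FS_SECRETKEY=secret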

Version

docker plugin: rexray/s3fs:0.11.2

Call sequence

$ docker plugin install rexray/s3fs \
>   --grant-all-permissions \
>   S3FS_REGION=ap-southeast-1 \
>   S3FS_ENDPOINT=s3.ap-southeast-1.amazonaws.com \
>   S3FS_ACCESSKEY=access \
>   S3FS_SECRETKEY=secret+

latest: Pulling from rexray/s3fs
abc12345d458: Download complete
Digest: sha256:b3a2d374d8dedf8648679d5c8c69f5cb2c57e2c27a60678a1c5164f4d137cd17
Status: Downloaded newer image for rexray/s3fs:latest
Installed plugin rexray/s3fs

$ docker volume create -d rexray/s3fs --name rex-t4
rex-t4

$ docker run -it --rm -v rex-t4 alpine
# echo hello > /rex-t4/world.txt
# exit

$ docker run -it --rm -v rex-t4:/rex alpine
/run/torcx/bin/docker: Error response from daemon: error while mounting volume '/var/lib/docker/plugins/9d4afe70f797cb98f9576c8ede30ce794a2b436c7237c29865bec261a2f8f566/rootfs': VolumeDriver.Mount: docker-legacy: Mount: rex-t4: failed: error mounting s3fs bucket.
ERRO[0001] error waiting for container: context canceled

Configuration Files

In the Docker plugin directory:

rexray:
  loglevel: warn
libstorage:
  service: s3fs
  integration:
    volume:
      operations:
        create:
          default:
            fsType: ext4
        mount:
          preempt: false
  server:
    services:
      s3fs:
        driver: s3fs
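If these settings need to change after installation, one option (a sketch; the plugin must be disabled first, and the variable names are the same settable keys used at install time) is docker plugin set:

$ docker plugin disable rexray/s3fs
$ docker plugin set rexray/s3fs \
>   S3FS_REGION=ap-southeast-1 \
>   S3FS_ENDPOINT=s3.ap-southeast-1.amazonaws.com
$ docker plugin enable rexray/s3fs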

Logs

/run/torcx/bin/docker: Error response from daemon: error while mounting volume '/var/lib/docker/plugins/9d4afe70f797cb98f9576c8ede30ce794a2b436c7237c29865bec261a2f8f566/rootfs': VolumeDriver.Mount: docker-legacy: Mount: rex-t4: failed: error mounting s3fs bucket.
ERRO[0001] error waiting for container: context canceled.
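On a systemd host, the managed plugin's own output (including the underlying s3fs error) is written to the Docker daemon log, which may give more detail than the client-side message above (a sketch; the unit name is assumed):

$ sudo journalctl -u docker | grep -i s3fs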

Plugin inspect

gist

Service Log

gist

About this issue

  • State: open
  • Created 6 years ago
  • Comments: 18 (6 by maintainers)

Most upvoted comments

Hello,

Having the same issue here.

With the rexray/s3fs:0.11.4 (latest as of February 2019) stand-alone Docker plugin, the S3 bucket is not mounted. Instead the mountpoint is “local”, even though inspecting the volume shows the correct s3fs driver.

Not a credentials problem.

Mounting the bucket on the host with the same config and mount flags works.
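For reference, a host-side mount along those lines might look like the following (a sketch only; the bucket name, mount point, and credentials file are assumptions, not taken from the commenter's setup):

# s3fs-fuse credentials file in ACCESSKEY:SECRETKEY format
$ echo 'access:secret' > ~/.passwd-s3fs && chmod 600 ~/.passwd-s3fs

# mount the bucket directly on the host
$ mkdir -p /mnt/rex-t4
$ s3fs rex-t4 /mnt/rex-t4 \
>   -o passwd_file=${HOME}/.passwd-s3fs \
>   -o url=https://s3.ap-southeast-1.amazonaws.com \
>   -o endpoint=ap-southeast-1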

I believe there is some hidden state with respect to volumes, where having multiple instances of the plugin starts to cause the bucket-mount errors.

Multiple versions or instances of the plugin can be created either via docker plugin install --alias or docker plugin rm/install.

Messages such as Error response from daemon: plugin rexray/s3fs:latest is in use when attempting to disable plugins make me suspect that there is volume state not represented in docker volume ls.
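When that happens, it may help to check which volumes still reference the plugin and, if necessary, force-disable it (a sketch using standard Docker CLI flags; adjust the plugin tag to whatever docker plugin ls reports):

$ docker volume ls --filter driver=rexray/s3fs:latest
$ docker plugin disable --force rexray/s3fs:latest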

It seems to be especially problematic when the different versions use the same S3 account, because they then each want volumes with the same name.

Avoiding --alias seems to help, and restarting the Docker engine after docker plugin rm seems to reset this hidden state.
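A minimal reset sequence along those lines might be (a sketch; assumes a systemd-managed Docker engine):

$ docker plugin disable rexray/s3fs:latest
$ docker plugin rm rexray/s3fs:latest
$ sudo systemctl restart docker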

Yes, the latest version did not work for creating a new volume and mounting it. I am still using rexray/s3fs:0.11.1.

I attached a scenario for the latest version of rexray/s3fs.

  • install latest version of rexray/s3fs
  • check docker plugins
  • create volume with rexray/s3fs
  • check inspect of created volume
  • run docker with volume name and volume driver
  • check the mounted volume
core@core-01 ~ $ docker plugin install rexray/s3fs \
>   --grant-all-permissions \
>   S3FS_REGION=ap-southeast-1 \
>   S3FS_ACCESSKEY=*** \
>   S3FS_SECRETKEY=***

latest: Pulling from rexray/s3fs
b5ce7aed9a2b: Download complete
Digest: sha256:f06a8d9e64e6d616298646082c33252703af807b274288643e9668789d06cd1e
Status: Downloaded newer image for rexray/s3fs:latest
Installed plugin rexray/s3fs

core@core-01 ~ $ docker plugin ls
ID                  NAME                 DESCRIPTION                                     ENABLED
0ffc2ee7ad1d        rexray/s3fs:latest   REX-Ray FUSE Driver for Amazon Simple Storag…   true

core@core-01 ~ $ docker volume create -d rexray/s3fs rex-data-test-t1
rex-data-test-t1

core@core-01 ~ $ docker volume inspect rex-data-test-t1
[
    {
        "CreatedAt": "0001-01-01T00:00:00Z",
        "Driver": "rexray/s3fs:latest",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/plugins/0ffc2ee7ad1d3bb91c2d9bd4066322af1662fdc18129fff5b8553469c98613c8/rootfs",
        "Name": "rex-data-test-t1",
        "Options": {},
        "Scope": "global",
        "Status": {
            "availabilityZone": "",
            "fields": null,
            "iops": 0,
            "name": "rex-data-test-t1",
            "server": "s3fs",
            "service": "s3fs",
            "size": 0,
            "type": ""
        }
    }
]

core@core-01 ~ $ docker run --rm -it -v rex-data-test-t1:/data --volume-driver=rexray/s3fs python:3.6-alpine sh
Unable to find image 'python:3.6-alpine' locally
3.6-alpine: Pulling from library/python
8e3ba11ec2a2: Pull complete
4001a9c615cb: Pull complete
b686bb33394f: Pull complete
a81620f6cf07: Pull complete
3a84f1b6f0b0: Pull complete
Digest: sha256:13cef553ebb01de55dd9f62285a9f98ae1c194d36952ff20153716bccabea316
Status: Downloaded newer image for python:3.6-alpine

/ # df -h
Filesystem                Size      Used Available Use% Mounted on
overlay                  15.6G    963.7M     13.8G   6% /
tmpfs                    64.0M         0     64.0M   0% /dev
tmpfs                   998.2M         0    998.2M   0% /sys/fs/cgroup
/dev/sda9                15.6G    963.7M     13.8G   6% /data
/dev/sda9                15.6G    963.7M     13.8G   6% /etc/resolv.conf
/dev/sda9                15.6G    963.7M     13.8G   6% /etc/hostname
/dev/sda9                15.6G    963.7M     13.8G   6% /etc/hosts
shm                      64.0M         0     64.0M   0% /dev/shm
tmpfs                    64.0M         0     64.0M   0% /proc/kcore
tmpfs                    64.0M         0     64.0M   0% /proc/latency_stats
tmpfs                    64.0M         0     64.0M   0% /proc/timer_list
tmpfs                    64.0M         0     64.0M   0% /proc/sched_debug
tmpfs                   998.2M         0    998.2M   0% /proc/scsi
tmpfs                   998.2M         0    998.2M   0% /sys/firmware
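For comparison, when the bucket is actually mounted, /data should be backed by an s3fs FUSE filesystem rather than the host's root device (/dev/sda9 above). Two quick checks (a sketch; the first runs inside the container, the second on the host):

# inside the container: /data should not share a device with /
/ # df -h /data

# on the host: a working plugin mount appears as an s3fs FUSE filesystem
$ mount | grep s3fs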