rclone: getcwd syscall does not work in rclone mounted drives

What is the problem you are having with rclone?

Programs that use the getcwd() syscall fail when inside an rclone mount with ENOENT.

For example, I cd into an rclone-mounted drive (cd ~/dropbox/Documents) and then run something like node -p "process.cwd()", which fails with ENOENT.

I even wrote a basic C program that does nothing but call getcwd():

#include <unistd.h>
#include <stdio.h>
#include <limits.h>

int main() {
  char cwd[PATH_MAX];
  if (getcwd(cwd, sizeof(cwd)) != NULL) {
    printf("%s\n", cwd);
  } else {
    perror("getcwd() error");
    return 1;
  }
  return 0;
}

That also fails, which confirms it is the syscall itself failing and not something in Node or Python.

NOTE: This only fails in a subdirectory, not in the mount directory itself: ~/mount/ is fine, but ~/mount/subdirectory/ is not.

What is your rclone version (output from rclone version)

1.51.0

Which OS you are using and how many bits (eg Windows 7, 64 bit)

Ubuntu 18.04 and 19.10 both fail.

Which cloud storage system are you using? (eg Google Drive)

Multiple: it fails for WebDAV (Nextcloud), Dropbox, and Google Drive.

The command you were trying to run (eg rclone copy /tmp remote:tmp)

The WebDAV (nextcloud) is mounted as:

/usr/bin/rclone mount nextcloud:/ /home/jamesernator/nextcloud

The Google Drive is mounted as:

/usr/bin/rclone mount googleDrive:/ /home/jamesernator/googleDrive --dir-cache-time 30s --cache-dir /home/jamesernator/.rclone-mount-cache/ --vfs-cache-max-age 720h0m0s --vfs-cache-mode full --vfs-cache-max-size 2G

And dropbox is mounted using a cache remote with:

/usr/bin/rclone mount dropbox-cache:/ /home/jamesernator/dropbox

NOTE: All are mounted automatically through systemd services; however, mounting manually makes no difference.
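For reference, a minimal systemd user unit along these lines (the unit name, paths, and Type=notify are illustrative assumptions, not taken from the actual setup) might look like:

```ini
# ~/.config/systemd/user/rclone-nextcloud.service (illustrative)
[Unit]
Description=rclone mount for nextcloud
After=network-online.target

[Service]
# rclone mount can notify systemd when the mount is ready
Type=notify
ExecStart=/usr/bin/rclone mount nextcloud:/ %h/nextcloud
ExecStop=/bin/fusermount -u %h/nextcloud
Restart=on-failure

[Install]
WantedBy=default.target
```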

A log from the command with the -vv flag (eg output from rclone -vv copy /tmp remote:tmp)

This log contains the results of running rclone mount -vv "nextcloud:/" ~/nextcloud-test/ 2> out.log, followed by these two commands in zsh (in a separate shell):

cd nextcloud-test/Documents
node -e 'process.cwd()'

out.log

About this issue

  • Original URL
  • State: closed
  • Created 4 years ago
  • Reactions: 2
  • Comments: 22 (17 by maintainers)

Most upvoted comments

A massive thank you to @tv42 who has worked out what is going on! See bazil/fuse#250 for the full explanation.

It turned out to be a problem in rclone: it was sending different nodes with the same information to the FUSE layer, which caused FUSE to believe the directory had been deleted and re-created. This was fixed by caching the actual node for an item and returning that.
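The shape of that fix can be sketched like this (a simplified illustration in Go, not rclone's actual types or code): instead of constructing a fresh node on every lookup, which made the kernel conclude the old directory was gone, cache the node the first time and hand back the identical one afterwards.

```go
package main

import (
	"fmt"
	"sync"
)

// Node stands in for a FUSE filesystem node; the type is
// illustrative, not rclone's real Dir/File implementation.
type Node struct {
	path string
}

// nodeCache returns the same *Node for the same path, so repeated
// lookups present one stable node identity to the FUSE layer.
type nodeCache struct {
	mu    sync.Mutex
	nodes map[string]*Node
}

func newNodeCache() *nodeCache {
	return &nodeCache{nodes: make(map[string]*Node)}
}

func (c *nodeCache) lookup(path string) *Node {
	c.mu.Lock()
	defer c.mu.Unlock()
	if n, ok := c.nodes[path]; ok {
		return n // same node as last time, directory still "exists"
	}
	n := &Node{path: path}
	c.nodes[path] = n
	return n
}

func main() {
	c := newNodeCache()
	a := c.lookup("Documents")
	b := c.lookup("Documents")
	fmt.Println(a == b) // → true: identical node both times
}
```

Without the cache, each lookup would allocate a new node, and two lookups of "Documents" would compare unequal, which is the mismatch that made getcwd() fail.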

Please have a go with this and tell me what you think.

https://beta.rclone.org/branch/v1.51.0-264-g2696e369-fix-4104-cwd-disappears-beta/ (uploaded in 15-30 mins)

The fix for this went into 1.52 so it should be all good now 😃

I tried it and it does indeed appear to be fixed

Superfast as always… 😛

Hadn’t gotten a chance to try it yet. Just set it on both my active mounts.

I’ve merged this to master now which means it will be in the latest beta in 15-30 mins and released in v1.52

Will do. I assume this is still the best way to reproduce it: #2157 (comment)?

You need to add --attr-timeout 0 to the mount command. I tried it and it does indeed appear to be fixed 😃

I haven’t figured out what is going on yet…

I submitted https://github.com/bazil/fuse/issues/250 in the hope that someone more knowledgeable about FUSE internals might help 😃