salt: Forced remount because options changed when no options changed (2014.7 regression)

Running a highstate on minions with some NFS mounts results in the mount being remounted every time. This did not occur under 2014.1

----------
[INFO    ] Completed state [/nfsmnt] at time 17:21:20.145018
[INFO    ] Running state [/nfsmnt] at time 17:21:20.146726
[INFO    ] Executing state mount.mounted for /nfsmnt
[INFO    ] Executing command 'mount -l' in directory '/root'
[INFO    ] Executing command 'mount -l' in directory '/root'
[INFO    ] Executing command 'mount -o rw,tcp,bg,hard,intr,remount -t nfshost:/nfsmnt /nfsmnt ' in directory '/root'
[INFO    ] {'umount': 'Forced remount because options changed'}
[INFO    ] Completed state [/nfsmnt] at time 17:21:20.267764
...
          ID: /nfsmnt
    Function: mount.mounted
      Result: True
     Comment:
     Started: 10:04:16.078806
    Duration: 68.802 ms
     Changes:
              ----------
              umount:
                  Forced remount because options changed

Running mount -l shows the following:

...
nfshost:/nfsmnt on /nfsmnt type nfs (rw,remount,tcp,bg,hard,intr,addr=x.x.x.x)

I can only assume it breaks because of the addr option (which appears to be filled in automatically by the OS; it was never specified manually as a mount option) or because of the option ordering.
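
A hedged illustration of that hypothesis (this is not Salt's actual code): if the live option string from `mount -l` were compared against the requested options as whole strings or ordered lists, the kernel-added `addr=` option and the reordering alone would make them look "changed", while a set-membership comparison that ignores kernel-added options would not.

```python
# Illustration only, not Salt's implementation.
requested = "rw,tcp,bg,hard,intr"                       # opts from the state
active = "rw,remount,tcp,bg,hard,intr,addr=x.x.x.x"     # as mount -l reports it

# Naive whole-string comparison: looks "changed" even though every
# requested option is active.
print(requested == active)  # False

# Order-insensitive check that tolerates kernel-added options:
print(set(requested.split(",")) <= set(active.split(",")))  # True
```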

The mount.mounted state looks as follows:

/nfsmnt:
  mount.mounted:
    - device: nfshost:/nfsmnt
    - fstype: nfs
    - opts: rw,tcp,bg,hard,intr
# salt-call --versions-report
           Salt: 2014.7.0
         Python: 2.6.8 (unknown, Nov  7 2012, 14:47:45)
         Jinja2: 2.5.5
       M2Crypto: 0.21.1
 msgpack-python: 0.1.12
   msgpack-pure: Not Installed
       pycrypto: 2.3
        libnacl: Not Installed
         PyYAML: 3.08
          ioflo: Not Installed
          PyZMQ: 14.3.1
           RAET: Not Installed
            ZMQ: 4.0.4
           Mako: Not Installed

About this issue

  • State: closed
  • Created 10 years ago
  • Comments: 82 (59 by maintainers)

Most upvoted comments

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

If this issue is closed prematurely, please leave a comment and we will gladly reopen the issue.

Same bug here with user_xattr on an ext4 filesystem. State:

lvm-lv-srv-mount:
  mount.mounted:
    - name: /srv
    - device: /dev/mapper/sys-srv
    - fstype: ext4
    - opts: noatime,nodev,user_xattr
    - dump: 0
    - pass_num: 2
    - persist: True
    - mkmnt: True

Output:

          ID: lvm-lv-srv-mount
    Function: mount.mounted
        Name: /srv
      Result: True
     Comment: Target was already mounted. Entry already exists in the fstab.
     Started: 21:53:42.155559
    Duration: 99.332 ms
     Changes:
              ----------
              umount:
                  Forced remount because options (user_xattr) changed
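
A plausible explanation for this case (a sketch, not Salt's code): when user_xattr is already an ext4 default mount option (e.g. set via tune2fs), the kernel omits it from the reported mount options, so a per-option membership check flags it as missing and forces the remount. The example option lists below are assumptions for illustration.

```python
# Sketch: a requested option that the kernel does not echo back looks
# "changed" to a per-option membership check.
requested = "noatime,nodev,user_xattr".split(",")
active = "rw,noatime,nodev".split(",")  # as /proc/mounts might report it

missing = [o for o in requested if o not in active]
print(missing)  # ['user_xattr'] -> "Forced remount because options (user_xattr) changed"
```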

I got the same problem, and resolved it by replacing:

nfs-montato:
  mount.mounted:
    - name: /var/www
    - device: "my.hostname:/data"
    - opts: "rw,rsize=32768,wsize=32768,hard,tcp,nfsvers=3,timeo=3,retrans=10"
    - fstype: nfs

with:

nfs-montato:
  mount.mounted:
    - name: /var/www
    - device: "my.hostname:/data"
    - opts: "rw,rsize=32768,wsize=32768,hard,tcp,vers=3,timeo=3,retrans=10"
    - fstype: nfs

Now the remount is no longer forced every time, and it works. That's because if I mount a share with the option nfsvers=3 and then run the command "mount", that parameter shows up as vers=3, not "nfsvers"! Looking at the nfs manpage:

 vers=n    This option is an alternative to the nfsvers option. It is included for compatibility with other operating systems

So, using “vers” instead of “nfsvers” is a good workaround.
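
The workaround generalizes: equivalent option spellings could be normalized through an alias table before comparing, so nfsvers=3 and vers=3 compare equal. A minimal sketch, assuming a hypothetical alias table based on nfs(5) (this is not Salt's code):

```python
# Hypothetical alias table for illustration; nfs(5) documents vers=n as
# an alternative spelling of nfsvers=n.
ALIASES = {"nfsvers": "vers"}

def normalize(opts):
    """Split a comma-separated option string and map aliased keys."""
    out = set()
    for opt in opts.split(","):
        key, sep, val = opt.partition("=")
        out.add(ALIASES.get(key, key) + sep + val)
    return out

requested = normalize("rw,rsize=32768,wsize=32768,hard,tcp,nfsvers=3,timeo=3,retrans=10")
active = normalize("rw,rsize=32768,wsize=32768,hard,tcp,vers=3,timeo=3,retrans=10")
print(requested == active)  # True: nfsvers=3 and vers=3 compare equal
```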