I recently ran into a weird issue on a Linux server where I had replaced some disks and recreated some mdadm RAID arrays and filesystems.
Then I had a new mount point that, for some reason, I could not seem to mount: mount itself would not complain, but the contents of the mount point would not change.
It resulted in this situation:
~# mount /data
~# umount /data
umount: /data: not mounted.
Notice that mount seemingly mounts correctly (the return code of mount was 0 - I checked).
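For reference, this is how I checked the exit status ($? holds the exit code of the last command; the 0 is what I got back):
~# mount /data
~# echo $?
0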
I then checked dmesg:
[13853.539371] XFS (md1): Mounting V5 Filesystem
[13853.587453] XFS (md1): Ending clean mount
[13853.643398] xfs filesystem being mounted at /data supports timestamps until 2038 (0x7fffffff)
[13853.658731] XFS (md1): Unmounting Filesystem
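Note the last line: the filesystem is unmounted again a fraction of a second after the clean mount, so something is actively unmounting it. If you want to catch this happening live, you can follow the kernel log in a second terminal while re-running the mount (dmesg --follow is the util-linux spelling; journalctl -k -f works too):
~# dmesg --follow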
After a lot of digging I found out that systemd seems to be the culprit.
At boot time systemd-fstab-generator generates dynamic unit files for each mount point in /etc/fstab.
For some reason, systemd was of the opinion that this mount point should not be mounted - presumably because the units generated at boot no longer matched the fstab entries I had changed since. So it unmounted the filesystem immediately whenever I mounted it.
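You can see what systemd thinks of a mount point by inspecting the generated unit. The unit name is derived from the path, so /data becomes data.mount (adjust for your own mount point):
~# systemctl status data.mount
~# systemctl cat data.mount
The second command prints the generated unit file, which on most systems lives under /run/systemd/generator/.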
The bit that fixed this was to run:
systemctl daemon-reload
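daemon-reload re-runs all the generators, including systemd-fstab-generator, so the mount units get regenerated from the current /etc/fstab. After that the mount finally stuck; a quick way to confirm (findmnt ships with util-linux):
~# systemctl daemon-reload
~# mount /data
~# findmnt /data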
This behavior was reported as a bug in 2015: https://github.com/systemd/systemd/issues/1741. At the time of writing, the bug is still open.