docker-mailserver: ClamAV - permission denied on amavis tmp dir
Context
Hi, I have the same problem as the closed issue #1353. I've installed the mailserver on a Kubernetes cluster and want to use the integrated ClamAV service for scanning (ENABLE_CLAMAV=1).
What is affected by this bug?
Mails are not scanned by ClamAV; instead, a permission-denied error is written to the logs.
When does this occur?
While the application is running and processing incoming mail.
How do we replicate the issue?
- Install as described in the wiki
- Send a mail from an existing user to themselves (one way to do this is shown below)
- Check the logs
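One way to send such a test message from outside the cluster is with swaks; the hostname, port and credentials below are placeholders, not values from this setup:

```bash
# Send a mail from an existing account to itself via the submission port.
# mail.example.com and the credentials are placeholders.
swaks --server mail.example.com --port 587 --tls \
      --auth LOGIN --auth-user user@example.com --auth-password 'secret' \
      --from user@example.com --to user@example.com \
      --header "Subject: ClamAV permission test"
```

The amavis/ClamAV result for that message then shows up in the mail logs.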
Actual Behavior
On an incoming email, the following is written to the logs:
run_av (ClamAV-clamd) FAILED - unexpected , output="/var/lib/amavis/tmp/amavis-20201117T091752-00289-lTvNEEz7/parts: lstat() failed: Permission denied. ERROR\n"
amavis[289]: (00289-01) (!)ClamAV-clamd av-scanner FAILED: CODE(0x55dda70f2a50) unexpected , output="/var/lib/amavis/tmp/amavis-20201117T091752-00289-lTvNEEz7/parts: lstat() failed: Permission denied. ERROR\n" at (eval 98) line 951.
amavis[289]: (00289-01) (!)WARN: all primary virus scanners failed, considering backups
Expected behavior (i.e. solution)
ClamAV should be able to analyze the emails without permission problems
Your Environment
- Amount of RAM available: 8GB
- Mailserver version used: latest
- Docker version used: 19.03.13
- kubernetes version used: v1.19.3
- Environment settings relevant to the config:
System
A Kubernetes cluster with an NFS persistent volume (PV)
System configuration
pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mail-storage
  labels:
    type: local
  namespace: mailserver
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: xxx.xxx.xxx.xxx
    path: "/srv/nfs/kubedata/smtp/mail-storage"
pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mail-storage-claim
  namespace: mailserver
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 500Mi
configmap.yaml
data:
  ONE_DIR: "1"
deployment.yaml
volumeMounts:
  - name: data
    mountPath: /var/mail
    subPath: data
  - name: data
    mountPath: /var/mail-state
    subPath: state
  - name: data
    mountPath: /var/log/mail
    subPath: log
volumes:
  - name: data
    persistentVolumeClaim:
      claimName: mail-storage-claim
NFS
And when I do ls -aln /var/lib/amavis/tmp/, I get this:
drwxrwx--- 4 112 114 4096 Nov 17 10:21 .
drwxr-x--- 8 112 114 4096 Nov 17 10:19 ..
drwxr-x--- 3 112 114 4096 Nov 17 10:20 amavis-20201117T092012-00283-H7KnOmih
drwxr-x--- 3 112 114 4096 Nov 17 10:21 amavis-20201117T092102-00282-5WnlwIhO
And when I do ls -aln /srv/nfs/kubedata/smtp/mail-storage/state/lib-amavis/tmp on the NFS server, I get this:
drwxrwx--- 4 112 114 4096 Nov 17 10:21 .
drwxr-x--- 8 112 114 4096 Nov 17 10:19 ..
drwxr-x--- 3 112 114 4096 Nov 17 10:20 amavis-20201117T092012-00283-H7KnOmih
drwxr-x--- 3 112 114 4096 Nov 17 10:21 amavis-20201117T092102-00282-5WnlwIhO
About this issue
- State: closed
- Created 4 years ago
- Comments: 21 (12 by maintainers)
Just an FYI, a very late one, but this seems to be due to amavis rather than any config setting in docker-mailserver. You can see here, at line 7656, that the permissions are set to 0750, i.e. no write permissions for the group. The user clamav is in the amavis group, but without the correct permissions this makes no difference. As a temporary fix I suggest a config/user-patches.sh like this. [edit] I attached the wrong version of user-patches.sh; it didn't fix the tmp/ subdirs.
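For context, a minimal sketch of what such a config/user-patches.sh could look like, based only on the description above (clamav needing group access to amavis' working directory); it is not the exact script from that comment, and as noted it does not cover the per-message subdirectories that amavis creates later with restrictive permissions:

```bash
#!/bin/bash
# config/user-patches.sh is run by docker-mailserver at container startup.

# The clamav user is already in the amavis group; what it lacks is group
# access on amavis' working directory, so relax the group permissions here.
chmod -R g+rX /var/lib/amavis/tmp

# Note: directories that amavis creates per message afterwards still get
# 0750-style permissions, which is why this version did not fix the
# tmp/ subdirectories (see the [edit] above).
```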
If you have a test cluster, I would suggest trying one thing at a time. Specifically, get rid of the NFS server and use a simpler mount option. If that works, you know it is NFS-related (which I suspect).
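As an illustration of that experiment, the PVC-backed volume from the deployment.yaml excerpt above could temporarily be replaced with an emptyDir (mail data is then lost on pod restart, so this is only for testing whether NFS is the culprit):

```yaml
# deployment.yaml (test variant): swap the NFS/PVC volume for node-local storage
volumes:
  - name: data
    emptyDir: {}
```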
EDIT: another possibility is that Amavis crashes because it runs out of memory. With K8s there is normally no swap, so if memory is scarce you will crash, and that can look like this. Have you really allocated 8 GB to this specific deployment? That should be more than enough, I'm just wondering: is it the resource limit or the resource request?
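For comparison, this is roughly how a memory request vs. a limit looks on the container spec; the 8Gi figure mirrors the RAM mentioned above, the request value is only an example:

```yaml
# deployment.yaml (excerpt): memory request vs. limit for the mailserver container
resources:
  requests:
    memory: "2Gi"   # amount the scheduler reserves for the pod (example value)
  limits:
    memory: "8Gi"   # hard cap; exceeding it gets the container OOM-killed
```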