iotedge: One or more errors occurred. (Permission denied /var/run/iotedge/mgmt.sock) caused by: docker returned exit code: 1, stderr = One or more errors occurred. (Permission denied

Hello

I just installed IoT Edge on a Raspberry Pi running Raspbian Stretch, following this guide: https://docs.microsoft.com/en-us/azure/iot-edge/how-to-install-iot-edge?view=iotedge-2018-06&tabs=linux

However, I get the errors below.

Three weeks ago I installed another device following the same instructions, and it worked perfectly.

pi@raspberrypi:/etc/iotedge $ sudo iotedge check --verbose
Configuration checks
--------------------
√ config.yaml is well-formed - OK
√ config.yaml has well-formed connection string - OK
√ container engine is installed and functional - OK
√ config.yaml has correct hostname - OK
× config.yaml has correct URIs for daemon mgmt endpoint - Error
    One or more errors occurred. (Permission denied /var/run/iotedge/mgmt.sock)
        caused by: docker returned exit code: 1, stderr = One or more errors occurred. (Permission denied /var/run/iotedge/mgmt.sock)
√ latest security daemon - OK
√ host time is close to real time - OK
√ container time is close to host time - OK
‼ DNS server - Warning
    Container engine is not configured with DNS server setting, which may impact connectivity to IoT Hub.
    Please see https://aka.ms/iotedge-prod-checklist-dns for best practices.
    You can ignore this warning if you are setting DNS server per module in the Edge deployment.
        caused by: Could not open container engine config file /etc/docker/daemon.json
        caused by: No such file or directory (os error 2)
‼ production readiness: certificates - Warning
    The Edge device is using self-signed automatically-generated development certificates.
    They will expire in 89 days (at 2021-02-22 07:24:52 UTC) causing module-to-module and downstream device communication to fail on an active deployment.
    After the certs have expired, restarting the IoT Edge daemon will trigger it to generate new development certs.
    Please consider using production certificates instead. See https://aka.ms/iotedge-prod-checklist-certs for best practices.
√ production readiness: container engine - OK
‼ production readiness: logs policy - Warning
    Container engine is not configured to rotate module logs which may cause it run out of disk space.
    Please see https://aka.ms/iotedge-prod-checklist-logs for best practices.
    You can ignore this warning if you are setting log policy per module in the Edge deployment.
        caused by: Could not open container engine config file /etc/docker/daemon.json
        caused by: No such file or directory (os error 2)
‼ production readiness: Edge Agent's storage directory is persisted on the host filesystem - Warning
    The edgeAgent module is not configured to persist its /tmp/edgeAgent directory on the host filesystem.
    Data might be lost if the module is deleted or updated.
    Please see https://aka.ms/iotedge-storage-host for best practices.
× production readiness: Edge Hub's storage directory is persisted on the host filesystem - Error
    Could not check current state of edgeHub container
        caused by: docker returned exit code: 1, stderr = Error: No such object: edgeHub

Connectivity checks
-------------------
√ host can connect to and perform TLS handshake with IoT Hub AMQP port - OK
√ host can connect to and perform TLS handshake with IoT Hub HTTPS / WebSockets port - OK
√ host can connect to and perform TLS handshake with IoT Hub MQTT port - OK
√ container on the default network can connect to IoT Hub AMQP port - OK
√ container on the default network can connect to IoT Hub HTTPS / WebSockets port - OK
√ container on the default network can connect to IoT Hub MQTT port - OK
√ container on the IoT Edge module network can connect to IoT Hub AMQP port - OK
√ container on the IoT Edge module network can connect to IoT Hub HTTPS / WebSockets port - OK
√ container on the IoT Edge module network can connect to IoT Hub MQTT port - OK

17 check(s) succeeded.
4 check(s) raised warnings.
2 check(s) raised errors.
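
The two daemon.json warnings above (DNS server and logs policy) both stem from /etc/docker/daemon.json not existing. A minimal file covering both could look like the following sketch; the DNS address and log limits are example values to adapt to your network, not values from this thread:

```json
{
  "dns": ["1.1.1.1"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
```

After creating the file, restart the container engine (sudo systemctl restart docker) so the settings take effect.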

About this issue

  • Original URL
  • State: closed
  • Created 4 years ago
  • Comments: 28 (9 by maintainers)

Most upvoted comments

I ran into this same issue. If nothing else helps, check whether your IoT Hub name contains capital letters. Mine was “companyHub”, and when I renamed it to “companyhub”, the above errors went away and everything started to work. This happened with version 1.2 of IoT Edge.
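
The casing check described above can be done with a quick shell test; the hostname string here is a hypothetical example (on a device you would use the HostName value from the connection string in /etc/iotedge/config.yaml):

```shell
# Flag any uppercase letters in an IoT Hub hostname.
hostname="companyHub.azure-devices.net"   # example value, not from this device
case "$hostname" in
  *[A-Z]*) echo "hostname contains uppercase letters" ;;
  *)       echo "hostname is all lowercase" ;;
esac
```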

One of the late additions to 1.0.10 was PR #3572. Since I’ve run out of other possibilities, this may be the source of the problem.

From the PR:

I chose to create UID 1000 when doing this for Edge on K8s, and I chose to continue to use this as the default for running as non-root on docker.

Regardless of the UID chosen for the container, it must match the UID ownership of the management socket file to allow edgeAgent access to the management socket. If the user runs the agent as non-root through Docker settings, the UID on the management socket must be set to match.
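
The mismatch described above can be sketched as a plain shell comparison. Both UIDs are hard-coded here for illustration; on a device the socket owner would come from stat -c '%u' /var/run/iotedge/mgmt.sock:

```shell
# Illustration of the ownership check that fails with "Permission denied":
# the agent's UID must match the owner of the management socket.
sock_uid=0        # socket owned by root (the failing case)
agent_uid=1000    # edgeAgent running as non-root with UID 1000
if [ "$sock_uid" -eq "$agent_uid" ]; then
  echo "UIDs match: edgeAgent can open the mgmt socket"
else
  echo "UID mismatch: Permission denied on the mgmt socket"
fi
```

Once the socket is owned by UID 1000, the first branch applies and the error disappears.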

Setting the UID on the management socket (example using UID 1000):

Ubuntu/Debian 9 systems

sudo systemctl edit iotedge.mgmt.socket

# In the editor that opens, add the following lines:
[Socket]
SocketUser=1000

# Save the override file, then reload and restart:
sudo systemctl daemon-reload
sudo systemctl restart iotedge.mgmt.socket

On other systems

sudo chown 1000:iotedge /var/lib/iotedge/mgmt.sock
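
Whether the ownership change took effect can be verified with stat. The sketch below exercises the same check on a stand-in temporary file; on the device you would point stat at the real socket path and expect UID 1000 and group iotedge after the chown above:

```shell
# Stand-in demonstration: inspect mode, numeric owner, and numeric group,
# the same fields you would check on the management socket.
f=$(mktemp)
chmod 660 "$f"
stat -c 'mode=%a uid=%u gid=%g' "$f"
rm -f "$f"
```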

If this doesn’t address the problem, is it possible for you to create a support ticket in Azure? That way we have a private channel to debug the issue.