moby: Panic when pulling an image, ApplyLayer - too many links

Hello everyone,

I wanted to pull nsqio/nsq:v0.3.6, but my docker daemon crashed (taking all my containers with it 😦) with the following error.

It only occurred on one server (Docker 1.7.1, kernel 3.13.x, btrfs backend, plenty of free RAM and free disk space); everything went well on the others. The error is reproducible; I have not restarted the server yet.

I’m wondering what causes this error, and whether it could be handled gracefully so the daemon doesn’t panic.

time="2015-10-04T23:19:55.905551060+02:00" level=info msg="POST /v1.16/images/create?fromImage=nsqio%2Fnsq&tag=v0.3.6"
time="2015-10-04T23:19:58.897438752+02:00" level=error msg="Error from V2 registry: ApplyLayer exit status 1 stdout:  stderr: link /bin/[ /bin/run-parts: too many links"  
panic: runtime error: invalid memory address or nil pointer dereference 
[signal 0xb code=0x1 addr=0x20 pc=0x62fc6a] 

goroutine 131472 [running]: 
bufio.(*Writer).flush(0xc20c46a9c0, 0x0, 0x0) 
        /usr/local/go/src/bufio/bufio.go:530 +0xda 
bufio.(*Writer).Flush(0xc20c46a9c0, 0x0, 0x0) 
        /usr/local/go/src/bufio/bufio.go:519 +0x3a 
net/http.(*response).Flush(0xc208fe74a0) 
        /usr/local/go/src/net/http/server.go:1047 +0x4c 
github.com/docker/docker/pkg/ioutils.(*WriteFlusher).Write(0xc209bc4600, 0xc2096ba140, 0xbd, 0x137, 0xbd, 0x0, 0x0) 
        /go/src/github.com/docker/docker/pkg/ioutils/writeflusher.go:21 +0x145 
github.com/docker/docker/pkg/progressreader.(*Config).Read(0xc208a77030, 0xc2094cc000, 0x8000, 0x8000, 0x4000, 0x0, 0x0) 
        /go/src/github.com/docker/docker/pkg/progressreader/progressreader.go:37 +0x2e5 
io.Copy(0x7f622cf1d2a8, 0xc20b24b1b0, 0x7f6228421000, 0xc208a77030, 0xbfe7e8, 0x0, 0x0) 
        /usr/local/go/src/io/io.go:362 +0x1f6 
github.com/docker/docker/graph.func·008(0xc209892140, 0x0, 0x0) 
        /go/src/github.com/docker/docker/graph/pull.go:602 +0xd42 
github.com/docker/docker/graph.func·009(0xc209892140) 
        /go/src/github.com/docker/docker/graph/pull.go:626 +0x2f 
created by github.com/docker/docker/graph.(*TagStore).pullV2Tag 
        /go/src/github.com/docker/docker/graph/pull.go:627 +0x2671 

docker version

Client version: 1.7.1
Client API version: 1.19
Go version (client): go1.4.2
Git commit (client): 786b29d
OS/Arch (client): linux/amd64
Server version: 1.7.1
Server API version: 1.19
Go version (server): go1.4.2
Git commit (server): 786b29d
OS/Arch (server): linux/amd64

docker info

Containers: 62
Images: 384
Storage Driver: btrfs
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 3.13.0-65-generic
Operating System: Ubuntu 14.04.3 LTS
CPUs: 4
Total Memory: 15.59 GiB
Name: <hostname>
ID: LDCP:QPPH:XD7G:R2XJ:TWW3:ALIX:EA5Q:DO5M:RVCI:VZBE:HVP3:A3R2
Registry: https://index.docker.io/v1/

About this issue

  • Original URL
  • State: closed
  • Created 9 years ago
  • Comments: 15 (8 by maintainers)

Most upvoted comments

Btrfs used to have a very low hardlink limit for files in the same directory, which is exactly the layout busybox-style images use. This was fixed in Linux and has been the default since btrfs-progs 3.12. From man mkfs.btrfs:

       extref
           (default since btrfs-progs 3.12, kernel support since 3.7)

           increased hardlink limit per file in a directory to 65536, older
           kernels supported a varying number of hardlinks depending on the
           sum of all file name sizes that can be stored into one metadata
           block

For existing devices you can enable it with btrfstune -r.