rancher: rancher-nfs segfaults when CATTLE_URL contains .local TLD
Rancher versions:
rancher/server: 1.6.2 and 1.6.3
rancher/agent: 1.2.2 and 1.2.5
Infrastructure Stack versions:
healthcheck: rancher/healthcheck:v0.3.1
ipsec:
network-services:
scheduler:
Docker version: (docker version, docker info preferred)
Client:
Version: 17.03.1-ce
API version: 1.27
Go version: go1.7.5
Git commit: c6d412e
Built: Tue Mar 28 00:40:02 2017
OS/Arch: linux/amd64
Server:
Version: 17.03.1-ce
API version: 1.27 (minimum version 1.12)
Go version: go1.7.5
Git commit: c6d412e
Built: Tue Mar 28 00:40:02 2017
OS/Arch: linux/amd64
Experimental: false
Containers: 10
Running: 9
Paused: 0
Stopped: 1
Images: 9
Server Version: 17.03.1-ce
Storage Driver: overlay
Backing Filesystem: extfs
Supports d_type: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 4ab9917febca54791c5f071a9d1f404867857fcc
runc version: 54296cf40ad8143b62dbcaa1d90e520a2136ddfe
init version: 949e6fa
Security Options:
seccomp
Profile: default
Kernel Version: 4.9.34-rancher
Operating System: RancherOS v1.0.3
OSType: linux
Architecture: x86_64
CPUs: 1
Total Memory: 994 MiB
Name: ROS02
ID: VE46:EIRI:PYP2:6PWH:L7N3:DZFL:3ELT:M3NW:AAUQ:SA77:L4JT:A5S3
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
Operating system and kernel: (cat /etc/os-release, uname -r preferred)
4.9.34-rancher
Type/provider of hosts: (VirtualBox/Bare-metal/AWS/GCE/DO) VM (vSphere)
Setup details: (single node rancher vs. HA rancher, internal DB vs. external DB) Single node with external DB, Synology as NFS server
Environment Template: (Cattle/Kubernetes/Swarm/Mesos) Cattle
Steps to Reproduce: Clean install, add rancher-nfs
Results: I can create an NFS mount without rancher-nfs, but when I use the plugin I'm seeing this in the container logs:
time="2017-07-12T17:52:38Z" level=info msg=Running
time="2017-07-12T17:52:38Z" level=info msg=Starting
fatal error: unexpected signal during runtime execution
[signal SIGSEGV: segmentation violation code=0x1 addr=0x63 pc=0x7fe4d3689259]
runtime stack:
runtime.throw(0x9c85dc, 0x2a)
/usr/local/go/src/runtime/panic.go:566 +0x95
runtime.sigpanic()
/usr/local/go/src/runtime/sigpanic_unix.go:12 +0x2cc
goroutine 23 [syscall, locked to thread]:
runtime.cgocall(0x7b2380, 0xc420024df8, 0xc400000000)
/usr/local/go/src/runtime/cgocall.go:131 +0x110 fp=0xc420024db0 sp=0xc420024d70
net._C2func_getaddrinfo(0x7fe4cc0008c0, 0x0, 0xc420280a80, 0xc420026900, 0x0, 0x0, 0x0)
??:0 +0x68 fp=0xc420024df8 sp=0xc420024db0
net.cgoLookupIPCNAME(0xc42021fec0, 0x11, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
/usr/local/go/src/net/cgo_unix.go:146 +0x37c fp=0xc420024f18 sp=0xc420024df8
net.cgoIPLookup(0xc42028a060, 0xc42021fec0, 0x11)
/usr/local/go/src/net/cgo_unix.go:198 +0x4d fp=0xc420024fa8 sp=0xc420024f18
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:2086 +0x1 fp=0xc420024fb0 sp=0xc420024fa8
created by net.cgoLookupIP
/usr/local/go/src/net/cgo_unix.go:208 +0xb4
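The trace shows the crash inside cgo's getaddrinfo while the plugin resolves a hostname (net.cgoLookupIPCNAME), which is consistent with the .local name from CATTLE_URL being handed to the C resolver. As a rough way to check that only the cgo lookup path is affected, here is a minimal sketch (the hostname is hypothetical, and net.Resolver.PreferGo assumes a Go 1.8+ toolchain) that compares the default resolver with the pure Go resolver; the same switch can be made without code changes by starting a Go binary with GODEBUG=netdns=go.

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	host := "rancher.mydomain.local" // hypothetical .local name, stand-in for the CATTLE_URL host

	// Default resolver: depending on the build and resolv.conf contents,
	// this may go through cgo's getaddrinfo, the code path in the trace above.
	addrs, err := net.LookupHost(host)
	fmt.Println("default resolver:", addrs, err)

	// Pure Go resolver (equivalent to running with GODEBUG=netdns=go):
	// bypasses cgo entirely, so it cannot hit the getaddrinfo crash.
	r := &net.Resolver{PreferGo: true}
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	addrs, err = r.LookupHost(ctx, host)
	fmt.Println("pure Go resolver:", addrs, err)
}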
About this issue
- Original URL
- State: closed
- Created 7 years ago
- Reactions: 3
- Comments: 16 (6 by maintainers)
There is a warning in the UI if you try to use .local for the host registration URL, but not for every possible place an address can go. The rest is academic, but: RFCs are (roughly) sequentially numbered and individual memos do not generally change after they are “done”. The DNS spec (RFC 1034/1035) is from the 1980s. Multicast IP didn’t even exist (or wasn’t standardized) yet (RFC 1112). Obviously it doesn’t mention something from decades in the future.
Ignoring time travel, there is no reason it would mention it, because the DNS protocol doesn’t care about what domain is being asked for, and mDNS does not change the DNS protocol. The .local TLD was never previously delegated, and then RFC 6762 defined it to have special behavior that clients must implement to use it: requests using the standard DNS protocol sent to a special address (“One-Shot Multicast DNS Queries”), plus additional capabilities. The problem is that it is a convenient word and people like to (ab)use it (including Microsoft in particular, which suggested people do it for years), but this was never a valid use of the TLD or a good idea. People using it are squatting on an undelegated TLD and should not be shocked that it might break. (See also: Google now owns .dev and recently turned on HSTS for the entire TLD, which broke lots of developers who use http://thing.dev on their laptops.)
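To make the “clients must implement it” point concrete, here is a minimal sketch of a one-shot multicast DNS query in the sense of RFC 6762: the question is ordinary DNS wire format, but it has to be sent to 224.0.0.251:5353 rather than to the unicast resolver in /etc/resolv.conf. The name myhost.local is hypothetical; this is only an illustration of the mechanism, not code from Rancher or the plugin.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Build a plain DNS question for the hypothetical name "myhost.local",
	// type A, class IN. One-shot mDNS queries use ID 0 and no header flags.
	q := []byte{0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0} // 12-byte header, QDCOUNT=1
	for _, label := range []string{"myhost", "local"} {
		q = append(q, byte(len(label)))
		q = append(q, label...)
	}
	q = append(q, 0)    // end of QNAME
	q = append(q, 0, 1) // QTYPE  = A
	q = append(q, 0, 1) // QCLASS = IN

	// The special part: the query goes to the mDNS multicast group,
	// not to whatever resolver is configured for ordinary DNS.
	dst := &net.UDPAddr{IP: net.IPv4(224, 0, 0, 251), Port: 5353}
	conn, err := net.ListenUDP("udp4", &net.UDPAddr{}) // ephemeral source port
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	if _, err := conn.WriteToUDP(q, dst); err != nil {
		panic(err)
	}

	// Because the source port is not 5353, an RFC 6762 responder on the LAN
	// answers with a legacy unicast response; a plain unicast DNS server
	// never even sees the question, which is why ".local" only resolves
	// when the client side implements this behavior.
	conn.SetReadDeadline(time.Now().Add(2 * time.Second))
	buf := make([]byte, 1500)
	n, from, err := conn.ReadFromUDP(buf)
	fmt.Println(n, from, err)
}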
@superseb .local is a proper TLD. The problem lies within golang (and thus Rancher), not (m)DNS. To expect one to overhaul their entire infrastructure as a workaround seems a bit much to me.

Filesystem   Size     Used     Available  Use%  Mounted on
overlay      14.9G    1.3G     12.8G      10%   /
tmpfs        477.7M   0        477.7M     0%    /dev
tmpfs        497.0M   0        497.0M     0%    /sys/fs/cgroup
/dev/sda1    14.9G    1.3G     12.8G      10%   /opt
/dev/sda1    14.9G    1.3G     12.8G      10%   /mnt
none         497.0M   616.0K   496.4M     0%    /run
/dev/sda1    14.9G    1.3G     12.8G      10%   /home
/dev/sda1    14.9G    1.3G     12.8G      10%   /media
none         497.0M   616.0K   496.4M     0%    /var/run
/dev/sda1    14.9G    1.3G     12.8G      10%   /usr/sbin/iptables
/dev/sda1    14.9G    1.3G     12.8G      10%   /etc/selinux
/dev/sda1    14.9G    1.3G     12.8G      10%   /var/log
/dev/sda1    14.9G    1.3G     12.8G      10%   /etc/docker
devtmpfs     477.7M   0        477.7M     0%    /host/dev
shm          64.0M    0        64.0M      0%    /host/dev/shm
/dev/sda1    14.9G    1.3G     12.8G      10%   /usr/lib/firmware
/dev/sda1    14.9G    1.3G     12.8G      10%   /usr/lib/modules
/dev/sda1    14.9G    1.3G     12.8G      10%   /usr/bin/ros
/dev/sda1    14.9G    1.3G     12.8G      10%   /usr/share/ros
/dev/sda1    14.9G    1.3G     12.8G      10%   /var/lib/rancher
/dev/sda1    14.9G    1.3G     12.8G      10%   /var/lib/docker
/dev/sda1    14.9G    1.3G     12.8G      10%   /var/lib/rancher/cache
/dev/sda1    14.9G    1.3G     12.8G      10%   /var/lib/rancher/conf
/dev/sda1    14.9G    1.3G     12.8G      10%   /etc/ssl/certs/ca-certificates.crt.rancher
/dev/sda1    14.9G    1.3G     12.8G      10%   /etc/resolv.conf
/dev/sda1    14.9G    1.3G     12.8G      10%   /etc/hostname
/dev/sda1    14.9G    1.3G     12.8G      10%   /etc/hosts
shm          64.0M    0        64.0M      0%    /dev/shm
devtmpfs     477.7M   0        477.7M     0%    /dev
shm          64.0M    0        64.0M      0%    /dev/shm