go: cmd/compile, cmd/link: can't build large arm binaries with external linking, Kubernetes now too big
Please answer these questions before submitting your issue. Thanks!
What version of Go are you using (go version)?
go1.6.3
What operating system and processor architecture are you using (go env)?
linux/arm
What did you do?
If possible, provide a recipe for reproducing the error.
A complete runnable program is good.
A link on play.golang.org is best.
Just run
docker run -it gcr.io/google_containers/kube-apiserver-arm:v1.4.0-alpha.3 /usr/local/bin/kube-apiserver
What did you expect to see?
kube-apiserver starting
What did you see instead?
unexpected fault address 0x40d62ec
fatal error: fault
[signal 0xb code=0x2 addr=0x40d62ec pc=0x40d62ec]
goroutine 1 [running, locked to thread]:
runtime.throw(0x2cc9010, 0x5)
/usr/local/go/src/runtime/panic.go:547 +0x78 fp=0x1482beec sp=0x1482bee0
runtime.sigpanic()
/usr/local/go/src/runtime/sigpanic_unix.go:27 +0x280 fp=0x1482bf18 sp=0x1482beec
k8s.io/kubernetes/vendor/github.com/docker/engine-api/types/versions.init()
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/docker/engine-api/types/versions/compare.go:62 +0x4c fp=0x1482bf20 sp=0x1482bf1c
k8s.io/kubernetes/vendor/github.com/docker/engine-api/types/filters.init()
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/docker/engine-api/types/filters/parse.go:295 +0x5c fp=0x1482bf34 sp=0x1482bf20
k8s.io/kubernetes/vendor/github.com/docker/engine-api/types.init()
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/docker/engine-api/types/types.go:473 +0x5c fp=0x1482bf38 sp=0x1482bf34
k8s.io/kubernetes/pkg/credentialprovider.init()
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/credentialprovider/provider.go:123 +0x7c fp=0x1482bf6c sp=0x1482bf38
k8s.io/kubernetes/pkg/credentialprovider/aws.init()
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/credentialprovider/aws/aws_credentials.go:232 +0x70 fp=0x1482bf70 sp=0x1482bf6c
k8s.io/kubernetes/pkg/cloudprovider/providers/aws.init()
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/cloudprovider/providers/aws/sets_ippermissions.go:146 +0xb0 fp=0x1482bf90 sp=0x1482bf70
k8s.io/kubernetes/pkg/cloudprovider/providers.init()
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/cloudprovider/providers/providers.go:30 +0x4c fp=0x1482bf94 sp=0x1482bf90
k8s.io/kubernetes/cmd/kube-apiserver/app.init()
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kube-apiserver/app/server.go:310 +0x4c fp=0x1482bf98 sp=0x1482bf94
main.init()
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kube-apiserver/apiserver.go:53 +0x5c fp=0x1482bf9c sp=0x1482bf98
runtime.main()
/usr/local/go/src/runtime/proc.go:177 +0x274 fp=0x1482bfc4 sp=0x1482bf9c
runtime.goexit()
/usr/local/go/src/runtime/asm_arm.s:990 +0x4 fp=0x1482bfc4 sp=0x1482bfc4
goroutine 5 [chan receive]:
k8s.io/kubernetes/vendor/github.com/golang/glog.(*loggingT).flushDaemon(0x462d1f8)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/golang/glog/glog.go:879 +0x60
created by k8s.io/kubernetes/vendor/github.com/golang/glog.init.1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/golang/glog/glog.go:410 +0x2cc
Seems like Kubernetes with its deps just grew too big for arm 😦 Any help here would be really appreciated. We’re releasing Kubernetes in about 10 days.
I need help from some Go guru here who knows how the internals work!
I assume this is a Go issue rather than a Kubernetes issue, since the file that’s segfaulting doesn’t have an init().
Please take a look as quickly as possible! -> @lavalamp @smarterclayton @ixdy @rsc @davecheney @wojtek-t @jfrazelle @bradfitz
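For context on the “no init()” point above (an editorial note, not from the original thread): the Go compiler synthesizes an init function for any package whose package-level variables have initializer expressions, so a file can appear in an init traceback even though it never declares init() itself. A minimal sketch; the variable below is made up and unrelated to the real compare.go:

```go
// No explicit init() is declared anywhere in this program, yet the
// compiler still emits a synthesized init function that evaluates the
// package-level variable initializer below before main runs. That
// synthesized function is what shows up as <pkg>.init() in tracebacks.
package main

import (
	"fmt"
	"strings"
)

// parts is initialized by the compiler-generated init function.
var parts = strings.Split("1.4.0-alpha.3", ".")

func main() {
	fmt.Println(parts) // [1 4 0-alpha 3]
}
```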
About this issue
- Original URL
- State: closed
- Created 8 years ago
- Comments: 62 (39 by maintainers)
Commits related to this issue
- Merge pull request #32517 from luxas/fix_arm_ppc64le Automatic merge from submit-queue Use a patched golang version for building linux/arm Fixes: #29904 Right now, linux/arm is broken because of... — committed to deads2k/kubernetes by deleted user 8 years ago
I will try to bring up external linking support on ARM soon.
Sent CL https://go-review.googlesource.com/c/28857/ for “large mode”, for discussion and play. With it kube-apiserver at least passes the init’s. (I don’t know how I should use that program.)

Has any investigation been done to determine whether the binaries built by Kubernetes could be reduced in size? I suggested this when the problem was first reported. That would not only resolve your issue with ppc64le and arm but also improve compile and link times, save space for your binaries, etc. I looked briefly at the packages included in hyperkube and it appeared to have many duplicate path names rooted at different locations. Could there be new versions of some packages being added without removing the old ones that are no longer used? Or could the binaries be split in any way? I can send you more detail on the package path names I am referring to if you want, @luxas.
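If it helps the size investigation, here is a rough sketch (an editorial addition, not part of the Kubernetes tooling) of how one might list a target’s dependency packages with `go list` and group them by the path after any `/vendor/` prefix, to spot the same package pulled in under several roots. The target package name is a placeholder:

```go
// dupdeps prints dependency packages of a target that appear under more
// than one root (e.g. both vendored and non-vendored copies). It shells
// out to `go list`, so it only needs a Go toolchain on PATH.
package main

import (
	"bufio"
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	target := "k8s.io/kubernetes/cmd/kube-apiserver" // placeholder
	if len(os.Args) > 1 {
		target = os.Args[1]
	}

	// {{join .Deps "\n"}} prints one dependency import path per line.
	cmd := exec.Command("go", "list", "-f", `{{join .Deps "\n"}}`, target)
	cmd.Stderr = os.Stderr
	out, err := cmd.Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "go list failed:", err)
		os.Exit(1)
	}

	// Group import paths by the part after the last "/vendor/", so the
	// same package vendored under different roots shares one key.
	groups := map[string][]string{}
	sc := bufio.NewScanner(strings.NewReader(string(out)))
	for sc.Scan() {
		path := sc.Text()
		key := path
		if i := strings.LastIndex(path, "/vendor/"); i >= 0 {
			key = path[i+len("/vendor/"):]
		}
		groups[key] = append(groups[key], path)
	}

	// Report only packages that appear under more than one root.
	for key, paths := range groups {
		if len(paths) > 1 {
			fmt.Println(key)
			for _, p := range paths {
				fmt.Println("  ", p)
			}
		}
	}
}
```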
Other interesting observations:
So it might be a limitation in golang…
Your faulting address is the pc (program counter) and it is greater than 2**26. I’m not familiar with arm, but could it be that the 32-bit arm platform has a limit on the size of the programs it can run, and these programs are exceeding it? These are the same programs we have issues with on ppc64le due to their size.
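To make the arithmetic behind that guess explicit (an editorial sketch, not a confirmed diagnosis): a 32-bit ARM BL branch encodes a 24-bit signed offset counted in 4-byte words, so a single branch spans at most a 2^26-byte (64 MiB) window, and the faulting pc in the traceback above is already past that:

```go
// Compares the faulting pc from the traceback against the span a single
// 32-bit ARM BL instruction can reach (24-bit signed word offset).
package main

import "fmt"

func main() {
	const faultingPC = 0x40d62ec // pc from the kube-apiserver fault above

	const (
		offsetBits = 24                           // signed offset bits in BL
		wordSize   = 4                            // offset is counted in 4-byte words
		blSpan     = (1 << offsetBits) * wordSize // 2^26 bytes = 64 MiB
	)

	fmt.Printf("faulting pc: 0x%x (%.1f MiB)\n", faultingPC, float64(faultingPC)/(1<<20))
	fmt.Printf("BL span:     0x%x (%.1f MiB)\n", blSpan, float64(blSpan)/(1<<20))
	fmt.Println("pc beyond 2^26:", faultingPC > blSpan)
}
```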
Can you bisect for the exact SHA that started the breakage, not the tag?