istio: galley log says authorizationpolicies.rbac.istio.io/v1alpha1 resource type not found
**Describe the bug**
The istio-galley, istio-pilot, and istio-policy pods all go into a crash loop. I think the cause is related to these errors in the galley log:
```
2019-03-15T19:49:33.102258Z info kube authorizationpolicies.rbac.istio.io/v1alpha1 resource type not found
2019-03-15T19:49:34.102430Z info kube authorizationpolicies.rbac.istio.io/v1alpha1 resource type not found
2019-03-15T19:49:34.103557Z info kube authorizationpolicies.rbac.istio.io/v1alpha1 resource type not found
2019-03-15T19:49:34.103583Z fatal Unable to initialize Galley Server: timed out waiting for the condition: the following resource type(s) were not found: [authorizationpolicies]
```
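One way to confirm this diagnosis (a sketch, assuming you have kubectl access to the affected cluster) is to check whether the CRD that galley is timing out on was ever registered:

```shell
# If the CRD is missing, this returns an error ("NotFound")
# rather than the resource definition.
kubectl get crd authorizationpolicies.rbac.istio.io

# Compare the full set of Istio CRDs against the working cluster.
kubectl get crd | grep 'istio\.io'
```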
The errors in the pilot log are:
```
2019-03-15T19:57:15.955346Z info mcp (re)trying to establish new MCP sink stream
2019-03-15T19:57:15.955440Z error mcp Failed to create a new MCP sink stream: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial tcp 172.20.216.107:9901: connect: connection refused"
2019-03-15T19:57:16.955613Z info mcp (re)trying to establish new MCP sink stream
2019-03-15T19:57:16.955702Z error mcp Failed to create a new MCP sink stream: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial tcp 172.20.216.107:9901: connect: connection refused"
2019-03-15T19:57:17.534246Z info pickfirstBalancer: HandleSubConnStateChange: 0xc4204bd8c0, CONNECTING
2019-03-15T19:57:17.955922Z info mcp (re)trying to establish new MCP sink stream
```
The errors in the policy log are:
```
2019-03-15T20:03:33.585556Z info mcp [32] istio/config/v1alpha2/legacy/bypasses
2019-03-15T20:03:33.585565Z info mcp [33] istio/config/v1alpha2/legacy/checknothings
2019-03-15T20:03:33.585570Z info mcp [34] istio/config/v1alpha2/legacy/fluentds
2019-03-15T20:03:33.585575Z info mcp [35] istio/config/v1alpha2/legacy/memquotas
2019-03-15T20:03:33.585579Z info mcp [36] istio/policy/v1beta1/rules
2019-03-15T20:03:33.585600Z info parsed scheme: ""
2019-03-15T20:03:33.585613Z info scheme "" not registered, fallback to default scheme
2019-03-15T20:03:33.585698Z info Using new MCP client sink stack
2019-03-15T20:03:33.585745Z info Awaiting for config store sync...
2019-03-15T20:03:33.585809Z info mcp (re)trying to establish new MCP sink stream
2019-03-15T20:03:33.585833Z info ccResolverWrapper: sending new addresses to cc: [{istio-galley.istio-system.svc:9901 0 <nil>}]
2019-03-15T20:03:33.585845Z info ClientConn switching balancer to "pick_first"
2019-03-15T20:03:33.585908Z info pickfirstBalancer: HandleSubConnStateChange: 0xc420499960, CONNECTING
2019-03-15T20:03:33.585913Z info blockingPicker: the picked transport is not ready, loop back to repick
```
**Expected behavior**
I expected all of the containers to run. I have another cluster where everything is working normally.
**Steps to reproduce the bug**
I am not sure of the exact steps to reproduce.
**Version**
Istio 1.1.0, installed from the 1.1.0 chart; kubectl 1.11.8

**Installation**
Helm chart from the istio.io repository

**Environment**
AWS EKS
**About this issue**
- Original URL
- State: closed
- Created 5 years ago
- Comments: 26 (19 by maintainers)
I have sent out a PR to fix this; meanwhile, you can manually apply the crd-12.yaml to work around it.
I had to add this CRD, which is crd-12 here: istio/install/kubernetes/helm/istio-init/files/crd-12.yaml. However, I think that still needs to make it into the istio-init helm chart.
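For anyone else hitting this, a minimal sketch of the workaround (the path assumes you are in the root of an unpacked Istio 1.1.0 release; the galley pod label may differ between versions):

```shell
# Apply the missing AuthorizationPolicy CRD from the release tree.
kubectl apply -f install/kubernetes/helm/istio-init/files/crd-12.yaml

# Restart galley so it picks up the now-registered resource type;
# pilot and policy should recover once galley is serving MCP again.
kubectl -n istio-system delete pod -l app=galley
```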
This was my mistake. My istio.io repo was installed with this URL, which points at the daily builds: https://storage.googleapis.com/istio-prerelease/daily-build/master-latest-daily/charts
I changed it to https://gcsweb.istio.io/gcs/istio-release/releases/1.1.0/charts/ and that appears to be the correct one.
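A sketch of that repo correction, assuming the repo was added under the name `istio.io` as in the original install:

```shell
# Replace the daily-build chart repo with the 1.1.0 release repo.
helm repo remove istio.io
helm repo add istio.io https://gcsweb.istio.io/gcs/istio-release/releases/1.1.0/charts/
helm repo update
```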
I just reran everything from scratch; this time 1.1.0 was deployed instead of latest, and everything is stable. I guess this was user error after all. Thank you for your help in troubleshooting this.