rancher: ipsec logs samonitor: error initiating missing SA child

Rancher versions:
rancher/server: 1.6.14 / 1.6.15-rc3
rancher/agent: 1.2.9 / 1.2.10-rc3

Infrastructure Stack versions:
healthcheck: 0.3.3
ipsec: v0.13.9
network-services: 0.2.9

Docker version: 1.13.1
Kernel (uname -r): 4.4.0-1020-aws
AMI: ubuntu/images/hvm-ssd/ubuntu-xenial-16.04-amd64-server-20170619.1 (ami-1c45e273), the Rancher default for the eu-central-1 region

Operating system and kernel: Ubuntu 16.04

Type/provider of hosts: AWS

Setup details: single-node Rancher, and HA Rancher + RDS

Environment Template: Cattle

Steps to Reproduce: start 2-3 hosts with several stacks; leave the firewall rules wide open (all inbound & outbound TCP & UDP allowed from 0.0.0.0/0).

Results:

  1. healthcheck is sometimes OK, sometimes stuck in the initializing state. healthcheck container logs (many entries like these):
time="2018-02-12T22:11:09Z" level=error msg="Failed to report status a143f076-51fe-468e-9981-2a50ba01758d_8d5a078a-17ea-4766-9a31-d160db939f73_1=INIT: Bad response from [http://18.194.67.110:4444/v1/serviceevents], statusCode [409]. Status [409 Conflict]. Body: [{\"id\":\"987d9ca8-245c-4a43-93a5-9f271a6dce5c\",\"type\":\"error\",\"links\":{},\"actions\":{},\"status\":409,\"code\":\"Conflict\",\"message\":\"Conflict\",\"detail\":null,\"baseType\":\"error\"}]"
time="2018-02-12T22:11:09Z" level=error msg="Failed to report status a143f076-51fe-468e-9981-2a50ba01758d_6415cca1-c063-4c5e-91fc-5eaaaf59e0cd_1=DOWN: Bad response from [http://18.194.67.110:4444/v1/serviceevents], statusCode [409]. Status [409 Conflict]. Body: [{\"id\":\"3aa455a3-41d1-4a6c-bdc5-303722ead526\",\"type\":\"error\",\"links\":{},\"actions\":{},\"status\":409,\"code\":\"Conflict\",\"message\":\"Conflict\",\"detail\":null,\"baseType\":\"error\"}]"
  2. ipsec containers are constantly restarting, reinitializing, or in a failed state; they never actually reach OK (see the diagnostic sketch after these logs):
time="2018-02-12T22:27:41Z" level=error msg="samonitor: error initiating missing SA child-123.123.123.123: unsuccessful Initiate: establishing CHILD_SA 'child-123.123.123.123' failed"
07[ENC] parsed IKE_SA_
time="2018-02-12T22:27:42Z" level=info msg=Reconfiguring
time="2018-02-12T22:27:42Z" level=info msg=Reconfiguring
time="2018-02-12T22:27:42Z" level=info msg="Deleted policy: {Dst: 10.42.0.0/16, Src: 10.42.193.216/32, Proto: 0, DstPort: 0, SrcPort: 0, Dir: dir fwd, Priority: 10000, Index: 464962, Mark: <nil>, Tmpls: [{Dst: 10.42.89.166, Src: 123.123.123.123, Proto: esp, Mode: tunnel, Spi: 0x0, Reqid: 0x4d2}]}"
time="2018-02-12T22:27:42Z" level=info msg="Deleted policy: {Dst: 10.42.0.0/16, Src: 10.42.173.46/32, Proto: 0, DstPort: 0, SrcPort: 0, Dir: dir fwd, Priority: 10000, Index: 464938, Mark: <nil>, Tmpls: [{Dst: 10.42.89.166, Src: 123.123.123.123, Proto: esp, Mode: tunnel, Spi: 0x0, Reqid: 0x4d2}]}"
time="2018-02-12T22:27:42Z" level=info msg="Deleted policy: {Dst: 10.42.173.46/32, Src: 10.42.0.0/16, Proto: 0, DstPort: 0, SrcPort: 0, Dir: dir out, Priority: 10000, Index: 464921, Mark: <nil>, Tmpls: [{Dst: 123.123.123.123, Src: 10.42.89.166, Proto: esp, Mode: tunnel, Spi: 0x0, Reqid: 0x4d2}]}"
time="2018-02-12T22:27:42Z" level=info msg="Deleted policy: {Dst: 10.42.176.111/32, Src: 10.42.0.0/16, Proto: 0, DstPort: 0, SrcPort: 0, Dir: dir out, Priority: 10000, Index: 464897, Mark: <nil>, Tmpls: [{Dst: 172.31.13.217, Src: 10.42.89.166, Proto: esp, Mode: tunnel, Spi: 0x0, Reqid: 0x4d2}]}"
time="2018-02-12T22:27:42Z" level=info msg="Deleted policy: {Dst: 10.42.0.0/16, Src: 10.42.173.46/32, Proto: 0, DstPort: 0, SrcPort: 0, Dir: dir in, Priority: 10000, Index: 464928, Mark: <nil>, Tmpls: [{Dst: 10.42.89.166, Src: 123.123.123.123, Proto: esp, Mode: tunnel, Spi: 0x0, Reqid: 0x4d2}]}"
time="2018-02-12T22:27:42Z" level=info msg="Deleted policy: {Dst: 10.42.0.0/16, Src: 10.42.176.111/32, Proto: 0, DstPort: 0, SrcPort: 0, Dir: dir fwd, Priority: 10000, Index: 464914, Mark: <nil>, Tmpls: [{Dst: 10.42.89.166, Src: 172.31.13.217, Proto: esp, Mode: tunnel, Spi: 0x0, Reqid: 0x4d2}]}"
time="2018-02-12T22:27:42Z" level=info msg="Deleted policy: {Dst: 10.42.193.216/32, Src: 10.42.0.0/16, Proto: 0, DstPort: 0, SrcPort: 0, Dir: dir out, Priority: 10000, Index: 464945, Mark: <nil>, Tmpls: [{Dst: 123.123.123.123, Src: 10.42.89.166, Proto: esp, Mode: tunnel, Spi: 0x0, Reqid: 0x4d2}]}"
time="2018-02-12T22:27:42Z" level=info msg="Deleted policy: {Dst: 10.42.0.0/16, Src: 10.42.176.111/32, Proto: 0, DstPort: 0, SrcPort: 0, Dir: dir in, Priority: 10000, Index: 464904, Mark: <nil>, Tmpls: [{Dst: 10.42.89.166, Src: 172.31.13.217, Proto: esp, Mode: tunnel, Spi: 0x0, Reqid: 0x4d2}]}"
time="2018-02-12T22:27:42Z" level=info msg="Deleted policy: {Dst: 10.42.0.0/16, Src: 10.42.193.216/32, Proto: 0, DstPort: 0, SrcPort: 0, Dir: dir in, Priority: 10000, Index: 464952, Mark: <nil>, Tmpls: [{Dst: 10.42.89.166, Src: 123.123.123.123, Proto: esp, Mode: tunnel, Spi: 0x0, Reqid: 0x4d2}]}"
time="2018-02-12T22:27:43Z" level=info msg=Reconfiguring
time="2018-02-12T22:27:44Z" level=info msg=Reconfiguring
time="2018-02-12T22:27:44Z" level=info msg="Added policy: {Dst: 10.42.0.0/16, Src: 10.42.8.90/32, Proto: 0, DstPort: 0, SrcPort: 0, Dir: dir fwd, Priority: 10000, Index: 0, Mark: <nil>, Tmpls: [{Dst: 10.42.89.166, Src: 123.123.123.123, Proto: esp, Mode: tunnel, Spi: 0x0, Reqid: 0x4d2}]}"
time="2018-02-12T22:27:44Z" level=info msg="Added policy: {Dst: 10.42.149.194/32, Src: 10.42.0.0/16, Proto: 0, DstPort: 0, SrcPort: 0, Dir: dir out, Priority: 10000, Index: 0, Mark: <nil>, Tmpls: [{Dst: 172.31.13.217, Src: 10.42.89.166, Proto: esp, Mode: tunnel, Spi: 0x0, Reqid: 0x4d2}]}"
time="2018-02-12T22:27:44Z" level=info msg="Added policy: {Dst: 10.42.0.0/16, Src: 10.42.149.194/32, Proto: 0, DstPort: 0, SrcPort: 0, Dir: dir in, Priority: 10000, Index: 0, Mark: <nil>, Tmpls: [{Dst: 10.42.89.166, Src: 172.31.13.217, Proto: esp, Mode: tunnel, Spi: 0x0, Reqid: 0x4d2}]}"
time="2018-02-12T22:27:44Z" level=info msg="Added policy: {Dst: 10.42.0.0/16, Src: 10.42.149.194/32, Proto: 0, DstPort: 0, SrcPort: 0, Dir: dir fwd, Priority: 10000, Index: 0, Mark: <nil>, Tmpls: [{Dst: 10.42.89.166, Src: 172.31.13.217, Proto: esp, Mode: tunnel, Spi: 0x0, Reqid: 0x4d2}]}"
time="2018-02-12T22:27:44Z" level=info msg="Added policy: {Dst: 10.42.8.90/32, Src: 10.42.0.0/16, Proto: 0, DstPort: 0, SrcPort: 0, Dir: dir out, Priority: 10000, Index: 0, Mark: <nil>, Tmpls: [{Dst: 123.123.123.123, Src: 10.42.89.166, Proto: esp, Mode: tunnel, Spi: 0x0, Reqid: 0x4d2}]}"
time="2018-02-12T22:27:44Z" level=info msg="Added policy: {Dst: 10.42.0.0/16, Src: 10.42.8.90/32, Proto: 0, DstPort: 0, SrcPort: 0, Dir: dir in, Priority: 10000, Index: 0, Mark: <nil>, Tmpls: [{Dst: 10.42.89.166, Src: 123.123.123.123, Proto: esp, Mode: tunnel, Spi: 0x0, Reqid: 0x4d2}]}"
time="2018-02-12T22:27:46Z" level=info msg=Reconfiguring
time="2018-02-12T22:27:46Z" level=info msg=Reconfiguring
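
For anyone hitting the same CHILD_SA errors, a few checks can narrow it down. This is only a sketch under assumptions: that the rancher/ipsec image ships the standard strongSwan swanctl tool (not confirmed in this thread), and that the overlay uses the usual IKE/NAT-T ports, UDP 500 and 4500. Placeholders like <PEER_HOST_IP> and <IPSEC_CONTAINER> are hypothetical names to fill in.

```
# Sketch: verify IPsec connectivity between two hosts.
# UDP 500 (IKE) and 4500 (NAT-T) must be reachable host-to-host.
nc -vzu <PEER_HOST_IP> 500
nc -vzu <PEER_HOST_IP> 4500

# List the security associations strongSwan actually established
# (assumes the ipsec container includes swanctl; adjust the container name).
docker exec <IPSEC_CONTAINER> swanctl --list-sas
```

If the SA list stays empty while the peers keep reconnecting, check which IP each host registered with; a private address that peers cannot reach matches the symptoms above and the resolution further down in this thread.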

About this issue

  • State: closed
  • Created 6 years ago
  • Comments: 19 (8 by maintainers)

Most upvoted comments

If you want to obtain the dynamic public IP, you can fetch it from the EC2 instance metadata (public-ipv4):

docker run --rm --privileged -e CATTLE_AGENT_IP=$(curl http://169.254.169.254/latest/meta-data/public-ipv4) -v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/rancher:/var/lib/rancher rancher/agent:v1.2.10-rc4 http://123.123.123.123:8080/v1/scripts/blahblahblah1:blahblahblah2:blahblahblah3

However, this requires the ports to be open to 0.0.0.0/0, so using this setup on a private network is recommended.
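
A small variant of the same idea, as a sketch: prefer the public address and fall back to the private one when the instance has none. Both metadata keys are standard EC2 paths; the fallback behaviour itself is my assumption, not something from this thread.

```
# Prefer public-ipv4; fall back to local-ipv4 if the instance has no public IP.
# Both are standard EC2 instance metadata keys.
AGENT_IP=$(curl -sf http://169.254.169.254/latest/meta-data/public-ipv4 \
  || curl -sf http://169.254.169.254/latest/meta-data/local-ipv4)
echo "registering agent with CATTLE_AGENT_IP=$AGENT_IP"
```

Pass $AGENT_IP as CATTLE_AGENT_IP in the docker run command above.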

Thank you. I had not enabled the proxy protocol; enabling it solved the issue!

```
$ aws elb create-load-balancer-policy --load-balancer-name <LB_NAME> --policy-name <POLICY_NAME> --policy-type-name ProxyProtocolPolicyType --policy-attributes AttributeName=ProxyProtocol,AttributeValue=true
$ aws elb set-load-balancer-policies-for-backend-server --load-balancer-name <LB_NAME> --instance-port 443 --policy-names <POLICY_NAME>
$ aws elb set-load-balancer-policies-for-backend-server --load-balancer-name <LB_NAME> --instance-port 8080 --policy-names <POLICY_NAME>
```
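
To confirm the policy actually got attached to the backend ports, something like the following should work (describe-load-balancers lists the policies bound to each instance port):

```
# BackendServerDescriptions shows which policies are bound to each instance port.
aws elb describe-load-balancers --load-balancer-names <LB_NAME> \
  --query 'LoadBalancerDescriptions[0].BackendServerDescriptions'
```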