syndesis: The syndesis-server pod fails to start on OCP4

This is a…


[ ] Feature request
[ ] Regression (a behavior that used to work and stopped working in a new release)
[x] Bug report
[ ] Documentation issue or request

Description

I deployed the Syndesis Operator on an OpenShift cluster created by the OpenShift installer. The operator deployed successfully with the following commands:

$ oc new-project syndesis
$ oc apply -f deploy/syndesis-crd.yml
$ oc apply -f deploy/syndesis-operator.yml
$ oc get templates -n syndesis

$ oc process syndesis-operator -p NAMESPACE=syndesis | oc create -f -

I then applied the Syndesis CR:

$ oc apply -f deploy/syndesis.yml

I then listed the available routes and found that a syndesis and a todo route existed:

$ oc get routes
NAME       HOST/PORT                                                        PATH   SERVICES              PORT   TERMINATION          WILDCARD
syndesis   syndesis-syndesis2.apps.agreene2.devcluster.openshift.com               syndesis-oauthproxy   8443   reencrypt/Redirect   None
todo       todo-syndesis-syndesis2.apps.agreene2.devcluster.openshift.com   /      todo                  8080   edge/Allow           None

Visiting the syndesis route showed that requests for integrations and reservations were failing:


    Failing Integration Requests: syndesis-syndesis.apps.clusteranik120.devcluster.openshift.com/api/v1/metrics/integrations
    Failing Reservation Requests: syndesis-syndesis.apps.clusteranik120.devcluster.openshift.com/api/v1/event/reservations

A quick review of the running pods showed that the syndesis-server pod kept restarting and could not reach a running state; the logs from a short-lived pod are attached as syndesis-server.txt.
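For anyone reproducing this, a sketch of how such logs can be captured from a crash-looping pod (the label selector is an assumption, not confirmed from the Syndesis manifests):

```shell
# Watch the server pod restart (label selector is hypothetical).
oc get pods -n syndesis -l syndesis.io/component=syndesis-server

# Capture logs from the previous (crashed) container instance.
oc logs --previous deployment/syndesis-server -n syndesis > syndesis-server.txt
```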

About this issue

  • State: closed
  • Created 5 years ago
  • Comments: 19 (7 by maintainers)

Most upvoted comments

I created a branch ocp4 from master. Let’s consolidate changes there. I am working on changes for the CA cert issue.

Good catch @anik120 !

@heiko-braun @rhuss

Have there been some changes between OCP 3 and 4 in how the platform maps certificates into a Pod’s filesystem?

https://github.com/openshift/openshift-docs/issues/12487#issuecomment-431101731

Pods that currently consume the service-serving CA bundle from /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt should migrate to obtaining the CA bundle from a configMap annotated with "service.alpha.openshift.io/inject-cabundle=true"

Looks like some code changes are needed to make the operator OCP 4 compliant.
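As a minimal sketch of the migration the linked docs describe (the ConfigMap name and mount path here are hypothetical, not taken from the Syndesis manifests): create an empty ConfigMap carrying the inject annotation, then mount it into the pod where the application used to read service-ca.crt.

```yaml
# Hypothetical ConfigMap; the OpenShift service CA operator injects the
# CA bundle into its data because of the annotation below.
apiVersion: v1
kind: ConfigMap
metadata:
  name: syndesis-service-ca        # hypothetical name
  namespace: syndesis
  annotations:
    service.alpha.openshift.io/inject-cabundle: "true"
---
# Fragment of a pod spec mounting the injected bundle where the
# application expects the CA certificate (path is an assumption).
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
  - name: server
    image: example/image
    volumeMounts:
    - name: service-ca
      mountPath: /etc/pki/service-ca
      readOnly: true
  volumes:
  - name: service-ca
    configMap:
      name: syndesis-service-ca
```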

Thanks @anik120, your feedback is very useful. We haven’t yet done any verification on OKD 4, but it looks like we had better 😃