pipeline: Design for triggering PipelineRuns from events
Expected Behavior
We need to be able to trigger the creation of PipelineRuns from events, e.g. from knative/eventing events.
~~This is a placeholder issue we can use to gather requirements + ideas, which we can synthesize in this doc (note: doc editable by members of knative-dev@googlegroups.com).~~
~~The most recent and relevant proposal~~
Now THIS is the most recent proposal/design, which proposes creating a top-level project based on the POC at https://github.com/tektoncd/experimental/tree/master/tekton-listener 🎉🎉🎉
Actual Behavior
PipelineRuns, and all PipelineResources required to execute them, must be created manually.
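For concreteness, here is a minimal Go sketch of that manual step using the generated Tekton clientset. The pipeline name, the namespace, and the v1alpha1 package paths and call signatures are assumptions matching the API of this era, not a prescription:

```go
package main

import (
	"log"

	"github.com/tektoncd/pipeline/pkg/apis/pipeline/v1alpha1"
	versioned "github.com/tektoncd/pipeline/pkg/client/clientset/versioned"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client, err := versioned.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Any PipelineResources the Pipeline needs must be created the same
	// way, by hand, before this run can do anything useful.
	pr := &v1alpha1.PipelineRun{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "my-pipeline-run-"},
		Spec: v1alpha1.PipelineRunSpec{
			PipelineRef: v1alpha1.PipelineRef{Name: "my-pipeline"},
		},
	}
	if _, err := client.TektonV1alpha1().PipelineRuns("default").Create(pr); err != nil {
		log.Fatal(err)
	}
}
```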
About this issue
- State: closed
- Created 6 years ago
- Reactions: 1
- Comments: 15 (5 by maintainers)
Commits related to this issue
- Add initial list of owners for event triggering ⚡ Initial list of owners will be the folks who created the design (see https://github.com/tektoncd/pipeline/issues/315 for design docs and discussions) — committed to bobcatfish/triggers by bobcatfish 5 years ago
- Add initial list of owners for event triggering ⚡ Initial list of owners will be the folks who created the design (see https://github.com/tektoncd/pipeline/issues/315 for design docs and discussions) — committed to openshift/tektoncd-triggers by bobcatfish 5 years ago
Brainstorming about this topic…
With the existing API, the way to trigger a `Pipeline` (or a `Task`) is to define the `PipelineResources`, define the `PipelineRun` and run it, so the service that defines the `PipelineRun` also triggers it. I can imagine use cases where a `PipelineRun` is defined but not yet executed: for instance, if there are parts of the pipeline that we want to decouple, we can have `pipelineA` publish an event when it's done (e.g. https://github.com/knative/build-pipeline/issues/587) and `pipelineB` be triggered by that event. A real-life example could be: `pipelineA` runs some tests, and `pipelineB` does test-result post-processing or publishes a comment on GitHub. If `pipelineA` is used as a build template for a kservice, once it completes the kservice will be deployed; `pipelineB` will run asynchronously and perform any post-processing that is needed, but that should not block the service deployment (see the event-receiver sketch below).
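As a rough illustration of that `pipelineA` → event → `pipelineB` flow, here is a minimal sketch of an HTTP event receiver built on the CloudEvents Go SDK. The event type string and the `triggerPipelineB` helper are hypothetical names invented for illustration, and the import path assumes the SDK's v2 module:

```go
package main

import (
	"context"
	"log"

	cloudevents "github.com/cloudevents/sdk-go/v2"
)

// triggerPipelineB is a hypothetical helper: in practice it would create a
// PipelineRun for pipelineB via the Tekton clientset, as in the sketch above.
func triggerPipelineB(ctx context.Context, event cloudevents.Event) {
	log.Printf("would create a PipelineRun for pipelineB from event %s", event.ID())
}

func receive(ctx context.Context, event cloudevents.Event) {
	// Only react to the (hypothetical) "pipelineA finished" event type.
	if event.Type() == "dev.tekton.pipelinerun.completed" {
		triggerPipelineB(ctx, event)
	}
}

func main() {
	c, err := cloudevents.NewClientHTTP()
	if err != nil {
		log.Fatal(err)
	}
	// StartReceiver blocks, serving CloudEvents over HTTP (port 8080 by default).
	log.Fatal(c.StartReceiver(context.Background(), receive))
}
```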
A simplifying assumption would be that the service that generates `PipelineRun` objects knows everything needed by `pipelineA` and `pipelineB` from the beginning. If we remove that assumption, we need to create a way for a `PipelineRun` to receive inputs via a CloudEvent.

In terms of implementation, if the `PipelineRun` specifies a channel as trigger, I think there are two options:

1. The `PipelineRun` controller becomes an `Addressable` and subscribes to all channels defined in any `PipelineRun` that has not been executed yet.
2. When a `PipelineRun` is created, the `PipelineRun` controller generates an `Addressable` that subscribes to the channel. When the `Addressable` receives the event, it patches the `PipelineRun` with info from the CloudEvent and sets a field that tells the `PipelineRun` controller that the `PipelineRun` is now ready to be executed. The same applies for `TaskRun`.

In option (1) the controller needs to process CloudEvents. Having one place that receives all events may be more efficient, but it may become an issue in a large-scale system. Having one component that both does reconciliation and listens for triggers may also not be ideal, as the two functions may interfere with each other (a dispatch sketch follows).
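To make option (1) concrete, here is an illustrative sketch (not a definitive implementation) of a single receiver that dispatches incoming events to pending `PipelineRun`s keyed by event source. The registry shape, the channel source string, and the "mark ready" step are invented for illustration:

```go
package main

import (
	"context"
	"log"
	"sync"

	cloudevents "github.com/cloudevents/sdk-go/v2"
)

// registry maps an event source to the not-yet-executed PipelineRuns waiting
// on it; in a real controller the reconcile loop would maintain this as
// PipelineRuns come and go.
type registry struct {
	mu      sync.Mutex
	pending map[string][]string
}

func (r *registry) dispatch(ctx context.Context, event cloudevents.Event) {
	r.mu.Lock()
	runs := r.pending[event.Source()]
	delete(r.pending, event.Source())
	r.mu.Unlock()
	for _, name := range runs {
		// Hypothetical: flip the PipelineRun into a runnable state so the
		// reconciler picks it up, e.g. via a patch as in option (2) below.
		log.Printf("marking PipelineRun %s ready (event %s)", name, event.ID())
	}
}

func main() {
	r := &registry{pending: map[string][]string{
		"/channels/test-results": {"postprocess-run-1"},
	}}
	c, err := cloudevents.NewClientHTTP()
	if err != nil {
		log.Fatal(err)
	}
	log.Fatal(c.StartReceiver(context.Background(), r.dispatch))
}
```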
In option (2) the controller logic is not changed that much and there is a good separation of concerns. It may be possible to implement multiple `Addressable`s that react to different kinds of events, even though the same could be achieved by using adapters that convert events to CloudEvents. The downside is that we create one service just to receive a single event (a sketch of the patch step follows below).

An option (3) in the middle could be to have a single external addressable that subscribes to all events of all pipelines and tasks, which lives as long as the controller lives, with subscriptions added and removed on demand.
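To make the option (2) patch step concrete, here is a hedged sketch of the per-run `Addressable` patching its `PipelineRun`. The `spec.ready` field is entirely hypothetical (the real design would need to add such a field, or an annotation, to the API), and the clientset `Patch` signature assumes the client-go generation of this era:

```go
package main

import (
	"log"

	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/rest"

	versioned "github.com/tektoncd/pipeline/pkg/client/clientset/versioned"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client, err := versioned.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Merge-patch a hypothetical field telling the reconciler that the run
	// has received its triggering event and may now be executed.
	patch := []byte(`{"spec":{"ready":true}}`)
	if _, err := client.TektonV1alpha1().PipelineRuns("default").
		Patch("postprocess-run-1", types.MergePatchType, patch); err != nil {
		log.Fatal(err)
	}
}
```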