pipeline: Design for triggering PipelineRuns from events

Expected Behavior

We need to be able to trigger the creation of PipelineRuns from events, e.g. from knative/eventing events.

~This is a placeholder issue we can use to gather requirements + ideas, which we can synthesize in this doc (note: the doc is editable by members of knative-dev@googlegroups.com).~

~The most recent and relevant proposal~

Now THIS is the most recent proposal/design, which proposes creating a top-level project based on the POC at https://github.com/tektoncd/experimental/tree/master/tekton-listener 🎉🎉🎉

Actual Behavior

PipelineRuns, and all PipelineResources required to execute them, must be created manually.

About this issue

  • Original URL
  • State: closed
  • Created 6 years ago
  • Reactions: 1
  • Comments: 15 (5 by maintainers)

Most upvoted comments

Brainstorming about this topic…

With the existing API, the way to trigger a Pipeline (or a Task) is to define the PipelineResources, define the PipelineRun, and run it, so the service that defines the PipelineRun also triggers it. I can imagine use cases where a PipelineRun is defined but not yet executed. For instance, if there are parts of the pipeline that we want to decouple, we can have pipelineA publish an event when it's done (e.g. https://github.com/knative/build-pipeline/issues/587), and pipelineB be triggered by that event.

A real-life example: pipelineA runs some tests, and pipelineB does test-result post-processing, or publishes a comment on GitHub or so. If pipelineA is used as a build template for a kservice, the kservice will be deployed once pipelineA completes. pipelineB will run asynchronously and perform any post-processing that is needed but that should not block the service deployment.
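For concreteness, here is a minimal sketch of that manual flow with the current API (the resource and pipeline names are made up; the shapes follow the v1alpha1 CRDs):

```yaml
# A git PipelineResource that the run consumes (hypothetical names throughout).
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
  name: example-repo
spec:
  type: git
  params:
    - name: url
      value: https://github.com/example/repo
---
# The PipelineRun that binds the resource and starts pipelineA immediately:
# whoever creates this object is also the trigger.
apiVersion: tekton.dev/v1alpha1
kind: PipelineRun
metadata:
  name: pipeline-a-run
spec:
  pipelineRef:
    name: pipeline-a
  resources:
    - name: source
      resourceRef:
        name: example-repo
```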

A simplifying assumption would be that the service that generates PipelineRun objects knows everything needed by pipelineA and pipelineB from the beginning. If we remove that assumption, we need a way for a PipelineRun to receive inputs via a CloudEvent.
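A rough sketch of what dropping that assumption could look like: a PipelineRun that declares a channel as its trigger and leaves some inputs to be filled in from the incoming CloudEvent. None of these fields exist today; `trigger` and `eventBindings` are invented here purely for illustration.

```yaml
apiVersion: tekton.dev/v1alpha1
kind: PipelineRun
metadata:
  name: pipeline-b-run
spec:
  pipelineRef:
    name: pipeline-b
  # Hypothetical: do not execute on creation; wait for an event on this channel.
  trigger:
    channel:
      apiVersion: eventing.knative.dev/v1alpha1
      kind: Channel
      name: pipeline-a-completed
  # Hypothetical: map CloudEvent payload fields onto the run's inputs
  # when the event arrives.
  eventBindings:
    - input: test-results-url
      from: data.resultsURL
```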

In terms of implementation, if the PipelineRun specifies a channel as its trigger, I think there are two options.

  1. The PipelineRun controller becomes an Addressable and subscribes to all channels defined in any PipelineRun that has not been executed yet.
  2. When the PipelineRun is created, the PipelineRun controller generates an Addressable that subscribes to the channel. When the Addressable receives the event, it patches the PipelineRun with info from the CloudEvent and sets a field that tells the PipelineRun controller that the PipelineRun is now ready to be executed (see the sketch after this list). The same applies to TaskRuns.
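Option (2) maps naturally onto Knative's Subscription object. A sketch of what the controller could generate for one pending PipelineRun (names are hypothetical; the shape follows Knative Eventing's v1alpha1 Subscription API):

```yaml
# Generated by the PipelineRun controller for a single pending PipelineRun.
apiVersion: eventing.knative.dev/v1alpha1
kind: Subscription
metadata:
  name: pipeline-b-run-subscription
spec:
  channel:
    apiVersion: eventing.knative.dev/v1alpha1
    kind: Channel
    name: pipeline-a-completed
  subscriber:
    ref:
      # Hypothetical one-shot Addressable that patches pipeline-b-run with
      # data from the CloudEvent and marks the run as ready to execute.
      apiVersion: v1
      kind: Service
      name: pipeline-b-run-listener
```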

In option (1) the controller needs to process CloudEvents. Having one place that receives all events may be more efficient, but it may become an issue in a large-scale system. Having one component doing reconciliation and listening for triggers may also not be ideal, as the two functions may interfere with each other.

In option (2) the controller logic does not change much and there is a good separation of concerns. It may be possible to implement multiple Addressables that react to different kinds of events… even though the same could be achieved by using Adapters that convert events to CloudEvents. The downside is that we create one service just to receive a single event.

An option (3) in the middle could be to have a single external Addressable that subscribes to all events of all pipelines and tasks, lives as long as the controller does, and has subscriptions added and removed on demand.
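In that case the per-run objects from option (2) could collapse into one long-lived subscriber, with the controller creating and deleting a Subscription per watched channel as PipelineRuns come and go (again hypothetical names, same Subscription shape as above):

```yaml
# One Subscription per watched channel, all funneling into the same
# long-lived listener that lives alongside the controller.
apiVersion: eventing.knative.dev/v1alpha1
kind: Subscription
metadata:
  name: pipeline-a-completed-to-tekton-listener
spec:
  channel:
    apiVersion: eventing.knative.dev/v1alpha1
    kind: Channel
    name: pipeline-a-completed
  subscriber:
    ref:
      apiVersion: v1
      kind: Service
      # Shared listener; it would have to match each incoming CloudEvent
      # (e.g. by source or type attributes) to the pending PipelineRun.
      name: tekton-event-listener
```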