dapr: [Proposal] Workflow building block and engine
In what area(s)?
/area runtime
What?
This document proposes that the Dapr runtime be extended to include a new workflow building block. This building block, in combination with a lightweight, portable, workflow engine will enable developers to express workflows as code that can be executed, interacted with, monitored, and debugged using the Dapr runtime.
Why?
Many complex business processes are well modeled as a workflow - a set of steps needing to be orchestrated that require resiliency and guarantee completion (success or failure are both completions). To build such workflows, developers are often faced with needing to solve a host of complex problems, including (but not limited to):
- Scheduling
- Lifecycle management
- State storage
- Monitoring and debugging
- Resiliency
- Failure handling mechanisms
Based on available data, it is clear that workflows are quite popular; at the time of this writing, the number of executions for hosted workflows and their tasks is in the billions per day across tens of thousands of Azure subscriptions.
What is a workflow?
A workflow, for the purpose of this proposal, is defined as application logic that defines a business process or data flow that:
- Has a specific, pre-defined, deterministic lifecycle (e.g., Pending -> Running -> [Completed | Failed | Terminated])
- Is guaranteed to complete
- Is durable (i.e., completes in the face of transient errors)
- Can be scheduled to start or execute steps at or after some future time
- Can be paused and resumed (explicitly or implicitly)
- Can execute portions of the workflow in serial or parallel
- Can be directly addressed by external agents (i.e., an instance of the workflow can be interacted with directly - paused, resumed, queried, etc.)
- May be versioned
- May be stateful
- May create new sub-workflows and optionally wait for those to complete before progressing
- May rely on external components to perform its job (e.g., HTTPS API calls, pub/sub message queues, etc.)
Why Dapr?
Dapr already contains many of the building blocks required to provide reliability, scalability, and durability to the execution of workflows. Building such an engine inside Dapr, and providing the necessary building blocks, will increase developer productivity through reuse of existing features and independence from the underlying execution mechanism, thereby increasing portability.
In addition to the built-in execution engine, Dapr can provide a consistent programming interface for interacting with third-party workflow execution systems (e.g., AWS SWF, Apache Camel, Drools) for those who are already using these tools, thereby providing a standardized interface for working with both external workflows and those running inside Dapr.
Proposal
High-level overview of changes
We propose that the following features / capabilities be added to the Dapr runtime:
- A new “workflow” building block
- A portable, lightweight, workflow engine embedded into the Dapr sidecar capable of supporting long-running, resilient, and durable workflows through Dapr’s building blocks
- An expressive, developer-friendly, programming model for building workflows as code
- Support for containerized, declarative, workflows (such as the CNCF Serverless Workflow specification)
- Extensions to the Dapr dashboard for monitoring / managing workflow execution
- APIs for interacting with workflows
The Workflow building block
As mentioned before, this proposal includes the addition of a new workflow building block. Like most of the other Dapr building blocks (state stores, pubsub, etc.) the workflow building block will consist of two primary things:
- A pluggable component model for integrating various workflow engines
- A set of APIs for managing workflows (start, schedule, pause, resume, cancel)
Similar to the built-in support for actors, we also propose implementing a built-in runtime for workflows (see the DTFx-go engine described in the next section). Unlike actors, the workflow runtime component can be swapped out with an alternate implementation. If developers want to work with other workflow engines, such as externally hosted workflow services like Azure Logic Apps, AWS Step Functions, or Temporal.io, they can do so with alternate community-contributed workflow components.
The value of this building block for vendors is that workflows supported by their platforms can be exposed as APIs with support for HTTP and the Dapr SDKs. The less visible benefits of mTLS, distributed tracing, etc. will also be available. Various abstractions, such as async HTTP polling, can also be supported via Dapr without the workflow vendor needing to implement them themselves.
Introducing DTFx-go
We propose adding a lightweight, portable, embedded workflow engine (DTFx-go) to the Dapr sidecar that leverages existing Dapr components, including actors and state storage, in its underlying implementation. Because it is lightweight and portable, developers will be able to execute workflows that run inside DTFx-go locally as well as in production with minimal overhead; this enhances the developer experience by integrating workflows with the existing Dapr development model that users enjoy.
The new engine will be written in Go and inspired by the existing Durable Task Framework (DTFx) engine. We’ll call this new version of the framework DTFx-go to distinguish it from the .NET implementation (which is not part of this proposal), and it will exist as an open-source project with a permissive license (e.g., Apache 2.0) so that it remains compatible as a dependency for CNCF projects. Note that it’s important to ensure this engine remains lightweight so as not to noticeably increase the size of the Dapr sidecar.
Importantly, DTFx-go will not be exposed to the application layer. Rather, the Dapr sidecar will expose DTFx-go functionality over a gRPC stream. The Dapr sidecar will not execute any app-specific workflow logic or load any declarative workflow documents. Instead, app containers will be responsible for hosting the actual workflow logic. The Dapr sidecar will send and receive workflow commands over gRPC to and from the connected app’s workflow logic and execute commands on behalf of the workflow (service invocation, invoking bindings, etc.). Other concerns such as activation, scale-out, and state persistence will be handled by internally managed actors. More details on all of this will be discussed in subsequent sections.
Execution, scheduling and resilience
Internally, Dapr workflow instances will be implemented as actors. Actors drive workflow execution by communicating with the workflow SDK over a gRPC stream. By using actors, the problems of placement and scalability are already solved for us.

The execution of individual workflows will be triggered using actor reminders as they are both persistent and durable (two critical features of workflows). If a container or node crashes during a workflow’s execution, the actor’s reminder will ensure it gets activated again and resumes where it left off (using state storage to provide durability, see below).
To prevent a workflow from blocking unintentionally, each workflow will be composed of two separate actor components: one acting as the scheduler/coordinator and the other performing the actual work (calling API services, performing computation, etc.).

Storage of state and durability
In order for a workflow execution to reliably complete in the face of transient errors, it must be durable – meaning that it is able to store data at checkpoints as it makes progress. To achieve this, workflow executions will rely on Dapr’s state storage to provide stable storage such that the workflow can be safely resumed from a known state in the event that it is explicitly paused or a step is prematurely terminated (system failure, lack of resources, etc.).
Workflows as code
The term “workflow as code” refers to the implementation of a workflow’s logic using general purpose programming languages. “Workflow as code” is used in a growing number of modern workflow frameworks, such as Azure Durable Functions, Temporal.io, and Prefect (Orion). The advantage of this approach is its developer-friendliness. Developers can use a programming language that they already know (no need to learn a new DSL or YAML schema), they have access to the language’s standard libraries, can build their own libraries and abstractions, can use debuggers and examine local variables, and can even write unit tests for their workflows just like they would any other part of their application logic.
The Dapr SDK will internally communicate with the DTFx-go gRPC endpoint in the Dapr sidecar to receive new workflow events and send new workflow commands, but these protocol details will be hidden from the developer. Due to the complexities of the workflow protocol, we are not proposing any HTTP API for the runtime aspect of this feature.
Support for declarative workflows
We expect workflows as code to be very popular because working with code is natural for developers and is much more expressive and flexible than declarative workflow modeling languages. In spite of this, there will still be users who prefer or require workflows to be declarative. To support this, we propose building an experience for declarative workflows as a layer on top of the “workflow as code” foundation. A variety of declarative workflows could be supported in this way. For example, this model could be used to support the AWS Step Functions workflow syntax, the Azure Logic Apps workflow syntax, or even the Google Cloud Workflow syntax. However, for the purpose of this proposal, we’ll focus on what it would look like to support the CNCF Serverless Workflow specification. Note, however, that the proposed model could be used to support any number of declarative workflow schemas.
CNCF Serverless Workflows
Serverless Workflow (SLWF) consists of an open-source, standards-based DSL and dev tools for authoring and validating workflows in either JSON or YAML. SLWF was specifically selected for this proposal because it represents a cloud native and industry standard way to author workflows. A set of open-source tools for generating and validating these workflows already exists and can be adopted by the community. It’s also an ideal fit for Dapr since it’s under the CNCF umbrella (currently as a sandbox project). This proposal would support the SLWF project by providing it with a lightweight, portable runtime – i.e., the Dapr sidecar.
Hosting Serverless Workflows
In this proposal, the Dapr SDKs are used to build a new, portable SLWF runtime that leverages the Dapr sidecar. It would most likely be implemented as a reusable container image and support loading workflow definition files from Dapr state stores (the exact details need to be worked out). Note that the Dapr sidecar doesn’t load any workflow definitions. Rather, the sidecar simply drives the execution of the workflows, leaving all other details to the application layer.
API
Start Workflow API
HTTP / gRPC
Developers can start workflow instances by issuing an HTTP (or gRPC) API call to the Dapr sidecar:
POST http://localhost:3500/v1.0/workflows/{workflowType}/{instanceId}/start
Workflows are assumed to have a type that is identified by the {workflowType} parameter. Each workflow instance must also be created with a unique {instanceId} value. The payload of the request is the input of the workflow. If a workflow instance with this ID already exists, this call will fail with an HTTP 409 Conflict.
To support the asynchronous HTTP polling pattern, this API will return an HTTP 202 Accepted response with a Location header containing a URL that can be used to get the status of the workflow (see further below). When the workflow completes, this endpoint will return an HTTP 200 response. If it fails, the endpoint can return a 4XX or 5XX error HTTP response code. Some of these details may need to be configurable since there is no universal protocol for async API handling.
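To illustrate the polling pattern from the caller’s side, here is a minimal TypeScript sketch (assumes Node 18+ for the global fetch API). The route shape, the 409 behavior, and the 202-plus-Location response come from the proposal above; the helper names and the assumption that the Location URL returns 202 while the workflow is running and 200 when it completes are illustrative, not a finalized contract.

const daprHttpPort = process.env.DAPR_HTTP_PORT || "3500"; // Dapr sidecar HTTP port

// Start a new workflow instance; the request payload becomes the workflow input.
async function startWorkflow(workflowType: string, instanceId: string, input: unknown): Promise<string> {
  const res = await fetch(
    `http://localhost:${daprHttpPort}/v1.0/workflows/${workflowType}/${instanceId}/start`,
    { method: "POST", headers: { "Content-Type": "application/json" }, body: JSON.stringify(input) }
  );
  if (res.status === 409) throw new Error(`workflow instance '${instanceId}' already exists`);
  // 202 Accepted: the Location header points at a status URL for async polling.
  return res.headers.get("Location")!;
}

// Poll the status URL until the workflow completes (assumed: 202 while running, 200 when done).
async function waitForCompletion(statusUrl: string, intervalMs = 1000): Promise<unknown> {
  for (;;) {
    const res = await fetch(statusUrl);
    if (res.status === 200) return res.json(); // completed
    if (res.status >= 400) throw new Error(`workflow failed with HTTP ${res.status}`);
    await new Promise((resolve) => setTimeout(resolve, intervalMs)); // still running, poll again
  }
}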
Input bindings
For certain types of automation scenarios, it can be useful to trigger new instances of workflows directly from Dapr input bindings. For example, it may be useful to trigger a workflow in response to a tweet from a particular user account using the Twitter input binding. Another example is starting a new workflow in response to a Kubernetes event, like a deployment creation event.
The instance ID and input payload for the workflow depend on the configuration of the input binding. For example, a user may want to use a Tweet’s unique ID or the name of the Kubernetes deployment as the instance ID.
Pub/Sub
Workflows can also be started directly from pub/sub events, similar to the proposal for Actor pub/sub. Configuration on the pub/sub topic can be used to identify an appropriate instance ID and input payload to use for initializing the workflow. In the simplest case, the source + ID of the cloud event message can be used as the workflow’s instance ID.
Terminate workflow API
HTTP / gRPC
Workflow instances can also be terminated using an explicit API call.
POST http://localhost:3500/v1.0/workflows/{workflowType}/{instanceId}/terminate
Workflow termination is primarily an operation that a service operator takes if a particular business process needs to be cancelled, or if a problem with the workflow requires it to be stopped to mitigate impact to other services.
If a payload is included in the POST request, it will be saved as the output of the workflow instance.
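As a sketch, a caller could terminate an instance in the same fetch-based style as above (the route comes from the proposal; treating the optional body as the recorded output follows the sentence above, and the helper name is hypothetical):

// Terminate a running workflow instance; the optional body is saved as the instance's output.
async function terminateWorkflow(workflowType: string, instanceId: string, output?: unknown): Promise<void> {
  const res = await fetch(
    `http://localhost:3500/v1.0/workflows/${workflowType}/${instanceId}/terminate`,
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: output !== undefined ? JSON.stringify(output) : undefined,
    }
  );
  if (!res.ok) throw new Error(`terminate failed with HTTP ${res.status}`);
}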
Raise Event API
Workflows are especially useful when they can wait for and be driven by external events. For example, a workflow could subscribe to events from a pubsub topic as shown in the Phone Verification sample. However, this capability shouldn’t be limited to pub/sub events.
HTTP / gRPC
An API should exist for publishing events directly to a workflow instance:
POST http://localhost:3500/v1.0/workflows/{workflowType}/{instanceId}/raiseEvent
The result of the “raise event” API is an HTTP 202 Accepted, indicating that the event was received but possibly not yet processed. A workflow can consume an external event using the waitForExternalEvent SDK method.
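A hedged sketch of both sides of this interaction follows. The raiseEvent route and the 202 response come from the proposal; how the event name is conveyed (path, query, or body) is not specified, so a body field is assumed here, and the waitForExternalEvent signature is likewise assumed based on the SDK style used in the examples further below.

// Deliver an external event to a running workflow instance (returns HTTP 202 Accepted).
async function raiseEvent(workflowType: string, instanceId: string, eventName: string, data: unknown): Promise<void> {
  await fetch(
    `http://localhost:3500/v1.0/workflows/${workflowType}/${instanceId}/raiseEvent`,
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ eventName, data }), // assumed payload shape
    }
  );
}

// Inside a workflow, the event would then be consumed with the SDK's waitForExternalEvent
// method (signature assumed), e.g.:
//   const approval = yield context.waitForExternalEvent("approval-received");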
Get workflow metadata API
HTTP / gRPC
Users can fetch the metadata of a workflow instance using an explicit API call.
GET http://localhost:3500/v1.0/workflows/{workflowType}/{instanceId}
The result of this call is workflow instance metadata, such as its start time, runtime status, completion time (if completed), and custom or runtime-specific status. If supported by the target runtime, workflow inputs and outputs can also be fetched using the query API.
Purge workflow metadata API
Users can delete all state associated with a workflow using the following API:
DELETE http://localhost:3500/v1.0/workflows/{workflowType}/{instanceId}
When using the embedded workflow component, this will delete all state stored by the workflow’s underlying actor(s).
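A minimal sketch of these two metadata operations (the routes come from the proposal; the 404 behavior for unknown instances is an assumption, and the helper names are hypothetical):

// Fetch instance metadata such as start time, runtime status, and completion time.
async function getWorkflowMetadata(workflowType: string, instanceId: string): Promise<unknown> {
  const res = await fetch(`http://localhost:3500/v1.0/workflows/${workflowType}/${instanceId}`);
  if (res.status === 404) return undefined; // assumed behavior for unknown instances
  return res.json();
}

// Delete all state associated with a workflow instance (including its underlying actor state
// when the embedded engine is used).
async function purgeWorkflow(workflowType: string, instanceId: string): Promise<void> {
  await fetch(`http://localhost:3500/v1.0/workflows/${workflowType}/${instanceId}`, { method: "DELETE" });
}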
Footnotes and Examples
Example 1: Bank transaction
In this example, the workflow is implemented as a JavaScript generator function. “bank1” and “bank2” are microservice apps that use Dapr, each of which exposes “withdraw” and “deposit” APIs. The Dapr APIs available to the workflow come from the context parameter object and return a “task”, which is effectively the same as a Promise. Calling yield on the task causes the workflow to durably checkpoint its progress and wait until Dapr responds with the output of the service method. The value of the task is the service invocation result. If any service method call fails with an error, the error is surfaced as a raised JavaScript error that can be caught using normal try/catch syntax. This code can also be debugged using a Node.js debugger.
Note that the details around how code is written will vary depending on the language. For example, a C# SDK would allow developers to use async/await instead of yield. Regardless of the language details, the core capabilities will be the same across all languages.
import { DaprWorkflowClient, DaprWorkflowContext, HttpMethod } from "dapr-client";

const daprHost = process.env.DAPR_HOST || "127.0.0.1"; // Dapr sidecar host
const daprPort = process.env.DAPR_WF_PORT || "50001";  // Dapr sidecar port for workflow
const workflowClient = new DaprWorkflowClient(daprHost, daprPort);

// Funds transfer workflow which receives a context object from Dapr and an input
workflowClient.addWorkflow('transfer-funds-workflow', function*(context: DaprWorkflowContext, op: any) {
  // use built-in methods for generating pseudo-random data in a workflow-safe way
  const transactionId = context.createV5uuid();

  // try to withdraw funds from the source account.
  const success = yield context.invoker.invoke("bank1", "withdraw", HttpMethod.POST, {
    srcAccount: op.srcAccount,
    amount: op.amount,
    transactionId
  });

  if (!success.success) {
    return "Insufficient funds";
  }

  try {
    // attempt to deposit into the dest account, which is part of a separate microservice app
    yield context.invoker.invoke("bank2", "deposit", HttpMethod.POST, {
      destAccount: op.destAccount,
      amount: op.amount,
      transactionId
    });
    return "success";
  } catch {
    // compensate for failures by returning the funds to the original account
    yield context.invoker.invoke("bank1", "deposit", HttpMethod.POST, {
      destAccount: op.srcAccount,
      amount: op.amount,
      transactionId
    });
    return "failure";
  }
});

// Call start() to start processing workflow events
workflowClient.start();
Example 2: Phone Verification
Here’s another sample that shows how a developer might build an SMS phone verification workflow. The workflow receives a user’s phone number, creates a challenge code, delivers the challenge code to the user’s SMS number, and waits for the user to respond with the correct challenge code.
The important takeaway is that the end-to-end workflow can be represented as a single, easy-to-understand function. Rather than relying directly on actors to hold state explicitly, state (such as the challenge code) can simply be stored in local variables, drastically reducing the overall code complexity and making the solution easily unit testable.
import { DaprWorkflowClient, DaprWorkflowContext, HttpMethod } from "dapr-client";

const daprHost = process.env.DAPR_HOST || "127.0.0.1"; // Dapr sidecar host
const daprPort = process.env.DAPR_WF_PORT || "50001";  // Dapr sidecar port for workflow
const workflowClient = new DaprWorkflowClient(daprHost, daprPort);

// Phone number verification workflow which receives a context object from Dapr and an input
workflowClient.addWorkflow('phone-verification', function*(context: DaprWorkflowContext, phoneNumber: string) {
  // Create a challenge code and send a notification to the user's phone
  const challengeCode = yield context.invoker.invoke("authService", "createSmsChallenge", HttpMethod.POST, {
    phoneNumber
  });

  // Schedule a durable timer for some future date (e.g. 5 minutes or perhaps even 24 hours from now)
  const expirationTimer = context.createTimer(challengeCode.expiration);

  // The user gets three tries to respond with the right challenge code
  let authenticated = false;
  for (let i = 0; i < 3; i++) {
    // subscribe to the event representing the user challenge response
    const responseTask = context.pubsub.subscribeOnce("my-pubsub-component", "sms-challenge-topic");

    // block the workflow until either the timeout expires or we get a response event
    const winner = yield context.whenAny([expirationTimer, responseTask]);
    if (winner === expirationTimer) {
      break; // timeout expired
    }

    // we got a pubsub event with the user's SMS challenge response
    if (responseTask.result.data.challengeNumber === challengeCode.number) {
      authenticated = true; // challenge verified!
      expirationTimer.cancel();
      break;
    }
  }

  // the return value is available as part of the workflow status. Alternatively, we could send a notification.
  return authenticated;
});

// Call listen() to start processing workflow events
workflowClient.listen();
Example 3: Declarative workflow for monitoring patient vitals
The following is an example of a very simple SLWF workflow definition that listens on three different event types and invokes a function depending on which event was received.
{
  "id": "monitorPatientVitalsWorkflow",
  "version": "1.0",
  "name": "Monitor Patient Vitals Workflow",
  "states": [
    {
      "name": "Monitor Vitals",
      "type": "event",
      "onEvents": [
        {
          "eventRefs": [
            "High Body Temp Event",
            "High Blood Pressure Event"
          ],
          "actions": [{"functionRef": "Invoke Dispatch Nurse Function"}]
        },
        {
          "eventRefs": ["High Respiration Rate Event"],
          "actions": [{"functionRef": "Invoke Dispatch Pulmonologist Function"}]
        }
      ],
      "end": true
    }
  ],
  "functions": "file://my/services/asyncapipatientservicedefs.json",
  "events": "file://my/events/patientcloudeventsdefs.yml"
}
The functions defined in this workflow would map to Dapr service invocation calls. Similarly, the events would map to incoming Dapr pub/sub events. Behind the scenes, the runtime (which is built using the Dapr SDK APIs mentioned previously) handles the communication with the Dapr sidecar, which in turn manages the checkpointing of state and recovery semantics for the workflows.
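To make that mapping concrete, the sketch below shows roughly what an SLWF runtime layer could generate for the definition above, expressed with the same hypothetical workflow-as-code SDK used in the earlier examples. Only the general layering is part of the proposal; the topic names, app ID, and method names are illustrative assumptions, and a real SLWF runtime would perform this translation generically from the JSON/YAML document rather than hand-writing it.

import { DaprWorkflowClient, DaprWorkflowContext, HttpMethod } from "dapr-client";

const workflowClient = new DaprWorkflowClient("127.0.0.1", "50001");

// Hand-written equivalent of the declarative definition above.
workflowClient.addWorkflow("monitorPatientVitalsWorkflow", function*(context: DaprWorkflowContext) {
  // The eventRefs map to Dapr pub/sub subscriptions (component and topic names are assumptions).
  const highTempOrBp = context.pubsub.subscribeOnce("my-pubsub-component", "high-temp-or-bp-topic");
  const highRespRate = context.pubsub.subscribeOnce("my-pubsub-component", "high-respiration-topic");

  // Wait for whichever event arrives first, then run the mapped action.
  const winner = yield context.whenAny([highTempOrBp, highRespRate]);
  if (winner === highTempOrBp) {
    // "Invoke Dispatch Nurse Function" maps to a Dapr service invocation call (app/method assumed).
    yield context.invoker.invoke("patient-service", "dispatchNurse", HttpMethod.POST, winner.result.data);
  } else {
    // "Invoke Dispatch Pulmonologist Function" maps to another service invocation call.
    yield context.invoker.invoke("patient-service", "dispatchPulmonologist", HttpMethod.POST, winner.result.data);
  }
  // "end": true - the workflow completes after handling a single event.
});

workflowClient.start();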
Steering Committee Update
The Dapr STC voted in favor of accepting this proposal and the Workflows building block.
For awareness, I’ve started working on a reference implementation for the embedded workflow engine. If you’re interested in following along, it can be found here.
Note that the reference implementation is for demonstrating the feasibility of building a reliable Durable Task-based workflow engine backed by Dapr Actors. It’s written in C#, uses the Dapr Actors SDK for .NET, and will run outside the Dapr sidecar. It’s not intended to be used in the final implementation. However, it will support the same gRPC contract and can therefore be used to build/test Dapr Workflow SDK implementations in parallel with the actual embedded engine development.
Correct - the embedded engine will be the “out of the box” option for anyone that doesn’t want to install additional infrastructure into their cluster. We want external workflow services to be supported by the building block (we expect many will prefer to use workflow systems that they already know and love), but not required.
Yes, this is essentially a programming model that sits on top of actors, so you’ll still need to configure a state store that supports actors.
I’m not too worried about confusion because DTFx-go is just an implementation detail that most users won’t know or care about. Users of Dapr will simply be presented with “Dapr Workflow” as a concept and we wouldn’t necessarily expose the same extensibility or tooling. The existing DTFx isn’t super well-known outside of Azure Functions or internal Microsoft circles, so I’m assuming there won’t be a lot of opportunity for confusion even for folks who care to look at the implementation details.
FWIW, the DTFx-go backend storage provider will be one built specifically for storing state and load balancing via the Dapr Actors infrastructure.
Yes, integration with the Dapr Dashboard is definitely part of the plan.
Just an update for those who may be interested: the first iteration of the Durable Task Framework clone for Go is here: https://github.com/microsoft/durabletask-go.
This is just the core engine, the gRPC contract (for the runtime), and the core engine abstractions. I verified that it’s compatible with the existing gRPC-based Durable Task SDK for .NET (the same one I used as the basis for my POC demo) by running all the existing Durable Task integration tests and pointing them at this engine. This is part 1 of delivering the embedded Dapr Workflow engine.
Part 2 will include the full Dapr integration. The code above doesn’t include anything related to Dapr - it’s pure Durable Task Framework stuff. In the next phase of the engine work, the plan is to import the above package as a dependency into the Dapr GitHub repo (starting in the feature/workflows branch of dapr/dapr) and implement the Actor-based backend using the Dapr Go SDK. Once this is done, we should be able to faithfully reproduce the previous demo but using the real Dapr sidecar without any POC bits or extra sidecars.
Sorry for the radio silence. I will partially blame summer vacation schedules. 😃
Just as an FYI to interested folks, some initial POC work has been completed and we’re hoping to share progress and maybe do some demos at an upcoming Dapr Community Call.
Friends, if you’re interested in the Serverless Workflow specification for this effort, please let us know! We are on the CNCF Slack, in the #serverless-workflow channel.
For anyone who may have missed the community call, you can see a recording of the Dapr Workflow POCs using this link: https://www.youtube.com/watch?v=8Aj1WUzVvGs&t=115s.
@tstojecki
I’m not an authority on this, but there are a couple of ways this could go:
- It is up to your user code to try {} / catch {} around any operation that may fail, and in the catch {} block perform any necessary compensation for that operation. An example compensation might be to perform the same operation again, i.e. a manual retry.
- The developer/operator applies custom resiliency policies in Dapr to express the retry mechanism at the CRD/infrastructure level.
- A combination of both, i.e. apply resiliency policies first, and then fall back to exception handling if the resiliency policy is exhausted/unsuccessful.
If Dapr Workflows follows the tried and tested strategy of Azure Durable Functions, then any unhandled/uncaught exception will put the workflow into a failed state. In this failed state you can either:
Since this proposal has been accepted by the STC, we will start the design and try to bring a first simple yet runnable demo to quickly verify our ideas; it will then serve as a baseline for deeper and more detailed discussion.
Amazing to see this proposal! Big yes to it!!
Most Dapr maintainers and STC members won’t be attending KubeCon EU physically to the best of my knowledge. It would be best to schedule a virtual call to discuss this.
ContinueAsNew, yes - this will be important for application patterns like eternal workflows.
Durable entities is TBD. Dapr already has native support for actors, but I can imagine we might bring it in to support things like distributed critical sections in workflows at some point.
@olitomlinson – I think those are great questions so thanks for asking them! I will avoid sounding like an echo chamber since Chris, Hal and Yaron mostly answered everyone’s questions (they’re so quick they answered before I even saw them!).
That being said, I can see a world where it might be possible to define workflows without using a language SDK, similar to how GitHub declares its workflows as YAML, with built-in predefined actions (or actions people have written similar to components). However, the explicit goal of this design is to avoid yet another workflow language (“yawful?” 😄) and allow developers to use their language of choice.
The goal is to make the Dapr WF APIs (and all Dapr APIs in general) a standard, via work that is ongoing in our API-spec special interest group.
I agree with what you’re saying here, but I actually think it’s valid. In this case, we certainly want to encourage users to choose the default, tested, and optimized path of least resistance, yet keep the door open for other runtimes if there are special considerations to be made.
@jplane I wonder if there might be a slight misunderstanding. We’re not proposing that the internal execution engine should support plugging in existing WF runtimes like Step Functions or Temporal.io. That would be a really hard problem to solve, as you suggested, and may not be in everyone’s best interest. Rather, we’re proposing two specific stories for how other workflow languages and/or runtimes can be pulled in:
The latter point (2) isn’t strictly “pluggable extensibility”, per se, but more of a model for how developers could contribute their own declarative workflow runtimes that internally rely on the built-in Dapr Workflow engine. It’s very similar to the POC SLWF prototype you and I built some time back on top of Durable Functions - the existing Durable engine was used to implement scheduling, durability, etc., and a layer on top was built to interpret the SLWF markup and interface with the Durable APIs.
I hope that makes sense. I can try to clarify further if it’s still confusing.
Yes, thanks @olitomlinson for calling this out. @johnewart I think we need to update the description above to reflect this important coding constraint.
JFYI, the first PR into the feature/workflows branch has been merged: https://github.com/dapr/dapr/pull/5301. It introduces an internal actor concept which is used by the new durable task-based workflow engine. More PRs will be published over the next few weeks that flesh out the full workflow engine feature set.
@tstojecki regarding your question about the demo:
Just to add to what @olitomlinson said, there are two types of “failures” to consider:
In the first case (application failures), retries will be governed by retry policies. Custom resiliency policies will definitely apply, and you’ll also be able to apply custom error handling/retry logic directly in your workflow code using normal error handling constructs, like try/catch (more details on this to come).
In the second case (infrastructure failures), retries will be automatic. For example, if your workflow invokes a service and the node hosting that service crashes, the service invocation will be retried automatically. Similarly, if the node hosting the workflow crashes, the workflow will be restarted automatically and will resume from where it left off. Technically, the workflow will restart its execution from the beginning, but any operations (service invocation, pub/sub, etc.) that were already executed will be skipped and only new operations (the ones that weren’t yet started) will be executed.
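A short sketch of what that replay behavior implies for workflow code, using the same hypothetical JavaScript SDK as the earlier examples (the workflow name, service, and method below are made up for illustration): results of already-completed operations are served from the workflow’s history rather than re-executed, which is why non-deterministic values must come from the context object.

import { DaprWorkflowClient, DaprWorkflowContext, HttpMethod } from "dapr-client";

const workflowClient = new DaprWorkflowClient("127.0.0.1", "50001");

workflowClient.addWorkflow("replay-example", function*(context: DaprWorkflowContext, input: any) {
  // First execution: performs a real service invocation and checkpoints the result.
  // After a crash, the workflow restarts from the top, but this yield returns the previously
  // recorded result from history instead of calling the service again.
  const quote = yield context.invoker.invoke("pricing-service", "getQuote", HttpMethod.POST, input);

  // Non-deterministic values must come from the context so that a replay sees the same value.
  const orderId = context.createV5uuid();      // replay-safe
  // const orderId = Math.random().toString(); // not replay-safe: would differ on each replay

  return { orderId, quote };
});

workflowClient.start();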
bump so this doesn’t go stale
What’s the timescales looking like for getting something in hand, even if its just an alpha build that we can play around with? Thanks
Yes, it hasn’t been accepted yet.
Not much of a deviation. The Configuration API started out as gRPC only, and the upcoming Distributed Lock API is also gRPC only.
The original post goes into the details of the workflow building block APIs, which describe how existing app code can interact with Dapr workflows, whether self-hosted or externally hosted. APIs for implementing self-hosted workflows like context.invoker.invoke aren’t currently enumerated. Right now, we’re expecting to cover core Dapr APIs, like service invocation, pub/sub, bindings, etc., but will likely have a few others as well. Exact details TBD.
Yes, absolutely. The code samples above show only one call to workflowClient.addWorkflow(...), but multiple calls can be made to register multiple workflows from the same app.
Yes, a single container image/app can host workflows and service invocation endpoints together, so if the context.invoker.invoke call targets the currently running Dapr app, then the same container image would be the one that receives the service invocation request.
Behind the scenes, yes. This actor is designed to do any work that may take an indeterminate amount of time to complete, like service invocation. This frees up the scheduler actor to do other work, like respond to queries.
Not necessarily. Technically, the scheduler actor could do all the I/O on behalf of the workflow code. The worker actor is really only for potentially long-running I/O, to keep the scheduler actor from getting blocked for too long (actors are single threaded). We may have the scheduler actor do other types of I/O directly, like publishing pub/sub messages.