
Data Flow in Business Automation

By Bruce Silver

Read Time: 4 Minutes

In BPMN Method and Style, which deals with non-executable models, we don’t worry about process data. It’s left out of the models entirely. We just pretend that any data produced or received by process tasks is available downstream. In Business Automation, i.e., executable BPMN, that’s not the case.

Process data is not pervasive, available automatically wherever it’s needed. Instead it “flows”, point to point, between process variables shown in the diagram and process tasks. In fact, data flow is what makes the process executable. With most BPMN tools, defining the data flow requires programming, but Trisotech borrows boxed expressions and FEEL from DMN to make data flow – and by extension, executable process design – Low-Code. This post explains how that works.

First, we need to distinguish process variables from task variables. Process variables are shown in the BPMN diagram, as data inputs, data outputs, and regular data objects. A data input signifies data received by the process from outside. When we deploy an executable process as a cloud service, a data input is an input parameter of the API call that triggers the process. Similarly, a data output is an output parameter of the API call, the response. A data object is a process variable created and populated within a running process.

Task variables are not shown in the process diagram. They are properties of the tasks, and the BPMN spec does not specify how to display them to modelers. What is shown in the diagram are data associations, dotted arrows linking process variables to and from process tasks. A data input association from a process variable to a task means a mapping exists between that variable and a task input. Similarly, a data output association from a task to a process variable means a mapping exists between a task output and the process variable. The internal logic of each task, say a decision service or external cloud service, references the task variables. It makes no reference to process variables.

On the Trisotech platform, the task inputs and outputs are shown in the Data Mapping boxed expressions in the task configuration. Task variable definitions depend on the task type. For Service tasks, Decision tasks, and Call Activities, they are defined by the called element, not by the calling task. For Script and User tasks, they are defined within the process model, in fact within the Data Mapping configuration.

A Service task calls a REST service operation. The Operation Library in the BPMN tool provides a catalog of service operations available to the Service tasks in the process. Operation Library entries are typically created by the tool automatically by importing an OpenAPI or OData file from the service provider, but they can also be created manually from the API documentation. Each entry in the Operation Library specifies, among other details, the service input and output parameters. When you bind a Service task to an entry in the Operation Library, the Service task inputs are the service’s input parameters and the Service task outputs are the service’s output parameters.

A Decision task calls a DMN decision service on the Trisotech platform. In the DMN model, you create the decision service, specifying the output decisions, which become the service outputs, and the service inputs, which could be input data or supporting decisions. When you bind a Decision task to the decision service, the service inputs become the Decision task inputs and the service outputs become the Decision task outputs.

A Call Activity calls an executable process on the Trisotech platform. When you bind the Call Activity to the called process, the data inputs of the called process become the task inputs of the Call Activity, and the data outputs of the called process become the task outputs of the Call Activity.

With Service tasks, Decision tasks, and Call Activities, the task inputs and outputs are given. They cannot be changed in the calling process. It is not necessary for process variables in the calling process to have the same name or data type as the task variables they connect to. All that is required is that a mapping can be defined between them.

Let’s look at some examples. Below we see the DRD of a decision model for automated approval of an employee vacation request, and below that the decision service definition, Approve Vacation Request.

In BPMN, the Decision task Approve Vacation Request is bound to this decision service.

Here you see the process variables Vacation Request (data input), Employee Record (data object), and Requested Days (data object). Vacation Request and Employee Record have data input associations to the Decision task. There is a data output association to Requested Days and two more to variables not seen in this view.

The Data Mapping dialog of Approve Vacation Request shows the mapping between the process variables and the task inputs and outputs. Remember, with Decision tasks, the task inputs and outputs are determined already, as the input and output parameters of the decision service. In the Input Mapping below, the task inputs are shown in the Mapping section, labeled Inner data inputs. The process variables with data input associations are listed at the top, under Context. The Mapping expression is a FEEL expression of those process variables. In this case, it is very simple, the identity mapping, but that is not always the case.

In the Output Mapping below, the task outputs are shown in the Context section, and the process variables in the first column of the Mapping section. The Output Mapping is also quite simple, except note that the task output Vacation Approval is mapped to two different process variables, Approval and Approval out. That is because a process data output cannot be the source of an incoming data association to another task, so sometimes you have to duplicate the process variable.

Not all data mappings are so simple. Let’s look at the Service task Insert Trade in the process model Execute Trade, below.

Insert Trade is bound to the OData operation Create trades, which inserts a database record. Its Operation Library entry specifies the task input parameter trade, type NS.trade, shown below.

Actually, because the ID value is generated by the database, we need to exclude it from the Input Mapping. Also, the process variable trade uses a FEEL date and time value for Timestamp instead of the Unix timestamp used in NS.trade. We can accommodate these details by using a context in the Input Mapping instead of a simple literal expression. It looks like this:

The context allows each component of the task input trade to be mapped individually from the process variables trade and Quote (which provides the price at the time of trade execution). As we saw with the Decision task, the Context section lists the available process variables, and the first column of the Mapping section lists the task inputs. Note also that the Price mapping uses the conditional boxed expression for if..then..else, new in DMN 1.4, designed to make complex literal expressions more business-friendly.
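To make that component-by-component mapping concrete, here is a rough Python sketch of what the boxed context does: each component of the task input is built individually, with the Timestamp converted from a date and time value to a Unix timestamp and the generated ID omitted. The field names here are illustrative assumptions, not the actual components of NS.trade.

```python
from datetime import datetime, timezone

def to_unix_timestamp(dt: datetime) -> int:
    """Convert a timezone-aware datetime (the analogue of a FEEL
    date and time value) to a Unix timestamp in seconds."""
    return int(dt.timestamp())

def map_trade_input(trade: dict, quote: dict) -> dict:
    """Build the task input component by component, the way the
    boxed context in the Input Mapping does. Field names are
    hypothetical, not taken from the actual NS.trade type."""
    return {
        "Ticker": trade["Ticker"],
        "Shares": trade["Shares"],
        # Price comes from the quote at the time of trade execution
        "Price": quote["Price"],
        # Unix timestamp instead of the FEEL date and time value
        "Timestamp": to_unix_timestamp(trade["Timestamp"]),
        # ID is deliberately omitted: the database generates it
    }
```

The point of the context is exactly what the dict literal shows: each component gets its own mapping expression, so one component can be transformed or dropped without touching the others.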

Finally, let’s look at the Script task in this process, Validate Trade Value. Script and User tasks differ from the other task types in that the modeler actually defines the task inputs and outputs in the Data Mapping. On the Trisotech platform, a Script task has a single task output with a value determined by a single FEEL literal expression. As you can see in the process diagram, the process variables with incoming data associations to Validate Trade Value are trade, Quote, and portfolio, and the task output is passed to the variable Error list. This task checks to see if the value of a Buy trade exceeds the portfolio Cash balance or the number of shares in a Sell trade exceeds the portfolio share holdings.

In the Script task Input Mapping, the modeler defines the names and types of the task inputs in the first column of the Mapping section. The mappings to those task inputs here are more complex literal expressions. The script literal expression references those task inputs, not the process variables. User tasks work the same way, except they are not limited to a single task output.
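For readers who do not think in FEEL, here is a rough Python sketch of the kind of check Validate Trade Value performs, returning an error list that is empty when the trade is valid. The field names are my own assumptions, not the actual variable definitions in the model.

```python
def validate_trade_value(trade: dict, quote: dict, portfolio: dict) -> list:
    """Sketch of the Validate Trade Value logic: a Buy must not
    exceed the portfolio Cash balance, and a Sell must not exceed
    the shares held. Field names are hypothetical."""
    errors = []
    value = trade["Shares"] * quote["Price"]
    if trade["Action"] == "Buy" and value > portfolio["Cash"]:
        errors.append("Buy value exceeds available Cash")
    if trade["Action"] == "Sell":
        held = portfolio.get("Holdings", {}).get(trade["Ticker"], 0)
        if trade["Shares"] > held:
            errors.append("Sell shares exceed shares held")
    return errors
```

In the actual Script task this whole check is a single FEEL literal expression producing the single task output, which the Output Mapping passes to Error list.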

Hopefully you see now how data flow turns a non-executable process into an executable one. The triggering API call populates one or more process data inputs, from which data is passed via Input Mapping to task inputs. The resulting task outputs are mapped to other process variables (data objects), and so forth, until the API response is returned from the process data output. On the Trisotech platform, none of this requires programming. It’s just a matter of drawing the data associations and filling in the Data Mapping tables using FEEL.

Follow Bruce Silver on Method & Style.



BPMN Call Activity vs Subprocess: What’s the Difference?

By Bruce Silver

Read Time: 3 Minutes

BPMN has an element that looks very much like a subprocess, except drawn with a thick border. Perhaps you have used it yourself… probably incorrectly. Its name is call activity, and it is similar in several ways to a subprocess, but it is not the same. Both elements are important, but valuable for different reasons. This post will explain.

Call activity and subprocess share important characteristics. Both are simultaneously a single activity and an activity flow from start event to end event. And both may be rendered in the diagrams in two different ways: collapsed, denoted with a [+] marker at bottom center, or expanded, an enlarged rounded rectangle enclosing the flow. In BPMN 1.x, call activity and subprocess were actually variants of the same element. What we today know as a subprocess back then was called an embedded subprocess, and call activity was called a reusable subprocess. Of course, no one knew exactly what that meant. Some thought it had to do with the shape – collapsed or expanded – but it had nothing to do with that. BPMN 2.0 resolved the distinction more clearly.

A subprocess is really just a graphical device. It allows a process fragment to be collapsed into a single activity to provide a higher-level view of the end-to-end process. Rather than creating a high-level process model you can print on one page and a separate detailed model that stretches over six feet of wall space, and keeping those models in sync as the process definition evolves, subprocesses allow a single model of a cross-functional end-to-end process containing both high-level and detailed views. That model is not a single diagram but a hierarchy of diagrams, in which the contents of subprocesses are detailed in separate child-level diagrams. This presupposes use of the collapsed subprocess shape, with the child level drawn in a separate diagram, as opposed to the expanded shape, where parent and child levels are drawn in the same diagram.

In my work, I rarely if ever use the expanded subprocess shape. In Method and Style, my top-down methodology for non-executable models, you start with the collapsed subprocess in the parent-level diagram, then click the [+] marker to create a hyperlinked diagram where you add the child-level flow. Method and Style gives subprocesses additional importance by interpreting the labels of the child-level end events as the distinct end states of the subprocess, and requiring that any subprocess with multiple end states be followed by an XOR gateway, one gate per end state with matching labels. In this way, the end-to-end flow logic is evident from the printed diagrams even without defining process data.

While Method and Style simply would not work without subprocesses, the child-level flows enclosed by each subprocess have no independent meaning or definition. They are simply fragments of a larger process selected by the modeler to simplify visualization of the end-to-end process logic. This is fine for non-executable models, but it is far less valuable in executable BPMN. A subprocess is not executable. The unit of execution in BPMN is a process. So if you want to make some fragment of a larger process executable, you need to define that fragment as a top-level process and invoke it from the larger process using a call activity. The flow contained in a call activity is an independently-defined process. Where BPMN 1.x described call activity as a reusable subprocess, a better description would be an executable subprocess.

Call activities are as essential to Business Automation as subprocesses are to Method and Style. Executable models add to descriptive BPMN the elements of data flow – process variables (data objects) and their mapping to and from task inputs and outputs. In Business Automation, the shape labels so important in Method and Style effectively become annotations; the real logic is defined by the process data. Every task in executable BPMN defines a data mapping from process variables to the task inputs and from the task outputs to other process variables. The task inputs of a call activity are, by definition, the data inputs of the called process, and the task outputs of a call activity similarly are the data outputs of the called process. In the calling process, the call activity specifies a mapping from its process variables to and from the data inputs and outputs of the called process. The existence of such a mapping is indicated by data associations in the parent-level diagram. The data mappings themselves are defined in configuration of the call activity.

This mapping is what makes the called process reusable. Its data inputs and outputs are independent of the variables of any calling process, so long as there is a definable mapping. In contrast, the child level of a subprocess is not reusable, meaning defined once and used multiple times. It has no definition outside of the context of the larger process in which it is embedded. More to the point, a subprocess has no data inputs or data outputs. The individual tasks within the child level have them, but the subprocess as a whole does not. And this in fact is what makes subprocesses less useful in executable BPMN, because in the parent level diagram you cannot draw data flow to the boundary of a collapsed subprocess. Instead you need to draw the data flow in the child-level diagram only, losing the visual context of the flow from and back to the parent level. (You could fake it by using directional association connectors in the parent-level diagram – actually not a bad idea, and more consistent with Method and Style – but a real data association is not allowed by the metamodel.)

In Business Automation, a BPMN process can be compiled and deployed as a cloud REST service. As I have described numerous times in past posts, this requires just a single Cloud Publish mouse click on the Trisotech platform. And you could invoke that process from another BPMN process with a normal Service task. But when the calling process and the called process share a common platform, such as Trisotech, invocation with a call activity is much better. That’s because, while independently defined, the calling process and the called process are tightly coupled. The call does not have to travel across the internet to invoke a service generated by a specific version of the process. The call stays within the common platform and invokes the latest version of the called process, which does not even have to be compiled and deployed. That may seem like a small thing, but in practice it makes development much easier.

While it is possible to create executable models top-down as in Method and Style, more often I will define the called processes first as independent models and then assemble them in the calling process using call activities. In the Trisotech Workflow Modeler, to bind a call activity shape to the called process, you can either drag the called process onto the diagram from the model repository (called Digital Enterprise Graph) or click on the Reuse button on the menu, which lets you pick from processes in the model repository. The call activity shape then displays a lock icon, indicating it is bound to the called process, inheriting its defined datatypes and synchronized to any changes in its internal logic.

Another important benefit of call activities in Business Automation is testing, which often takes as much effort as the design itself. Unlike DMN, which you can test in bits and pieces in the modeling environment, testing BPMN requires compilation and deployment, possible only at the process level. Breaking a complex process into small executable fragments makes the testing much more manageable, and the larger process can then assemble those fragments – which are now also reusable in other scenarios – using call activities.

The bottom line is that call activities play as vital a role in executable BPMN as subprocesses do in non-executable BPMN. Call activities are useful in non-executable BPMN only in the rare case where a subprocess is used multiple times in the same process or, possibly, where an organization has standardized the names of subprocesses for reuse across the enterprise. And while subprocesses are important in non-executable BPMN – Method and Style, in particular – they are most often better replaced with call activities in executable models.



A Methodology for Low-Code Business Automation with BPMN and DMN

By Bruce Silver

Read Time: 5 Minutes

In recent posts, I have explained why anyone who can create rich spreadsheet models using Excel formulas can learn to turn those into Low-Code Business Automation services on the Trisotech platform, using BPMN and DMN, and why FEEL and boxed expressions are actually more business-friendly than Power Fx, the new name for Excel’s formula language. That’s the why. Now let’s discuss the how.

In recent client engagements, I’ve noticed a consistent pattern: A subject-matter expert has a great new product idea, which he or she can demonstrate in Excel. A business event, represented by a new row in Excel Table 1, generates some outcome represented by new or updated rows in Tables 2, 3, and so on. The unique IP is encapsulated in the business logic of that outcome. The challenge is to take those Excel examples and generalize the logic using BPMN and DMN, and replace Excel with a real database. This constitutes a Business Automation Service.

Business Automation Service

The Basic Pattern

In BPMN terms, a Business Automation service typically looks like this:

Each service is modeled as a short-running BPMN process, composed of Service tasks, Decision tasks, and Call Activities representing other short-running processes. The service is triggered by a business event, represented by a client API call. The first step is validating the business event. This typically involves some decision logic on the business event and some retrieved existing data. If valid, one or more Service tasks retrieve additional database records, followed by a Decision task that implements the subject matter expert’s business logic in DMN. In this methodology, DMN is used for all modeler-defined business logic, not just that related to decisions. The result of that business logic is saved in new or updated database table records, and selected portions of it may be returned in the service response. It’s a simple pattern, but it fits a wide variety of circumstances, from enterprise applications to TurboTax on the desktop.
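The steps of this pattern can be sketched as a short Python skeleton, with hypothetical stand-ins for each task in the diagram; none of these names or rules come from an actual model.

```python
# Hypothetical stand-ins for the tasks in the basic pattern
def validate(event):           # Decision task: validate the business event
    return [] if event.get("amount", 0) > 0 else ["amount must be positive"]

def retrieve_records(event):   # Service task: retrieve supporting data
    return {"balance": 100}

def decide(event, records):    # Decision task: the core business logic
    return {"new_balance": records["balance"] - event["amount"]}

saved = []
def save(outcome):             # Service task: persist new/updated records
    saved.append(outcome)

def business_automation_service(event):
    """Skeleton of the basic pattern: validate, retrieve, decide,
    save, respond. Everything here is illustrative."""
    errors = validate(event)
    if errors:
        return {"status": "rejected", "errors": errors}
    records = retrieve_records(event)
    outcome = decide(event, records)
    save(outcome)
    return {"status": "accepted", "result": outcome}
```

In the BPMN version, each of these functions corresponds to a task whose inputs and outputs are wired together by data associations and FEEL mappings rather than by function calls.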

Implementation Methodology

Over the past year, I’ve developed a methodology for implementing that pattern.

It goes like this:

The first step is whiteboarding the business logic in Excel. In many cases, the subject matter expert has already done this! It’s important to create a full set of example business events, and to use Excel formulas – with references to cells in the event record – instead of hard-coded values. If you can do that, you can learn to do the rest. The subject matter expert must be able to verify the correctness of the outcome values for each business event.

Now, referring to the BPMN diagram, we start in the middle, with the Decision task labeled here Calculate new/updated table records. A BPMN Decision task – the spec uses the archaic term businessRule task – invokes a DMN decision service. So following whiteboarding, the next step in the methodology is creating a DMN model that generalizes the whiteboard business logic. I prefer to put all the business logic in a single DMN model with multiple output decisions in the resulting decision service, but some clients prefer to break it up into multiple Decision tasks each invoking a simpler decision model. That’s a little more work but easier for some stakeholders to understand. This DMN model is in one sense the hardest part of the project, although debugging DMN is today a lot easier than debugging BPMN, since you can test the DMN model within the modeling environment. The subject matter expert must confirm that the DMN model matches the whiteboard result for all test cases.

On the Trisotech platform, Decision tasks are synchronized to their target DMN service, so that any changes in the DMN logic are automatically reflected in the BPMN, without the need to recompile and deploy the decision service. That’s a big timesaver. Decision tasks also automatically import to the process all FEEL datatypes used in the DMN model, another major convenience.

Next we go back to the beginning of the process and configure each step. The key difference between executable BPMN models and the descriptive models we use in BPMN Method and Style is the data flow. Process variables, depicted in the BPMN diagram as data objects, are mapped to and from the inputs and outputs of the various tasks using dotted connectors called data associations. Trisotech had the brilliant idea of borrowing FEEL and boxed expressions from DMN to business-enable BPMN. In particular, the data mappings just mentioned are modeled as FEEL boxed expressions, sometimes a simple literal expression but possibly a context or decision table. If you know DMN, executable BPMN becomes straightforward. FEEL is also used in gateway logic. Unlike Method and Style, where the gate labels express the gateway logic, in executable BPMN they are just annotations. The actual gateway logic is a boolean FEEL expression attached to each gate.

Database operations and interactions with external data use Service tasks. In Trisotech, a Service task executes a REST service operation, mapping process data to the operation input parameters and then mapping the service output to other process variables, again using FEEL and boxed expressions. The BPMN model’s Operation Library is a catalog of service operations available to Service tasks. Entries in this catalog come either from importing OpenAPI files and OData files obtained from the service provider or manually creating them from service provider documentation. It sounds difficult, but it’s actually straightforward, and business users can learn to do it.

Example

A Stock Trading App

Here is an example to illustrate the methodology. Suppose we want to create a Stock Trading App that allows users to buy and sell stocks, and maintains the portfolio value and performance using three database tables: Trade, a record of each Buy or Sell transaction; Position Balance, a record of the net open and closed position on each stock traded plus the Cash balance; and Portfolio, a summary of the Position Balance table. In Excel, a subject matter expert models those three tables. Each trade adds a row to the Trade table, which in turn adds two rows to the Position Balance table – one for the traded stock and one for Cash, with total account value and performance summarized in the Portfolio table. The only part of this that requires subject matter expertise is calculating the profit or loss when a stock is sold. When that is captured in the Excel whiteboard, translating that to DMN becomes straightforward. The BPMN Decision task executing this DMN service has the added benefit of automatically including the FEEL datatypes of the three tables in the process model.

Next we need to address data validation of a proposed trade. In a previous post, I explained how to do this in DMN. We need to ensure that all of the required data elements are present, and that they are of the proper type and allowed value. If we do not allow trading on margin or shorting the stock, we need to retrieve the account Portfolio record and make sure there is sufficient Cash to cover a buy trade or sufficient shares in the Portfolio to cover a sell trade. A full-featured trading app requires streaming real-time stock quotes, but in a simplified version you could use real-time stock quotes on demand via the Yahoo Finance API, which is free. As in the case of many public APIs, using it requires manually creating the Operation Library entry from service provider documentation, but this is not difficult. Any problems detected generate an error message, and a gateway prevents the trade from continuing.
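As a rough sketch of what that validation involves, here it is in Python, checking required fields, types, and allowed values for a proposed trade. The field names and rules are assumptions for illustration, not the actual DMN validation logic.

```python
def validate_trade_request(req: dict) -> list:
    """Sketch of required-field, type, and allowed-value checks on a
    proposed trade. Returns a list of error messages, empty if valid.
    Field names and rules are hypothetical."""
    errors = []
    # Required data elements must be present and non-empty
    for field in ("Ticker", "Shares", "Action"):
        if field not in req or req[field] in (None, ""):
            errors.append(f"{field} is required")
    if not errors:
        # Proper type and allowed value
        if not isinstance(req["Shares"], (int, float)) or req["Shares"] <= 0:
            errors.append("Shares must be a positive number")
        if req["Action"] not in ("Buy", "Sell"):
            errors.append("Action must be Buy or Sell")
    return errors
```

In the actual service these checks live in a DMN decision, and the margin and shorting checks described above additionally consult the retrieved Portfolio record.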

Also as explained in previous posts, we can use the OData standard to provide Low-Code APIs to the database tables. OData relies on a data gateway that translates between the API call and the native protocol of each database, exposing all basic database operations as cloud REST calls. In its OData integration, Trisotech does not provide this gateway, but we can use a commercial product for that, Skyvia Connect. The Skyvia endpoint exposes Create, Retrieve, Update, and Delete operations on the three database tables, and Trisotech lets us map process variables to and from the operation parameters using boxed expressions and FEEL.

While this Low-Code methodology is straightforward, becoming adept at it requires practice. Executable BPMN, even though it’s Low-Code, requires getting every detail right in order to avoid runtime errors. Debugging executable BPMN is an order of magnitude harder than DMN, because you need to deploy the process in order to test it, and instead of friendly validation errors such as you get in the DMN Modeler, you are usually confronted with a cryptic runtime error message. (Note: Trisotech is currently working on making this much easier.) So the modeler still must spend time up front on testing, debugging, and data validation, things developers take for granted but that try the patience of most business users.

Using this methodology, it is possible for non-programmers to create a working cloud-based database app based on BPMN and DMN, something most people would consider impossible.

Learning the Methodology

It’s one thing to read about the methodology, quite another to become proficient at it. It takes attention to the small details – many of which are not obvious from the spec or tool documentation – and repeated practice. To that end, I have developed a new course Low-Code Business Automation with BPMN and DMN, where each student builds their own copy of the above-mentioned Stock Trading App. Students who successfully build the app and make trades with it are recognized with certification. Students need to know DMN already, including FEEL and boxed expressions, but the course teaches all the rest: data flow and mapping, Operation Library, OpenAPI and OData, configuration of the various BPMN task types for execution.

The course is now in beta. I expect some enhancements to the Trisotech platform that will be incorporated in the GA version around the end of the year. In the meantime, you can take a look at the Introduction and Overview here. Please contact me if you are interested in this course or have questions about it.



FEEL vs Excel Formulas

By Bruce Silver

Read Time: 4 Minutes

Last month I showed why Trisotech is a great Low-Code Business Automation platform, based on its use of FEEL and boxed expressions in executable BPMN. How ironic is it, then, that many decision management vendors don’t even include those features in their DMN tools!

The only part of DMN they use is the Decision Requirements Diagram (DRD), but what is the point of using a standard diagram to describe business requirements that are not even testable?
(Answer: So they can claim their proprietary tools “follow the standard”.) When pressed, they say that FEEL and boxed expressions are for programmers, “too hard” for business users. In reality, they don’t want business users anywhere near the decision logic.

FEEL vs Excel

So let’s put that aside. A more useful discussion is FEEL vs Excel. Does anyone claim that Excel is for programmers, not for business users? Of course not. But I believe that FEEL and boxed expressions are actually more business-friendly than Excel formulas. Allow me to demonstrate.

First, however, I need to say that I love Excel, and I even use it as part of my Business Automation methodology. What makes Excel formulas truly great is that they are always live. A spreadsheet cell defined as a formula immediately displays the result whenever any cell it depends on changes. You don’t have to click a Submit button; the response is immediate and automatic. The other business-friendly feature of Excel formulas is that their arguments are selected by clicking in a cell, which references the value by its grid location, such as A1. Yes, it’s possible to assign names to cell ranges, but come on, do you do that? No one does that.

I contend that these features make Excel great for creating examples of the business logic, but not so good for generalizing the logic as you would do in an app. Here is an illustration.

In my Business Automation training, the class builds a Stock Trading App. In the methodology we start by whiteboarding the business logic using examples in Excel. With each new Trade event, two rows are added to the PositionBalance table, one for the traded ticker and one for Cash. Without going into the details, the diagram below is an example of that.

Let’s look at the CostBasis column after sale of a stock, such as cell I15. In this table, the value of CostBasis is a simple Excel formula. Whenever a cell definition starts with =, it’s a formula. And this formula references another column of the same row, and some columns from two rows earlier. A subject matter expert can easily create logic in this way based on examples. But why does it reference values from two rows earlier? The answer in this case is that it is the most recent row with the same value of column D, the Ticker. It won’t always be two rows earlier. To generalize the logic, you need a lookup formula.

No problem. Excel has many built-in functions in its Formulas ribbon. The modern lookup formula is called XLOOKUP, with the explanation below:

Using this, for cell I15, we can generalize the formula H15*I13/H13, where row 13 is the last row preceding row 15 in which the column D value is the same, as:

It’s always live, but is it business-friendly?

Hardly. And that’s not the half of it, because the simple Excel formula in cell I15 only applies to tickers other than Cash when TradeShares is negative. Other formulas apply for Cash and positive TradeShares! So to create the general case of the PositionBalance table in Excel would be extremely difficult.
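To see what that generalization actually requires, here is the lookup in Python: find the last preceding row with the same Ticker, then apply the ratio from the whiteboard formula. This is a sketch that assumes column H holds TotalShares and column I holds CostBasis; that reading of the columns is mine, not confirmed by the post.

```python
def last_matching_row(rows, current_index, ticker):
    """The lookup XLOOKUP performs: scan backward from the current
    row for the most recent row with the same Ticker. rows is a list
    of dicts with illustrative field names."""
    for i in range(current_index - 1, -1, -1):
        if rows[i]["Ticker"] == ticker:
            return rows[i]
    return None

def cost_basis(rows, current_index):
    """Generalize H15*I13/H13: current TotalShares times the prior
    matching row's CostBasis-to-TotalShares ratio."""
    row = rows[current_index]
    prev = last_matching_row(rows, current_index, row["Ticker"])
    if prev is None or prev["TotalShares"] == 0:
        return 0.0  # no prior position for this ticker
    return row["TotalShares"] * prev["CostBasis"] / prev["TotalShares"]
```

Even in Python the backward scan is the whole difficulty; in Excel it must be contorted into a single nested XLOOKUP expression.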

Let’s compare FEEL and boxed expressions. The complexity of XLOOKUP was related simply to finding the last PositionBalance record for the ticker in question. In Low-Code BPMN, following a query of the PositionBalance table for records matching the ticker name, the variable Last position is a simple filter expression, looking for the maximum ID value. It’s much simpler than XLOOKUP.
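In Python terms, that maximum-ID filter amounts to a one-liner; a sketch, assuming each queried record is a dict with an ID field:

```python
def last_position(records):
    """Pick the record with the maximum ID from the queried
    PositionBalance records, the job the FEEL filter expression does.
    Returns None if the query matched no records."""
    if not records:
        return None
    return max(records, key=lambda r: r["ID"])
```

The FEEL version is similarly a single expression, with none of the counting and substituting the Excel formula needed.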

And remember, we need a different CostBasis formula depending on Cash or non-Cash, and Buy or Sell trade. In Excel formulas, you would need to wrap all that XLOOKUP business inside IF functions:

IF([cond], [complex XLOOKUP formula], IF([cond2], [complex XLOOKUP formula2], [else formula]))

What a mess! FEEL and boxed expressions handle that easily as well:

Here we have a decision table inside a context, where each context entry models a column of PositionBalance. The only complexity here is accounting for the possibility that there is no previous PositionBalance record for the ticker, and even that is a simple if..then..else.
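That null check really does reduce to a simple conditional. Here is a minimal Python sketch of one such context entry; the Shares accumulation rule is an invented example, not the actual logic from the class.

```python
# Illustrative only: one context entry (one PositionBalance column),
# with the if..then..else that handles a missing previous record.
# The accumulation rule is an assumed example.

def new_shares(trade_shares, last_position):
    # FEEL-style: if Last position = null then TradeShares
    #             else Last position.Shares + TradeShares
    if last_position is None:
        return trade_shares
    return last_position["Shares"] + trade_shares
```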

And this illustrates a major advantage of FEEL over Excel formulas. Except for arithmetic operators (+, *, etc.) and comparison operators (=, >, etc.), Excel formulas rely entirely on nested functions; they don’t even have logical operators like and. FEEL allows nested functions as well, but it also provides a long list of operators that greatly simplify the syntax: if..then..else, for..in..return, filters, logical operators, and so forth. There is simply no comparison: FEEL is far more business-friendly than Excel formulas!

Here is another example, published by Microsoft to show off the Excel formula language, which is being rebranded as Power FX, the Low-Code language behind Microsoft Power Apps. It’s actually very cool, but what’s cool is the always-live part, not the formula language behind it.

The formula in cell A2 displays the text following the last space in cell A1, and it does this live as you type in cell A1. Take a look at the Power Apps version:

Doing this in the Excel formula language is impressive, but I could not have decoded the deeply nested function logic without instructions from Stack Overflow:

LEN(A1)-LEN(SUBSTITUTE(A1," ","")) – counts the spaces in the original string
SUBSTITUTE(A1," ","|", ... ) – replaces just the final space with a "|"
FIND("|", ... ) – finds the absolute position of that replaced "|" (the final space)
RIGHT(A1, LEN(A1) - ... ) – returns all characters after that "|"
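Transcribed into Python for clarity (a sketch, assuming the input contains at least one space), those nested steps become:

```python
# Step-by-step Python transcription of the nested Excel formula.

def last_word_nested(s):
    # LEN(A1)-LEN(SUBSTITUTE(A1," ","")): count the spaces
    n_spaces = s.count(" ")
    # Walk to the position of the final space (what the SUBSTITUTE /
    # FIND "|" marker trick accomplishes in Excel)
    pos = -1
    for _ in range(n_spaces):
        pos = s.find(" ", pos + 1)
    # RIGHT(A1, LEN(A1)-pos): everything after the final space
    return s[pos + 1:]
```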

Now Trisotech currently does not have the always-live feature – I hope it will someday! – but FEEL in a boxed context can certainly implement the Excel formula logic in a much more business-friendly way.

You don’t really even need the context, as the single literal expression is simple enough:

split(Input string, " ")[-1]
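For comparison, the same logic in Python is just as compact as the FEEL version; like FEEL’s [-1] filter, Python’s [-1] index selects the last element:

```python
# Python equivalent of the FEEL expression split(Input string, " ")[-1]:
# split on spaces and take the last piece.
def last_word(s):
    return s.split(" ")[-1]
```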

Now you tell me: which is more business-friendly, FEEL or the Excel formula language, Power FX?

Follow Bruce Silver on Method & Style.


Low-Code Standards-Based Business Automation Services

By Bruce Silver

Read Time: 3 Minutes

There is no hotter segment of Business Automation software today than Low-Code.

Low-Code refers to application development based on models – diagrams and tables – and business-accessible expression languages, not program code. Gartner assesses Low-Code as a $13.8 billion market in 2021, growing to $46.6 billion in 2023. And, they say, by 2024 it will account for 65% of all application development! According to Forrester, 100% of companies that have adopted Low-Code report satisfactory return on investment. KPMG research indicates that in the past year, the number of executives naming Low-Code as their most important automation investment has nearly tripled. So it’s a big deal.

Trisotech doesn’t call its platform Low-Code, and I understand why, but to me it absolutely is. It checks off all the boxes except one – and that one isn’t all that important when we restrict ourselves to Business Automation Services. Moreover, while the vast majority of Low-Code tools are based on proprietary diagrams and expression languages, Trisotech’s ace in the hole is that its tools are based on vendor-independent standards: BPMN and DMN, of course, and also OpenAPI and OData. This post will explain why all this is important, and how you can take advantage of it.

What Is Low-Code?

Low-Code doesn’t mean full-blown apps can be created by ordinary business users with no developer assistance. Microsoft’s term for the user of Low-Code tools is citizen developer – possibly a technically oriented business user, possibly a developer. I would call any business user who can use the Formulas tab in Excel (not Macros, that’s code) a citizen developer. A survey by Low-Code vendor Creatio found that only one third of citizen developers using Low-Code were business users. In fact, some Low-Code implementations are done entirely by developers.

All surveys report that the key benefit of Low-Code is time to value. Whether implemented by business users, developers, or a team involving both, solutions that take months with traditional development can be rolled out in a few weeks with Low-Code. One factor is the tooling, as model-based development is simply faster than code. Another is the scarcity of developer resources, especially in the wake of COVID. With Low-Code, subject matter experts – citizen developers – can get started, or even complete the job themselves, without waiting months for available IT resources.

The Business Automation Landscape

There are now hundreds of software products that call themselves Low-Code. Each typically focuses on one of three application types:

  1. Automated Workflow Processes, long-running flows involving activities performed by multiple users and systems. While the majority of BPM Suites, including the BPMN-based ones, are still aimed at Java developers, a few are now focused on Low-Code.
  2. Page Flows, web applications facing a single user involving interactions with multiple systems. A good example here is Microsoft PowerApps.
  3. Straight-Through Processes, short-running flows involving interactions with multiple systems, but with no human involvement other than initiating their execution.

In addition to segmenting Business Automation tools as Code vs Low-Code, we can also segment by Proprietary vs Standards-Based. The key standards for Business Automation are BPMN, DMN, and (eventually) CMMN, which have the added benefit of being designed to work together. DMN, the standard for defining business logic, was conceived from the start as Low-Code, using standard tabular formats called boxed expressions in conjunction with FEEL, a business-friendly expression language. BPMN, the standard for service orchestration (workflow), was not: While the basics of the flow are described by a diagram, the details required to make the flow executable – the data and expression language – typically require a Java developer.

Trisotech’s brilliant idea was to borrow boxed expressions and FEEL from DMN and use them in BPMN. Unlike other platforms, Trisotech offers a standards-based Low-Code approach to Business Automation, in particular applications of the third type, Straight-Through Processes. This suggests the view of the Business Automation landscape pictured below, with Trisotech in the sweet spot, Low-Code and Standards-Based.

Features of a Low-Code Platform

Gartner requires the following features for inclusion in its Magic Quadrant report on Low-Code Application Platforms:

Feature – Trisotech Support
Low-Code design of data – Yes
Includes database and UI features – Not yet
Able to call third-party APIs – Yes
Application versioning – Yes
Low-Code design of UIs, business logic, and data – Yes for logic and data; not yet for UIs
Tools accessible to business users – Yes
One-step deployment – Yes
Repository of reusable components – Yes

I believe the main reason why Trisotech does not call its platform Low-Code is the current absence of a form builder and related UI tools, which are principal features of most Low-Code platforms. But I would argue that this doesn’t really matter, for a couple of reasons. First, Trisotech is not really intended to create complete applications. It simply provides – to use a term from a decade ago – the middleware, that is, software that goes in between the application front end and systems of record, such as enterprise apps and databases. Second, the application segment where it currently works best, Straight-Through Processes, has no need for User task forms and screenflows. Trisotech’s role is to create Business Automation Services triggered in response to either a business event or a user interaction from an external client app.

Trisotech Low-Code Business Automation Architecture

The diagram below illustrates Trisotech’s Low-Code Business Automation architecture. The Business Automation service as a whole is a BPMN straight-through process triggered by either a REST call from the external Client App or a Business Event. All of the business logic – not just what we normally think of as “decisions” – is created using DMN, invoked as needed by BPMN Decision tasks. Operations on databases and other systems of record are executed by BPMN Service tasks using Trisotech’s Operation Library, a catalogue of REST interfaces configured simply by importing OpenAPI or OData files. APIs to third-party external services are configured and invoked the same way.

The glue that binds all this together in a Low-Code way is the combination of FEEL and boxed expressions. These are used not only in the DMN business logic but in BPMN as well: FEEL expressions map process variables to and from the inputs and outputs of Service tasks, Decision tasks, and User tasks, and define the gateway logic. Basically, if you know your way around DMN, you can create Business Automation services yourself.
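Conceptually, such a mapping treats each task input as an expression evaluated over the process variables. A hypothetical Python sketch of that idea (all variable and field names invented for illustration; this is not Trisotech’s implementation):

```python
# Hypothetical sketch of a data input mapping: each task input is an
# expression evaluated over the process variables, the way a boxed
# expression maps process data to task data. All names are invented.

process_variables = {
    "Order": {"id": "A-1001", "items": [{"sku": "X", "qty": 2}]},
    "Customer": {"name": "Acme", "tier": "gold"},
}

# Boxed-expression-style mapping: task input name -> expression
input_mapping = {
    "order id":      lambda v: v["Order"]["id"],
    "item count":    lambda v: sum(i["qty"] for i in v["Order"]["items"]),
    "customer tier": lambda v: v["Customer"]["tier"],
}

task_inputs = {name: expr(process_variables)
               for name, expr in input_mapping.items()}
```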

The Biggest Obstacle

In Creatio’s survey, 60% of respondents reported the biggest obstacle to Low-Code adoption is simply the lack of experience with the platforms. The solution there is obvious: Ask Trisotech for a trial account. But creating Business Automation services is not as easy as just turning on the tool. You need to know DMN inside out, at the level of my DMN Advanced training. And beyond that, executable BPMN involves countless details that are not covered in BPMN Method and Style, which is about non-executable processes. It requires an additional training course, and I’m working on that right now. Stay tuned for more details.



Design tips for combining BPMN, CMMN and DMN models

By Sandy Kemsley

Video Time: 5 Minutes

Hi, I’m Sandy Kemsley of column2.com. I’m here today for the Trisotech blog with some practical design tips for when you’re combining process and decision models.

Let’s start with the big three modeling standards for processes and decisions: BPMN for structured processes, CMMN for case management, and DMN for decisions. The standards tell you how to combine the models, and you can look at graphical versions like Trisotech’s Triple Crown infographic to see how this works. Basically, process models and case models can call each other, and either of them can call a decision model. What the standards don’t show is why you would use a particular model type to model different parts of your operations. Now, this might seem like a really silly question: if you have a process, you model it in BPMN; if you have a case structure, you use CMMN; and if you have a decision, you use DMN. But the real world is not that simple. In any sort of complex business operation, you’re going to have a combination of process, case, and decision logic, and you need to be able to use more than one model type together. When you start modeling, you’ll find that process people and rules people come at these business models from different directions. If you’re a process person, like me, you’ll tend to start with a BPMN or CMMN model showing the overall flow, then add decision tasks where you need to call out to more complex decision models, and then build the DMN models that are called from there. A rules specialist, on the other hand, will start by modeling the key decisions in the business operation and then add processes to tie those decisions together.

Watch this short video for more…

Follow Sandy on her personal blog Column 2.


What is Business Process Management?

What is BPM?

All businesses have processes. Processes are typically differentiated from projects because processes are predictable and repeatable. They are the building blocks of operating a business.

While many organizations have similar fundamental processes, the unique parts of their processes, dictated by their specific business methods, form the basis of the organization’s competitive advantage and often their “culture.” A process can be defined as a series of steps leading to identified outcomes. Common business processes are often named with descriptions like Opportunity-to-Order, Order-to-Cash, or Employee Onboarding. The sequence of work and steps performed can vary from instance to instance based on inputs, decisions, timing, dates, etc. However, regardless of the value of these variables, to properly define a process one must know all the possible paths and outcomes in advance, i.e., predictability.

Business Process Management is a management discipline with the key goals of discovering, modeling, analyzing and optimizing business processes. While some BPM solution providers might include business process automation – a BPM engine or automation platform – as part of BPM, Trisotech defines process automation as a separate discipline. As a methodology, BPM can be thought of as similar to (and sometimes encompassing) other methodologies like continuous improvement (CI) or total quality management (TQM).

Why is BPM Used?

Once a predictable process has been defined through process discovery, it can be optimized – often called process improvement – and performed over and over in a standard way, i.e., repeatability. By the act of discovering and defining a process, the resulting process documentation becomes a valuable organizational asset. The use of BPM can improve business operations’ performance and agility, lower costs and add value to customer products and services. Often cited specific benefits include higher efficiency and productivity, reduced costs, increased revenue, better agility, operational consistency, greater customer service focus, better regulatory compliance, increased security, and higher operational visibility.

Business Process Management Software in healthcare

BPM in Healthcare

Utilizing Business Process Management in Healthcare provides a huge array of opportunities to deliver better patient care and services, reduce errors, improve profitability, and ensure regulatory compliance at every level.

More info on healthcare

Process-driven healthcare organizations can create standardized clinical guidelines that facilitate consistent organizational policy, patient diagnosis, treatment, and reporting for both individuals and populations. Business processes can also provide real-time feedback and recommendations to providers, be embedded in most patient encounter systems, and integrate the use of standards like FHIR® and CDS Hooks.

Business processes are not just valuable in the clinical setting, however. They are also extensively used in healthcare insurance and patient services settings. Examples of these types of processes include claims processing, pharmaceutical and durable medical equipment (DME) pre-authorizations and patient, provider, and facility scheduling.

Business Process Management Software in finance

BPM in Finance

Business Process Management is helping to fuel the disruptive FINTECH industry as well as assisting existing financial institutions and service providers in transforming the way they do business to remain competitive.

More info on Finance

End-to-end business processes are facilitating the digital transformation revolution that has put the customer in the center of a 360 degree organizational view. That in turn has led to the optimization of existing operations and customer-centric policies and procedures. Standardized processes lead to higher operational visibility as well as better and more easily audited regulatory compliance.

BPM is driving better risk assessment and management, underwriting decisions, lending automation, servicing automation, insurance claims processing, and many other financial activities. By using predictable and repeatable business processes the financial industry is experiencing higher growth rates, greater profitability, more rapid digital adoption, higher rates of compliance, improved efficiency, and better security than ever before.

Trisotech BPM Solutions
What is Business Process Management Software from Trisotech?

Digital Enterprise Suite

Trisotech provides 100% browser-based business process management discovery and modeling tools in cloud-hosted or on-premise environments. The process discovery tool – the Discovery Accelerator – helps business people describe the Who, What, When, Where, and Why of how their processes work through simple interactive screens and/or existing written policies, guidelines, business rules or other documents. These business observations can then be turned into an initial workflow starter diagram with the click of a single button.

Business process modeling software from Trisotech – the Workflow Modeler – is used to produce a visual workflow using the international standard BPMN (Business Process Model and Notation) via an intuitive drag-and-drop visual interface. These model diagrams provide comprehensive documentation, analysis, collaboration, and reporting features recognized as the gold standard in the process modeling industry and can be used directly by the Trisotech business process automation software. Using Trisotech’s BPM tools and combining the Workflow Modeler business process mapping software with the Trisotech Business Automation platform gives customers an unrivaled, powerful, and comprehensive low code/no code application development capability.

Improve your Business Processes to drive Business Automation using Trisotech


BPMN 101: Three Ways a Process Starts

By Bruce Silver

Read Time: 4 Minutes

Students in my BPMN Method and Style training are often befuddled by how to start a process. I see Conditional events, Error events, all kinds of things. No, stop! While the BPMN spec provides many different types of start events, only three of them are relevant to the non-executable flows most modelers are trying to create. In Method and Style, those are the only ones allowed. This post will explain.

Just Three Ways to Start

There are really only three ways to start a process:

  1. On external request. Here “external” means some person or entity that is not a task performer of the process. The Customer, for example, is always external. Even in internal employee-facing processes, the requester can be considered external if he or she performs no activity in the process, merely requests it and receives back some final status. By far the majority of BPMN processes start on external request.
  2. On a regular schedule, for example on the first day of every month, or every 10 minutes. In non-executable BPMN it doesn’t matter whether the process is initiated automatically or manually, as long as the process runs repeatedly on a schedule.
  3. Manually by a task performer. Occasionally, a person who performs tasks in the process initiates it manually. Some employee-facing processes work this way, although in many cases the initiator could be considered an external requester.

That’s it, just those three. This makes life much simpler for the process modeler, because it not only determines which start event to use but also defines the process instance.

Message Start

A process that starts on external request uses a Message start event. Method and Style says the label of the event should be “Receive [messageName]” and requires an incoming message flow labeled with the messageName. The messageName should be just a noun – the name of the message, like “Order” – not an action like “Receive Order” or a state like “Order received”.

The great thing about Message start events is that they tell you right away what the process instance represents. A process model defines a flow that is performed repeatedly in the course of business – not continuously – where each repetition, or instance, of the process has a precise start and end. Understanding the process instance is crucial for creating properly structured models, but students initially have a hard time with that. With a Message start event, it’s easy: Each instance is the handling of that start message, such as an Order. The reason this is important is that the instance of every activity in the process must have one-to-one correspondence with the process instance. (That is not Method and Style; that is from the spec, although not clearly stated as such.) So, for example, if the Message start event is “Receive Order,” every activity in the process must be performed per Order. An activity that adjusts discount codes every month, while related to this process, cannot be part of the process. It must be part of a different process. We’ll see an example of this later.

Timer Start

A process performed on a regular schedule, whether initiated manually or automatically, uses a Timer start event. The label of the event should be the frequency of occurrence, such as “Monthly” or “Every 10 minutes”.

Again, the start event identifies the process instance: It is that single occurrence. And again, every activity in the process must pertain to that occurrence, not a previous occurrence or future occurrence or some individual request received during that occurrence. For example, suppose there is a Project Review process that occurs monthly. The start event is Timer with label “Monthly”, and each instance is a single monthly occurrence. Suppose time runs out to cover all the issues, and some are deferred to next month. A gateway that loops back to the start to handle those would be incorrect! Handling of next month’s issues occurs in next month’s instance of Project Review.

None Start

A process initiated manually by a task performer uses a None start event, with no trigger icon.

It’s unfortunate that BPMN does not have a Manual event trigger, so a manual start must use the None start event, which is also used for the start event of subprocesses (which are started by the incoming sequence flow, not an event) and for loosely specified processes where the start conditions are undefined. In manually started processes, the instance is not easily determined.

Fortunately, these processes are fairly uncommon. I find in most cases where students use a None start, the process is actually in response to an external request, so Message start would be correct.

Instance Alignment

As I mentioned, the BPMN spec requires that instances of all activities in a process must align with the process instance. Otherwise they must be modeled in a separate process that interacts with the first process either via message flows or a shared datastore. Let’s return to the case of an Order process that involves discount codes that are updated monthly. The activity that updates the discount codes cannot be part of the Order process because it is performed monthly, not per Order. It is part of a separate process, Update Discounts, modeled with a Timer start event labeled “Monthly”. The activity of validating the requested discount code against the current list is done per Order, so that is part of the Order process. The link between the two processes, which must be modeled as separate pools, is the datastore “Discount Codes”. The datastore is updated by the Update Discounts process and queried by the Order process.

Method and Style

Method and Style is a modeling methodology that makes the details of the process flow unambiguous from the printed diagram alone – clear even to those who don’t already know how the process works. It addresses the chief complaint of managers, which is the inability of their team’s process diagrams to be understood by all team members without an accompanying text document. The Trisotech platform supports Method and Style by including Style rule validation within the tool. For example, a Message start event without an incoming message flow generates a style error. My BPMN Method and Style training teaches you how to create proper process models consistent with both the spec and the methodology. It includes 60-day use of the Trisotech Workflow Modeler and post-class certification. Certification is based on an exam and completion of a process model reviewed by me and iterated until it is perfect. Perfecting the certification exercise is where most students internalize the classroom teaching.



What Does a Workflow Management System Do?

Workflow Automation software executes computer-driven flows (processes) of human and system tasks, documents, and information across work activities in accordance with flow paths based on business decisions. Workflow Management software also ensures processes – both internally across organizational boundaries, and externally for Customer/Client interactions – are optimized, repeatable and auditable while still being quick and easy to change.

While Robotic Process Automation (RPA) has been making inroads in automating tasks within processes, Workflow Automation software is far more powerful than RPA. However, the two are both compatible and synergistic. RPA bots can automate individual tasks within a business process, but they typically can’t connect those tasks together. Good workflow engines allow RPA tasks to be included as part of a process.

Workflow Design software and Workflow Process software are being effectively used in practically every industry, frequently serving as standard operating procedures software. Digital Workflow software can be especially effective and valuable in two industries: Healthcare and Financial Services.

Workflow Automation Software
in Healthcare


The Healthcare industry is large and diverse. While the focus in healthcare is usually on the front-line workers, the caregivers and providers, there are also multitudes of back-office workers.

Large-scale healthcare organizations are often in linked businesses, networking physicians and other providers, acute care hospitals, long term care facilities and insurance organizations. Using Trisotech’s Business Process Management software all these organizations can create secure standards-based software to help them store and retrieve data as well as standardize and automate workflows and decision making across all parts of their business.

Most healthcare providers want standard processes and decision methodologies that are centralized, easy to understand, automated through workflow engines, and quickly changed by SMEs without heavy IT involvement. This, in turn, frees up IT resources to work on centralizing, consolidating and making available the latest technologies across organizational silos. This includes providing technologies to support standards like FHIR® for data storage and retrieval, Clinical Quality Language (CQL) and CDS Hooks for clinical decision support in real-time at the point of care. Trisotech’s Workflow Design software and Workflow Automation software support all these standards and allow automated processes to be changed quickly and easily as regulatory requirements and new medications and procedures evolve.

Using Trisotech’s workflow management software, healthcare organizations can develop evidence-based workflow and decision models that are human-readable, machine automatable, and embeddable in most medical encounter systems. While healthcare organizations can and do create their own automatable models, Trisotech also provides pre-built models including nearly 1,000 free customizable care pathways, clinical guidelines, and healthcare decision calculators in the BPM+ Health standard. This way, Trisotech’s process management software enables practitioners to stay updated, accelerate solution adoption and ensure greater consistency in care execution. Additionally, the comprehensive visual models offered by the Workflow Design software are readable by IT, providers and business people, serving as a guideline specification, the guideline logic, the guideline documentation, and the automation code for the workflow engine – all in a single visual artifact!


Moving to evidence-based practices is very desirable but also often difficult for healthcare organizations. Subject matter experts (SMEs) and clinicians constantly work with IT to translate large volumes of regulatory, new medication and procedure information into organizational policy. This is a consistent and expensive requirement for SMEs using their existing, often antiquated systems to keep information up to date. Trisotech’s Workflow Design software and Workflow Automation software solutions allow healthcare organizations to easily define and deploy evidence-based best practices that offer a consolidated view of the interactions and multiple touchpoints with patients, care pathways, and workflows at the point of care.

Back-office tasks such as pre-authorization, medical necessity determinations and off-label drug prescription approvals have a huge bearing on patient experiences. They are also time-consuming, expensive and highly manual activities. Utilizing Workflow Process software for these types of activities expedites decisions for waiting patients, allows for services to be rendered sooner, and increases ROI. Using Trisotech’s digital workflows paves the way for improving both the perceived and real quality measurements for any healthcare organization.

Healthcare Payers and Insurers are also leveraging Trisotech’s process management software to automate processes like claims processing and pre-authorization determinations, leading to more efficient decision making and significant cost savings. Trisotech’s Automated Workflow software can ensure the correct information is collected at the outset, help pay claims rapidly and organize case management for disputed or confusing exception claims. The workflow software also helps payers keep complete records in case of an audit. Health plan members benefit from a better experience as they can access the care they need with minimal delays and without surprises at the time of claim payments or billing.

More info on Healthcare

Workflow Automation Software
in Financial Services


Financial services make up one of the economy’s most significant and influential sectors. This sector is made up of Banking Services including Retail Banking, Commercial Banking and Investment Banking. Also included are Investment Services, Insurance Services and Tax and Accounting Services.

These businesses are composed of various financial firms including banks, finance companies, lenders, investment houses, real estate brokers, insurance companies, etc. Trisotech has customers using its Workflow Design software and Workflow Automation software in all of these businesses. The typical description of this sector is Financial Services, but it is really made up of both services and Financial Products like mortgages, investments, credit cards, insurance policies, etc. This means that it is not only a business-to-business (B2B) sector but also has a huge business-to-consumer (B2C) component. Marketing, selling and servicing these products is fertile ground for Trisotech’s Workflow Automation software.

Various forms of proprietary financial software have been in use for decades, and the adoption of those early technologies now presents the industry with increasing risk in the form of technical debt. Old technologies are being disrupted by newer cloud-based offerings, including standards-based business process management software far better suited to the rapidly changing personalization, self-service, risk, and compliance needs of today’s marketplace. Indeed, improving client service by automating policies, accounts, investments, claims and more using digital workflows is a cornerstone of digital transformation efforts in financial services. To simplify the complex process of digital transformation and streamline their processes and decisions, financial enterprises should render organizational workflows and business decision logic into visual diagrams and documents based on international standards. With Trisotech’s Digital Enterprise Suite, those visual diagrams can not only be shared by business people and technical people but also be automated by Trisotech’s workflow engine directly from the diagrams.

Share knowledge across your organization and free up your IT resources

Technology replacement is also very important because most organizations have their knowledge, policies and procedures embedded in large complex programs maintained by IT programming staffs. Today, “old school” practices like SMEs maintaining Excel spreadsheets for policies and rules (regulatory and organizational) that then must be “translated” by IT into traditional programming languages and proprietary rules systems are giving way to visual models incorporating standardized “decision services.” Using Trisotech’s Digital Enterprise Suite, modern workflows and decision services can be built and maintained by SMEs and turned into automated business processes by clicking a single button. This, in turn, frees up IT resources to work on centralizing, consolidating and making available additional more current technologies across organizational silos.

Challenges

Primary challenges the financial industry faces today include rapid and often massive regulatory change; privacy, security, and fraud prevention; surpassing or keeping up with the competition by exceeding customer expectations; and replacing old technologies with emerging ones. Trisotech’s workflow software is already recognized as the reference implementation for many international standards such as BPMN, CMMN, and DMN. In the financial industry, Trisotech is rapidly taking a leadership position with its implementation of the Mortgage Industry Standards Maintenance Organization (MISMO) standard and its support for other standards like the Financial Industry Business Ontology (FIBO), common database connections and multiple AI techniques.

For Fintech organizations, Trisotech’s Workflow Design software and Workflow Automation software accelerate digital transformation by providing the ability to easily define, deploy and maintain improved decision-making and workflows supported by artificial intelligence and machine learning in a graphical environment. While Trisotech’s Digital Enterprise Suite is being used by customers for everything from retail credit card processing to insurance claims processing, one area, underwriting, has been of particularly high value to customers.

Underwriting is the process by which an institution takes on financial risk – typically associated with insurance, loans or investments. Underwriting means assessing the degree of risk for each applicant prior to assuming that risk. That assessment allows organizations to set fair borrowing rates for loans, establish appropriate premiums to cover the cost of insuring policyholders, and create a market for securities by pricing investment risk.

Underwriters evaluate loans, particularly mortgages, to determine the likelihood that a borrower will pay as promised and that enough collateral is available in the event of default. In the case of insurance, underwriters seek to assess a policyholder’s financial strength, health and other factors and to spread the potential risk among as many people as possible. Underwriting securities determines the underlying value of the company compared to the risk of funding its capital acquisition events such as IPOs. All of these activities lend themselves to digital workflow software solutions.

Consider, for example, mortgage loan origination. Using Trisotech’s Workflow Design software, customers build standard operating procedures for loan origination that encompass the organization’s specific underwriting policies. These workflows can be created and maintained by underwriting experts, while complex mathematical models, AI, and privacy and security requirements are handled by IT personnel. Trisotech provides all of this in a single common visual model understandable by both the business people and the IT personnel, while still maintaining separation of concerns through granular permissions. Once the visual model is complete, a single button click automates the workflow and makes it available to Trisotech’s workflow engine, part of the Workflow Process software.


What Is Workflow in Software?


The unique capabilities of Trisotech’s Automated Workflow software are rooted in its ability to simplify the complex process of digital transformation for all. In order to streamline their processes and decisions, enterprises must first know what those processes and decisions are. Discovering and validating them is the responsibility of business leadership, not solely IT. Thus, a must-do activity of digital transformation is rendering organizational workflows and business decision logic into international standards-based visual diagrams and documents. Then they can be shared by business people and technical people and automated directly from those visual diagrams.

Trisotech calls this process Business Services Automation. The Trisotech offering includes visual Workflow Design software and visual Workflow Automation software.

Trisotech Workflow Automation Solutions


Workflow Design software

The Workflow Design software includes workflow automation software (BPMN), decision automation software (DMN) and case management automation software (CMMN) along with a larger suite of application tools that facilitate workflow discovery, promote organizational standards use and support workflow design life cycles. The software also supports AI and RPA integrations, full API support and the configuration and management of users, permissions and models.


Workflow Automation software

Trisotech’s process Workflow Automation software includes workflow engines that can directly execute the business process management models. These workflow engines are utilized through RESTful APIs, provide the highest levels of privacy and security, and can be containerized and thus scaled on demand across a wide variety of public and private cloud configurations, including high-availability configurations.

Trisotech’s Workflow Automation software also provides a full rich visual configuration interface supporting server environment configuration, audit logs, debugging tools and management of running workflow instances. Trisotech’s digital workflow software is high in value, low in cost and backed by world-class technical support.

Put succinctly, Trisotech’s Digital Automation Suite (DAS) is an API-first, container-based scalable cloud infrastructure for business automation. It enables complex automation of business workflows, cases and decisions in a simple, integrated run-time environment. It allows organizations to leverage business automation as a source of competitive advantage, via high performance, flexible, and linearly scalable automation engines. The Digital Automation Suite also offers an outcome-driven orchestration of AI and other emerging technologies using international standards and a microservices architecture.



Repeating Activities in BPMN

By Bruce Silver

Read Time: 6 Minutes

BPMN has a way to say an activity should be performed more than once. In fact, it has multiple ways, and students in my BPMN Method and Style training sometimes get them confused. This post will clear things up.

A Loop activity is like a Do-While loop in programming. It means perform the activity once – it could be either a task or subprocess – and then test the loop exit condition, a Boolean expression of process data. If the condition is false, perform the activity again and evaluate the condition once more. If the condition is true, exit the activity on the normal outgoing sequence flow. Repeat until the exit condition is satisfied. It’s quite handy for activities that might require multiple tries to complete successfully. It is possible to set a maximum number of allowed iterations.

The semantics of a Loop activity are exactly the same as a non-Loop activity followed by an XOR gateway that either loops back to the activity or continues on. A Loop activity is indicated in the diagram by a circular arrow at the bottom center. This takes up less space in the diagram than adding the gateway and loopback, but it hides the loop exit condition. For that reason, Method and Style asks the modeler to insert a text annotation on any Loop activity, labeled “Loop until [condition]”.
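The Do-While semantics described above can be sketched in ordinary code. This is an illustrative Python sketch of the Loop-activity behavior, not FEEL and not actual engine code; the function and parameter names are invented for the example.

```python
def run_loop_activity(perform, exit_condition, max_iterations=None):
    """BPMN Loop activity semantics (illustrative sketch): perform the
    activity once, then test the loop exit condition; if false, perform
    it again, up to an optional maximum number of allowed iterations."""
    count = 0
    while True:
        state = perform()
        count += 1
        if exit_condition(state):
            return state  # exit on the normal outgoing sequence flow
        if max_iterations is not None and count >= max_iterations:
            raise RuntimeError("maximum allowed iterations exceeded")

# Example: an activity that needs multiple tries to complete successfully.
attempts = []
def charge_card():
    attempts.append("try")
    return {"succeeded": len(attempts) >= 2}  # succeeds on the second try

result = run_loop_activity(charge_card, lambda s: s["succeeded"], max_iterations=3)
```

Note that the exit condition is tested after each iteration, so the activity always runs at least once, exactly as in a Do-While loop.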

A Multi-instance (MI) activity is like a For-Each loop in programming. It means perform the activity once – again, either a task or subprocess – for each item in a list, a process data element that contains multiple items. Processing each list item is a separate instance of the activity, and the MI activity as a whole is complete only when all the item instances are complete.

A Multi-instance activity is indicated by three parallel bars at bottom center. If the bars are vertical, it means the instances are performed at the same time, in parallel. If the bars are horizontal, which you often see with human activities, it means the instances are performed sequentially. With an MI activity, Method and Style asks the modeler to insert a text annotation, “For each [item]”.
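The For-Each semantics can be sketched the same way. Again this is an illustrative Python sketch, not engine code: one instance per list item, with the MI activity as a whole completing only when all instances complete.

```python
from concurrent.futures import ThreadPoolExecutor

def run_mi_activity(items, perform, sequential=True):
    """BPMN Multi-instance activity semantics (illustrative sketch):
    perform one instance of the activity for each item in a list.
    Sequential MI runs the instances one after another; parallel MI
    is modeled here with a thread pool. Returns when all instances
    are complete, matching the MI completion rule."""
    if sequential:
        return [perform(item) for item in items]
    with ThreadPoolExecutor() as pool:
        return list(pool.map(perform, items))

# Example: check stock independently for each order item.
order_items = ["A001", "A002", "A003"]
results = run_mi_activity(order_items, lambda sku: f"checked {sku}")
```

Unlike a Loop activity, the number of instances here is known when the activity starts: it is the length of the list.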

Many beginners are confused by the difference between Loop and MI-sequential activities. They are not the same!

The key points about Loop activities are:

- A single instance of the activity is performed repeatedly.
- The number of iterations is not known at the start; it depends on the loop exit condition, tested after each iteration.
- It is the right choice when an activity may need multiple tries to complete successfully.

MI-sequential is not the same:

- There is one instance of the activity per item in a list, so the number of instances is known when the activity starts.
- Each instance acts on a different list item; the instances are simply performed one after another instead of in parallel.
- The MI activity as a whole is complete only when all the item instances are complete.

Let’s look at a simple example of an order.

When the order is received, the process first checks whether the items are in stock. Because an order may contain multiple items, Check stock must be done independently for each order item. In the diagram we indicate this by the text annotation. Let’s say an item is either in stock or out of stock, ignoring the case where the available stock only partially satisfies the order. Since this is processing a list, it’s MI – not Loop! The number of iterations is the number of items in the order.

Normally, in Method and Style, when an activity has two possible outcomes – in stock or out of stock – we follow it with a gateway with two gates, in stock and out of stock. But remember, with an MI activity, each instance has those two outcomes, so the MI activity as a whole has more than two possible end states. In practice, it is common to follow such an MI activity with a gateway that tests whether any item is in stock, or possibly if any item is out of stock.

Here we will continue the process if some order items are in stock. Otherwise we end the process in the end state Out of stock.

Now we Collect payment, which uses the customer’s credit card on file. That could fail. Maybe the card is expired, in which case the customer is advised to enter a valid credit card number. Collect payment is thus a Loop activity. You don’t know, when you start, how many iterations are required. You keep trying it until either it succeeds or you exhaust the allowed number of attempts. Again the loop exit condition and maximum iterations are indicated by a text annotation. There is only one instance of this activity. In the end it either succeeds or fails, so we follow it with a gateway with those two gates.

Here are some common mistakes beginners make with repeating activities:

- Using a Loop activity to process the items of a list, when MI is what is called for.
- Treating MI-sequential as interchangeable with Loop, even though the iteration counts and semantics differ.
- Omitting the text annotation – “Loop until [condition]” or “For each [item]” – that Method and Style asks for.

To make a Multi-instance activity executable in the Trisotech Digital Enterprise Suite, here is how you do it. From our previous example, consider the MI task Check stock.

The input is an Order, type tOrder, shown below. It is a collection of Order items, each defined by a SKU, Quantity, and UnitPrice.

In a real process, each instance of Check stock would be a database lookup, but to keep it simple here we just use a script task against the process data input Stock table, a collection of items defined by SKU and QuantityAvailable. The data output In stock items is a list of Order items that are in stock.
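To make the data concrete, here is hypothetical sample data mirroring those structures as plain Python values. The component names come from the text; the actual values are invented for illustration.

```python
# tOrder: a collection of Order items (SKU, Quantity, UnitPrice).
order = [
    {"SKU": "A001", "Quantity": 2, "UnitPrice": 9.99},
    {"SKU": "A003", "Quantity": 5, "UnitPrice": 24.50},
]

# Stock table: a collection of items defined by SKU and QuantityAvailable.
stock_table = [
    {"SKU": "A001", "QuantityAvailable": 10},
    {"SKU": "A003", "QuantityAvailable": 1},
]
```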

We first need to map the data inputs to variables in the script task context using its Data Mapping attribute.

A Multi-instance activity always iterates over a collection, so we need to select it from those supplied by input data associations, here Order. We need to assign a name to the range variable or iterator, meaning one item in the collection. It can be anything we want, so here we name it OrderItem. In the Mapping section we define the variables used in the script task logic and their mappings from the input data associations and the iterator. So variable Stock table is just the input data association of that name, and we define a new variable CurrentItem mapped from the iterator OrderItem. The mapping expression of the iterated variable must be the iterator.

The script task logic is the FEEL expression

if Stock table[SKU = CurrentItem.SKU].QuantityAvailable[1] >= CurrentItem.Quantity then CurrentItem else null

For those unfamiliar with FEEL, here is how it works. Remember we are executing the script independently over each OrderItem, and collecting the results of all the instances.

Stock table[SKU=CurrentItem.SKU].QuantityAvailable

selects the row of Stock table where the SKU matches the CurrentItem.SKU, and then finds the QuantityAvailable for that row. Since SKU is a primary key, the filter selects a single row, but technically a FEEL filter always returns a list, so we need to append [1] to extract the item. Now the script says, if that QuantityAvailable is greater than or equal to the corresponding OrderItem quantity, return the CurrentItem, otherwise return null. We run this script for each OrderItem.
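For readers more comfortable in a conventional language, here is the same logic rendered as a Python sketch. The FEEL filter becomes a list comprehension, and the FEEL `[1]` (FEEL lists are 1-based) becomes taking the first element.

```python
def check_stock(current_item, stock_table):
    """Python rendering of the FEEL script (illustrative, not FEEL):
    select the Stock table row whose SKU matches CurrentItem.SKU,
    take its QuantityAvailable, and compare to the ordered Quantity."""
    # A FEEL filter always returns a list, hence the [1] in the original;
    # here we collect matches and take the first one, if any.
    matches = [row["QuantityAvailable"] for row in stock_table
               if row["SKU"] == current_item["SKU"]]
    available = matches[0] if matches else None
    if available is not None and available >= current_item["Quantity"]:
        return current_item
    return None  # out of stock

stock_table = [{"SKU": "A001", "QuantityAvailable": 10},
               {"SKU": "A003", "QuantityAvailable": 1}]
in_stock = check_stock({"SKU": "A001", "Quantity": 2}, stock_table)
out_of_stock = check_stock({"SKU": "A003", "Quantity": 5}, stock_table)
```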

Now we define the data output mapping.

A script task always has a single output. We need to assign it to a variable, here In stock item. For Resulting collection, we select one of the output data associations, here the process data output In stock items. Collected variable says that each item in that collection is assigned the script task output In stock item. Thus the data output In stock items is a collection containing the original Order item if in stock or null if out of stock.

We use In stock items in the logic of the gateway Any in stock? The yes gate condition is defined by the FEEL Boolean expression:

nn count(In stock items)>0,

where nn count(list) is a Trisotech extension function equivalent to count(list[item!=null]).
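In other words, nn count() counts the non-null items. A minimal Python equivalent, for illustration:

```python
def nn_count(lst):
    """Equivalent of Trisotech's nn count() extension function:
    count(list[item != null]) in FEEL, i.e. count only non-null items."""
    return sum(1 for item in lst if item is not None)

# In stock items from the example: one in-stock item and one null.
in_stock_items = [{"SKU": "A001", "Quantity": 2}, None]
any_in_stock = nn_count(in_stock_items) > 0  # the "yes" gate condition
```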

When we test the logic in Service Library TryIt, we get the expected result. Order item A001 is in stock and Order item A003 is out of stock. Thus In stock items has two items, one of them null, and the process end state is “Some in stock”.

To recap, Loop and MI activities are frequently used in BPMN and they are not the same thing. Use Loop when you need to retry an activity until it succeeds. Use MI – either sequential or parallel – when you need to perform an action for each item in a list.

Follow Bruce Silver on Method & Style.




Using Messages in Executable BPMN

By Bruce Silver

Read Time: 6 Minutes

In BPMN, the most common way information is provided to a process from an external source is via a message. Incoming messages are indicated in the diagram by a dashed arrow connector called a message flow, with its tail on a black-box pool or a message node in another process pool and its arrow on a message node in the receiving process, either an activity or message event. In our BPMN Method and Style training, which focuses on non-executable processes, we discuss at length the semantics and usage patterns of messages and message events. But when we take the next step of making our BPMN model executable, we need to think about messages in a new way. This post explains how it works.

Below is a variant of an example from the training, illustrating the use of message boundary events.

The scenario is as follows:

The semantics above are inherent in the definition of message start and boundary events. One thing to note: The state of the process is unknown to the Customer when he or she sends the Cancellation message. Thus a single Cancellation message must be represented in the diagram by separate message flows leading to different message events in the process, each event triggering a different way of handling the message. In the BPMN Method and Style training, we recommend that these message flows carry identical labels and the receiving message events do as well. In executable BPMN, these recommendations become absolute requirements.

When we make a model like this one executable, we need to think about messages from a new angle. An executable process is a service, and incoming messages are really API calls from an external client. On the Trisotech platform, for example, when we click Cloud Publish in a BPMN model, the tool compiles and deploys each process in the model – there could be more than one – as a REST service accessible through the Service Library. If the only interaction with the Customer were the initial Order, we might not draw the Customer pool and message flows at all. Instead we would make Order a process data input, which is equivalent to the start message. But when we have subsequent interactions, as in this model, we should use the message flow representation, because it implies that Cancellation messages addressed to this process are meant to cancel only this particular instance – this particular order – not all instances of the process.

Maybe this has got you thinking…

  1. How do we address a message to a particular process?
  2. How do we address a message to a particular instance of the process?
  3. How does the instance know which message event receives the message?

All good questions, and we will answer them shortly.

The key difference between non-executable and executable BPMN is that the latter defines process data in the form of data objects and defines mappings between process activities or events and those data objects. I have to tell you at the start that while all BPMN products generally follow the semantics defined in the spec, the implementation details of executable BPMN vary from product to product. What follows is how it works in the Trisotech Digital Enterprise Suite. I like this platform because it is designed for non-programmers, borrowing from DMN a business-friendly way of modeling and mapping data objects.

When we add the data objects to the diagram, it looks like this:

The diagram now looks cluttered and hard to read, but you can right-click and Hide the data objects, replacing them with three dots at the bottom of each activity mapped to or from the data object. The data objects with black arrows are data outputs, meaning their values are returned upon execution. In this case we simply want to capture in the output the Status of the order – “Received”, “Completed”, or “Cancelled” – and timestamps indicating precisely when the order was Received, Completed, or Cancelled, along with any Refund info. The reason we need to break these out as separate data outputs as opposed to components of a single structured output is that a data output association – that dotted arrow connector from a task or event to the data object – rewrites the entire data object; it does not simply update selected components.

So let’s see how to make this executable. We start with the datatypes, defined on the BPMN ribbon. The Order message is type tOrder, a structure that looks like this:

We assign the message flow Order to this datatype, and this associates the message flow with a message named Order, type tOrder. We do something similar to the message flows named Cancellation, using the type tCancellation.

Later we’ll see how the OrderID component of Cancellation is used to select a particular instance of the Order process.

We also assign the type tOrder to the message start event Receive order. With an incoming message, we always need to save the message payload to one or more data objects, here represented in the diagram as data associations to Order info (type tOrder) and Received, which captures the timestamp of receipt. The data mapping from the start event is shown below. Received is populated with the current timestamp, using the Trisotech FEEL extension function now(), and Status is populated with the value “Received”. Order info passes order details to the other activities in the process.

Following the scenario described previously, we collect payment, fulfill the order, and complete. If a Cancellation message for this order is received while in the activity Collect payment, we set the Status to “Cancelled” and end the process in the state Order cancelled. If the Cancellation is received while in the activity Fulfill order, we again set the Status to “Cancelled”, calculate the amount to refund, and end in Order cancelled. If Fulfill order completes with no Cancellation, we set the Status to “Completed” and end in Order complete.

We model the Cancellation message flows in a similar way: Assign them and their catching boundary events to the type tCancellation and map the message content to Cancel info, Status, and Cancelled as we did with Order. For this example we ignore the details of the tasks Process payment and Fulfill order, except that completion of Fulfill order sets the Status and Completed timestamp, as shown by the output data mapping below.

Process Refund calculates the refunded amount. Here we are using simply Order info.Amount, but we made it a decision task to allow for more complex logic, such as a change in the price between time of ordering and cancellation. The data output Refund info reports the refunded amount and a timestamp.

Now we also need to provide a way to identify this particular instance of the order process. The primary method used by Trisotech is called instance tagging. At the process level you define one or more strings that uniquely identify the instance. These must be strings, not structured data, so if you want to associate a value, such as the Order ID, with a label, you need to concatenate the label and the value. In the BPMN diagram, right-click Attributes/Process Instance Tags and define one or more label-value strings that identify a single process instance or possibly a small collection of instances:

Since each Order has a unique ID, this tag always identifies a single instance.
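The label-plus-value concatenation is trivial but worth seeing. This is a hypothetical sketch; the "OrderID:" label format is an assumption for illustration, not Trisotech's required syntax.

```python
def order_tag(order_id):
    """Hypothetical instance-tag builder. Tags must be plain strings,
    not structured data, so the label and value are concatenated into
    one string (label format assumed for this example)."""
    return "OrderID:" + str(order_id)

tag = order_tag("A12345")
```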

We want to tell the Receive cancellation events to accept only Cancellation messages for the current instance. Right-click the boundary event and select Message correlation. As you see below, Trisotech uses a three-stage filter for correlation, although typically one or more of the stages is not used.

The first stage, a message content filter, is a boolean expression of message data that makes no reference to the process instance. The second stage, instance tag matching, is the most common, and what we use in our example. It is a FEEL expression of type Text referencing attributes of the incoming message (here, Cancellation). Only strings matching a process instance tag are received, i.e., Cancellation messages in which the component OrderID matches Order info.ID. Because the instance tags are indexed, the runtime can do this correlation matching very quickly.

The third stage is a boolean expression of process instance data. It is very flexible but unless you have cut down the list of available messages using the first two filters, it could be slow. It should be used only in special cases.
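Putting the three stages together, correlation can be sketched as a pipeline of progressively narrower filters. The names and data structures below are assumptions for illustration, not Trisotech internals; the point is the ordering, with the cheap indexed tag match doing most of the work.

```python
def correlate(message, instances):
    """Sketch of three-stage message correlation (illustrative only):
      1. message content filter -- boolean test on message data alone;
      2. instance tag matching -- a message-derived string matched
         against indexed process instance tags (the common case);
      3. predicate over instance data -- flexible but slow, special cases."""
    # Stage 1: reject messages that fail the content filter.
    if not message.get("OrderID"):
        return []
    # Stage 2: match against indexed instance tags (fast lookup in practice).
    wanted_tag = "OrderID:" + message["OrderID"]
    candidates = [i for i in instances if wanted_tag in i["tags"]]
    # Stage 3: optional boolean expression of instance data (unused here).
    return candidates

instances = [{"id": 1, "tags": {"OrderID:A12345"}},
             {"id": 2, "tags": {"OrderID:B67890"}}]
matched = correlate({"OrderID": "A12345"}, instances)
```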

So let’s now test our executable process. To do that, on the Execution ribbon click Cloud Publish, which compiles the logic and deploys it as a REST service accessible through the Service Library. There you see the various endpoints created for this service:

This answers the first question posed at the start of this post: How do we address a message to a particular process? It’s via the endpoint URL. In the modeling tool, my process has the name message flows5 executable, and this is version 1.0 of that process. So a POST to [base]/bpmn/api/run/test/bpmn/message-flows5-executable/1.0 is equivalent to sending the message Order to the start event. You can expand that box to see the required input data and the data returned when the process completes for various response codes:

As you can see, these are the JSON equivalents of the FEEL message definitions. Also note that POSTing an API call to [base]/trigger/test/bpmn/message-flows5-executable/1.0/default/Cancellation is equivalent to sending the Cancellation message to the boundary events. Here the names of both the process and the message event are part of the endpoint URL. Correlation to the particular instance of the process – this specific order – is achieved via the Process Instance Tag.
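The way the two endpoint URLs are assembled from the process name, version, and message name can be sketched as simple string construction. The base URL below is a placeholder, and the path layout simply mirrors the two endpoints quoted in the text.

```python
# Sketch of endpoint URL construction (BASE is a placeholder, not a real
# Trisotech server). The path segments mirror the endpoints shown above.
BASE = "https://example.com"
process, version = "message-flows5-executable", "1.0"

# POST here == sending the Order message to the start event:
start_url = f"{BASE}/bpmn/api/run/test/bpmn/{process}/{version}"

# POST here == sending the Cancellation message to the boundary events;
# the message event name is the final path segment:
cancel_url = f"{BASE}/trigger/test/bpmn/{process}/{version}/default/Cancellation"
```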

We can test execution of this process from the Service Library. The TryIt button exposes a form where you can input Order data.

Clicking Run advances the process to Process payment.

Here the Status is “Received” and the data output Received contains the timestamp of receipt. The Continue service section lets you select either Process payment, meaning run this activity, or Receive cancellation, the boundary event. If we click Process payment, we see the data mapped to this task, and we can define any output data values. Clicking Run again advances to Fulfill order. Again the Continue service section lets us either run the task or receive the Cancellation message. This time let’s click Receive cancellation. We are prompted to define the Cancellation message:

Clicking Run now should exit on the exception flow, run the decision task, and end with Status “Cancelled”, end state “Order cancelled”, with a Cancellation timestamp and Refund info populated. You can see from the data outputs that it does exactly that.

The answer to the third question posed at the top – How does the instance know which event catches the message? – is based simply on timing: the message can be received only by a node that is active at runtime.

Let’s recap. As you can see, executable BPMN is a bit more involved than normal non-executable Method and Style. Modeling real-world processes almost always involves incoming messages, which in executable BPMN are API calls to the deployed BPMN service. We model message data in the normal way, via FEEL datatypes, and use data mapping to save that data to data objects used in the process. At runtime, messages are routed to the right process and message event via the endpoint URL, and correlated to a particular instance via Process Instance Tags. We can test and debug the logic, including waiting for message events, via the Service Library. And best of all, none of this requires programming.

If this interests you – and it should! – contact Trisotech to find out how they can set you up for success.

Follow Bruce Silver on Method & Style.



BPMN: Database Operations with OData

By Bruce Silver

Read Time: 12 Minutes

BPMN’s advocates – myself included – like to proclaim that the language allows non-programmers to define executable processes themselves. But that’s only one-third true.

Yes, in BPMN 2.0 the executable process steps through the shapes of the diagram as drawn by the modeler. That in itself was a monumental achievement a decade ago. “What you model is what you execute,” we liked to say. But it left out two key ingredients that still required programming. First, the process data, mappings, and expression language were assumed to be based on Java or some similar programming language. So while the BPMN diagram describes the paths the process may follow, the logic controlling which path is followed normally requires a programmer. And, of course, the actions provided by the process tasks themselves must be created by programmers. A decade ago, in the heyday of SOA, vendors liked to pretend that all the actions you might want to use in your process have already been nicely programmed and wrapped in a service API, just waiting to be orchestrated by BPMN. Really?

Now you may have noticed Trisotech has begun marketing “business automation as a service,” executable process apps combining BPMN, DMN, and CMMN that can be created entirely by non-programmers, and has been quietly building out the platform that turns that promise into reality. To solve the problem of data modeling and mapping, they borrowed FEEL and boxed expressions from DMN. Brilliant, in my opinion! But that still leaves the issue of task logic. How can non-programmers create that? For that, Trisotech currently offers the following options:

What’s missing in all this is a solution for basic CRUD operations: database record Create, Read, Update, and Delete. Actually, that’s available as well, through a standard called OData. I just never knew how to use it. This post will show you how.

Databases are, as a rule, not cloud-friendly. Each DBMS provider has its own proprietary protocols and APIs. OData is a standard that provides a common abstraction layer for databases, exposing CRUD operations as cloud-friendly REST services. Trisotech has produced a white paper on this and a video, both of which I recommend and will summarize briefly here.

The solution assumes you have some DBMS on a website: SQL Server, Oracle, mySQL, whatever. Between Trisotech and that DBMS is a data gateway, a service that “virtualizes” the DBMS using OData. After you point the gateway to your database, it exposes a Metadata XML file that you download into Trisotech Workflow Modeler or Decision Modeler. This file defines the data structures used in your database tables, which Trisotech uses to generate the service interface and related FEEL datatypes. This makes it possible for non-programmers to define CRUD operations in BPMN models through basic boxed expressions. At runtime, deployed Trisotech processes call the data gateway using the OData protocol to execute the operations.
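The essence of the OData abstraction is that a table read becomes a plain REST URL with standard query options such as $filter. Here is a minimal sketch of that URL construction; $filter is part of the OData standard, while the service root and entity-set names are placeholders that would depend on your gateway configuration.

```python
from urllib.parse import quote

def odata_query_url(service_root, entity_set, filter_expr=None):
    """Sketch of how OData exposes a table read as a cloud-friendly REST
    URL: GET <service_root>/<entity_set>?$filter=<expression>.
    Service root and entity-set names here are placeholders."""
    url = f"{service_root}/{entity_set}"
    if filter_expr:
        url += "?$filter=" + quote(filter_expr)
    return url

url = odata_query_url("https://gateway.example.com/odata", "vacation",
                      "employeeEmail eq 'jane@example.com'")
```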

Sounds easy, right? Well, first you have to provide the data gateway. You can use free open-source code from Teiid, Red Hat JBoss Data Virtualization (a commercial product), or Skyvia (a commercial cloud service). The white paper provides a configuration example for Teiid to get you started. The rest of this post assumes you’ve got that in place.

In your Trisotech workspace you have a repository of example models called EU-Rent. The example I’m going to show you is based on the Vacation Request process there. The scenario is you have a database table of employees and their accrued vacation days. The employee submits a vacation request, specifying the vacation start and return dates. Some decision logic controls whether to automatically approve, refuse, or manually adjudicate the request. And if the request is approved, it updates the remaining available vacation days in the database. That’s pretty straightforward. So let’s see how a non-programmer like me can implement this.

We start with the database. I’ve got a simple mySQL table vacation on a website. The structure is created using phpMyAdmin:

For OData to work, you need a primary key field, which we have here with employeeId. It’s common to have the primary key field defined to autoincrement as records are added. phpMyAdmin lets you populate the tables from a CSV file, so let’s add a few records:

Now the data gateway provides a downloadable Metadata XML file that looks like this:

Save that on your PC. You’ll need it to configure the service task that accesses the database.

Here is the process we want to make executable. The main difference from normal non-executable BPMN diagrams is all the data objects, data inputs, and data outputs – the dog-eared page shapes representing process variables – and the dotted arrow data associations mapping them to the task inputs and outputs.

Our vacation table isn’t represented in the diagram, although you could add a datastore as a kind of annotation to represent it. We have a data input Vacation request, specifying the employee email, vacation start and return dates. And we have a data output, Updated Vacation Status, that returns the new employee record after updating the vacation days. When you publish this process as a service, only the data outputs and process end states are returned on execution. The regular data objects are not.

First we need to import the OData service for vacation into our model. In the BPMN ribbon’s Operation Library, click on Import OData and upload the Metadata XML file you saved. Select the table or tables you want imported, and for each table Trisotech exposes a REST interface containing several standard operations: Find, Get, Create, Update, Delete, etc. It also creates FEEL datatypes for the table as a whole, a table record, and a query input structure.

The data input Vacation request looks like this:

The first service task, Fetch Vacation Information, queries our vacation table using the Employee email from Vacation request to obtain the employee’s accrued vacation days. Right-click Attributes/Service task and select the interface for the table and the operation for this task, in this case Find vacation, meaning a query of this table. Right-click Attributes/Data Inputs to see the parameters of the Find vacation operation:

It wasn’t obvious to me what to make of this. You need to consult the OData documentation, which explains how these parameters are used to construct the REST query URI. We want the $filter parameter, which is used in a URI like this:

<myDB>/vacation?$filter=<match condition string>

The Data Input Mapping screen provides a boxed expression for the query parameters. For the parameters you use, you need to create a FEEL expression that returns the string that goes to the right of the = in the URI. What was confusing to me at first is that this string is in OData (not FEEL) syntax. For example, if Aaron Smith is issuing the vacation request, the Find vacation operation is invoked with the URI

<myDB>/vacation?$filter="email eq 'asmith@eurent.com'"

In that case, the Data Input Mapping for Fetch Vacation Information looks like this:

Note that the FEEL expression has to include the single quotes to wrap the value returned by Vacation request.Employee email. We don’t use parameters besides $filter, so their mapping expressions are blank.
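In FEEL the mapping expression is just string concatenation. As a sketch, here is the same string construction in Python, using the value from the example above; the function name is mine:

```python
# Sketch: build the OData $filter string the mapping expression must
# produce. The embedded single quotes around the value are required by
# OData syntax, so they must be part of the constructed string.
def filter_for_email(email):
    return f"email eq '{email}'"

print(filter_for_email("asmith@eurent.com"))
```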

If we right-click Data Outputs, we see the OData output is called vacation, the name of our mySQL table, with the FEEL datatype shown below:

The output is not the whole table, just records matching our filter query, which should be just one. We save this record in the data object Current Vacation Status using the Data Output Mapping shown below. Because vacation.value is a list, we need [1] to extract the item.
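One detail worth flagging: FEEL lists are 1-based, so vacation.value[1] selects the first (and here only) matching record. In 0-based languages the same extraction uses index 0. A sketch with hypothetical record values:

```python
# Hypothetical Find result: a one-element list of matching records.
result = [{"employeeId": 3, "email": "asmith@eurent.com", "days": 14}]

# FEEL's vacation.value[1] (1-based) corresponds to result[0] here.
current_vacation_status = result[0]
```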

The decision task Vacation Approval invokes a DMN decision service. You need to create the decision service first in Decision Modeler and then link to it from the BPMN decision task. The DRD is shown below:

It takes the Vacation Request in combination with the Days Remaining from Current Vacation Status to calculate the number of vacation days requested, and then either automatically approves, rejects, or refers the request for a human decision. Num Days is a context that determines the number of vacation days requested based on the start and return dates. Most of the logic is figuring out the count of weekend days in that span.

Approval returns a structure with components Status and Reason, the latter used in an email to the employee in case the request is refused. The decision table also returns “Refused” in the case of invalid values, such as a return date not later than start date, or either start or return date on a weekend. Readers interested in the calendar arithmetic used in these tables are referred to my DMN Method and Style book or training.
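The actual context logic is in the book, but a minimal sketch of the Num Days idea is easy to state: count only weekdays between the start date (inclusive) and the return date (exclusive), and treat an inverted span as invalid. This is my reconstruction, not Bruce's FEEL context:

```python
from datetime import date, timedelta

def num_days(start, return_date):
    """Weekdays from start (inclusive) to return_date (exclusive).

    Returns None for an invalid span (return date not later than start),
    which the decision table would refuse.
    """
    if return_date <= start:
        return None
    days = 0
    d = start
    while d < return_date:
        if d.weekday() < 5:  # Monday..Friday count; weekend days do not
            days += 1
        d += timedelta(days=1)
    return days
```

For example, a request starting Monday 2024-07-01 and returning the following Monday spans seven calendar days but only five vacation days.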

By default, Trisotech generates a decision service for the whole DMN model, but this returns only top-level decisions. We want both Approval and Num Days returned, so we define a decision service Vacation Approval that includes both as output decisions:

Back in BPMN, if we click on the task type icon for our decision task, we navigate our repository and select a decision service to invoke. When we select Vacation Approval, the decision task reuses it by reference, denoted by the closed lock icon. If we modify the decision model, the decision task automatically uses the updated decision logic. This decision service has two inputs, Days Remaining and Vacation Request. The process data available to the decision task are indicated by the incoming data associations from data object Current Vacation Status and data input Vacation request. The Data Input Mapping looks like this:

Because our decision service contains two output decisions, the decision task has two data outputs. We map Approval to the data object Approval Status and Num Days to the data object Num Days. The gateway following the decision task tests the value of Approval.Status, entered as the Condition attribute on each gate. The Send tasks in the model use a Trisotech system service to send an email, which incorporates the Reason value of Approval Status.

In the case where the vacation request is approved, we next need to update the vacation table in mySQL. The service task Update Remaining Vacation uses the same OData interface we used before, this time with the Update vacation operation. The inputs to an update are the primary key of the database, employeeId, and other columns of the record. The only change is we need to subtract Num Days from the original value Current Vacation Status.days. The Data Input Mapping is shown below:
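As a sketch of what that mapping computes (field names assumed from the table shown earlier), the update input is just the original record with the days count reduced:

```python
# Sketch: input payload for the Update vacation operation. The primary
# key identifies the record; only the days value actually changes.
def update_payload(current, num_days):
    return {
        "employeeId": current["employeeId"],  # primary key of the record
        "email": current["email"],
        "days": current["days"] - num_days,   # deduct the approved request
    }
```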

The only output of the update is the id of the updated record. We already know that, but just to check that the operation worked, we can use the Get vacation operation to return the whole updated record. The output of the Get operation, also called vacation, is saved in the process data output Updated Vacation Status.

We also need to model the User task Manually Approve Vacation, but we’ll leave the details of that for another time. Once that is done, we can publish the BPMN model as an executable service simply by clicking Cloud Publish in the Execution ribbon. In the Service Library we can now try it out:

Here Aaron Smith, with 14 days remaining, requested 2 vacation days. This is automatically approved, so the Updated Vacation Status data output now shows him with 12 days remaining, which is the expected result.

So let’s recap what we’ve just seen. Our employee vacation status is in a mySQL database vacation that is exposed via OData. When an employee submits a Vacation request, this triggers a process that first uses the OData Find vacation operation with a $filter parameter to return the employee record based on his email address. The remaining days value from that record in combination with values of the Vacation request are used in a decision service that either approves, refuses, or refers the request, and sends an email to the employee with the result. If the request is approved, a second service task uses the OData Update vacation operation to deduct the requested vacation days from the original count, and then uses the OData Get vacation operation to output the updated record for that employee. We published this process in one click as an executable service, and verified that it gives the expected result. And none of it required programming!

If you are playing around with Trisotech BPMN and didn’t think you could make your processes executable, maybe you can. A good place to start is by OData-enabling your databases.



BPMN 101: What Is a Process?

By Bruce Silver

Read Time: 6 Minutes

It’s now 10 years since finalization of BPMN 2.0, the acknowledged standard business process description language. BPMN has been widely adopted, by tool vendors and end users alike, but there are still some folks just discovering it for the first time. Because of BPMN’s outward similarity to traditional swimlane flowcharts, many of those users think they understand BPMN but are making some fundamental mistakes. This month’s post attempts to set things straight by explaining the basic concepts of BPMN. Even experienced process modelers may be unaware of some of them.

Let’s start at the beginning. What exactly is a process? Dictionary.com defines it as “a systematic series of actions directed to some end”. Some key ideas there:

- It is a series of discrete actions, not a single action or a continuous activity.
- It is systematic, meaning the actions follow a defined, repeatable pattern.
- It is directed to some end, meaning it has a defined completion.

BPMN is consistent with those ideas, but many swimlane flowcharts are not.

A BPMN process is a sequence of activities leading from some defined triggering event to one or more possible end states. All possible sequences leading from the start event to some end state are defined by the process model. Let me repeat that. A process model does not describe one possible path from start to end but all possible paths. That’s because a BPMN process is something performed repeatedly – not continuously – in the course of business. Each repetition, or process instance, has a definite start and end. That’s why not everything that BPM architecture calls a “process” qualifies as a BPMN process.

BPMN actually originated in 2002 as a business-oriented diagramming language for defining automated processes. To gain acceptance by business, it adopted the basic look of swimlane flowcharts, popular since the 1980s but just for documentation and analysis, not for automation. It was not until BPMN 2.0 that the diagrams were properly executable on an automation engine, but even today the vast majority of BPMN usage remains for documentation and analysis not automation. Nevertheless, the basic concepts of BPMN are rooted in process automation, in fact, a particular style of process automation called orchestration. A BPMN process is an orchestration.

In a BPMN process, the world is divided into three parts:

These concepts come from the BPMN spec. One important concept that comes from traditional business process management, not from the BPMN spec, is that the sequence of activities contained in a process should describe an end-to-end customer-facing transaction, cutting across departments and roles within the organization that provides the process. In other words, an order handling process should begin on receipt of the order and complete with fulfillment and billing (or equivalent accounting) for the order. Order entry, fulfillment, shipping, and billing should not be separate processes, but part of a single cross-functional process in which the process instance represents a single order. The reason is that BPM as a management discipline believes that business should be measured and managed from the perspective of these end-to-end processes rather than siloed departmental functions. BPMN supports this idea, but the spec technically does not require it.

Sometimes a single business process must be modeled as multiple BPMN processes, but this is not because they are performed by different units within the organization. The reason this could happen is if there is not a one-to-one correspondence between the instances of the processes. Within a BPMN process the instance of each activity must have one-to-one correspondence with the process instance. For example, in a process where the instance is an order, the instance of each activity must be an order. That means an activity that updates prices every month cannot be contained in the order process, since its instance is performed once a month not once an order. It requires a separate process that runs monthly.

Representing all the steps of an end-to-end process in a single diagram takes a lot of space, more than fits on a single printed page. BPMN allows a sequence of activities to be represented as a subprocess, shown in two ways: collapsed, as a single activity shape in one diagram, and expanded, as a flow from start to end in a second diagram with a child-to-parent relationship to the first. Thus an end-to-end BPMN model is in general not a single diagram but a hierarchy of diagrams, with a single top-level diagram hyperlinked to child- and grandchild-level diagrams detailing each subprocess. Where traditional flowcharting had to create separate high-level and detail-level models, difficult to keep in sync as the process evolves, BPMN provides both high-level and detail-level views within the context of a single model.

While the BPMN spec describes many different types of start events, in practice a process starts in one of only three ways:

  1. On receipt of an external request, modeled as a message start event.
  2. On a schedule, such as daily or monthly, modeled as a timer start event.
  3. At the discretion of a task performer, modeled as a None start event.

Thus it is possible to interpret how the process starts simply by the trigger icon in the start event.

The ability to understand the process behavior in detail simply by inspecting the diagram, unfortunately, was not top of mind in the BPMN 2.0 task force in 2010. They were solely focused on making the diagrams executable on an automation engine. But to most BPMN users today, precise description based on the diagram alone is much more important. That requires additional conventions on top of the concepts and rules of the spec. The ones I use are called Method and Style. Much of the need for BPMN Method and Style arises from the fact that the process behavior – which path out of a gateway is taken, for example – depends on the values of data objects. But BPMN 2.0 removed data objects from the modeler domain and moved them to the Java programmer domain, so they are not used in non-executable models. To compensate for this, Method and Style’s conventions assign meaning to things that are actually used in non-executable diagrams, such as the labels on end events and gateways or the icon of a start event.

For example, in Method and Style the label of an end event of a process or subprocess indicates its end state, meaning how did it end, either successfully or in some exception state. Each distinct end state is represented by an end event, so the count of end events represents the count of distinct ways it could end. Method and Style says a subprocess with multiple end states MUST be followed by an XOR gateway with an identical number of gates, each labeled to match the end state label. This indicates the path taken by each instance. So if an instance reaches the subprocess end state Report approved it follows the gateway path Report approved, and if it reaches the end state Report rejected it follows the gateway path Report rejected. Conventions like this allow common understanding of the process flow without defining data objects.

BPMN Method and Style provides a list of style rules intended to make the process behavior precisely described from the printed diagram alone. To reinforce these rules, some BPMN tools, including Trisotech Workflow Modeler, include Method and Style validation. In my BPMN Method and Style training, this has proven extremely valuable in raising the quality of BPMN models. You should check it out.


What is CDS Hooks?

CDS Hooks is both a free open-source specification and a published HL7® standard for user-facing remote clinical decision support (CDS). CDS Hooks can use FHIR® to represent patient information and recommendations but is architecturally an independent specification. The CDS Hooks specification is licensed under a Creative Commons Attribution 4.0 International License.


Note: HL7®, and FHIR® are the registered trademarks of Health Level Seven International and their use of these trademarks does not constitute an endorsement by HL7.


The four basic components of CDS Hooks are:

In 2015, the SMART team launched the CDS Hooks project to trigger third-party decision support services. Today, Clinical Decision Support (CDS) Hooks is a Health Level Seven International® (HL7) specification managed by the HL7 Clinical Decision Support (CDS) Workgroup. CDS Hooks provides a way to embed near real-time functionality in a clinician’s workflow when an EHR (Electronic Health Record) system notifies external services that a specific activity occurred within an EHR user session. For example, services can register to be notified when a new patient record is opened, a medication is prescribed, or an encounter begins. Services can then return “cards” displaying text, actionable suggestions, or links to launch a SMART app from within the EHR workflow. A CDS service can gather needed data elements through secure Fast Healthcare Interoperability Resources® (FHIR®) services. Using FHIR services allows CDS Hooks to provide interoperability between healthcare systems operating on different platforms.

SMART – Substitutable Medical Applications, Reusable Technologies
The SMART Health IT project is run out of the Boston Children’s Hospital Computational Health Informatics Program. Through co-development and close collaboration, SMART and FHIR have evolved together since the SMART standard enables FHIR to work as an application platform. The SMART project’s most recognized work is the “SMART on FHIR” application programming interface (API) which enables an app written once to run anywhere in a healthcare system.
SMART on FHIR (Fast Healthcare Interoperability Resources)
SMART on FHIR defines a way for health apps to securely connect to EHR systems. A SMART on FHIR application or service executes via SMART specifications on a FHIR system, extending its functionality through the use of clinical and contextual data. Together with the FHIR models and API, the SMART on FHIR specification components include authorization, authentication, and UI integration.

Who Uses CDS Hooks?

CDS Hooks support is built into all the major EHR products including EPIC, Cerner, Allscripts and athenahealth. Indeed, every type of healthcare organization has access to these technologies: large-scale HMOs like UnitedHealth Group, Anthem, Kaiser, and Humana; PPOs like Clover Health; teaching hospitals like the University of Utah, Stanford Hospital, Mayo Clinic, and North Shore University Hospital (Northwell); and professional organizations like the American College of Emergency Physicians (ACEP) and the American College of Obstetricians and Gynecologists (ACOG).

Standards-based implementations of CDC recommendations use the CDS Hooks interoperability framework to integrate guideline recommendations into EHR systems.

In the United States, SMART support is specifically referenced in the 21st Century Cures Act of 2016. The 21st Century Cures Act requires a universal API for health information technology, making all elements of a patient’s record accessible through the SMART API with no special effort. CDS Hooks are typically implemented as SMART applications.

What is CDS Hooks Used For?

CDS Hooks enables the creation of standard places within the EHR workflow where the EHR can issue a notification of an event that is happening. This notification can be received by an external (typically SMART) application which returns pertinent information to the EHR for display to the EHR user. In FHIR-speak, the information returned is a “card.” Cards can be of three types:

  1. Information cards, which display text to the EHR user.
  2. Suggestion cards, which propose an actionable change the user can accept.
  3. App Link cards, which link to reference materials or launch a SMART app.

Because CDS Hooks is an open standard, new hooks can be created by any interested party. Today, there are six standard “hooks” defined as part of Version 1 of the standard:

  1. patient-view – Patient View is typically called only once at the beginning of a user’s interaction with a specific patient’s record
  2. order-select – Order Select fires when a clinician selects one or more orders to place for a patient (including orders for medications, procedures, labs and other orders). If supported by the CDS Client, this hook may also be invoked each time the clinician selects a detail regarding the order
  3. order-sign – Order Sign fires when a clinician is ready to sign one or more orders for a patient (including orders for medications, procedures, labs and other orders).
  4. appointment-book – Appointment Book is invoked when the user is scheduling one or more future encounters/visits for the patient
  5. encounter-start – Encounter Start is invoked when the user is initiating a new encounter. In an inpatient setting, this would be the time of admission. In an outpatient/community environment, this would be the time of patient-check-in for a face-to-face or equivalent for a virtual/telephone encounter
  6. encounter-discharge – Encounter Discharge is invoked when the user is performing the discharge process for an encounter where the notion of ‘discharge’ is relevant – typically an inpatient encounter

Prescription writing is an example you can use to visualize the way this technology works. In the EHR, the event of writing the prescription triggers a CDS hook (order-select). The hook can be received by a patient engagement application. The application then returns a Suggestion card recommending that the patient be provided educational materials, enrolled in a support program, and offered copay assistance. An Information card indicating drug/drug or drug/disease adverse reactions might also be supplied.

The CDS Hooks Standard

The CDS Hooks specification and Implementation Guide describe the RESTful APIs and interactions that integrate Clinical Decision Support (CDS) between CDS Clients (typically EHRs but possibly other health information systems) and CDS Services. All data exchanged through the RESTful APIs is sent and received as JSON structures and transmitted over channels secured using HTTPS (Hypertext Transfer Protocol over Transport Layer Security, TLS).

This specification describes a “hook”-based pattern for invoking decision support from within a clinician’s workflow. The API supports:

- Synchronous, workflow-triggered CDS calls returning information and suggestions.
- Launching a user-facing SMART app when the CDS requires additional interaction.

User activity inside the clinician’s workflow, typically in the organization’s EHR, triggers CDS hooks in real time. Examples of this include:

- Opening a patient’s record.
- Selecting or signing an order for a medication, procedure, or lab.
- Booking an appointment, or starting or discharging an encounter.

When a triggering activity occurs, the CDS Client notifies each CDS service registered for the activity. These services must then provide near-real-time feedback about the triggering event. Each service gets basic details about the clinical workflow context (via the context parameter of the hook) plus whatever service-specific data are required. This data is often provided by the FHIR prefetch functionality of the CDS Hooks Discovery URL.

Each CDS service can return any number of cards in response to the hook. Cards convey some combination of text (Information card), alternative suggestions (Suggestion card), and links to apps or reference materials (App Link card). A user sees these cards, one or more of each type, in the EHR user interface, and can interact with them as follows:

- Read the text of an Information card.
- Accept a Suggestion card, applying the proposed change in the EHR.
- Click an App Link card to open reference material or launch a SMART app.
- Dismiss any card.
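As a sketch of the wire format, here is a service response carrying a single Information card. The required card fields per the spec are summary, indicator, and source; the service name and card text are hypothetical:

```python
# Sketch of a CDS Hooks service response: a JSON object with a "cards"
# array. Each card carries summary, indicator, and source at minimum.
def information_card(summary, detail):
    return {
        "summary": summary,
        "indicator": "info",  # one of "info", "warning", "critical"
        "source": {"label": "Example CDS Service"},  # hypothetical label
        "detail": detail,
    }

response = {
    "cards": [
        information_card("Drug interaction check passed",
                         "No known interactions for this order."),
    ]
}
```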

A good way to get a single-page technical overview of CDS Hooks including Discovery, Request and Response specs is to view the CDS Hooks “cheat sheet”.


Trisotech and CDS Hooks

The Trisotech CDS Hooks option generates CDS Hooks-compliant endpoints for Trisotech automation services.


CDS Hooks Discovery Endpoint

Trisotech modeling and automation platforms support the CDS Hooks 1.0 standard for processing and data prefetch in the Workflow and Decision Management modelers. Once the desired actions are modeled, publishing the models to the Service Library for automation purposes automatically provides a CDS Hooks Discovery endpoint URL for that service. That endpoint describes the CDS Service definition including the prefetch information required to invoke it according to the CDS Hooks 1.0 standard. Currently, this endpoint implements the standard CDS Hooks discovery endpoint for Patient View (patient-view).

CDS Hooks FHIR Prefetch

Using the CDS Hooks endpoint features requires preparation through entries in model inputs. Each input that will be prefetched from FHIR needs to have a custom attribute called FHIR that contains the query for the prefetch. For example, in a Workflow Data Input shape the custom attribute to prefetch the current patient information might look like this: FHIR: Patient/{{context.patientId}}

If you were using a decision model the DMN Input shape to prefetch a specific-code observation from the current patient might have a custom attribute like this: FHIR: Observation?subject={{context.patientId}}&code=29463-7
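The {{context.patientId}} token is substituted with the hook’s context before the FHIR query is issued. As a sketch of that substitution step (my own illustration, not Trisotech’s implementation):

```python
import re

# Sketch: resolve {{context.<field>}} tokens in a prefetch template
# against the context object delivered with the hook invocation.
def resolve_prefetch(template, context):
    return re.sub(
        r"\{\{context\.(\w+)\}\}",
        lambda m: str(context[m.group(1)]),
        template,
    )

query = resolve_prefetch(
    "Observation?subject={{context.patientId}}&code=29463-7",
    {"patientId": "123"},  # hypothetical patient id from the hook context
)
```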

If your model is properly configured with the FHIR custom attributes and published to the Trisotech Service Library, an HTTP “GET” on the provided endpoint will result in a JSON description of the service. Each CDS Service is described by the following attributes:

- hook (REQUIRED, string): The hook this service should be invoked on.
- title (RECOMMENDED, string): The human-friendly name of this service.
- description (REQUIRED, string): The description of this service.
- id (REQUIRED, string): The {id} portion of the URL to this service, which is available at {baseUrl}/cds-services/{id}.
- prefetch (OPTIONAL, object): An object containing key/value pairs of FHIR queries that this service is requesting that the EHR prefetch and provide on each service call. The key is a string that describes the type of data being requested and the value is a string representing the FHIR query.
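Putting those fields together, a discovery response for one service might look like the sketch below. The field names come from the CDS Hooks spec; the service title, id, and prefetch key are hypothetical:

```python
# Sketch of a CDS Hooks discovery response describing one service.
discovery = {
    "services": [
        {
            "hook": "patient-view",
            "title": "Vacation Eligibility Advisor",       # hypothetical
            "description": "Example Trisotech-published CDS service",
            "id": "vacation-advisor",                      # {id} in the URL
            "prefetch": {
                # key names the data; value is the FHIR query template
                "patient": "Patient/{{context.patientId}}",
            },
        }
    ]
}
```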

Also, the prefetched data can be POSTed to the base URL (the discovery URL without the /cds-services suffix) to execute the service.

CDS Card FEEL Templates

Trisotech modeling also supplies modeling templates for all three types of CDS Cards. Information, Suggestion and App Link card templates are available in the FEEL language format for inclusion in Decision and Workflow models.

CDS Card Automatic Default Generation and Explicit Card Mapping

A CDS Card is automatically generated from the service unless your service is designed to explicitly output cards. By default, an Information card that has the name of the service as its summary will be automatically generated. Each service output will be added to the card detail in the format Key: Value.
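As a sketch of that default behavior (the exact formatting is my assumption): the generated card takes the service name as its summary and lists each output in Key: Value form in the detail.

```python
# Sketch: default Information card built from a service's name and its
# output key/value pairs, one "Key: Value" line per output.
def default_card(service_name, outputs):
    detail = "\n".join(f"{k}: {v}" for k, v in outputs.items())
    return {
        "summary": service_name,
        "indicator": "info",
        "source": {"label": service_name},
        "detail": detail,
    }
```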

For explicit mapping, if your model/service outputs a data type consistent with the CDS Card definition (required fields: summary, indicator, and source), only those outputs are transformed into cards. It should be noted that your model/service can have one or many outputs that are CDS Cards. An output can also be a collection of cards. The data type of the output is what determines whether explicit mapping is used for a service.

CDS Hooks Sandbox Testing

The CDS Hooks community provides a publicly available sandbox for services testing and all Trisotech modeling/service CDS Hooks features are tested and demonstrated using the CDS Hooks sandbox.

CDS Hooks Continuous Improvement

Trisotech is continuously improving CDS Hooks features including plans to add additional Version 1 standard hooks such as order-select, encounter-start, etc.
