Coverage Initiation: Trisotech’s approach to standards-based workflow and decision automation 

Vlog

Business automation best practices

#1 – Introduction

By Sandy Kemsley

Video Time: 4 Minutes

Hi, I’m Sandy Kemsley of column2.com. I’m here for the Trisotech blog with the first in a series on best practices in business automation application development.

I want to set the stage for this series by talking today about what I call the automation imperative. We have a lot of cool technology now that can be applied to automating business processes and decisions.

Now, all this technology makes more automation possible. These newer technologies let us automate things faster and better, and in fact automate things that were never possible before. But this has a downside. Yours isn’t the only organization with access to these technologies, and everyone else is trying to automate their businesses using the same methods and the same technologies. That means whatever market you’re in, it’s become more competitive, since that rising tide of new technology is floating all the boats, not just yours. More nimble competitors will force you to automate in order to compete, or you’ll die in the marketplace. This is the automation imperative. It’s essential that you leverage automation in your business, or you just won’t be able to survive.

So it’s easy, right? Just plug in some of these new technologies and off you go? Sadly, it’s not that easy. A study done by Boston Consulting Group showed that 70% of “digital transformation projects” don’t meet their targets. 70! 7-0, that’s a big number. Well okay, digital transformation is one of those loose terms that gets applied to pretty much any IT project these days, so let’s focus that down a little bit. Ernst & Young looked at RPA projects. Now, robotic process automation vendors claim that you’ll have your return on investment before they even get driven all the way out of your parking lot.

Now, what E&Y found, though, is that 30 to 50 percent of initial RPA projects fail. So, if organizations aren’t succeeding with a technology that’s supposed to be the most risk-free way to improve your business through automation, then there might be some problems with the technology, but probably someone is doing something wrong with how they’re implementing these projects as well. We have billions of dollars being spent on technology projects that fail. A lot of companies are failing at business automation. You don’t have to be one of them.

Now, this series of videos is focusing on best practices that will help you to maximize your chances of success in application development for business automation. Or, if you want to look at it in a more negative light, they will minimize your chances of failure.

I’m going to split the series into three parts:

So we’re going to be covering quite a bit of territory and looking at a number of things.

Now, in each of these videos, I’m going to discuss a couple of things you should be doing, so those best practices, and also some of the indicators that you might be failing: the anti-patterns that you can be looking for as you’re going through your projects, to figure out if something’s going off the rails, maybe before you have a serious failure. So stay tuned for the coming videos in this series.

That’s it for today. You can find more of my writing and videos on the Trisotech blog or on my own blog at column2.com. See you next time.

Blog

DMN’s Killer App

By Bruce Silver

Read Time: 7 Minutes

Presented at Decision Camp 2022

Watch the video
View the Slides

As a longtime DMN practitioner and from the beginning one of its biggest boosters, I am what you call a true believer. But while DMN continues to gain traction, even I would have to admit it has so far underperformed in the marketplace. The industry analysts say, “OK, we’re aware of it… But what’s the killer app?” It may not be what you think.

In the Beginning

Let’s go back to the beginning. In 2016, DMN was brand new. I had been posting about it, and I was invited to introduce it to the Decision Camp audience of decision management vendors and practitioners that year. As standards go, DMN was unusual: an executable modeling language designed for use by subject matter experts in a way that standards like BPMN were not. Executable logic would be defined graphically, using a set of standard tabular formats called boxed expressions, with a new business-friendly expression language called FEEL used in the table cells: Executable models without programming.

At that time, DMN was just a spec. There were no runtime implementations. But I also noticed that among tool vendors on the DMN 1.1 Task Force, which I had recently joined, there was no great urgency to implement it, and in my talk I complained about that. Of course, it was the things that made FEEL and boxed expressions business friendly that also made them difficult to implement: spaces in variable names, for example, and tables nested, potentially without limit, inside other tables.

But at that meeting, Mark Proctor of Red Hat told me, “Don’t worry, my guy will have a DMN runtime in 6 weeks.” It actually took his guy, Edson Tirelli, more than 6 months, but it worked very well, and Red Hat made the FEEL parser and DMN runtime open source. So by 2017 we had an open source runtime, and there was little excuse for tool vendors not to fully implement DMN.

A few did, but most Decision Management vendors continued to resist FEEL and boxed expressions. In fact, they questioned the original premise that business users really wanted to go beyond defining requirements handed off to programmers. Or, if they did, that they had any interest in basing that on the DMN standard, even though FEEL and boxed expressions had been central to the original DMN RFP. One vendor that did believe in the original premise, as I did, was Trisotech, and I soon moved my DMN training to their platform, which was aimed squarely at non-programmers.

Let’s fast-forward to today, 6 years later. The DMN standard continues to get better, with a new version every year. But still, few tool vendors have implemented it beyond DRDs and decision tables. Interest in my DMN training has increased, but it is only now catching up to my BPMN training, which is focused on descriptive, that is non-executable, models. It’s a little discouraging. Yet despite all that, I am more hopeful today than ever about the promise of DMN’s key features, FEEL and boxed expressions. And that’s because they are the keystones of what I believe is a far larger opportunity.

What You Draw Is What You Execute

Back in 2010, the process modeling standard BPMN 2.0 promised business users, “What You Draw Is What You Execute.” I used to say that myself.

But not really. What makes a process model executable are the services implementing the tasks, and the process data and expressions used to orchestrate them. For those things, BPMN has always required programming. That standard provides nothing like FEEL and boxed expressions for non-programmers. While the activity flow semantics can be defined by business users in diagrams, making those diagrams executable has always required Java programming. So diagrams created by the business and handed off to programmers for implementation have been standard practice in Process Automation for a dozen years now.

A few years later, DMN set out to be different from that. If you are unfamiliar with OMG standards, they start with an RFP that lays out the objectives of the standard. And DMN was distinctly different from BPMN in that it wanted to enable business users to do more than create requirements handed off to programmers. To that end, the DMN standard itself defines a graphical Low-Code language for decision logic implementation: a set of standard tabular formats called boxed expressions and a friendly expression language for the table cells called FEEL. Although DMN allows tools to assert so-called Level 1 conformance without supporting FEEL and boxed expressions, FEEL and boxed expressions comprise the bulk of the DMN spec: the syntax, semantics, operational behavior and graphical formats.

Unfortunately, most decision management vendors, especially those who formerly called themselves business rule engine vendors, failed to adopt FEEL and boxed expressions, preferring the traditional arrangement in which business creates requirements handed off to IT for implementation in some proprietary language. They continue to do so today, although in so doing they ignore the original promise of DMN.

It’s important to understand that decision logic is more than decision tables. For example, it must refine raw input data into the variables that are used in the decision tables. FEEL and boxed expressions not only do that, but represent a complete Low-Code language suitable for any kind of business logic, not just that related to operational decisions. Even though FEEL and boxed expressions are defined within the DMN spec, their utility extends far beyond the boundaries of traditional decision management.

For example, over the past two years, the Trisotech platform has quietly been adding Automation capabilities to its BPMN models in a different way, a Low-Code way, leveraging FEEL and boxed expressions borrowed from DMN. By now that Business Automation platform is quite complete. So this goes beyond business users creating decision services. This is enabling them to create full-fledged Business Automation services themselves, graphically, so that What You Draw actually IS What You Execute.

Low-Code Business Automation

I use the term Business Automation to mean the combination of business logic automation and process automation, and DMN combined with BPMN offers a Low-Code approach to it based on standards. If Subject Matter Experts learn to create decision models using DMN with FEEL and boxed expressions, it’s just a small additional step to do Low-Code Business Automation themselves: model, test, and cloud-deploy as a REST service, no programming required.

Low-Code means model-based software development, accessible to technically inclined business users as well as to developers. The fact is, interest in Low-Code Business Automation is exploding; Gartner says 65% of all solution development will be Low-Code by 2024. That’s an addressable market larger by orders of magnitude than Decision Management. The key benefits are time to value, since model-driven development is simply faster than code, and avoiding the IT resource bottleneck, since subject matter experts can do much of it themselves.

That is why I see these DMN features, FEEL and boxed expressions, which have frankly languished in the world of Decision Management, as the key to something new and exciting today, an opportunity that is much larger than Decision Management. Low-Code Business Automation is actually DMN’s killer app.

Why is this opportunity happening now? The fact is, over the past six years, the software world has changed completely:

Reason 1
the cloud

The cloud has revolutionized software, both tools and runtime. Large enterprises have largely overcome their resistance to putting critical data in the cloud. In addition to its well-known benefits to IT, the cloud makes Business Automation tools and runtime directly accessible to business users; they don’t need to wait for IT to provision or maintain the tools or the runtime. The Trisotech platform, for example, is entirely cloud-based and business-user-focused. In one click, BPMN and DMN models are compiled and deployed as REST services in the cloud.

Reason 2
REST APIs

Today you can find a public REST API for just about anything you need. You don’t have to build it; someone has already done it. It’s there, accessible to you via a simple web message. Aggregators like RapidAPI.com provide centralized search, subscription, and SDKs for thousands of public REST APIs. In addition, standards like OData allow you to generate REST APIs for your own applications and databases. For example, Skyvia Connect generates OData endpoints for over 100 apps and databases. Each endpoint exposes 5 REST operations per database table. And on the Trisotech platform, DMN itself generates a REST API for ANY business logic you create yourself, and BPMN similarly generates REST APIs for your executable process logic. So today REST APIs are everywhere. Business Automation is really just orchestration of services, some created by you, but mostly created by others.

Reason 3
Covid and its aftermath, the current labor shortage

It is nearly impossible today to hire skilled programming resources, so getting new projects off the ground just takes too long. But what if you could teach subject matter experts to create solutions themselves? That’s what the current excitement over Low-Code and No-Code tools is about. No-Code is used primarily for situational apps in HR and CRM. I’m more interested in Low-Code logic, equivalent to a DMN decision model, that in combination with the cloud and REST APIs can automate critical core business activities in financial services and healthcare.

Example:
Vacation Request Process

Let’s see how Low-Code Business Automation with DMN and BPMN works, and then go back to key questions that have dogged Decision Management vendors from the start: Do business users really want to do this? And does basing the tools on standards make a difference?

Here is an example of a Vacation Request process as it would be modeled using my non-executable BPMN methodology called Method and Style. It is not a core business process, but it is illustrative. It’s very simple. A Vacation Request is processed by decision logic that either automatically approves it, refuses it, or refers it to the manager. If ultimately approved, the requested days are deducted from the employee’s accrued vacation time.

And here is the executable version of it using Low-Code Business Automation. The BPMN is similar, but the activities now are more fine-grained, with the icons in the top left corner indicating the specific capability of each task type. But the most obvious difference is that the model now includes process data and data flow.

Decision Tasks

A BPMN decision task is bound to a DMN decision service.

Here you see the data input validation decision. DMN is actually an excellent language for data validation. A Collect decision table, hit policy C, produces a list of error messages. Data validation logic typically uses some of the lesser-known FEEL functions and operators: generalized unary tests and the instance of operator, as you see here, as well as the get entries function, which lets you test all components of a structure at once, and matches, for pattern matching with regular expressions.
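
To make this concrete, here is a minimal sketch, not the model shown in the figure, of the kinds of FEEL fragments such a validation decision might use. The component names employee id and employee email are assumptions for illustration:

// instance of: check that a raw input has the expected type
Vacation Request.employee id instance of number

// matches: pattern-match a string against a regular expression
matches(Vacation Request.employee email, "^[^@]+@[^@]+$")

// get entries: test all components of a structure at once
every e in get entries(Vacation Request) satisfies e.value != null

// a generalized unary test in a decision table input entry,
// true when the tested start date lies in the past
? < today()

In a Collect table with hit policy C, each rule that fires contributes its error message to the output list.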

Just to repeat, DMN is not only about decision tables. It’s a Low-Code language for any business logic. For example, the Vacation Request specifies the employee’s requested start and return date, but the approval logic requires knowing the count of requested days. FEEL has excellent support for calendar arithmetic, but finding the count of requested days is a little complicated. We can’t simply subtract the dates; we need to exclude weekends and holidays as well.

This boxed expression is called a context. It lets you break up a complex value expression into pieces using local variables, something most other Low-Code languages cannot do. For example, here there are 5 context entries, each defining a local variable and its value expression, and the final result expression references those local variables. Note that the fourth context entry uses a decision table, an example of a table nested in another table. Context boxed expressions let you make complex logic business-friendly without cluttering up the DRD with dozens of tiny decisions.
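
As a rough sketch of the idea, simpler than the five-entry context described above and assuming a hypothetical Holidays list of dates as an additional input, a context computing the count of requested days might look like this, with each line one context entry and the last line the final result expression:

calendar days   : (Vacation Request.return date - Vacation Request.start date).days
requested dates : for i in 0..calendar days - 1 return Vacation Request.start date + i * duration("P1D")
working days    : requested dates[not(list contains(["Saturday", "Sunday"], day of week(item))) and not(list contains(Holidays, item))]
(result)        : count(working days)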

In the DMN model, a decision service defines the parameters called by the BPMN decision task. The decision service outputs are selected decision nodes in the DRD, and the tool calculates the required service inputs. Here the service Validate Vacation Request has input parameters Vacation Request and Original Employee Record, and output parameters Validation Errors and Employee Record.

In BPMN, the Decision task Validate Vacation Request is bound to that decision service. The task inputs are by definition the decision service inputs, and the task outputs are the service outputs. In the process model, the dotted arrows, called data associations, represent mappings between the process variables, here called Vacation Request and Employee Record, and the task inputs and outputs. So from the diagram you see there are input mappings from Vacation Request and Employee Record and an output mapping to Validation Errors.

Since all data is represented as FEEL in the modeling environment, the mappings use boxed expressions and FEEL, similar to a boxed invocation in DMN. Here the mappings are essentially identity mappings, but any FEEL expression can be used.

Service Tasks

An activity with the gear icon, called a service task, calls an external REST service. Simply bind it to a REST operation in the model’s Operation Library and map process variables to the service inputs and outputs.

EmployeeVacationData is a MySQL database table on my website. It holds the employees’ accrued vacation time. A third-party service, Skyvia Connect, introspects the database and generates a REST endpoint for it and a metadata file used to configure a Trisotech connector, exposing 5 database operations on the table using OData. We will use the Find, or query, operation to retrieve the record for the requesting employee.

We bind the service task Get Employee Data to the Find, or query, operation of this endpoint in the Operation Library. Then we model mappings between process variables and the service input and output parameters using FEEL and boxed expressions. Here the data mapping for the input parameter $filter uses a FEEL expression to create a query string in the OData language.
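
For example, if the table’s key column were named EmployeeID (an assumption for illustration), the $filter mapping could be a literal FEEL expression such as:

"EmployeeID eq " + string(Vacation Request.employee id)

which the OData endpoint interprets as a query returning only the requesting employee’s record.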

Once we’re done, the whole Vacation Request process is itself deployed as a REST service in the Trisotech cloud. This Business Automation service can be called by any REST client app with the appropriate credentials.

Hopefully this gives you the flavor of how subject matter experts can create Business Automation services using FEEL and boxed expressions without programming!

Where Can We Use This?

What kind of Business Automation services can make use of this? Currently the Trisotech platform is used most heavily in integration-centric processes in healthcare and financial services. Digital mortgage underwriting is a great example, as there are many complex rules. You might say that’s a straight decision management problem, but it’s really a process. You need to first get the data from various sources, validate the data, transform it to testable values, apply some decision logic, and finally format and store the output. Low-Code Business Automation can do all that in a single composite service.

I am currently working with a large accounting firm to create general ledger entries triggered by business events related to financial portfolio assets and trades. It’s high-volume, involves many database tables, and requires a lot of computation, perfect for DMN and BPMN. With Low-Code, solutions like these are much faster to build, test, and deploy than with Java, and easier to maintain.

So who can create apps like these? I admit it’s not every business user or subject matter expert. Most of my BPMN Method and Style students probably could not do it very well. But anyone who can create executable decision models using FEEL and boxed expressions certainly can. You need to have some basic facility with data and expressions, and the patience to debug when things don’t work right the first time. That’s just a subset of business users, but in most companies, subject matter experts who can do that still greatly outnumber programming staff. Microsoft used to call them citizen developers; now they are calling them software makers, a term I like better.

To turn subject matter experts into makers, I had to completely revise my DMN training. Originally I had a course, DMN Basics, that focused on DRDs and decision tables, since that’s all most DMN tools supported and it’s something every business user can understand. The problem is you cannot do anything useful with only that, except to provide requirements handed off to others. In the revised training I focus on FEEL and boxed expressions. Those are the critical elements. And yes, subject matter experts who are not programmers can learn to understand and use them well. You just have to show them how. They feel empowered, because they ARE empowered. Once you fully understand DMN, Low-Code Business Automation is mostly a matter of learning how to find and orchestrate REST APIs.

DMN vs Proprietary Low-Code Languages

OK, you’re saying there are already a large number of Low-Code Business Automation tools available. That’s true. Basically all of them use their own proprietary scripting for the Low-Code piece, just as many Decision Management vendors use their own proprietary rule languages, which they also say are business-friendly.

But FEEL and boxed expressions are not only non-proprietary, they are actually more business-friendly and often more powerful. For example, FEEL and boxed expressions make DMN more powerful and more business-friendly than Power FX, Microsoft’s Low-Code expression language used in Excel formulas and Microsoft PowerApps. Without boxed expressions like contexts, Power FX must pack complex logic into a single literal expression. And Power FX’s lack of infix operators makes these expressions deeply nested functions that are difficult to understand.

For comparison, here is an example published by Microsoft marketing showing how Power FX can extract the last word in a string. They were very proud of this. So, let’s see, is that 4 or 5 right parens at the end? Below it is the equivalent in FEEL: the split function tokenizes the input string based on the space character, and [-1] means take the last item. Now you tell me, which is more powerful AND more business-friendly?
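
In other words, the FEEL version is a one-liner. For a made-up input string it looks like this:

split("the quick brown fox", " ")[-1]   // returns "fox"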

So why are DMN users not demanding their tool vendors support FEEL and boxed expressions?

1

It’s mostly lack of awareness. Most of what other folks write about DMN just talks about DRDs and decision tables. Possibly even many of you here at Decision Camp were not aware of what FEEL and boxed expressions can do.

2

It’s also the need for training. You need to learn the language. Developers can generally learn from books and articles, but business users by and large cannot. They need training, a methodology that leads them through it step by step, and it must be hands-on using the tools. That’s an investment in money and time, definitely a barrier.

3

As everyone here knows, Automation is unforgiving. If the slightest thing in your model is wrong you will get an incorrect output or worse, a runtime error. So Automation requires debugging, and not all business users have the patience or discipline for that.

4

But a major reason, probably the most important reason, is the determined resistance from Decision Management vendors. Even though DMN is a standard, most vendors on the RTF still do not support it beyond DRDs and decision tables. FEEL is too hard, they say. Business users don’t want to create implementations, just requirements. They say it all the time. FEEL and boxed expressions are a distraction, not fundamental to DMN. We even heard this last year at Decision Camp.

The Larger Opportunity

That lack of support from incumbent vendors is why I have come to doubt that Decision Management is the best opportunity for DMN. A better opportunity is Low-Code Business Automation.

And this takes us all the way back to the original question: Do subject matter experts really want to create executable solutions themselves? Not all, obviously, but an increasing number of them do, and their employers today desperately need to acquire Low-Code tools and expand their pool of software makers. And even for professional developers, Low-Code is faster, easier to maintain, and more transparent to the business.

That is why I am becoming increasingly optimistic about engaging the Business Automation community with Low-Code Business Automation using BPMN and DMN. It’s a far larger addressable market than Decision Management, it’s growing, and it’s perfectly matched to today’s architecture based on the cloud and REST APIs.

But we know there are other Low-Code alternatives. So do they want to do it using standards, i.e., DMN in combination with BPMN? That is DMN’s most obvious differentiator right now vs other Low-Code languages. Here are two reasons why they would.

So ideally, what should happen now?

What started as an effort to enable subject matter experts to create decision logic themselves, while so far resisted within the digital decisioning community, could ultimately find a better path forward in Low-Code Business Automation.

For anyone interested in the tools and techniques used in this approach, there is a lot of information on methodandstyle.com. I offer online training in DMN, focused on FEEL and boxed expressions, and in Low-Code Business Automation, and there is a bundled offering at a discount. Each course includes 60-day use of the Trisotech platform and post-class certification.

Follow Bruce Silver on Method & Style.


Blog

Cloud Datastores Simplify Business Automation

By Bruce Silver

Read Time: 4 Minutes

Cloud datastores are a new feature of the Trisotech Automation platform. They are most useful in BPMN, but they can serve also as input data in DMN. Datastores are a standard BPMN element representing persisted storage, accessed by a process task via data association but available for external access as well.

In Trisotech’s implementation, each datastore is a single relational table. It acts like a database but requires none of the extra work involved in using something like OData. For example, to incorporate a relational database table in your model with OData, you need to host the database yourself, subscribe to an OData wrapper service like Skyvia Connect, and use a Service task for the database operation. Datastores are much more convenient: They are built into the Trisotech platform, so they require no external components and they are directly accessible like any process variable. If your table is a few thousand records or fewer, datastores could make your Low-Code Business Automation project a lot simpler. In this post we’ll see how they work.

In the process diagram, a datastore is represented by the can shape, unlike the dog-eared page shape used to represent data objects, data inputs, and data outputs. Like data objects, cloud datastores are process variables based on a FEEL datatype, which is a collection of the table row type. But a datastore differs from a data object in two ways. First, it persists after the process instance that accesses it is complete. Second, a datastore does not belong to a single process. It belongs to the entire workspace, with access granted to its creator and selected other workspace members. That means it can be used to store data shared between a process and an external entity or between multiple processes.

In Trisotech, datastore content is typically created and maintained with the help of Excel on the desktop. Importing an Excel file in a management utility creates and populates the table. For example, here we have an Excel table of the available vacation days for each employee. In the management utility, importing this Excel file populates the datastore and adds it to the list of the user’s defined datastores. You can also modify datastore content via Excel using the utility. Authorized external entities can also do that via an API, enabling interaction with the process via shared data, a familiar pattern from Method and Style.

Binding a datastore shape in BPMN (or an input data in DMN) syncs the shape to the datastore content, indicated by a lock icon inside the can shape. Unlike a database table, which requires a Service task to perform a query, insert, or update operation, operations on a datastore use standard FEEL list functions and operators. They are typically performed by data associations, i.e., the input and output mappings of a Script task, Decision task, or User task, tasks which can perform actions in addition to the datastore read or write. To perform a datastore operation and nothing more, a Script task is simplest, as shown in the model below:

Here the Script task Get Available Days queries the datastore to find the accrued vacation time for the employee identified by the data input Vacation Request. This involves selecting the employee record from the datastore and then extracting the AvailableDays value from that record. Since both the input mapping and the script are FEEL expressions, you have several possible ways to do this: Get the entire datastore in the input mapping and extract AvailableDays for a particular employee in the script expression; select the employee record in the input mapping and extract AvailableDays in the script; or do it all in the input mapping with the script being just the identity mapping. With datastores there is no difference in performance, but I think it’s most “natural” to do it the second way, as shown here. The input mapping selects Employee record using a filter of the datastore, and the script extracts AvailableDays.
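
A minimal sketch of that second option, assuming the datastore is named Employee Vacation Days with columns EmployeeId and AvailableDays:

// Input mapping: select the matching record with a filter on the datastore
Employee record : Employee Vacation Days[EmployeeId = Vacation Request.employee id][1]

// Script expression: extract the component we need
Employee record.AvailableDays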

While querying a datastore usually involves a filter, inserting a record in the datastore uses the standard FEEL list functions, append, insert before, or insert after. Deleting a record uses the list function remove. Some of these require knowing a record’s position in the datastore, which you can get using the index of function. The more interesting operation is a record update. Let’s say in this example we want to reduce the employee’s AvailableDays value by the vacation days requested, as in the process diagram below.
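
For example, sketches of an insert and a delete, using the same assumed datastore and record names as above:

// insert a new row at the end of the datastore
append(Employee Vacation Days, New employee record)

// delete a row, using index of to find its position
remove(Employee Vacation Days, index of(Employee Vacation Days, Old employee record)[1])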

Record updates like this use the list replace function, new in DMN 1.4. This function has two forms: list replace(list, position, newItem) and list replace(list, match, newItem). The first one requires knowing the position, as with insert before. In the second, the parameter match is a function definition, similar to the precedes function in sort. The first argument of match is a range variable representing any item in the list. As with precedes, it is most common to define match inline as an anonymous function using the keyword function.

In this example, the input mapping of Update Employee record is the same as before, a filter extracting the original Employee Record from the datastore. But this time, the script expression uses the context put function to modify the value of that record’s AvailableDays component, subtracting Vacation Request.requested days from the original value.
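
As a sketch, assuming the same component names as above, that script expression would read something like:

context put(Employee record, "AvailableDays", Employee record.AvailableDays - Vacation Request.requested days)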

Then the output mapping uses list replace to replace the original employee record with the updated one. Here match is an anonymous function in which the range variable x stands for any record in the datastore and the function selects the one for which the EmployeeId value matches that of Employee record. The list replace function substitutes the script output Employee record – now updated – for the original datastore record.
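
Again as a sketch with the assumed names, that output mapping would be:

list replace(Employee Vacation Days, function(x) x.EmployeeId = Employee record.EmployeeId, Employee record)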

Admittedly, datastore updates involve unfamiliar FEEL functions like context put and list replace, but learning those is a small price to pay for the extra convenience. Cloud datastores make data persistence easy. You should give them a try!

To learn how to do Low-Code Business Automation using both datastores and OData, check out my new training.

Follow Bruce Silver on Method & Style.

Blog

Call Public APIs Without Programming

By Bruce Silver

Read Time: 2 Minutes

Interest in Business Automation is being accelerated by the thousands of public REST APIs available from countless service providers.

Most are available on a Freemium basis – free low-volume use for development, with a modest monthly fee for production use. The Trisotech Low-Code Business Automation platform lets you incorporate these services in your BPMN models without programming. You just need to configure a connection to them using the model’s Operation Library. With many public APIs, that configuration requires simply importing an OpenAPI file from the service provider. OpenAPI – sometimes called Swagger – is a file format standard for REST APIs. The problem is that most public APIs don’t provide one. Instead they provide documentation that you can use to configure the Operation Library entry yourself. If you’re not a developer, that sounds daunting, but it’s really not that hard. This post will show you how to do it.

A REST API is organized as an interface containing a set of operations. The interface specifies a base address, or server URL, and some means of authorization, such as an API key. Each operation performs a specific function and is addressed by a path appended to the base address, or endpoint. To invoke the operation, a client app sends an http message containing the required information to a URL concatenating the base address and path. That information is carried in the message header, the message body, or the query portion of the URL, as specified in the API documentation.

Upon receipt of the message, the service is executed and returns a response message, which includes a status code indicating success or failure and a response body, typically json.

Compared to the earlier generation of SOAP-based web services, REST APIs are simple and standardized. This is their great attraction!

In my Business Automation training, we make use of a real-time stock quote service from Rapidapi.com, a popular aggregator of public APIs. The service we use is called YH Finance, a clone of the old Yahoo Finance API. The documentation page looks like this:

The interface is huge, with operations collected in groups. The one we want is in the group market, with the path /market/v2/get-quotes. Selecting that from the panel on the left, we see the operation documentation in the center panel, which specifies the parameters of the operation, and where in the message they are placed: in the message header, message body, or query portion of the URL. Here two parameters are required in the http message header: X-RapidAPI-Host is the server URL, yh-finance.p.rapidapi.com. X-RapidAPI-Key is the requester’s personal authorization credential. Two more parameters are required in the query: region, enumerated text, and symbols, a comma-delimited string containing stock tickers.
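
Putting those pieces together, a request for US quotes on a couple of illustrative tickers would be sent to a URL like

https://yh-finance.p.rapidapi.com/market/v2/get-quotes?region=US&symbols=AAPL,IBM

with the X-RapidAPI-Host and X-RapidAPI-Key values carried in the message header.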

This documentation is all we need to create the input side of the Operation Library entry. Manually add an interface YH Finance and an operation Get Quotes. In the interface properties enter the Server URL and the authorization type. In the operation properties enter the method, path, and under Inputs the required parameters except for the API key, which is assigned to an individual and entered separately when the process containing the service invocation is compiled and published. Clicking the pencil for each input lets you specify its datatype, a FEEL type you must create from the documentation. It’s actually quite easy!

Configuring the operation output is slightly trickier. Notice in the API documentation the button Test Endpoint in the center panel. Clicking that executes the operation with the default parameters shown, in this case a string listing three stocks.

The documentation page then shows the resulting output, as you see here: an outer element quoteResponse with two components, an array of result – one per stock – and error. Expanding result, we see it contains 97 components – wow, that’s quite a lot – of which we only need three: quoteType, bid, and ask. But we do not need to construct a FEEL type with all 97 components. As long as we respect the basic structure, we can omit the 94 result components we don’t need! So we can create FEEL type tTradeQuote, a structure with enclosing element quoteResponse containing a collection of result plus error, and the type of result contains just the three components we need. When the service is invoked, the json response message is mapped automatically to the FEEL tTradeQuote structure.
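
Concretely, the mapped FEEL value of the task output then has a shape like the following, where the numbers are placeholder values rather than real quotes:

{
  quoteResponse: {
    result: [ { quoteType: "EQUITY", bid: 101.25, ask: 101.40 } ],
    error: null
  }
}

so a downstream expression can pick out, say, the ask price of the first stock as quotes.quoteResponse.result[1].ask.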

To incorporate this operation in your Business Automation model, simply click the gear icon on a Service task and bind the task to the Operation Library entry. The service inputs – symbols, region, and x-rapidapi-host – become the task inputs, and the service output quotes becomes the task output. Then, using the Low-Code data mapping explained in a previous post, you just need to map data objects in the process to the task inputs and output. When you Cloud Publish the process, you will need to enter your API key. And that’s it!

So just because your API provider doesn’t offer an OpenAPI file, don’t think it’s inaccessible to Low-Code Business Automation. It’s actually quite easy.

Follow Bruce Silver on Method & Style.

Blog

Data Flow in Business Automation

By Bruce Silver

Read Time: 4 Minutes

In BPMN Method and Style, which deals with non-executable models, we don’t worry about process data. It’s left out of the models entirely. We just pretend that any data produced or received by process tasks is available downstream. In Business Automation, i.e., executable BPMN, that’s not the case.

Process data is not pervasive, available automatically wherever it’s needed. Instead it “flows”, point to point, between process variables shown in the diagram and process tasks. In fact, data flow is what makes the process executable. With most BPMN tools, defining the data flow requires programming, but Trisotech borrows boxed expressions and FEEL from DMN to make data flow – and by extension, executable process design – Low-Code. This post explains how that works.

First, we need to distinguish process variables from task variables. Process variables are shown in the BPMN diagram, as data inputs, data outputs, and regular data objects. A data input signifies data received by the process from outside. When we deploy an executable process as a cloud service, a data input is an input parameter of the API call that triggers the process. Similarly, a data output is an output parameter of the API call, the response. A data object is a process variable created and populated within a running process.

Task variables are not shown in the process diagram. They are properties of the tasks, and the BPMN spec does not specify how to display them to modelers. What is shown in the diagram are data associations, dotted arrows linking process variables to and from process tasks. A data input association from a process variable to a task means a mapping exists between that variable and a task input. Similarly, a data output association from a task to a process variable means a mapping exists between a task output and the process variable. The internal logic of each task, say a decision service or external cloud service, references the task variables. It makes no reference to process variables.

On the Trisotech platform, the task inputs and outputs are shown in the Data Mapping boxed expressions in the task configuration. Task variable definitions depend on the task type. For Service tasks, Decision tasks, and Call Activities, they are defined by the called element, not by the calling task. For Script and User tasks, they are defined within the process model, in fact within the Data Mapping configuration.

A Service task calls a REST service operation. The Operation Library in the BPMN tool provides a catalog of service operations available to the Service tasks in the process. Operation Library entries are typically created by the tool automatically by importing an OpenAPI or OData file from the service provider, but they can also be created manually from the API documentation. Each entry in the Operation Library specifies, among other details, the service input and output parameters. When you bind a Service task to an entry in the Operation Library, the Service task inputs are the service’s input parameters and the Service task outputs are the service’s output parameters.

A Decision task calls a DMN decision service on the Trisotech platform. In the DMN model, you create the decision service, specifying the output decisions, which become the service outputs, and the service inputs, which could be input data or supporting decisions. When you bind a Decision task to the decision service, the service inputs become the Decision task inputs and the service outputs become the Decision task outputs.

A Call Activity calls an executable process on the Trisotech platform. When you bind the Call Activity to the called process, the data inputs of the called process become the task inputs of the Call Activity, and the data outputs of the called process become the task outputs of the Call Activity.

With Service tasks, Decision tasks, and Call Activities, the task inputs and outputs are given. They cannot be changed in the calling process. It is not necessary for process variables in the calling process to have the same name or data type as the task variables they connect to. All that is required is that a mapping can be defined between them.

Let’s look at some examples. Below we see the DRD of a decision model for automated approval of an employee vacation request, and below that the decision service definition, Approve Vacation Request.

In BPMN, the Decision task Approve Vacation Request is bound to this decision service.

Here you see the process variables Vacation Request (data input), Employee Record (data object), and Requested Days (data object). Vacation Request and Employee Record have data input associations to the Decision task. There is a data output association to Requested Days and two more to variables not seen in this view.

The Data Mapping dialog of Approve Vacation Request shows the mapping between the process variables and the task inputs and outputs. Remember, with Decision tasks, the task inputs and outputs are determined already, as the input and output parameters of the decision service. In the Input Mapping below, the task inputs are shown in the Mapping section, labeled Inner data inputs. The process variables with data input associations are listed at the top, under Context. The Mapping expression is a FEEL expression of those process variables. In this case, it is very simple, the identity mapping, but that is not always the case.

In the Output Mapping below, the task outputs are shown in the Context section, and the process variables in the first column of the Mapping section. The Output Mapping is also quite simple, except note that the task output Vacation Approval is mapped to two different process variables, Approval and Approval out. That is because a process data output cannot be the source of an incoming data association to another task, so sometimes you have to duplicate the process variable.

Not all data mappings are so simple. Let’s look at the Service task Insert Trade in the process model Execute Trade, below.

Insert Trade is bound to the OData operation Create trades, which inserts a database record. Its Operation Library entry specifies the task input parameter trade, type NS.trade, shown below.

Actually, because the ID value is generated by the database, we need to exclude it from the Input Mapping. Also, the process variable trade uses a FEEL date and time value for Timestamp instead of the Unix timestamp used in NS.trade. We can accommodate these details by using a context in the Input Mapping instead of a simple literal expression. It looks like this:

The context allows each component of the task input trade to be mapped individually from the process variables trade and Quote (which provides the price at the time of trade execution). As we saw with the Decision task, the Context section lists the available process variables, and the first column of the Mapping section lists the task inputs. Note also that the Price mapping uses the conditional boxed expression for if..then..else, new in DMN 1.4, designed to make complex literal expressions more business-friendly.
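
As a sketch, assuming the trade record has a Type component with values "Buy" and "Sell" and that Quote carries bid and ask prices, the Price mapping is equivalent to the literal FEEL expression

if trade.Type = "Buy" then Quote.ask else Quote.bid

with the conditional boxed expression simply presenting the if, then, and else parts as separate rows.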

Finally, let’s look at the Script task in this process, Validate Trade Value. Script and User tasks differ from the other task types in that the modeler actually defines the task inputs and outputs in the Data Mapping. On the Trisotech platform, a Script task has a single task output with a value determined by a single FEEL literal expression. As you can see in the process diagram, the process variables with incoming data associations to Validate Trade Value are trade, Quote, and portfolio, and the task output is passed to the variable Error list. This task checks to see if the value of a Buy trade exceeds the portfolio Cash balance or the number of shares in a Sell trade exceeds the portfolio share holdings.

In the Script task Input Mapping, the modeler defines the names and types of the task inputs in the first column of the Mapping section. The mappings to those task inputs here are more complex literal expressions. The script literal expression references those task inputs, not the process variables. User tasks work the same way, except they are not limited to a single task output.
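
A sketch of that check, assuming the Input Mapping defines task inputs named trade type, trade value, trade shares, Cash balance, and Shares held (all hypothetical names):

if trade type = "Buy" and trade value > Cash balance
    then ["Trade value exceeds the portfolio Cash balance"]
else if trade type = "Sell" and trade shares > Shares held
    then ["Trade exceeds the shares held in the portfolio"]
else []

The resulting list is passed to the process variable Error list; an empty list means the trade passes validation.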

Hopefully you see now how data flow turns a non-executable process into an executable one. The triggering API call populates one or more process data inputs, from which data is passed via Input Mapping to task inputs. The resulting task outputs are mapped to other process variables (data objects), and so forth, until the API response is returned from the process data output. On the Trisotech platform, none of this requires programming. It’s just a matter of drawing the data associations and filling in the Data Mapping tables using FEEL.

Follow Bruce Silver on Method & Style.

Blog

Inspect Process Data with Attended Tasks

By Bruce Silver

Read Time: 3 Minutes

Debugging executable processes can be a challenge because, unlike DMN models, you cannot test them in the Modeler. You need to compile and deploy them first, and problems are often reported as runtime errors.

Until fairly recently, to zero in on the problem you needed to isolate it in a small fragment of the process by saving various fragments as test processes, compiling and deploying them, and running them with reconstructed input values. But earlier this year, Trisotech made it a lot easier with a new feature called Attended tasks. Attended tasks are a Trisotech extension, not part of the BPMN spec, but very useful in practice. This post shows how they work.

An automated task, such as a Decision task or Service task, that is marked as Attended allows its input and/or output data to be inspected and possibly modified by a user, either before or after the task runs. Attended tasks originally were introduced not for debugging but to support customers in Healthcare. While highly interested in technology and automation, Healthcare providers often want to confirm the values provided as inputs to automated tasks in order to correct them for the latest values observed, as some of the data fetched may come from an earlier encounter or may need to be adjusted based on the latest live observations. On the output side, providers want to be able to overwrite an automated decision for various reasons. Based on their own experience, they may disagree with the automated results, or may believe that certain details about the case were not properly considered in the automated result. Trisotech was seeking a way to ensure that a clinical pathway could remain the same for review, whether or not pre and/or post validations were required. By using Attended tasks, the same clinical model can introduce or remove data validation without having to modify the model for all possibilities of the Healthcare context.

My own experience with Attended tasks is not in Healthcare but is focused more generally on the debugging problem. Marking an automated task as Attended acts as a “breakpoint” in the automation where the user, typically the process designer in this case, not the end user, can inspect process data at design time to better understand unexpected results. To illustrate, let’s look at a variant of the Vacation Request process in the Trisotech EU-Rent repository.

Employee Vacation Days is a table from the HR system, listing each employee with their accrued vacation days. Data input Vacation Request specifies the requesting employee’s name, id, and proposed vacation start and return dates. Script task Fetch Vacation Information extracts the employee’s record from Employee Vacation Days and passes it to the Decision task Approve Vacation Request, which could either approve the request, refuse it, or refer it for human decision. That’s pretty simple. Nothing much could go wrong here, right? But when we test it from the Service Library, we see a problem. The Decision task Approve Vacation Request failed with a runtime error.

We can mark that Decision task as Attended to take a look at what is going on. When we do that, we see this dialog:

This dialog lets us specify where we want to put the breakpoint: before or after the input data mapping, and/or before or after the output data mapping. Since this task is failing, we want to inspect the data before the input data mapping, which is the output of our Script task. If we want, we can enter the name of a process user as a Resource, who will receive the data in an email and can return corrected values. You would use that when the Attended task is a permanent feature of the process, as in the Healthcare context. But when using Attended tasks simply for debugging, we can leave this Resources field blank, as the user will see the data in the Service Library test interface. So our configured Attended task dialog looks like this:

Now in the process diagram, the task is marked as Attended with a little checkbox.

When we Save, Cloud Publish, and test, we see this:

The process has paused at the Decision task input mapping. We can inspect the data input and then click the link Verify data inputs.

Now we see the cause of the problem. The data object Current Vacation Status – the output of our Script task – is null. Even though this example is simpler than most you will encounter, the root cause of the problem – some data object is null – is far and away the most common cause of BPMN runtime errors. In real-world process models, it usually boils down to pinpointing which data object has the problem, and Attended tasks let you do that.

The Script task is just a simple filter of the Employee Vacation Days table. If no entry in that table matches the employeeid value of the Vacation Request, Current Vacation Status will be null, and that’s exactly what happened. Most likely the employee entered the wrong value for his employeeid. That certainly could happen, even though our original model did not consider the possibility. But we still need to fix our process so that we don’t get a runtime error in that case. To do that, we can just test for null in a gateway and exit to the end state “Employee not found”.
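
The gate leading to that end state just needs a boolean FEEL condition such as

Current Vacation Status = null

with the default gate continuing to the Decision task when the record is found.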

Now the process runs with no runtime errors even when Vacation Request.employeeid is not found in the HR system. Thorough testing of executable BPMN models is a must, as you cannot assume that only “good” input data will be provided. If you haven’t considered some possibility, no matter how unlikely it seems, sooner or later it will occur. When that happens, Attended tasks make it a lot easier to track down and fix the problem.

Follow Bruce Silver on Method & Style.

Blog

A Methodology for Low-Code Business Automation with BPMN and DMN

By Bruce Silver

Read Time: 5 Minutes

In recent posts, I have explained why anyone who can create rich spreadsheet models using Excel formulas can learn to turn those into Low-Code Business Automation services on the Trisotech platform, using BPMN and DMN, and why FEEL and boxed expressions are actually more business-friendly than PowerFX, the new name for Excel’s formula language. That’s the why. Now let’s discuss the how.

In recent client engagements, I’ve noticed a consistent pattern: A subject-matter expert has a great new product idea, which he or she can demonstrate in Excel. A business event, represented by a new row in Excel Table 1, generates some outcome represented by new or updated rows in Tables 2, 3, and so on. The unique IP is encapsulated in the business logic of that outcome. The challenge is to take those Excel examples and generalize the logic using BPMN and DMN, and replace Excel with a real database. This constitutes a Business Automation Service.

Business Automation Service

The Basic Pattern

In BPMN terms, a Business Automation service typically looks like this:

Each service is modeled as a short-running BPMN process, composed of Service tasks, Decision tasks, and Call Activities representing other short-running processes. The service is triggered by a business event, represented by a client API call. The first step is validating the business event. This typically involves some decision logic on the business event and some retrieved existing data. If valid, one or more Service tasks retrieve additional database records, followed by a Decision task that implements the subject matter expert’s business logic in DMN. In this methodology, DMN is used for all modeler-defined business logic, not just that related to decisions. The result of that business logic is saved in new or updated database table records, and selected portions of it may be returned in the service response. It’s a simple pattern, but fits a wide variety of circumstances, from enterprise applications to TurboTax on the desktop.

Implementation Methodology

Over the past year, I’ve developed a methodology for implementing that pattern.

It goes like this:

The first step is whiteboarding the business logic in Excel. In many cases, the subject matter expert has already done this! It’s important to create a full set of example business events, and to use Excel formulas – with references to cells in the event record – instead of hard-coded values. If you can do that, you can learn to do the rest. The subject matter expert must be able to verify the correctness of the outcome values for each business event.

Now, referring to the BPMN diagram, we start in the middle, with the Decision task labeled here Calculate new/updated table records. A BPMN Decision task – the spec uses the archaic term businessRule task – invokes a DMN decision service. So following whiteboarding, the next step in the methodology is creating a DMN model that generalizes the whiteboard business logic. I prefer to put all the business logic in a single DMN model with multiple output decisions in the resulting decision service, but some clients prefer to break it up into multiple Decision tasks each invoking a simpler decision model. That’s a little more work but easier for some stakeholders to understand. This DMN model is in one sense the hardest part of the project, although debugging DMN is today a lot easier than debugging BPMN, since you can test the DMN model within the modeling environment. The subject matter expert must confirm that the DMN model matches the whiteboard result for all test cases.

On the Trisotech platform, Decision tasks are synchronized to their target DMN service, so that any changes in the DMN logic are automatically reflected in the BPMN, without the need to recompile and deploy the decision service. That’s a big timesaver. Decision tasks also automatically import to the process all FEEL datatypes used in the DMN model, another major convenience.

Next we go back to the beginning of the process and configure each step. The key difference between executable BPMN models and the descriptive models we use in BPMN Method and Style is the data flow. Process variables, depicted in the BPMN diagram as data objects, are mapped to and from the inputs and outputs of the various tasks using dotted connectors called data associations. Trisotech had the brilliant idea of borrowing FEEL and boxed expressions from DMN to business-enable BPMN. In particular, the data mappings just mentioned are modeled as FEEL boxed expressions, sometimes a simple literal expression but possibly a context or decision table. If you know DMN, executable BPMN becomes straightforward. FEEL is also used in gateway logic. Unlike Method and Style, where the gate labels express the gateway logic, in executable BPMN they are just annotations. The actual gateway logic is a boolean FEEL expression attached to each gate.
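As a rough programmer's analogue of those two FEEL usages, the sketch below shows a data-input mapping and a gateway condition using invented variable names. In the Trisotech tooling these are boxed expressions and boolean gate conditions, not Python, but the shape of the logic is the same.

```python
# Invented process variables, for illustration only.
process_variables = {
    "request": {"customer_id": "C-42", "amount": 1200.0},
    "account": {"credit_limit": 5000.0, "balance": 4100.0},
}

# Data association: map process variables into a task's input structure
# (modeled as a boxed expression on the data-input mapping in the tool).
task_input = {
    "CustomerId": process_variables["request"]["customer_id"],
    "Amount":     process_variables["request"]["amount"],
}

# Gateway logic: a boolean expression attached to the gate; the gate label
# in the diagram is just an annotation.
within_limit = (process_variables["account"]["balance"]
                + process_variables["request"]["amount"]
                <= process_variables["account"]["credit_limit"])
print(task_input, within_limit)
```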

Database operations and interactions with external data use Service tasks. In Trisotech, a Service task executes a REST service operation, mapping process data to the operation input parameters and then mapping the service output to other process variables, again using FEEL and boxed expressions. The BPMN model’s Operation Library is a catalogue of service operations available to Service tasks. Entries in this catalogue come either from importing OpenAPI or OData files obtained from the service provider, or from creating them manually based on the provider’s documentation. It sounds difficult, but it’s actually straightforward, and business users can learn to do it.
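Conceptually, a Service task configured this way behaves like the small function below: it calls one operation from the Operation Library, mapping process data into the request and the response back into process variables. The endpoint URL and parameter name are hypothetical placeholders, not a real provider's API.

```python
import requests

def get_customer_record(customer_id: str) -> dict:
    """Sketch of a Service task call; the endpoint and parameter are invented."""
    # Input mapping: process variable -> operation parameter
    response = requests.get(
        "https://api.example.com/customers",   # hypothetical endpoint from the Operation Library
        params={"id": customer_id},
        timeout=10,
    )
    response.raise_for_status()
    # Output mapping: operation result -> process variable
    return response.json()
```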

Example

A Stock Trading App

Here is an example to illustrate the methodology. Suppose we want to create a Stock Trading App that allows users to buy and sell stocks, and maintains the portfolio value and performance using three database tables: Trade, a record of each Buy or Sell transaction; Position Balance, a record of the net open and closed position on each stock traded plus the Cash balance; and Portfolio, a summary of the Position Balance table. In Excel, a subject matter expert models those three tables. Each trade adds a row to the Trade table, which in turn adds two rows to the Position Balance table – one for the traded stock and one for Cash, with total account value and performance summarized in the Portfolio table. The only part of this that requires subject matter expertise is calculating the profit or loss when a stock is sold. Once that is captured in the Excel whiteboard, translating it to DMN is straightforward. The BPMN Decision task executing this DMN service has the added benefit of automatically including the FEEL datatypes of the three tables in the process model.
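The heart of that subject-matter logic is the realized profit or loss on a sell. The sketch below uses an average-cost calculation purely as an assumed example; the SME's whiteboard defines the actual rule, which is then expressed in DMN as one of the decision service's outputs alongside the new Position Balance and Portfolio rows.

```python
def realized_pnl(position_qty: float, position_cost: float,
                 sell_qty: float, sell_price: float) -> float:
    """Profit or loss on a sell, assuming an average-cost basis (illustrative only)."""
    if position_qty <= 0 or sell_qty > position_qty:
        raise ValueError("cannot sell more shares than are held")
    avg_cost = position_cost / position_qty
    return sell_qty * (sell_price - avg_cost)

# 100 shares held at a total cost of 1,000 (average 10.00); sell 40 at 12.50
print(realized_pnl(100, 1_000.0, 40, 12.50))   # 100.0 profit
```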

Next we need to address data validation of a proposed trade. In a previous post, I explained how to do this in DMN. We need to ensure that all of the required data elements are present, and that they are of the proper type and allowed value. If we do not allow trading on margin or shorting the stock, we need to retrieve the account Portfolio record and make sure there is sufficient Cash to cover a buy trade or sufficient shares in the Portfolio to cover a sell trade. A full-featured trading app requires streaming real-time stock quotes, but in a simplified version you could use real-time stock quotes on demand via the Yahoo Finance API, which is free. As with many public APIs, using it requires manually creating the Operation Library entry from service provider documentation, but this is not difficult. Any problems detected generate an error message, and a gateway prevents the trade from continuing.
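In code terms, the validation decision looks roughly like the sketch below, assuming no margin trading or short selling. The field names and the shape of the portfolio record are invented; in the methodology this logic lives in a DMN decision service invoked by a Decision task, and the returned error list drives the gateway.

```python
def validate_trade(trade: dict, portfolio: dict, quote: float) -> list[str]:
    """Return a list of problems; an empty list means the trade may proceed."""
    errors = []
    for field in ("symbol", "side", "quantity"):
        if field not in trade:
            errors.append(f"missing required field: {field}")
    if errors:
        return errors
    if trade["side"] not in ("Buy", "Sell"):
        errors.append("side must be Buy or Sell")
    if not isinstance(trade["quantity"], int) or trade["quantity"] <= 0:
        errors.append("quantity must be a positive whole number")
    if errors:
        return errors
    if trade["side"] == "Buy" and trade["quantity"] * quote > portfolio.get("cash", 0.0):
        errors.append("insufficient Cash to cover this purchase")
    if trade["side"] == "Sell" and trade["quantity"] > portfolio.get("shares", {}).get(trade["symbol"], 0):
        errors.append("insufficient shares to cover this sale")
    return errors

print(validate_trade({"symbol": "XYZ", "side": "Buy", "quantity": 30},
                     {"cash": 500.0, "shares": {}}, quote=25.0))
# ['insufficient Cash to cover this purchase']
```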

Also as explained in previous posts, we can use the OData standard to provide Low-Code APIs to the database tables. OData relies on a data gateway that translates between the API call and the native protocol of each database, exposing all basic database operations as cloud REST calls. Trisotech does not provide this gateway as part of its OData integration, but we can use a commercial product for that purpose, Skyvia Connect. The Skyvia endpoint exposes Create, Retrieve, Update, and Delete operations on the three database tables, and Trisotech lets us map process variables to and from the operation parameters using boxed expressions and FEEL.
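The calls that flow through that gateway follow ordinary OData conventions, as in the sketch below. The service root URL and the entity and field names are hypothetical (a real Skyvia endpoint is account-specific), but the $filter query option and the JSON payloads are standard OData.

```python
import requests

ODATA_ROOT = "https://odata.example.com/stocktrading"   # hypothetical gateway endpoint

def get_position_balances(symbol: str) -> list[dict]:
    """Retrieve: OData GET with a $filter query option."""
    r = requests.get(f"{ODATA_ROOT}/PositionBalance",
                     params={"$filter": f"Symbol eq '{symbol}'"}, timeout=10)
    r.raise_for_status()
    return r.json().get("value", [])

def create_trade(trade: dict) -> dict:
    """Create: OData POST of a JSON entity to the entity set."""
    r = requests.post(f"{ODATA_ROOT}/Trade", json=trade, timeout=10)
    r.raise_for_status()
    return r.json()
```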

While this Low-Code methodology is straightforward, becoming adept at it requires practice. Executable BPMN, even though it’s Low-Code, requires getting every detail right in order to avoid runtime errors. Debugging executable BPMN is an order of magnitude harder than DMN, because you need to deploy the process in order to test it, and instead of friendly validation errors such as you get in the DMN Modeler, you are usually confronted with a cryptic runtime error message. (Note: Trisotech is currently working on making this much easier.) So the modeler still must spend time on testing, debugging, and data validation up front – things developers take for granted but that try the patience of most business users.

Using this methodology, it is possible for non-programmers to create a working cloud-based database app based on BPMN and DMN, something most people would consider impossible.

Learning the Methodology

It’s one thing to read about the methodology, quite another to become proficient at it. It takes attention to the small details – many of which are not obvious from the spec or tool documentation – and repeated practice. To that end, I have developed a new course, Low-Code Business Automation with BPMN and DMN, where each student builds their own copy of the above-mentioned Stock Trading App. Students who successfully build the app and make trades with it are recognized with certification. Students need to know DMN already, including FEEL and boxed expressions, but the course teaches all the rest: data flow and mapping, the Operation Library, OpenAPI and OData, and configuration of the various BPMN task types for execution.

The course is now in beta. I expect some enhancements to the Trisotech platform that will be incorporated in the GA version around the end of the year. In the meantime, you can take a look at the Introduction and Overview here. Please contact me if you are interested in this course or have questions about it.

Follow Bruce Silver on Method & Style.


What Does a Workflow Management System Do?

Workflow Automation software executes computer-driven flows (processes) of human and system tasks, documents, and information across work activities in accordance with flow paths based on business decisions. Workflow Management software also ensures processes – both internally across organizational boundaries, and externally for Customer/Client interactions – are optimized, repeatable and auditable while still being quick and easy to change.

While Robotic Process Automation (RPA) has been making inroads in automating tasks within processes, Workflow Automation software is far more powerful than RPA. However, the two are both compatible and synergistic. RPA bots can automate individual tasks within a business process, but they typically can’t connect those tasks together. Good workflow engines allow RPA tasks to be included as part of a process.

Workflow Design software and Workflow Process software are being effectively used in practically every industry, frequently serving as standard operating procedures software. Digital Workflow software can be especially effective and valuable in two industries: Healthcare and Financial Services.

Workflow Automation Software in Healthcare


The Healthcare industry is large and diverse. While the focus in healthcare is usually on the front-line workers, the caregivers and providers, there are also multitudes of back-office workers.

Large-scale healthcare organizations are often in linked businesses, networking physicians and other providers, acute care hospitals, long term care facilities and insurance organizations. Using Trisotech’s Business Process Management software all these organizations can create secure standards-based software to help them store and retrieve data as well as standardize and automate workflows and decision making across all parts of their business.

Most healthcare providers want standard processes and decision methodologies that are centralized, easy to understand, automated through workflow engines, and quickly changed by SMEs without the need for heavy IT involvement. This, in turn, frees up IT resources to work on centralizing, consolidating and making the latest technologies available across organizational silos. This includes providing technologies to support standards like FHIR® for data storage and retrieval, and Clinical Quality Language (CQL) and CDS Hooks for clinical decision support in real time at the point of care. Trisotech’s Workflow Design software and Workflow Automation software support all these standards and allow automated processes to be changed quickly and easily as regulatory requirements and new medications and procedures evolve.

Using Trisotech’s workflow management software, healthcare organizations can develop evidence-based workflow and decision models that are human-readable, machine automatable, and embeddable in most medical encounter systems. While healthcare organizations can and do create their own automatable models, Trisotech also provides pre-built models including nearly 1,000 free customizable care pathways, clinical guidelines, and healthcare decision calculators in the BPM+ Health standard. This way, Trisotech’s process management software enables practitioners to stay updated, accelerate solution adoption and ensure greater consistency in care execution. Additionally, the comprehensive visual models offered by the Workflow Design software are readable by IT, providers and business people, serving as a guideline specification, the guideline logic, the guideline documentation, and the automation code for the workflow engine – all in a single visual artifact!


Moving to evidence-based practices is very desirable but also often difficult for healthcare organizations. Subject matter experts (SMEs) and clinicians constantly work with IT to translate large volumes of regulatory, medication and procedure information into organizational policy. Keeping that information up to date with existing, often antiquated systems is a constant and expensive burden on SMEs. Trisotech’s Workflow Design software and Workflow Automation software solutions allow healthcare organizations to easily define and deploy evidence-based best practices that offer a consolidated view of the interactions and multiple touchpoints with patients, care pathways, and workflows at the point of care.

Back-office tasks such as pre-authorization, medical necessity determinations and off-label drug prescription approvals have a huge bearing on patient experiences. They are also time-consuming, expensive and highly manual activities. Utilizing Workflow Process software for these types of activities expedites decisions for waiting patients, allows for services to be rendered sooner, and increases ROI. Using Trisotech’s digital workflows paves the way for improving both the perceived and real quality measurements for any healthcare organization.

Healthcare Payers and Insurers are also leveraging Trisotech’s process management software to automate processes like claims processing and pre-authorization determinations, leading to more efficient decision making and significant cost savings. Trisotech’s Automated Workflow software can ensure the correct information is collected at the outset, help pay claims rapidly and organize case management for disputed or confusing exception claims. The workflow software also helps payers keep complete records in case of an audit. Health plan members benefit from a better experience as they can access the care they need with minimal delays and without surprises at the time of claim payments or billing.


Workflow Automation Software in Financial Services


Financial services make up one of the economy’s most significant and influential sectors. The sector comprises Banking Services – including Retail Banking, Commercial Banking and Investment Banking – as well as Investment Services, Insurance Services, and Tax and Accounting Services.

These businesses are composed of various financial firms including banks, finance companies, lenders, investment houses, real estate brokers, insurance companies, etc. Trisotech has customers using its Workflow Design software and Workflow Automation software in all of these businesses. The typical description of this sector is Financial Services, but it is really made up of both services and Financial Products like mortgages, investments, credit cards, insurance policies, etc. This means that it is not only a business-to-business (B2B) sector but also has a huge business-to-consumer (B2C) component. Marketing, selling and servicing these products is fertile ground for Trisotech’s Workflow Automation software.

Various forms of proprietary financial software have been in use for decades, and the adoption of those early technologies now presents the industry with an increasing risk in the form of technical debt. Old technologies are being disrupted by newer cloud-based offerings which include standards-based business process management software that is far better suited to meet the rapidly changing personalization, self-service, risk and compliance needs of today’s marketplace. Indeed, improving client service by automating policies, accounts, investments, claims and more using digital workflows is a cornerstone of the digital transformation efforts in financial services. To simplify the complex process of digital transformation and to streamline their processes and decisions, financial enterprises should render organizational workflows and business decision logic into international standards-based visual diagrams and documents. When using Trisotech’s Digital Enterprise Suite, not only can those visual diagrams be shared by business people and technical people, they can also be automated by Trisotech’s workflow engine directly from those diagrams.


Technology replacement is also very important because most organizations have their knowledge, policies and procedures embedded in large complex programs maintained by IT programming staffs. Today, “old school” practices like SMEs maintaining Excel spreadsheets for policies and rules (regulatory and organizational) that then must be “translated” by IT into traditional programming languages and proprietary rules systems are giving way to visual models incorporating standardized “decision services.” Using Trisotech’s Digital Enterprise Suite, modern workflows and decision services can be built and maintained by SMEs and turned into automated business processes by clicking a single button. This, in turn, frees up IT resources to work on centralizing, consolidating and making more current technologies available across organizational silos.

Challenges

Primary challenges facing the financial industry today include rapid and often massive regulatory change; privacy, security and fraud prevention; keeping up with or surpassing the competition by exceeding customer expectations; and replacing old technologies with emerging ones. Trisotech’s workflow software is already recognized as the reference implementation for many international standards such as BPMN, CMMN, and DMN. In the financial industry, Trisotech is rapidly taking a leadership position with its implementation of the Mortgage Industry Standards Maintenance Organization (MISMO) standard and support of other standards like the Financial Industry Business Ontology (FIBO), common database connections and multiple AI techniques.

For Fintech organizations, Trisotech’s Workflow Design software and Workflow Automation software accelerate digital transformation by providing the ability to easily define, deploy and maintain improved decision-making and workflows supported by artificial intelligence and machine learning in a graphical environment. While Trisotech’s Digital Enterprise Suite is being used by customers for everything from retail credit card processing to insurance claims processing, one area, underwriting, has been of particularly high value to customers.

Underwriting is the process by which an institution takes on financial risk – typically associated with insurance, loans or investments. Underwriting means assessing the degree of risk for each applicant prior to assuming that risk. That assessment allows organizations to set fair borrowing rates for loans, establish appropriate premiums to cover the cost of insuring policyholders, and create a market for securities by pricing investment risk.

Underwriters evaluate loans, particularly mortgages, to determine the likelihood that a borrower will pay as promised and that enough collateral is available in the event of default. In the case of insurance, underwriters seek to assess a policyholder’s financial strength, health and other factors and to spread the potential risk among as many people as possible. Underwriting securities determines the underlying value of the company compared to the risk of funding its capital acquisition events such as IPOs. All of these activities lend themselves to digital workflow software solutions.

Consider, for example, mortgage loan origination. By utilizing Trisotech’s Workflow Design software, customers are able to build standard operating procedures software for loan origination that encompasses the organization’s specific underwriting policies. These workflows can be created and maintained by underwriting experts while complex mathematical models, AI, and privacy and security requirements are taken care of by IT personnel. Trisotech provides for all of this in a single common visual model understandable by both the business people and the IT personnel, while still maintaining separation of concerns through granular permissions. Once the visual model is complete, a single button click can automate the workflow and make it available to Trisotech’s workflow engine, part of the Workflow Process software.


What Is Workflow in Software?


The unique capabilities of Trisotech’s Automated Workflow software are rooted in its ability to simplify the complex process of digital transformation for all. In order to streamline their processes and decisions, enterprises must first know what those processes and decisions are. Discovering and validating them is the responsibility of business leadership, not solely IT. Thus, a must-do activity of digital transformation is rendering organizational workflows and business decision logic into international standards-based visual diagrams and documents. Then, they can be shared by business people and technical people and automated directly from those visual diagrams.

Trisotech calls this process Business Services Automation. The Trisotech offering includes visual Workflow Design software and visual Workflow Automation software.

Trisotech Workflow Automation Solutions


Workflow Design software

The Workflow Design software includes workflow automation software (BPMN), decision automation software (DMN) and case management automation software (CMMN) along with a larger suite of application tools that facilitate workflow discovery, promote organizational standards use and support workflow design life cycles. The software also supports AI and RPA integrations, offers full API support, and provides configuration and management of users, permissions and models.

Digital Automation Suite

Workflow Automation software

Trisotech’s Workflow Automation software includes workflow engines that can directly execute the business process management models. These workflow engines are accessed through RESTful APIs, provide the highest levels of privacy and security, and can be containerized and thus scaled on demand across a wide variety of public and private cloud configurations, including high-availability configurations.

Trisotech’s Workflow Automation software also provides a rich visual configuration interface supporting server environment configuration, audit logs, debugging tools and management of running workflow instances. Trisotech’s digital workflow software is high in value, low in cost and backed by world-class technical support.

Put succinctly, Trisotech’s Digital Automation Suite (DAS) is an API-first, container-based scalable cloud infrastructure for business automation. It enables complex automation of business workflows, cases and decisions in a simple, integrated run-time environment. It allows organizations to leverage business automation as a source of competitive advantage, via high performance, flexible, and linearly scalable automation engines. The Digital Automation Suite also offers an outcome-driven orchestration of AI and other emerging technologies using international standards and a microservices architecture.



From Project to Program: Expanding Across the Organization

By Sandy Kemsley

Read Time: 6 Minutes

In the first two parts of the “From Project to Program” series, I covered how to do a post-implementation review of your first process automation project, and how to pick the next process automation project after you had the first one under your belt.

In this third and final part of the series, I’ll look at how to expand what you’ve done in the first couple of projects to an organization-wide program of process automation and improvement. There are a lot of pointers in here that I’ve written about in greater depth in earlier posts, so I’ll be referring back to those as I go.

There are a number of keys to success once you get beyond those first couple of projects:

Alignment

The first projects are often a bit of a learning experience, where you’re learning how to use the tools and techniques of improving and automating processes. Often these can be departmental in nature, and you may not have had a chance to look at them in the larger context of your business architecture and corporate goals. If you go back to the posts that I did on goal alignment and end-to-end processes, they will get you started with vertical alignment between corporate goals and departmental KPIs, and horizontal alignment with end-to-end processes across business units to ensure customer satisfaction.

Start to align the KPIs of the processes that you’ve already automated with the corporate goals in your business architecture: are you performing activities or rewarding workers in ways that don’t align with the top-level objectives? Reconsider what you’re doing in those processes based on that alignment, and adjust accordingly. Similarly, position the process that you’ve automated in the end-to-end process model for your entire value chain: does it serve the needs of the end-to-end process, or is it just locally optimized? Even if the other parts of the process aren’t automated yet, you want to be prepared to optimize the entire chain, not just your first projects’ processes. Keep in mind that you may eventually be designing and implementing loosely-coupled processes rather than a single end-to-end orchestration.

Measurement

The post-implementation review is a great place to get started with measuring the success (and failure) of your first process automation projects. To expand these concepts across the organization, you need to move from a one-time review of a project to ongoing monitoring and measurement of your processes. In the past, this was often done by just monitoring what was happening in your process automation system, but there can be an entire alphabet soup of different systems involved in your projects: business process management (BPMS), decision management (DM), case management (CM), custom/legacy systems, enterprise resource planning (ERP), customer relationship management (CRM) and more. Measuring your entire process automation will require gathering data from all of the systems involved into a common visualization: an enterprise process dashboard. Both vertical and horizontal alignment are important here, since you need to show the direct vertical alignment from measured KPIs to corporate goals, and show the performance of end-to-end processes.

Beyond the ongoing measurement of the process automation performance, you also need to consider the impacts to the organization on a larger scale. For example, how are automation and improved productivity changing your workforce, both in terms of skills and management? How is a more efficient end-to-end process shortening your time to market? Many of these measurements can only be considered after you have had your processes automated for some period of time and can look back on weeks or months of data.

Sharing Knowledge

Sharing ideas and technology is where a center of excellence (CoE) comes in, and I’ve previously written about the need for a business automation CoE rather than a separate CoE for each discipline or product. Implementing a CoE helps improve your organization’s process maturity, which in turn improves productivity, flexibility and innovation. I won’t go into everything that you want to include in your CoE, but in short it should include the following:

The point in time when you’re moving from your initial projects to an organization-wide program is the best time to start building your CoE using the resources from those projects.

As your first project matures and some of the earlier resources (such as designers/architects) are not required full-time on the project, you can bootstrap the CoE using those resources. They should document what they have learned in the first project so far, particularly tool usage, best practices and methodologies. As a second project starts, it can use some of the same people as well as the content that they have created to get that project off to a faster start. As the first project ends, more resources can contribute material and knowledge to the CoE, and can move on to the second project. By the time you’ve completed two projects, you will have the core of your CoE repository in place, as well as some of your governance procedures and resources for skills development.

Sharing Costs

Knowledge isn’t the only thing that will be shared as you roll out a process automation program across the enterprise: you also need to consider how to share the costs of the technology and resources. Some people will remain in the CoE for a longer period of time, but provide services to individual projects: this resource usage needs to be measured and accounted for in some way, so that the cost of the CoE is spread across the enterprise rather than borne by just the first two projects that bootstrapped it.

Sharing technology costs used to be a complex calculation with on-premise systems, since you would have to consider costs for hardware, software and data center resources. This is now much easier with cloud-based systems that can measure usage by each project/department, although there will still be overhead for some amount of administration. There may also be on-premise costs for bridging components that work between the cloud-based systems and your legacy systems.

Evangelizing Success

Once you’ve experienced some degree of success with your first projects, you might think that everyone else in the organization will be knocking at your door asking to be next. That’s probably an optimistic view: most organizations have a certain amount of inertia that resists change, with barriers ranging from regulations to technology to culture. In order to roll out process automation across the enterprise, you will need to evangelize to gain new “converts”. This is essentially an internal marketing role, where you communicate the successes of the early projects, and help other parts of the organization to understand what they can expect in terms of benefits as well as roadblocks along the way.

A good evangelist is someone who benefited the most from the initial projects, for example, an operational manager who has a view of both individual team member metrics as well as the departmental overview. Recruiting an internal resource like this to “sell” the concept across the organization is essential, because they will speak from a position of deep understanding: not just an all-positive sales spin, but what they achieved and what had to happen to make it work. They will have negative things to say about the technology and the change management, but if they saw an overall success, then it will be a compelling story.

Documenting an internal case study will be an important tool in communicating success to other parts of the organization. This may be something that one of the product vendors would assist with in exchange for being able to use it publicly, or it could be purely internal. Be sure to include the vertical and horizontal alignment factors, what you learned from your post-implementation review, the continuous improvement benefits of the enterprise process dashboard, and the resources available in the CoE.

Once you have an evangelist (or two) who want to share the information, and a documented case study, you can spread the word across the organization using a road show (virtual or in-person) or open house.

In summary, moving from your initial projects to an organization-wide program of process automation is more than just doing more of the same: you need to consider how to leverage what you’ve already learned to build something that is greater than the sum of the parts.

Follow Sandy on her personal blog Column 2.



From Project to Program: Picking Your Next Project

By Sandy Kemsley

Read Time: 7 Minutes

In my last post, I went through the details of what to look for in a post-implementation review (PIR) after your first intelligent automation project:

Basically, your PIR helps you to figure out what worked and what didn’t work, both in a technology sense and an organizational sense, and provides input into determining your path forward. No one expects your first project to be 100% successful, but you are definitely expected to learn from your mistakes and correct those on subsequent projects.

Next up is to figure out where to go from here.

You may want to build on your first project’s success, or use those best practices for a different business area. But how do you decide?

A good place to start is with the third outcome from your PIR, namely, the alignment of the first project with your core processes and capabilities. This will highlight some of the areas that can also benefit from a similar treatment, and help you identify your second project; usually this falls into one of three categories:

1. Add functionality to the first project.

If you were able to avoid scope creep in the first project and implement a somewhat limited “minimum viable product” – big enough to be relevant, but small enough to be manageable – you probably still have some work that you want to do on that first implementation. This almost always includes integration with other systems, and some degree of automation of manual steps.

Let’s say, for example, that your first project automated the general process within a department and linked to one or two of the more modern systems. Although this would reduce some of the non-value-added manual steps, as well as providing better oversight and analytics about the business area, there are older legacy systems that you did not integrate in the first project. Workers will receive their assigned tasks via the new system, but still need to copy and paste information between that system and the legacy system. The goal is not to eliminate workers from a process if they are adding value, but to reduce the amount of error-prone “swivel-chair integration” that they are doing by automating the integration with all of the systems involved. This leaves the workers free to focus on solving the problems at hand, not on whether they copied and pasted the current information.

Survey the actual users of the system to figure out the best candidates for integration and automation, based on what’s taking them the most time without adding a lot of value. Management may also have some new ideas on what metrics they need in the new processes. Don’t get caught in the trap of just implementing what was specified before you started implementing the first project, since new ways of working and new ideas will emerge. It’s also important not to try to do everything at once: your goal should be to have ever-evolving processes across your organization, not a one-and-done implementation.

Often the additional integration requires input from resources that were not used on the first project, such as having the mainframe developers or third-party service providers create an API that you can use for automating those interfaces, or adding robotic process automation (RPA) to the technology mix. This can add costs and complexity, so you want to be sure that you are prioritizing the integrations that will add the most value overall.

2. Expand the reach of the first project to cover your end-to-end process.

If your first project took on a small portion of a larger end-to-end process, consider leaving the first implementation as it is, and extend the same capabilities to other participants in the overall process.

In the previous scenario, your first project automated a general departmental process and provided some integration and automation. However, handoffs to related departments were still done manually by emailing spreadsheets. In this case, the greater benefit for the second project may be to extend the automated process to the other departments so that the work is moved between departments in a standardized fashion. Most of the work being done at each step will still be done in the same way, but the work assignment, delivery and management is now automated. This provides improvements in compliance and control, and also provides the infrastructure for adding more functionality to the individual departments in later stages.

The users of the first project can provide input into where you should be expanding the implementation, but also consider input from management, process analysts, the other departments to be impacted, and even customers to identify which expansions could most improve the customer experience.

One of the often-overlooked benefits of this approach is that process analytics from this expanded implementation provide valuable insights into the operation of the end-to-end process. This can help to identify more radical improvements, such as reorganizing work between (not just within) departments, and highlight areas for increased automation.

3. Take on a completely unrelated business area that is in need of improvement.

If there are other areas in your organization that are under stress, you can shift gears and apply the learning from the first project to a completely different area. The first project needs to be in a stable state, providing enough improvement to the business area that there will be little resistance to leaving it in its current state until (possibly) much later.

Moving to a different business area for the second project can be a bit of a fire-fighting approach, and you need to be careful that you’re not just chasing after the department head who is yelling the loudest. Taking a step back and looking at the potential projects in the context of your core end-to-end processes often helps to see which are really the most important for the near term, and which can wait until you have increased the resources available.

In this case, you need to leverage as much as possible from what you learned in the first project: designing with your goals in mind, best practices and design patterns for using the technology, and how to manage change. These can be difficult to achieve without at least the start of a center of excellence (CoE) to collect your best practices and related information.

In general, I would recommend one of the first two approaches before taking on a different business area as your second project, since you really need at least two projects to fully understand the new technology and how it is best applied to your business. However, by increasing your resources at the same time and supporting them with a fledgling CoE, you can start to work on projects for different business areas soon after you start that second project.

That brings me to the topic for my next post: how to use what you learned in that first project, and the beginning of subsequent projects, to bootstrap a CoE. A CoE doesn’t initially need to be a project or department in itself, and the creation of it should not delay projects, but it can start simply with a collection of information that supports growing project teams. Next time, I’ll discuss how to get your CoE kicked off, and how to grow it to support all of your intelligent automation projects.

Follow Sandy on her personal blog Column 2.



From Project to Program: Post-Implementation Review

By Sandy Kemsley

Read Time: 5 Minutes

I’ve covered a lot of business architecture topics in past posts, including vertical and end-to-end alignment, goals and incentives, different model types, process mining, and centers of excellence. In this post, I’m going to start to bring some of those themes together to look at best practices for moving from your first intelligent automation project to a successful organization-wide program.

Let’s assume that you’ve deployed your first project. If you did this right, you used “minimum viable product” principles to choose something that was big enough to be relevant, but small enough to be manageable, and you resisted scope creep along the way. It may include a broad mix of technologies, such as business process management, case management, decision management, robotic process automation, machine learning and more; or maybe you used the project to test out only one or two technologies. It may span multiple departments, or just a single team.

How do you take what you learned from this first project, whether successful or not, and leverage it in subsequent projects?

Moving from a single project to a broader program involves a few steps:
1. Perform a post-implementation review (PIR) on your first project to figure out what worked, and what didn’t work. This is a critical step, but many organizations don’t take the time to do it, or they aren’t sure what should be in a PIR.

2. Decide on the next project(s) that will follow the first one, which could include adding functionality or expanding the reach of the first project, or taking on a completely different business process.

3. Gather the reusable concepts, skills and artifacts from the first project to bootstrap a center of excellence (CoE) that can be leveraged by subsequent projects.

Taken together, these steps can maximize your probability of success with an intelligent automation program, even if the first project wasn’t 100% successful.

For the remainder of this post, I’m going to focus on the first step, the post-implementation review. In my next posts, I’ll dig into how to identify good candidates for subsequent projects, and how to bootstrap a CoE from an initial project.

Post-Implementation Review (PIR)

A post-implementation review – which may begin before production deployment, but needs to include a review of the production environment — measures the success (and failure) of the project. This should be done with any significant IT project, but is especially important for your first intelligent automation project since it involves both new technologies and new ways of working. The PIR includes the following activities:

Measure the achievement of goals and KPIs that were identified at the beginning of the project.

Your intelligent automation systems should have metrics built in to assist with the measurements of those goals and KPIs. Often these are represented as return on investment (ROI) calculations: “hard” ROI such as improving productivity and reducing compliance penalties, and “soft” ROI such as improving customer satisfaction and reducing time to market. Generally, hard ROI measures are related to cutting costs and may be evident almost immediately on implementation, while soft ROI measures are related to increasing revenues and may require months before they show results.

Review the change management required to implement the project.

Change management can include departmental reorganization, for example, if an intelligent automation project centralizes a common function across several business units. Change management also includes the impact on individual team members, such as skills retraining and worker acceptance of the new methods and systems. Change management often lags behind system implementation, and can have a negative impact on the ability to meet goals.

Align this first project in the context of your end-to-end processes and organizational capabilities.

The first intelligent automation project within an organization should cover some part of a core end-to-end process, and improve one or more core capabilities.

Identify the ideas that can be reused in other projects.

Without even considering reusable code components, there can be reuse of concepts such as project methodology and solution design that can greatly assist future projects.

It’s important to take a critical look at the project during the PIR in order to understand not only what you achieved, but also what didn’t go as well as expected. I often end up performing PIRs for my clients’ projects, and there are two main things that go wrong: the technology, and the organization.

Technology often doesn’t live up to expectations, due to vendors overhyping their products, and teams doing a bad job of designing and implementing it. That doesn’t mean that you’re going to throw it out and start over: it’s usually a matter of understanding the best practices and limitations of the technology products, and ensuring that the implementation teams understand the business requirements as well as the technology capabilities. Your PIR should identify both inappropriate usage of the technology, and mismatches between the business needs and what was built. The biggest red flag: when there is a much higher degree of customization than expected.

In almost every PIR, I also see problems caused by organizations that are unwilling to change in order to adapt to new ways of working. New technologies, especially those that fall into the intelligent automation category, almost always change the way that individual tasks are done, and can cause sweeping changes to organizational structure and business models. Organizations that simply try to shoehorn new intelligent automation technology into their existing processes, organizational structure and workforce are doomed to fail. Not everything will need to be changed, but you will have to be open to the idea of change in order to take best advantage of new automation technologies. The biggest red flag: when the old way of doing things is used as the business requirements for the new project.

This should give you some ideas of what to include in a thorough post-implementation review for your first intelligent automation project. Next time, I’ll walk you through how to take what you learned in the PIR and use that to identify the best candidate for your next project.

Follow Sandy on her personal blog Column 2.



Building Incentives Into Processes

By Sandy Kemsley

Read Time: 6 Minutes


Around 2011, I started looking at how business processes that involved people were becoming more flexible and collaborative, since many of the predictable parts of the process were being automated, which left the messier ad hoc problem solving to people. What I noticed is that there were problems with the adoption of systems used to handle these more flexible tasks. In some cases, the processes weren’t designed for the flexibility required for collaboration, because organizational culture tended to lock down worker environments rather than allowing them to adapt to the situation at hand. Also causing adoption problems was that workers had few incentives to focus on business goals when they were just rewarded for getting their bit of the work off their desk as quickly as possible.

In this time of process disruption, when we have the opportunity to retool all of our processes, how do we align our intelligent automation systems with incentives and business goals?

Remote work changes how work is done, and intelligent process automation is helping by sending people the right work at the right time, letting them collaborate with their colleagues, all while maintaining security and compliance. This changes how work is managed, since old-style offices that relied on a manager walking around to make sure that everyone looks busy just aren’t a good fit with remote work. And it changes how employee performance is measured: if your employees are rewarded on results rather than the number of widgets that they process, you don’t need to be watching them every minute.


I’ve written previously about the need for both vertical alignment, so that each of your top-level business goals can be directly traced to one or more operational activities, and horizontal alignment across the entire customer journey process. When workers are performing knowledge work, the performance metrics need to take into account that knowledge work is best when it is goal-driven, collaborative and adaptable. Measuring how well a worker performs a task that contributes to a business goal is much more important than measuring how many minutes they spent in each desktop application each day. This means understanding how their actions roll up to business goals, whether they are using a reasonable degree of collaboration to perform their actions, and whether they are using “outside the box” problem solving techniques or even changing the entire business process to adapt to an unusual circumstance.

Your intelligent automation systems need to not only provide this type of flexible environment for knowledge workers, but also measure how they’re using it to improve the quality of their decisions and work.


In general, I recommend against over-monitoring workers’ behaviors, which can create false incentives. Some companies have been sold on the idea that measuring each click and keystroke is more important than trusting their employees to perform quality work, especially for remote workers; this is almost always an indicator that there’s a misalignment between business goals and employee metrics. To quote Peter Drucker, “what is measured, improves”, so if you measure keystrokes and time spent in different applications as a proxy for productivity, that’s what people will spend their time on even if it doesn’t contribute to business goals. Instead, measure the goals achieved by workers, such as problems resolved, time to resolution, and customer satisfaction with the result. This doesn’t mean that you’re not concerned with efficiency and productivity, but you also need to be measuring the quality of decision making and problem solving as it relates to higher-level business goals and customer satisfaction.

All of this may require changes to organizational culture. If you have a strictly hierarchical and rule-following management style in your company, people are less likely to collaborate and use their own judgement in solving problems. Work may be driven by rigid processes that don’t allow people to solve problems in the most effective way.

I wrote a paper back in 2014 on how incentives need to change for knowledge workers, which included the observation that although most companies and management understand that dynamic, goal-driven processes can result in big improvements, they just don’t have the organizational culture to support that type of work. Even if executives think that they want collaboration across silos, managers’ rewards may be based on efficiency performance targets, which means that the managers will enforce more transactional incentives for their teams.

It’s accepted by now that intelligent process automation needs to be designed for agility and to allow collaboration directly within the work platform as part of the process. An agile operating model allows processes to change on a dime, and a collaborative operating model assumes that people will work together to get things done.

When you’re designing metrics for these agile, collaborative processes, consider what should be measured at the team level as well as the individual level to reward people for working together on problems. If your management is worried that this will mean that people spend too much time socializing and not enough time solving problems, then your metrics are wrong. If you have metrics that are driven by higher-level business goals, then people will collaborate with others when it’s the best way to solve a problem, not just because they’re feeling social. Having vertical alignment between business goals and individual metrics ensures that everyone, right from the CEO down to the front-line workers, is responsible in some way for achieving the business goals.

The key to designing metrics and incentives is to figure out the problems that the workers are there to solve, which are often tied in some way to customer satisfaction, then use that to derive performance metrics and employee incentives. Don’t forget that incentives can be both financial and non-financial. Some of the measurements will end up on people’s performance reviews and may contribute to a pay raise or promotion, but others are about personal satisfaction in doing a job well, or in helping others to achieve their goals.

Read Sandy Kemsley’s blog post: Making experience matter by building the right incentives into processes

Are you looking to align automation with incentives and business goals for your organization?

Request your personalized demo of the Digital Enterprise Suite today.



Discovering Processes In Unusual Places

By Sandy Kemsley

Read Time: 5 Minutes

Since 2008, I’ve spent many September days kicking off the fall conference season by attending the International Conference on Business Process Management: the BPM research conference hosted by a different university each year, and featuring academic papers and tutorials presented by professors, graduate students and researchers. This year, the conference was held virtually (we all missed that trip to Sevilla) and I attended many of the sessions even though they started at 4am in my time zone. Although it’s not quite the same as being there in person, the advantage is that all presentations were recorded and are available for replay.

I like hearing the new research ideas, and although some of them will probably never make it beyond the walls of academia, I always see a few interesting presentations that I know will impact commercial solutions in years to come. This year was no exception: I saw three presentations that brought new insights to process mining, and some ideas about how it might evolve in the future. In my August post, I provided an overview of process mining and why I think that this type of data-driven analysis is a critical skill for business process analysts.

Process mining traditionally works on structured log files from transactional systems, such as ERP systems, where activities within a process are well-defined but the order of the activities is unknown. What if, however, the activities are not well-defined, and are embedded in the content of email messages rather than transactional systems? That’s exactly the problem that Marwa Elleuch of Orange Labs (Paris) is tackling, along with several colleagues, and she presented their paper on “Discovering Activities from Emails Based on Pattern Discovery Approach” at the conference. In short, they are using pattern-discovery techniques to find activities and related business data within the text of email messages, and using these to create a structured event log that can then be analyzed using traditional process mining tools. Any one email (or chain of emails) can be related to multiple activities and multiple instances/cases; in fact, a single sentence in an email could relate to multiple activities and instances.

She walked through their research approach, and their paper in the conference proceedings covered the mathematical details of their work. To test their approach, they used a large public-domain set of email messages: a half-million messages collected from Enron during the US Federal Energy Regulatory Commission’s investigation. The results look promising, although this is still early days and more research needs to follow: compared to manual review of the emails, their algorithms were able to discover a good percentage of the activities and related business/instance data. The same methods could be applied to other unstructured communications such as text or chat messages. It’s not perfect, but considering that it performs unsupervised activity discovery with no prior knowledge, it has incredible potential.

Most of the companies that I work with have so many of their internal processes buried in email, performed in an ad hoc manner; in fact, when I visit a large operations area for the first time, I tend to look for “email and spreadsheets” as the location of many uncontrolled processes. Imagine if you could have an algorithm track through your company email messages and suggest processes to be standardized and automated? That would bring the power of process mining to a previously manual, time-consuming analysis task. It would be interesting to see what the addition of AI techniques to these researchers’ pattern discovery techniques could do to gather even more information from the unstructured text of email messages.

Another process discovery-related presentation I found interesting was “Analyzing a Helpdesk Process through the Lens of Actor Handoff Patterns”, based on research by Akhil Kumar of Pennsylvania State University and a colleague. In this case, they’re using the structured logs of known activities and handoffs between people within an IT help desk operation, but doing classical process modelling on these activities doesn’t provide a lot of answers: the process model is simple, but potentially contains a number of loops as a help ticket is directed back to someone who touched it earlier in the process. They looked at the patterns of handoffs in order to better understand the type of collaboration that was occurring, and used machine learning to predict incident resolution times. This is an interesting type of analysis for case management scenarios such as help desks, which may be able to predict time-to-resolution for cases very soon after they are initiated, just based on the pattern of handoffs between case participants. I see future potential to combine this with analysis of the issue complexity based on other instance data to provide a much more accurate resolution time; this, in turn, leads to greater customer satisfaction since a customer who understands the timeline to expect in their case is less likely to complain about the resolution time, even if the time is long.

I finished up my look at process mining and discovery methods at the conference by attending a tutorial, “Queue Mining: Process Mining Meets Queueing Theory”, presented by the three researchers involved: Avigdor Gal of Technion (Israel Institute of Technology), Arik Senderovich of the University of Toronto, and Matthias Weidlich of Humboldt-Universität zu Berlin. This was a really interesting look at how to expand process mining – which typically looks at what happens in a single instance/case of a process – to consider how the queue of instances impacts the outcomes. They looked at situations such as emergency medical waiting rooms, where queue congestion (too many people waiting) could cause some people to abandon the queue altogether: this provides insights into why some instances end early or take another path, in ways that can’t be explained by the individual instance data alone.

All of this fascinating research is just that: research. There are no commercial solutions that include these techniques and algorithms (yet), but I am predicting that we will see the outcome of some of this research in future commercial products. If you’re just starting to use process mining in your organization – possibly based on recommendations from my previous post – or are experienced at process mining techniques, rest assured that there is a lot of interesting research going on in this field, and you can expect many new capabilities in years to come.

What is CDS-Hooks?

CDS Hooks is both a free open-source specification and an HL7® published specification for user-facing remote clinical decision support (CDS). CDS Hooks can use FHIR® to represent patient information and recommendations but is architecturally an independent specification. The CDS Hooks specification is licensed under a Creative Commons Attribution 4.0 International License.

Note: HL7® and FHIR® are registered trademarks of Health Level Seven International. The use of these trademarks does not constitute an endorsement by HL7.

The four basic components of CDS Hooks are:

  • Service – A decision support service that accepts requests containing patient information, and provides responses
  • Hook – A defined point within the client system’s workflow with well-known contextual information provided as part of the request
  • EHR – An electronic health record, or other clinical information system that consumes decision support in the form of services
  • Card – Guidance from decision support services is returned in the form of cards, representing discrete recommendations or suggestions that are presented to the user within the EHR

In 2015, the SMART team launched the CDS Hooks project to trigger third-party decision support services. Today, Clinical Decision Support (CDS) Hooks is a Health Level Seven International® (HL7) specification managed by the HL7 Clinical Decision Support (CDS) Workgroup. CDS Hooks provides a way to embed near real-time functionality in a clinician’s workflow when an EHR (Electronic Health Record) system notifies external services that a specific activity occurred within an EHR user session. For example, services can register to be notified when a new patient record is opened, a medication is prescribed, or an encounter begins. Services can then return “cards” displaying text, actionable suggestions, or links to launch a SMART app from within the EHR workflow. A CDS service can gather needed data elements through secure Fast Healthcare Interoperability Resources® (FHIR®) services. Using FHIR services allows CDS Hooks to provide interoperability between healthcare systems operating on different platforms.
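
To make this concrete, here is a minimal sketch of the kind of JSON body a CDS Client POSTs to a registered service when the patient-view hook fires, written here as a Python dictionary. The field names follow the CDS Hooks specification; the identifiers, URLs, and prefetch content are hypothetical.

# Illustrative only: identifiers, URLs and prefetch values are hypothetical.
# A CDS Client sends a body like this to each service registered for patient-view.
patient_view_request = {
    "hook": "patient-view",                                   # which hook fired
    "hookInstance": "d1577c69-dfbe-44ad-ba6d-3e05e953b2ea",   # unique id for this invocation
    "fhirServer": "https://ehr.example.org/fhir",             # where extra data can be fetched
    "context": {
        "userId": "Practitioner/example",                     # clinician whose session fired the hook
        "patientId": "1288992"                                # patient whose record was opened
    },
    "prefetch": {
        # data the EHR fetched up front so the service does not have to call back
        "patientToGreet": {
            "resourceType": "Patient",
            "id": "1288992",
            "name": [{"family": "Smith", "given": ["John"]}]
        }
    }
}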

SMART – Substitutable Medical Applications, Reusable Technologies
The SMART Health IT project is run out of the Boston Children’s Hospital Computational Health Informatics Program. Through co-development and close collaboration, SMART and FHIR have evolved together since the SMART standard enables FHIR to work as an application platform. The SMART project’s most recognized work is the “SMART on FHIR” application programming interface (API) which enables an app written once to run anywhere in a healthcare system.
SMART on FHIR (Fast Healthcare Interoperability Resources)
SMART on FHIR defines a way for health apps to securely connect to EHR systems. A SMART on FHIR application or service executes via SMART specifications on a FHIR system, extending its functionality through the use of clinical and contextual data. Together with the FHIR models and API, the SMART on FHIR specification components include authorization, authentication, and UI integration.

Who Uses CDS Hooks?

CDS Hooks support is built into all the major EHR products including EPIC, Cerner, Allscripts and athenahealth. Indeed, every type of healthcare organization, from large-scale HMOs like UnitedHealth Group, Anthem, Kaiser, and Humana; to PPOs like Clover Health; to teaching hospitals like the University of Utah, Stanford Hospital, Mayo Clinic, and North Shore University Hospital (Northwell); to professional organizations like the American College of Emergency Physicians (ACEP) and the American College of Obstetricians and Gynecologists (ACOG), has access to these technologies.

Standards-Based Implementation of CDC Recommendations uses the CDS Hooks interoperability framework to integrate guideline recommendations into EHR systems.

In the United States, SMART support is specifically referenced in the 21st Century Cures Act of 2016. The 21st Century Cures Act requires a universal API for health information technology that makes all elements of a patient’s record accessible through the SMART API with no special effort. CDS Hooks services are typically implemented as SMART applications.

What is CDS Hooks Used For?

CDS Hooks enables the creation of standard places within the EHR workflow where the EHR can issue a notification of an event that is happening. This notification can be received by an external (typically SMART) application which returns pertinent information to the EHR for display to the EHR user. In FHIR-speak, the information returned is a “card.” Cards can be of three types:

  • Information card – Conveys text that may be useful for the clinician
  • Suggestion card – Provides suggestions for the clinician
  • App Link card – Links to reference materials or apps

Because CDS Hooks is an open standard, new hooks can be created by any interested party. Today, there are six standard “hooks,” which have been defined as part of Version 1 of the standard:

  1. patient-view – Patient View is typically called only once at the beginning of a user’s interaction with a specific patient’s record
  2. order-select – Order Select fires when a clinician selects one or more orders to place for a patient (including orders for medications, procedures, labs and other orders). If supported by the CDS Client, this hook may also be invoked each time the clinician selects a detail regarding the order
  3. order-sign – Order Sign fires when a clinician is ready to sign one or more orders for a patient (including orders for medications, procedures, labs and other orders).
  4. appointment-book – Appointment Book is invoked when the user is scheduling one or more future encounters/visits for the patient
  5. encounter-start – Encounter Start is invoked when the user is initiating a new encounter. In an inpatient setting, this would be the time of admission. In an outpatient/community environment, this would be the time of patient check-in for a face-to-face encounter, or the equivalent for a virtual/telephone encounter
  6. encounter-discharge – Encounter Discharge is invoked when the user is performing the discharge process for an encounter where the notion of ‘discharge’ is relevant – typically an inpatient encounter

Prescription writing is an example you can use to visualize the way this technology works. In the EHR, the event of writing the prescription triggers a CDS hook (order-select). The hook can be received by a patient engagement application. The application then returns a Suggestion card recommending that the patient be provided with educational materials, enrolled in a support program, and offered copay assistance. An Information card indicating drug/drug or drug/disease adverse reactions might also be supplied.
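
A hedged sketch of the response such a service might return for this scenario is shown below. The card fields (summary, indicator, source, suggestions, links) come from the CDS Hooks card definition; the text, UUID, and URL are invented for illustration.

# Hypothetical response from a patient engagement service to an order-select hook.
order_select_response = {
    "cards": [
        {
            "summary": "Patient support program available for this medication",
            "indicator": "info",                                  # info | warning | critical
            "source": {"label": "Patient Engagement Service"},
            "suggestions": [
                {
                    "label": "Enroll patient and send educational materials",
                    "uuid": "6a8bfa55-0000-1111-2222-333344445555"
                }
            ],
            "links": [
                {
                    "label": "Copay assistance offer",
                    "url": "https://example.org/copay-assistance",
                    "type": "absolute"
                }
            ]
        },
        {
            "summary": "No drug/drug or drug/disease adverse reactions found",
            "indicator": "info",
            "source": {"label": "Interaction Check Service"}
        }
    ]
}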

The CDS Hooks Standard

The CDS Hooks specification and Implementation Guide describes the RESTful APIs and interactions used to integrate Clinical Decision Support (CDS) between CDS Clients (typically EHRs, but possibly other health information systems) and CDS Services. All data exchanged through the RESTful APIs is sent and received as JSON structures and transmitted over channels secured using HTTPS (Hypertext Transfer Protocol (HTTP) over Transport Layer Security (TLS)).

This specification describes a “hook”-based pattern for invoking decision support from within a clinician’s workflow. The API supports:

  • Synchronous, workflow-triggered CDS calls returning information and suggestions
  • Launching a user-facing SMART app when CDS requires additional interaction

User activity inside the clinician’s workflow, typically in the organization’s EHR, triggers CDS hooks in real-time. Examples of this include:

  • patient-view when opening a new patient record
  • order-select on authoring a new prescription
  • order-sign on viewing pending orders for approval

When a triggering activity occurs, the CDS Client notifies each CDS service registered for the activity. These services must then provide near-real-time feedback about the triggering event. Each service gets basic details about the clinical workflow context (via the context parameter of the hook) plus whatever service-specific data it requires. This data is often provided through the FHIR prefetch queries declared at the CDS Hooks Discovery URL.
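
As an illustrative sketch of the service side (not Trisotech’s implementation), the handler below pulls the patient id from the hook’s context, falls back on a hypothetical prefetch entry for the patient’s name, and returns a single Information card.

# Illustrative handler; the prefetch key and card text are hypothetical.
def handle_patient_view(request_body: dict) -> dict:
    """Build a card response from a patient-view hook invocation."""
    context = request_body.get("context", {})      # basic workflow details (userId, patientId)
    prefetch = request_body.get("prefetch", {})    # FHIR data pushed along by the CDS Client

    patient_id = context.get("patientId", "unknown")
    patient = prefetch.get("patientToGreet", {})   # hypothetical prefetch key
    names = patient.get("name") or [{}]
    family = names[0].get("family", "")

    return {
        "cards": [
            {
                "summary": f"Reviewing record for patient {family or patient_id}",
                "indicator": "info",
                "source": {"label": "Example CDS Service"},
            }
        ]
    }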

Each CDS service can return any number of cards in response to the hook. Cards convey some combination of text (Information card), alternative suggestions (Suggestion card), and links to apps or reference materials (App Link card). A user sees these cards, one or more of each type, in the EHR user interface, and can interact with them as follows:

  • Information card: read the informational text
  • Suggestion card: read one or more specific suggestions, for which the CDS Client renders a button that the user can click to accept; clicking automatically populates the suggested change into the clinician’s EHR user interface
  • App Link card: click a link to an app (typically a SMART app) where the user can supply details, step through a flowchart, or do anything else required to help reach an informed decision

A good way to get a single-page technical overview of CDS Hooks including Discovery, Request and Response specs is to view the CDS Hooks “cheat sheet”.

Download Cheat Sheet

Trisotech and CDS Hooks

The Trisotech CDS Hooks option generates CDS Hooks-compliant endpoints for Trisotech automation services.

CDS Hooks Discovery Endpoint

Trisotech modeling and automation platforms support the CDS Hooks 1.0 standard for processing and data prefetch in the Workflow and Decision Management modelers. Once the desired actions are modeled, publishing the models to the Service Library for automation purposes automatically provides a CDS Hooks Discovery endpoint URL for that service. That endpoint describes the CDS Service definition including the prefetch information required to invoke it according to the CDS Hooks 1.0 standard. Currently, this endpoint implements the standard CDS Hooks discovery endpoint for Patient View (patient-view).

CDS Hooks FHIR Prefetch

Using the CDS Hooks endpoint features requires preparation through entries in model inputs. Each input that will be prefetched from FHIR needs to have a custom attribute called FHIR that contains the query for the prefetch. For example, in a Workflow Data Input shape the custom attribute to prefetch the current patient information might look like this: FHIR: Patient/{{context.patientId}}

If you are using a decision model, the DMN Input shape to prefetch a specific-code observation for the current patient might have a custom attribute like this: FHIR: Observation?subject={{context.patientId}}&code=29463-7

If your model is properly configured with the FHIR custom attributes and published to the Trisotech Service Library, an HTTP “GET” on the provided endpoint will result in a JSON description of the service. Each CDS Service is described by the following attributes:

  • hook (REQUIRED, string) – The hook this service should be invoked on
  • title (RECOMMENDED, string) – The human-friendly name of this service
  • description (REQUIRED, string) – The description of this service
  • id (REQUIRED, string) – The {id} portion of the URL to this service, which is available at {baseUrl}/cds-services/{id}
  • prefetch (OPTIONAL, object) – An object containing key/value pairs of FHIR queries that this service asks the EHR to prefetch and provide on each service call. The key is a string that describes the type of data being requested, and the value is a string representing the FHIR query.
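
For example, a discovery call and the rough shape of the JSON it returns might look like the sketch below. The base URL, service id, and prefetch keys are hypothetical, but the field names follow the attributes above, and the prefetch templates echo the FHIR custom attributes described earlier.

import requests

# Hypothetical discovery URL for a published service.
discovery_url = "https://example.trisotech.com/cds-services"

services = requests.get(discovery_url, timeout=10).json()
print(services)

# A response of roughly this shape would be expected (values are illustrative):
# {
#   "services": [
#     {
#       "hook": "patient-view",
#       "title": "BMI Screening Recommendation",
#       "description": "Recommends follow-up based on the patient's latest weight",
#       "id": "bmi-screening",
#       "prefetch": {
#         "patient": "Patient/{{context.patientId}}",
#         "bodyWeight": "Observation?subject={{context.patientId}}&code=29463-7"
#       }
#     }
#   ]
# }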

Also, the prefetched data can be POSTed to the base URL (the discovery URL with /cds-services removed) to execute the service.
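
Here is a minimal sketch of executing a published service this way, assuming (purely as an illustration) that the body POSTed is the standard CDS Hooks request carrying the hook context and the prefetched FHIR data; the URL and all values are hypothetical.

import uuid
import requests

# Hypothetical execution URL: the discovery URL with "/cds-services" removed.
base_url = "https://example.trisotech.com"

request_body = {
    "hook": "patient-view",
    "hookInstance": str(uuid.uuid4()),
    "context": {"userId": "Practitioner/example", "patientId": "1288992"},
    "prefetch": {
        # prefetched FHIR resources matching the service's declared prefetch keys
        "patient": {"resourceType": "Patient", "id": "1288992"},
        "bodyWeight": {"resourceType": "Observation",
                       "valueQuantity": {"value": 82, "unit": "kg"}}
    }
}

response = requests.post(base_url, json=request_body, timeout=10)
print(response.json())   # expected to contain a "cards" array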

CDS Card FEEL Templates

Trisotech modeling also supplies modeling templates for all three types of CDS Cards. Information, Suggestion and App Link card templates are available in the FEEL language format for inclusion in Decision and Workflow models.

CDS Card Automatic Default Generation and Explicit Card Mapping

A CDS Card is automatically generated from the service unless your service is designed to explicitly output cards. By default, an Information card with the name of the service as its summary is generated, and each service output is added to the card detail in the format Key: Value.
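
As a purely illustrative sketch of this default behaviour, a hypothetical service named “BMI Screening” with two outputs might be rendered as a card along these lines (the indicator and source values here are assumptions, not documented behaviour):

# Hypothetical auto-generated Information card for a service with two outputs.
default_card = {
    "summary": "BMI Screening",                               # the service name
    "indicator": "info",                                      # assumed indicator
    "source": {"label": "Trisotech Service Library"},         # assumed source label
    "detail": "BMI: 27.4\nRecommendation: Schedule follow-up" # each output as Key: Value
}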

For explicit mapping, if your model/service outputs a data type that is consistent with the CDS Card definition (required fields: summary, indicator, and source), only these outputs will be transformed to cards. It should be noted that your model/service can have one or many outputs that are CDS Cards, and an output can also be a collection of cards. The output’s data type is what determines whether explicit mapping is used for a service.
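
For the explicit case, here is a hedged sketch of a service output whose data type matches the CDS Card definition and would therefore be passed through as a card rather than wrapped in a default one (all values hypothetical):

# Hypothetical service output that already conforms to the CDS Card definition.
explicit_card_output = {
    "summary": "High blood pressure recorded at last encounter",  # required
    "indicator": "warning",                                       # required
    "source": {"label": "Hypertension Guideline Service"},        # required
    "detail": "Consider scheduling a follow-up blood pressure check."
}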

CDS Hooks Sandbox Testing

The CDS Hooks community provides a publicly available sandbox for service testing, and all Trisotech modeling and service CDS Hooks features are tested and demonstrated using the CDS Hooks sandbox.

CDS Hooks Continuous Improvement

Trisotech is continuously improving its CDS Hooks features, including plans to add more of the Version 1 standard hooks such as order-select and encounter-start.
