
Orchestrating Generative AI in Business Process Models

By Dr. John Svirbely, MD

Read Time: 2 Minutes

Generative AI is spreading fast and constantly becoming more powerful. Its uses and roles in healthcare are still uncertain. Although it will be disruptive, it is unclear what it will change or what will be replaced as the technology evolves.

The use of Generative AI poses several challenges, at least for now. In some respects, it behaves like a black box. It may be unable to give the sources for what it produces, making the reliability of its output hard to judge. It can be hard to validate depending on how it is used. These factors may make doctors, patients, and regulators nervous about its use in a sensitive area like healthcare. If a claim of malpractice is made involving it, its opaque behavior may be hard to defend.

Generative AI and Business Process Models

A business process model can access Generative AI simply by adding a connector to a task, which is done by a simple drag and drop. Because it is now part of a process, you can control when and how it is called.

Since there may be several possible paths through the model, you can have different calls that are appropriate for each path. Orchestrating the output provides an opportunity to give an individualized solution for a specific situation. Orchestration of Generative AI can make it less of a black box.

Since the calls to Generative AI can be tightly constrained and since you know exactly where it is being used and what the inputs are, the appropriateness of its explanation can be judged in context. This can make validation a bit less daunting.

Illustrative Example

A common problem in healthcare is the need to communicate health information to patients. Not only may the patient and family not understand what the provider is saying, but also the provider may misunderstand the patient. The need to communicate better has created a need for access to human translators around the clock. This raises other problems, as the translator may not understand the nuances of medical terms. It can also be quite expensive since you need to have multiple translators on call.

In Figure 1 there is a portion of a BPMN model for the diagnosis of anemia. A DMN decision model first determines whether a patient has anemia, and, if so, its severity. It may be desirable to inform the patient quickly and easily about these findings. The problem of translation can be approached by taking the outputs of the decision and sending them as inputs to Generative AI (in this case OpenAI, indicated by the icon in the top left corner), along with the patient’s preferred language and education level. The Generative AI then takes these inputs and instructions and generates a letter tailored to the patient.

Figure 1

Generating narrative text is a strength for Generative AI. If known inputs and appropriate constraints are placed on it, then it can reproducibly generate a letter to inform a patient of the diagnosis in language that the patient can understand. Performance can be validated by periodic review of various outputs by a suitably qualified person. This can simply but elegantly solve problems in a cost-effective manner.
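To make this concrete, here is a minimal sketch of what such an orchestrated call might look like in code, assuming the OpenAI Python client; the function name, prompt wording, and model choice are illustrative assumptions, not the actual connector configuration, which is set up visually in the process model.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_patient_letter(diagnosis: str, severity: str,
                         language: str, education_level: str) -> str:
    # The DMN decision outputs become tightly constrained inputs to the prompt.
    prompt = (
        f"Write a short letter informing a patient that they have {diagnosis} "
        f"of {severity} severity. Write it in {language}, suited to a "
        f"{education_level} reading level. Do not add medical advice beyond "
        f"these findings."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
        temperature=0.2,  # low temperature favors reproducible wording
    )
    return response.choices[0].message.content

# Example call with outputs from the anemia decision model:
print(draft_patient_letter("anemia", "moderate", "Spanish", "8th-grade"))
```

Because the process model fixes the inputs and instructions, every call is auditable: a reviewer can see exactly what was sent and what came back.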



Going from Zero to Success using BPM+ for Healthcare.
Part I: Learning Modeling and Notation Tools

By Dr. John Svirbely, MD

Read Time: 3 Minutes

Welcome to the first installment of this informative three-part series providing an overview of the resources and the success factors required to develop innovative, interoperable healthcare workflow and decision applications using the BPM+ family of open standards. This series will unravel the complexities and necessities for achieving success with your first clinical guideline automation project. Part I focuses on how long it will take you to reach cruising speed for creating BPM+ visual models.

When starting something new, people often ask some common questions. One is how long it will take to learn the new skills required. This impacts how long it will take to complete a project, and therefore costs. Learning something new can also be somewhat painful when we are set in our old ways.

Asking such questions is important, since there is often a disconnect between what is promoted online and the reality. I can give my perspective based on using the Trisotech tools for several years, starting essentially from scratch.

How long does it take to learn?

The simple answer – it depends. A small project can be tackled by a single person quite rapidly. That is how I got started. Major projects using these tools should be approached as team projects rather than something an individual can do. Sure, there are people who can master a wide range of skills, but in general most people are better at some things than others. Focusing on a few things is more productive than trying to do everything. A person can become familiar with the range of tools, but they need to realize that they may only be able to unlock a part of what is needed to automate a clinical guideline.

The roles that need to be filled to automate a clinical guideline with BPM+ include:

1. subject matter expert (SME)

2. medical informaticist

3. visual model builder

4. hospital programmer/system integrator

5. project manager

6. and of course, tester

A team needs to be composed of people who bring a range of skills to fill these roles. A larger project may need more than one person in some of them.

The amount of time needed to bring a subject matter expert (SME) up to speed is relatively short. Most modeling diagrams can be understood and followed after a few days. I personally use a tool called the Knowledge Entity Modeler (KEM) to document domain knowledge; this allows specification of term definitions, clinical coding, concept maps, and rule definitions. The KEM is based on the SBVR standard, but its visual interface makes everything simple to grasp. Other comparable visual tools are available. The time spent is quickly compensated for by greater efficiency in knowledge transfer.

The medical informaticist has a number of essential tasks, such as controlling terminology, standardizing data, and assigning code terms. This person must understand the nuances of how clinical data is acquired, including FHIR. These services should not be underestimated, since failures here can cause many problems later as the number of models increases or as models from different sources are installed.

The model builder uses the various visual modeling languages (DMN, BPMN, CMMN) according to the processes and decisions specified by the SME. These tools can be learned quickly to some extent, but there are nuances that may take years to master. While some people can teach themselves from books or videos, the benefits of taking a formal course vastly outweigh the cost and time spent. Trisotech offers eLearning modules that you can learn from at your own pace.

When building models, there is a world of difference between a notional model and one that is automatable. Notional models are good for knowledge capture and transfer. A notional model may look good on paper only to fail when one tries to automate it. The reasons for this will be discussed in Part 3 of this blog series.

The hospital programmer or system integrator is the person who connects the models with the local EHR or FHIR server so that the necessary data is available. Tools based on CDS Hooks or SMART on FHIR can integrate the models into the clinical workflow so that they can be used by clinicians. This person may not need to learn the modeling tools to perform these tasks.

The job of the project manager is primarily standard project management. Some knowledge of the technologies is helpful for understanding the problems that arise. This person’s main task is to orchestrate the entire project so that it keeps focused and on schedule. In addition, the person keeps chief administrators up to date and tries to get adequate resources.

The final player is the tester. Testing prior to release is best done independently of other team members to maintain objectivity. There is potential for liability with any medical software, and these tools are no exception. This person also oversees other quality measures such as bug reports and complaints. Knowing the modeling languages is helpful but understanding how to test software is more important.

My journey

I am a retired pathologist, not a programmer. While I have used computers for many years, my career was spent working in community hospitals. When I first encountered the BPM+ standards, it took several months and a lot of prodding before I was convinced to take formal training. I have never regretted that decision and wish that I had taken training sooner.

I started with DMN. On-line training takes about a month. After an additional month I had enough familiarity to become productive. In the following 12 months I was able to generate over 1,000 DMN models while doing many other things. It was not uncommon to generate 4 models in one day.

I learned BPMN next. Training online again took a month. This takes a bit longer to learn because it requires an appreciation of how to design a process so that it executes optimally. Initially a model would take me 2-3 days to complete, but later this dropped to less than a day. Complex models can take longer, especially when multiple people need to be orchestrated and exception handling is introduced.

CMMN, although offering great promise for healthcare, is a tough nut to crack. Training is harder to arrange, and few vendors offer automatable versions. This standard is better saved until the other standards have been mastered.

What are the barriers?

Most of the difficulties that I have encountered have not been related to using the standards. They usually arise from organizational or operational issues. Some common barriers that I have encountered include:

1. lack of clear objectives, or objectives that constantly change.

2. lack of commitment from management, with insufficient resources.

3. unrealistic expectations.

4. rushing into models before adequate preparations are made.

If these can be avoided, then most projects can be completed in a satisfactory manner. How long it takes to implement a clinical guideline will be discussed in the next blog.


Instance Alignment in BPMN

By Bruce Silver

Read Time: 3 Minutes

One of the most common mistakes beginners make with BPMN stems from lack of clarity as to what exactly BPMN means by a process. A BPMN process is a defined set of sequences of activities, performed repeatedly in the course of business, starting from some triggering event and leading to some final state. The key word here is “repeatedly”. The same process definition is followed by each instance of the process. Not all instances follow the same sequence of activities, but all follow some sequence allowed by the process definition. That’s not Method and Style, that’s from the spec. The spec just doesn’t say that very clearly.

Each process instance has a defined start and end. The start is the triggering event, a BPMN start event. The end occurs when the instance reaches an end state of the process instance, which in Method and Style is an end event. It helps to have a concrete idea of what the process instance represents, but I have found in my BPMN Method and Style training that most students starting out cannot tell you. Actually it’s very easy: It is the handling of the triggering event, which in Method and Style is one of only three kinds: a Message event, representing an external request; a Timer event, representing a scheduled recurring process; or a None start event, representing manual start by an activity performer in the process, which you could call an internal request. Of these three, Message start is by far the most common. That request message could take the form of a loan application, a vacation request, or an alarm sent by some system. The process instance in that case is then essentially the loan application, the vacation request, or the alarm. In Method and Style, it’s the label of the message flow into the process start event. With a Timer start event, the instance is that particular occurrence of the process, as indicated by the label of the start event.

Here is why knowing what the process instance represents is important. The instance of every activity in the process must have one-to-one correspondence with the process instance! Of course, there are a few exceptions, but failure to understand this fundamental point leads to structural errors in your BPMN model. And those structural errors are commonplace in beginner models, because other corners of the BPM universe don’t apply that constraint to what they call “processes”.

Take, for example, the Process Classification Framework of APQC, a well-known BPM Architecture organization. It is a catalog of processes and process activities commonly found in organizations. But these frequently are not what BPMN would call processes. Even those that qualify as BPMN processes may contain activities that are not performed repeatedly on instances or whose instances are not aligned with the process instance. Here is one called Process Expense Reimbursements, listing five activities.

But notice that two of the five (8.6.2.1 and 8.6.2.5) are not activities aligned one-to-one with the process instance. That is, they are not performed once for each expense report. That means that if we were to model 8.6.2 Process Expense Reimbursements in BPMN, activities 8.6.2.1 and 8.6.2.5 could not be BPMN activities in that BPMN process. So where do they go? They need to be modeled in separate processes… if they can be modeled as BPMN processes at all! Take 8.6.2.1 Establish and communicate policies and limits. For simplicity, let’s assume that establishing and communicating have one-to-one correspondence, so they could be part of a single process. How does an instance of that process start? It could be a recurring process – that’s Timer start – performed annually. Or it could be triggered occasionally on demand – Message or None start. The point is that 8.6.2.1 needs to be modeled as a process separate from Process Expense Reimbursements. The result of that process, the policy and limits information, is accessible to Process Expense Reimbursements through shared data, such as a datastore.

Activity 8.6.2.5 Manage personal accounts is not a BPMN activity at all. It cannot be a subprocess, because there is no specified set of activity sequences from start to end. To me it is an instance of a case in CMMN, not an activity in this BPMN process.

All this is simply to point out that instance alignment is a problem specific to BPMN because other parts of BPM do not require it.

Since “business processes” in the real world often involve actions that are not one-to-one aligned with the main BPMN process instance, how do we handle them? We’ve already seen one way: Put the non-aligned activity in a separate process – or possibly case. Communication of data and state between the main process and the external process or case is achieved by a combination of messages and shared data.

Repeating activities are another way to achieve instance alignment.

When instance alignment requires two BPMN processes working in concert, it is often helpful to draw the top level of both processes in the same diagram. This can clarify the relationship between the instances as well as the coordination mechanism, a combination of messages and shared data. You can indicate a one-to-N relationship between instances of Process A and Process B by placing a multi-instance marker on the pool of Process B.

An example of this we use in the BPMN Method and Style training is a hiring process. The instance of the main process is a job opening to be filled. It starts when the job is posted and ends when it is filled or the posting is withdrawn. So it qualifies as a BPMN process. But most of the work is dealing with each individual applicant. You don’t know how many applications you will need to process. You want processing of multiple applicants to overlap in time, but they don’t start simultaneously; each starts when the application is received. So repeating activities don’t work here. One possible solution is shown below.

Here there is one instance of Hiring Process for N instances of Evaluate Candidate, so the latter has the multi-participant marker. Hiring Process starts manually when the job is posted and ends when either the job is filled or the posting expires unfilled after three months. Each instance of Evaluate Candidate starts when the application is received, and there are various ways it could end. It could end right at the start if the job is already filled, since before the instance is routed to any person, the process checks a datastore for the current status of the job opening. It could end after Screen and interview if the candidate is rejected. If an offer is extended, it could end if the candidate rejects the offer, or successfully if the offer is accepted. And there is one more way: Each running instance could be terminated in a Message event subprocess upon receiving notice from Hiring Process that the posting is either filled or canceled. While not perfect, this BPMN model illustrates instance alignment between multiple processes working in concert, including how information is communicated between them via messages and shared data.

There is yet another way to do it… all in a single process! It uses a non-interrupting Message event subprocess, and is an exception to the rule that all process activities must align one-to-one with the process instance. It looks like this:

Now instead of being a separate process, Evaluate Applicant is a Message event subprocess. Each Application message creates a new instance of Evaluate Applicant. You don’t know how many will be received, and they can overlap in time. As before, each instance checks the datastore Job status. Since everything is now in one process, we can no longer use messages to communicate between Evaluate Applicant and the main process. Here we have a second datastore, candidates, updated by Evaluate Applicant and queried by Get shortlist to find newly passed applicants. Instead of an interrupting event subprocess to end the instance, we use a Terminate event after notifying all in-process candidates.

If you are just creating descriptive, i.e., non-executable, BPMN models, you may wonder why instance alignment matters. It certainly can make your models more complicated. But even in descriptive models, in order for the process logic to be clear and complete from the printed diagrams alone – the basic Method and Style principle – the BPMN must be structurally correct. If it is not, the other details of the model cannot be trusted. If you want to get your whole team on board with BPMN Method and Style, check out my training. The course includes 60-day use of Trisotech Workflow Modeler, lots of hands-on exercises, and post-class certification.

Follow Bruce Silver on Method & Style.


AI and BPM

By Sandy Kemsley

Video Time: 8 Minutes

Hi, I’m Sandy Kemsley of column2.com. I’m here for the Trisotech blog to talk about the latest Hot Topic in BPM: Artificial Intelligence.

Now, I’ve been at a couple of conferences in the last month, and I’ve had a few briefings with vendors and there’s a lot of interest in this intersection between AI and BPM. But what does that mean? What exactly is the intersection? And it’s not just one answer, because there’s several places where AI and Process Management come together.

Now, the dream, or the nightmare in some people’s opinion, is that AI just takes over processes: it figures out what the process should be, then automates and executes all the steps in the process. The reality is both less and more than that. So, let’s look at the different use cases for AI in the context of BPM. Let’s start at the beginning with process discovery and design; there’s quite a bit that AI can do in this area as an assistive technology. Now, at this point it might not be possible to have AI completely design processes without human intervention, but it is possible to have AI act as a sort of co-pilot for authoring process models or finding improvements to them.

There’s a couple different scenarios for this.

First of all, you could have a person just describe the process that they want in broad terms, and have generative AI create a first impression, or first version, of that process model for them. Then the human designer can make changes directly or add additional information, the AI can make refinements to the process model, and so on. Now, the challenge with using generative AI in this scenario is that you need to ensure that the training data is relevant to your situation. This might mean that you need to use private AI engines and data sources that are trained on your own internal data, or at the very least on data that’s specific to your industry, in order to ensure reasonably good results.

Now, the second process modeling scenario is when there are logs of processes in place, like we would use for process mining, and we’ve talked about process mining in a previous vlog. In that case, there are possibilities for having AI look at the log data, along with other enterprise and domain data, and, using process mining and other search-based optimization, suggest improvements to the process. So, for example, adding parallelism at certain points, or automating certain steps or decisions, or making some activities required for regulatory or conformance reasons. Again, there need to be some checks and balances on the training data that’s used for the AI, to ensure that you’ve included the processes and regulations that pertain to your business.

Now, in both of these cases, there’s the expectation that a person who’s responsible for the overall process operation, like the process owner, might review the models that are created or revised by the AI before they’re put into production. It’s not just an automated thing where the AI creates or modifies a model and it’s off and running. The same types of AI and algorithms that we would use for process improvement, based on process mining and other domain knowledge, can also be used in a scenario where AI again acts as a co-pilot, but this time for the people doing the human activities in a process: the knowledge workers. They can ask complex questions about the case that they’re working on, they can be offered suggestions on the next best action, and they can have guardrails put in place so that they don’t make decisions at a particular activity that would violate regulations or policies.

Now, we already see a lot of decision management and machine learning applied in exactly this situation, where a knowledge worker just needs a little bit of an assist to help them make complex decisions or perform more complex activities. And adding AI to the mix means that we can have even more complex automation and decision-making to support knowledge workers as they do their job. The ultimate goal is to ensure that the knowledge workers are making the best possible decisions at activities within processes, even if the environment is changing, maybe because regulations are changing or procedures are changing, and also to support less skilled knowledge workers so that they can become more familiar with the procedures that are required, because they have a trusted expert, namely the AI, by their side coaching them on what they should be doing next.

Now, the last scenario for AI in the context of processes is to have a completely automated system, or even just completely automated activities within a process that used to be performed by a person. The more times an activity is performed successfully, with data collected about the context and the domain knowledge behind each decision, the more likely it is that AI can be trained to make decisions and perform activities of the same complexity and with the same level of quality as a human operator. We also see this with AI chatbots. We’re seeing these a lot now, where they interact with other parties in processes, such as providing customer service information. Previously a knowledge worker might have interacted with a customer by phone or email; now we’re seeing a lot of chatbots in place for customer service scenarios. A lot of them are pretty simple and don’t really deserve to be called AI: they’re just looking for simple words and providing some stock answers. But what generative AI is starting to give us in this scenario is the ability to respond to more complex questions from a customer and leave the human operators free to handle situations that can’t be automated, or rather can’t be automated yet.

Now, currently I don’t think we need to worry about AI completely taking over our business processes. There are lots of places where AI can act as a co-pilot to assist designers and knowledge workers to do the best job possible. But it doesn’t replace their roles: it just gives them an assist. A lot of industries don’t have all the skilled people that they need in both of these areas, designers and knowledge workers, or it takes a long time to train them, so letting the people who are there be more productive is a good thing. Using AI to make the few skilled resources we have more productive is beneficial to the industry and to customers. Now, as I noted earlier, the ability of AI to make these kinds of quality decisions and perform the types of actions that are currently being done by people is going to be heavily reliant on the training data that’s used for the AI. So, you can’t just use the public chatbots, like ChatGPT, for interacting with your customers. That’s not going to work out all that well. Instead, you want to train on some of your own internal data as well as some industry-specific data.

Now, where we do start to see people being replaced is where AI is used to fully automate specific activities, decisions, or customer interactions within a process. However, this is not a new phenomenon. Process automation has been replacing people doing repetitive activities for a long time. All that we’re doing by adding AI is increasing the complexity of the type of activity that can be fully automated. The idea that we’re automating some activities is not new; this has been going on a long time. The bar has been creeping up: we went from simple automation to more complex decision management and machine learning, and now we have full AI in its current manifestation. So, we just need to get used to the idea that it’s another step in the spectrum of things that we’re doing by adding intelligence into our business processes.

Now, are you worried about your own job? You could go and check out willrobotstakemyjob.com, or just look around at what’s happening in your industry. If you’re adding value through skills and knowledge that you have personally and that are very difficult to replicate, you’re probably going to be able to stay ahead of the curve, and you’ll just get a nice new AI assistant who’s going to help you out. If you’re doing the same thing over and over again, however, you should probably be planning for when AI gets smart enough to do your job as well as you do.

That’s all for today. You can find more of my writing and videos on the Trisotech blog or on my own blog at column2.com. See you next time.


Future-Proofing Your Business With BPM

By Sandy Kemsley

Video Time: 8 Minutes

Hi, I’m Sandy Kemsley of column2.com. I’m here for the Trisotech blog with a new topic now that we’ve finished my series on best practices in business automation application development. Today, I’m going to talk about future proofing your business with process automation technologies such as BPM.

Now, a little over three years ago, everything was business as usual. Organizations were focused on new competitors, new business models, new regulations, but it wasn’t a particularly disruptive time. Then the pandemic happened, and that was disruptive! Supply chains, customers, business models: everything changed, dramatically in some cases.

Now, that’s not news by now, of course, but it’s become obvious that it’s not enough to have shifted to some new way of doing things in response to a one-time event. Companies have to become more easily adaptable to frequent disruptions, whether they’re technological, societal or environmental, or they’re just not going to exist anymore. In many cases this means modernizing the very technological infrastructure that supports the business.

So how is your business going to adapt to change, both the changes that have already happened and the unknown changes of the future? There’s a lot that falls under the umbrella of modernization, and you need to look at whether you’re doing just enough to survive, or whether you’re taking advantage of this disruption to thrive and actually outgrow your competition.

I see three ways that companies have been reacting to disruption:

1. You can support your existing business, which is basically adding the minimum amount of technology to do the same things that you were doing before. This is purely a survival model, but if you have a unique product or service or very loyal customers, that might be enough for you.

2. You can improve your business by offering the same products or services but in a much better way. This gives you better resilience to future disruptions, improves customer satisfaction, and shifts you from just surviving to thriving.

3. You can innovate to expand the products and services that you offer or move into completely new markets. This is going to let you leapfrog your competition and truly thrive, not just as we emerge from the pandemic, but in any sort of future disruption that we might have.

More than managing your business processes
So I mentioned BPM, but this is about more than just managing your business processes. There’s a wide variety of technologies that come into play here and that really support future proofing of your business: process and decision automation, intelligent analysis with machine learning and AI, content and capture, customer interactions with intelligent chatbots, and Cloud infrastructure for Access anywhere anytime…

So you have to look at how to bring all of those together, and just understanding how they all fit is an entire day’s lecture on its own, but you probably have a bunch of them in use already. Let’s look at a few examples of this support/improve/innovate spectrum that I’ve been talking about as it applies to dealing with disruption, and what it means for future-proofing your business. Supporting your existing business is a matter of just doing what you can to survive, hoping that either you can keep up or that things will go back to normal. Basically you’re doing the same business that you always were, but with maybe a bit of new technology to support some new ways of doing things:

  • Your employees might be working from home, so you needed some new cloud or network technology to help with this.
  • You probably also need some new management techniques in order to stay productive and motivated even though your workforce is highly distributed geographically.
  • You also need to handle changing customer expectations. So you have to have some amount of digital interactions, and if you’re dealing with physical goods you might be looking at new ways of handling delivery.
  • Your supply chain processes need to become flexible. This is one of the things we really saw during the pandemic, where there were a lot of broken supply chains, so you want to be able to change suppliers or channels in the event of disruption.

But let’s go a little bit beyond surviving disruption by patching something together to support your existing model. The next step is to look at disruption as an opportunity to thrive. You still want to be in the same business, but embrace new technologies and new ways of doing things. This really pushes further into looking at customer expectations: adding self-serve options if you don’t already have them, and then coupling that with intelligent automation of processes and decisions. Once you’ve added intelligence to your business operations to let them run mostly without human intervention, a customer can kick off a transaction through self-service and see it completed almost immediately by intelligent automation. Same business, better way to do it: more efficient, faster, more accurate, better customer satisfaction.

Now, this is also going to be helped by having proper business metrics that are oriented towards your business goals. With more automation, data about how your operation is working is captured directly and feeds into those metrics. You can then use the metrics to guide knowledge workers so that they know what they should be doing next, and to understand where customer satisfaction stands and how you can improve it.

So this lets you move past your competition while keeping your previous business focus. Given two companies, you and a competitor, offering the same products or services, if one does only the survival-level support that I talked about previously and the other makes more intelligent improvements focused on customer satisfaction, who do you think is going to win?

Now, the third stage of responding to disruption and adapting to change is innovation. You’ll continue to do process and operational improvements through performance monitoring and data-driven analytics, but also move into completely new business models. Maybe you repackage your products or services and sell them to completely different markets: you might move from commercial to consumer markets or vice versa, or sell into different geographies or different industries, because now you have more intelligent processes and this always-on elastic infrastructure. Here again, you’re moving past your competition by not only improving your business but actually expanding into new markets, taking on new business models that are supported by this technology-based innovation.

So it’s the right application of technology that lets you do more types of business and more volume without increasing your employee headcount. Without automation and flexible processes you just couldn’t do that, and without data-driven analytics you wouldn’t have any understanding of the impact that such a change would have on your business, or whether you should even try it. You need all of that: the data that supports the analytics, and the right type of technology applied to make your business operations more intelligent. This is what’s going to allow you to move from just surviving, to thriving, to innovation.

Now, that’s a lot of change. The question that all of you need to be asking yourselves now is not “is this the new normal” but really “why weren’t we doing things this way before?” There are just a lot of better ways that we could be doing things, and we’re now being pushed to take them on.

That’s all for today. Next month I’m going to be attending the academic BPM conference in the Netherlands, and there’s always some cool new ideas that come up so watch for my reports from over there!

You can find more of my writing and videos on the Trisotech blog or on my own blog at column2.com. See you next time.


What is FEEL?

FEEL (Friendly Enough Expression Language) is a powerful and flexible standard expression language developed by the OMG® (Object Management Group) as part of the Decision Model and Notation (DMN™) international standard.



It is a valuable tool for modeling and managing decision logic in many domains, including healthcare, finance, insurance, and supply chain management. FEEL is designed specifically for decision modeling and execution and to be human-readable to business users, while still maintaining the expressive power needed for complex decision-making. Its simplicity, expressiveness, domain-agnostic functionality, strong typing, extensibility, and standardization make FEEL a valuable tool for representing and executing complex decision logic in a clear and efficient manner. Organizations using FEEL enjoy better collaboration, increased productivity, and more accurate decision-making.

What Are Expression Languages?

FEEL is a low-code expression language, but what is the difference between expression languages, scripting languages, and programming languages? They are all types of languages used to write code, but they have distinct characteristics and uses.

Expression Languages

Expression languages are primarily designed for data manipulation and configuration purposes. They are focused on evaluating expressions rather than providing full-fledged programming capabilities. Expression languages are normally functional in nature, meaning that at execution the expression will be replaced by the resulting value. What makes them attractive to both citizen developers and professional developers is that they are usually simpler and have a more limited syntax compared to general-purpose programming languages and/or scripting languages. Due to their simplicity, expression languages are often more readable and easier to use for non-programmers or users who don’t have an extensive coding background. FEEL is a standard expression language.

Scripting Languages

Scripting languages provide abstractions and higher-level constructs that make programming using them easier and more concise than programming languages. They are usually interpreted rather than compiled, meaning that the code is executed line-by-line by an interpreter rather than being transformed into machine code before execution. Popular examples of scripting languages are Python, JavaScript, and Ruby.

Programming Languages

Programming languages are general-purpose computer languages designed to express algorithms and instructions to perform a wide range of tasks and create applications. They offer extensive features and capabilities for developing complex algorithms, data structures, and user interfaces. They offer better performance compared to scripting languages due to the possibility of compiling code directly into machine code. Examples of programming languages include C++, Java, and C#.
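To make the distinction concrete, here is a minimal Python illustration (Python is used here purely as an example, not FEEL syntax): an expression evaluates to a single value that takes its place, while a script is a sequence of statements executed for their effects.

```python
# An expression evaluates to a value that replaces it, the way an
# expression engine substitutes a FEEL expression with its result.
result = eval("price * quantity", {"price": 20, "quantity": 3})
print(result)  # 60

# A script is a sequence of statements executed line by line for their effects.
lines = []
for i in range(3):
    lines.append(f"processed item {i}")
print("\n".join(lines))
```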

Is FEEL Like Microsoft Power FX (Excel Formula Language)?

FEEL and Power FX are both expression languages used for data, business rules, and expressions, but in different contexts. Power FX is a low-code programming language based on Excel Formula Language, tailored for Microsoft Power Platform, with some limitations in handling complex decision logic. As soon as the business logic gets a bit tricky, Power FX expressions tend to become highly complex to read and maintain. On the other hand, FEEL is a human-readable decision modeling language, designed for business analysts and domain experts, offering a rich set of features for defining decision logic, including support for data transformations, nested decision structures, and iteration. FEEL provides clear logic and data separation, making it easier to understand and maintain complex decision models.

While Power FX has a visual development environment in the Microsoft Power Platform, FEEL is primarily used within business rules and decision management systems supporting DMN and process orchestration platforms. FEEL is a language standard across multiple BPM and decision management platforms, providing interoperability, while Power FX is tightly integrated with Microsoft Power Platform services. For further comparison, see Bruce Silver’s articles FEEL versus Excel Formulas and Translating Excel Examples into DMN Logic.

FEEL Benefits for Technical People and Business People

Technical Benefits of FEEL

Decision-focused language

FEEL is designed specifically for decision modeling and business rules. It provides a rich set of built-in functions and operators that are tailored for common decision-making tasks. This decision focus makes FEEL highly expressive and efficient for modeling complex business logic.

Expressiveness

FEEL supports common mathematical operations, string manipulation, date and time functions, temporal logic and more. This expressiveness enables the representation of complex decision rules in a concise and intuitive manner.

Decision Table Support

FEEL has native support for decision tables, which are a popular technique for representing decision logic. Decision tables provide a tabular representation of rules and outcomes, making it easy to understand and maintain complex decision logic.
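To give a feel for the idea, here is a tiny decision table rendered in plain Python (not DMN or FEEL notation); the discount rules are invented for illustration, and the rows are evaluated top to bottom, like a DMN "first" hit policy.

```python
# Each rule pairs a condition on the inputs with an outcome.
rules = [
    (lambda order, loyal: order >= 1000 and loyal, 0.15),  # big order, loyal customer
    (lambda order, loyal: order >= 1000,           0.10),  # big order
    (lambda order, loyal: loyal,                   0.05),  # loyal customer
    (lambda order, loyal: True,                    0.00),  # default row
]

def discount(order_total: float, is_loyal: bool) -> float:
    # Return the outcome of the first rule whose condition matches.
    for condition, outcome in rules:
        if condition(order_total, is_loyal):
            return outcome

print(discount(1200.0, False))  # 0.10
```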

Strong typing and type inference

FEEL is a strongly typed language, which means it enforces strict type checking. This feature helps prevent common programming errors by ensuring that values and operations are compatible.

Boxed Expression Support for FEEL

Boxed expressions allow FEEL expressions and statements to be structured visually including:

  • If, then, else statements
  • For, in, return statements
  • List membership statements
  • … and more.

These visual constructs, along with autocompletion, make complex expressions easy to create, read, understand, and debug.

Flexibility and modularity

FEEL supports modular rule definitions and reusable expressions, promoting code reuse and maintainability. It allows the creation of decision models and rule sets that can be easily extended, modified, and updated as business requirements change. This flexibility ensures agility in decision-making processes.

Testing and Debugging

FEEL expressions can be tested and debugged independently of the larger application or system. This enables users to validate and verify decision logic before deployment, ensuring accuracy and reliability. FEEL also provides error handling and exception mechanisms that help identify and resolve issues in decision models.

Execution efficiency

FEEL expressions are designed to be executed efficiently, providing fast and scalable performance. FEEL engines often use optimized evaluation algorithms and data structures to ensure high-speed execution of decision logic, even for complex rule sets.

Integration

FEEL can be easily integrated with other programming languages and platforms. Many decision management systems and business rules engines provide support for executing FEEL expressions alongside other code or as part of a larger application. This enables seamless integration of decision logic via services into existing IT architectures and workflows.
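In practice, that service-based integration is typically a REST call. The sketch below is a hedged illustration only: the endpoint URL and the payload and response shapes are hypothetical, since the real ones depend on the platform hosting the decision model.

```python
import requests

# Hypothetical decision service endpoint; not a documented Trisotech URL.
DECISION_URL = "https://example.com/decision-services/loan-approval"

# Input data for the decision, named to match the model's input nodes.
inputs = {"creditScore": 710, "loanAmount": 250000, "annualIncome": 95000}

response = requests.post(DECISION_URL, json=inputs)
response.raise_for_status()
print(response.json())  # e.g. {"approval": "Approved"}
```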

Extensibility

FEEL can be extended with domain-specific functions and operators to cater to specific industries or business domains. These extensions can be defined to encapsulate common calculations, business rules, or industry-specific logic, enabling greater reusability and modularity.

Interoperability

FEEL also enables the sharing and reuse of decision models across different organizations and applications.

Business Benefits of FEEL

Standardization and Vendor-neutrality

FEEL is a standardized language within the OMG DMN standard, which means it has a well-defined specification and is supported by various software tools and platforms. Standardization ensures interoperability, as FEEL expressions can be used across different DMN-compliant systems without compatibility issues. FEEL is designed to be portable across different platforms and implementations.

Business-Friendly

FEEL focuses on capturing business rules and decision logic in a way that is intuitive and natural for business users. This allows subject matter experts and domain specialists to directly participate in the decision modeling process, reducing the dependency on IT teams and accelerating the development cycle.

Simplicity and Readability

FEEL has a syntax that is easy to read and understand – even for non-technical users like subject matter experts and citizen developers. It uses natural language constructs including spaces in names and common mathematical notation. This simplicity enhances collaboration between technical and non-technical stakeholders, facilitating the development of effective decision models.

Ease of Use

FEEL is supported by various decision management tools and platforms. These tools provide visual modeling capabilities, debugging, testing, and other features that enhance productivity and ease of use. The availability of modeling and automation tooling simplifies the adoption and use of FEEL.

Decision Traceability

FEEL expressions support the capture of decision traceability, allowing users to track and document the underlying logic behind decision-making processes. This traceability enhances transparency and auditability, making it easier to understand and justify the decisions made within an organization.

Decision Automation

FEEL has well-defined semantics that support the execution of decision models. It allows the evaluation of expressions and decision tables, enabling the automated execution of decision logic. This executable semantics ensures that the decision models defined in FEEL can be deployed and executed in a runtime environment with other programs and systems.

Compliance and Governance

FEEL supports the definition of decision logic in a structured and auditable manner. This helps businesses ensure compliance with regulatory requirements and internal policies. FEEL’s ability to express decision rules transparently allows organizations to track and document decision-making processes, facilitating regulatory audits and internal governance practices. FEEL includes several features specifically tailored for decision modeling and rule evaluation. It supports concepts like ranges, intervals, and temporal reasoning, allowing for precise specification of conditions and constraints. These domain-specific features make FEEL particularly suitable for industries where decision-making based on rules and constraints is critical, such as healthcare, finance, insurance, and compliance.

Decision Analytics

FEEL provides the foundation for decision analytics and reporting. By expressing decision logic in FEEL, organizations can capture data and insights related to decision-making processes. This data can be leveraged for analysis, optimization, and continuous improvement of decision models. FEEL’s expressive capabilities allow for the integration of decision analytics tools and techniques, enabling businesses to gain deeper insights into their decision-making processes.

Trisotech FEEL Support

Most comprehensive FEEL implementation

Trisotech provides the industry’s most comprehensive modeling and automation tools for DMN, including support for the full syntax, grammar, and functions of the FEEL expression language. To learn more about the basic types, logical operators, arithmetic operators, intervals, statements, extraction, and filters supported by Trisotech, see the FEEL Poster.

FEEL Boxed Expressions

Boxed Expressions are visual depictions of the decisions’ logic. Trisotech’s visual editor makes the creation of Boxed Expressions and FEEL expressions easy and accessible to non-programmers and professional programmers alike.

FEEL Functions

FEEL’s entire set of built-in functions is documented and menu-selectable in the editor. The visual editor also offers support for the Trisotech-provided custom FEEL functions, including functions for Automation, Finance, Healthcare, and other categories.

Autocompletion

The Trisotech FEEL autocompletion feature proposes variable and function names, including qualified names, as you type when editing expressions, saving time and improving accuracy.

FEEL as a Universal Expression Language

Trisotech has also expanded the availability of the international standard FEEL expression language to its Workflow (BPMN) and Case Management (CMMN) visual modelers. For example, FEEL expressions can be used for providing Gateway logic in BPMN and If Part Condition expressions on sentries in CMMN.

FEEL Validation and Debugging

Trisotech provides validation for FEEL and real-time full-featured debugging capabilities. To learn more about testing and debugging read the blog Trisotech Debuggers.

Additional Presentations and Blogs

You can also watch a presentation by Denis Gagne on using FEEL as a standards-based low-code development tool and read a blog by Bruce Silver about how using FEEL in DMN along with BPMN™ is the key to standards-based Low-Code Business Automation.

OMG®, BPMN™ (Business Process Model and Notation™), DMN™ (Decision Model and Notation™), CMMN™ (Case Management Model and Notation™), FIBO®, and BPM+ Health™ are either registered trademarks or trademarks of Object Management Group, Inc. in the United States and/or other countries.


Invoking AWS Lambda functions

By Trisotech

Read Time: 2 Minutes

You have created your own code in an AWS Lambda function in Python, Java, JavaScript or C# and want to integrate it with an automated workflow created in the Trisotech Digital Enterprise Suite?

This simple Python function says hello from the region it is deployed in with HTTP 200 as plain text.
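The original post shows the function as a screenshot. A minimal sketch of such a handler might look like the following, assuming the REST API trigger uses Lambda proxy integration, which expects the statusCode/headers/body response shape.

```python
import os

def lambda_handler(event, context):
    # Lambda exposes its deployment region via the AWS_REGION environment variable.
    region = os.environ.get("AWS_REGION", "unknown")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "text/plain"},
        "body": f"Hello from {region}!",
    }
```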

An AWS Lambda function can be easily deployed, but it is important to make sure that access to the function is restricted. This is where authentication comes into the picture. In this case it is based on an API key that is assigned and provided to the consumers of the function. To be able to use API key-based authentication, the function needs to have a trigger based on API Gateway.

The API Gateway is added as a trigger to the function. It is important to use REST API as the type and API key as the security mode.

With API Gateway as the trigger, the function gets assigned a URL through which it can be invoked from outside. It will be in the following format:

https://{xxxxxxxxxx}.execute-api.{region}.amazonaws.com/default/DESSample

where {xxxxxxxxxx} and {region} are replaced with actual values based on your AWS environment.

Additionally, one API key is created automatically, and more API keys can be created in the configuration of the API Gateway. The API key needs to be provided as an HTTP header (named x-api-key) when invoking the function.
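Putting the URL and the header together, a consumer might invoke the function like this (the URL and key below are placeholders; substitute the values from your own API Gateway configuration):

```python
import requests

URL = "https://xxxxxxxxxx.execute-api.us-east-1.amazonaws.com/default/DESSample"  # placeholder
API_KEY = "your-api-key"  # placeholder

# Without a valid x-api-key header, API Gateway answers 403 Forbidden.
response = requests.get(URL, headers={"x-api-key": API_KEY})
print(response.status_code, response.text)
```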

The Trisotech Digital Modeling and Automation suites trivialize the orchestration of externally defined code with their low-code approach, provided the code is exposed through a standard REST API that can be described using the OpenAPI (Swagger) standard.

The integration is done through a BPMN service task that invokes the Lambda function.

The Lambda function is referenced using the Operation Library, which defines where the Lambda function can be accessed, its parameters, and its security constraints. Clicking on the service task gear will allow you to create a new Interface and Operation.

Name your Interface and configure it (using the pen icon) with:

  • Server URL – the server URL where this interface can be found
  • Security
    • Type: API Key
    • In: header
    • Name: x-api-key

The Security section defines the mechanisms used to authenticate when invoking the service. In this case, it uses an API key as defined in the API Gateway for the AWS Lambda function.

Name your Operation and configure it (using the pen icon) with:

  • Method – HTTP method that the request will use.
  • Path – context path in the server URL for the AWS Lambda function.
  • Outputs – data to be retrieved from the function.
    • result

This integration makes it possible to invoke not only Lambda functions written in any language, but also more complex services exposed through a REST API, opening an infinite world of orchestration and integration using your existing or newly created functions and services.

Trisotech also offers an Automation Cookbook as part of the Digital Automation Suite that contains many other recipes for integrating systems with its automation capabilities.


Standardizing BPMN Labels

By Bruce Silver

Read Time: 4 Minutes

In my BPMN Method and Style training, I show the following BPMN and ask students, “What does this diagram say?”

You really cannot tell. It says something happens and then either the process ends or something else happens. Not very informative. But if you run Validation against the rules of the BPMN spec… no errors!

As far as the spec is concerned, it’s perfect. But if the goal is to communicate the process logic, it’s useless. If we run Validation against the Method and Style rules, we get this:

Now we get 6 errors, and they all pertain to the same thing: labels. Compare that process diagram with one containing labels and other “optional” elements:

Now the diagram says something meaningful. Beyond labels, the BPMN spec considers display of task type icons, event triggers, message flows, and black-box pools to be optional. Their meaning is defined in the spec, but modelers may omit them from the diagrams. Labels – their meaning, placement, or suggested syntax – are not discussed at all. They are pure methodology.

Obviously, to communicate process logic intelligibly through diagrams, labels and similar methodological elements are necessary. That was the main reason I created my own methodology, called Method and Style, over a decade ago, which includes rules about where labels are required, what each one means, and where corresponding diagram elements must be labeled identically. The best BPMN tools, like Trisotech Workflow Modeler, have Method and Style Validation built in, and thousands of students have been trained and certified on how to use it.

Here are some of the rules related to labeling:

  • Activities should be labeled (Verb-object) to indicate an action.
  • A Message start event should be labeled Receive [message name].
  • A Timer start event should be labeled with the frequency of occurrence.
  • The label of a child-level page should match the name of the subprocess.
  • Two activities in the same process should not have the same name unless they are identical.
  • A boundary event should be labeled.
  • An Error or Escalation boundary event on a subprocess should be labeled to match the throwing Error event.
  • A throwing or catching intermediate event should be labeled.
  • If a process level has multiple end events, each end event should be labeled with the name of the end state (Noun-adjective).
  • Each gate of an XOR gateway should be labeled with the name of an end state of the previous activity. If there are two gates, the gateway may be labeled with the end state plus “?” and the gates labeled yes and no.
  • Gates of an AND gateway should not be labeled.
  • Non-default gates of an OR gateway should be labeled.
  • Two end events in a process level should not have the same name. If they mean the same end state, combine them; otherwise give them different names.

One reason why the task force developing the BPMN 2.0 standard didn’t care about labels is that their focus was primarily on model execution, which depends heavily on process data. While it is possible to suggest process data and data flow in the diagrams, this is something that – in most tools – is defined by Java programmers, and as a consequence is omitted entirely in descriptive, i.e. non-executable, models. In order to reveal model meaning in descriptive models in the absence of process data, Method and Style introduced the concept of end states.

The end state of an activity or process just means how it completed, whether successfully or in some exception state. End state labels, typically in the form Noun-adjective, serve as a proxy for the process data used to determine branching at gateways. For example, in the diagram above, the gate labels Valid and Invalid are by convention the end states of the preceding activity Validate request. An executable version of this process would not necessarily have a variable with those enumerated values, but the labels suggest the process logic in an intuitive way.
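To make the proxy idea concrete, here is a tiny hypothetical sketch in Python of the kind of data condition those gate labels stand in for in an executable version:

```python
# Hypothetical sketch: the gate labels "Valid" and "Invalid" stand in for a
# boolean condition on process data that an executable model would attach
# to each gate.
def route_after_validate_request(end_state):
    # end_state names the end state of the preceding activity Validate request
    if end_state == "Valid":
        return "continue processing the request"
    return "reject the request"

print(route_after_validate_request("Valid"))    # continue processing the request
print(route_after_validate_request("Invalid"))  # reject the request
```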

I’m not going to review the other Method and Style conventions and rules here. There is plenty of information about them on my website methodandstyle.com, in my book BPMN Quick and Easy, and of course in my BPMN Method and Style training. Method and Style makes the difference between BPMN diagrams that communicate the logic clearly and completely and those that are incomplete and ambiguous. BPM project team managers tell me that before Method and Style, their BPMN diagrams could be understood only by the person who created them. Standardization of the meaning and usage of diagram labels changed all that.

CMMN, the case management standard, has the same issue. For example, a plan item’s entry and exit criteria, modeled using sentries and ON-part links, require labels on both the ON-part (a standard event) and the IF-part (a data condition) to make sense, but labeling is not required or even suggested by the spec. Without labels, all we can tell is that some entry or exit condition exists. My book CMMN Method and Style suggests several labeling conventions for those diagrams, as well.

The Trisotech platform provides a key benefit to Method and Style practitioners: Method and Style rules about labels and other conventions can be checked in one click. Style rule validation not only improves model quality but reinforces the rules in the modeler’s mind. In the training, certification exercises must be free of any validation errors.

If your BPM project depends on shared understanding of BPMN diagrams, you need to go beyond the spec with labeling standards. You need Method and Style.



Bruce Silver's blog post - Interrupting Events in Automation
Bruce Silver
Blog

Interrupting Events in Automation

By Bruce Silver

Read Time: 2 Minutes

In my BPMN Method and Style training, we use examples like the one below to illustrate the difference between interrupting and non-interrupting boundary events:

Here an Order process with four subprocesses could possibly be cancelled by the Customer at any time. As you can see, a single physical Cancellation message from Customer is modeled as multiple message flows. That’s because the Cancellation message is caught by four different message boundary events, representing four different ways the message is handled depending on the state of the process when Cancellation occurs. Note that if the message is received during Prepare invoice, i.e., after Ship order is complete, the Order process cannot be terminated, and explanatory information is instead sent to the Customer. And this is a fine way to show, in descriptive models – i.e., non-executable BPMN – the behavior expected with exceptions like Customer Cancellation.

But as I have begun to get more involved with Automation models – executable BPMN – I am discovering that modeling exception handling in this way is not ideal. The BPMN spec says that an interrupting boundary event terminates the activity immediately. As far as the process engine is concerned, the instance just goes away, along with all its data. A user performing some task inside a cancelled subprocess is unaware of this until the task is complete; the Performer gets no notification that their work on this Order is for naught. That’s the main problem. There is also the problem that some actions performed in the activity prior to cancellation may need to be undone. That can be done on the exception flow, so the main difficulty, as I see it, is notifying the user performing some task when the Cancellation occurs.

Let’s focus on the first subprocess, Enter order. In this simplified example, the Order is first logged into the Order table, then inventory is checked for each Order item, reserving the Order quantity for each one. If some items are out of stock, we need to Update Order table. An Order Acknowledgment message is sent to the Customer containing the OrderID, a unique identifier in the system. The Customer would need to provide this in any Cancellation message. These are all automated activities, occurring very fast. Then a User task Check credit authorizes the purchase for this Customer. That could take a while, so if the Customer decides to cancel shortly after submitting the order, it’s likely going to occur during Check credit. And if that happens, we don’t want the user performing Check credit to waste any more time on this Order, as would happen with an interrupting boundary event.

So instead we model it as a non-interrupting event subprocess. We could do it with a non-interrupting boundary event on the subprocess, but this way I think is cleaner. Now if the Cancellation message is received, before we terminate Enter order we do a bit of cleanup: we Update Order table to show a status of “Cancelled by Customer” and notify the Check credit performer by email to stop working on this instance, after which we terminate the subprocess with an Error end event. The exception flow from this Error event in the parent level allows additional exception handling for the Cancellation.

This exception handling requires some additional information about the process instance. We need to know who the task performer of Check credit is. This is platform-dependent, but on the Trisotech Low-Code Automation platform you can use a FEEL extension function to retrieve the current task performer. In a more complex subprocess, we may also need to know its state when the cancellation occurred: which tasks have been completed and may need to be undone. For this, Trisotech provides additional extension functions as well. Cleaning up when a process instance is cancelled in-flight can be messy, but it’s still within the reach of Low-Code Business Automation.
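Here is a minimal sketch of that cleanup sequence, in plain Python rather than the Trisotech runtime; every function and table name below is hypothetical:

```python
# Minimal sketch (hypothetical names, not Trisotech's actual API) of the
# cleanup done by the non-interrupting event subprocess on Cancellation.
class OrderCancelled(Exception):
    """Stands in for the Error end event caught at the parent level."""

def handle_cancellation(order_id, order_table, send_email):
    # Update the Order table so the instance shows the new status.
    order_table.set_status(order_id, "Cancelled by Customer")
    # Find the current Check credit performer (on the Trisotech platform this
    # would use a FEEL extension function) and tell them to stop work.
    performer = order_table.current_performer(order_id, task="Check credit")
    send_email(performer, f"Order {order_id} was cancelled; stop work on Check credit.")
    # Terminate Enter order via the Error end event; the parent-level
    # exception flow handles any further cleanup.
    raise OrderCancelled(order_id)
```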



Modelling the Preoperative Surgical Journey

An introduction to Business Process Management for Healthcare (BPM+ Health)

Presented at the BCS Health and Care by

John Svirbely, MD, CMIO, Trisotech
Denis Gagne, CEO & CTO, Trisotech

This webinar provides an introduction to BPM+ using the Preoperative Surgical Journey as an example.

Speakers demonstrate visual modelling and automation for the Preoperative Surgical Journey based on the three open standards that make up BPM+.

BPM+ Health is a multidisciplinary initiative, with high levels of participation from clinicians, to improve the quality and consistency of healthcare delivery. It is achieving this by applying business process modelling standards to clinical best practices, care pathways and workflows directly at the point of care.

Further information on BPM+ Health can be found at https://www.bpm-plus.org.


Bruce Silver's blog post - BPMN's Magic Event Type
Bruce Silver
Blog

BPMN’s Magic Event Type

By Bruce Silver

Read Time: 3 Minutes

Occasionally in my BPMN Method and Style training, a student will submit a Certification exercise containing a Conditional event. I have always rejected that. Conditional events are not part of Method and Style, and that’s because I have always considered them to mean “some magic occurs”.

According to the spec, a Conditional event may be used either as the Start event of a process or event subprocess, a Catching Intermediate event, or a Boundary event, triggered when its Boolean expression attribute becomes true. But what data can that expression reference? This is the problem, because the spec does not say. In fact, the only place where the BPMN 2.0 spec speaks to it is in the context of a process Start event, where no instance data yet exists!

For that, it says:
This type of event is triggered when a condition such as S&P 500 changes by more than 10% since opening, or Temperature above 300C become true. The condition Expression for the Event MUST become false and then true before the Event can be triggered again. The Condition Expression of a Conditional Start Event MUST NOT refer to the data context or instance attribute of the Process (as the Process instance has not yet been created). Instead, it MAY refer to static Process attributes and states of entities in the environment. The specification of mechanisms to access such states is out of scope of the standard.

Here you should interpret the term “out of scope” to mean undefined, or to use my word, “magic.”

If you were a generous sort, you might infer that for other Conditional events, the expression references the “data context”, i.e. what we call process data. And it turns out that this is exactly how Trisotech has implemented Conditional events. Because non-executable processes, our focus in Method and Style, do not define process data, I will continue to avoid Conditional events there. But in Business Automation, they enable some event-triggered behaviors that are otherwise more complicated to model.

Recall from previous posts that in Trisotech BPMN, process data and expressions are defined using FEEL, the Low-Code language defined in the DMN spec. The condition expression of a Conditional event is thus a FEEL Boolean expression referencing instance data. A Conditional event is triggered when the value of one or more process variables changes such that the condition expression becomes true.
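The “must become false and then true” rule from the spec quote above means the event is edge-triggered. Here is a toy Python illustration of that semantics (plain Python, not the Trisotech engine; the temperature condition echoes the spec’s own example):

```python
# A toy illustration of edge-triggered Conditional event semantics: the
# condition is re-evaluated whenever process data changes, and the event
# fires only on a false -> true transition, per the spec's "must become
# false and then true" rule.
class ConditionalEvent:
    def __init__(self, condition):
        self.condition = condition  # a boolean function of process data
        self.was_true = False       # result of the previous evaluation

    def on_data_change(self, data):
        now_true = self.condition(data)
        fires = now_true and not self.was_true
        self.was_true = now_true
        return fires

event = ConditionalEvent(lambda d: d["temperature"] > 300)
print(event.on_data_change({"temperature": 250}))  # False: condition is false
print(event.on_data_change({"temperature": 320}))  # True: false -> true edge
print(event.on_data_change({"temperature": 330}))  # False: no new edge yet
```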

So when can that occur?

Except for cloud datastores, which may be updated from outside the process, the value of a process variable is changed only when its incoming data association occurs, i.e. at the completion of the activity or event at the source of that connector. Thus it is always possible that Conditional event behaviors could alternatively be modeled more conventionally using gateways, sometimes in combination with Signal events. For example, suppose we have an established decision service Validate Input, but starting in 2023 we want to perform additional validations under certain conditions. We could model this using a non-interrupting Conditional event subprocess, as shown below.

However, we could model the same logic more conventionally using an OR gateway leading to a regular subprocess.

In the BPMN Method and Style training we discuss two ways a process can receive information from outside: either via a message or shared data. In Business Automation, using shared data is probably more common. If that shared data used a cloud datastore (as opposed to an external app or database), you might think you could wait for a condition on the datastore to become true. Unfortunately, the Conditional event cannot be used for this. At least on the Trisotech platform today, the datastore update must be caused by a data association from within the process, not externally. For example, the scenario below, in which a Service request process waits for payment via a separate Payment process, does not work with Conditional events. Changes to the datastore Orders are not visible to the Conditional event.

But we could put the Payment process inside our main process using a call activity, which would allow the wait-for-Condition to work:

Although one could debate the point, on balance this solution is probably better than the more conventional alternative using a Signal event thrown by Payment process.

If you are willing to have Payment process throw a Signal, throwing a Message is even simpler.

Conditional boundary events have the same basic issue. Because the condition cannot be triggered from outside the process, it must be caused by a parallel thread of execution within the process. (In the absence of parallel flow, a simple gateway testing the condition is always going to be simplest.) So again, the modeler’s choice is between a Conditional boundary event testing shared process data and Signal throw-catch. And again, I believe that in this case the Conditional event solution is better.

Bottom line, while Conditional events appear useful on the surface, their utility is in practice limited to scenarios in which actions on one parallel thread of a process control actions on another thread.



BPM+ Virtual Coffee
5 Min Intro to BPMN

A short introduction to the Business Process Model and Notation (BPMN)

Presented by

Denis Gagne, CEO & CTO at Trisotech

Good day everybody and welcome to our BPM+ Health Virtual Coffee session. The topic for today is a five-minute introduction to BPMN. So, without any further ado, let me go ahead and do this quick introduction. BPMN stands for Business Process Model and Notation and it’s a standard published by the Object Management Group, which is a Standard Development Organization. BPMN is an open standard, meaning that anybody can go and access the specification. The URL is here. You can go and download this specification.

BPMN offers a single visual process knowledge artifact for both humans and machines. On the one hand, it offers a visual story of what needs to be done, and on the other hand, for machines, it offers a portable execution semantic for automation. So, it enables this duality for us.

Why does BPMN matter?

Because BPMN offers an unambiguous format for modeling processes. It also offers a file format that can be interchanged between different vendor products, and it provides a common and readily transferable set of skills that are learned by subject matter experts. I’ve been arguing for many years now that this third point is the most important. The BPMN standard is taught by many different organizations, and there are plenty of books; I think there are over 100 books on BPMN. People can easily learn BPMN and, whenever a BPMN expert leaves your organization, the next BPMN expert can be found in the market. So, this is a very important factor for all organizations.

What is BPMN?
  • BPMN is fundamentally a language for describing operations. It basically prescribes the next task to do.
  • It’s a means to an end. Meaning that we should model for a purpose. Whether your purpose is documentation or automation, this will drive how you use BPMN.
  • It’s a tool, not a solution. Meaning that you require domain knowledge and modeling skills to properly create BPMN models.

What is very nice about BPMN is that there are only four basic shapes that you need to know:

  • Whenever you see a circle in BPMN it means an event. There are various types of circles but they’re all events.
  • Whenever you see a rounded rectangle shape that means an activity. Again, there are different types of activities, but whenever you see a rounded rectangle, it’s an activity.
  • Whenever you see a diamond, it is about routing. It is a routing gateway, and a little note here is that a routing gateway is not the decision in itself; the decision needs to happen before the gateway. The gateway only depicts the possible routes and the logic for taking those routes.
  • Whenever you see an arrow, it depicts flow. A solid arrow is a sequence flow showing what to do next.

There is also a whole set of markers and decorators that are used for more expressiveness in BPMN. But if you know how to read these four shapes, you know how to read BPMN, or if you know how to write or draw these four shapes, you know how to draw BPMN.

Going back to one of the first points presented, it is a visual story about what needs to be done next. We have a single artifact that is both for the subject matter expert and for the automation. The shape at the top here is a pool, representing some external participant. The start event, which is a circle, is the event that triggers the instance. The rounded rectangles are tasks to be completed. The gear marker here says that this is a service task. The arrow shows us what to do next. The next task is a PMML or Predictive Model task (an AI kind of task). Next, we have a CQL task. Next, here we have a little table marker (meaning that this is a decision task), and as I mentioned before, the decision is taken prior to the routing behavior. The x in the gateway means it is an exclusive choice between the two routes. We then have a case task and a subprocess with the plus marker, meaning that there is a further breakdown of this activity that is defined in more detail. The dog-ear document shape represents the data coming in. The dotted arrow is a data flow, and the dashed line with an open arrowhead is an external communication via message flow. The double circle is an intermediate event, and again another type of circle, a thick-border circle, is an end event. So, circles are events, rounded rectangles are activities, diamond shapes are routings, and the arrows depict the flow.

BPMN has a token semantic. That means that you have to think of a token traversing a path, and when the token gets to a routing point, if it is an exclusive choice like here, it will take only one path all the way to the end. So, that’s how the semantic of execution of BPMN works.

Now BPMN, as I mentioned in the introduction, is both for documentation and automation. What’s the difference? Basically, most of the time you are modeling for documentation. Your goal is then to make the diagram as unambiguous and clear as possible in specifying the logic of the process. We do recommend the BPMN Method and Style approach promoted by Bruce Silver. There is a series of books on Method and Style that will help you make sure that your BPMN model captures everything that you wanted to communicate. When you are looking at automation, automation requires explicit specification of the data and their types. That may not be needed when you are modeling for documentation, but it is certainly needed when you are modeling for automation. Then the BPMN engine will orchestrate the next task to be completed. Getting back to the notion of a traversing token I mentioned before, the engine basically moves the token for you and either gets the next activity completed by some service or offers it to a user, or group of users, to complete. There is a nice blog post here that discusses the difference between modeling for documentation and modeling for automation. And that is basically it for my introduction to BPMN in five minutes.


Bruce Silver's blog post - Executable BPMN vs Method and Style
Bruce Silver
Blog

Executable BPMN vs Method & Style

By Bruce Silver

Read Time: 4 Minutes

For many years my work has focused on non-executable BPMN using a set of conventions called Method and Style.

In the past year or two I have turned my attention to Low-Code Business Automation, based on executable BPMN. When I wrote my book BPMN Method and Style – over a decade ago! – I imagined that harmonizing executable BPMN with Method and Style would be a natural thing, but that never happened. Now, with a good bit of experience with Business Automation under my belt, I understand why. In this post we’ll look at the differences between non-executable and executable BPMN modeling.

Data

The most obvious difference between the two concerns data. Executable models explicitly define process variables (data objects) and show the data flow in the diagram. Non-executable models do neither. The reason for that is fundamental. The BPMN 2.0 spec does not specify a language for defining process data and expressions, and for years most BPMN tools have used Java. That made executable BPMN accessible to programmers only. But actually, most BPMN modelers are business users, not programmers. They are just looking to document and improve their business processes, not automate them. Method and Style was created with those users in mind.

BPM teams spend countless hours trying to document how their processes work and capture that knowledge in BPMN, but too often the resulting models are understandable only to the person who created them. The challenge for non-executable BPMN is revealing the process logic clearly and completely from the diagrams… to someone who doesn’t already know how it works. You need a bit of data to do that, and Method and Style provides it implicitly using the concept of end states. The end state of an activity or process names a possible way it could end, either successfully or in some exception state. With processes and subprocesses, we use a separate end event labeled to indicate the end state. Method and Style says that an activity – task or subprocess – with multiple end states must be followed by an XOR gateway with gates labeled to match each end state. These labels reveal, at a basic level, the conditions under which any instance follows a particular path through the diagram.

Executable BPMN works differently. Here the goal of the model is executability on a BPMN engine, and this requires explicit definition of process and task variables. You can still label gates and end events as end states, but these labels are just annotations; what counts in executable routing is a boolean condition on process variables attached to each gate.

Executable BPMN distinguishes process variables from task variables. Process variables are defined by the modeler and visualized in the diagram as data objects, data inputs, and data outputs. Task variables do not appear in the diagram. They are properties of each task, and for some task types are not defined by the modeler at all but by an invoked REST API. The modeler must map process variables to and from the task variables. On the Trisotech platform, both process and task variables are converted to FEEL data, so the data mapping is Low-Code, using FEEL and boxed expressions borrowed from DMN.
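As a concrete illustration of that mapping, here is a hypothetical sketch in plain Python; on the Trisotech platform this would be expressed with FEEL boxed expressions, and all variable names below are invented:

```python
# Hypothetical sketch of the input/output mapping a modeler defines between
# process variables and a task's own variables (FEEL boxed expressions on
# the Trisotech platform; plain Python here).
process_vars = {"customer": {"name": "Jane", "region": "US"}, "order_total": 250.0}

# Input mapping: build the task's input variables from process data.
task_inputs = {
    "region": process_vars["customer"]["region"],
    "amount": process_vars["order_total"],
}

# ... the task executes, producing task outputs ...
task_outputs = {"approved": True}

# Output mapping: copy task outputs back into process variables.
process_vars["credit_approved"] = task_outputs["approved"]
```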

A final data-related difference between Business Automation and Method and Style is the need for data validation and error handling in executable models. To avoid runtime errors, executable BPMN cannot assume data inputs are complete or valid. In Method and Style we don’t worry about that.
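As a rough illustration of what such validation involves, here is a minimal sketch in plain Python; on the Trisotech platform this logic would be a FEEL expression, and the field names are hypothetical (they anticipate the Vacation Request example shown later in this post):

```python
# Minimal input-validation sketch with hypothetical field names; an
# executable process cannot assume its data inputs are complete or valid.
from datetime import date

def validate_vacation_request(req):
    """Return a list of validation errors; an empty list means the input is valid."""
    errors = []
    for field in ("employee_id", "start_date", "end_date"):
        if req.get(field) in (None, ""):
            errors.append(f"missing required field: {field}")
    if not errors and req["end_date"] < req["start_date"]:
        errors.append("end_date precedes start_date")
    return errors

print(validate_vacation_request(
    {"employee_id": "E123", "start_date": date(2023, 7, 1), "end_date": date(2023, 6, 1)}
))  # ['end_date precedes start_date']
```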

Task Types

A second key difference is task types. In Method and Style we pay almost no attention to task types, except to distinguish human from automated tasks. Often modelers don’t assign a task type at all. In executable BPMN, task types are essential, as each task type performs a specific function. A task without an assigned task type does nothing.

  • A Service task executes an external REST API catalogued in the model’s Operation Library, as described previously.
  • A Decision task (in the tool called a Business Rule task) executes modeler-defined business logic in the form of a DMN decision service.
  • A Script task executes simple modeler-defined business logic – a single FEEL literal expression.
  • A User task routes a task form to a human performer and returns its output to the process.
  • A Call Activity executes modeler-defined process logic in the form of a BPMN process service.

Each task type has its own configuration details, but all use the same Low-Code data mapping. In a Service task, Decision task, or Call Activity, the task inputs and outputs are defined by the invoked API, not by the calling process. In a Script or User task, the task inputs and outputs are defined by the process modeler.

In executable models, Service, Script, and User tasks are extremely fine-grained. While in Method and Style an automated task (typically represented as a Service task) can be imagined to perform a complex action containing multiple discrete operations, in executable BPMN a Service task is limited to a single service operation. That means that a complex automated function is more likely implemented as a Call Activity, a process involving a sequence of Service tasks. User tasks are similar. While in Method and Style a User task is imagined to perform complex human activities involving multiple system interactions, an executable User task on the Trisotech platform is a single task form and involves no system interactions. Again, a complex human task would be modeled as a process invoked by a Call Activity.

Black-Box Pools and Messages

In Method and Style we use message flows from and back to a black-box pool to represent the process request and final status response. Executable BPMN doesn’t have a clear implementation of black-box pools. Instead we use data inputs and data outputs. On the Trisotech platform, processes are deployed as REST services, so a process request is the same as a call to the Execute operation of that service. A data input represents the data provided in that request, and a data output represents data returned to the requesting client upon completion. The identity of the requesting client – the black-box pool in Method and Style – is immaterial. Any client with the appropriate bearer token can request the service.

I have previously described a mechanism in Trisotech by which a message flow from a black-box pool to a start event may substitute for a data input and a message flow to a catching message event can trigger process behavior. Outgoing message flows to a black-box pool are more of a problem, because unlike Method and Style, where these mean any communication to the external entity, a process cannot really send a “message” to an abstract entity. Instead, a Send task or throwing message event can be configured to send an email to a user. It’s not really the same thing, and in my own processes I tend to use a Service task from the Trisotech Connectors Library to send emails.

A Common Language Nevertheless

Despite these differences, which are not insignificant, BPMN remains a common language shared by modelers documenting processes and those automating them. Below you see two versions of a Vacation Request process. The first is based on Method and Style, with black-box pools and message flows, end state labels, and simple distinction between human and automated tasks.

The second is executable BPMN, with data inputs and outputs, data input validation, explicit dataflow, and distinction between Decision tasks, Service tasks (Email), and Script tasks. It looks a lot different, and the modeling skills needed to create it are different as well… but not out of reach for many business users. You can easily imagine how a Method and Style BPMN model could be used as business requirements for the Business Automation model. And that’s the whole point.



Standards-Based Low-Code Business Automation

Presented by
Bruce Silver of MethodandStyle.com

This webinar will explore the benefits of using visual model-driven standards to rapidly create business automations. In particular, we will explore how using BPMN for Automation differs from the traditional Method and Style way of using BPMN for Documentation. This innovative approach uses the Friendly Enough Expression Language (FEEL) open standard as the Low-Code glue that brings it all together for both coders and non-technical users.


Object Management Group meet and greet
BPMN™ in Action!

Presented by

Denis Gagne, CEO & CTO

Object Management Group® (OMG®) cordially invites Business Process Modeling practitioners and interested parties to attend this free innovative and informative meet and greet. Refreshments will be served while leading software vendors demonstrate live the iterative elaboration and interchange of a Business Process Model and Notation™ (BPMN) model using their respective tools that implement the BPMN standard. Get answers to your BPMN questions in a relaxed social setting. This is the perfect opportunity to come and meet some of the creators and innovators supporting this most widely adopted business process standard.


Bruce Silver's blog post - Call Public APIs Without Programming
Bruce Silver
Blog

Call Public APIs Without Programming

By Bruce Silver

Read Time: 2 Minutes

Interest in Business Automation is being accelerated by the thousands of public REST APIs available from countless service providers.

Most are available on a Freemium basis – free low-volume use for development, with a modest monthly fee for production use. The Trisotech Low-Code Business Automation platform lets you incorporate these services in your BPMN models without programming. You just need to configure a connection to them using the model’s Operation Library. With many public APIs, that configuration requires simply importing an OpenAPI file from the service provider. OpenAPI – sometimes called Swagger – is a file format standard for REST APIs. The problem is that most public APIs don’t provide one. Instead they provide documentation that you can use to configure the Operation Library entry yourself. If you’re not a developer, that sounds daunting, but it’s really not that hard. This post will show you how to do it.

A REST API is organized as an interface containing a set of operations. The interface specifies a base address, or server URL, and some means of authorization, such as an API key. Each operation performs a specific function and is addressed by a path appended to the base address; the combined URL is the operation’s endpoint. To invoke the operation, a client app sends an http message containing the required information to that URL. That information includes:

  • The http method: POST, GET, PUT, PATCH, or DELETE, indicating whether the operation is reading, writing, modifying, or deleting data.
  • The authorization credential
  • Input parameters for the operation, required and optional, typically specified as json

Upon receipt of the message, the service is executed and returns some response message, which includes:

  • A standard http response code. A code of 200 indicates success, 404 means the resource was not found, 401 not authorized, 400 a bad request, 500 an internal server error, etc.
  • Output values for the operation, separately for each response code, again specified as json

Compared to the earlier generation of SOAP-based web services, REST APIs are simple and standardized. This is their great attraction!

In my Business Automation training, we make use of a real-time stock quote service from Rapidapi.com, a popular aggregator of public APIs. The service we use is called YH Finance, a clone of the old Yahoo Finance API. The documentation page looks like this:

The interface is huge, with operations collected in groups. The one we want is in the group market, with the path /market/v2/get-quotes. Selecting that from the panel on the left, we see the operation documentation in the center panel, which specifies the parameters of the operation, and where in the message they are placed: in the message header, message body, or query portion of the URL. Here two parameters are required in the http message header: X-RapidAPI-Host is the server URL, yh-finance.p.rapidapi.com. X-RapidAPI-Key is the requester’s personal authorization credential. Two more parameters are required in the query: region, enumerated text, and symbols, a comma-delimited string containing stock tickers.

This documentation is all we need to create the input side of the Operation Library entry. Manually add an interface YH Finance and an operation Get Quotes. In the interface properties enter the Server URL and the authorization type. In the operation properties enter the method, path, and under Inputs the required parameters except for the API key, which is assigned to an individual and entered separately when the process containing the service invocation is compiled and published. Clicking the pencil for each input lets you specify its datatype, a FEEL type you must create from the documentation. It’s actually quite easy!

Configuring the operation output is slightly trickier. Notice in the API documentation the button Test Endpoint in the center panel. Clicking that executes the operation with the default parameters shown, in this case a string listing three stocks.

The documentation page then shows the resulting output, as you see here: an outer element quoteResponse with two components, an array of result – one per stock – and error. Expanding result, we see it contains 97 components – wow, that’s quite a lot – of which we only need three: quoteType, bid, and ask. But we do not need to construct a FEEL type with all 97 components. As long as we respect the basic structure, we can omit the 94 result components we don’t need! So we can create FEEL type tTradeQuote, a structure with enclosing element quoteResponse containing a collection of result plus error, and the type of result contains just the three components we need. When the service is invoked, the json response message is mapped automatically to the FEEL tTradeQuote structure.
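For comparison, here is a rough sketch of the equivalent direct call in Python with the requests library, which is essentially what the configured Service task does for you at runtime. The header, query, and response field names are the ones from the documentation described above; the API key is a placeholder, and I am assuming the operation uses the GET method, as a read operation normally does.

```python
# Hedged sketch of calling the YH Finance get-quotes operation directly.
# Header and query parameter names come from the RapidAPI documentation
# discussed above; the key below is a placeholder.
import requests

url = "https://yh-finance.p.rapidapi.com/market/v2/get-quotes"
headers = {
    "X-RapidAPI-Host": "yh-finance.p.rapidapi.com",
    "X-RapidAPI-Key": "YOUR_API_KEY",  # personal credential, entered at publish time
}
params = {"region": "US", "symbols": "AAPL,MSFT,GOOG"}

resp = requests.get(url, headers=headers, params=params)
resp.raise_for_status()  # 200 means success; 4xx/5xx raise an exception here

# The response structure is quoteResponse -> result (one per stock) + error.
# Of the ~97 components in each result, we only need three.
for quote in resp.json()["quoteResponse"]["result"]:
    print(quote.get("quoteType"), quote.get("bid"), quote.get("ask"))
```

The Operation Library entry, the partial FEEL type tTradeQuote, and the automatic mapping replace all of this hand-written plumbing.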

To incorporate this operation in your Business Automation model, simply click the gear icon on a Service task and bind the task to the Operation Library entry. The service inputs – symbols, region, and x-rapidapi-host – become the task inputs, and the service output quotes becomes the task output. Then, using the Low-Code data mapping explained in a previous post, you just need to map data objects in the process to the task inputs and output. When you Cloud Publish the process, you will need to enter your API key. And that’s it!

So just because your API provider doesn’t offer an OpenAPI file, don’t think it’s inaccessible to Low-Code Business Automation. It’s actually quite easy.


