
The Case-Decision-Process Tripod Methodology

A Model-Driven Approach to Business Fulfillment

By Stefaan Lambrecht

Read Time: 5 Minutes

Business fulfillment — the process of delivering on customer requests through coordinated business activities — lies at the core of every organization’s value creation. Whether the request concerns an insurance claim, a loan, an HR recruitment request, or a customer order, each represents a commitment that must be managed from initiation through delivery and closure.

Traditional IT development and ERP systems have long attempted to automate fulfillment processes through rigid workflows and predefined business logic. However, in a modern service economy characterized by variability, human judgment, and dynamic business rules, these traditional approaches often prove inflexible, slow to change, and costly to maintain.

The Case-Decision-Process (CDP) Tripod methodology addresses this challenge through a model-driven architecture based on three complementary open standards: CMMN for case management, DMN for decision modeling, and BPMN for process modeling.

Together, these standards provide a unified framework for modeling, simulating, and automating complex business fulfillment lifecycles — enabling agility, transparency, and consistent decision-making across the enterprise.

Business Fulfillment as a Case

In the CDP methodology, every customer request is represented as a case, a central construct that encapsulates all data, tasks, decisions, and processes required to fulfill that request.

A case is identified by a unique identifier and acts as a container for the complete lifecycle of the business object it handles — for example, a claim, a loan, a service ticket, or an order. The case structure provides an end-to-end view, grouping all related data, tasks, and decisions, while enabling full traceability across the lifecycle.

This lifecycle is organized into stages, which define the logical phases of business fulfillment, such as eligibility, acceptance, solution development, solution delivery, and closing (illustrated in the insurance example later in this post).

Unlike traditional process workflows, these stages are event-driven, not sequential. Several stages may be active concurrently, and their activation or completion depends dynamically on data and context, evaluated by decision logic modeled in DMN.

The Tripod Architecture

The Case-Decision-Process methodology rests on three technical pillars: CMMN, DMN, and BPMN. Each standard contributes a distinct capability, and together they form a cohesive architecture for intelligent business fulfillment.

Case Handling with CMMN

The Case Management Model and Notation (CMMN) standard provides a declarative framework for modeling and executing event-driven, knowledge-intensive business services.

Unlike procedural process models, CMMN does not define strict control flows. Instead, it defines a case plan — a set of possible tasks, stages, and milestones that can be triggered dynamically based on conditions or external events.

CMMN is ideally suited to model business fulfillment because it is declarative and event-driven: tasks, stages, and milestones are activated dynamically from case data and events rather than by a fixed control flow, while knowledge workers remain in control where judgment is required.

Through CMMN, organizations gain the ability to simulate, test, and automate complex fulfillment lifecycles while preserving flexibility and human oversight.

Decision Logic with DMN

The Decision Model and Notation (DMN) standard provides the intelligence behind the case.

It separates decision logic from process control, allowing organizations to model and execute decisions independently.

In a CDP-based fulfillment model, a central DMN decision service continuously evaluates the case data to determine which stages may be activated or completed, which tasks are required, and whether the case may proceed or must be terminated.

For example, in an insurance claim scenario, a DMN model might evaluate whether the policy is valid, whether the damage is covered, and whether the customer’s identity has been confirmed.

If any condition fails, the DMN model can immediately terminate the case — allowing the organization to fail fast and minimize unnecessary effort.
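To make this concrete, here is a minimal FEEL sketch of such a fail-fast eligibility decision. The input names Policy Status, Damage Covered, and Identity Confirmed are invented for illustration, not taken from an actual model:

if Policy Status = "Active" and Damage Covered and Identity Confirmed then "Proceed" else "Terminate Case"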

By externalizing business rules in DMN, decision transparency and auditability are achieved, and business users can modify policies without altering core process logic.

Process Flows with BPMN

While CMMN manages the overall case lifecycle, and DMN governs decisions, BPMN (Business Process Model and Notation) defines structured, repeatable process flows that can be embedded as process tasks within a CMMN model.

Typical examples include structured transactional flows such as the payment authorization workflow in the insurance example below.

BPMN processes can be triggered conditionally by events in the case and can return results that update the case data.

This hybrid modeling approach — integrating CMMN, DMN, and BPMN — enables both structured automation and adaptive case handling within a single execution environment.

Example: Automotive Insurance Claim

An automotive insurance claim illustrates the methodology in action:

  1. Case Creation — A customer submits a claim; a case is instantiated with a unique identifier.
  2. Eligibility Stage — A DMN model evaluates coverage, policy validity, and customer identity. If any condition fails, the case is closed immediately.
  3. Acceptance Stage — The insurer assesses whether conditions justify proceeding with claim handling.
  4. Solution Development Stage — If accepted, either an off-the-shelf solution (standard payout) or a custom solution (repair arrangement) is developed.
  5. Solution Delivery Stage — The agreed solution is executed, possibly through BPMN workflows (e.g., payment authorization).
  6. Closing Stage — Administrative and accounting actions finalize the case.

Throughout the process, CMMN orchestrates case states, DMN governs decision logic, and BPMN executes transactional workflows — forming a cohesive, event-driven fulfillment cycle.

Comparison to Traditional IT and ERP Approaches

By decoupling case management, decision logic, and process orchestration, the CDP methodology supports faster adaptation, greater transparency, and lower maintenance costs than traditional IT development or ERP implementations.

Benefits and Outcomes

Organizations adopting the Case-Decision-Process methodology typically achieve greater business agility, end-to-end transparency and auditability, more consistent decision-making, and lower maintenance costs than with traditional approaches.

About the Methodology

The Case-Decision-Process Tripod methodology provides an integrated framework for documenting, modeling, simulating, and automating business fulfillment cases using international standards. It can be applied across industries and domains — from insurance and banking to HR, logistics, and manufacturing — to modernize fulfillment operations and enable true digital agility.

The Case-Decision-Process Tripod offers a model-driven, standards-based alternative to traditional IT system design.

By uniting CMMN, DMN, and BPMN, it provides an executable architecture that reflects how real-world business fulfillment actually operates — dynamically, contextually, and collaboratively.

This methodology enables organizations to bridge the gap between business intent and system execution, delivering on customer expectations faster and more intelligently — without the rigidity and cost of legacy approaches.

Follow Stefaan Lambrecht on his website.


Turn Your Legacy ERP into a Business Execution Platform

Using Decision/Case/Process Modeling & Execution

By Stefaan Lambrecht

Read Time: 5 Minutes

From “Best Practice” to “Adaptive Practice”

Traditional ERP systems were designed around stability and standardization, not adaptability. Vendors like SAP, Oracle, and JD Edwards codified “best practices” — in reality, average practices that could be applied across industries.

While this provided efficiency and compliance, it came at the cost of differentiation. Any company trying to do something innovative or non-standard hit the walls of ERP rigidity — requiring custom IT code, long projects, and high development and maintenance costs (the legacy trap).

The future ERP shouldn’t impose “best practices”; it should enable continuous adaptation to how your business actually operates — and evolves.

Model-Driven Execution: The CMMN–BPMN–DMN Tripod

The proposed shift to CMMN (Case Management Model and Notation), BPMN (Business Process Model and Notation), and DMN (Decision Model and Notation) is profound.

Together, they create a semantic model of the business — not just documentation, but executable logic that an engine can run directly.

And most importantly, the models are developed by the business, not IT. The business owners and experts return to the steering wheel of their own business organization.

This is the cornerstone of a “business-defined, engine-driven architecture” — where models become the application itself.

Closing the Business–IT Gap

In traditional ERP, business analysts describe needs → IT interprets → developers code → business tests → IT fixes → repeat.

In model-driven architectures, the models are the requirements — and also the implementation.

That means the requirements cannot drift from the implementation: changing the model changes the system, and business and IT collaborate on a single shared artifact.

Composable Capability Architecture

A next-gen ERP wouldn’t be a monolith. Instead, it would be a composable set of capabilities: case management, process orchestration, decision services, and shared data.

The ERP becomes an application fabric, where processes, rules, and data are dynamically orchestrated rather than hard-coded.

This allows an organization to tailor specific processes (say, a unique claims process in insurance or a niche procurement flow in manufacturing) without breaking the overall system.

This is a paradigm shift: from code-first to model-first, or even model-only.

Implications for Governance, Transparency, and Agility

Because the models are simultaneously the documentation and the implementation, they are inherently transparent and auditable, changes can be governed and versioned like any business asset, and adaptation happens at the speed of modeling rather than coding.

Integration with AI and Data Layers

In a modern context, AI and analytics could plug into this architecture seamlessly, informing decisions with predictions and continuously refining the models from operational data.

In this way, the ERP becomes a living system, learning and adapting in real-time.

The Result – A Business Execution Platform

The next generation of ERP won’t be “enterprise resource planning” in the old sense — it will be a Business Execution Platform: a system where business-owned models of cases, processes, and decisions execute directly.

It’s a move from hard-coded best practices → to configurable, executable models → to self-evolving intelligent systems.

Follow Stefaan Lambrecht on his website.


DMN: Beyond Decision Tables

By Bruce Silver

Read Time: 4 Minutes

If you are using DMN, of course you are using decision tables. Every decision management platform, DMN or not, has some form of decision table.

Business users like them because the meaning of the decision logic is obvious from the table structure without any previous knowledge about how it works. That’s because diagrams and tables are much more intuitive to non-programmers – i.e., subject matter experts – than script or program code. There is still a bit of script in the table cells – in our case, FEEL – but the expressions are usually simple.

The creators of DMN a decade ago took this basic fact and carried it a step further: in order to make the full power of DMN accessible to subject matter experts, the language would include additional table structures with simple FEEL inside the table cells, instead of requiring complex FEEL expressions with functions nested inside other functions. They called those table structures boxed expressions. Today there are nine of them, including literal expression, which just means a FEEL expression in text form, and decision table.

From the outset, boxed expressions were put in DMN with the express intent of making the full power of the language more accessible to non-programmers. Today that may seem strange, since very few DMN tools outside of Trisotech support them, other than decision tables. Why is that? Two reasons:

1. Most DMN tools are not designed to expose executable logic directly to subject matter experts. Those tool vendors assume subject matter experts merely want to create business requirements in the form of DRDs, and leave the implementation to programmers. That’s antiquated thinking.

2. Implementing boxed expressions is challenging for the tool vendor. The basic structure of most boxed expressions is a two-column table in which the first column is a variable name and the second column is its value expression. But that value expression does not have to be a literal expression; it could be any boxed expression at all, for example, a decision table. That means the second column must be able to expand to a nested table, and there is no limit to the degree of nesting. From a graphics perspective, that’s not easy!

The fact is, except for decision table and one other boxed expression type, boxed expressions are simply graphical conveniences. The same logic could be written as a FEEL literal expression, a more complex one. But non-programmers have an easier time creating complex logic using simple FEEL inside the boxed expression cells than in constructing complex FEEL expressions.

Let’s look at some examples.

Invocation

An Invocation boxed expression represents a call by a decision to a logic function, typically a user-defined function such as a BKM or decision service, but in principle also a FEEL built-in function. Decision logic that is reused is typically packaged as a function with defined parameters. The Invocation maps values to each parameter and receives in return the function result.

For example, a commonly reused function in mortgage lending is the Loan Amortization Formula, in which the monthly payment is based on three parameters: p, the loan amount; r, the interest rate as a decimal value; and n, the number of months in the term of the loan. It’s just an arithmetic expression, but it’s hard to remember, so it is best encapsulated as a BKM.

A decision can get the monthly payment for any mortgage loan by supplying values of p, r, and n through an invocation. With formulas like these, it’s a good idea to explain the parameters in the Description field of the BKM Details.

The Invocation boxed expression is a two-column table in which the first column lists the parameters p, r, and n, and the second column lists expressions of decision variables for each parameter.

Here the decision Loan Payment is defined as an invocation of the BKM Loan Amortization Formula. The BKM parameters and their datatypes are listed in the first column. The second column provides their value as simple FEEL expressions of decision variables. Because the decision variable Loan rate pct is expressed as a percent, it must be divided by 100 when passed to parameter r.

We could just as easily have written this invocation as a slightly more complicated FEEL expression:

Loan Amortization Formula(Loan amount, Loan rate pct/100, 360)

With this so-called literal invocation, the names of the parameters do not appear, but the order of the supplied value expressions must match the order of the parameters in the BKM definition. With FEEL built-in functions, it is much more common to use literal invocation, but they can be called using the Invocation boxed expression just as well.
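For instance, calling a built-in literally looks like any other function call. This hypothetical expression uses the standard FEEL built-in decimal() to round the decimal rate to four places:

decimal(Loan rate pct / 100, 4)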

Context

Similar in appearance to Invocation, the Context boxed expression is a two-column table in which each row, called a context entry, contains a name in the first column and a value expression in the second. Again, the second column could be any boxed expression type, including another context, so graphically these tables can be quite complex. The Context boxed expression really should be viewed as two separate boxed expression types, depending on whether or not the final row of the table, called the final result box, is left empty.

Context with Non-Empty Final Result Box

A Context with an expression in the final result box is the one boxed expression type besides decision table that has no literal expression equivalent. It is probably the most useful of the lesser-known boxed expressions, because it allows you to break down the complex logic of a decision into smaller pieces using local variables, i.e. names used only within the scope of the decision. Each context entry names a local variable, and the expressions of any subsequent context entry may refer to those names, as may the final result box expression.

Here is an example from the DMN training:

Total Price = Price after discount + Tax + Shipping cost

The logic is moderately complex, but we really just care about the Total Price, not the details, so we encapsulate the details in local variables, the context entries, whose values are not returned by the decision. Note the expression column for Shipping cost is a decision table, not a simple FEEL expression. The final result references the context entries, a much simpler expression.

Context with Empty Final Result Box

When the final result box is left empty, the Context boxed expression creates a structured value, in which each context entry represents a component of that data structure. Building on the previous example, suppose instead of just the Total Price, a number, we want to create an Invoice, a data structure of type tInvoice containing the components shown below:

The Context boxed expression for this is nearly identical to the previous one, except now Total price is a context entry and the final result box is empty.

Just to reinforce the point, the Context boxed expression is a simpler way to define logic that could alternatively be done using a literal expression. A FEEL context literal uses a JSON-like syntax to create data structures, which are comma-separated expressions enclosed in curly braces. For the example above, it would look like this:

{Price before discount: Quantity*Unit Price, Price after discount: if Quantity>6 then 0.9*Price before discount else Price before discount, Tax: if is Taxable=true then 0.09*Price after discount else 0, Shipping cost: if Shipping Type="Standard" and Price after discount<100 then 9.95 else if Shipping Type="Standard" and Price after discount>=100 then 0 else 17.95, Total price: Price after discount + Tax + Shipping cost}

Function Definition

A Function Definition boxed expression is only used in a context entry. It represents a user-defined function, just like a BKM, except it is used only within the context, i.e., it can be called only by a subsequent context entry or final result box, not from outside.

Referring back to the Loan Amortization Formula, we could make that a Function Definition inside a context if we weren’t concerned about reuse. It simplifies the final result without using a BKM. This boxed expression has the function name in the first column and the function definition — including comma-separated parameters enclosed in parentheses — in the second column.
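As a sketch, here is how that pattern might look written as a FEEL context literal, using one common form of the amortization formula and assuming r is the annual rate as a decimal and n the term in months:

{payment: function(p, r, n) (p*r/12)/(1 - (1 + r/12)**-n), Monthly Payment: payment(Loan amount, Loan rate pct/100, 360)}

Here payment is visible only to subsequent context entries and the final result box, not outside the decision.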

Relation and List

A Relation boxed expression defines a FEEL table, a list of identical structures. The column names represent the components of the structure. Relations are often populated with literal values but the table cells may be any FEEL expression. On the Trisotech platform a Relation can be populated by uploading from Excel, a real convenience.

One common use for a Relation containing literal values is a static lookup table modeled as a zero-input decision, i.e., one having no incoming information requirements. For example, given the table below, the filter expression

Loan Table[pointsPct < 1.0].lenderName

returns a list of mortgage lenders who charge less than 1% points on the loan.

The List boxed expression is rarely used. It is simply a list of FEEL expressions arranged in a single column, such as shown below:

If the list contains literal values, it may be uploaded from Excel. One reason it is rarely used is the equivalent literal expression, a comma-separated list enclosed in square brackets, is usually easier to create:

[in1, in2, in3, upper case(in1) + " " + lower case(in2) + " " + in3]

Conditional

The Conditional boxed expression is a graphical representation of the FEEL if..then..else operator. It simply separates the if, then, and else clauses into separate rows of the table.

The expressions in the right-hand column are not restricted to literal expressions. They could, in fact, be another Conditional expression, implementing an “else if” clause. In that case, the boxed expression may be easier to read than the equivalent literal expression.
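For example, here is a hypothetical three-way credit decision written as the equivalent literal expression; in the boxed form, the else row would hold a nested Conditional implementing the "else if":

if Credit Score >= 700 then "Approve" else if Credit Score >= 620 then "Refer" else "Decline"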

Iterator

For..in..return

The Iterator boxed expression is a graphical representation of the FEEL for..in..return or some/every..in..satisfies operators. The syntax of for..in..return is

for [name of dummy variable] in [some list] return [expression of dummy variable and other inputs]

Here the dummy variable stands for a single item in the list, and the operator returns a list of values where the return expression acts on that item. For example, returning to our list of mortgage products, suppose we want to generate a list of formatted monthly payment strings for each product, based on a specific requested loan amount.

Implementing Formatted Monthly Payments as a literal expression may be difficult for some users to follow:

for x in Loan Table 30 return string("$%,.2f", payment(Requested amount*(1+x.pointsPct/100) + x.fees, x.rate/100, x.term))

Using the Iterator boxed expression for Formatted Monthly Payments, we can write

Here the return expression uses the Invocation operator twice, first to call the FEEL built-in function string(), which allows a formatting mask, and second to call the BKM payment(), which is the Loan Amortization Formula. With complex return expressions, the Iterator boxed expression often provides more readable logic.

This is just one form of iteration, called iteration over item value. When the in expression is a list of integers, the dummy variable represents a position in the list, and iteration is over item position. The boxed expression works in exactly the same way.
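For example, assuming the same Loan Table as earlier, iteration over item position could extract the lender names row by row, equivalent here to iterating over the rows directly:

for i in 1..count(Loan Table) return Loan Table[i].lenderName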

Some/every..in..satisfies

You can also use this boxed expression to test membership in a list.

some [dummy variable] in [list] satisfies [boolean expression]

returns true if any member of a list satisfies some requirement, and

every [dummy variable] in [list] satisfies [boolean expression]

returns true only if every member of the list satisfies the requirement. The Iterator boxed expression works in exactly the same way when some or every is entered in the left column of the first row.
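For example, with the Loan Table used earlier (the threshold value is hypothetical):

some x in Loan Table satisfies x.pointsPct = 0

returns true if any product charges no points, and

every x in Loan Table satisfies x.rate < 8

returns true only if every product’s rate is below 8.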

Filter

An expression in square brackets immediately following a list expression models a filter of the list. When the bracketed expression is a Boolean, the filter selects list items for which the expression is true. When the bracketed expression is an integer, the filter selects the list item at that integer position.

The Filter boxed expression puts the list in the first row and the filter expression, enclosed in square brackets, in the second row. For example, instead of the

literal expression [1, 3, 7, 11][item > 5]

you can use the Filter boxed expression

Here the list to be filtered is a list of literal values, and the filter expression is a Boolean, returning the list [7, 11].

I personally do not use this boxed expression much, as I am usually using filters with tables, following the filter expression with .[columnName], or extracting an item from a singleton list by appending the filter [1], and you cannot do either of those with this boxed expression.

The Bottom Line

If your use of DMN stops at DRDs and decision tables, you owe it to yourself to go further. The creators of DMN designed these boxed expressions for you, since breaking complex logic apart using standardized table formats makes it easier to create and easier to understand than long, complex FEEL expressions. It’s really not so hard. If you want to learn more about how to do it, check out the book DMN Method and Style 3rd edition or our DMN Method and Style training. You’ll be glad you did!

Follow Bruce Silver on Method & Style.


DMN List and Table Basics

By Bruce Silver

Read Time: 5 Minutes

Beginners in DMN often define a separate variable for every simple value in their model, a string, for example, or a number. But it’s often far better to group related values within a single variable, and the FEEL language in DMN has multiple ways to do that: structures, lists, and tables.

Defining DMN variables as structures, lists, and tables rather than exclusively simple types usually makes your models simpler, more powerful, and easier to maintain.

Datatypes

Trisotech Decision Modeler makes it easy to define structured datatypes. You simply provide a name and type for each component, as you see here:

There are many ways to create a list. The simplest is the list operator, square brackets enclosing a comma-separated list of expressions, such as [1,2,3]. The datatype of a list variable is called a collection, and best practice is to name the type Collection of [item type]. For example, the datatype of a table of IncomeItems with columns IncomeType and MonthlyAmount would be specified like this:
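While the type itself is defined graphically, an instance of such a collection can also be written as a FEEL literal, a list of contexts. The values here are hypothetical:

[{IncomeType: "Salary", MonthlyAmount: 5000}, {IncomeType: "Rental", MonthlyAmount: 1200}]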

List Functions

FEEL provides a wide variety of built-in functions that operate on lists and tables. Some are specific to lists of numbers and date/times, but most apply to any list item type. Examples are shown in the table below:
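For instance, a few of the most commonly used ones, all standard FEEL built-ins, evaluated on literal lists:

sum([1, 2, 3]) = 6
count([1, 2, 3]) = 3
max([1, 5, 3]) = 5
distinct values([1, 2, 2, 3]) = [1, 2, 3]
flatten([[1, 2], [3]]) = [1, 2, 3]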

Filters

To select an item from a list or table, FEEL provides the filter operator, modeled as square brackets immediately following the list or table name, containing a filter expression, either an integer or a Boolean. If an integer, the filter returns the item in that position. For example, myList[1] returns the first item in myList. A Boolean expression returns all items for which the expression is true. For example, Loan Table[ratePct<4.5] returns the collection of all Loan Table items for which the component ratePct is less than 4.5.

A Boolean filter always returns a collection, even if you know it can return at most a single item. If each Customer has a unique id, Customer[id=123456] will return a list containing one Customer, and Customer[id=123456].Name returns a list containing one Name. A list containing one item is NOT the same as the item on its own. It is best practice, then, to append the filter [1] to extract the item value, making the returned value type Text rather than Collection of Text: Customer[id=123456].Name[1].

It is quite common that a filtered list or table is the argument of a list function, for example, count(Loan Table[ratePct<4.5]) returns the count of rows of Loan Table for which ratePct is less than 4.5.

If a Boolean filter expression finds no items for which the expression is true, the returned value is the empty list [], and if you try to extract a value from it you get null. For example, if the table Customer has no entry with id=123456, then Customer[id=123456] is [] and Customer[id=123456].Name[1] is null. Because of situations like this, DMN recently introduced B-FEEL, a dialect of FEEL that better handles null values, and you should be using B-FEEL in your models. Without B-FEEL, concatenating a null string value with other text returns null; with B-FEEL the null value is replaced by the empty string “”.
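For example, assuming table Customer has no row with this hypothetical id:

"Dear " + Customer[id=999999].Name[1]

returns null in standard FEEL, but "Dear " under B-FEEL, which substitutes the empty string for the null.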

Iteration

A powerful feature of FEEL is the ability to iterate some expression over all items in a list or table. The syntax is

for [range variable] in [list name] return [output expression]

Here range variable is a name you select to stand for a single item value in the list. For example, if LoanTable is a list of mortgage loan products specified by lenderName, rate, pointsPct, fees, and term, we can iterate the amortization formula over each loan product to get the monthly payment for each loan product:

for x in LoanTable return payment(RequestedAmt*(1+x.pointsPct/100), x.rate/100, x.term)

Here x is the range variable, meaning a single row of LoanTable, and payment is a BKM containing the amortization formula based on the loan amount, loan rate, and term to get the monthly mortgage payment. To simplify entry of this expression, Trisotech provides an iterator boxed expression.

In addition to this iteration over item value, FEEL provides iteration over item position, with the syntax

for [range variable] in [integer range] return [output expression]

For example,

for i in 1..10 return i*i

returns a list of the squares of the integers from 1 to 10.

Testing Membership in a List

FEEL provides a number of ways to check whether a value is contained in some list.

some x in [list expression] satisfies [Boolean expression]

returns true if any list item satisfies the Boolean test, and

every x in [list expression] satisfies [Boolean expression]

returns true only if all list items satisfy the test.

The in operator is another way of testing membership in a list:

[item value] in [list expression]

returns true if the value is in the list.

The list contains() function does the same thing:

list contains([list expression], [value])

And you can also use the count() function on a filter, like this:

count([filtered list expression])>0
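To see that these four tests agree, here is the same membership check, the value 2 in a literal list, written each way; all four return true:

some x in [1, 2, 3] satisfies x = 2
2 in [1, 2, 3]
list contains([1, 2, 3], 2)
count([1, 2, 3][item = 2]) > 0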

Set Operations

The membership tests described above check whether a single value is contained in a list, but sometimes we have two lists and want to know whether some or all items in listA are contained in listB. Sometimes we instead want to compare setA to setB, the deduplicated, unordered versions of the lists. If necessary, we can use the function distinct values() to remove duplicates from the lists.

Intersection

The lists listA and listB intersect if any value is common between them. To test intersection, we can use the expression

some x in listA satisfies list contains(listB, x)

Containment

To test whether all items in listA are contained in listB we can use the every operator:

every x in listA satisfies list contains(listB, x)

Identity

The simple expression listA=listB returns true only if the lists are identical at every position. Testing whether the deduplicated, unordered sets are identical is a little trickier. We can use the FEEL union() function to concatenate and deduplicate the lists, in combination with the every operator:

every x in union(listA, listB) satisfies (list contains(listA, x) and list contains(listB,x))

Sorting a List

We can sort a list using the FEEL sort() function, which is unusual in that the second parameter of the function is itself a function. Usually it is defined inline as an anonymous function, using the keyword function with two range variables standing for any two items in the list, and a Boolean expression of those range variables. The Boolean expression determines which range variable value precedes the other in the sorted list. For example,

sort(LoanTable, function(x,y) x.rate<y.rate)

sorts the list LoanTable in increasing value of the loan rate. Here the range variables x and y stand for two rows of LoanTable, and row x precedes row y in the sorted list if its rate value is lower.

Replacing a List Item

Finally, we can replace a list item with another using Trisotech’s list replace() function, another one that takes a function as a parameter. The syntax is

list replace([list], [match function], [newItem])

The match function is a Boolean test involving a range variable standing for some item in the original list. For example, EmployeeTable lists each employee’s available vacation days. When the employee takes a vacation, the days are deducted from the previous AvailableDays to generate an updated row NewRecord. We use the match function to select the employee matching the EmployeeId in another input, VacationRequest.

Now we want to replace the employee record in that table with NewRecord:

list replace(EmployeeTable, function(x, newItem) x.employeeId = VacationRequest.employeeId, NewRecord)

Here the range variable x stands for some row of EmployeeTable, and we are going to replace the row matching the employeeId in VacationRequest. So in this case we are replacing a table row with another table row, but we could use list replace() to replace a value in a simple list as well.

The Bottom Line

In real-world decision logic, your source data is often specified using lists and tables. Rather than extracting individual items from your input data and processing one item at a time, it is generally easier and faster to process all the items at once using lists and tables. FEEL has a wide variety of functions and operators that make this straightforward, and the Automation features of Trisotech Decision Modeler let you execute them directly.

Want a deeper dive into the use of lists and tables? Check out our DMN Method and Style training.

Follow Bruce Silver on Method & Style.


BPM+ : The Enduring Value of Business Automation Standards

By Bruce Silver

Read Time: 5 Minutes

While it no longer has the marketing cachet it once enjoyed, the fact that the foundations of the Trisotech platform are the three business automation standards from the Object Management Group – BPMN, CMMN, and DMN – remains a key advantage, with enduring benefits to users. In this post we’ll discuss what these standards do, how we got them, and their enduring business benefits.

In the Beginning

In the beginning, let’s say the year 2000, there were no standards, not for business automation, not for almost anything in tech, really. The web changed all that. I first heard about BPMN in 2003. I had been working in something called workflow, automating and tracking the flow of human tasks, for a dozen years already, first as an engineering manager and later as an industry analyst. There were many products, all completely proprietary. But suddenly there was this new thing called BPM that promised to change all that.

Previously, in addition to workflow, there had been a completely separate technology of enterprise application integration or EAI. It was also a process automation technology but based on a message bus, system-to-system only, no human tasks, and again all proprietary technology. It was a mess.

Now a new technology called web services offered the possibility to unify them based on web standards. In the earlier dotcom days, the web just provided a way to advertise your business, maybe with some rudimentary e-commerce. Now people realized that the standard protocols and XML data of the web could be used to standardize API calls in the new Service-Oriented Architecture (SOA). A Silicon Valley startup called Intalio proposed using web services to define a new process automation language called BPML that unified workflow and application integration in this architecture. But they went further: Surprisingly, solutions using this language would be defined graphically, using a diagram notation called BPMN. The goal was engaging business users in the transition to SOA, and BPMN offered the hope of bridging the business-IT divide that had made automation projects fail in the past.

Enterprise application software vendors liked it too. Functionality that previously was embedded in their monolithic applications could be split off into microservices that could be invoked as needed and sequenced using the protocols of the world wide web. This was a game-changer. Maybe you could make human workflow tasks into services as well.

As it evolved, BPMN 1.0 took on the look of swimlane flowcharts, which were already used widely by business users, although without standard semantics for the shapes and symbols. Now with each shape bound to the semantics of the execution language BPML, BPMN promised “What You Draw Is What You Execute.” Intalio was able to organize a consortium of over 200 software companies called BPMI.org in support of this new business automation standard.

But in fact, they were too early, since the core web services standard WSDL was not yet final, and in the end BPML did not work with it. So BPML was replaced by another language, BPEL, that did. While the graphical notation BPMN remained popular with business for process documentation and improvement, it was not a perfect fit with BPEL for automation. There were patterns you could draw in BPMN that did not map to BPEL. So no longer could it claim “what you draw is what you execute.” BPMI.org withered and ultimately was acquired by OMG, a true standards organization.

Even so, BPMN remained extremely popular with business users. They were not, in fact, looking to automate their processes, merely to document them for analysis and potential improvement. Still the application software vendors and SOA middleware vendors pushed for a new version of BPMN that would make the diagrams executable. But it was not until 2010 that BPMN 2.0 finally gave BPMN diagrams a proper execution language. I had been doing BPMN training on version 1.x since 2007, and somehow I wound up on the BPMN 2.0 task force, representing the interests of those business users doing descriptive modeling.

BPMN 2.0 was widely adopted by almost all process automation vendors. Later OMG developed two other business automation standards, CMMN for case modeling and DMN for decision modeling, all based on a similar set of principles. There are three standards because they cover different things. They were developed at different times for different reasons. So let me briefly describe what they do, their common principles, and why they remain important today.

BPMN

BPMN was the first. BPMN describes a process as a flowchart of actions leading from some defined triggering event to one of several possible end states. Each action typically represents either a human task or an automated service, for the first time unifying human workflow and EAI. The diagrams include both the normal “happy path” and branching paths followed by exceptions. It is easy to see the sequence of steps the process may follow, but every instance of the process must follow some path in the diagram. That is both its strength and its basic limitation.

BPMN remains very popular because the diagrams are intuitive. Users understand them in general terms without training, and what you draw is what you execute. BPMN can describe many real-world processes but not all of them.

There are actually two distinct communities of BPMN users. It is interesting that they interact hardly at all. One uses BPMN purely for process description. They have no interest in automation, just documentation and analysis. Those BPMN users simply want to document their current business processes with hope of improving them, since the diagrams, if made properly with something like Method and Style, more clearly express the process logic than a long text document. For them, the value of BPMN is it generally follows the look of traditional swimlane flowcharts.

The other community is interested in process automation, the actual implementation of new and improved processes. For them, the value of BPMN is what you draw is what you execute. It is what allows business teams to better specify process requirements and generally collaborate more closely with development teams. This reduces time to value and generally increases user acceptance.

As a provider of BPMN training, my interactions have been mostly with the first community. In a prior life as an industry analyst, they were exclusively with the second community. So I have a good sense of the expectations of both groups. And while I believe the descriptive modeling community is probably still larger, it is now declining, and here’s why.

In a descriptive modeling team, there are typically meetings and workshops where the facilitator asks: How does the process start? What happens next? And so forth. That’s good for capturing the happy path, the normal activity flow to a successful end state, which is the main interest. But BPMN diagrams also contain gateways and events that define exception paths, since, remember, every instance of the process must follow some path drawn in the diagram. In reality, however, you are never going to capture all the exceptions that way.

So over the past decade, a lot of process improvement efforts have shifted away from descriptive process modeling toward something called process mining: instrumenting key steps of the process as it runs, however they are achieved, and correlating the timestamps to create a picture of all the paths from start to end, including, importantly, that small fraction of instances that take much too long, cost way too much, and are responsible for complaints to customer service. Dealing with those cases leads to significant process improvement, and automating them requires accommodating them somehow.

Some process mining tools map their findings back to BPMN, but BPMN demands a reason, some gateway condition or event, to explain each divergence from the happy path. This points out the key weakness of BPMN as an automation language: the difficulty of accommodating all the weird exceptions that occur in real-world scenarios. Something more flexible is needed.

CMMN

CMMN was created to handle those processes that BPMN cannot. It focuses on a key difference from BPMN: work that takes an unpredictable path over its lifetime, emphasizing event-triggered behavior and ad-hoc decisions by knowledge workers.

A case in CMMN is not described by a flowchart, where you can clearly see the order of actions, but rather by a diagram of states, called stages, each specifying actions that MAY be performed or in some cases MUST be performed in order to complete some milestone or transition to some other state. That allows great flexibility, a better way than BPMN, in fact, to handle real-world scenarios in which it is impossible to enumerate all the possible sequences of actions needed.

This comes at a cost, however. While what you draw is still what you execute, what you draw is not easy to understand. CMMN diagrams are not flowcharts but state diagrams. They are not intuitive, even if you understand the shapes and symbols well. Each diagram shape goes through various lifecycle states (Available, Enabled, Active, Completed, or Terminated), and each change of state is represented by a standard event that can trigger state transitions in other shapes. So in the end the diagram, when fully expanded down to individual tasks, shows actions that MAY be performed or MUST be performed, and the events that trigger other shapes.

In practice, CMMN is almost always used in conjunction with BPMN, as the specific actions described, typically a mix of automated services and human interaction, are better modeled as BPMN processes. So typically in a business automation solution, CMMN is at the top level, where the choice of what actions to perform is flexible and event-driven. Many of the CMMN actions are modeled as process tasks, meaning enactment of a BPMN process. Organizing the solution in this way has the benefit of CMMN’s flexibility to handle the events and exceptions of real-world end-to-end scenarios, and BPMN’s business-understandable automation of the details of the required actions.

Above is an example from my book CMMN Method and Style, based on social care services in a European country. The stage Active Case contains 4 sub-stages without any entry conditions. That means they can be activated at any time. The triangle marker at the bottom means manually activated; without it they would activate automatically. And notice within each stage a mix of human tasks, with the head-and-shoulders icon, and process tasks, with the chevron icon. At the bottom are 4 more tasks, not in a sub-stage, that deal with administrative functions of the case. They can potentially occur at any time while Active Case is in the active state, but they have entry conditions (the white diamonds), meaning they must be triggered by events, either a user interaction or a timer. This diagram does not include any file item events (receipt or update of some document or data), which also trigger tasks and stage entry or exit. As you can see, the case logic is very flexible (almost anything could be started at any time) but at this level not too revealing as to the details of the actions. Those are detailed within the BPMN processes and tasks.

Here is another CMMN example that illustrates the flexibility needed for what seems like a simple thing like creating a user account in a financial institution. Yes, that is a straightforward BPMN process, but it lives inside a wider context, which is managing the user’s status as a client. A new client has to be initialized and set up (that’s another BPMN process), and then you can add one or more accounts as long as the client remains an Active User. You can deactivate the user (another process), which automatically terminates the Manage Active User stage (that’s the black diamond) and triggers the Manage Inactive User stage (that’s the link to the white diamond labeled exit, exit being the lifecycle event associated with the Terminated state of Manage Active User).

The only thing that happens in the Manage Inactive User stage is that you can reactivate the user, another process. And when that is complete, since there is nothing else in that stage, the stage is complete, meaning the link to the white diamond in Manage Active User labeled complete triggers activation of that stage. The hashtag marker on both stages means that after completion or termination they can be retriggered; without that marker they could not. And finally, the process task Delete user has no entry conditions, so it can be activated manually at any time; when complete, it terminates the case (the black diamond). So what seemed initially like a simple process, Add User Account, expands in a real-world application to a case handling the whole lifecycle of that client.

DMN

DMN describes not actions but business logic, understandable by non-programmers but deployable nevertheless as an executable service. DMN is a successor to business rule engine technology from the 1990s, better aligned with today’s microservice-oriented architecture. In DMN, a decision service takes data in and returns data out. It takes no other actions. Like BPMN and CMMN, design is largely graphical.

The diagram you see here, called a Decision Requirements Diagram, shows the dependencies of each rectangle, representing a decision, on supporting decisions and input data connected to it via incoming arrows called information requirements. The value expression of each decision can reference only those direct information requirements. Even if they are unable to model the value expressions, business users are able to create these diagrams, which serve as verifiable business requirements for the executable solution.

But DMN was designed to let non-programmers go further, all the way in fact to executable decision services. Instead of program code, DMN provides standard tabular formats called boxed expressions. These are what define the expressions that compute the value of each decision.

Decision tables, like the one shown above, are the most familiar boxed expression type, but there are many more, such as Contexts, like the one shown below, where rows of the table define local variables called context entries, used to simplify the expressions in subsequent rows. Note here that the value expression of a context entry, such as Loan amount here, can itself be a context or some other boxed expression.

The expressions of each context entry, the cells in gray here, use a low-code expression language called FEEL. Although FEEL is defined in the DMN spec, it has the potential to be used in BPMN and CMMN as well. More about that later.

The combination of DRDs, boxed expressions, and FEEL means business users can create directly executable decision logic themselves without programming. In BPMN and CMMN, graphical design is limited to an outline of the process or case logic, the part shown in the diagram. To make the model fully executable, those standards normally still require programming.

On the Trisotech platform, however, that’s no longer the case. When BPMN and CMMN were developed, they did not define an expression language, because they assumed, correctly at the time, that the details of system integration and user experience would require programming in a language like Java. But DMN was published 6 years later, when software tools were more advanced. DMN was intended to allow non-programmers to create executable decision services themselves using a combination of diagrams, tables, and FEEL, a standard low-code expression language. An expression language is not a full programming language; it does not define and update variables, but just computes values through formulas. For example, the Formulas ribbon in Excel represents an expression language.
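For example, this hypothetical FEEL formula reads much like an Excel formula: it computes a value from its inputs without defining or updating any variables:

if Order Amount > 1000 then Order Amount * 0.9 else Order Amount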

While adoption of FEEL as a standard has been slow, even within the decision management community, many business automation vendors have implemented their own low-code expression languages with the objective of further empowering business teams to collaborate with developers on solutions. Low-code expression languages should be seen today as the important other half of model-based business automation, along with diagrams composed of standardized shapes, semantics, and operational behavior. The goal is faster time-to-value, easier solution maintenance, and improved user acceptance through close collaboration of business and development teams.

Shared Principles

Three key principles underlie these OMG standards, which play a key role in the Trisotech platform today, and I want to say more about why they are important.

The first is that they are all model-based, meaning diagrams and tables, and those models are used both for outlining the desired solution, what have been called business requirements, and for creating the executable solution itself, going back to the original promise of What You Draw Is What You Execute.

The traditional method of text-based business requirements is a well-known cause of project failure. Instead, models allow subject matter experts in the business to create an outline of the solution in the form of diagrams and tables that can be verified for completeness and self-consistency. Moreover, the shapes and symbols used in the diagrams are defined in a spec, so the models look the same, and work the same, in any tool. And even better, the execution semantics of the shapes is also spelled out in a spec, so what you draw is actually what you execute. Engaging business users together with IT from the start of the project speeds time to value and increases user acceptance.

Second, the models have a standard XML file format, so they can be interchanged between tools without loss of fidelity. In practice, for example, that means business users and developers can use different tools and interchange models between them. I’ve been involved in engagements where the business wants to use modeling tool X, and IT, in a separate organization, insists on using tool Y for the runtime. That’s not great, but doable, since you can interchange models between the tools using the standard file format.

Third, OMG procedures ensure vendor-neutrality. Proprietary features cannot be part of the standard, and the IP is free to use. In practice this tends to result in open-source tools and lower software cost. It may not be widely appreciated but many, possibly the majority, of BPMN diagramming tools today stem from an open source project, and another open source project created a popular BPMN runtime. DMN is a similar story. Tool vendors were stumped by the challenge of parsing FEEL, where variable and function names may contain spaces, until Red Hat made their FEEL parser and DMN runtime open source. Implementing these standards is difficult, and open source software has been a big help in encouraging adoption.

Enduring Business Value

So where is the business value of these standards? What are the benefits to customers? In my view, these fall into 3 categories:

First, common terminology and understanding. I remember the days of workflow automation before BPMN. There was no common terminology. You learned a particular tool, and tools were for programmers. Now BPMN, CMMN, and DMN provide tool-independent understanding of processes, cases, and decision logic. There is much wider availability of information about these technologies, through books, web posts, and training, down to the details of modeling and automation. This in turn makes it easier to hire employees and engage service providers already conversant and experienced in the technology. It also lowers the re-learning cost if it becomes necessary to switch platform vendors.

Benefit #2 is faster time-to-value. Today this might be the most critical one for customers. Model-based solutions are simply faster to specify, faster to implement, and faster to adapt to changing requirements than traditional paradigms. Faster to specify because you can more easily engage the subject matter experts up front in requirements development and more quickly verify the requirements for completeness and self-consistency. Faster to implement because you are building on top of standard automation engines. The custom code is just for specific system integration and user experience details. And faster to adapt to changing requirements because often this involves only changing the model not the custom code.

So, for example, a healthcare provider can go live with a Customer Onboarding portal in only 3 months instead of a year, leveraging involvement from business users across multiple departments and close collaboration between business and IT. Everyone is using the same terminology, the same diagrams, and what you draw is what you execute.

A key feature of the 3 OMG standards is they were designed to work together. CMMN, for example, has a standard task type that calls a BPMN process and another that calls a DMN decision service. BPMN can call a DMN decision service and, on the Trisotech platform, a CMMN case. And so you have platforms like Trisotech that handle the full complement of business automation requirements: event-driven case management, straight-through process automation, long-running process automation, and decision automation. The alternative is a lot of custom code, expensive and hard to adapt to changing requirements, or separate platforms for process, decision, and case management, again expensive and requiring a lot of system integration.

A single platform that handles all three, especially one based on models, business engagement, and vendor-independent standards, lowers costs: the cost of software, of implementation, and of maintenance of the solution over the long term.

SDMN, a Common Data Language

While BPMN, CMMN, and DMN work together well, they still lack a common format for data. DMN uses FEEL, but in most tools executable BPMN and CMMN models use a programming language like Java to define data. It would be better if they shared a way to define data. Trisotech uses FEEL for all three model types, which is ideal. But now there is an effort to make that common data language a standard.

BPM+ Health is an HL7 “community of practice” in the healthcare space seeking industry convergence around such a common data language, SDMN. That language is essentially FEEL. Technically, it is defined by the OMG Shared Data Model and Notation (SDMN) standard, a key BPM+ Health initiative seeking to unify data modeling across BPMN, CMMN, and DMN. 

Data modeling is primarily used in executable models and omitted from purely descriptive models. The challenge for SDMN is to unify DMN variables, BPMN data objects, and CMMN case file items, and describe them as what DMN calls item definitions.

SDMN defines a standards-based logical data model for data used across BPMN, CMMN, and DMN. Data linked to SDMN definitions is not specific to a particular model but usable consistently across models of different types. In this sense it is the logical data model equivalent of the Trisotech Knowledge Entity Modeler. But where KEM defines a conceptual data model, focused on term semantics, SDMN defines a logical data model, focused on data structure, what DMN calls item definitions.

At this time, SDMN 1.0 is still in beta, but two artifacts illustrate what is planned. The DataItem diagram, shown below, describes the names and types of data items, and structural relationships between them.

In addition, an ItemDefinition diagram, again shown below, details the structure of each data item.

Trisotech users will recognize this as an alternative way to create item definitions, easier for tool vendors without Trisotech’s graphics capability. I still prefer Trisotech’s collapsible implementation:

The Path Forward

Going forward, look for more solutions developed using BPMN, CMMN, and DMN in combination. Typically CMMN is used at the top level, where it can handle all possibilities in a real-world process application. Specific actions are best implemented as BPMN processes, invoked from CMMN as a process task. Decision logic can be called from either CMMN or BPMN as a decision task. So you see why having common data definitions for all three modeling languages is valuable in such integrated solutions.

On the Trisotech platform, SDMN, in the form of FEEL, is already supported in all three languages, providing the additional benefit of Low-Code business automation. Starting in healthcare, SDMN will make this integration enabler available on other platforms as well.

Follow Bruce Silver on Method & Style.

DecisionCamp 2024 Presentation
Revolutionizing Credit Risk Management in Banking

Presented By
Stefaan Lambrecht (The TRIPOD for OPERATIONAL EXCELLENCE)
Description

The European Banking Authority (EBA) Dear CEO letter, typically issued to provide guidance and expectations for banks on key regulatory issues, emphasizes the need for stringent credit risk management, continuous monitoring, and compliance with evolving regulations.

The primary challenge for banks in monitoring customers and credit risks is the complexity and volume of data that must be continuously analyzed and acted upon. This complexity arises from several factors: the variety of triggers, the volume and complexity of metrics, the need for continuous monitoring, quickly adaptable regulatory compliance, and a comprehensive 360-degree customer view.

By leveraging DMN modeling & execution, banks can effectively meet the EBA’s expectations outlined in the Dear CEO letter. DMN engines provide a robust solution for automated decision-making, continuous monitoring, regulatory compliance, and transparency, ensuring that banks can manage credit risks proactively and efficiently while maintaining the required standards set by the EBA and other regulatory bodies. This alignment not only helps in fulfilling regulatory obligations but also strengthens the overall financial health and stability of the bank.

During his presentation, Stefaan Lambrecht will demonstrate an end-to-end solution to these challenges, inspired by a real-life case and making integrated use of DMN, CMMN, and BPMN.

Mastering Decision Centric Orchestration

Balancing Human Insight and AI Automation for Justifiable, Context-Aware Business Choices

Presented By
Denis Gagne, CEO & CTO, Trisotech
Description

In today’s dynamic business landscape, decision-centric orchestration is pivotal for organizational success. This presentation delves into the diverse spectrum of decisions within an organization, ranging from those that are purely human to those that are fully automated. We will explore how various types of artificial intelligence (AI) support decision automation, with a particular emphasis on the crucial role of context in making informed business choices. Attendees will gain a comprehensive understanding of how context-aware AI can enhance decision-making processes, ensuring that outcomes are not only efficient but also relevant to specific business needs.

Highlighting the necessity of explainable and justifiable decisions, we will discuss the imperative of maintaining human oversight in all business processes. The presentation will address the challenges and opportunities associated with integrating AI into decision-making frameworks, focusing on the balance between human intuition and AI-driven efficiency. Attendees will learn strategies for achieving a harmonious integration of these elements, ensuring that their organization’s decision-making is robust, transparent, and aligned with ethical standards. Ultimately, this session aims to equip business leaders with the knowledge and tools to leverage both human insight and AI automation for superior organizational performance.

Bruce Silver's blog post - What Is a Decision Service?
Bruce Silver
Blog

What Is a Decision Service?

By Bruce Silver

Read Time: 5 Minutes

Beginning DMN modelers might describe a decision service as that rounded rectangle shape in a DRD that behaves similarly to a BKM. That’s true, but it is a special case. Fundamentally, a decision service is the unit of execution of DMN logic, whether that is invoked by a decision in the DRD, a business rule task in BPMN, a decision task in CMMN, or an API call in any external client application or process. Whenever you execute DMN, whether in production or simply testing in the modeling environment, you are executing a decision service.

In Trisotech Decision Modeler, that is the case even if you never created such a service yourself. That’s because the tool has created one for you automatically, the Whole Model Decision Service, and uses it by default in model testing. In fact, a single decision model is typically the source for multiple decision services, some created automatically by the tool and some you define manually, and you can select any one of them for testing or deployment. In this post we’ll see how that works.

Service Inputs, Outputs, and Encapsulated Decisions

As with any service, the interface to a decision service is defined by its inputs and outputs. The DMN model also defines the internal logic of the decision service, the logic that computes the service’s output values from its input values. Unlike a BKM, where the internal logic is a single boxed expression, the internal logic of a decision service is defined by a DRD.

When the service is invoked by a decision in a DMN model, typically it has been defined in another DMN model and imported to the invoking model, such as by dragging from the Digital Enterprise Graph. But otherwise, the service definition is a fragment of a larger decision model, including possibly the entire DRD, and is defined in that model.

Within that fragment, certain decisions represent the service outputs, other decisions and input data represent the inputs, and decisions in between the outputs and inputs are defined as “encapsulated”, meaning they are used in the service logic but their values are not returned in the service output. When you execute a decision service, you supply values to the service inputs and the service returns the values of its outputs.
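
Once deployed, for example, the service is invoked through an API by posting values for the service inputs and receiving back the values of the outputs. A hypothetical JSON exchange, with names and shape purely illustrative rather than Trisotech’s actual endpoint format:

request:  { "Credit Score": "High", "Affordability": "OK" }
response: { "Loan Approval": "Approved" }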

One Model, Many Services

In Trisotech’s Whole Model Decision Service, the inputs are all the input data elements in the model, the outputs are all “top-level” decisions – that is, decisions that are not information requirements for other decisions. All other decisions in the DRD are encapsulated. In addition to this service, Trisotech automatically creates a service for each DRD page in the model, named Diagram Page N. If there is only one DRD page in the model, it will be the same as the Whole Model Decision Service, but if your model has imported a decision service or defines one independently of the invoking DRD, an additional Diagram Page N service will reflect that one.

All that is just scratching the surface, because quite often you will want to define additional decision services besides those automatically created. For example, you might want your service to return one or more decisions that are defined as encapsulated in the Whole Model Decision Service. Or you might want some inputs to your service to be not input data elements but supporting decisions. In fact, this is very common in Trisotech Low-Code Business Automation models, where executable BPMN models typically invoke a sequence of decision services, each a different fragment of a single DMN model. In BPMN, if you are invoking the Whole Model Decision Service it’s best to rename it, because the BPMN task that invokes it inherits the decision service name as the task name.

Defining a Decision Service

So how do you define a decision service? The DMN spec describes one way, but it is not the best way. The way described in the spec is as a separate DRD page containing an expanded decision service shape, a resizable rounded rectangle bisected by a horizontal line. The shape label is the service name. Decisions drawn above the line are output decisions, those below the line are encapsulated decisions. Service inputs – whether input data or decisions – are drawn outside the service shape, as are any BKMs invoked by decisions in the service. Information requirements and knowledge requirements are drawn normally.

The better way, at least in Trisotech Decision Modeler, is the decision service wizard.

In the wizard, you first select the output decisions from those defined in your model, and the wizard populates the service inputs with their direct information requirements. You can then promote selected inputs to encapsulated, and the wizard recalculates the needed inputs. You can keep doing that until all service inputs are input data, or you can stop anywhere along the way. The reason why this is better is that it ensures that all the logic needed to compute the output values from the inputs is properly captured in encapsulated decisions. You cannot guarantee that with the expanded decision service shape method.

Testing DMN Models

Trisotech’s Automation feature lets you test the logic of your DMN models, and I believe that is critically important. On the Execution ribbon, the Test button invites you first to select a particular decision service to test. If you forget to do this, the service it selects by default depends on the model page you have open at the time you click Test.

In Test, the service selector dropdown lists even more services than the automatically generated ones and those you created manually, which are listed above a gray line in the dropdown. Below the line is listed a separate service for every decision in the model, named with the decision name, with direct information requirements as the inputs. (For this reason, you should not name a decision service you create with the name of a decision, as this name conflicts with the generated service.) In addition, below the line is listed one more: Whole model, whose inputs are all the input data elements and outputs are all the decisions in the model. It’s important to note that these below-the-line services are available only for testing in Decision Modeler. If you want to deploy one of them, you need to manually create it, in which case it is listed above the line.

In Test, your choice of decision service from the dropdown determines the inputs expected by the tool. As an alternative to the normal HTML form, which is based on the datatypes you have assigned to the inputs, you can select an XML or JSON file with the proper datatype, or use a previously saved test case.

Invoking a Decision Service in DMN

Invoking a decision service in DMN works the same way as invoking a BKM. On the DRD containing the invocation, the decision service is shown as a collapsed decision service shape linked to the invoking decision with a knowledge requirement.

The invoking decision can use either a boxed invocation or literal invocation. In the former, service inputs are identified by name; in the latter, input names are not used. Arguments are passed in the order of the parameters in the service definition, so you may need to refer to the service definition to make sure you have that right.
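
As a sketch, suppose a hypothetical decision service Prequalification Service is defined with parameters Credit Score and Affordability, in that order. A literal invocation from the invoking decision’s value expression would then be

Prequalification Service(my score, my affordability)

where my score and my affordability are variables of the invoking model, passed positionally to the two parameters.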

Invoking a Decision Service in BPMN

In Business Automation models it is common to model almost any kind of business logic as a DMN decision service invoked by a business rule task, also called a decision task. In Trisotech Workflow Modeler, you need to link the task to a decision service in your workspace; it is not necessary to first Deploy the service. (Deployment is necessary to invoke the service from an external client.) As mentioned previously, the BPMN task inherits the name of the decision service. By default, the task inputs are the decision service inputs and the task outputs are the decision service outputs.

Data associations provide data mappings from process variables – BPMN data objects and data inputs – to the task inputs, and from task outputs to other process variables – BPMN data objects and data outputs. On the Trisotech platform, these data mappings are boxed expressions using FEEL, similar to those used in DMN.

The Bottom Line

The important takeaway is that a decision service is more than a fancy BKM that you will rarely use. If you are actually employing DMN in your work, you will use decision services all the time, both for logic testing and deployment, and for providing business logic in Business Automation models. The decision service wizard makes it easy.

If you want to find out more about how to define and use decision services, check out DMN Method and Style 3rd edition, with DMN Cookbook, or my DMN Method and Style training, which includes post-class certification.

Follow Bruce Silver on Method & Style.

Bruce Silver's blog post - FEEL Operators Explained
Bruce Silver
Blog

FEEL Operators Explained

By Bruce Silver

Read Time: 5 Minutes

Although DMN’s expression language FEEL was designed to be business-friendly, it remains intimidating to many. That has led to the oft-heard charge that “DMN is too hard for business users”. That’s not true, at least for those willing to learn how to use it. Although the Microsoft Excel Formula language is actually less business-friendly than FEEL, somehow you never hear that “Excel is too hard for business users.”

One key reason why FEEL is more business-friendly than the Excel Formula language, which they now call Power FX, is its operators. FEEL has many, and Power FX has very few. In this post we’ll discuss what operators are, how they simplify the expression syntax, and how DMN boxed expressions make some FEEL operators more easily understood by business users.

It bears repeating that an expression language is not the same as a programming language. A programming language has statements. It defines variables, calculates and assigns their values. You could call DMN as a whole a programming language, but the expression language FEEL does not define variables or assign their values. Those things are done graphically, in diagrams and tables – the DRD and boxed expressions. FEEL expressions are simply formulas that calculate values: data values in, data values out.

Functions and Operators

Those formulas are based on two primary constructs: functions and operators.

The logic of a function is specified in the function definition in terms of inputs called parameters. The same logic can be reused simply by invoking the function with different parameter values, called arguments. The syntax of function invocation – both in FEEL and Excel Formulas – is the function name immediately followed by parentheses enclosing a comma-separated list of arguments. FEEL provides a long list of built-in functions, meaning the function names and their parameters are defined by the language itself. Excel Formulas do the same. In addition, DMN allows modelers to create custom functions in the form of Business Knowledge Models (BKMs) and decision services, something Excel does not allow without programming.

Operators are based on reserved words and symbols in the expression with meaning defined by the expression language itself. There are no user-defined operators. They do not use the syntax of a name followed by parentheses enclosing a list of arguments. As a consequence, the syntax of an expression using operators is usually shorter, simpler, and easier to understand than an expression using functions.

You can see this from a few examples in FEEL where you could use either a function or an operator. One is simple addition. Compare the syntax of the expression adding variables a and b using the sum() function

sum(a, b)

with its equivalent using the addition operator +:

a + b

The FEEL function list contains() and the in operator do the same thing, test containment of a value in a list. Compare

list contains(myList, "abc")

with

"abc" in myList

Both FEEL and Excel support the basic arithmetic operators like +, -, *, and /, comparison operators like =, >, or <=, and string concatenation. But those are essentially the only operators provided by Excel, whereas FEEL provides several more. It is with these more complex operators that FEEL’s business-friendliness advantage stands out.

if..then..else

Let’s start with the conditional operator, if..then..else. These keywords comprise an operator in FEEL, where Excel can use only functions. Compare the FEEL expression

if Credit Score = "High" and Affordability = "OK" then "Approved" else "Disapproved"

with Excel’s function-based equivalent:

IF(AND(Credit Score = "High", Affordability = "OK"), "Approved", "Disapproved")

The length is about the same but the FEEL is more human-readable. Of course, the Excel expression assumes you have assigned a variable name to the cells – something no one ever does. So you would be more likely to see something like this:

IF(AND(B3 = "High", C3 = "OK"), "Approved", "Disapproved")

That is a trivial example. A more realistic if..then..else might be

if Credit Score = "High" and Affordability = "OK" then "Approved"
        else if Credit Score = "High" and Affordability = "Marginal" then "Referred"
        else "Disapproved"

That’s longer but still human-readable. Compare that with the Excel formula:

IF(AND(Credit Score = "High", Affordability= "OK"), "Approved", IF(AND(Credit Score = "High",
         Affordability = "Marginal"), "Referred", "Disapproved"))

Even though the FEEL syntax is fairly straightforward, DMN includes a conditional boxed expression that enters the if, then, and else expressions in separate cells, in theory making the operator friendlier for some users and less like code. Using that boxed expression, the logic above looks like this:
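
(The boxed expression itself is an image in the original. Rendered textually, its cells would hold roughly the following, with the nested conditional occupying the else cell:)

if   : Credit Score = "High" and Affordability = "OK"
then : "Approved"
else : if Credit Score = "High" and Affordability = "Marginal" then "Referred" else "Disapproved"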

Filter

The FEEL filter operator is square brackets enclosing either a Boolean or integer expression, immediately following a list. When the enclosed expression is a Boolean, the filter selects items from the list for which the expression is true. When the enclosed expression evaluates to positive integer n, the filter selects the nth item in the list. (With negative integer n, it selects the nth item counting backward from the end.) In practice, the list you are filtering is usually a table, a list of rows representing table records, and the Boolean expression references columns of that table. I wrote about this last month in the context of lookup tables in DMN. As we saw then, if variable Bankrates is a table of available mortgage loan products like the one below,

then the filter

Bankrates[lenderName = "Citibank"]

selects the Citibank record from this table. Actually, a Boolean filter always returns a list, even if it contains just one item, so to extract the record from that list we need to append a second integer filter [1]. So the correct expression is

Bankrates[lenderName = "Citibank"][1]

Excel Formulas do not include a filter operator, but again use a function: FILTER(table, condition, else value). So if we had assigned cells A2:D11 to the name Bankrates and the column A2:A11 to the name lenderName, the equivalent Excel Formula would be

FILTER(Bankrates, lenderName = "Citibank", "")

but would more likely be entered as

FILTER(A2:D11, A2:A11 = "Citibank", "")

FEEL’s advantage becomes even more apparent with multiple query criteria. For example, the list of zero points/zero fees loan products in FEEL is

Bankrates[pointsPct = 0 and fees = 0]

whereas in Excel you would have

FILTER(A2:D11, (C2:C11=0)*(D2:D11=0), "")

There is no question here that FEEL is more business-friendly.

Iteration

The for..in..return operator iterates over an input list and returns an output list. It means for each item in the input list, to which we assign a dummy range variable name, calculate the value of the return expression:

for <range variable> in <input list> return <return expression, based on range variable>

It doesn’t matter what you name the range variable, also called the iterator, as long as it does not conflict with a real variable name in the model. I usually just use something generic like x, but naming the range variable to suggest the list item makes the expression more understandable. In the most common form of iteration, the input list is some expression that represents a list or table, and the range variable is an item in that list or row in that table.

For example, suppose we want to process the Bankrates table above and create a new table Payments by Lender with columns Lender Name and Monthly Payment, using a requested loan amount of $400,000. And suppose we have a BKM Lender Payment, with parameters Loan Product and Requested Amount, that creates one row of the new table, a structure with components Lender Name and Monthly Payment. We will iterate a call to this BKM over the rows of Bankrates using the for..in..return operator. Each iteration will create one row of Payments by Lender, so at the end we will have a complete table.

The literal expression for Payments by Lender is

for product in Bankrates return Lender Payment(product, Requested Amount)

Here product is the range variable, meaning one row of Bankrates, a structure with four components as we saw earlier. Bankrates is the input list that we iterate over. The BKM Lender Payment is the return expression. Beginners are sometimes intimidated by this literal expression, so, as with if..then..else, DMN provides an iterator boxed expression that enters the for, in, and return expressions in separate cells.

The BKM Lender Payment uses a context boxed expression with no final result box to create each row of the table. The context entry Monthly Payment invokes another BKM, Loan Amortization Formula, which calculates the value based on the adjusted loan amount, the interest rate, and fees.
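
A textual sketch of that BKM, reusing the Bankrates column names from earlier with illustrative parameter mappings (the original is shown as an image):

Lender Payment(Loan Product, Requested Amount) :
    Lender Name     : Loan Product.lenderName
    Monthly Payment : Loan Amortization Formula(
        Requested Amount * (1 + Loan Product.pointsPct/100) + Loan Product.fees,
        Loan Product.ratePct/100, 360)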

Excel Formulas do not include an iteration function. Power FX’s FORALL function provides iteration, but it is not available in Excel. To iterate an expression in Excel you are expected to fill down in the spreadsheet.

The FEEL operators some..in..satisfies and every..in..satisfies represent another type of iteration. The range variable and input list are the same as with for..in..return. But in these expressions the satisfies clause is a Boolean expression, and the iteration operator returns not a list but a simple Boolean value. The one with some returns true if any iteration returns true, and the one with every returns true only if all iterations return true.

For example, again using Bankrates,

some product in Bankrates satisfies product.pointsPct = 0 and product.fees = 0

returns true, while

every product in Bankrates satisfies product.pointsPct = 0 and product.fees = 0

returns false. The iterator boxed expression works with this operator as well.

The bottom line is this: FEEL operators are key to its combination of expressive power and business-friendliness, surpassing that of Microsoft Excel Formulas. Modelers should not be intimidated by them. For detailed instruction and practice in using these and other DMN constructs, check out my DMN Method and Style training. You get 60-day use of Trisotech Decision Modeler and post-class certification at no additional cost.

Follow Bruce Silver on Method & Style.

Bruce Silver's blog post - Lookup Tables in DMN
Bruce Silver
Blog

Lookup Tables in DMN

By Bruce Silver

Read Time: 5 Minutes

Lookup tables are a common logic pattern in decision models. To model them, I have found that beginners naturally gravitate to decision tables, being the most familiar type of value expression. But decision tables are almost never the right way to go. One basic reason is that we generally want to be able to modify the table data without creating a new version of the decision model, and with decision tables you cannot do that. Another reason is that decision tables must be keyed in by hand, whereas normal data tables can be uploaded from Excel, stored in a cloud datastore, or submitted programmatically as JSON or XML.

The best way to model a lookup table is a filter expression on a FEEL data table. There are several ways to model the data table – as submitted input data, a cloud datastore, a zero-input decision, or a calculated decision. Each way has its advantages in certain circumstances. In this post we’ll look at all the possibilities.

As an example, suppose we have a table of available 30-year fixed rate home mortgage products, differing in the interest rate, points (an addition to the requested amount as a way to “buy down” the interest rate), and fixed fees. You can find this data on the web, and the rates change daily. In our decision service, we want to allow users to find the current rate for a particular lender, and in addition find the monthly payment for that lender, which depends on the requested loan amount. Finding the lender’s current rate is a basic lookup of unmodified external data. Finding the monthly payment using that lender requires additional calculation. Let’s look at some different ways to model this.

We can start with a table of lenders and rates, either keyed in or captured by web scraping. In Excel it looks like this:

When we import the Excel table into DMN, we get a FEEL table of type Collection of tBankrate, shown here:

Each row of the table, type tBankrate, has components matching the table columns. Designating a type as tPercent simply reminds us that the number value represents a percent, not a decimal.

Here is one way to model this, using a lookup of the unmodified external data, and then applying additional logic to the returned value.

We define the input data Bankrates as type Collection of tBankrate and look up my rate – the one that applies to input data my lender. The lookup decision my rate uses a filter expression. A data table filter typically has the format

<table>[<Boolean expression of table columns>]

in which the filter, enclosed in square brackets, contains a Boolean expression. Here the table is Bankrates and the Boolean expression is lenderName = my lender. In other words, select the row for which column lenderName matches the input data my lender.

A filter always returns a list, even if it contains just one item. To extract the item from this list, we use a second form of a filter, in which the square brackets enclose an integer:

<list>[<integer expression>]

In this case, we know our data table just has a single entry for each lender, so we can extract the selected row from the first filter by appending the filter [1]. The result is no longer a list but a single row in the table, type tBankrate.

The decision my payment uses a BKM holding the Loan Amortization Formula, a complicated arithmetic expression involving the loan principal (p), interest rate (r), and number of payments (n), in this case 360.

Decision my payment invokes this BKM using the lookup result my rate. Input data loan amount is just the borrower’s requested amount, but the loan principal used in Loan Amortization Formula (parameter p) also includes the lender’s points and fees. Since pointsPct and ratePct in our data table are expressed as percent, we need to divide by 100 to get their decimal value used in the BKM formula.

When we run it with my lender “Citibank” and loan amount $400,000, we get the result shown here.

That is one way to do it. Another way is to enrich the external data table with additional columns, such as the monthly payment for a given loan amount, and then perform the lookup on this enriched data table. In that case the data table is a decision, not input data.

Here the enriched table Payments by Bank has an additional column, payment, based on the input data loan amount. Adding a column to a table involves iteration over the table rows, each iteration generating a new row including the additional column. In the past I have typically used a context BKM with no final result box to generate each new row. But actually it is simpler to use a literal expression with the context put() function, as no BKM is required to generate the row, although we still need the Loan Amortization Formula. (Simpler for me, but the resulting literal expression is admittedly daunting, so I’ll show you an alternative boxed expression that breaks it into simpler pieces.)

context put(), with parameters context, keys, and value, appends components (named by keys) to an existing structure (context), and assigns their value. If keys includes an existing component of context, value overwrites the previous value. Here keys is the new column name “payment”, and value is calculated using the BKM Loan Amortization Formula. So, as a single literal expression, Payments by Bank looks like this:
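
(The original shows the expression as an image; a sketch consistent with the description, reusing the column names and the 360-payment term from earlier, is:)

for product in Bankrates return
    context put(product, "payment",
        decimal(Loan Amortization Formula(
            loan amount * (1 + product.pointsPct/100) + product.fees,
            product.ratePct/100, 360), 2))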

Here we used literal invocation of the BKM instead of boxed invocation, and we applied the decimal() function to round the result.

Alternatively, we can use the iterator boxed expression instead of the literal for..in..return operator and invocation boxed expressions for the built-in functions decimal() and context put() as well as the BKM. With FEEL built-in functions you usually use literal invocation but you can use boxed invocation just as well.

Now my payment is a simple lookup of the enriched data table Payments by Bank, appending the [1] filter to extract the row and then .payment to extract the payment value for that row.

When we run it, we get the same result for Citibank, loan amount $400,000:

The enriched data table now allows more flexibility in the queries. For example, instead of finding the payment for a particular lender, you could use a filter expression to find the loan product(s) with the lowest monthly payment:

Payments by Bank[payment=min(Payments by Bank.payment)]

which returns a single record, AimLoan. Of course, you can also use the filter query to select a number of records meeting your criteria. For example,

Payments by Bank[payment < 2650]

will return records for AimLoan, AnnieMac, Commonwealth, and Consumer Direct.

Payments by Bank[pointsPct=0 and fees=0]

will return records for zero-points/zero-fee loan products: Aurora Financial, Commonwealth, and eLend.

Both of these methods require submitting the data table Bankrates at time of execution. Our example table was small, but in real projects the data table could be quite large, with thousands of rows. This is more of a problem for testing in the modeling environment, since with the deployed service the data is submitted programmatically as JSON or XML. But to simplify testing, there are a couple ways you can avoid having to input the data table each time.

You can make the data table a zero-input decision using a Relation boxed expression. On the Trisotech platform, you can populate the Relation by uploading from Excel. To run this you merely need to enter values for my lender and loan amount. You can do this in production as well, but remember, with a zero-input decision you cannot change the Bankrates values without versioning the model.

Alternatively, you can leave Bankrates as input data but bind it to a cloud datastore. Via an admin interface you can upload the Excel table into the datastore, where it is persisted as a FEEL table. So in the decision model, you don’t need to submit the table data on execution, and you can periodically update the Bankrates values without versioning the model. Icons on the input data in the DRD indicate its values are locked to the datastore.

Lookup tables using filter expressions are a basic pattern you will use all the time in DMN. For more information on using DMN in your organization’s decision automation projects, check out my DMN Method and Style training or my new book, DMN Method and Style 3rd edition, with DMN Cookbook.

Follow Bruce Silver on Method & Style.

Dr. John Svirbely's blog post - Going from Zero to Success using BPM+ for Healthcare. Part I: Learning Modeling and Notation Tools
Dr. John Svirbely, MD
Blog

Going from Zero to Success using BPM+ for Healthcare.

Part I: Learning Modeling and Notation Tools

By Dr. John Svirbely, MD

Read Time: 3 Minutes

Welcome to the first installment of this informative three-part series providing an overview of the resources and the success factors required to develop innovative, interoperable healthcare workflow and decision applications using the BPM+ family of open standards. This series will unravel the complexities and necessities for achieving success with your first clinical guideline automation project. Part I focuses on how long it will take you to reach cruising speed for creating BPM+ visual models.

When starting something new, people often ask some common questions. One is how long it will take to learn the new skills required. This impacts how long it will take to complete a project, and therefore costs. Learning something new can also be somewhat painful when we are set in our old ways.

Asking such questions is important, since there is often a disconnect between what is promoted online and the reality. I can give my perspective based on using the Trisotech tools for several years, starting essentially from scratch.

How long does it take to learn?

The simple answer – it depends. A small project can be tackled by a single person quite rapidly. That is how I got started. Major projects using these tools should be approached as team projects rather than something an individual can do. Sure, there are people who can master a wide range of skills, but in general most people are better at some things than others. Focusing on a few things is more productive than trying to do everything. A person can become familiar with the range of tools, but they need to realize that they may only be able to unlock a part of what is needed to automate a clinical guideline.

The roles that need to be filled to automate a clinical guideline with BPM+ include:

1. subject matter expert (SME)
2. medical informaticist
3. visual model builder
4. hospital programmer/system integrator
5. project manager
6. and of course, tester

A team may need to be composed of various people who bring a range of skills and fill various roles. A larger project may need more than one person in some of these roles.

The amount of time needed to bring a subject matter expert (SME) up to speed is relatively short. Most modeling diagrams can be understood and followed after a few days. I personally use a tool called the Knowledge Entity Modeler (KEM) to document domain knowledge; this allows specification of term definitions, clinical coding, concept maps, and rule definitions. The KEM is based on the SBVR standard, but its visual interface makes everything simple to grasp. Other comparable visual tools are available. The time spent is quickly compensated for by greater efficiency in knowledge transfer.

The medical informaticist has a number of essential tasks, such as controlling terminology, standardizing data, and assigning code terms. This person must understand the nuances of how clinical data is acquired, including FHIR. The importance of these services cannot be overstated, since failures here can cause many problems later as the number of models increases or as models from different sources are installed.

The model builder uses the various visual modeling languages (DMN, BPMN, CMMN) according to the processes and decisions specified by the SME. These tools can be learned quickly to some extent, but there are nuances that may take years to master. While some people can teach themselves from books or videos, the benefits of taking a formal course vastly outweigh the cost and time spent. Trisotech offers eLearning modules that you can learn from at your own pace.

When building models, there is a world of difference between a notional model and one that is automatable. Notional models are good for knowledge capture and transfer. A notional model may look good on paper only to fail when one tries to automate it. The reasons for this will be discussed in Part 3 of this blog series.

The hospital programmer or system integrator is the person who connects the models with the local EHR or FHIR server so that the necessary data is available. Tools based on CDS Hooks or SMART on FHIR can integrate the models into the clinical workflow so that they can be used by clinicians. This person may not need to learn the modeling tools to perform these tasks.

The job of the project manager is primarily standard project management. Some knowledge of the technologies is helpful for understanding the problems that arise. This person’s main task is to orchestrate the entire project so that it keeps focused and on schedule. In addition, the person keeps chief administrators up to date and tries to get adequate resources.

The final player is the tester. Testing prior to release is best done independently of other team members to maintain objectivity. There is potential for liability with any medical software, and these tools are no exception. This person also oversees other quality measures such as bug reports and complaints. Knowing the modeling languages is helpful but understanding how to test software is more important.

My journey

I am a retired pathologist and not a programmer. While having used computers for many years, my career was spent working in community hospitals. When I first encountered the BPM+ standards, it took several months and a lot of prodding before I was convinced to take formal training. I have never regretted that decision and wish that I had taken training sooner.

I started with DMN. On-line training takes about a month. After an additional month I had enough familiarity to become productive. In the following 12 months I was able to generate over 1,000 DMN models while doing many other things. It was not uncommon to generate 4 models in one day.

I learned BPMN next. Training online again took a month. This takes a bit longer to learn because it requires an appreciation of how to design a process so that it executes optimally. Initially a model would take me 2-3 days to complete, but later this dropped to less than a day. Complex models can take longer, especially when multiple people need to be orchestrated and exception handling is introduced.

CMMN, although offering great promise for healthcare, is a tough nut to crack. Training is harder to arrange, and few vendors offer automatable versions. This standard is better saved until the other standards have been mastered.

What are the barriers?

Most of the difficulties that I have encountered have not been related to using the standards. They usually arise from organizational or operational issues. Some common barriers that I have encountered include:

1. lack of clear objectives, or objectives that constantly change.
2. lack of commitment from management, with insufficient resources.
3. unrealistic expectations.
4. rushing into models before adequate preparations are made.

If these can be avoided, then most projects can be completed in a satisfactory manner. How long it takes to implement a clinical guideline will be discussed in the next blog.

Bruce Silver's blog post - DMN 101
Bruce Silver
Blog

DMN 101

By Bruce Silver

Read Time: 4 Minutes

Most of my past posts about DMN have assumed that the reader knows what it is and may be using it already. But there is undoubtedly a larger group of readers who have heard about it but don’t really understand what it’s all about. And possibly an equally large group that have heard about it from detractors and have some misconceptions. So in this post I will try to explain what it is and how it works.

DMN, which stands for Decision Model and Notation, is a model-based language for business decision logic. Furthermore, it is a vendor-neutral standard maintained by the Object Management Group (OMG), the organization behind BPMN, CMMN, and other standards. As with OMG’s other business modeling standards, “model” means the names, meaning, and execution behavior of each language element are formally defined in a UML metamodel, and “notation” means that a significant portion of the language is defined graphically, in diagrams and tables using specific shapes and symbols linked to model elements. In other words, the logic of a business decision, business process, or case is defined by diagrams having a precise meaning, independent of the tool that created them. The main reason for defining the logic graphically is to engage non-programmers, aka business people, in their creation and maintenance.

DMN models the logic of operational decisions, those made many times a day following the same explicit rules. Examples include approval of a loan, validation of submitted data, or determining the next best action in a customer service request. These decisions typically depend on multiple factors, and the logic is frequently complex. The most familiar form of DMN logic is the decision table. All DMN tools support decision tables, and that’s because business people understand them readily with zero training. Consider the decision table below, which estimates the likelihood of qualifying for a home mortgage:

Qualifying for a home mortgage depends primarily on three factors: the borrower’s Credit Score, a measure of creditworthiness; the Loan-to-Value ratio, dividing the loan amount by the property appraised value, expressed as a percent; and the borrower’s Debt-to-Income ratio, dividing monthly housing costs plus other loan payments by monthly income, expressed as a percent. Those three decision table inputs are represented by the columns to the left of the double line. The decision table output, here named Loan Prequalification, with possible values “Likely approved”, “Possibly approved”, “Likely disapproved”, or “Disapproved”, is the column to the right of the double line. Below the column name is its datatype, including allowed values. Each numbered row of the table is a decision rule. Cells in the input columns are conditions on the input, and if all input conditions for a rule evaluate to true, the rule is said to match and the value in the output column is selected as the decision table output value.

A hyphen in an input column means the input is not used in the rule; the condition is true by default. So the first rule says, if Credit Score is less than 620, Loan Prequalification is “Disapproved”. Numeric ranges are shown as values separated by two dots, all enclosed in parentheses or square brackets. Parenthesis means exclude the endpoint in the range; square bracket means include it. So rule 4 says, if Credit Score is greater than or equal to 620 and less than 660, and LTV Pct is greater than 75 and less than or equal to 80, and DTI Pct is greater than 36 and less than or equal to 45, then Loan Prequalification is “Likely disapproved”. Once you get the numeric range notation, the meaning of the decision table is clear, and this is a key reason why DMN is considered business-friendly.
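
(The decision table itself is an image in the original. A fragment consistent with the two rules just described, with the other rows omitted, would read:)

    Credit Score    LTV Pct     DTI Pct     | Loan Prequalification
1   < 620           -           -           | "Disapproved"
4   [620..660)      (75..80]    (36..45]    | "Likely disapproved"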

But if you think harder about it, you see that while Credit Score might be a known input value, LTV Pct and DTI Pct are not. They are derived values. They are calculated from known input values such as the loan amount, appraised property value, monthly income, housing expense including mortgage payment, tax, and insurance, and other loan payments. In DMN, those calculations are provided as supporting decisions to the top-level decision Loan Prequalification. Each calculation itself could be complex, based on other supporting decisions. This leads to DMN’s other universally supported feature, the Decision Requirements Diagram, or DRD. Below you see the DRD for Loan Prequalification. The ovals are input data, known input values, and the rectangles are decisions, or calculated values. The solid arrows pointing into a decision, called information requirements, define the inputs to the decision’s calculations, either input data or supporting decisions.

Like decision tables, DRDs are readily understood by business users, who can create them to outline the dependencies of the overall logic. In the view above, we show the datatype of each input data and decision in the DRD. Built-in datatypes include things like Text, Number, Boolean, and collections of those, but DMN also allows the modeler to create user-defined types representing constraints on the built-in types – such as the numeric range 300 to 850 for type tCreditScore – and structured types, specified as a hierarchy of components. For example, tLoan, describing the input data Loan, is the structure seen below:

Individual components of the structure are referenced using a dot notation. For example, the Loan Amount value is Loan.Amount.

A complete DRD, therefore, including datatypes for all supporting decisions down to input data, provides significant business value and can be created easily by subject matter experts. As a consequence, all DMN tools support DRDs. But by itself, the DRD and a top-level decision table is not enough to evaluate the decision. For that you need to provide the logic for the supporting decisions. And here there is some disagreement within the DMN community. Some tool vendors believe that DMN should be used only to provide model-based business requirements. Those requirements are then handed off to developers for completion of the decision logic using some other language, either a programming language like Java or a proprietary business rule language like IBM ODM. I call those tools DMN Lite, because fully implemented DMN allows subject matter experts to define the complete, fully executable decision logic themselves, without programming.

Full DMN adds two key innovations to DRDs and decision tables: the expression language FEEL and standardized tabular formats called boxed expressions. Using boxed expressions and FEEL, real DMN tools let non-programmers create executable decision models, even when the logic is quite complex. So you can think of DMN as a Low-Code language for decision logic that is business-friendly, transparent, and executable.

In that language, the shapes in the DRD define variables (with assigned datatypes), with the shape labels defining the variable names. Defining variables by drawing the DRD explains an unusual feature of FEEL, which is that variable names may contain spaces and other punctuation not normally allowed by programming languages. The value expression of each individual decision is the calculation of that decision variable’s value based on the values of its inputs, or information requirements. It is the intention of FEEL and boxed expressions that subject matter experts who are not programmers can create the value expressions themselves.

FEEL is called an expression language, a formula language like Excel formulas, as opposed to a programming language. FEEL just provides a formula for calculating an output value based on a set of input values. It does not create the output and input variables; the DRD does that. Referring back to our DRD, let’s look at the value expression for LTV Pct, the Loan-to-Value ratio expressed as a percent. The FEEL expression looks like this:
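
(The expression is an image in the original; assuming input data named Loan and Appraised Value, it would be along the lines of:)

Loan.Amount / Appraised Value * 100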

It’s simple arithmetic. Anyone can understand it. This is the simplest boxed expression type, called a literal expression, just a FEEL formula in a box, with the decision name and datatype in a tab at the top. Decision table is another boxed expression type, and there are a few more. Each boxed expression type has a distinct tabular format and meaning, and cells in those tables are FEEL expressions. In similar fashion, here is the literal expression for DTI Pct:
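
(Again an image in the original; with hypothetical names for the income and expense inputs, roughly:)

(Mortgage Payment + Other Loan Payments) / Monthly Income * 100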

The tricky one is Mortgage Payment. It’s also just arithmetic, based on the components of Loan. But the formula is hard to remember, even harder to derive. And it’s one that in lending is used all the time. For that, the calculation is delegated to a bit of reusable decision logic called a Business Knowledge Model, or BKM. In the DRD, it’s represented as a box with two clipped corners, with a dashed arrow connecting it to a decision. A BKM does not have incoming solid arrows, or information requirements. Instead, its inputs are parameters defined by the BKM itself. BKMs provide two benefits: One, they allow the decision modeler to delegate the calculation to another user, possibly with more technical or subject matter knowledge, and use it in the model. Two, it allows that calculation to be defined once and reused in multiple decision models. The dashed arrow, called a knowledge requirement, signifies that the decision at the head of the arrow passes parameter values to the BKM, which then returns its output value to the decision. We say the decision invokes the BKM, like calling an api. The BKM parameter names are usually different from the variable names in the decision that invokes them. Instead, the invocation is a data mapping.

Here I have that BKM previously saved in my model repository under the name Loan Amortization Formula. On the Trisotech platform, I can simply drag it out onto the DRD and replace the BKM Payment Calculation with it. The BKM definition is shown below, along with an explanation of its use from the BKM Description panel. It has three parameters – p, r, and n, representing the loan amount, rate, and number of payments over the term – shown in parentheses above the value expression. The value expression is again a FEEL literal expression. It can only reference the parameters. The formula is just arithmetic – the ** symbol is FEEL’s exponentiation operator – but as you see, it’s complicated.
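
(The literal expression is an image in the original. The standard fixed-rate payment formula it implements, assuming r is the annual rate as a decimal and n the number of monthly payments, is roughly:)

p * (r/12) * (1 + r/12)**n / ((1 + r/12)**n - 1)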

The decision Mortgage Payment invokes the BKM by supplying values to the parameters p, r, and n, mappings from the decision’s own input, Loan. We could use a literal expression for this, but DMN provides another boxed expression type called Invocation, which is more business-friendly:

In a boxed Invocation, the name of the invoked BKM is below the tab, and below that is a two-column table, with the BKM parameter names in the first column and their value expressions in the second column. Note that because Loan.Rate Pct is expressed as a percent, we need to divide its value by 100 to get r, which is a decimal value, not a percent.

At this point, our decision model is complete. But we need to test it! I can’t emphasize enough how important it is to ensure that your decision logic runs without error and returns the expected result. So let’s do that now, using the input data values below:

Here Loan Prequalification returns “Possibly approved”, and the supporting decision values look reasonable. We can look at the decision table and see that rule 8 is the one that matches.

So you see, DMN is something subject matter experts who are not programmers can use in their daily work. Of course, FEEL expressions can do more than arithmetic, and like Excel formulas, the language includes a long list of built-in functions that operate on text, numbers, dates and times, data structures, and lists. I’ve discussed much of that in my previous posts on DMN. But learning to use the full power of FEEL and boxed expressions – which you will need in real-world decision modeling projects – generally requires training. Our DMN Method and Style training gives you all that, including 60-day use of Trisotech Decision Modeler, lots of hands-on exercises, quizzes to test your understanding, and post-class certification in which you need to create a decision model containing certain required elements. It’s actually in perfecting that certification model that the finer points of the training finally sink in. And you need that full 60 days to really understand DMN’s capabilities.

DMN lets you accelerate time-to-value by engaging subject matter experts directly in the solution. If you want to see if DMN is right for you, check out the DMN training.

Follow Bruce Silver on Method & Style.
