
DMN List and Table Basics

By Bruce Silver

Read Time: 5 Minutes

Beginners in DMN often define a separate variable for every simple value in their model, a string, for example, or a number. But it's often far better to group related values within a single variable, and the FEEL language in DMN has multiple ways to do that: structures, lists, and tables.

Defining DMN variables as structures, lists, and tables rather than exclusively simple types usually makes your models simpler, more powerful, and easier to maintain.

Datatypes

Trisotech Decision Modeler makes it easy to define structured datatypes. You simply provide a name and type for each component, as you see here:

There are many ways to create a list. The simplest is the list operator, square brackets enclosing a comma-separated list of expressions, such as [1,2,3]. The datatype of a list variable is called a collection, and best practice is to name the type Collection of [item type]. For example, the datatype of a table of IncomeItems with columns IncomeType and MonthlyAmount would be specified like this:
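For illustration, a two-row instance of such a table could be written literally in FEEL as a list of contexts, one context per row (the values here are made up):

[{IncomeType: "Salary", MonthlyAmount: 8000}, {IncomeType: "Rental", MonthlyAmount: 1500}]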

List Functions

FEEL provides a wide variety of built-in functions that operate on lists and tables. Some are specific to lists of numbers and date/times, but most apply to any list item type. Examples are shown in the table below:
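For instance, here are a few of the standard built-ins applied to simple literal arguments:

count([1, 2, 3]) = 3
sum([1, 2, 3]) = 6
max([1, 2, 3]) = 3
mean([1, 2, 3]) = 2
append([1, 2], 3) = [1, 2, 3]
concatenate([1, 2], [3, 4]) = [1, 2, 3, 4]
distinct values([1, 2, 2, 3]) = [1, 2, 3]
flatten([[1, 2], [3, 4]]) = [1, 2, 3, 4]
list contains([1, 2, 3], 2) = true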

Filters

To select an item from a list or table, FEEL provides the filter operator, modeled as square brackets immediately following the list or table name, containing a filter expression, either an integer or a Boolean. If an integer, the filter returns the item in that position. For example, myList[1] returns the first item in myList. A Boolean expression returns all items for which the expression is true. For example, Loan Table[ratePct<4.5] returns the collection of all Loan Table items for which the component ratePct is less than 4.5.

A Boolean filter always returns a collection, even if you know it can return at most a single item. If each Customer has a unique id, Customer[id=123456] will return a list containing one Customer, and Customer[id=123456].Name returns a list containing one Name. A list containing one item is NOT the same as the item on its own. It is best practice, then, to append the filter [1] to extract the item value, making the returned value type Text rather than Collection of Text: Customer[id=123456].Name[1].

It is quite common that a filtered list or table is the argument of a list function, for example, count(Loan Table[ratePct<4.5]) returns the count of rows of Loan Table for which ratePct is less than 4.5.

If a Boolean filter expression finds no items for which the expression is true, the returned value is the empty list [], and if you try to extract a value from it you get null. For example, if the table Customer has no entry with id=123456, then Customer[id=123456] is [] and Customer[id=123456].Name[1] is null. Because of situations like this, DMN recently introduced B-FEEL, a dialect of FEEL that better handles null values, and you should be using B-FEEL in your models. Without B-FEEL, concatenating a null string value with other text returns null; with B-FEEL the null value is replaced by the empty string “”.

Iteration

A powerful feature of FEEL is the ability to iterate some expression over all items in a list or table. The syntax is

for [range variable] in [list name] return [output expression]

Here range variable is a name you select to stand for a single item value in the list. For example, if LoanTable is a list of mortgage loan products specified by lenderName, rate, pointsPct, fees, and term, we can iterate the amortization formula over each loan product to get the monthly payment for each loan product:

for x in LoanTable return payment(RequestedAmt*(1+x.pointsPct/100), x.rate/100, x.term)

Here x is the range variable, meaning a single row of LoanTable, and payment is a BKM containing the amortization formula based on the loan amount, loan rate, and term to get the monthly mortgage payment. To simplify entry of this expression, Trisotech provides an iterator boxed expression.
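As a rough sketch, a payment BKM like this one could be defined with parameters amount, rate, and term (the parameter names are assumptions here, with rate an annual decimal rate and term in years) whose body is the standard amortization formula as a FEEL literal expression:

(amount * rate / 12) / (1 - (1 + rate / 12) ** -(term * 12))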

In addition to this iteration over item value, FEEL provides iteration over item position, with the syntax

for [range variable] in [integer range] return [output expression]

For example,

for i in 1..10 return i*i

returns the list of squares of the integers from 1 to 10: [1, 4, 9, 16, 25, 36, 49, 64, 81, 100].

Testing Membership in a List

FEEL provides a number of ways to check whether a value is contained in some list.

some x in [list expression] satisfies [Boolean expression]

returns true if any list item satisfies the Boolean test, and

every x in [list expression] satisfies [Boolean expression]

returns true only if all list items satisfy the test.

The in operator is another way of testing membership in a list:

[item value] in [list expression]

returns true if the value is in the list.

The list contains() function does the same thing:

list contains([list expression], [value])

And you can also use the count() function on a filter, like this:

count([filtered list expression])>0
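For example, assuming myList = ["a", "b", "c"], each of the following tests whether "b" is in the list and returns true:

some x in myList satisfies x = "b"
"b" in myList
list contains(myList, "b")
count(myList[item = "b"]) > 0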

Set Operations

The membership tests described above check whether a single value is contained in a list, but sometimes we have two lists and we want to know whether some or all items in listA are contained in listB. Sometimes we really want to compare setA to setB, where these sets are deduplicated, unordered versions of the lists. If necessary, we can use the function distinct values() to remove duplicates from the lists.

Intersection

Two lists listA and listB intersect if they have any value in common. To test for intersection, we can use the expression

some x in listA satisfies list contains(listB, x)
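For example, with listA = [1, 2, 3] and listB = [3, 4, 5] this expression returns true, since the value 3 appears in both lists; with listB = [4, 5, 6] it would return false.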
Containment

To test whether all items in listA are contained in listB we can use the every operator:

every x in listA satisfies list contains(listB, x)

Identity

The simple comparison listA = listB returns true only if the lists are identical at every position. Testing whether the deduplicated, unordered sets are identical is a little trickier. We can use the FEEL union() function to concatenate and deduplicate the lists, in combination with the every operator:

every x in union(listA, listB) satisfies (list contains(listA, x) and list contains(listB,x))
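For example, with listA = [1, 2, 2, 3] and listB = [3, 1, 2], listA = listB returns false because the lists differ position by position, but the set comparison above returns true: union(listA, listB) is [1, 2, 3], and every one of those values is contained in both lists.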

Sorting a List

We can sort a list using the FEEL sort() function, which is unusual in that the second parameter of the function is itself a function. Usually it is defined inline as an anonymous function, using the keyword function with two range variables standing for any two items in the list, and a Boolean expression of those range variables. The Boolean expression determines which range variable value precedes the other in the sorted list. For example,

sort(LoanTable, function(x,y) x.rate<y.rate)

sorts the list LoanTable in increasing value of the loan rate. Here the range variables x and y stand for two rows of LoanTable, and row x precedes row y in the sorted list if its rate value is lower.
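The same pattern works for a simple list of numbers. For example,

sort([3, 1, 4, 1, 5], function(x, y) x < y)

returns [1, 1, 3, 4, 5].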

Replacing a List Item

Finally, we can replace a list item with another using Trisotech’s list replace() function, another one that takes a function as a parameter. The syntax is

list replace([list], [match function], [newItem])

The match function is a Boolean test involving a range variable standing for some item in the original list. For example, EmployeeTable lists each employee's available vacation days. When the employee takes a vacation, the days are deducted from the previous AvailableDays to generate an updated row, NewRecord. We use the match function to select the employee matching the employeeId in another input, VacationRequest.

Now we want to replace the employee record in that table with NewRecord:

list replace(EmployeeTable, function(x, newItem) x.employeeId = VacationRequest.employeeId, NewRecord)

Here the range variable x stands for some row of EmployeeTable, and we are going to replace the row matching the employeeId in VacationRequest. So in this case we are replacing a table row with another table row, but we could use list replace() to replace a value in a simple list as well.
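There is also a positional variant, defined in the DMN spec, in which an integer position takes the place of the match function. For example,

list replace([2, 4, 7, 8], 3, 6)

returns [2, 4, 6, 8], replacing the third item.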

The Bottom Line

In real-world decision logic, your source data is often specified using lists and tables. Rather than extracting individual items from your input data and processing one item at a time, it is generally easier and faster to process all the items at once using lists and tables. FEEL has a wide variety of functions and operators that make this straightforward, and the Automation features of Trisotech Decision Modeler make it easy to execute them.

Want a deeper dive into the use of lists and tables? Check out our DMN Method and Style training.

Follow Bruce Silver on Method & Style.


BPM+ : The Enduring Value of Business Automation Standards

By Bruce Silver

Read Time: 5 Minutes

While it no longer has the marketing cachet it once enjoyed, the fact that the foundations of the Trisotech platform are the three business automation standards from the Object Management Group – BPMN, CMMN, and DMN – remains a key advantage, with enduring benefits to users. In this post we’ll discuss what these standards do, how we got them, and their enduring business benefits.

In the Beginning

In the beginning, let's say the year 2000, there were no standards, not for business automation, not for much of anything in tech, really. The web changed all that. I first heard about BPMN in 2003. I had been working in something called workflow, automating and tracking the flow of human tasks, for a dozen years already, first as an engineering manager and later as an industry analyst. There were many products, all completely proprietary. But suddenly there was this new thing called BPM that promised to change all that.

Previously, in addition to workflow, there had been a completely separate technology of enterprise application integration or EAI. It was also a process automation technology but based on a message bus, system-to-system only, no human tasks, and again all proprietary technology. It was a mess.

Now a new technology called web services offered the possibility to unify them based on web standards. In the earlier dotcom days, the web just provided a way to advertise your business, maybe with some rudimentary e-commerce. Now people realized that the standard protocols and XML data of the web could be used to standardize API calls in the new Service-Oriented Architecture (SOA). A Silicon Valley startup called Intalio proposed using web services to define a new process automation language called BPML that unified workflow and application integration in this architecture. But they went further: Surprisingly, solutions using this language would be defined graphically, using a diagram notation called BPMN. The goal was engaging business users in the transition to SOA, and BPMN offered the hope of bridging the business-IT divide that had made automation projects fail in the past.

Enterprise application software vendors liked it too. Functionality that previously was embedded in their monolithic applications could be split off into microservices that could be invoked as needed and sequenced using the protocols of the world wide web. This was a game-changer. Maybe you could make human workflow tasks into services as well.

As it evolved, BPMN 1.0 took on the look of swimlane flowcharts, which were already used widely by business users, although without standard semantics for the shapes and symbols. Now with each shape bound to the semantics of the execution language BPML, BPMN promised “What You Draw Is What You Execute.” Intalio was able to organize a consortium of over 200 software companies called BPMI.org in support of this new business automation standard.

But in fact, they were too early, since the core web services standard WSDL was not yet final, and in the end BPML did not work with it. So BPML was replaced by another language, BPEL, that did. While the graphical notation BPMN remained popular with business for process documentation and improvement, it was not a perfect fit with BPEL for automation. There were patterns you could draw in BPMN that did not map to BPEL. So no longer could it claim "what you draw is what you execute." BPMI.org withered and ultimately was acquired by OMG, a true standards organization.

Even so, BPMN remained extremely popular with business users. They were not, in fact, looking to automate their processes, merely to document them for analysis and potential improvement. Still the application software vendors and SOA middleware vendors pushed for a new version of BPMN that would make the diagrams executable. But it was not until 2010 that BPMN 2.0 finally gave BPMN diagrams a proper execution language. I had been doing BPMN training on version 1.x since 2007, and somehow I wound up on the BPMN 2.0 task force, representing the interests of those business users doing descriptive modeling.

BPMN 2.0 was widely adopted by almost all process automation vendors. Later OMG developed two other business automation standards, CMMN for case modeling and DMN for decision modeling, all based on a similar set of principles. There are three standards because they cover different things. They were developed at different times for different reasons. So let me briefly describe what they do, their common principles, and why they remain important today.

BPMN

BPMN was the first. BPMN describes a process as a flowchart of actions leading from some defined triggering event to one of several possible end states. Each action typically represents either a human task or an automated service, for the first time unifying human workflow and EAI. The diagrams include both the normal “happy path” and branching paths followed by exceptions. It is easy to see the sequence of steps the process may follow, but every instance of the process must follow some path in the diagram. That is both its strength and its basic limitation.

BPMN remains very popular because the diagrams are intuitive. Users understand them in general terms without training, and what you draw is what you execute. BPMN can describe many real-world processes but not all of them.

There are actually two distinct communities of BPMN users. It is interesting that they interact hardly at all. One uses BPMN purely for process description. They have no interest in automation, just documentation and analysis. Those BPMN users simply want to document their current business processes with the hope of improving them, since the diagrams, if made properly with something like Method and Style, more clearly express the process logic than a long text document. For them, the value of BPMN is that it generally follows the look of traditional swimlane flowcharts.

The other community is interested in process automation, the actual implementation of new and improved processes. For them, the value of BPMN is that what you draw is what you execute. It is what allows business teams to better specify process requirements and generally collaborate more closely with development teams. This reduces time to value and generally increases user acceptance.

As a provider of BPMN training, my interactions have been mostly with the first community. In a prior life as an industry analyst, they were exclusively with the second community. So I have a good sense of the expectations of both groups. And while I believe the descriptive modeling community is probably still larger, it is now declining, and here's why.

In a descriptive modeling team, there are typically meetings and workshops where the facilitator asks: How does the process start? What happens next? And so forth. That's good for capturing the happy path, the normal activity flow to a successful end state, which is the main interest. But BPMN diagrams also contain gateways and events that define exception paths, since, remember, every instance of the process must follow some path drawn in the diagram. In reality, however, you are never going to capture all the exceptions that way.

So over the past decade, a lot of process improvement efforts have shifted away from descriptive process modeling of this kind to something called process mining: instrumenting key steps of the process as it is running, however they are achieved, and correlating the timestamps to create a picture of all the paths from start to end, including, importantly, that small fraction of instances that take much too long, cost way too much, and are responsible for complaints to customer service. Dealing with those cases leads to significant process improvement, and automating them requires accommodating them somehow.

Some process mining tools map their findings back to BPMN, but BPMN demands a reason, some gateway condition or event, to explain each divergence from the happy path. This points out the key weakness of BPMN as an automation language: the difficulty of accommodating all the weird exceptions that occur in real-world scenarios. Something more flexible is needed.

CMMN

CMMN was created to handle those processes that BPMN cannot. It focuses on a key difference from BPMN, work taking an unpredictable path over its lifetime, emphasizing event-triggered behavior and ad hoc decisions by knowledge workers.

A case in CMMN is not described by a flowchart, where you can clearly see the order of actions, but rather by a diagram of states, called stages, each specifying actions that MAY be performed or in some cases MUST be performed, in order to complete some milestone or transition to some other state. That allows great flexibility, a better way than BPMN, in fact, to handle real-world scenarios in which it is impossible to enumerate all the possible sequences of actions needed.

This comes at a cost, however. While what you draw is still what you execute, what you draw is not easy to understand. CMMN diagrams are not flowcharts but state diagrams. They are not intuitive, even if you understand the shapes and symbols well. Each diagram shape goes through various lifecycle states – Available, Enabled, Active, Completed, or Terminated – and each change of state is represented by a standard event that triggers state transitions in other shapes. So in the end the diagram, when fully expanded down to individual tasks, shows actions that MAY be performed or MUST be performed, and the events that trigger other shapes.

In practice, CMMN is almost always used in conjunction with BPMN, as the specific actions described, typically a mix of automated services and human interaction, are better modeled as BPMN processes. So typically in a business automation solution, CMMN is at the top level, where the choice of what actions to perform is flexible and event-driven. Many of the CMMN actions are modeled as process tasks, meaning enactment of a BPMN process. Organizing the solution in this way has the benefit of CMMN's flexibility to handle the events and exceptions of real-world end-to-end scenarios, and BPMN's business-understandable automation of the details of the required actions.

Above is an example from my book CMMN Method and Style, based on social care services in a European country. The stage Active Case contains 4 sub-stages without any entry conditions. That means they can be activated at any time. The triangle marker at the bottom means manually activated; without that they would activate automatically. And notice within each stage is a mix of human tasks, with the head and shoulders icon, and process tasks, with the chevron icon. At the bottom are 4 more tasks not in a sub-stage that deal with administrative functions of the case. They potentially can occur at any time when Active Case is in the active state, but they have entry conditions (the white diamonds), meaning they must be triggered by events, either a user interaction or a timer. This diagram does not include any file item events (receipt or update of some document or data), which also trigger tasks and stage entry or exit. As you can see, the case logic is very flexible (almost anything could be started at any time) but at this level not too revealing as to the details of the actions. Those are detailed within the BPMN processes and tasks.

Here is another CMMN example that illustrates the flexibility needed for what seems like a simple thing: creating a user account in a financial institution. Yes, that is a straightforward BPMN process, but it lives inside a wider context, which is managing the user's status as a client. A new client has to be initialized and set up (that's another BPMN process), and then you can add one or more accounts as long as the client remains an Active User. You can deactivate the user (that's another process), which automatically terminates the Manage Active User stage (that's the black diamond) and triggers the Manage Inactive User stage (that's the link to the white diamond labeled exit, exit being the lifecycle event associated with the Terminated state of Manage Active User).

The only thing that happens in the Manage Inactive User stage is that you can reactivate the user, another process. And when that is complete, since there is nothing else in that stage, the stage is complete, meaning the link to the white diamond in Manage Active User labeled complete triggers activation of that stage. The hashtag marker on both stages means that after completion or termination they can be retriggered; without that marker they could not. And finally, the process task Delete user has no entry conditions, so it can be activated manually at any time, and when complete it terminates the case (the black diamond). So what seemed initially like a simple process, Add User Account, in a real-world application expands to a case handling the whole lifecycle of that client.

DMN

DMN describes not actions but business logic, understandable by non-programmers but deployable nevertheless as an executable service. DMN is a successor to business rule engine technology from the 1990s, better aligned with today’s microservice-oriented architecture. In DMN, a decision service takes data in and returns data out. It takes no other actions. Like BPMN and CMMN, design is largely graphical.

The diagram you see here, called a Decision Requirements Diagram, shows the dependencies of each rectangle, representing a decision, on supporting decisions and input data connected to it via incoming arrows called information requirements. The value expression of each decision can reference only those direct information requirements. Even if they are unable to model the value expressions, business users are able to create these diagrams, which serve as verifiable business requirements for the executable solution.

But DMN was designed to let non-programmers go further, all the way in fact to executable decision services. Instead of program code, DMN provides standard tabular formats called boxed expressions. These are what define the expressions that compute the value of each decision.

Decision tables, like the one shown above, are the most familiar boxed expression type, but there are many more, such as Contexts, like the one shown below, where rows of the table define local variables called context entries, used to simplify the expressions in subsequent rows. Note that the value expression of a context entry, such as Loan amount here, can itself be a context or some other boxed expression.

The expressions of each context entry, the cells in gray here, use a low-code expression language called FEEL. Although FEEL is defined in the DMN spec, it has the potential to be used in BPMN and CMMN as well. More about that later.

The combination of DRDs, boxed expressions, and FEEL means business users can create directly executable decision logic themselves without programming. In BPMN and CMMN, graphical design is limited to an outline of the process or case logic, the part shown in the diagram. To make the model fully executable, those standards normally still require programming.

On the Trisotech platform, however, that’s no longer the case. When BPMN and CMMN were developed, they did not define an expression language, because they assumed, correctly at the time, that the details of system integration and user experience would require programming in a language like Java. But DMN was published 6 years later, when software tools were more advanced. DMN was intended to allow non-programmers to create executable decision services themselves using a combination of diagrams, tables, and FEEL, a standard low-code expression language. An expression language is not a full programming language; it does not define and update variables, but just computes values through formulas. For example, the Formulas ribbon in Excel represents an expression language.

While adoption of FEEL as a standard has been slow, even within the decision management community, many business automation vendors have implemented their own low-code expression languages with the objective of further empowering business teams to collaborate with developers on solutions. Low-code expression languages should be seen today as the important other half of model-based business automation, along with diagrams composed of standardized shapes, semantics, and operational behavior. The goal is faster time-to-value, easier solution maintenance, and improved user acceptance through close collaboration of business and development teams.

Shared Principles

Three key principles underlie these OMG standards, which play a key role in the Trisotech platform today, and I want to say more about why they are important.

The first is that they are all model-based, meaning diagrams and tables, and those models are used both for outlining the desired solution, what have been called business requirements, and for creating the executable solution itself, going back to the original promise of What You Draw Is What You Execute.

The traditional method of text-based business requirements is a well-known cause of project failure. Instead, models allow subject matter experts in the business to create an outline of the solution in the form of diagrams and tables that can be verified for completeness and self-consistency. Moreover, the shapes and symbols used in the diagrams are defined in a spec, so the models look the same, and work the same, in any tool. And even better, the execution semantics of the shapes is also spelled out in a spec, so what you draw is actually what you execute. Engaging business users together with IT from the start of the project speeds time to value and increases user acceptance.

Second, the models have a standard XML file format, so they can be interchanged between tools without loss of fidelity. In practice, for example, that means business users and developers can use different tools and interchange models between them. I've been involved in engagements where the business wants to use modeling tool X, and IT, in a separate organization, insists on using tool Y for the runtime. That's not great, but doable, since you can interchange models between the tools using the standard file format.

Third, OMG procedures ensure vendor-neutrality. Proprietary features cannot be part of the standard, and the IP is free to use. In practice this tends to result in open-source tools and lower software cost. It may not be widely appreciated but many, possibly the majority, of BPMN diagramming tools today stem from an open source project, and another open source project created a popular BPMN runtime. DMN is a similar story. Tool vendors were stumped by the challenge of parsing FEEL, where variable and function names may contain spaces, until Red Hat made their FEEL parser and DMN runtime open source. Implementing these standards is difficult, and open source software has been a big help in encouraging adoption.

Enduring Business Value

So where is the business value of these standards? What are the benefits to customers? In my view, these fall into 3 categories:

First, common terminology and understanding. I remember the days of workflow automation before BPMN. There was no common terminology. You learned a particular tool, and tools were for programmers. Now BPMN, CMMN, and DMN provide tool-independent understanding of processes, cases, and decision logic. There is much wider availability of information about these technologies, through books, web posts, and training, down to the details of modeling and automation. This in turn makes it easier to hire employees and engage service providers already conversant and experienced in the technology. It also lowers the re-learning cost if it becomes necessary to switch platform vendors.

Benefit #2 is faster time-to-value. Today this might be the most critical one for customers. Model-based solutions are simply faster to specify, faster to implement, and faster to adapt to changing requirements than traditional paradigms. Faster to specify because you can more easily engage the subject matter experts up front in requirements development and more quickly verify the requirements for completeness and self-consistency. Faster to implement because you are building on top of standard automation engines; the custom code is just for specific system integration and user experience details. And faster to adapt to changing requirements because often this involves only changing the model, not the custom code.

So, for example, a healthcare provider can go live with a Customer Onboarding portal in only 3 months instead of a year, leveraging involvement from business users across multiple departments and close collaboration between business and IT. Everyone is using the same terminology, the same diagrams, and what you draw is what you execute.

A key feature of the 3 OMG standards is that they were designed to work together. CMMN, for example, has a standard task type that calls a BPMN process and another that calls a DMN decision service. BPMN can call a DMN decision service and, on the Trisotech platform, a CMMN case. And so you have platforms like Trisotech that handle the full complement of business automation requirements: event-driven case management, straight-through process automation, long-running process automation, and decision automation. The alternative is a lot of custom code, expensive and hard to adapt to changing requirements, or separate platforms for process, decision, and case management, again expensive and requiring a lot of system integration.

A single platform that handles all three, especially one based on models, business engagement, and vendor-independent standards, lowers costs: the cost of software, of implementation, and of maintenance of the solution over the long term.

SDMN, a Common Data Language

While BPMN, CMMN, and DMN work together well, they still lack a common format for data. DMN uses FEEL, but in most tools executable BPMN and CMMN models use a programming language like Java to define data. It would be better if they shared a way to define data. Trisotech uses FEEL for all three model types, which is ideal. But now there is an effort to make that common data language a standard.

BPM+ Health is an HL7 “community of practice” in the healthcare space seeking industry convergence around such a common data language, SDMN. That language is essentially FEEL. Technically, it is defined by the OMG Shared Data Model and Notation (SDMN) standard, a key BPM+ Health initiative seeking to unify data modeling across BPMN, CMMN, and DMN. 

Data modeling is primarily used in executable models and omitted from purely descriptive models. The challenge for SDMN is to unify DMN variables, BPMN data objects, and CMMN case file items, and describe them as what DMN calls item definitions.

SDMN defines a standards-based logical data model for data used across BPMN, CMMN, and DMN. Data linked to SDMN definitions is not specific to a particular model but usable consistently across models of different types. In this sense it is the logical data model equivalent of the Trisotech Knowledge Entity Modeler. But where KEM defines a conceptual data model, with a focus on term semantics, SDMN defines a logical data model, with a focus on data structure, what DMN calls item definitions.

At this time, SDMN 1.0 is still in beta, but two artifacts illustrate what is planned. The DataItem diagram, shown below, describes the names and types of data items, and structural relationships between them.

In addition, an ItemDefinition diagram, again shown below, details the structure of each data item.

Trisotech users will recognize this as an alternative way to create item definitions, easier for tool vendors without Trisotech’s graphics capability. I still prefer Trisotech’s collapsible implementation:

The Path Forward

Going forward, look for more solutions developed using BPMN, CMMN, and DMN in combination. Typically CMMN is used at the top level, where it can handle all possibilities in a real-world process application. Specific actions are best implemented as BPMN processes, invoked from CMMN as a process task. Decision logic can be called from either CMMN or BPMN as a decision task. So you see why having common data definitions for all three modeling languages is valuable in such integrated solutions.

On the Trisotech platform, SDMN, in the form of FEEL, is already supported in all three languages, providing the additional benefit of Low-Code business automation. Starting in healthcare, SDMN will make this integration enabler available on other platforms as well.

Follow Bruce Silver on Method & Style.


DecisionCamp 2024 Presentation
Revolutionizing Credit Risk Management in Banking

Presented By
Stefaan Lambrecht (The TRIPOD for OPERATIONAL EXCELLENCE)
Description

The European Banking Authority (EBA) Dear CEO letter, typically issued to provide guidance and expectations for banks on key regulatory issues, emphasizes the need for stringent credit risk management, continuous monitoring, and compliance with evolving regulations.

The primary challenge for banks in monitoring customers and credit risks is the complexity and volume of data that must be continuously analyzed and acted upon. This complexity arises from several factors: the variety of triggers, the volume and complexity of metrics, continuous monitoring, quickly adaptable regulatory compliance, a comprehensive 360-degree customer view.

By leveraging DMN modeling & execution banks can effectively meet the EBA’s expectations outlined in the Dear CEO letter. DMN engines provide a robust solution for automated decision-making, continuous monitoring, regulatory compliance, and transparency, ensuring that banks can manage credit risks proactively and efficiently while maintaining the required standards set by the EBA and other regulatory bodies. This alignment not only helps in fulfilling regulatory obligations but also strengthens the overall financial health and stability of the bank.

During his presentation Stefaan Lambrecht will demonstrate an end-to-end solution to these challenges inspired by a real-life case, and making use of an integrated use of DMN, CMMN and BPMN.


Mastering Decision Centric Orchestration

Balancing Human Insight and AI Automation for Justifiable, Context-Aware Business Choices

Presented By
Denis Gagne, CEO & CTO, Trisotech
Description

In today’s dynamic business landscape, decision-centric orchestration is pivotal for organizational success. This presentation delves into the diverse spectrum of decisions within an organization, ranging from those that are purely human to those that are fully automated. We will explore how various types of artificial intelligence (AI) support decision automation, with a particular emphasis on the crucial role of context in making informed business choices. Attendees will gain a comprehensive understanding of how context-aware AI can enhance decision-making processes, ensuring that outcomes are not only efficient but also relevant to specific business needs.

Highlighting the necessity of explainable and justifiable decisions, we will discuss the imperative of maintaining human oversight in all business processes. The presentation will address the challenges and opportunities associated with integrating AI into decision-making frameworks, focusing on the balance between human intuition and AI-driven efficiency. Attendees will learn strategies for achieving a harmonious integration of these elements, ensuring that their organization’s decision-making is robust, transparent, and aligned with ethical standards. Ultimately, this session aims to equip business leaders with the knowledge and tools to leverage both human insight and AI automation for superior organizational performance.



What Is a Decision Service?

By Bruce Silver

Read Time: 5 Minutes

Beginning DMN modelers might describe a decision service as that rounded rectangle shape in a DRD that behaves similarly to a BKM. That’s true, but it is a special case. Fundamentally, a decision service is the unit of execution of DMN logic, whether that is invoked by a decision in the DRD, a business rule task in BPMN, a decision task in CMMN, or an API call in any external client application or process. Whenever you execute DMN, whether in production or simply testing in the modeling environment, you are executing a decision service.

In Trisotech Decision Modeler, that is the case even if you never created such a service yourself. That’s because the tool has created one for you automatically, the Whole Model Decision Service, and used it by default in model testing. In fact, a single decision model is typically the source for multiple decision services, some created automatically by the tool and some you define manually, and you can select any one of them for testing or deployment. In this post we’ll see how that works.

Service Inputs, Outputs, and Encapsulated Decisions

As with any service, the interface to a decision service is defined by its inputs and outputs. The DMN model also defines the internal logic of the decision service, the logic that computes the service’s output values from its input values. Unlike a BKM, where the internal logic is a single boxed expression, the internal logic of a decision service is defined by a DRD.

When the service is invoked by a decision in a DMN model, typically it has been defined in another DMN model and imported to the invoking model, such as by dragging from the Digital Enterprise Graph. But otherwise, the service definition is a fragment of a larger decision model, including possibly the entire DRD, and is defined in that model.

Within that fragment, certain decisions represent the service outputs, other decisions and input data represent the inputs, and decisions in between the outputs and inputs are defined as “encapsulated”, meaning they are used in the service logic but their values are not returned in the service output. When you execute a decision service, you supply values to the service inputs and the service returns the values of its outputs.

One Model, Many Services

In Trisotech’s Whole Model Decision Service, the inputs are all the input data elements in the model, the outputs are all “top-level” decisions – that is, decisions that are not information requirements for other decisions. All other decisions in the DRD are encapsulated. In addition to this service, Trisotech automatically creates a service for each DRD page in the model, named Diagram Page N. If there is only one DRD page in the model, it will be the same as the Whole Model Decision Service, but if your model has imported a decision service or defines one independently of the invoking DRD, an additional Diagram Page N service will reflect that one.

All that is just scratching the surface, because quite often you will want to define additional decision services besides those automatically created. For example, you might want your service to return one or more decisions that are defined as encapsulated in the Whole Model Decision Service. Or you might want some inputs to your service to be not input data elements but supporting decisions. In fact, this is very common in Trisotech Low-Code Business Automation models, where executable BPMN typically invokes a sequence of decision services, each a different fragment of a single DMN model. In BPMN, if you are invoking the Whole Model Decision Service it's best to rename it, because the BPMN task that invokes it inherits the decision service name as the task name.

Defining a Decision Service

So how do you define a decision service? The DMN spec describes one way, but it is not the best way. The way described in the spec is as a separate DRD page containing an expanded decision service shape, a resizable rounded rectangle bisected by a horizontal line. The shape label is the service name. Decisions drawn above the line are output decisions, those below the line are encapsulated decisions. Service inputs – whether input data or decisions – are drawn outside the service shape, as are any BKMs invoked by decisions in the service. Information requirements and knowledge requirements are drawn normally.

The better way, at least in Trisotech Decision Modeler, is the decision service wizard.

In the wizard, you first select the output decisions from those defined in your model, and the wizard populates the service inputs with their direct information requirements. You can then promote selected inputs to encapsulated, and the wizard recalculates the needed inputs. You can keep doing that until all service inputs are input data, or you can stop anywhere along the way. The reason this is better is that it ensures that all the logic needed to compute the output values from the inputs is properly captured in encapsulated decisions. You cannot guarantee that with the expanded decision service shape method.

Testing DMN Models

Trisotech’s Automation feature lets you test the logic of your DMN models, and I believe that is critically important. On the Execution ribbon, the Test button invites you first to select a particular decision service to test. If you forget to do this, the service it selects by default depends on the model page you have open at the time you click Test.

In Test, the service selector dropdown lists even more services than the automatically generated ones and those you created manually, which are listed above a gray line in the dropdown. Below the line is listed a separate service for every decision in the model, named with the decision name, with direct information requirements as the inputs. (For this reason, you should not name a decision service you create with the name of a decision, as this name conflicts with the generated service.) In addition, below the line is listed one more: Whole model, whose inputs are all the input data elements and outputs are all the decisions in the model. It’s important to note that these below-the-line services are available only for testing in Decision Modeler. If you want to deploy one of them, you need to manually create it, in which case it is listed above the line.

In Test, your choice of decision service from the dropdown determines the inputs expected by the tool. As an alternative to the normal HTML form, which is based on the datatypes you have assigned to the inputs, you can select an XML or JSON file with the proper datatype, or use a previously saved test case.

Invoking a Decision Service in DMN

Invoking a decision service in DMN works the same way as invoking a BKM. On the DRD containing the invocation, the decision service is shown as a collapsed decision service shape linked to the invoking decision with a knowledge requirement.

The invoking decision can use either a boxed invocation or literal invocation. In the former, service inputs are identified by name; in the latter, input names are not used. Arguments are passed in the order of the parameters in the service definition, so you may need to refer to the service definition to make sure you have that right.
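For example, assuming a decision service named Approval Service whose definition lists its parameters in the order Credit Score, then Affordability (the names here are hypothetical), a literal invocation from a decision would look like

Approval Service(Applicant Credit Score, Applicant Affordability)

where the arguments are bound to Credit Score and Affordability purely by position.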

Invoking a Decision Service in BPMN

In Business Automation models it is common to model almost any kind of business logic as a DMN decision service invoked by a business rule task, also called a decision task. In Trisotech Workflow Modeler, you need to link the task to a decision service in your workspace; it is not necessary to first Deploy the service. (Deployment is necessary to invoke the service from an external client.) As mentioned previously, the BPMN task inherits the name of the decision service. By default, the task inputs are the decision service inputs and the task outputs are the decision service outputs.

Data associations provide data mappings between process variables – BPMN data objects and data inputs – to the task inputs, and from task outputs to other process variables – BPMN data objects and data outputs. On the Trisotech platform, these data mappings are boxed expressions using FEEL, similar to those used in DMN.

The Bottom Line

The important takeaway is that a decision service is more than a fancy BKM that you will rarely use. If you are actually employing DMN in your work, you will use decision services all the time, both for logic testing and deployment, and for providing business logic in Business Automation models. The decision service wizard makes it easy.

If you want to find out more about how to define and use decision services, check out DMN Method and Style 3rd edition, with DMN Cookbook, or my DMN Method and Style training, which includes post-class certification.

Follow Bruce Silver on Method & Style.


FEEL Operators Explained

By Bruce Silver

Read Time: 5 Minutes

Although DMN’s expression language FEEL was designed to be business-friendly, it remains intimidating to many. That has led to the oft-heard charge that “DMN is too hard for business users”. That’s not true, at least for those willing to learn how to use it. Although the Microsoft Excel Formula language is actually less business-friendly than FEEL, somehow you never hear that “Excel is too hard for business users.”

One key reason why FEEL is more business-friendly than the Excel Formula language, which they now call Power FX, is its operators. FEEL has many, and Power FX has very few. In this post we’ll discuss what operators are, how they simplify the expression syntax, and how DMN boxed expressions make some FEEL operators more easily understood by business users.

It bears repeating that an expression language is not the same as a programming language. A programming language has statements. It defines variables, calculates and assigns their values. You could call DMN as a whole a programming language, but the expression language FEEL does not define variables or assign their values. Those things are done graphically, in diagrams and tables – the DRD and boxed expressions. FEEL expressions are simply formulas that calculate values: data values in, data values out.

Functions and Operators

Those formulas are based on two primary constructs: functions and operators.

The logic of a function is specified in the function definition in terms of inputs called parameters. The same logic can be reused simply by invoking the function with different parameter values, called arguments. The syntax of function invocation – both in FEEL and Excel Formulas – is the function name immediately followed by parentheses enclosing a comma-separated list of arguments. FEEL provides a long list of built-in functions, meaning the function names and their parameters are defined by the language itself. Excel Formulas do the same. In addition, DMN allows modelers to create custom functions in the form of Business Knowledge Models (BKMs) and decision services, something Excel does not allow without programming.

Operators are based on reserved words and symbols in the expression with meaning defined by the expression language itself. There are no user-defined operators. They do not use the syntax of a name followed by parentheses enclosing a list of arguments. As a consequence, the syntax of an expression using operators is usually shorter, simpler, and easier to understand than an expression using functions.

You can see this from a few examples in FEEL where you could use either a function or an operator. One is simple addition. Compare the syntax of the expression adding variables a and b using the sum() function

sum(a, b)

with its equivalent using the addition operator +:

a + b

The FEEL function list contains() and the in operator do the same thing, test containment of a value in a list. Compare

list contains(myList, "abc")

with

"abc" in myList

Both FEEL and Excel support the basic arithmetic operators like +, -, *, and /, comparison operators like =, >, or <=, and string concatenation. But those are essentially the only operators provided by Excel, whereas FEEL provides several more. It is with these more complex operators that FEEL’s business-friendliness advantage stands out.
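String concatenation is a small example of the difference: FEEL reuses the + operator, as in "Dear " + Customer Name (a hypothetical text variable), while Excel uses the & operator.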

if..then..else

Let’s start with the conditional operator, if..then..else. These keywords comprise an operator in FEEL, where Excel can use only functions. Compare the FEEL expression

if Credit Score = "High" and Affordability = "OK" then "Approved" else "Disapproved"

with Excel’s function-based equivalent:

IF(AND(Credit Score = "High", Affordability = "OK"), "Approved", "Disapproved")

The length is about the same but the FEEL is more human-readable. Of course, the Excel expression assumes you have assigned a variable name to the cells – something no one ever does. So you would be more likely to see something like this:

IF(AND(B3 = "High", C3 = "OK"), "Approved", "Disapproved")

That is a trivial example. A more realistic if..then..else might be

if Credit Score = "High" and Affordability = "OK" then "Approved"
        else if Credit Score = "High" and Affordability = "Marginal" then "Referred"
        else "Disapproved"

That’s longer but still human-readable. Compare that with the Excel formula:

IF(AND(Credit Score = "High", Affordability= "OK"), "Approved", IF(AND(Credit Score = "High",
         Affordability = "Marginal"), "Referred", "Disapproved"))

Even though the FEEL syntax is fairly straightforward, DMN includes a conditional boxed expression that enters the if, then, and else expressions in separate cells, in theory making the operator friendlier for some users and less like code. Using that boxed expression, the logic above looks like this:

Filter

The FEEL filter operator is square brackets enclosing either a Boolean or integer expression, immediately following a list. When the enclosed expression is a Boolean, the filter selects items from the list for which the expression is true. When the enclosed expression evaluates to positive integer n, the filter selects the nth item in the list. (With negative integer n, it selects the nth item counting backward from the end.) In practice, the list you are filtering is usually a table, a list of rows representing table records, and the Boolean expression references columns of that table. I wrote about this last month in the context of lookup tables in DMN. As we saw then, if variable Bankrates is a table of available mortgage loan products like the one below,

then the filter

Bankrates[lenderName = "Citibank"]

selects the Citibank record from this table. Actually, a Boolean filter always returns a list, even if it contains just one item, so to extract the record from that list we need to append a second integer filter [1]. So the correct expression is

Bankrates[lenderName = "Citibank"][1]

Excel Formulas do not include a filter operator, but again use a function: FILTER(table, condition, else value). So if we had assigned cells A2:D11 to the name Bankrates and the column A2:A11 to the name lenderName, the equivalent Excel Formula would be

FILTER(Bankrates, lenderName = "Citibank", "")

but would more likely be entered as

FILTER(A2:D11, A2:A11 = "Citibank", "")

FEEL’s advantage becomes even more apparent with multiple query criteria. For example, the list of zero points/zero fees loan products in FEEL is

Bankrates[pointsPct = 0 and fees = 0]

whereas in Excel you would have

FILTER(A2:D11, (C2:C11=0)*(D2:D11=0), "")

There is no question here that FEEL is more business-friendly.

Iteration

The for..in..return operator iterates over an input list and returns an output list. It means for each item in the input list, to which we assign a dummy range variable name, calculate the value of the return expression:

for <range variable> in <input list> return <return expression, based on range variable>

It doesn’t matter what you name the range variable, also called the iterator, as long as it does not conflict with a real variable name in the model. I usually just use something generic like x, but naming the range variable to suggest the list item makes the expression more understandable. In the most common form of iteration, the input list is some expression that represents a list or table, and the range variable is an item in that list or row in that table.

For example, suppose we want to process the Bankrates table above and create a new table Payments by Lender with columns Lender Name and Monthly Payment, using a requested loan amount of $400,000. And suppose we have a BKM Lender Payment, with parameters Loan Product and Requested Amount, that creates one row of the new table, a structure with components Lender Name and Monthly Payment. We will iterate a call to this BKM over the rows of Bankrates using the for..in..return operator. Each iteration will create one row of Payments by Lender, so at the end we will have a complete table.

The literal expression for Payments by Lender is

for product in Bankrates return Lender Payment(product, Requested Amount)

Here product is the range variable, meaning one row of Bankrates, a structure with four components as we saw earlier. Bankrates is the input list that we iterate over. The BKM Lender Payment is the return expression. Beginners are sometimes intimidated by this literal expression, so, as with if..then..else, DMN provides an iterator boxed expression that enters the for, in, and return expressions in separate cells.

The BKM Lender Payment uses a context boxed expression with no final result box to create each row of the table. The context entry Monthly Payment invokes another BKM, Loan Amortization Formula, which calculates the value based on the adjusted loan amount, the interest rate, and fees.

Excel Formulas do not include an iteration function. Power FX’s FORALL function provides iteration, but it is not available in Excel. To iterate an expression in Excel you are expected to fill down in the spreadsheet.

The FEEL operators some..in..satisfies and every..in..satisfies represent another type of iteration. The range variable and input list are the same as with for..in..return. But in these expressions the satisfies clause is a Boolean expression, and the iteration operator returns not a list but a simple Boolean value. The one with some returns true if any iteration returns true, and the one with every returns true only if all iterations return true.

For example, again using Bankrates,

some product in Bankrates satisfies product.pointsPct = 0 and product.fees = 0

returns true, while

every product in Bankrates satisfies product.pointsPct = 0 and product.fees = 0

returns false. The iterator boxed expression works with this operator as well.

The bottom line is this: FEEL operators are key to its combination of expressive power and business-friendliness, surpassing that of Microsoft Excel Formulas. Modelers should not be intimidated by them. For detailed instruction and practice in using these and other DMN constructs, check out my DMN Method and Style training. You get 60-day use of Trisotech Decision Modeler and post-class certification at no additional cost.

Follow Bruce Silver on Method & Style.

Lookup Tables in DMN

By Bruce Silver

Read Time: 5 Minutes

Lookup tables are a common logic pattern in decision models. To model them, I have found that beginners naturally gravitate to decision tables, being the most familiar type of value expression. But decision tables are almost never the right way to go. One basic reason is that we generally want to be able to modify the table data without creating a new version of the decision model, and with decision tables you cannot do that. Another reason is that decision tables must be keyed in by hand, whereas normal data tables can be uploaded from Excel, stored in a cloud datastore, or submitted programmatically as JSON or XML.

The best way to model a lookup table is a filter expression on a FEEL data table. There are several ways to model the data table – as submitted input data, a cloud datastore, a zero-input decision, or a calculated decision. Each way has its advantages in certain circumstances. In this post we’ll look at all the possibilities.

As an example, suppose we have a table of available 30-year fixed rate home mortgage products, differing in the interest rate, points (an addition to the requested amount as a way to “buy down” the interest rate), and fixed fees. You can find this data on the web, and the rates change daily. In our decision service, we want to allow users to find the current rate for a particular lender, and in addition find the monthly payment for that lender, which depends on the requested loan amount. Finding the lender’s current rate is a basic lookup of unmodified external data. Finding the monthly payment using that lender requires additional calculation. Let’s look at some different ways to model this.

We can start with a table of lenders and rates, either keyed in or captured by web scraping. In Excel it looks like this:

When we import the Excel table into DMN, we get a FEEL table of type Collection of tBankrate, shown here:

Each row of the table, type tBankrate, has components matching the table columns. Designating a type as tPercent simply reminds us that the number value represents a percent, not a decimal.

Here is one way to model this, using a lookup of the unmodified external data, and then applying additional logic to the returned value.

We define the input data Bankrates as type Collection of tBankrate and look up my rate – the one that applies to input data my lender. The lookup decision my rate uses a filter expression. A data table filter typically has the format

<table>[<Boolean expression of table columns>]

in which the filter, enclosed in square brackets, contains a Boolean expression. Here the table is Bankrates and the Boolean expression is lenderName = my lender. In other words, select the row for which column lenderName matches the input data my lender.

A filter always returns a list, even if it contains just one item. To extract the item from this list, we use a second form of a filter, in which the square brackets enclose an integer:

<list>[<integer expression>]

In this case, we know our data table just has a single entry for each lender, so we can extract the selected row from the first filter by appending the filter [1]. The result is no longer a list but a single row in the table, type tBankrate.
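
Putting the two filter forms together, the value expression for my rate is simply

Bankrates[lenderName = my lender][1]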

The decision my payment uses a BKM holding the Loan Amortization Formula, a complicated arithmetic expression involving the loan principal (p), interest rate (r), and number of payments (n), in this case 360.

Decision my payment invokes this BKM using the lookup result my rate. Input data loan amount is just the borrower’s requested amount, but the loan principal used in Loan Amortization Formula (parameter p) also includes the lender’s points and fees. Since pointsPct and ratePct in our data table are expressed as percent, we need to divide by 100 to get their decimal value used in the BKM formula.
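
As a rough sketch (the exact treatment of points and fees may differ in your own model), the invocation inside my payment looks something like this:

Loan Amortization Formula(loan amount * (1 + my rate.pointsPct / 100) + my rate.fees, my rate.ratePct / 100, 360)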

When we run it with my lender “Citibank” and loan amount $400,000, we get the result shown here.

That is one way to do it. Another way is to enrich the external data table with additional columns, such as the monthly payment for a given loan amount, and then perform the lookup on this enriched data table. In that case the data table is a decision, not input data.

Here the enriched table Payments by Bank has an additional column, payment, based on the input data loan amount. Adding a column to a table involves iteration over the table rows, each iteration generating a new row including the additional column. In the past I have typically used a context BKM with no final result box to generate each new row. But actually it is simpler to use a literal expression with the context put() function, as no BKM is required to generate the row, although we still need the Loan Amortization Formula. (Simpler for me, but the resulting literal expression is admittedly daunting, so I’ll show you an alternative boxed expression that breaks it into simpler pieces.)

context put(), with parameters context, keys, and value, appends components (named by keys) to an existing structure (context) and assigns their value. If keys includes an existing component of context, value overwrites the previous value. Here keys is the new column name “payment”, and value is calculated using the BKM Loan Amortization Formula. So, as a single literal expression, Payments by Bank looks like this:
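
A minimal sketch of that literal expression, assuming points are applied as a percentage of the requested amount and fees are added to the principal (the exact adjustment may differ), is:

for row in Bankrates return context put(row, "payment", decimal(Loan Amortization Formula(loan amount * (1 + row.pointsPct / 100) + row.fees, row.ratePct / 100, 360), 2))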

Here we used literal invocation of the BKM instead of boxed invocation, and we applied the decimal() function to round the result.

Alternatively, we can use the iterator boxed expression instead of the literal for..in..return operator and invocation boxed expressions for the built-in functions decimal() and context put() as well as the BKM. With FEEL built-in functions you usually use literal invocation but you can use boxed invocation just as well.

Now my payment is a simple lookup of the enriched data table Payments by Bank, appending the [1] filter to extract the row and then .payment to extract the payment value for that row.
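
In FEEL, that lookup is essentially

Payments by Bank[lenderName = my lender][1].payment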

When we run it, we get the same result for Citibank, loan amount $400,000:

The enriched data table now allows more flexibility in the queries. For example, instead of finding the payment for a particular lender, you could use a filter expression to find the loan product(s) with the lowest monthly payment:

Payments by Bank[payment=min(Payments by Bank.payment)]

which returns a single record, AimLoan. Of course, you can also use the filter query to select a number of records meeting your criteria. For example,

Payments by Bank[payment < 2650]

will return records for AimLoan, AnnieMac, Commonwealth, and Consumer Direct.

Payments by Bank[pointsPct=0 and fees=0]

will return records for zero-points/zero-fee loan products: Aurora Financial, Commonwealth, and eLend.

Both of these methods require submitting the data table Bankrates at time of execution. Our example table was small, but in real projects the data table could be quite large, with thousands of rows. This is more of a problem for testing in the modeling environment, since with the deployed service the data is submitted programmatically as JSON or XML. But to simplify testing, there are a couple of ways you can avoid having to input the data table each time.

You can make the data table a zero-input decision using a Relation boxed expression. On the Trisotech platform, you can populate the Relation by uploading from Excel. To run this you merely need to enter values for my lender and loan amount. You can do this in production as well, but remember, with a zero-input decision you cannot change the Bankrates values without versioning the model.

Alternatively, you can leave Bankrates as input data but bind it to a cloud datastore. Via an admin interface you can upload the Excel table into the datastore, where it is persisted as a FEEL table. So in the decision model, you don’t need to submit the table data on execution, and you can periodically update the Bankrates values without versioning the model. Icons on the input data in the DRD indicate its values are locked to the datastore.

Lookup tables using filter expressions are a basic pattern you will use all the time in DMN. For more information on using DMN in your organization’s decision automation projects, check out my DMN Method and Style training or my new book, DMN Method and Style 3rd edition, with DMN Cookbook.

Follow Bruce Silver on Method & Style.

Going from Zero to Success using BPM+ for Healthcare.

Part I: Learning Modeling and Notation Tools

By Dr. John Svirbely, MD

Read Time: 3 Minutes

Welcome to the first installment of this informative three-part series providing an overview of the resources and the success factors required to develop innovative, interoperable healthcare workflow and decision applications using the BPM+ family of open standards. This series will unravel the complexities and necessities for achieving success with your first clinical guideline automation project. Part I focuses on how long it will take you to reach cruising speed for creating BPM+ visual models.

When starting something new, people often ask some common questions. One is how long will it take to learn the new skills required. This impacts how long it will take to complete a project and therefore costs. Learning something new can also be somewhat painful when we are set in our old ways.

Asking such questions is important, since there is often a disconnect between what is promoted online and the reality. I can give my perspective based on using the Trisotech tools for several years, starting essentially from scratch.

How long does it take to learn?

The simple answer – it depends. A small project can be tackled by a single person quite rapidly. That is how I got started. Major projects using these tools should be approached as team projects rather than something an individual can do. Sure, there are people who can master a wide range of skills, but in general most people are better at some things than others. Focusing on a few things is more productive than trying to do everything. A person can become familiar with the range of tools, but they need to realize that they may only be able to unlock a part of what is needed to automate a clinical guideline.

The roles that need to be filled to automate a clinical guideline with BPM+ include:

1. subject matter expert (SME)
2. medical informaticist
3. visual model builder
4. hospital programmer/system integrator
5. project manager
6. and, of course, tester

A team may need to be composed of various people who bring a range of skills and fill various roles. A larger project may need more than one person in some of these roles.

The amount of time needed to bring a subject matter expert (SME) up to speed is relatively short. Most modeling diagrams can be understood and followed after a few days. I personally use a tool called the Knowledge Entity Modeler (KEM) to document domain knowledge; this allows specification of term definitions, clinical coding, concept maps, and rule definitions. The KEM is based on the SBVR standard, but its visual interface makes everything simple to grasp. Other comparable visual tools are available. The time spent is quickly compensated for by greater efficiency in knowledge transfer.

The medical informaticist has a number of essential tasks, such as controlling terminology, standardizing data, and assigning code terms. This person must understand the nuances of how clinical data is acquired, including FHIR. The importance of these services cannot be overstated, since failures here can cause many problems later as the number of models increases or as models from different sources are installed.

The model builder uses the various visual modeling languages (DMN, BPMN, CMMN) according to the processes and decisions specified by the SME. These tools can be learned quickly to some extent, but there are nuances that may take years to master. While some people can teach themselves from books or videos, the benefits of taking a formal course vastly outweigh the cost and time spent. Trisotech offers eLearning modules that you can learn from at your own pace.

When building models, there is a world of difference between a notional model and one that is automatable. Notional models are good for knowledge capture and transfer. A notional model may look good on paper only to fail when one tries to automate it. The reasons for this will be discussed in Part 3 of this blog series.

The hospital programmer or system integrator is the person who connects the models with the local EHR or FHIR server so that the necessary data is available. Tools based on CDS Hooks or SMART on FHIR can integrate the models into the clinical workflow so that they can be used by clinicians. This person may not need to learn the modeling tools to perform these tasks.

The job of the project manager is primarily standard project management. Some knowledge of the technologies is helpful for understanding the problems that arise. This person’s main task is to orchestrate the entire project so that it keeps focused and on schedule. In addition, the person keeps chief administrators up to date and tries to get adequate resources.

The final player is the tester. Testing prior to release is best done independently of other team members to maintain objectivity. There is potential for liability with any medical software, and these tools are no exception. This person also oversees other quality measures such as bug reports and complaints. Knowing the modeling languages is helpful but understanding how to test software is more important.

My journey

I am a retired pathologist and not a programmer. While having used computers for many years, my career was spent working in community hospitals. When I first encountered the BPM+ standards, it took several months and a lot of prodding before I was convinced to take formal training. I have never regretted that decision and wish that I had taken training sooner.

I started with DMN. On-line training takes about a month. After an additional month I had enough familiarity to become productive. In the following 12 months I was able to generate over 1,000 DMN models while doing many other things. It was not uncommon to generate 4 models in one day.

I learned BPMN next. Training online again took a month. This takes a bit longer to learn because it requires an appreciation of how to design a process so that it executes optimally. Initially a model would take me 2-3 days to complete, but later this dropped to less than a day. Complex models can take longer, especially when multiple people need to be orchestrated and exception handling is introduced.

CMMN, although offering great promise for healthcare, is a tough nut to crack. Training is harder to arrange, and few vendors offer automatable versions. This standard is better saved until the other standards have been mastered.

What are the barriers?

Most of the difficulties that I have encountered have not been related to using the standards. They usually arise from organizational or operational issues. Some common barriers that I have encountered include:

1. lack of clear objectives, or objectives that constantly change.
2. lack of commitment from management, with insufficient resources.
3. unrealistic expectations.
4. rushing into models before adequate preparations are made.

If these can be avoided, then most projects can be completed in a satisfactory manner. How long it takes to implement a clinical guideline will be discussed in the next blog.

DMN 101

By Bruce Silver

Read Time: 4 Minutes

Most of my past posts about DMN have assumed that the reader knows what it is and may be using it already. But there is undoubtedly a larger group of readers who have heard about it but don’t really understand what it’s all about. And possibly an equally large group that have heard about it from detractors and have some misconceptions. So in this post I will try to explain what it is and how it works.

DMN, which stands for Decision Model and Notation, is a model-based language for business decision logic. Furthermore, it is a vendor-neutral standard maintained by the Object Management Group (OMG), the organization behind BPMN, CMMN, and other standards. As with OMG’s other business modeling standards, “model” means the names, meaning, and execution behavior of each language element are formally defined in a UML metamodel, and “notation” means that a significant portion of the language is defined graphically, in diagrams and tables using specific shapes and symbols linked to model elements. In other words, the logic of a business decision, business process, or case is defined by diagrams having a precise meaning, independent of the tool that created them. The main reason for defining the logic graphically is to engage non-programmers, aka business people, in their creation and maintenance.

DMN models the logic of operational decisions, those made many times a day following the same explicit rules. Examples include approval of a loan, validation of submitted data, or determining the next best action in a customer service request. These decisions typically depend on multiple factors, and the logic is frequently complex. The most familiar form of DMN logic is the decision table. All DMN tools support decision tables, and that’s because business people understand them readily with zero training. Consider the decision table below, which estimates the likelihood of qualifying for a home mortgage:

Qualifying for a home mortgage depends primarily on three factors: the borrower’s Credit Score, a measure of creditworthiness; the Loan-to-Value ratio, dividing the loan amount by the property appraised value, expressed as a percent; and the borrower’s Debt-to-Income ratio, dividing monthly housing costs plus other loan payments by monthly income, expressed as a percent. Those three decision table inputs are represented by the columns to the left of the double line. The decision table output, here named Loan Prequalification, with possible values “Likely approved”, “Possibly approved”, “Likely disapproved”, or “Disapproved”, is the column to the right of the double line. Below the column name is its datatype, including allowed values. Each numbered row of the table is a decision rule. Cells in the input columns are conditions on the input, and if all input conditions for a rule evaluate to true, the rule is said to match and the value in the output column is selected as the decision table output value.

A hyphen in an input column means the input is not used in the rule; the condition is true by default. So the first rule says, if Credit Score is less than 620, Loan Prequalification is “Disapproved”. Numeric ranges are shown as values separated by two dots, all enclosed in parentheses or square brackets. Parenthesis means exclude the endpoint in the range; square bracket means include it. So rule 4 says, if Credit Score is greater than or equal to 620 and less than 660, and LTV Pct is greater than 75 and less than or equal to 80, and DTI Pct is greater than 36 and less than or equal to 45, then Loan Prequalification is “Likely disapproved”. Once you get the numeric range notation, the meaning of the decision table is clear, and this is a key reason why DMN is considered business-friendly.
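
Written in FEEL's range notation, rule 4's three input entries are [620..660), (75..80], and (36..45].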

But if you think harder about it, you see that while Credit Score might be a known input value, LTV Pct and DTI Pct are not. They are derived values. They are calculated from known input values such as the loan amount, appraised property value, monthly income, housing expense including mortgage payment, tax, and insurance, and other loan payments. In DMN, those calculations are provided as supporting decisions to the top-level decision Loan Prequalification. Each calculation itself could be complex, based on other supporting decisions. This leads to DMN’s other universally supported feature, the Decision Requirements Diagram, or DRD. Below you see the DRD for Loan Prequalification. The ovals are input data, known input values, and the rectangles are decisions, or calculated values. The solid arrows pointing into a decision, called information requirements, define the inputs to the decision’s calculations, either input data or supporting decisions.

Like decision tables, DRDs are readily understood by business users, who can create them to outline the dependencies of the overall logic. In the view above, we show the datatype of each input data and decision in the DRD. Built-in datatypes include things like Text, Number, Boolean, and collections of those, but DMN also allows the modeler to create user-defined types representing constraints on the built-in types – such as the numeric range 300 to 850 for type tCreditScore – and structured types, specified as a hierarchy of components. For example, tLoan, describing the input data Loan, is the structure seen below:

Individual components of the structure are referenced using a dot notation. For example, the Loan Amount value is Loan.Amount.

A complete DRD, therefore, including datatypes for all supporting decisions down to input data, provides significant business value and can be created easily by subject matter experts. As a consequence, all DMN tools support DRDs. But by themselves, the DRD and a top-level decision table are not enough to evaluate the decision. For that you need to provide the logic for the supporting decisions. And here there is some disagreement within the DMN community. Some tool vendors believe that DMN should be used only to provide model-based business requirements. Those requirements are then handed off to developers for completion of the decision logic using some other language, either a programming language like Java or a proprietary business rule language like IBM ODM. I call those tools DMN Lite, because fully implemented DMN allows subject matter experts to define the complete, fully executable decision logic themselves, without programming.

Full DMN adds two key innovations to DRDs and decision tables: the expression language FEEL and standardized tabular formats called boxed expressions. Using boxed expressions and FEEL, real DMN tools let non-programmers create executable decision models, even when the logic is quite complex. So you can think of DMN as a Low-Code language for decision logic that is business-friendly, transparent, and executable.

In that language, the shapes in the DRD define variables (with assigned datatypes), with the shape labels defining the variable names. Defining variables by drawing the DRD explains an unusual feature of FEEL, which is that variable names may contain spaces and other punctuation not normally allowed by programming languages. The value expression of each individual decision is the calculation of that decision variable’s value based on the values of its inputs, or information requirements. It is the intention of FEEL and boxed expressions that subject matter experts who are not programmers can create the value expressions themselves.

FEEL is called an expression language, a formula language like Excel formulas, as opposed to a programming language. FEEL just provides a formula for calculating an output value based on a set of input values. It does not create the output and input variables; the DRD does that. Referring back to our DRD, let’s look at the value expression for LTV Pct, the Loan-to-Value ratio expressed as a percent. The FEEL expression looks like this:
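
A minimal sketch of that expression, assuming the appraised value is an input data named Appraised Value, would be:

Loan.Amount / Appraised Value * 100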

It’s simple arithmetic. Anyone can understand it. This is the simplest boxed expression type, called a literal expression, just a FEEL formula in a box, with the decision name and datatype in a tab at the top. Decision table is another boxed expression type, and there are a few more. Each boxed expression type has a distinct tabular format and meaning, and cells in those tables are FEEL expressions. In similar fashion, here is the literal expression for DTI Pct:

The tricky one is Mortgage Payment. It’s also just arithmetic, based on the components of Loan. But the formula is hard to remember, even harder to derive. And it’s one that in lending is used all the time. For that, the calculation is delegated to a bit of reusable decision logic called a Business Knowledge Model, or BKM. In the DRD, it’s represented as a box with two clipped corners, with a dashed arrow connecting it to a decision. A BKM does not have incoming solid arrows, or information requirements. Instead, its inputs are parameters defined by the BKM itself. BKMs provide two benefits: One, they allow the decision modeler to delegate the calculation to another user, possibly with more technical or subject matter knowledge, and use it in the model. Two, they allow that calculation to be defined once and reused in multiple decision models. The dashed arrow, called a knowledge requirement, signifies that the decision at the head of the arrow passes parameter values to the BKM, which then returns its output value to the decision. We say the decision invokes the BKM, like calling an API. The BKM parameter names are usually different from the variable names in the decision that invokes them. Instead, the invocation is a data mapping.

Here I have that BKM previously saved in my model repository under the name Loan Amortization Formula. On the Trisotech platform, I can simply drag it out onto the DRD and replace the BKM Payment Calculation with it. The BKM definition is shown below, along with an explanation of its use from the BKM Description panel. It has three parameters – p, r, and n, representing the loan amount, rate, and number of payments over the term – shown in parentheses above the value expression. The value expression is again a FEEL literal expression. It can only reference the parameters. The formula is just arithmetic – the ** symbol is FEEL’s exponentiation operator – but as you see, it’s complicated.
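
For reference, the standard fixed-rate amortization formula, which is presumably close to what the BKM's literal expression encodes (treating r as the annual rate as a decimal and n as the number of monthly payments), is:

p * r/12 / (1 - (1 + r/12) ** -n)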

The decision Mortgage Payment invokes the BKM by supplying values to the parameters p, r, and n, mappings from the decision’s own input, Loan. We could use a literal expression for this, but DMN provides another boxed expression type called Invocation, which is more business-friendly:

In a boxed Invocation, the name of the invoked BKM is below the tab, and below that is a two-column table, with the BKM parameter names in the first column and their value expressions in the second column. Note that because Loan.Rate Pct is expressed as a percent, we need to divide its value by 100 to get r, which is a decimal value, not a percent.
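
Expressed instead as a literal expression, the same invocation would look roughly like the sketch below; the mapping for n is an assumption, since it depends on how the loan term is represented in tLoan:

Loan Amortization Formula(Loan.Amount, Loan.Rate Pct / 100, Loan.Term * 12)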

At this point, our decision model is complete. But we need to test it! I can’t emphasize enough how important it is to ensure that your decision logic runs without error and returns the expected result. So let’s do that now, using the input data values below:

Here Loan Prequalification returns “Possibly approved”, and the supporting decision values look reasonable. We can look at the decision table and see that rule 8 is the one that matches.

So you see, DMN is something subject matter experts who are not programmers can use in their daily work. Of course, FEEL expressions can do more than arithmetic, and like Excel formulas, the language includes a long list of built-in functions that operate on text, numbers, dates and times, data structures, and lists. I’ve discussed much of that in my previous posts on DMN. But learning to use the full power of FEEL and boxed expressions – which you will need in real-world decision modeling projects – generally requires training. Our DMN Method and Style training gives you all that, including 60-day use of Trisotech Decision Modeler, lots of hands-on exercises, quizzes to test your understanding, and post-class certification in which you need to create a decision model containing certain required elements. It’s actually in perfecting that certification model that the finer points of the training finally sink in. And you need that full 60 days to really understand DMN’s capabilities.

DMN lets you accelerate time-to-value by engaging subject matter experts directly in the solution. If you want to see if DMN is right for you, check out the DMN training.

Follow Bruce Silver on Method & Style.

More On DMN Validation

By Bruce Silver

Read Time: 5 Minutes

This month we return to a topic I’ve written about twice before, data validation in DMN models. This post, in which I will describe a third method, is hopefully the last word.

Beginning decision modelers generally assume that the input data supplied at execution time is complete and valid. But that is not always the case, and when input data is missing or invalid the invoked decision service returns either an error or an incorrect result. When the service returns an error result, typically processing stops at the first one and the error message generated deep within the runtime is too cryptic to be helpful to the modeler. So it is important to precede the main decision logic with a data validation service, either as part of the same decision model or a separate one. It should report all validation errors, not stop at the first one, and should allow more helpful, modeler-defined error messages. There is more than one way to do that, and it turns out that the design of that validation service depends on details of the use case.

The first method, which I wrote about in April 2021, uses a Collect decision table with generalized unary tests to find null or invalid input values, as you see below. When I introduced my DMN training, I thought this was the best way to do it, but it’s really ideal only for the simple models I was using in that training. That is because the method assumes that values used in the logic are easily extracted from the input data, and that the rule logic is readily expressed in a generalized unary test. Moreover, because an error in the decision table will usually cause the whole table to fail without indicating which rule had the problem, the method assumes a modest number of rules with fairly simple validation expressions. As a consequence, this method is best used when:

The second method, which I wrote about in March 2023, takes advantage of enhanced type checking against the item definition, a new feature of DMN 1.5. Unlike the first method, this one returns an error result when validation errors are present, but it returns all errors, not just the first one, each with a modeler-defined error message. Below you see the enhanced type definition, using generalized unary tests, and the modeler-defined error messages when testing in the Trisotech Decision Modeler. Those same error messages are returned in the fault message when executed as a decision service. On the Trisotech platform, this enhanced type checking can be either disabled, enabled only for input data, or enabled for input data and decisions.

This method of data validation avoids many of the limitations of the first method, but it cannot be used if you want the decision service to return a normal response, not a fault, when validation errors are present. Thus it is applicable when:

More recently I have been involved in a large data validation project in which neither of these methods is ideal. Here the input data is a massive data structure containing several hundred elements to be validated, and we want validation errors to generate a normal response, not a fault, with helpful error messages. Moreover, data values used in the rules are often buried deeply within the structure, and many of them recur, so simply extracting their values properly is non-trivial. Think of a tax return or loan application. And once you’ve extracted the needed data values, the validation rules themselves may be complex, depending on conditions of many other variables in the structure.

For these reasons, neither of the two methods described in my previous posts fit the bill here. Because an element’s validation rule can be a complex expression involving multiple elements, this rules out the type-checking method and is a problem as well for the Collect decision table. Decision tables also add the problem of testing. When you have many rules, some of them are going to be coded incorrectly the first time, and if a rule returns an error the whole decision table fails, so debugging is extremely difficult. Moreover, if a rule fails to return the expected result, you need to be able to determine whether it’s because you have incorrectly extracted the data element value or you have incorrectly defined the rule logic. Your validation method needs to separate those concerns.

This defines a new set of requirements:

The third method thus requires a more complex architecture, comprising:

While overkill for simple validation services, in complex validation scenarios this method has a number of distinct advantages over the other two:

Let’s walk through this third data validation method. We start with the Extraction service. The input data Complex Input has the structure shown here:

In this case there is only one non-repeating component, containing just two child elements, and one repeating component, also containing just two child elements. In the project I am working on, there are around 10 non-repeating components and 50 repeating components, many containing 10 or more child elements. So this model is much simpler than the one in my engagement.

The Extraction DRD has a separate branch for each non-repeating and each repeating component. Repeating element branches must iterate a BKM that extracts the individual elements for that instance.

The decisions ending in “Elements” extract all the variables referenced in the validation rules. These are not identical to the elements contained in Complex Input. For example, element A1 is just the value of the input data element A1, but element A1Other is either the input data element A2, if the value of A1 is “Other”, or null otherwise.

Repeating component branches must iterate a BKM that extracts the variable from a single instance of the branch.

In this case, we are extracting three variables – C1, AllCn, and Ctotal – although AllCn is just used in the calculation of Ctotal, not used in a rule. The goal of Extraction is just to obtain the values of variables used in the validation rules.

The ExtractAll service will be invoked by the Rules, and again the model has one branch for each non-repeating component and one for each repeating component. Encapsulating ExtractAll as a separate service is not necessary in a model this simple, but when there are dozens of branches it helps.

Let’s focus on Repeating Component Errors, which iterates a BKM that reports errors for a single instance of that branch.

In this example we have just two validation rules. One reports an error if element C1 is null, i.e. missing in the input. The other reports an error if element Ctotal is not greater than 0. The BKM here is a context, one context entry per rule, and all context entries have the same type, tRuleData, with the four components shown here. We could have added a fifth component containing the error message text, but here we assume that is looked up from a separate table based on the RuleID.

So the datatype tRepeatingComponentError is a context containing a context, and the decision Repeating Component Errors is a collection of a context containing a context. And to collect all the errors, we have one of these for each branch in the model.

That is an unwieldy format. We’d really like to collect the output for all the rules – with isError either true or false – in a single table. The decision ErrorTable provides that, using the little-known FEEL function get entries(). This function converts a context into a table of key-value pairs, and we want to apply it to the inner context, i.e. a single context entry of Repeating Component Error.
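
For example, get entries({a: 1, b: 2}) returns [{key: "a", value: 1}, {key: "b", value: 2}].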

It might take a minute to wrap your head around this logic. Here fRow is a function definition – basically a BKM as a context entry – that converts the output of get entries() into a table row containing the key as a column. For non-repeating branches, we iterate over each error, calling get entries() on each one. This generates a table with one row per error and five columns. For repeating branches, we need to iterate over both the branches and for the errors in each branch, an iteration nested in another iteration. That creates a list of lists, so we need the flatten() function to make that a simple list, again one row per error (across all instances of the branch) and five columns. In the final result box, we just concatenate the tables to make one table for all errors in the model.
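
For reference, flatten() does exactly what the name suggests: flatten([[1, 2], [3]]) returns [1, 2, 3].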

Here is the output of ErrorTable when run with the inputs below:

ErrorTable as shown here lists all the rules, whether an error or not. This is good for testing your logic. Once tested, you can easily filter this table to list only rules for which isError is true.

Bottom Line: Validating input data is always important in real-world decision services. We’ve now seen three different ways to do it, with different features and applicable in different use cases.

Follow Bruce Silver on Method & Style.

Calendar Arithmetic in DMN

By Bruce Silver

Read Time: 4 Minutes

Often in decision models you need to calculate a date or duration. For example, an application must be submitted within 90 days of some event, or a vaccine should not be administered within 120 days of a previous dose. DMN has powerful calendar arithmetic features. This post will illustrate how to use them.

ISO 8601 Format

The FEEL expressions for dates, times, and durations may look strange, but don’t blame DMN for that. They are based on the formats defined by ISO 8601, the international standard for dates and times in many computer languages. In FEEL expressions, dates and times can be specified by using a constructor function, e.g. date(), applied to a literal string value in ISO format, as seen below. For input and output data values, the constructor function is omitted.

A date is a 4-digit year, hyphen, 2-digit month, hyphen, 2-digit day. For example, to express May 3, 2017 as an input data value, just write 2017-05-03. But in a FEEL expression you must write date("2017-05-03").

Time is based on a 24-hour clock, no am/pm. The format is a 2-digit hour, 2-digit minute, 2-digit second, with optional fractional seconds after the decimal point. There is also an optional time offset field, representing the time zone expressed as an offset from UTC, what they used to call Greenwich Mean Time. For example, to specify 1:10 pm and 30 seconds Pacific Daylight Time, which is UTC minus 7 hours, you would enter 13:10:30-07:00 as an input data value, or time("13:10:30-07:00") in a FEEL expression. To specify the time as UTC, you can either use +00:00 as the time offset, or the letter Z, which stands for Zulu, the military name for UTC. DateTime values concatenate the date and time formats with a capital T separating them, as you see here. In a FEEL expression, you must use the proper constructor function with the ISO text string enclosed in quotes.

Date and Time Components

It is possible in FEEL to extract the year, month, or day component from a date, time, or dateTime using a dot followed by the component name: year, month, day, hour, minute, second, or time offset. For example, the expression date("2017-05-03").year returns the number 2017. All of these extractions return a number except for the time offset component, which returns not a number but a duration, with a format we’ll discuss shortly.

DMN also provides an attribute, weekday (not really a component, but also extracted via dot notation), which returns an integer from 1 to 7, with 1 meaning Monday and 7 meaning Sunday.

Durations

The interval between two dates, times, or dateTimes defines a duration. DMN, like ISO, defines two kinds of duration: days and time duration, and years and months duration. Days and time duration is equivalent to the number of seconds in the duration. The ISO format is PdDThHmMsS, where the lowercase d, h, m, and s are integers indicating the days, hours, minutes, and seconds in the duration. If any of them is zero, it is omitted along with the corresponding uppercase D, H, M, or S. And they are supposed to be normalized so that the sum of the component values is minimized.

For example a duration of 61 seconds could be written P0DT61S, but we can omit the 0D, and the normalized form is 1 minute 1 second, so the correct value is PT1M1S. In a FEEL expression, you need to enclose that in quotes and make it the argument of the duration constructor function: duration(“PT1M1S”).

Days and time duration is the one normally used in calendar arithmetic, but for long durations the alternative years and months duration is available, equivalent to the number of whole months included in the duration. The ISO format is PyYmM, where lowercase y and m are again numbers representing the number of years and months in the duration, and again the normalized form minimizes the sum of those component values. So a duration of 14 months would be written in normalized form as P1Y2M, and in FEEL, duration("P1Y2M"). Since months contain varying numbers of days, the precise value of years and months duration is the number of months in between the start date and the end date, plus 1 if the day of the end month is greater than or equal to the day of the start month, or plus 0 otherwise.
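
FEEL also provides a built-in function that computes this duration directly from two dates. For example, years and months duration(date("2017-01-15"), date("2018-03-20")) returns duration("P1Y2M").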

As with dates and times, you can extract the components of a duration using a dot notation. For days and time duration, the components are days, hours, minutes, and seconds. For years and months duration, the components are years and months.

Arithmetic Expressions

The point of all this is to be able to do calendar arithmetic.

Let’s start with addition of a duration.
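
For example, adding a 90-day duration to a date gives another date:

date("2017-05-03") + duration("P90D")

returns date("2017-08-01").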

A common use of calendar arithmetic is finding the difference between two dates or dateTimes.
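
For example, subtracting two dateTimes returns a days and time duration:

date and time("2024-03-15T00:00:00Z") - date and time("2024-01-15T00:00:00Z")

returns duration("P60D").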

You can multiply a duration by a number to get another duration of the same type, either days and time duration or years and months duration.
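
For example, duration("P30D") * 3 returns duration("P90D").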

You can divide a duration by a number to get another duration of the same type. But the really useful one is dividing a duration by a duration, giving a number. For example, to find the number of seconds in a year, the expression duration("P365D")/duration("PT1S") returns the correct value, the number 31536000. Note this is not the result returned by extracting the seconds component. The expression duration("P365D").seconds returns 0.

Example: Refund Eligibility

Here is a simple example of calendar arithmetic from the DMN training. Given the purchase date and the timestamp of item return, determine the eligibility for a refund, based on simple rules.

The solution, using calendar arithmetic, is shown below:

Here we use a context in which the first context entry computes a days and time duration by subtracting the two dateTimes. The second context entry is a decision table that applies the rules. Note we can use durations like any other FEEL type in the decision table input entries.

Example: Unix Timestamp

A second example comes from the Low-Code Business Automation training. It is common for databases and REST services to express dateTime values as a simple number. One common format used is the Unix timestamp, defined as the number of seconds since January 1, 1970, midnight UTC. To convert a Unix timestamp to a FEEL dateTime, you can use the BKM below:
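
A minimal sketch of that BKM's body, with its single number parameter named timestamp, is:

date and time("1970-01-01T00:00:00Z") + duration("PT1S") * timestamp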

The BKM multiplies the duration of one second by timestamp, a number, returning a days and time duration, and then adds that to the dateTime of January 1, 1970 midnight UTC, giving a dateTime. And you can perform the reverse mapping with the BKM below:
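
Again as a sketch, with the dateTime parameter here called dt (a name chosen just for illustration), the reverse BKM's body would be:

(dt - date and time("1970-01-01T00:00:00Z")) / duration("PT1S")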

This time we subtract January 1, 1970 midnight UTC from the FEEL dateTime, giving a days and time duration, and then divide that by the duration one second, giving a number.

Calendar arithmetic is used all the time in both decision models and Low-Code business automation models. While the formats are unfamiliar at first, FEEL makes calendar arithmetic very easy. It’s all explained in the training. Both the DMN Method and Style course and the Low-Code Business Automation course provide 60-day use of the Trisotech platform and include post-class certification. Check them out!

Follow Bruce Silver on Method & Style.
