
Building DMN Models You Can Trust

How testing, validation, and simulation ensure reliable decision logic—without leaving your modeling environment.

By Stefaan Lambrecht

Read Time: 5 Minutes

Building a sound Decision Model and Notation (DMN) model to support a core customer service or operational process is an enormous responsibility. These models encapsulate critical business logic that directly influences decisions, customer outcomes, and overall organizational performance.

As such, they must be reliable, robust, and functionally correct. But technical accuracy alone is not enough. A DMN model must also faithfully reflect the organization’s intent—its mission, vision, and policies. Ensuring this alignment requires more than careful modeling; it demands rigorous testing, validation, and simulation throughout the model lifecycle.

The Role of an Integrated Modeling Platform

A modern modeling platform plays a crucial role in enabling this level of rigor. One of its key advantages is the ability to perform modeling, testing, and simulation within a single environment—without the need to write code or rely on external tools.

This integrated approach significantly enhances productivity and reduces risk. Modelers can stay focused on decision logic instead of technical implementation details, maintaining flow and clarity throughout the modeling process.

An excellent example of such a platform is Trisotech, which combines DMN modeling, testing, and simulation in a single integrated environment.

Governance and Organizational Intent

At the heart of DMN model management lies governance. A decision model is not just a technical artifact—it is a formal representation of how an organization chooses to act.

Validating whether the logic accurately reflects business intent is therefore essential. This responsibility typically rests with business owners, who must review each version of the model to confirm alignment with strategic objectives and policy constraints.

Without proper governance, even technically correct models can drift away from intended outcomes—leading to inconsistent decisions, compliance risks, and customer dissatisfaction.

Ensuring Correctness Through Validation

Guaranteeing that a DMN model works correctly requires a structured validation approach. Every model version should undergo comprehensive validation before deployment, typically including functional testing, non-regression testing, and simulation.

Together, these validation practices provide confidence that the model is both correct and aligned with business goals.

Key Ingredients for Effective DMN Testing and Simulation

Several capabilities are essential for building trustworthy DMN models:

1. Explicit Data Types and Robust Input Validation

Every input and decision node should have clearly defined data types. This reduces ambiguity, enforces consistency, and strengthens validation across the model.

In addition, strong input validation mechanisms ensure that incoming data conforms to expected formats and constraints. This prevents runtime errors and guarantees that only valid inputs are processed.
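To make this concrete, here is a minimal Python sketch, not DMN itself, of the kind of type and constraint checking a DMN item definition expresses. The field names, allowed values, and rules are invented for the example:

```python
from dataclasses import dataclass

ALLOWED_STATUSES = {"active", "suspended", "expired"}  # assumed value set

@dataclass
class ClaimInput:
    claim_amount: float   # expected: non-negative number
    policy_status: str    # expected: one of ALLOWED_STATUSES

def validate(inp: ClaimInput) -> list:
    """Return a list of constraint violations; an empty list means valid."""
    errors = []
    if inp.claim_amount < 0:
        errors.append("claim_amount must be non-negative")
    if inp.policy_status not in ALLOWED_STATUSES:
        errors.append("unknown policy_status: " + inp.policy_status)
    return errors
```

Rejecting invalid inputs at the boundary, before any decision logic runs, is what prevents the runtime errors described above.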

2. Layered Decision Model Structure (DRD)

A well-structured Decision Requirements Diagram (DRD) separates concerns into distinct layers.

This layered approach improves readability, maintainability, and testability.

3. Complete Input Coverage

All valid input combinations should lead to a proper outcome. A robust model must never fail due to unexpected—but valid—inputs. Instead, it should handle all scenarios gracefully.
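Coverage can be checked mechanically. The following Python sketch uses a toy decision table (the inputs, values, and outcomes are invented for the example) and enumerates every valid input combination to flag any that lack a rule:

```python
from itertools import product

# Toy decision table: (risk level, customer type) -> outcome.
TABLE = {
    ("low", "new"): "approve",
    ("low", "existing"): "approve",
    ("medium", "new"): "review",
    ("medium", "existing"): "approve",
    ("high", "new"): "reject",
    ("high", "existing"): "review",
}
RISK_LEVELS = ["low", "medium", "high"]
CUSTOMER_TYPES = ["new", "existing"]

# Enumerate every valid input combination and flag any without a rule.
gaps = [combo for combo in product(RISK_LEVELS, CUSTOMER_TYPES)
        if combo not in TABLE]
assert not gaps, "uncovered input combinations: %r" % gaps
```

A DMN platform performs the equivalent check through completeness analysis of the decision table; the sketch only illustrates the idea.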

4. Incremental Testing During Modeling

Each component—decision tables, contexts, and other elements—should be testable as it is built.

The ability to execute logic instantly, without compilation or technical overhead, is critical. It allows modelers to stay in flow, iterate quickly, and validate assumptions in real time.

5. Comprehensive Test Scenario Management

Once the model is complete, it should be possible to define a representative set of test scenarios covering both functional and non-regression requirements.

These tests should be executable in a single run, directly within the platform, without any coding effort.

6. Simulation with Real Production Data

An advanced capability is the reuse of inputs from previous production versions—typically in JSON format.

This allows teams to simulate how model changes will affect real-world scenarios before deployment. The result is better insight into potential impacts, enabling informed decisions and reducing risk.
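A rough Python sketch of this replay-and-diff idea, with invented decision functions and invented recorded data standing in for real production inputs:

```python
import json

# Two versions of the same (toy) decision; the threshold change is the
# hypothetical model update being evaluated.
def decide_v1(case):
    return "auto-approve" if case["amount"] <= 1000 else "manual-review"

def decide_v2(case):
    return "auto-approve" if case["amount"] <= 1500 else "manual-review"

# Recorded production inputs, as they might be exported in JSON.
recorded = json.loads('[{"id": 1, "amount": 800},'
                      ' {"id": 2, "amount": 1200},'
                      ' {"id": 3, "amount": 2000}]')

# Replay both versions and report which cases would change outcome.
changed = [c["id"] for c in recorded if decide_v1(c) != decide_v2(c)]
print(changed)  # -> [2]
```

Seeing exactly which historical cases would flip outcome is the insight that makes a model change a business decision rather than a guess.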

Eliminating the Need for External Coding

A major advantage of integrated platforms is the elimination of the need to write code in external environments.

Traditional approaches often require exporting models, writing scripts for testing, or integrating with separate execution engines. This introduces friction, increases complexity, and creates opportunities for errors.

By contrast, an all-in-one environment lets modelers build, test, and simulate decision logic in one place, with no exports, scripts, or separate engines.

This alignment between business and technology is one of the core promises of DMN—and it is fully realized only when the platform supports it end-to-end.

Conclusion

The importance of testing and simulation in DMN decision modeling cannot be overstated. Given the critical role these models play in driving business decisions, ensuring their correctness, robustness, and alignment with organizational intent is essential.

A structured validation approach—supported by functional testing, non-regression testing, and simulation—provides the necessary assurance. When combined with a powerful, integrated platform, organizations can achieve this rigor efficiently, without the need for external coding or technical overhead.

Ultimately, the ability to model, test, and simulate decisions in one cohesive environment empowers organizations to deliver reliable, transparent, and high-quality decision logic—at scale and with confidence.

Follow Stefaan Lambrecht on his website.


Decision-Centric Orchestration: The Next Competitive Advantage

How Executives Can Turn Decision Intelligence into Faster, Smarter, and More Profitable Customer Services

By Stefaan Lambrecht

Read Time: 7 Minutes

If you want real leverage from decision intelligence, stop scattering decision logic across your processes.

Too often, organizations embed business rules in different activities inside a business process or case handling flow. Decisions are treated as supporting elements—called from tasks, hidden in gateways, duplicated in scripts, or distributed over multiple process variants.

It works—until it doesn’t.

If your core service depends on consistent, explainable, and optimizable decisions, you need to flip the perspective. Instead of process-centric orchestration with embedded decisions, design a decision-centric orchestration, driven by a coherent integrated decision model.

Let’s explore why this matters—and how to do it.

The Problem:
Scattered Decisions in a Core Business Service

Consider a travel insurance company offering a travel claim handling service. During the lifecycle of a claim, multiple decisions must be made:

1. Eligibility
2. Acceptance
3. Fraud Assessment
4. Compensation
5. Recovery

A common implementation approach is to create a separate decision model for each of these and invoke it from its own process step.

This seems modular. Clean. Structured.

But it creates three structural problems.

Why Process-Centric Orchestration Fails at Scale

1. You Depend on Process Correctness for Decision Completeness

If decision logic is distributed across multiple process steps, you must trust that every path through the process invokes every relevant decision at the right moment, with the right data.

In reality, processes evolve. Variants multiply. Exceptions are added. Gateways become complex. Human tasks introduce variability.

Eventually, the guarantee that “all relevant decision considerations have been evaluated” becomes fragile.

And in regulated industries like insurance, fragile guarantees are unacceptable.

2. Data Evolves — Sequential Logic Breaks

In claim handling, information does not arrive all at once.

If your orchestration is sequential and process-driven, re-evaluating earlier decisions when new information arrives becomes difficult.

You end up encoding decision state management inside the process model—something BPMN or CMMN were never meant to optimize.

3. You Block Straight-Through Processing

Imagine a “happy path” case in which all the data needed to decide is available up front.

Why force the claim through a long sequence of process steps?

A decision-centric approach would evaluate all the decisions at once and close the claim immediately.

The result is true straight-through processing.

Process-centric orchestration often prevents this because the process—not the decision model—controls progression.

The Shift: From Supporting Decisions to Decision-Centric Orchestration

The alternative is simple in principle, but powerful in effect:

Build one integrated decision model that governs the end-to-end business service outcome.

Instead of creating separate DMN models for Eligibility, Acceptance, Fraud, Compensation, and Recovery, you create one comprehensive decision model that covers all of them and their interdependencies.

In this architecture, the orchestration becomes decision-driven.

What an Integrated Decision Model Looks Like

The integrated DMN model would:

1. Take the complete claim case data as input
2. Evaluate eligibility, acceptance, fraud, compensation, and recovery
3. Produce outputs the orchestration layer can act on

The decision model becomes the single source of truth about the state of the service.

How Decision-Centric Orchestration Works

Step 1: Every Data Change Triggers Re-Evaluation

Whenever the case data changes (new information arrives, a document is added, a human task completes), the integrated DMN model is re-evaluated.

No process loops. No manual state transitions. No “did we already check this?” logic.

The decision model determines what has been decided, what is still missing, and what must happen next.

Step 2: The Process Executes What the Decision Requires

The BPMN or CMMN layer becomes simpler and more generic.

It does not orchestrate decision logic.

Instead, it executes whatever actions the decision model requires next.

The process becomes reactive to the decision model.

Decision drives process—not the other way around.
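The inversion described above can be sketched in a few lines of Python. The decision function, its fields, and its thresholds are invented for the example; the point is the shape: one integrated evaluation returns both status and required actions, and the process layer merely executes them.

```python
# Toy integrated decision model: one function evaluates the whole case and
# returns both a status and the actions the process layer must execute.
def evaluate_claim(case):
    if not case.get("policy_active", False):
        return {"status": "closed-rejected", "actions": []}
    if case.get("fraud_score", 0.0) > 0.8:
        return {"status": "under-investigation", "actions": ["fraud-review"]}
    if case.get("invoice_received", False):
        return {"status": "closed-paid", "actions": ["pay-compensation"]}
    return {"status": "waiting", "actions": ["request-invoice"]}

# The "process" is a generic loop: every data change triggers re-evaluation,
# and the process merely executes whatever the decision requires.
case = {"policy_active": True, "fraud_score": 0.1}
outcome = evaluate_claim(case)            # waiting, request-invoice
case["invoice_received"] = True           # new data arrives...
outcome = evaluate_claim(case)            # ...decision re-evaluated
print(outcome["status"])  # -> closed-paid
```

Note that no process state tracks which decisions have already run; the decision model recomputes the full picture on every change.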

Architectural Advantages

1. Consistency Across Channels

Whichever channel a claim is submitted through, the same integrated decision model evaluates it.

No channel-specific rule drift.

2. Transparency and Explainability

A single DMN model makes every decision outcome traceable and explainable.

Scattered rule fragments inside processes make this nearly impossible.

3. Optimization and Simulation

Because decisions are centralized, they can be simulated, measured, and optimized as a whole.

This is how decision intelligence becomes strategic—not tactical.

4. True Straight-Through Processing

With integrated evaluation, cases that can be fully decided are completed immediately.

This enables maximum automation while preserving control.

How to Design a Decision-Centric Model

Here’s a practical approach:

1. Model the Final Service Outcome First

Start with the ultimate business question:

Can this claim be closed, and if so, how?

Work backward using DMN decision requirements diagrams (DRDs).

2. Identify All Dependent Decisions

Map every decision the final service outcome depends on.

Define their dependencies explicitly in the decision model—not implicitly in the process.

3. Define Clear Decision Outputs

Ensure your integrated model produces clear, well-defined outputs.

This allows the orchestration layer to remain generic and stable.

4. Make the Process Event-Driven

Design BPMN/CMMN models that simply react to decision outputs and case events.

The process becomes a facilitator—not a decision coordinator.

The Core Insight

When delivering a core business service, your competitive advantage does not lie in drawing better flowcharts.

It lies in making consistent, explainable, and optimizable decisions at scale.

That requires decision-centricity.

Not decision support.

Conclusion: Decision Intelligence as the Orchestrator

If you want high leverage from DMN, stop scattering decision logic across your processes.

Instead, build one integrated decision model and let it drive the orchestration.

That is decision-centric orchestration.

And that is how decision intelligence becomes the brain of your organization—not just a collection of rules hidden in your workflows.


The 10 Key Decisions That Make or Break Operational Excellence

How decision modeling unlocks operational excellence at scale

By Stefaan Lambrecht

Read Time: 7 Minutes

Every organization—whether it acknowledges it or not—continuously answers a small set of high‑leverage questions.

The answers to these questions are your strategy in action. They translate your mission, vision, and policies into repeatable operational behavior.

That is why they should be treated as explicit, governed business decisions—not as fragments hidden in procedures, spreadsheets, or application code.

The 10 Key Decisions That Make or Break Operational Excellence

Across industries, these strategic questions consistently crystallize into ten decision domains that every organization should consciously manage and control:

  1. Customer eligibility and access – Operationalizes market positioning, ethics, and risk appetite.
  2. Risk acceptance and tolerance – Makes risk appetite executable instead of theoretical.
  3. Pricing, discounting, and margin protection – Embeds competitive strategy and financial discipline.
  4. Prioritization and allocation of scarce resources – Reveals true priorities, beyond stated strategy.
  5. Customer treatment and experience differentiation – Shapes fairness, loyalty, and brand perception.
  6. Compliance and policy enforcement – Turns regulation and internal policy into consistent action.
  7. Exception handling and escalation – Exposes the organization’s real, often hidden, strategy.
  8. Partner, supplier, and third‑party qualification – Externalizes risk and brand impact.
  9. Automation versus human judgment – Governs digital ethics, resilience, and accountability.
  10. Learning, adaptation, and boundary overrides – Defines how the organization evolves without losing intent.

These are high‑leverage decisions. Small changes here create large‑scale effects—across processes, channels, and systems.

Decisions live everywhere—but are rarely visible

Consider a travel insurance company. Its core processes span product management, sales, underwriting, claims, assistance, finance, and compliance.

Each of those processes repeatedly invokes the same ten decisions—sometimes explicitly, often implicitly.

When decision logic is undocumented or scattered across code, procedures, and human judgment, the organization loses transparency, consistency, accountability, and agility.

This is where DMN (Decision Model and Notation) becomes a strategic enabler.

DMN is not just a modeling notation. Used correctly, it is a management interface between business strategy and operational execution.

DMN makes decisions explicit, readable, executable, and governable.

But its real value becomes clear when viewed through a decision‑centric maturity lens.

From process‑centric to decision‑centric: a maturity journey

Level 1 – Process‑driven, decision‑implicit

At this level, decisions are buried inside process flows, procedures, and individual judgment.

How DMN helps:

DMN can be used diagnostically—to surface and name decisions that are currently hidden. Even simple decision inventories and high‑level decision requirements diagrams already create clarity.

Value unlocked:

Visibility. Leaders can finally ask: Which decisions are we really making, and where?

Level 2 – Rules‑aware, technically owned

Decisions are extracted into rules engines or code, but ownership sits mainly with IT. Business intent is interpreted, not modeled.

How DMN helps:

DMN replaces opaque rule logic with business‑readable decision models. It creates a shared language between business, IT, and compliance.

Value unlocked:

Transparency and safer change. Policy discussions move from documents to executable models.

Level 3 – Decision‑modeled, business‑readable

Key decisions are explicitly modeled and reused across processes and channels.

How DMN helps:

DMN becomes the formal interface between policy and execution. Decisions are versioned, tested, and reused consistently.

Value unlocked:

Intentional alignment. When policy changes, behavior changes—predictably and traceably.

Level 4 – Decision‑centric, strategically governed

Decisions are recognized as strategic control points, each with explicit business ownership.

How DMN helps:

DMN enables governance at the decision level: ownership, impact analysis, compliance evidence, and cross‑channel consistency.

Value unlocked:

Designed steering. The organization deliberately translates strategy into operational outcomes.

Level 5 – Adaptive, accountable, and learning decisions

Decisions continuously adapt based on outcomes, analytics, and learning systems—within explicit policy boundaries.

How DMN helps:

DMN defines guardrails for learning and automation. It ensures that adaptive systems optimize within intent, not redefine it.

Value unlocked:

Sustainable agility. Strategy becomes a testable hypothesis, not a static statement.

Governance: from control to capability

At higher maturity levels, decision management is no longer a modeling exercise. It is an operating capability.

Effective decision governance means managing decision ownership, change, and compliance evidence deliberately.

Governance here is not about slowing things down. It is about ensuring that when strategy changes, behavior changes—deliberately, explainably, and at scale.

Why decision‑centricity simplifies complexity

When the ten key decisions are explicit, governed, and explainable, operational complexity becomes manageable.

Most importantly, leadership regains its steering wheel.

Operational excellence is no longer just about doing things faster.

It becomes about doing the right things—consistently, transparently, and on purpose.


The Digital Twin: The Living Core of the Case-Decision-Process Methodology

A Unified Data Foundation for Intelligent Business Fulfillment

By Stefaan Lambrecht

Read Time: 5 Minutes

In the Case-Decision-Process (CDP) methodology, every customer request — a claim, a loan, an order, or a service ticket — is more than just a transaction.

It becomes a living digital twin: a continuously evolving, data-rich representation of the customer’s request and its fulfillment journey.

This digital twin is not a passive record. It is the heartbeat of the CDP architecture — the single source of truth that enables real-time decisioning, contextual awareness, and adaptive orchestration across CMMN, DMN, and BPMN models.

From Static Records to Living Data

In traditional IT or ERP systems, case data are often scattered across multiple modules, updated asynchronously, and only loosely connected to the logic driving business processes.

The result is fragmentation — decisions are made based on partial or outdated information, and process agility is lost.

In contrast, the CDP methodology treats the customer request case data as a continuously synchronized digital twin, containing all the data, events, and decision outcomes accumulated across the request’s lifecycle.

This evolving data object becomes the structural foundation for the entire business fulfillment lifecycle. Every event — a new document, an external update, or a human input — triggers a re-evaluation by the business fulfillment case assessment decision model (DMN).

Continuous Evaluation: The Role of DMN as the “Spider in the Web”

The DMN model acts as the intelligent orchestrator.

Whenever the digital twin is updated, the DMN layer evaluates what needs to happen next.

Because all logic is explicitly modeled in DMN and driven by the shared case data, the system responds instantly and intelligently to every change.

This continuous feedback loop transforms business fulfillment into a living, adaptive system, where decisions and processes evolve in lockstep with real-world events.

Structuring the Digital Twin: The Emergence of SDMN

To make this dynamic architecture work at scale, one challenge stands out: structuring and mapping the data model that underpins the digital twin.

Enter SDMN — the Shared Data Model and Notation, a new OMG standard designed precisely for this purpose.

SDMN defines how data are represented, shared, and referenced consistently across CMMN, DMN, and BPMN models. It is the data glue that ensures every component of the CDP tripod speaks the same language.

With SDMN, modelers can define the case data structure once and reference it consistently across all three model types.

In other words, SDMN turns the digital twin from a conceptual artifact into a technically executable and model-driven reality.
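A minimal Python sketch of the pattern, not SDMN itself: one shared case data structure that both the event-handling (case) side and the assessment (decision) side reference. The fields and logic are invented for the example.

```python
from dataclasses import dataclass, field

# One shared case data structure, in the spirit of an SDMN-style shared
# data model referenced by case, decision, and process logic alike.
@dataclass
class ClaimTwin:
    claim_id: str
    documents: list = field(default_factory=list)
    status: str = "awaiting-docs"

def assess(twin):
    # Stand-in for the case assessment decision model (DMN).
    return "ready-for-review" if "invoice" in twin.documents else "awaiting-docs"

def on_new_document(twin, doc):
    twin.documents.append(doc)   # an event updates the digital twin...
    twin.status = assess(twin)   # ...which immediately triggers re-assessment
    return twin

twin = on_new_document(ClaimTwin("C-001"), "invoice")
print(twin.status)  # -> ready-for-review
```

Because every component reads and writes the same structure, there is no translation layer between case events and decision inputs.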

Why the Digital Twin Matters

By treating the customer request as a continuously updated digital twin, organizations gain a single, always-current source of truth for every request.

The digital twin bridges the gap between business intent and system behavior.

It ensures that fulfillment is not just automated — but context-aware, traceable, and continuously optimized.

Toward Truly Intelligent Fulfillment

The Case-Decision-Process methodology, powered by its digital twin foundation, represents a paradigm shift.

Rather than coding fixed workflows, organizations model living systems — systems that think, decide, and adapt.

With CMMN orchestrating cases, DMN governing decisions, BPMN executing structured processes, and SDMN unifying their data, the digital twin becomes the living core of business fulfillment.

It is through this dynamic, shared, and continuously updated data model that enterprises can finally achieve what decades of rigid automation could not: a business fulfillment system that evolves as fast as the business itself.


The Case-Decision-Process Tripod Methodology

A Model-Driven Approach to Business Fulfillment

By Stefaan Lambrecht

Read Time: 5 Minutes

Business fulfillment — the process of delivering on customer requests through coordinated business activities — lies at the core of every organization’s value creation. Whether the request concerns an insurance claim, a loan, an HR recruitment request, or a customer order, each represents a commitment that must be managed from initiation through delivery and closure.

Traditional IT development and ERP systems have long attempted to automate fulfillment processes through rigid workflows and predefined business logic. However, in a modern service economy characterized by variability, human judgment, and dynamic business rules, these traditional approaches often prove inflexible, slow to change, and costly to maintain.

The Case-Decision-Process (CDP) Tripod methodology addresses this challenge through a model-driven architecture based on three complementary open standards: CMMN for case management, DMN for decision logic, and BPMN for structured process flows.

Together, these standards provide a unified framework for modeling, simulating, and automating complex business fulfillment lifecycles — enabling agility, transparency, and consistent decision-making across the enterprise.

Business Fulfillment as a Case

In the CDP methodology, every customer request is represented as a case, a central construct that encapsulates all data, tasks, decisions, and processes required to fulfill that request.

A case is identified by a unique identifier and acts as a container for the complete lifecycle of the business object it handles — for example, a claim, a loan, a service ticket, or an order. The case structure provides an end-to-end view, grouping all related data, tasks, and decisions, while enabling full traceability across the lifecycle.

This lifecycle is organized into stages, which define the logical phases of business fulfillment, such as eligibility, acceptance, solution development, solution delivery, and closing.

Unlike traditional process workflows, these stages are event-driven, not sequential. Several stages may be active concurrently, and their activation or completion depends dynamically on data and context, evaluated by decision logic modeled in DMN.

The Tripod Architecture

The Case-Decision-Process methodology rests on three technical pillars: CMMN, DMN, and BPMN. Each standard contributes a distinct capability, and together they form a cohesive architecture for intelligent business fulfillment.

Case Handling with CMMN

The Case Management Model and Notation (CMMN) standard provides a declarative framework for modeling and executing event-driven, knowledge-intensive business services.

Unlike procedural process models, CMMN does not define strict control flows. Instead, it defines a case plan — a set of possible tasks, stages, and milestones that can be triggered dynamically based on conditions or external events.

CMMN is ideally suited to modeling business fulfillment because its declarative, event-driven structure mirrors how fulfillment actually unfolds.

Through CMMN, organizations gain the ability to simulate, test, and automate complex fulfillment lifecycles while preserving flexibility and human oversight.

Decision Logic with DMN

The Decision Model and Notation (DMN) standard provides the intelligence behind the case.

It separates decision logic from process control, allowing organizations to model and execute decisions independently.

In a CDP-based fulfillment model, a central DMN decision service continuously evaluates the case data to determine what must happen next.

For example, in an insurance claim scenario, a DMN model might evaluate coverage, policy validity, and customer identity.

If any condition fails, the DMN model can immediately terminate the case — allowing the organization to fail fast and minimize unnecessary effort.
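The fail-fast pattern can be sketched in Python as follows. The condition names and their order are invented for the example; the point is that evaluation stops, with a reason, at the first failed condition:

```python
# Toy fail-fast eligibility check: conditions are evaluated in order and
# the first failure terminates the case with an explanatory reason.
def eligibility(case):
    checks = [
        ("policy_valid", "policy not valid"),
        ("coverage_applies", "incident not covered"),
        ("identity_verified", "customer identity not verified"),
    ]
    for key, reason in checks:
        if not case.get(key, False):
            return (False, reason)   # fail fast: stop at the first failure
    return (True, "eligible")

result = eligibility({"policy_valid": True, "coverage_applies": False})
print(result)  # -> (False, 'incident not covered')
```

Returning the failure reason alongside the verdict is what lets the case be closed immediately and the customer be informed why.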

By externalizing business rules in DMN, decision transparency and auditability are achieved, and business users can modify policies without altering core process logic.

Process Flows with BPMN

While CMMN manages the overall case lifecycle, and DMN governs decisions, BPMN (Business Process Model and Notation) defines structured, repeatable process flows that can be embedded as process tasks within a CMMN model.

Typical examples include transactional workflows such as payment authorization.

BPMN processes can be triggered conditionally by events in the case and can return results that update the case data.

This hybrid modeling approach — integrating CMMN, DMN, and BPMN — enables both structured automation and adaptive case handling within a single execution environment.

Example: Automotive Insurance Claim

An automotive insurance claim illustrates the methodology in action:

  1. Case Creation — A customer submits a claim; a case is instantiated with a unique identifier.
  2. Eligibility Stage — A DMN model evaluates coverage, policy validity, and customer identity. If any condition fails, the case is closed immediately.
  3. Acceptance Stage — The insurer assesses whether conditions justify proceeding with claim handling.
  4. Solution Development Stage — If accepted, either an off-the-shelf solution (standard payout) or a custom solution (repair arrangement) is developed.
  5. Solution Delivery Stage — The agreed solution is executed, possibly through BPMN workflows (e.g., payment authorization).
  6. Closing Stage — Administrative and accounting actions finalize the case.

Throughout the process, CMMN orchestrates case states, DMN governs decision logic, and BPMN executes transactional workflows — forming a cohesive, event-driven fulfillment cycle.

Comparison to Traditional IT and ERP Approaches

By decoupling case management, decision logic, and process orchestration, the CDP methodology supports faster adaptation, greater transparency, and lower maintenance costs than traditional IT development or ERP implementations.

Benefits and Outcomes

Organizations adopting the Case-Decision-Process methodology typically achieve faster adaptation, greater transparency, and lower maintenance costs.

About the Methodology

The Case-Decision-Process Tripod methodology provides an integrated framework for documenting, modeling, simulating, and automating business fulfillment cases using international standards. It can be applied across industries and domains — from insurance and banking to HR, logistics, and manufacturing — to modernize fulfillment operations and enable true digital agility.

The Case-Decision-Process Tripod offers a model-driven, standards-based alternative to traditional IT system design.

By uniting CMMN, DMN, and BPMN, it provides an executable architecture that reflects how real-world business fulfillment actually operates — dynamically, contextually, and collaboratively.

This methodology enables organizations to bridge the gap between business intent and system execution, delivering on customer expectations faster and more intelligently — without the rigidity and cost of legacy approaches.


Turn Your Legacy ERP into a Business Execution Platform

Using Decision/Case/Process Modeling & Execution

By Stefaan Lambrecht

Read Time: 5 Minutes

From “Best Practice” to “Adaptive Practice”

Traditional ERP systems were designed around stability and standardization, not adaptability. Vendors like SAP, Oracle, and JD Edwards codified “best practices” — in reality, average practices that could be applied across industries.

While this provided efficiency and compliance, it came at the cost of differentiation. Any company trying to do something innovative or non-standard hit the walls of ERP rigidity — requiring custom IT code, long projects, and high development and maintenance costs (the legacy trap).

The future ERP shouldn’t impose “best practices”; it should enable continuous adaptation to how your business actually operates — and evolves.

Model-Driven Execution: The CMMN–BPMN–DMN Tripod

The proposed shift to CMMN (Case Management Model and Notation), BPMN (Business Process Model and Notation), and DMN (Decision Model and Notation) is profound.

Together, they create a semantic model of the business — not just documentation, but executable logic that an engine can run directly.

And most importantly, the models are developed by the business, not IT. The business owners and experts return to the steering wheel of their own business organization.

This is the cornerstone of a “business-defined, engine-driven architecture” — where models become the application itself.

Closing the Business–IT Gap

In traditional ERP, business analysts describe needs → IT interprets → developers code → business tests → IT fixes → repeat.

In model-driven architectures, the models are the requirements — and also the implementation.

That means the requirements, the implementation, and the documentation stay in sync by construction.

Composable Capability Architecture

A next-gen ERP wouldn’t be a monolith. Instead, it would be a composable set of capabilities.

The ERP becomes an application fabric, where processes, rules, and data are dynamically orchestrated rather than hard-coded.

This allows an organization to tailor specific processes (say, a unique claims process in insurance or a niche procurement flow in manufacturing) without breaking the overall system.

This is a paradigm shift: from code-first to model-first, or even model-only.

Implications for Governance, Transparency, and Agility

Integration with AI and Data Layers

In a modern context, AI and analytics could plug into this architecture seamlessly.

In this way, the ERP becomes a living system, learning and adapting in real-time.

The Result – A Business Execution Platform

The next generation of ERP won’t be “enterprise resource planning” in the old sense — it will be a Business Execution Platform.

It’s a move from hard-coded best practices → to configurable, executable models → to self-evolving intelligent systems.


DMN: Beyond Decision Tables

By Bruce Silver

Read Time: 4 Minutes

If you are using DMN, of course you are using decision tables. Every decision management platform, DMN or not, has some form of decision table.

Business users like them because the meaning of the decision logic is obvious from the table structure without any previous knowledge about how it works. That’s because diagrams and tables are much more intuitive to non-programmers – i.e., subject matter experts – than script or program code. There is still a bit of script in the table cells – in our case, FEEL – but the expressions are usually simple.

The creators of DMN a decade ago took this basic fact and carried it a step further: in order to make the full power of DMN accessible to subject matter experts, the language would include additional table structures with simple FEEL inside the table cells, instead of requiring complex FEEL expressions with functions nested inside other functions. They called those table structures boxed expressions. Today there are nine of them, including literal expression, which just means a FEEL expression in text form, and decision table.

From the outset, boxed expressions were put in DMN with the express intent to make the full power of the language more accessible to non-programmers. Today that may seem strange, since very few DMN tools outside of Trisotech support them, other than decision tables. Why is that? Two reasons:

1

Most DMN tools are not designed to expose executable logic directly to subject matter experts. Those tool vendors assume subject matter experts merely want to create business requirements in the form of DRDs, and leave the implementation to programmers. That’s antiquated thinking.

2

Implementing boxed expressions is challenging for the tool vendor. The basic structure of most boxed expressions is a two-column table in which the first column is a variable name and the second column is its value expression. But that value expression does not have to be a literal expression; it could be any boxed expression at all, for example, a decision table. That means the second column must be able to expand to a nested table, and there is no limit to the degree of nesting. From a graphics perspective, that’s not easy!

The fact is, except for decision table and one other boxed expression type, boxed expressions are simply graphical conveniences. The same logic could be written as a FEEL literal expression, a more complex one. But non-programmers have an easier time creating complex logic using simple FEEL inside the boxed expression cells than in constructing complex FEEL expressions.

Let’s look at some examples.

Invocation

An Invocation boxed expression represents a call by a decision to a logic function, typically a user-defined function such as a BKM or decision service, but in principle also a FEEL built-in function. Decision logic that is reused is typically packaged as a function with defined parameters. The Invocation maps values to each parameter and receives in return the function result.

For example, a commonly reused function in mortgage lending is the Loan Amortization Formula, in which the monthly payment is based on three parameters: p, the loan amount; r, the interest rate as a decimal value; and n, the number of months in the term of the loan. It’s just an arithmetic expression, but it’s hard to remember so best encapsulated as a BKM.

A decision can get the monthly payment for any mortgage loan by supplying values of p, r, and n through an invocation. With formulas like these, it’s a good idea to explain the parameters in the Description field of the BKM Details.

The Invocation boxed expression is a two-column table in which the first column lists the parameters p, r, and n, and the second column lists expressions of decision variables for each parameter.

Here the decision Loan Payment is defined as an invocation of the BKM Loan Amortization Formula. The BKM parameters and their datatypes are listed in the first column. The second column provides their value as simple FEEL expressions of decision variables. Because the decision variable Loan rate pct is expressed as a percent, it must be divided by 100 when passed to parameter r.

We could just as easily have written this invocation as a slightly more complicated FEEL expression:

Loan Amortization Formula(Loan amount, Loan rate pct/100, 360)

With this so-called literal invocation, the names of the parameters do not appear, but the order of the supplied value expressions must match the order of the parameters in the BKM definition. With FEEL built-in functions, it is much more common to use literal invocation, but they can be called using the Invocation boxed expression just as well.
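As a sketch of how the two invocation styles behave, here is the same pattern in Python. The formula is the standard loan amortization formula, assuming r is the annual rate as a decimal and n the term in months; the variable values are invented for illustration.

```python
def loan_amortization_formula(p, r, n):
    """Standard amortization: p = loan amount, r = annual rate (decimal), n = term in months."""
    monthly_rate = r / 12
    return p * monthly_rate / (1 - (1 + monthly_rate) ** -n)

# The Invocation maps decision variables to the parameters p, r, n.
loan_amount = 400_000
loan_rate_pct = 6.0   # expressed as a percent, so divide by 100 when passed to r
loan_payment = loan_amortization_formula(p=loan_amount, r=loan_rate_pct / 100, n=360)
```

Python's keyword arguments play the role of the boxed Invocation, where parameters are named explicitly; positional arguments mirror the literal invocation, where the order of the supplied values must match the parameter order.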

Context

Similar in appearance to Invocation, the Context boxed expression is a two-column table in which each row, called a context entry, contains a name in the first column and a value expression in the second. Again, the second column could be any boxed expression type, including another context, so graphically these tables can be quite complex. The Context boxed expression really should be viewed as two separate boxed expression types, depending on whether or not the final row of the table, called the final result box, is left empty.

Context with Non-Empty Final Result Box

A Context with an expression in the final result box is the one boxed expression type besides decision table that has no literal expression equivalent. It is probably the most useful of the lesser-known boxed expressions, because it allows you to break down the complex logic of a decision into smaller pieces using local variables, i.e. names used only within the scope of the decision. Each context entry names a local variable, and the expressions of any subsequent context entry may refer to those names, as may the final result box expression.

Here is an example from the DMN training:

Total Price = Price after discount + Tax + Shipping cost

The logic is moderately complex, but we really just care about the Total Price, not the details, so we can encapsulate those details in local variables, the context entries, whose values are not returned by the decision. Note the expression column for Shipping cost is a decision table, not a simple FEEL expression. The final result box references the context entries, a much simpler expression.

Context with Empty Final Result Box

When the final result box is left empty, the Context boxed expression creates a structured value, in which each context entry represents a component of that data structure. Building on the previous example, suppose instead of just the Total Price, a number, we want to create an Invoice, a data structure of type tInvoice containing the components shown below:

The Context boxed expression for this is nearly identical to the previous one, except now Total price is a context entry and the final result box is empty.

Just to reinforce the point, the Context boxed expression is a simpler way to define logic that could alternatively be done using a literal expression. A FEEL context literal uses a JSON-like syntax to create data structures, which are comma-separated expressions enclosed in curly braces. For the example above, it would look like this:

{Price before discount: Quantity*Unit Price, Price after discount: if Quantity>6 then 0.9*Price before discount else Price before discount, Tax: if is Taxable=true then 0.09*Price after discount else 0, Shipping cost: if Shipping Type="Standard" and Price after discount<100 then 9.95 else if Shipping Type="Standard" and Price after discount>=100 then 0 else 17.95, Total price: Price after discount + Tax + Shipping cost}
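To illustrate the semantics, the same invoice logic can be sketched in Python: each context entry becomes a local variable, and a context with an empty final result box becomes a record (here, a dict). The input names and values are assumptions for illustration, not taken from the model.

```python
def build_invoice(quantity, unit_price, is_taxable, shipping_type):
    # Each local variable below mirrors one context entry; later
    # entries may reference earlier ones, just as in a FEEL context.
    price_before_discount = quantity * unit_price
    price_after_discount = 0.9 * price_before_discount if quantity > 6 else price_before_discount
    tax = 0.09 * price_after_discount if is_taxable else 0
    if shipping_type == "Standard":
        shipping_cost = 9.95 if price_after_discount < 100 else 0
    else:
        shipping_cost = 17.95
    # Empty final result box: return the whole structure, one
    # component per context entry.
    return {
        "Price before discount": price_before_discount,
        "Price after discount": price_after_discount,
        "Tax": tax,
        "Shipping cost": shipping_cost,
        "Total price": price_after_discount + tax + shipping_cost,
    }
```

A context with a non-empty final result box would instead return only the final expression, discarding the intermediate names.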

Function Definition

A Function Definition boxed expression is only used in a context entry. It represents a user-defined function, just like a BKM, except it is used only within the context, i.e., it can be called only by a subsequent context entry or final result box, not from outside.

Referring back to the Loan Amortization Formula, we could make that a Function Definition inside a context if we weren’t concerned about reuse. It simplifies the final result without using a BKM. This boxed expression has the function name in the first column and the function definition — including comma-separated parameters enclosed in parentheses — in the second column.

Relation and List

A Relation boxed expression defines a FEEL table, a list of identical structures. The column names represent the components of the structure. Relations are often populated with literal values but the table cells may be any FEEL expression. On the Trisotech platform a Relation can be populated by uploading from Excel, a real convenience.

One common use for a Relation containing literal values is a static lookup table modeled as a zero-input decision, i.e., having no incoming information requirements. For example, you can query the table below using a filter expression:

Loan Table[pointsPct < 1.0].lenderName

returns a list of mortgage lenders who charge less than 1% points on the loan.

The List boxed expression is rarely used. It is simply a list of FEEL expressions arranged in a single column, such as shown below:

If the list contains literal values, it may be uploaded from Excel. One reason it is rarely used is that the equivalent literal expression, a comma-separated list enclosed in square brackets, is usually easier to create:

[in1, in2, in3, upper case(in1) + " " + lower case(in2) + " " + in3]

Conditional

The Conditional boxed expression is a graphical representation of the FEEL if..then..else operator. It simply separates the if, then, and else clauses into separate rows of the table.

The expressions in the right-hand column are not restricted to literal expressions. They could, in fact, be another Conditional expression, implementing an “else if” clause. In that case, the boxed expression may be easier to read than the equivalent literal expression.

Iterator

For..in..return

The Iterator boxed expression is a graphical representation of the FEEL for..in..return or some/every..in..satisfies operators. The syntax of for..in..return is

for [name of dummy variable] in [some list] return [expression of dummy variable and other inputs]

Here the dummy variable stands for a single item in the list, and the operator returns a list of values where the return expression acts on that item. For example, returning to our list of mortgage products, suppose we want to generate a list of formatted monthly payment strings for each product, based on a specific requested loan amount.

Implementing Formatted Monthly Payments as a literal expression may be difficult for some users to follow:

for x in Loan Table 30 return string("$%,.2f", payment(Requested amount*(1+x.pointsPct/100) + x.fees, x.rate/100, x.term))

Using the Iterator boxed expression for Formatted Monthly Payments, we can write

Here the return expression uses the Invocation operator twice, first to call the FEEL built-in function string(), which allows a formatting mask, and second to call the BKM payment(), which is the Loan Amortization Formula. With complex return expressions, the Iterator boxed expression often provides more readable logic.

This is just one form of iteration, called iteration over item value. When the in expression is a list of integers, the dummy variable represents a position in the list, and iteration is over item position. The boxed expression works in exactly the same way.
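A Python list comprehension gives a feel for the iteration semantics. The table rows and the payment formula below are invented stand-ins, assuming the standard amortization formula with r as the annual rate in decimal form and the term in months.

```python
def payment(p, r, n):
    # Assumed standard amortization formula: r = annual rate (decimal), n = months.
    return p * (r / 12) / (1 - (1 + r / 12) ** -n)

# Hypothetical rows standing in for the Loan Table 30 relation.
loan_table_30 = [
    {"lenderName": "Acme", "rate": 5.9, "pointsPct": 1.2, "fees": 900, "term": 360},
    {"lenderName": "Bravo", "rate": 6.1, "pointsPct": 0.5, "fees": 800, "term": 360},
]
requested_amount = 300_000

# for x in Loan Table 30 return string(...)  -- one formatted payment per row
formatted_monthly_payments = [
    "${:,.2f}".format(
        payment(requested_amount * (1 + x["pointsPct"] / 100) + x["fees"],
                x["rate"] / 100, x["term"])
    )
    for x in loan_table_30
]
```

Here x plays the role of the dummy variable, standing for one row of the table at a time.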

Some/every..in..satisfies

You can also use this boxed expression to test membership in a list.

some [dummy variable] in [list] satisfies [boolean expression]

returns true if any member of a list satisfies some requirement, and

every [dummy variable] in [list] satisfies [boolean expression]

returns true only if every member of the list satisfies the requirement. The Iterator boxed expression works in exactly the same way when some or every is entered in the left column of the first row.
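In Python terms, some maps to any() and every maps to all(); the list of rates here is invented for illustration.

```python
rates = [5.9, 6.1, 6.4]

# some x in rates satisfies x < 6.0
some_low = any(x < 6.0 for x in rates)    # true if any member qualifies

# every x in rates satisfies x < 7.0
every_low = all(x < 7.0 for x in rates)   # true only if all members qualify
```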

Filter

An expression in square brackets immediately following a list expression models a filter of the list. When the bracketed expression is a Boolean, the filter selects list items for which the expression is true. When the bracketed expression is an integer, the filter selects the list item at that integer position.

The Filter boxed expression puts the list in the first row and the filter expression, enclosed in square brackets, in the second row. For example, instead of the literal expression

[1, 3, 7, 11][item > 5]

you can use the Filter boxed expression

Here the list to be filtered is a list of literal values, and the filter expression is a Boolean, returning the list [7, 11].

I personally do not use this boxed expression much, as I am usually using filters with tables, following the filter expression with .[columnName], or extracting an item from a singleton list by appending the filter [1], and you cannot do either of those with this boxed expression.
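For comparison, the filter forms behave like the following Python. Note that FEEL positions are 1-based while Python's are 0-based; the loan table rows are invented for illustration.

```python
items = [1, 3, 7, 11]

# Boolean filter: [1, 3, 7, 11][item > 5] keeps items where the expression is true
filtered = [item for item in items if item > 5]

# integer filter: FEEL items[1] is the first item (Python index 0)
first = items[0]

# table filter followed by a column projection,
# as in Loan Table[pointsPct < 1.0].lenderName
loan_table = [
    {"lenderName": "Acme", "pointsPct": 1.2},
    {"lenderName": "Bravo", "pointsPct": 0.5},
]
cheap_lenders = [row["lenderName"] for row in loan_table if row["pointsPct"] < 1.0]
```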

The Bottom Line

If your use of DMN stops at DRDs and decision tables, you owe it to yourself to go further. The creators of DMN created these boxed expressions for you, since breaking complex logic apart using standardized table formats makes it easier to create and easier to understand than long, complex FEEL expressions. It’s really not so hard. If you want to learn more about how to do it, check out the book DMN Method and Style 3rd edition or our DMN Method and Style training. You’ll be glad you did!

Follow Bruce Silver on Method & Style.


Bruce Silver's blog post - DMN List and Table Basics
Bruce Silver
Blog

DMN List and Table Basics

By Bruce Silver

Read Time: 5 Minutes

Beginners in DMN often define a separate variable for every simple value in their model, a string, for example, or a number. But it’s often far better to group related values within a single variable, and the FEEL language in DMN has multiple ways to do that:

Defining DMN variables as structures, lists, and tables rather than exclusively simple types usually makes your models simpler, more powerful, and easier to maintain.

Datatypes

Trisotech Decision Modeler makes it easy to define structured datatypes. You simply provide a name and type for each component, as you see here:

There are many ways to create a list. The simplest is the list operator, square brackets enclosing a comma-separated list of expressions, such as [1,2,3]. The datatype of a list variable is called a collection, and best practice is to name the type Collection of [item type]. For example, the datatype of a table of IncomeItems with columns IncomeType and MonthlyAmount would be specified like this:

List Functions

FEEL provides a wide variety of built-in functions that operate on lists and tables. Some are specific to lists of numbers and date/times, but most apply to any list item type. Examples are shown in the table below:

Filters

To select an item from a list or table, FEEL provides the filter operator, modeled as square brackets immediately following the list or table name, containing a filter expression, either an integer or a Boolean. If an integer, the filter returns the item in that position. For example, myList[1] returns the first item in myList. A Boolean expression returns all items for which the expression is true. For example, Loan Table[ratePct<4.5] returns the collection of all Loan Table items for which the component ratePct is less than 4.5.

A Boolean filter always returns a collection, even if you know it can return at most a single item. If each Customer has a unique id, Customer[id=123456] will return a list containing one Customer, and Customer[id=123456].Name returns a list containing one Name. A list containing one item is NOT the same as the item on its own. It is best practice, then, to append the filter [1] to extract the item value, making the returned value type Text rather than Collection of Text: Customer[id=123456].Name[1].

It is quite common that a filtered list or table is the argument of a list function, for example, count(Loan Table[ratePct<4.5]) returns the count of rows of Loan Table for which ratePct is less than 4.5.

If a Boolean filter expression finds no items for which the expression is true, the returned value is the empty list [], and if you try to extract a value from it you get null. For example, if the table Customer has no entry with id=123456, then Customer[id=123456] is [] and Customer[id=123456].Name[1] is null. Because of situations like this, DMN recently introduced B-FEEL, a dialect of FEEL that better handles null values, and you should be using B-FEEL in your models. Without B-FEEL, concatenating a null string value with other text returns null; with B-FEEL the null value is replaced by the empty string “”.
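The empty-list and null behavior can be sketched in Python, with [] playing the role of the empty list and None standing in for null; the customer data is invented for illustration.

```python
customers = [
    {"id": 111111, "Name": "Ada"},
    {"id": 222222, "Name": "Bob"},
]

def customer_name(cid):
    # Customer[id=cid].Name — a Boolean filter always returns a list
    names = [c["Name"] for c in customers if c["id"] == cid]
    # Appending [1] in FEEL extracts the single item;
    # an empty result yields null (None here).
    return names[0] if names else None
```

So customer_name(111111) returns the item value "Ada", while an id with no matching row returns None rather than raising an error, mirroring the null result described above.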

Iteration

A powerful feature of FEEL is the ability to iterate some expression over all items in a list or table. The syntax is

for [range variable] in [list name] return [output expression]

Here range variable is a name you select to stand for a single item value in the list. For example, if LoanTable is a list of mortgage loan products specified by lenderName, rate, pointsPct, fees, and term, we can iterate the amortization formula over each loan product to get the monthly payment for each loan product:

for x in LoanTable return payment(RequestedAmt*(1+x.pointsPct/100), x.rate/100, x.term)

Here x is the range variable, meaning a single row of LoanTable, and payment is a BKM containing the amortization formula based on the loan amount, loan rate, and term to get the monthly mortgage payment. To simplify entry of this expression, Trisotech provides an iterator boxed expression.

In addition to this iteration over item value, FEEL provides iteration over item position, with the syntax

for [range variable] in [integer range] return [output expression]

For example,

for i in 1..10 return i*i

returns a list of the square of integers from 1 to 10.
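As a sketch of the equivalent semantics, iteration over item position translates to a Python comprehension over a range (note that range's upper bound is exclusive, unlike FEEL's 1..10):

```python
# for i in 1..10 return i*i
squares = [i * i for i in range(1, 11)]
```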

Testing Membership in a List

FEEL provides a number of ways to check whether a value is contained in some list.

some x in [list expression] satisfies [Boolean expression]

returns true if any list item satisfies the Boolean test, and

every x in [list expression] satisfies [Boolean expression]

returns true only if all list items satisfy the test.

The in operator is another way of testing membership in a list:

[item value] in [list expression]

returns true if the value is in the list.

The list contains() function does the same thing:

list contains([list expression], [value])

And you can also use the count() function on a filter, like this:

count([filtered list expression])>0
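All four membership tests have direct Python analogues; the list below is invented for illustration.

```python
values = [2, 4, 6]

some_gt_5 = any(x > 5 for x in values)               # some x in values satisfies x > 5
every_even = all(x % 2 == 0 for x in values)         # every x in values satisfies ...
has_4 = 4 in values                                  # the "in" operator / list contains()
count_gt_5 = len([x for x in values if x > 5]) > 0   # count(filtered list) > 0
```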

Set Operations

The membership tests described above check whether a single value is contained in a list, but sometimes we have two lists and we want to know whether some or all items in listA are contained in listB. Often what we really want is to compare setA to setB, the deduplicated, unordered versions of the lists. If necessary, we can use the function distinct values() to remove duplicates from the lists.

Intersection

listA and listB intersect if they have at least one value in common. To test intersection, we can use the expression

some x in listA satisfies list contains(listB, x)

Containment

To test whether all items in listA are contained in listB we can use the every operator:

every x in listA satisfies list contains(listB, x)

Identity

The simple expression listA=listB returns true only if the lists are identical, position by position. Testing whether the deduplicated unordered sets are identical is a little trickier. We can use the FEEL union() function to concatenate and deduplicate the lists, in combination with the every operator:

every x in union(listA, listB) satisfies (list contains(listA, x) and list contains(listB,x))
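A Python sketch of the three comparisons; the lists are invented for illustration.

```python
list_a = [1, 2, 2, 3]
list_b = [3, 1, 2]

# identity: position-by-position equality (False here: different order and length)
identical = list_a == list_b

# set identity, mirroring the union()+every construction above
union = set(list_a) | set(list_b)
same_set = all(x in list_a and x in list_b for x in union)

# Python can also compare the deduplicated, unordered sets directly
same_set_direct = set(list_a) == set(list_b)
```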

Sorting a List

We can sort a list using the FEEL sort() function, which is unusual in that the second parameter of the function is itself a function. Usually it is defined inline as an anonymous function, using the keyword function with two range variables standing for any two items in the list, and a Boolean expression of those range variables. The Boolean expression determines which range variable value precedes the other in the sorted list. For example,

sort(LoanTable, function(x,y) x.rate<y.rate)

sorts the list LoanTable in increasing value of the loan rate. Here the range variables x and y stand for two rows of LoanTable, and row x precedes row y in the sorted list if its rate value is lower.
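Python's sorted() plays the same role. Its key parameter is the common idiom, while functools.cmp_to_key is the closer analogue of FEEL's two-argument precedes function. The table rows below are invented for illustration.

```python
from functools import cmp_to_key

loan_table = [
    {"lenderName": "Bravo", "rate": 6.1},
    {"lenderName": "Acme", "rate": 5.9},
]

# idiomatic Python: sort by a key
by_rate = sorted(loan_table, key=lambda row: row["rate"])

# closer to FEEL's sort(LoanTable, function(x,y) x.rate < y.rate):
# the comparator returns -1 when x should precede y
precedes = lambda x, y: -1 if x["rate"] < y["rate"] else 1
by_rate_cmp = sorted(loan_table, key=cmp_to_key(precedes))
```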

Replacing a List Item

Finally, we can replace a list item with another using Trisotech’s list replace() function, another one that takes a function as a parameter. The syntax is

list replace([list], [match function], [newItem])

The match function is a Boolean test involving a range variable standing for some item in the original list. For example, EmployeeTable lists each employee’s available vacation days. When the employee takes a vacation, the days are deducted from the previous AvailableDays to generate an updated row NewRecord. We use the match function to select the employee matching the EmployeeId in another input, VacationRequest.

Now we want to replace the employee record in that table with NewRecord:

list replace(EmployeeTable, function(x, VacationRequest) x.employeeId=VacationRequest.employeeId, NewRecord)

Here the range variable x stands for some row of EmployeeTable, and we are going to replace the row matching the employeeId in VacationRequest. So in this case we are replacing a table row with another table row, but we could use list replace() to replace a value in a simple list as well.
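A Python sketch of the list replace() semantics. The list_replace helper below is a hypothetical stand-in for the built-in function, and the table and request data are invented for illustration.

```python
def list_replace(lst, match, new_item):
    # Replace each item for which match(item) is true with new_item.
    return [new_item if match(x) else x for x in lst]

employee_table = [
    {"employeeId": "E1", "AvailableDays": 20},
    {"employeeId": "E2", "AvailableDays": 15},
]
vacation_request = {"employeeId": "E2", "days": 5}

# NewRecord: the matched employee's row with the vacation days deducted
new_record = {"employeeId": "E2", "AvailableDays": 15 - vacation_request["days"]}

updated = list_replace(
    employee_table,
    lambda x: x["employeeId"] == vacation_request["employeeId"],
    new_record,
)
```

The lambda is the match function: it selects the row whose employeeId equals the one in the vacation request, and only that row is replaced.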

The Bottom Line

In real-world decision logic, your source data is often specified using lists and tables. Rather than extracting individual items in your input data and processing one item at a time, it is generally easier and faster to process all the items at once using lists and tables. FEEL has a wide variety of functions and operators that make it easy, and the Automation features of Trisotech Decision Modeler make it easy to execute them.

Want a deeper dive into the use of lists and tables? Check out our DMN Method and Style training.

Follow Bruce Silver on Method & Style.


Bruce Silver's blog post - BPM+: The Enduring Value of Business Automation Standards
Bruce Silver
Blog

BPM+: The Enduring Value of Business Automation Standards

By Bruce Silver

Read Time: 5 Minutes

While it no longer has the marketing cachet it once enjoyed, the fact that the foundations of the Trisotech platform are the three business automation standards from the Object Management Group – BPMN, CMMN, and DMN – remains a key advantage, with enduring benefits to users. In this post we’ll discuss what these standards do, how we got them, and their enduring business benefits.

In the Beginning

In the beginning, let’s say the year 2000, there were no standards, not for business automation, not for almost anything in tech, really. The web changed all that. I first heard about BPMN in 2003. I had been working in something called workflow, automating and tracking the flow of human tasks, for a dozen years already, first as an engineering manager and later as an industry analyst. There were many products, all completely proprietary. But suddenly there was this new thing called BPM that promised to change all that.

Previously, in addition to workflow, there had been a completely separate technology of enterprise application integration or EAI. It was also a process automation technology but based on a message bus, system-to-system only, no human tasks, and again all proprietary technology. It was a mess.

Now a new technology called web services offered the possibility to unify them based on web standards. In the earlier dotcom days, the web just provided a way to advertise your business, maybe with some rudimentary e-commerce. Now people realized that the standard protocols and XML data of the web could be used to standardize API calls in the new Service-Oriented Architecture (SOA). A Silicon Valley startup called Intalio proposed using web services to define a new process automation language called BPML that unified workflow and application integration in this architecture. But they went further: Surprisingly, solutions using this language would be defined graphically, using a diagram notation called BPMN. The goal was engaging business users in the transition to SOA, and BPMN offered the hope of bridging the business-IT divide that had made automation projects fail in the past.

Enterprise application software vendors liked it too. Functionality that previously was embedded in their monolithic applications could be split off into microservices that could be invoked as needed and sequenced using the protocols of the world wide web. This was a game-changer. Maybe you could make human workflow tasks into services as well.

As it evolved, BPMN 1.0 took on the look of swimlane flowcharts, which were already used widely by business users, although without standard semantics for the shapes and symbols. Now with each shape bound to the semantics of the execution language BPML, BPMN promised “What You Draw Is What You Execute.” Intalio was able to organize a consortium of over 200 software companies called BPMI.org in support of this new business automation standard.

But in fact, they were too early, since the core web services standard WSDL was not yet final, and in the end BPML did not work with it. So BPML was replaced by another language BPEL that did. While the graphical notation BPMN remained popular with business for process documentation and improvement, it was not a perfect fit with BPEL for automation. There were patterns you could draw in BPMN that did not map to BPEL. So no longer could it claim “what you draw is what you execute.” BPMI.org withered and ultimately was acquired by OMG, a true standards organization.

Even so, BPMN remained extremely popular with business users. They were not, in fact, looking to automate their processes, merely to document them for analysis and potential improvement. Still the application software vendors and SOA middleware vendors pushed for a new version of BPMN that would make the diagrams executable. But it was not until 2010 that BPMN 2.0 finally gave BPMN diagrams a proper execution language. I had been doing BPMN training on version 1.x since 2007, and somehow I wound up on the BPMN 2.0 task force, representing the interests of those business users doing descriptive modeling.

BPMN 2.0 was widely adopted by almost all process automation vendors. Later OMG developed two other business automation standards, CMMN for case modeling and DMN for decision modeling, all based on a similar set of principles. There are three standards because they cover different things. They were developed at different times for different reasons. So let me briefly describe what they do, their common principles, and why they remain important today.

BPMN

BPMN was the first. BPMN describes a process as a flowchart of actions leading from some defined triggering event to one of several possible end states. Each action typically represents either a human task or an automated service, for the first time unifying human workflow and EAI. The diagrams include both the normal “happy path” and branching paths followed by exceptions. It is easy to see the sequence of steps the process may follow, but every instance of the process must follow some path in the diagram. That is both its strength and its basic limitation.

BPMN remains very popular because the diagrams are intuitive. Users understand them in general terms without training, and what you draw is what you execute. BPMN can describe many real-world processes but not all of them.

There are actually two distinct communities of BPMN users. It is interesting that they interact hardly at all. One uses BPMN purely for process description. They have no interest in automation, just documentation and analysis. Those BPMN users simply want to document their current business processes with hope of improving them, since the diagrams, if made properly with something like Method and Style, more clearly express the process logic than a long text document. For them, the value of BPMN is it generally follows the look of traditional swimlane flowcharts.

The other community is interested in process automation, the actual implementation of new and improved processes. For them, the value of BPMN is what you draw is what you execute. It is what allows business teams to better specify process requirements and generally collaborate more closely with development teams. This reduces time to value and generally increases user acceptance.

As a provider of BPMN training, my interactions have been mostly with the first community. In a prior life as an industry analyst, they were exclusively with the second community. So I have a good sense of the expectations of both groups. And while I believe the descriptive modeling community is probably still larger, it is now declining, and here’s why.

In a descriptive modeling team, there are typically meetings and workshops where the facilitator asks, “How does the process start? What happens next?” and so forth. That’s good for capturing the happy path, the normal activity flow to a successful end state, which is the main interest. But BPMN diagrams also contain gateways and events that define exception paths, since, remember, every instance of the process must follow some path drawn in the diagram. In reality, however, you are never going to capture all the exceptions that way.

So over the past decade, a lot of process improvement effort has shifted away from descriptive process modeling toward something called process mining: instrumenting key steps of the process as it runs, however they are achieved, and correlating the timestamps to create a picture of all the paths from start to end, including, importantly, that small fraction of instances that take much too long, cost far too much, and are responsible for complaints to customer service. Dealing with those cases leads to significant process improvement, and automating them requires accommodating them somehow.

Some process mining tools map their findings back to BPMN, but BPMN demands a reason (some gateway condition or event) to explain each divergence from the happy path. This points out the key weakness of BPMN as an automation language, the difficulty in accommodating all the weird exceptions that occur in real-world scenarios. Something more flexible is needed.

CMMN

CMMN was created to handle those processes that BPMN cannot. It focuses on a key difference from BPMN: work that takes an unpredictable path over its lifetime, emphasizing event-triggered behavior and ad-hoc decisions by knowledge workers.

A case in CMMN is not described by a flowchart, where you can clearly see the order of actions, but rather by a diagram of states, called stages, each specifying actions that MAY be performed or in some cases MUST be performed in order to complete some milestone or transition to some other state. That allows great flexibility, a better way than BPMN, in fact, to handle real-world scenarios in which it is impossible to enumerate all the possible sequences of actions needed.

This comes at a cost, however. While what you draw is still what you execute, what you draw is not easy to understand. CMMN diagrams are not flowcharts but state diagrams. They are not intuitive, even if you understand the shapes and symbols well. Each diagram shape goes through various lifecycle states (Available, Enabled, Active, Completed, or Terminated), and each change of state is represented by a standard event that triggers state transitions in other shapes. So in the end the diagram, when fully expanded down to individual tasks, shows actions that MAY be performed or MUST be performed, and the events that trigger other shapes.

In practice, CMMN is almost always used in conjunction with BPMN, as the specific actions described, typically a mix of automated services and human interaction, are better modeled as BPMN processes. So typically in a business automation solution, CMMN is at the top level, where the choice of what actions to perform is flexible and event-driven. Many of the CMMN actions are modeled as process tasks, meaning enactment of a BPMN process. Organizing the solution in this way has the benefit of CMMN’s flexibility to handle the events and exceptions of real-world end-to-end scenarios, and BPMN’s business-understandable automation of the details of the required actions.

Above is an example from my book CMMN Method and Style, based on social care services in a European country. The stage Active Case contains 4 sub-stages without any entry conditions. That means they can be activated at any time. The triangle marker at the bottom means manually activated; without that they would activate automatically. And notice within each stage are a mix of human tasks, with the head-and-shoulders icon, and process tasks, with the chevron icon. At the bottom are 4 more tasks not in a substage that deal with administrative functions of the case. They potentially can occur at any time when Active Case is in the active state, but they have entry conditions (the white diamonds), meaning they must be triggered by events, either a user interaction or a timer. This diagram does not include any file item events (receipt or update of some document or data), which also trigger tasks and stage entry or exit. As you can see, the case logic is very flexible (almost anything could be started at any time) but at this level not too revealing as to the details of the actions. Those are detailed within the BPMN processes and tasks.

Here is another CMMN example that illustrates the flexibility needed for what seems like a simple thing like creating a user account in a financial institution. Yes, that is a straightforward BPMN process, but it lives inside a wider context, which is managing the user's status as a client. A new client has to be initialized, set up (that's another BPMN process), and then you can add one or more accounts as long as the client remains an Active User. You can deactivate the user (that's another process), which automatically terminates the Manage Active User stage (that's the black diamond) and triggers the Manage Inactive User stage (that's the link to the white diamond labeled exit, exit being the lifecycle event associated with the Terminated state of Manage Active User).

The only thing that happens in the Manage Inactive User stage is you can reactivate the user, another process. And when that is complete, since there is nothing else in that stage, the stage is complete, meaning the link to the white diamond in Manage Active User labeled complete triggers activation of that stage. The hashtag marker on both stages means after completion or termination they can be retriggered; without that marker they could not. And finally, the process task Delete user has no entry conditions, so it can be activated manually at any time, and when complete it terminates the case (the black diamond). So what seemed initially like a simple process, Add User Account, in a real-world application expands to a case handling the whole lifecycle of that client.

DMN

DMN describes not actions but business logic, understandable by non-programmers but deployable nevertheless as an executable service. DMN is a successor to business rule engine technology from the 1990s, better aligned with today’s microservice-oriented architecture. In DMN, a decision service takes data in and returns data out. It takes no other actions. Like BPMN and CMMN, design is largely graphical.

The diagram you see here, called a Decision Requirements Diagram, shows the dependencies of each rectangle, representing a decision, on supporting decisions and input data connected to it via incoming arrows called information requirements. The value expression of each decision can reference only those direct information requirements. Even if they are unable to model the value expressions, business users are able to create these diagrams, which serve as verifiable business requirements for the executable solution.

But DMN was designed to let non-programmers go further, all the way in fact to executable decision services. Instead of program code, DMN provides standard tabular formats called boxed expressions. These are what define the expressions that compute the value of each decision.

Decision tables, like the one shown above, are the most familiar boxed expression type, but there are many more, such as Contexts, like the one shown below, where rows of the table define local variables called context entries, used to simplify the expressions in subsequent rows. Note here that the value expression of a context entry, such as Loan amount here, can itself be a context or some other boxed expression.

The expressions of each context entry, the cells in gray here, use a low-code expression language called FEEL. Although FEEL is defined in the DMN spec, it has the potential to be used in BPMN and CMMN as well. More about that later.
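As a rough textual sketch of how context entries build on one another (the variable names here are hypothetical, and in a real model this would be entered as a boxed context rather than typed out), the FEEL equivalent might look like this:

```FEEL
{
    // context entries act as local variables, usable in later entries
    Loan amount: Purchase price - Down payment,
    Monthly rate: Annual rate / 12,
    // the final entry computes the result from the entries above
    Monthly payment: Loan amount * Monthly rate / (1 - (1 + Monthly rate)**-360)
}
```

Each row simplifies the expression in the rows that follow, which is exactly the productivity benefit the boxed context provides.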

The combination of DRDs, boxed expressions, and FEEL means business users can create directly executable decision logic themselves without programming. In BPMN and CMMN, graphical design is limited to an outline of the process or case logic, the part shown in the diagram. To make the model fully executable, those standards normally still require programming.

On the Trisotech platform, however, that’s no longer the case. When BPMN and CMMN were developed, they did not define an expression language, because they assumed, correctly at the time, that the details of system integration and user experience would require programming in a language like Java. But DMN was published 6 years later, when software tools were more advanced. DMN was intended to allow non-programmers to create executable decision services themselves using a combination of diagrams, tables, and FEEL, a standard low-code expression language. An expression language is not a full programming language; it does not define and update variables, but just computes values through formulas. For example, the Formulas ribbon in Excel represents an expression language.

While adoption of FEEL as a standard has been slow, even within the decision management community, many business automation vendors have implemented their own low-code expression languages with the objective of further empowering business teams to collaborate with developers in solutions. Low-code expression languages should be seen today as the important other half of model-based business automation, along with diagrams composed of standardized shapes, semantics, and operational behavior. The goal is faster time-to-value, easier solution maintenance, and improved user acceptance through close collaboration of business and development teams.

Shared Principles

Three key principles underlie these OMG standards, which play a central role in the Trisotech platform today, and I want to say more about why they are important.

The first is that they are all model-based, meaning diagrams and tables, and those models are used both for outlining the desired solution (what have been called business requirements) and for creating the executable solution itself, going back to the original promise of What You Draw Is What You Execute.

The traditional method of text-based business requirements is a well-known cause of project failure. Instead, models allow subject matter experts in the business to create an outline of the solution in the form of diagrams and tables that can be verified for completeness and self-consistency. Moreover, the shapes and symbols used in the diagrams are defined in a spec, so the models look the same, and work the same, in any tool. And even better, the execution semantics of the shapes is also spelled out in a spec, so what you draw is actually what you execute. Engaging business users together with IT from the start of the project speeds time to value and increases user acceptance.

Second, the models have a standard XML file format, so they can be interchanged between tools without loss of fidelity. In practice, for example, that means business users and developers can use different tools and interchange models between them. I've been involved in engagements where the business wants to use modeling tool X, and IT, in a separate organization, insists on using tool Y for the runtime. That's not great, but doable, since you can interchange models between the tools using the standard file format.

Third, OMG procedures ensure vendor-neutrality. Proprietary features cannot be part of the standard, and the IP is free to use. In practice this tends to result in open-source tools and lower software cost. It may not be widely appreciated but many, possibly the majority, of BPMN diagramming tools today stem from an open source project, and another open source project created a popular BPMN runtime. DMN is a similar story. Tool vendors were stumped by the challenge of parsing FEEL, where variable and function names may contain spaces, until Red Hat made their FEEL parser and DMN runtime open source. Implementing these standards is difficult, and open source software has been a big help in encouraging adoption.

Enduring Business Value

So where is the business value of these standards? What are the benefits to customers? In my view, these fall into 3 categories:

First, common terminology and understanding. I remember the days of workflow automation before BPMN. There was no common terminology. You learned a particular tool, and tools were for programmers. Now BPMN, CMMN, and DMN provide tool-independent understanding of processes, cases, and decision logic. There is much wider availability of information about these technologies, through books, web posts, and training, down to the details of modeling and automation. This in turn makes it easier to hire employees and engage service providers already conversant and experienced in the technology. It also lowers the re-learning cost if it becomes necessary to switch platform vendors.

Benefit #2 is faster time-to-value. Today this might be the most critical one for customers. Model-based solutions are simply faster to specify, faster to implement, and faster to adapt to changing requirements than traditional paradigms. Faster to specify because you can more easily engage the subject matter experts up front in requirements development and more quickly verify the requirements for completeness and self-consistency. Faster to implement because you are building on top of standard automation engines. The custom code is just for specific system integration and user experience details. And faster to adapt to changing requirements because often this involves only changing the model not the custom code.

So, for example, a healthcare provider can go live with a Customer Onboarding portal in only 3 months instead of a year, leveraging involvement from business users across multiple departments and close collaboration between business and IT. Everyone is using the same terminology, the same diagrams, and what you draw is what you execute.

A key feature of the 3 OMG standards is they were designed to work together. CMMN, for example, has a standard task type that calls a BPMN process and another that calls a DMN decision service. BPMN can call a DMN decision service and, on the Trisotech platform, a CMMN case. And so you have platforms like Trisotech that handle the full complement of business automation requirements: event-driven case management, straight-through process automation, long-running process automation, and decision automation. The alternative is a lot of custom code (expensive and hard to adapt to changing requirements) or separate platforms for process, decision, and case management (again expensive and requiring a lot of system integration).

A single platform that handles all three, especially one based on models, business engagement, and vendor-independent standards, lowers costs: the cost of software, of implementation, and of maintenance of the solution over the long term.

SDMN, a Common Data Language

While BPMN, CMMN, and DMN work together well, they still lack a common format for data. DMN uses FEEL, but in most tools, executable BPMN and CMMN models use a programming language like Java to define data. It would be better if they shared a way to define data. Trisotech uses FEEL for all three model types, which is ideal. But now there is an effort to make that common data language a standard.

BPM+ Health is an HL7 “community of practice” in the healthcare space seeking industry convergence around such a common data language, SDMN. That language is essentially FEEL. Technically, it is defined by the OMG Shared Data Model and Notation (SDMN) standard, a key BPM+ Health initiative seeking to unify data modeling across BPMN, CMMN, and DMN. 

Data modeling is primarily used in executable models and omitted from purely descriptive models. The challenge for SDMN is to unify DMN variables, BPMN data objects, and CMMN case file items, and describe them as what DMN calls item definitions.

SDMN defines a standards-based logical data model for data used across BPMN, CMMN, and DMN. Data linked to SDMN definitions is not specific to a particular model but usable consistently across models of different types. In this sense it is the logical data model equivalent of the Trisotech Knowledge Entity Modeler. But where KEM defines a conceptual data model, focused on term semantics, SDMN defines a logical data model, focused on data structure, what DMN calls item definitions.

At this time, SDMN 1.0 is still in beta, but two artifacts illustrate what is planned. The DataItem diagram, shown below, describes the names and types of data items, and structural relationships between them.

In addition, an ItemDefinition diagram, again shown below, details the structure of each data item.

Trisotech users will recognize this as an alternative way to create item definitions, easier for tool vendors without Trisotech’s graphics capability. I still prefer Trisotech’s collapsible implementation:

The Path Forward

Going forward, look for more solutions developed using BPMN, CMMN, and DMN in combination. Typically CMMN is used at the top level, where it can handle all possibilities in a real-world process application. Specific actions are best implemented as BPMN processes, invoked from CMMN as a process task. Decision logic can be called from either CMMN or BPMN as a decision task. So you see why having common data definitions for all three modeling languages is valuable in such integrated solutions.

On the Trisotech platform, SDMN, in the form of FEEL, is already supported in all three languages, providing the additional benefit of Low-Code business automation. Starting in healthcare, SDMN will make this integration enabler available on other platforms as well.

Follow Bruce Silver on Method & Style.


DecisionCamp 2024 Presentation
Revolutionizing Credit Risk Management in Banking

Presented By
Stefaan Lambrecht (The TRIPOD for OPERATIONAL EXCELLENCE)
Description

The European Banking Authority (EBA) Dear CEO letter, typically issued to provide guidance and expectations for banks on key regulatory issues, emphasizes the need for stringent credit risk management, continuous monitoring, and compliance with evolving regulations.

The primary challenge for banks in monitoring customers and credit risks is the complexity and volume of data that must be continuously analyzed and acted upon. This complexity arises from several factors: the variety of triggers, the volume and complexity of metrics, continuous monitoring, quickly adaptable regulatory compliance, and a comprehensive 360-degree customer view.

By leveraging DMN modeling & execution banks can effectively meet the EBA’s expectations outlined in the Dear CEO letter. DMN engines provide a robust solution for automated decision-making, continuous monitoring, regulatory compliance, and transparency, ensuring that banks can manage credit risks proactively and efficiently while maintaining the required standards set by the EBA and other regulatory bodies. This alignment not only helps in fulfilling regulatory obligations but also strengthens the overall financial health and stability of the bank.

During his presentation Stefaan Lambrecht will demonstrate an end-to-end solution to these challenges, inspired by a real-life case and making integrated use of DMN, CMMN, and BPMN.

Watch the video



Mastering Decision Centric Orchestration

Balancing Human Insight and AI Automation for Justifiable, Context-Aware Business Choices

Presented By
Denis Gagne, CEO & CTO, Trisotech
Description

In today’s dynamic business landscape, decision-centric orchestration is pivotal for organizational success. This presentation delves into the diverse spectrum of decisions within an organization, ranging from those that are purely human to those that are fully automated. We will explore how various types of artificial intelligence (AI) support decision automation, with a particular emphasis on the crucial role of context in making informed business choices. Attendees will gain a comprehensive understanding of how context-aware AI can enhance decision-making processes, ensuring that outcomes are not only efficient but also relevant to specific business needs.

Highlighting the necessity of explainable and justifiable decisions, we will discuss the imperative of maintaining human oversight in all business processes. The presentation will address the challenges and opportunities associated with integrating AI into decision-making frameworks, focusing on the balance between human intuition and AI-driven efficiency. Attendees will learn strategies for achieving a harmonious integration of these elements, ensuring that their organization’s decision-making is robust, transparent, and aligned with ethical standards. Ultimately, this session aims to equip business leaders with the knowledge and tools to leverage both human insight and AI automation for superior organizational performance.

Watch the video



What Is a Decision Service?

By Bruce Silver

Read Time: 5 Minutes

Beginning DMN modelers might describe a decision service as that rounded rectangle shape in a DRD that behaves similarly to a BKM. That’s true, but it is a special case. Fundamentally, a decision service is the unit of execution of DMN logic, whether that is invoked by a decision in the DRD, a business rule task in BPMN, a decision task in CMMN, or an API call in any external client application or process. Whenever you execute DMN, whether in production or simply testing in the modeling environment, you are executing a decision service.

In Trisotech Decision Modeler, that is the case even if you never created such a service yourself. That’s because the tool has created one for you automatically, the Whole Model Decision Service, and used it by default in model testing. In fact, a single decision model is typically the source for multiple decision services, some created automatically by the tool and some you define manually, and you can select any one of them for testing or deployment. In this post we’ll see how that works.

Service Inputs, Outputs, and Encapsulated Decisions

As with any service, the interface to a decision service is defined by its inputs and outputs. The DMN model also defines the internal logic of the decision service, the logic that computes the service’s output values from its input values. Unlike a BKM, where the internal logic is a single boxed expression, the internal logic of a decision service is defined by a DRD.

When the service is invoked by a decision in a DMN model, typically it has been defined in another DMN model and imported to the invoking model, such as by dragging from the Digital Enterprise Graph. But otherwise, the service definition is a fragment of a larger decision model, including possibly the entire DRD, and is defined in that model.

Within that fragment, certain decisions represent the service outputs, other decisions and input data represent the inputs, and decisions in between the outputs and inputs are defined as “encapsulated”, meaning they are used in the service logic but their values are not returned in the service output. When you execute a decision service, you supply values to the service inputs and the service returns the values of its outputs.

One Model, Many Services

In Trisotech’s Whole Model Decision Service, the inputs are all the input data elements in the model, the outputs are all “top-level” decisions – that is, decisions that are not information requirements for other decisions. All other decisions in the DRD are encapsulated. In addition to this service, Trisotech automatically creates a service for each DRD page in the model, named Diagram Page N. If there is only one DRD page in the model, it will be the same as the Whole Model Decision Service, but if your model has imported a decision service or defines one independently of the invoking DRD, an additional Diagram Page N service will reflect that one.

All that is just scratching the surface, because quite often you will want to define additional decision services besides those automatically created. For example, you might want your service to return one or more decisions that are defined as encapsulated in the Whole Model Decision Service. Or you might want some inputs to your service to be supporting decisions rather than input data elements. In fact, this is very common in Trisotech Low-Code Business Automation models, where executable BPMN typically invokes a sequence of decision services, each a different fragment of a single DMN model. In BPMN, if you are invoking the Whole Model Decision Service it's best to rename it, because the BPMN task that invokes it inherits the decision service name as the task name.

Defining a Decision Service

So how do you define a decision service? The DMN spec describes one way, but it is not the best way. The way described in the spec is as a separate DRD page containing an expanded decision service shape, a resizable rounded rectangle bisected by a horizontal line. The shape label is the service name. Decisions drawn above the line are output decisions, those below the line are encapsulated decisions. Service inputs – whether input data or decisions – are drawn outside the service shape, as are any BKMs invoked by decisions in the service. Information requirements and knowledge requirements are drawn normally.

The better way, at least in Trisotech Decision Modeler, is the decision service wizard.

In the wizard, you first select the output decisions from those defined in your model, and the wizard populates the service inputs with their direct information requirements. You can then promote selected inputs to encapsulated, and the wizard recalculates the needed inputs. You can keep doing that until all service inputs are input data, or you can stop anywhere along the way. The reason why this is better is that it ensures that all the logic needed to compute the output values from the inputs is properly captured in encapsulated decisions. You cannot guarantee that with the expanded decision service shape method.

Testing DMN Models

Trisotech’s Automation feature lets you test the logic of your DMN models, and I believe that is critically important. On the Execution ribbon, the Test button invites you first to select a particular decision service to test. If you forget to do this, the service it selects by default depends on the model page you have open at the time you click Test.

In Test, the service selector dropdown lists even more services than the automatically generated ones and those you created manually, which are listed above a gray line in the dropdown. Below the line is listed a separate service for every decision in the model, named with the decision name, with direct information requirements as the inputs. (For this reason, you should not name a decision service you create with the name of a decision, as this name conflicts with the generated service.) In addition, below the line is listed one more: Whole model, whose inputs are all the input data elements and outputs are all the decisions in the model. It’s important to note that these below-the-line services are available only for testing in Decision Modeler. If you want to deploy one of them, you need to manually create it, in which case it is listed above the line.

In Test, your choice of decision service from the dropdown determines the inputs expected by the tool. As an alternative to the normal HTML form, which is based on the datatypes you have assigned to the inputs, you can select an XML or JSON file with the proper datatype, or use a previously saved test case.

Invoking a Decision Service in DMN

Invoking a decision service in DMN works the same way as invoking a BKM. On the DRD containing the invocation, the decision service is shown as a collapsed decision service shape linked to the invoking decision with a knowledge requirement.

The invoking decision can use either a boxed invocation or literal invocation. In the former, service inputs are identified by name; in the latter, input names are not used. Arguments are passed in the order of the parameters in the service definition, so you may need to refer to the service definition to make sure you have that right.
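As a sketch with hypothetical names: if a decision service Approval Service is defined with parameters Credit Score and then Affordability, its literal invocation inside the invoking decision's expression would look like this in FEEL:

```FEEL
// literal invocation: arguments are positional,
// matching the parameter order in the service definition
Approval Service(Credit Score, Affordability)
```

A boxed invocation expresses the same call as a table, pairing each parameter name with its binding expression, which is easier to verify when the service has many parameters.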

Invoking a Decision Service in BPMN

In Business Automation models it is common to model almost any kind of business logic as a DMN decision service invoked by a business rule task, also called a decision task. In Trisotech Workflow Modeler, you need to link the task to a decision service in your workspace; it is not necessary to first Deploy the service. (Deployment is necessary to invoke the service from an external client.) As mentioned previously, the BPMN task inherits the name of the decision service. By default, the task inputs are the decision service inputs and the task outputs are the decision service outputs.

Data associations provide data mappings from process variables (BPMN data objects and data inputs) to the task inputs, and from task outputs to other process variables (BPMN data objects and data outputs). On the Trisotech platform, these data mappings are boxed expressions using FEEL, similar to those used in DMN.
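A minimal sketch of such an input mapping, with all names hypothetical: assuming a process data object Applicant with a Credit Score component and a separate variable holding an affordability result, the mapping to the task inputs is just a FEEL expression over process variables:

```FEEL
{
    // each task input is bound to an expression over process variables
    Credit Score: Applicant.Credit Score,
    Affordability: Affordability Check Result
}
```

The output mapping works the same way in reverse, assigning task outputs to process data objects.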

The Bottom Line

The important takeaway is that a decision service is more than a fancy BKM that you will rarely use. If you are actually employing DMN in your work, you will use decision services all the time, both for logic testing and deployment, and for providing business logic in Business Automation models. The decision service wizard makes it easy.

If you want to find out more about how to define and use decision services, check out DMN Method and Style 3rd edition, with DMN Cookbook, or my DMN Method and Style training, which includes post-class certification.

Follow Bruce Silver on Method & Style.



FEEL Operators Explained

By Bruce Silver

Read Time: 5 Minutes

Although DMN’s expression language FEEL was designed to be business-friendly, it remains intimidating to many. That has led to the oft-heard charge that “DMN is too hard for business users”. That’s not true, at least for those willing to learn how to use it. Although the Microsoft Excel Formula language is actually less business-friendly than FEEL, somehow you never hear that “Excel is too hard for business users.”

One key reason why FEEL is more business-friendly than the Excel Formula language, which Microsoft now calls Power Fx, is its operators. FEEL has many, and Power Fx has very few. In this post we'll discuss what operators are, how they simplify the expression syntax, and how DMN boxed expressions make some FEEL operators more easily understood by business users.

It bears repeating that an expression language is not the same as a programming language. A programming language has statements. It defines variables, calculates and assigns their values. You could call DMN as a whole a programming language, but the expression language FEEL does not define variables or assign their values. Those things are done graphically, in diagrams and tables – the DRD and boxed expressions. FEEL expressions are simply formulas that calculate values: data values in, data values out.

Functions and Operators

Those formulas are based on two primary constructs: functions and operators.

The logic of a function is specified in the function definition in terms of inputs called parameters. The same logic can be reused simply by invoking the function with different parameter values, called arguments. The syntax of function invocation – both in FEEL and Excel Formulas – is the function name immediately followed by parentheses enclosing a comma-separated list of arguments. FEEL provides a long list of built-in functions, meaning the function names and their parameters are defined by the language itself. Excel Formulas do the same. In addition, DMN allows modelers to create custom functions in the form of Business Knowledge Models (BKMs) and decision services, something Excel does not allow without programming.
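To make the parameter/argument distinction concrete (the names here are illustrative, not from the original), a BKM's logic is essentially a FEEL function definition, invoked with the same syntax as a built-in:

```FEEL
// BKM definition: a FEEL function with two parameters
function(principal, rate) principal * rate

// invocation of that BKM, named Payment here, with two arguments
Payment(100000, 0.005)
```

The function body is written once in the BKM; each invoking decision supplies its own arguments.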

Operators are based on reserved words and symbols whose meaning is defined by the expression language itself. There are no user-defined operators. Operators do not use the syntax of a name followed by parentheses enclosing a list of arguments. As a consequence, an expression using operators is usually shorter, simpler, and easier to understand than one using functions.

You can see this from a few examples in FEEL where you could use either a function or an operator. One is simple addition. Compare the syntax of the expression adding variables a and b using the sum() function

sum(a, b)

with its equivalent using the addition operator +:

a + b

The FEEL function list contains() and the in operator do the same thing: test whether a list contains a given value. Compare

list contains(myList, "abc")

with

"abc" in myList

Both FEEL and Excel support the basic arithmetic operators like +, -, *, and /, comparison operators like =, >, or <=, and string concatenation. But those are essentially the only operators provided by Excel, whereas FEEL provides several more. It is with these more complex operators that FEEL’s business-friendliness advantage stands out.

if..then..else

Let’s start with the conditional operator, if..then..else. These keywords form an operator in FEEL, whereas Excel must use functions. Compare the FEEL expression

if Credit Score = "High" and Affordability = "OK" then "Approved" else "Disapproved"

with Excel’s function-based equivalent:

IF(AND(Credit Score = "High", Affordability = "OK"), "Approved", "Disapproved")

The length is about the same but the FEEL is more human-readable. Of course, the Excel expression assumes you have assigned a variable name to the cells – something no one ever does. So you would be more likely to see something like this:

IF(AND(B3 = "High", C3 = "OK"), "Approved", "Disapproved")

That is a trivial example. A more realistic if..then..else might be

if Credit Score = "High" and Affordability = "OK" then "Approved"
        else if Credit Score = "High" and Affordability = "Marginal" then "Referred"
        else "Disapproved"

That’s longer but still human-readable. Compare that with the Excel formula:

IF(AND(Credit Score = "High", Affordability = "OK"), "Approved", IF(AND(Credit Score = "High",
         Affordability = "Marginal"), "Referred", "Disapproved"))
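For readers who know a general-purpose language, the same cascading logic can be sketched as a Python conditional expression, which reads much like FEEL’s if..then..else. This is purely an illustrative analogy; the names loan_decision, credit_score, and affordability are made up for the sketch:

```python
# Illustrative Python analogue of the cascading FEEL if..then..else.
# The function and variable names are invented for this sketch.
def loan_decision(credit_score: str, affordability: str) -> str:
    return ("Approved" if credit_score == "High" and affordability == "OK"
            else "Referred" if credit_score == "High" and affordability == "Marginal"
            else "Disapproved")

print(loan_decision("High", "OK"))        # Approved
print(loan_decision("High", "Marginal"))  # Referred
print(loan_decision("Low", "OK"))         # Disapproved
```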

Even though the FEEL syntax is fairly straightforward, DMN includes a conditional boxed expression that enters the if, then, and else expressions in separate cells, in theory making the operator friendlier for some users and less like code. Using that boxed expression, the logic above looks like this:

Filter

The FEEL filter operator is square brackets enclosing either a Boolean or integer expression, immediately following a list. When the enclosed expression is a Boolean, the filter selects items from the list for which the expression is true. When the enclosed expression evaluates to positive integer n, the filter selects the nth item in the list. (With negative integer n, it selects the nth item counting backward from the end.) In practice, the list you are filtering is usually a table, a list of rows representing table records, and the Boolean expression references columns of that table. I wrote about this last month in the context of lookup tables in DMN. As we saw then, if variable Bankrates is a table of available mortgage loan products like the one below,

then the filter

Bankrates[lenderName = "Citibank"]

selects the Citibank record from this table. Actually, a Boolean filter always returns a list, even if it contains just one item, so to extract the record from that list we need to append a second integer filter [1]. So the correct expression is

Bankrates[lenderName = "Citibank"][1]
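As an illustrative analogy only, FEEL’s two filter forms behave like a Python list comprehension followed by a one-based index. The bankrates rows and their values below are invented for the sketch:

```python
# Illustrative Python analogue of FEEL's two filter forms.
# bankrates is a made-up stand-in for the Bankrates table.
bankrates = [
    {"lenderName": "Citibank", "ratePct": 6.5,  "pointsPct": 1.0, "fees": 1995},
    {"lenderName": "AimLoan",  "ratePct": 6.25, "pointsPct": 0.5, "fees": 2495},
]

# Boolean filter: Bankrates[lenderName = "Citibank"] -> always returns a list
matches = [row for row in bankrates if row["lenderName"] == "Citibank"]

# Integer filter [1]: FEEL is 1-based, so it maps to Python index 0
citibank = matches[0]
print(citibank["ratePct"])  # 6.5
```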

Excel Formulas do not include a filter operator, but again use a function: FILTER(table, condition, else value). So if we had assigned cells A2:D11 to the name Bankrates and the column A2:A11 to the name lenderName, the equivalent Excel Formula would be

FILTER(Bankrates, lenderName = "Citibank", "")

but would more likely be entered as

FILTER(A2:D11, A2:A11 = "Citibank", "")

FEEL’s advantage becomes even more apparent with multiple query criteria. For example, the list of zero points/zero fees loan products in FEEL is

Bankrates[pointsPct = 0 and fees = 0]

whereas in Excel you would have

FILTER(A2:D11, (C2:C11=0)*(D2:D11=0), "")

There is no question here that FEEL is more business-friendly.

Iteration

The for..in..return operator iterates over an input list and returns an output list. It means: for each item in the input list – to which we assign a dummy range variable name – calculate the value of the return expression:

for <range variable> in <input list> return <return expression, based on range variable>

It doesn’t matter what you name the range variable, also called the iterator, as long as it does not conflict with a real variable name in the model. I usually just use something generic like x, but naming the range variable to suggest the list item makes the expression more understandable. In the most common form of iteration, the input list is some expression that represents a list or table, and the range variable is an item in that list or row in that table.

For example, suppose we want to process the Bankrates table above and create a new table Payments by Lender with columns Lender Name and Monthly Payment, using a requested loan amount of $400,000. And suppose we have a BKM Lender Payment, with parameters Loan Product and Requested Amount, that creates one row of the new table, a structure with components Lender Name and Monthly Payment. We will iterate a call to this BKM over the rows of Bankrates using the for..in..return operator. Each iteration will create one row of Payments by Lender, so at the end we will have a complete table.

The literal expression for Payments by Lender is

for product in Bankrates return Lender Payment(product, Requested Amount)

Here product is the range variable, meaning one row of Bankrates, a structure with four components as we saw earlier. Bankrates is the input list that we iterate over. The BKM Lender Payment is the return expression. Beginners are sometimes intimidated by this literal expression, so, as with if..then..else, DMN provides an iterator boxed expression that enters the for, in, and return expressions in separate cells.
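As a rough analogy, for..in..return behaves like a Python list comprehension. In the sketch below, lender_payment stands in for the BKM, and its payment calculation is a deliberate placeholder, not the real amortization formula; all table values are invented:

```python
# Sketch of FEEL's for..in..return as a Python list comprehension.
# lender_payment plays the role of the BKM Lender Payment.
bankrates = [
    {"lenderName": "Citibank", "ratePct": 6.5},
    {"lenderName": "AimLoan",  "ratePct": 6.25},
]
requested_amount = 400_000

def lender_payment(product: dict, amount: float) -> dict:
    # Creates one row of the output table. The interest-only figure here
    # is a placeholder, not the actual loan amortization formula.
    monthly_rate = product["ratePct"] / 100 / 12
    return {"Lender Name": product["lenderName"],
            "Monthly Payment": round(amount * monthly_rate, 2)}

# for product in Bankrates return Lender Payment(product, Requested Amount)
payments_by_lender = [lender_payment(p, requested_amount) for p in bankrates]
print(payments_by_lender)
```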

The BKM Lender Payment uses a context boxed expression with no final result box to create each row of the table. The context entry Monthly Payment invokes another BKM, Loan Amortization Formula, which calculates the value based on the adjusted loan amount, the interest rate, and fees.

Excel Formulas do not include an iteration function. Power Fx’s ForAll function provides iteration, but it is not available in Excel. To iterate an expression in Excel you are expected to fill down in the spreadsheet.

The FEEL operators some..in..satisfies and every..in..satisfies represent another type of iteration. The range variable and input list are the same as with for..in..return. But in these expressions the satisfies clause is a Boolean expression, and the iteration operator returns not a list but a simple Boolean value. The one with some returns true if any iteration returns true, and the one with every returns true only if all iterations return true.

For example, again using Bankrates,

some product in Bankrates satisfies product.pointsPct = 0 and product.fees = 0

returns true, while

every product in Bankrates satisfies product.pointsPct = 0 and product.fees = 0

returns false. The iterator boxed expression works with this operator as well.
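These two operators map naturally onto Python’s built-in any() and all(), shown here with invented table values:

```python
# FEEL's some..satisfies and every..satisfies correspond to Python's
# any() and all() over a generator expression. Values are invented.
bankrates = [
    {"lenderName": "Aurora Financial", "pointsPct": 0,   "fees": 0},
    {"lenderName": "Citibank",         "pointsPct": 1.0, "fees": 1995},
]

# some product in Bankrates satisfies ...
some_zero = any(p["pointsPct"] == 0 and p["fees"] == 0 for p in bankrates)

# every product in Bankrates satisfies ...
every_zero = all(p["pointsPct"] == 0 and p["fees"] == 0 for p in bankrates)

print(some_zero, every_zero)  # True False
```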

The bottom line is this: FEEL operators are key to its combination of expressive power and business-friendliness, surpassing that of Microsoft Excel Formulas. Modelers should not be intimidated by them. For detailed instruction and practice in using these and other DMN constructs, check out my DMN Method and Style training. You get 60-day use of Trisotech Decision Modeler and post-class certification at no additional cost.

Follow Bruce Silver on Method & Style.



Lookup Tables in DMN

By Bruce Silver

Read Time: 5 Minutes

Lookup tables are a common logic pattern in decision models. To model them, I have found that beginners naturally gravitate to decision tables, being the most familiar type of value expression. But decision tables are almost never the right way to go. One basic reason is that we generally want to be able to modify the table data without creating a new version of the decision model, and with decision tables you cannot do that. Another reason is that decision tables must be keyed in by hand, whereas normal data tables can be uploaded from Excel, stored in a cloud datastore, or submitted programmatically as JSON or XML.

The best way to model a lookup table is a filter expression on a FEEL data table. There are several ways to model the data table – as submitted input data, a cloud datastore, a zero-input decision, or a calculated decision. Each way has its advantages in certain circumstances. In this post we’ll look at all the possibilities.

As an example, suppose we have a table of available 30-year fixed rate home mortgage products, differing in the interest rate, points (an addition to the requested amount as a way to “buy down” the interest rate), and fixed fees. You can find this data on the web, and the rates change daily. In our decision service, we want to allow users to find the current rate for a particular lender, and in addition find the monthly payment for that lender, which depends on the requested loan amount. Finding the lender’s current rate is a basic lookup of unmodified external data. Finding the monthly payment using that lender requires additional calculation. Let’s look at some different ways to model this.

We can start with a table of lenders and rates, either keyed in or captured by web scraping. In Excel it looks like this:

When we import the Excel table into DMN, we get a FEEL table of type Collection of tBankrate, shown here:

Each row of the table, type tBankrate, has components matching the table columns. Designating a type as tPercent simply reminds us that the number value represents a percent, not a decimal.

Here is one way to model this, using a lookup of the unmodified external data, and then applying additional logic to the returned value.

We define the input data Bankrates as type Collection of tBankrate and lookup my rate – the one that applies to input data my lender. The lookup decision my rate uses a filter expression. A data table filter typically has the format

<table>[<Boolean expression of table columns>]

in which the filter, enclosed in square brackets, contains a Boolean expression. Here the table is Bankrates and the Boolean expression is lenderName = my lender. In other words, select the row for which column lenderName matches the input data my lender.

A filter always returns a list, even if it contains just one item. To extract the item from this list, we use a second form of a filter, in which the square brackets enclose an integer:

<list>[<integer expression>]

In this case, we know our data table just has a single entry for each lender, so we can extract the selected row from the first filter by appending the filter [1]. The result is no longer a list but a single row in the table, type tBankrate.

The decision my payment uses a BKM holding the Loan Amortization Formula, a complicated arithmetic expression involving the loan principal (p), interest rate (r), and number of payments (n), in this case 360.

Decision my payment invokes this BKM using the lookup result my rate. Input data loan amount is just the borrower’s requested amount, but the loan principal used in Loan Amortization Formula (parameter p) also includes the lender’s points and fees. Since pointsPct and ratePct in our data table are expressed as percent, we need to divide by 100 to get their decimal value used in the BKM formula.
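For reference, a common form of the fixed-rate amortization formula can be sketched in Python; the exact expression in the BKM may differ in detail, and the percent-to-decimal conversion mirrors the division by 100 described above:

```python
# A common form of the fixed-rate loan amortization formula:
#   payment = p * r * (1 + r)**n / ((1 + r)**n - 1)
# where p is the principal, r the *monthly* interest rate as a decimal,
# and n the number of payments (here 360). The BKM's exact expression
# may differ in detail.
def monthly_payment(p: float, annual_rate_pct: float, n: int = 360) -> float:
    r = annual_rate_pct / 100 / 12  # percent per year -> decimal per month
    return p * r * (1 + r) ** n / ((1 + r) ** n - 1)

# e.g. $400,000 at 6% over 360 payments
print(round(monthly_payment(400_000, 6.0), 2))  # ≈ 2398.20
```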

When we run it with my lender “Citibank” and loan amount $400,000, we get the result shown here.

That is one way to do it. Another way is to enrich the external data table with additional columns, such as the monthly payment for a given loan amount, and then perform the lookup on this enriched data table. In that case the data table is a decision, not input data.

Here the enriched table Payments by Bank has an additional column, payment, based on the input data loan amount. Adding a column to a table involves iteration over the table rows, each iteration generating a new row including the additional column. In the past I have typically used a context BKM with no final result box to generate each new row. But actually it is simpler to use a literal expression with the context put() function, as no BKM is required to generate the row, although we still need the Loan Amortization Formula. (Simpler for me, but the resulting literal expression is admittedly daunting, so I’ll show you an alternative boxed expression that breaks it into simpler pieces.)

context put(), with parameters context, keys, and value, appends components (named by keys) to an existing structure (context) and assigns their value. If keys includes an existing component of context, value overwrites the previous value. Here keys is the new column name “payment”, and value is calculated using the BKM Loan Amortization Formula. So, as a single literal expression, Payments by Bank looks like this:

Here we used literal invocation of the BKM instead of boxed invocation, and we applied the decimal() function to round the result.
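A rough Python analogy: context put() behaves like building a new dict with one extra (or overwritten) key. The row values and payment figure below are invented for the sketch:

```python
# Illustrative Python analogue of FEEL's context put(): produce a new
# context with an appended (or overwritten) component. Values invented.
row = {"lenderName": "Citibank", "ratePct": 6.5, "pointsPct": 1.0, "fees": 1995}

def context_put(context: dict, key: str, value) -> dict:
    # Returns a new context; an existing key's value is overwritten,
    # and the original context is left unchanged.
    return {**context, key: value}

enriched = context_put(row, "payment", 2661.21)  # payment figure is made up
print(enriched["payment"])
```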

Alternatively, we can use the iterator boxed expression instead of the literal for..in..return operator and invocation boxed expressions for the built-in functions decimal() and context put() as well as the BKM. With FEEL built-in functions you usually use literal invocation but you can use boxed invocation just as well.

Now my payment is a simple lookup of the enriched data table Payments by Bank, appending the [1] filter to extract the row and then .payment to extract the payment value for that row.

When we run it, we get the same result for Citibank, loan amount $400,000:

The enriched data table now allows more flexibility in the queries. For example, instead of finding the payment for a particular lender, you could use a filter expression to find the loan product(s) with the lowest monthly payment:

Payments by Bank[payment=min(Payments by Bank.payment)]

which returns a single record, AimLoan. Of course, you can also use the filter query to select a number of records meeting your criteria. For example,

Payments by Bank[payment < 2650]

will return records for AimLoan, AnnieMac, Commonwealth, and Consumer Direct.

Payments by Bank[pointsPct=0 and fees=0]

will return records for zero-points/zero-fee loan products: Aurora Financial, Commonwealth, and eLend.
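As a sketch, these enriched-table queries translate directly into Python comprehensions; the table values below are invented for illustration, not taken from the actual Bankrates data:

```python
# Illustrative Python analogues of the filter queries above.
# Table values are invented for this sketch.
payments_by_bank = [
    {"lenderName": "AimLoan",          "pointsPct": 0.5, "fees": 2495, "payment": 2601.50},
    {"lenderName": "Aurora Financial", "pointsPct": 0,   "fees": 0,    "payment": 2688.00},
    {"lenderName": "Citibank",         "pointsPct": 1.0, "fees": 1995, "payment": 2661.21},
]

# Payments by Bank[payment = min(Payments by Bank.payment)]
lowest = min(p["payment"] for p in payments_by_bank)
cheapest = [p for p in payments_by_bank if p["payment"] == lowest]

# Payments by Bank[payment < 2650]
under_2650 = [p for p in payments_by_bank if p["payment"] < 2650]

# Payments by Bank[pointsPct = 0 and fees = 0]
zero_cost = [p for p in payments_by_bank
             if p["pointsPct"] == 0 and p["fees"] == 0]

print(cheapest[0]["lenderName"], len(under_2650), len(zero_cost))
```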

Both of these methods require submitting the data table Bankrates at time of execution. Our example table was small, but in real projects the data table could be quite large, with thousands of rows. This is more of a problem for testing in the modeling environment, since with the deployed service the data is submitted programmatically as JSON or XML. But to simplify testing, there are a couple ways you can avoid having to input the data table each time.

You can make the data table a zero-input decision using a Relation boxed expression. On the Trisotech platform, you can populate the Relation by uploading from Excel. To run this you merely need to enter values for my lender and loan amount. You can do this in production as well, but remember, with a zero-input decision you cannot change the Bankrates values without versioning the model.

Alternatively, you can leave Bankrates as input data but bind it to a cloud datastore. Via an admin interface you can upload the Excel table into the datastore, where it is persisted as a FEEL table. So in the decision model, you don’t need to submit the table data on execution, and you can periodically update the Bankrates values without versioning the model. Icons on the input data in the DRD indicate its values are locked to the datastore.

Lookup tables using filter expressions are a basic pattern you will use all the time in DMN. For more information on using DMN in your organization’s decision automation projects, check out my DMN Method and Style training or my new book, DMN Method and Style 3rd edition, with DMN Cookbook.
