Can You Trust an LLM with Clinical Decisions? A Deterministic Approach Using DMN
Dr. John Svirbely, MD

Read Time: 5 Minutes

The Problem

Large language models (LLMs) are increasingly used to support clinical decision-making. The appeal is obvious: they offer a flexible natural language interface and can synthesize complex inputs quickly.

LLMs are inherently probabilistic and non-deterministic. That is a risk.

In regulated domains such as healthcare, this creates a fundamental problem. Clinical decisions must be explainable and auditable. An LLM alone cannot meet these requirements today.

The April Challenge from the Decision Management Community explores how LLMs can orchestrate decision services using a clinical scenario: prescribing antibiotics for acute sinusitis.

The interesting question is not whether this works. The real question is whether it can be trusted.

Clinical Caveat

This challenge defines a prescriptive clinical context, which introduces regulatory and liability concerns from the start.

There are also gaps in the provided challenge scenario. The Cockcroft-Gault equation is incomplete, omitting the adjustment typically applied for female patients. Some treatment choices, such as the use of fluoroquinolones in pediatric cases, are also questionable. The duration of therapy is fixed without clear justification. There are no cited references to validate the rules, and only a single test case is provided.

These limitations matter if the solution is interpreted as clinically authoritative.

However, the intent of the challenge is not to define best clinical practice. Its value lies in exploring how decisions can be structured, executed, and orchestrated while leveraging the strengths of a large language model.

Our Position

LLMs should not make clinical decisions.

They should act as an interface and interaction layer, while deterministic decision models define and govern the outcome.

This separation is not optional in regulated environments. It is the only way to maintain control.

In practice, this means:

  • The LLM handles user interactions
  • DMN models define the decisions
  • BPMN and CMMN models orchestrate execution when control matters

The result is a system that combines flexible interaction with deterministic control.

Solution Overview

Our solution implements three decision services using DMN:

1. Creatinine clearance calculation

2. Antibiotic selection and dosing

3. Drug interaction checking

Each service is modeled independently and exposed as a callable endpoint.

The LLM connects to these services through Model Context Protocol (MCP), allowing it to discover and invoke them as tools.

The key constraint is simple. The LLM does not decide. It calls models that make the decisions.

Decision Modeling Approach

We start with the simplest case: a single, explicit calculation.

The creatinine clearance service is straightforward: a single literal expression calculates the value. As noted above, the equation omits the adjustment for female patients specified in the original formula that was referenced.
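For reference, the full Cockcroft-Gault formula, including the 0.85 female adjustment that the challenge scenario omits, can be sketched as a plain function (the DMN version is a single FEEL literal expression; this Python sketch is for illustration only):

```python
def cockcroft_gault(age_years: int, weight_kg: float,
                    serum_creatinine_mg_dl: float, sex: str) -> float:
    """Estimated creatinine clearance (mL/min) per Cockcroft-Gault.

    CrCl = (140 - age) * weight / (72 * serum creatinine),
    multiplied by 0.85 for female patients.
    """
    crcl = (140 - age_years) * weight_kg / (72 * serum_creatinine_mg_dl)
    if sex.lower() == "female":
        crcl *= 0.85  # the adjustment omitted in the challenge scenario
    return round(crcl, 1)
```
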

Creatinine clearance decision service (DRD and calculation logic)

Decomposition and Traceability

The antibiotic prescription service is particularly interesting. It is decomposed into multiple intermediate decisions. Each step is modeled explicitly using decision tables. This ensures that every outcome can be traced back to its inputs and rules.

This is not just a modeling preference. It is what makes the solution reviewable.

If something goes wrong, you can trace the outcome back to a specific rule.

Antibiotic prescription decision requirements diagram

A single, monolithic rule set would be harder to validate and maintain. Breaking it down into selection, standard dosing, dose adjustment, and final assembly keeps each decision focused.
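The shape of this decomposition can be sketched in ordinary code. The rules below are placeholders chosen for illustration, not the challenge's actual decision tables; each function stands in for one focused DMN decision:

```python
# Illustrative decomposition of the prescription decision into focused steps.
# The rules are placeholders, not the challenge's actual decision tables.

def select_antibiotic(penicillin_allergy: bool) -> str:
    """Selection: one rule per condition, like a DMN decision table row."""
    return "doxycycline" if penicillin_allergy else "amoxicillin-clavulanate"

def standard_dose(antibiotic: str) -> str:
    """Standard dosing: a lookup keyed on the selected antibiotic."""
    doses = {"amoxicillin-clavulanate": "875 mg twice daily",
             "doxycycline": "100 mg twice daily"}
    return doses[antibiotic]

def adjust_dose(dose: str, crcl_ml_min: float) -> str:
    """Dose adjustment: modify the standard dose for renal impairment."""
    return f"{dose} (renally adjusted)" if crcl_ml_min < 30 else dose

def assemble_prescription(penicillin_allergy: bool, crcl_ml_min: float) -> str:
    """Final assembly: compose the intermediate decisions into one outcome."""
    antibiotic = select_antibiotic(penicillin_allergy)
    dose = adjust_dose(standard_dose(antibiotic), crcl_ml_min)
    return f"{antibiotic}, {dose}"
```

Because each step is its own function, a wrong outcome can be traced to exactly one rule, which is the point of the decomposition.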

Antibiotic and standard dose decision tables

For the creatinine clearance adjustment, a note was added to the result to explain the change. The decision model does not include the 1.4 mg/dL rule component, since no evidence for its inclusion was provided.

Adjusted dose and prescription composition

Reuse and Iteration

The final step is to check for drug interactions. The CSV file provided in the challenge was used to generate a decision table, with the antibiotic name and medication name as inputs and the interaction as the output. Here is an excerpt of the table:

Drug interaction decision table

Since a patient may take more than one medication, this logic was implemented in a Business Knowledge Model (BKM). The BKM is then called for each medication.

A boxed iteration (a DMN construct for iterating over a list) is used to build the list of interactions. Unlike traditional code, this iteration is expressed visually, making it easier to read and understand.

This is a simple pattern, but it is often missed. Instead of embedding loops in code, the logic remains visible and declarative.

BKM and boxed iteration for medication lists

Orchestration and Control

The challenge suggests that the LLM should orchestrate the process. This is not an approach we recommend for regulated environments.

A BPMN or CMMN model would provide deterministic orchestration, explicit sequencing or task selection, and full auditability.

However, to comply with the challenge constraints, the LLM is used as the orchestrator.

This highlights an important distinction:

  • LLM orchestration is flexible but not inherently governed
  • BPM+ model-driven orchestration is explicit and enforceable

This solution demonstrates the former while advocating for the latter.

Governing the LLM

The LLM is configured as an agent with access to the DMN services through MCP.

It is instructed to use those services and not to generate its own clinical conclusions.

LLM agent configured with MCP-accessible tools

While this constraint is implemented through instructions, it exposes a key limitation: instruction-based governance is not sufficient.

True governance requires:

  • Explicit control over which services can be invoked
  • Validation of inputs and outputs
  • Defined execution paths
  • Auditability of every step

These controls belong in models, not prompts. They are best implemented through BPM+ model-driven orchestration, not prompt engineering.
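What instruction-independent governance could look like in code is sketched below. The service name and schema are hypothetical; the point is that whitelisting, input validation, and audit logging happen outside the prompt, regardless of what the LLM was instructed to do:

```python
# Hypothetical sketch: governance enforced in code rather than in the prompt.
# Every invocation is checked against a whitelist and an input schema,
# and logged for audit, independent of the LLM's instructions.

import datetime

APPROVED_SERVICES = {
    # service name -> required input fields and their expected types
    "creatinine_clearance": {"age": int, "weight_kg": float, "serum_creatinine": float},
}

AUDIT_LOG: list[dict] = []

def governed_invoke(service: str, inputs: dict, execute) -> dict:
    """Validate, execute, and audit a decision-service call."""
    if service not in APPROVED_SERVICES:
        raise PermissionError(f"'{service}' is not an approved decision service")
    schema = APPROVED_SERVICES[service]
    for field, ftype in schema.items():
        if field not in inputs or not isinstance(inputs[field], ftype):
            raise ValueError(f"Invalid or missing input: {field}")
    result = execute(**inputs)
    AUDIT_LOG.append({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "service": service,
        "inputs": inputs,
        "result": result,
    })
    return result
```

In production these controls would live in the BPMN or CMMN layer rather than in ad hoc wrapper code, but the principle is the same: the execution path, not the prompt, decides what the agent can do.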

Tool approval and available decision services

Example Interaction

A user provides patient data in natural language. The LLM interprets the request and invokes the decision services in sequence.

First, creatinine clearance is calculated. Then antibiotic selection and dosing are determined. Finally, drug interactions are evaluated.

Each step is a deterministic service call. The results are combined into a final response.

The interaction feels conversational. The decisions are not.

LLM invoking deterministic decision services

Final response assembled from deterministic results

What This Demonstrates

This solution is not about solving sinusitis treatment. It is about demonstrating a pattern for combining LLMs with decision automation.

Two points matter more than anything else:

  • LLMs are effective interfaces, not decision authorities
  • Decision logic must remain explicit, testable, and governed

For DMN practitioners, this shows how decision services can be exposed and reused in an agent context.

For clinicians, the message is simpler. The system can remain explainable and auditable even when natural language interfaces are introduced.

What Comes Next

Moving from demonstration to production requires tightening the architecture:

  • Replace LLM orchestration with BPMN- or CMMN-controlled execution to enforce deterministic service usage instead of relying on instructions
  • Add validation, testing, and monitoring
  • Define escalation paths and human oversight

These are not optional in regulated environments.

Conclusion

LLMs can be useful in clinical systems, but only within the clear boundaries of a governed architecture.

The combination of DMN for decision logic and BPM+ standards-based integration provides a path forward. When decision logic is externalized and governed, organizations can take advantage of LLM flexibility while maintaining control, traceability, and trust.

The challenge highlights the opportunity. Our solution points to what is required to do it safely.
