Case Study

Dana-Farber Cancer Institute

Preemptively managing the side effects of cancer treatment through model-driven clinical decision support

Cancer patients face a myriad of distressing symptoms and side effects from their disease and treatments. However, current approaches to symptom management are often fragmented and inconsistent.

Dana-Farber Cancer Institute (DFCI) has launched an innovative Symptom Management Pathways initiative to preemptively manage the side effects of cancer treatment by leveraging digital technologies, DFCI expertise, and the latest clinical evidence. By standardizing care pathways for managing cancer-related symptoms, DFCI is enhancing patient outcomes, increasing patient and caregiver engagement, streamlining clinical workflows, and extending the impact of its expertise.

Central to this effort is DFCI’s partnership with Trisotech to integrate clinical decision support automation into providers’ existing Epic® EHR workflows. DFCI has developed symptom management pathways through standardized clinical decision support models and workflows that provide evidence-based recommendations to patients and providers at the point of care. This promotes the delivery of effective clinical, educational, and community-based interventions that help prevent, mitigate, and treat symptoms in a consistent way, ultimately improving patient outcomes, increasing satisfaction, streamlining operations, and reducing costs.

This case study exemplifies how digital transformation, through the strategic application of Trisotech’s Digital Enterprise Suite, can significantly improve cancer symptom treatment, setting a benchmark for other healthcare institutions to follow.

- Improved Patient Outcomes: allowing patients to manage their treatment
- Reduced Clinician Burden: without disrupting clinicians’ workflow
- Enhanced Symptom Management: reducing unnecessary emergency services utilization
- Scalable and Agile Solution: extensible to additional disease centers and symptoms



Orchestrating Generative AI in Business Process Models

By Dr. John Svirbely, MD

Read Time: 2 Minutes

Generative AI is spreading fast and constantly becoming more powerful. Its uses and roles in healthcare are still uncertain. Although it will be disruptive, it is unclear what it will change or what will be replaced as the technology evolves.

The use of Generative AI poses several challenges, at least for now. In some respects it behaves like a black box: it may be unable to cite sources for what it produces, making the reliability of its output hard to judge, and it can be hard to validate depending on how it is used. These factors may make doctors, patients, and regulators nervous about its use in a sensitive area like healthcare. If a claim of malpractice involves it, its opaque behavior may be hard to defend.

Generative AI and Business Process Models

A business process model can access Generative AI simply by adding a connector to a task, which is done by a simple drag and drop. Because it is now part of a process, you can control when and how it is called.

Since there may be several possible paths through the model, you can have different calls that are appropriate for each path. Orchestrating the output provides an opportunity to give an individualized solution for a specific situation. Orchestration of Generative AI can make it less of a black box.

Since the calls to Generative AI can be tightly constrained, and since you know exactly where it is being used and what the inputs are, the appropriateness of its output can be judged in context. This makes validation less daunting.
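The constrained-call pattern described above can be sketched in a few lines. This is a minimal illustration, not Trisotech’s API: the template names, the allow-list, and the `build_constrained_prompt` helper are all hypothetical, standing in for whatever the process task actually passes to the model on each path.

```python
# Hypothetical sketch: constraining a Generative AI call inside a process task.
# One template per process path, so each call is purpose-built and auditable.

PROMPT_TEMPLATES = {
    "anemia_confirmed": (
        "Explain to the patient that anemia of {severity} severity was found. "
        "Use plain language."
    ),
    "anemia_ruled_out": (
        "Explain to the patient that no anemia was found on this screening."
    ),
}

ALLOWED_FIELDS = {"severity"}  # only vetted decision outputs reach the model


def build_constrained_prompt(path: str, decision_outputs: dict) -> str:
    """Select the template for the current process path and fill it only
    with allow-listed fields, so inputs to the LLM are fully known."""
    template = PROMPT_TEMPLATES[path]  # an unknown path fails fast
    safe = {k: v for k, v in decision_outputs.items() if k in ALLOWED_FIELDS}
    return template.format(**safe)


prompt = build_constrained_prompt(
    "anemia_confirmed", {"severity": "moderate", "mrn": "12345"}
)
# the MRN is filtered out by the allow-list; only "severity" reaches the prompt
```

Because every prompt is assembled from a known template and a known set of fields, a reviewer can inspect exactly what the model was asked on any path, which is what makes the orchestrated call less of a black box.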

Illustrative Example

A common problem in healthcare is the need to communicate health information to patients. Not only may the patient and family not understand what the provider is saying, but also the provider may misunderstand the patient. The need to communicate better has created a need for access to human translators around the clock. This raises other problems, as the translator may not understand the nuances of medical terms. It can also be quite expensive since you need to have multiple translators on call.

Figure 1 shows a portion of a BPMN model for the diagnosis of anemia. A DMN decision model first determines whether a patient has anemia and, if so, its severity. It may be desirable to inform the patient quickly and easily about these findings. The problem of translation can be approached by taking the outputs of the decision and sending them as inputs to Generative AI (in this case OpenAI, indicated by the icon in the top left corner), along with the patient’s preferred language and education level. The Generative AI then takes these inputs and instructions and generates a letter tailored to the patient.

Figure 1

Generating narrative text is a strength of Generative AI. With known inputs and appropriate constraints, it can reproducibly generate a letter informing a patient of the diagnosis in language the patient can understand. Performance can be validated by periodic review of outputs by a suitably qualified person. This solves the problem simply, elegantly, and cost-effectively.
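As a sketch of the pattern in the example, the decision outputs plus the patient’s preferred language and education level can be assembled into a fixed request structure before any model is called. The field names and the `letter_request` helper are assumptions for illustration; the actual model call is omitted so only the fully known inputs are shown.

```python
# Illustrative sketch: decision outputs become fixed, reviewable inputs
# to a letter-generation task. Field names are assumptions, not DFCI's model.

def letter_request(diagnosis: str, severity: str,
                   language: str, reading_level: str) -> dict:
    """Build a chat-style request whose inputs are fully known, so the
    resulting letters can be periodically reviewed in context."""
    system = (
        "You write short letters informing patients of a diagnosis. "
        f"Write in {language} at a {reading_level} reading level. "
        "Do not add information beyond the inputs provided."
    )
    user = f"Diagnosis: {diagnosis}. Severity: {severity}."
    return {"messages": [{"role": "system", "content": system},
                         {"role": "user", "content": user}]}


req = letter_request("anemia", "mild", "Spanish", "6th-grade")
```

Keeping the instruction text in one place, rather than scattered across tasks, is also what makes the periodic review by a qualified person practical: the reviewer sees exactly the same inputs the model did.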



Going from Zero to Success using BPM+ for Healthcare

Part III: Going from Paper to Practice

By Dr. John Svirbely, MD

Read Time: 3 Minutes

Welcome to the third installment of this three-part series providing an overview of the resources and steps required to achieve success when automating your first clinical guideline using the BPM+ family of open standards on the Trisotech platform.

In Part I we discussed how long it takes to reach cruising speed in creating BPM+ visual models. In Part II, we discussed the critical step of grasping the knowledge presented in the guideline and standardizing your approach to the various pitfalls you may encounter in doing so. Now we will delve into the details of how to develop an automated guideline. While the Trisotech modeling tools provide low-code programming that is easily comprehended by novices, there are many details “under the hood” that need to be specified to achieve automation.

Stages of Development

The entire process of automating a guideline starts from a written guideline and proceeds through a sequence of stages to the final automated clinical model, as outlined in the following diagram. There is some flexibility in the process; however, it is not recommended to complete a stage without completing the preceding one.

Narrative Elicitation refers to an in-depth understanding of the guideline, as was discussed in Part II of this series.

Concept (or Notional) Model: Here you start to lay out what you have distilled from the guideline into the core concepts (or notions). The Trisotech Knowledge Entity Modeler (KEM) can be useful to build a standardized terminology and to lay out concept maps. You will want to identify key decisions and how information flows to achieve each goal.

Computational Independent Model: Once you have a rough idea of what you want to model, you can start building the models in DMN and BPMN. The more concrete your planning, the faster the building can proceed. Tasks include labeling elements, specifying data objects for input and output, and providing references. If you are building models just to document and train, then you may choose to stop at this level.

Shared Data Model: By now you should know what decisions you need and will have a good idea of what data is required. You will want to consolidate this data to a minimum. It is common to have several models using the same data inputs, but because they were developed at different times there may be some variability in how they are specified or used. You need to resolve any discrepancies in how they are defined or referenced. In addition, some data is easy to ask for but hard to get, so you may need to refine models to use data that is readily accessible. Finally, you need to know where the data is coming from and how to retrieve it. The various codes used for retrieving data (SNOMED, LOINC, ICD-10, RxNorm codes, value sets) need to be provided.
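The consolidation step above can be pictured as a small shared data dictionary. The structure and the `resolve_element` helper are illustrative assumptions; the LOINC code shown (718-7, Hemoglobin [Mass/volume] in Blood) is a published code, but any real shared model would be far larger and governed.

```python
# A minimal sketch of a shared data dictionary that maps the aliases used
# by individually developed models back to one canonical element definition.

SHARED_DATA_MODEL = {
    "hemoglobin": {
        "loinc": "718-7",          # Hemoglobin [Mass/volume] in Blood
        "aliases": {"hgb", "hb"},  # variants used by older models
        "unit": "g/dL",
    },
}


def resolve_element(name: str) -> str:
    """Resolve any model-specific alias to the canonical element,
    so all models reference the same definition and retrieval code."""
    key = name.lower()
    for canonical, spec in SHARED_DATA_MODEL.items():
        if key == canonical or key in spec["aliases"]:
            return canonical
    raise KeyError(f"unmapped data element: {name}")
```

Centralizing names, units, and retrieval codes this way is what resolves the discrepancies that creep in when several models are developed at different times.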

Platform Independent Model: During this stage you finally specify all of the fine details required for the models to execute. Every element of a model has an underlying structure and logic that needs to be specified. When this step is complete there should be a smooth execution of the models’ logic. You can release this model as an API and market it to clients. However, data mapping may be required since links to a specific data source have not been established. You will want to test your model now with your test cases.

Platform Specific Model: This stage requires system integration, where everything required to interact with the client institution is set. This is the stage where you will need EHR analysts to become involved. Once this is complete then the models should be fully automated and integrated. After testing they can be released to the end-users.

How Long Does It Take?

To give you some concrete numbers, here are some specifics about a collection of models that I developed for the Pain/Opioid LHS Learning Community (POLLC). It focuses on improving chronic pain management, referencing an 86-page guideline from the University of Michigan.

Complete modeling of the guideline required:

These models were taken to the Platform Independent stage; taking them to full automation is pending key additional resources.

It took me 3 months to produce these models while working part-time. Fully automating them will require an additional 3 months for model refinement, data connections, and testing. You should expect at least 6 person-months to completely automate a typical guideline. As you gain experience, the speed of development will improve. If you want to move faster, you will need to apply more resources; with multiple team members, each can specialize in specific tasks.

Some Recommendations

Here are some personal recommendations:

If you have read all 3 blogs in this series, you should have a good idea of how to automate a clinical guideline. While it is a lot of work, the benefits should far outweigh the costs.
