...

The OHIE Terminology Service (TS) is assumed to be the source of truth for all terminology, nomenclature, and coded data within the OpenHIE shared health record. It will play a central role in terminology standardization and implementation throughout the HIE. However, it is only one of the several components that comprise OpenHIE, and there are various ways the other components could interact with the TS. Here are some areas that the OHIE Terminology Services community has been discussing.

 

Please note that the Google Doc below can be used to add comments/feedback for discussion. Access the live document here: https://docs.google.com/document/d/18V3o_hp4s-GNfR0HZD3OlIf-grmyf9G8I1Qh94ikBfI/edit?usp=drive_web
 

 

 

1. TS Service Levels

 

Does the TS support only simple (unstructured) queries, or does it accept a structured message and potentially act on the message appropriately?

  • It is assumed that the interoperability layer (IOL) will orchestrate much of the interaction with the TS.
  • Incoming data destined for the SHR will need to be validated against existing terminology standards in the TS.
  • Outgoing data destined for external consumers may need to be translated to other coding systems via the TS.
  • The TS may therefore need to be consulted many times for a single incoming or outgoing message.
     

For discussion purposes, consider that a typical patient encounter may have, say, 5 to 20 individual concepts, with each concept possibly having a coded answer with corresponding units and a reference range that may need validation. Units and reference ranges are closely related to interface terminology rather than reference terminology, so where these two reside could depend on which service model the TS uses.

 

In an unstructured query service scenario, the IOL would make a call to the TS for each data item in the incoming message. The IOL would loop through these items and validate that terminology standardization had been achieved. In this scenario, the TS only has to respond to simple queries for terminology and does not need to understand the complexities of the message being processed.
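
As a rough illustration of this loop, the sketch below assumes a hypothetical TS REST endpoint that answers a simple "does this code exist and is it active?" query per item; the base URL, path, and response fields are assumptions for discussion, not an agreed OpenHIE interface.

    # Sketch of IOL-side validation in the unstructured query scenario (Python).
    # The TS only answers simple per-code lookups; the IOL owns all message parsing.
    # The endpoint path and response fields below are illustrative assumptions.
    import requests

    TS_BASE_URL = "https://ts.example.org/api"  # hypothetical TS address

    def code_is_valid(system: str, code: str) -> bool:
        """Ask the TS whether a single code exists and is active in a code system."""
        resp = requests.get(f"{TS_BASE_URL}/codesystems/{system}/codes/{code}", timeout=5)
        return resp.status_code == 200 and resp.json().get("status") == "active"

    def validate_encounter(coded_items):
        """Loop over the 5-20 coded items the IOL has extracted from a message."""
        return [item for item in coded_items
                if not code_is_valid(item["system"], item["code"])]

    # Example items the IOL might have parsed out of an incoming encounter:
    items = [{"system": "LOINC", "code": "718-7"},         # Hemoglobin
             {"system": "SNOMED CT", "code": "38341003"}]  # Hypertensive disorder
    # validate_encounter(items) returns the items that failed validation.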

 

In a structured query service scenario, the incoming message could be passed off to the TS, which could perform predefined terminology validation protocols on the message. These predefined protocols would be based on message structure (HL7 v2, HL7 v3, etc) and message type (ORU, CDA, CCD, etc).

  

An example protocol might be: if it is an HL7 v2 ORU message, validate OBX-3, OBX-5, OBX-6, and OBX-7. Where these protocols would reside will need to be determined. The terminology validation needed would be the same as in the unstructured query service scenario, but the structured service would not rely on the IOL; instead it would do all the parsing necessary to validate the terminology in the message. The TS may also be required to append/integrate the standardized codes into the message and return the "normalized" message to the SHR. Much of the parsing and validation machinery needed for this is the same as what the IOL requires, so the question becomes whether this terminology parsing and validation machinery should reside in the TS as well as the SHR.
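
The following sketch illustrates how such a predefined protocol could be represented and applied by a structured service, using the HL7 v2 ORU example above. The protocol table, the field parsing, and the stand-in lookup are illustrative assumptions, not a specification.

    # Sketch of a structured-service validation protocol (Python).
    # The protocol table is keyed by message structure and type; here the TS
    # itself parses the message and checks the listed fields.  All names are
    # illustrative assumptions.
    VALIDATION_PROTOCOLS = {
        ("HL7v2", "ORU"): ["OBX-3", "OBX-5", "OBX-6", "OBX-7"],
        # ("HL7v3", "CDA"): [...], etc.
    }

    def ts_lookup(code: str) -> bool:
        """Stand-in for the same simple terminology lookup used in the
        unstructured scenario; a real TS would consult its code systems."""
        return code != ""  # placeholder logic only

    def validate_oru(raw_message: str) -> list:
        """Validate the coded OBX fields of an HL7 v2 ORU message per the protocol."""
        problems = []
        for segment in raw_message.splitlines():
            fields = segment.split("|")
            if fields[0] != "OBX":
                continue
            for spec in VALIDATION_PROTOCOLS[("HL7v2", "ORU")]:
                index = int(spec.split("-")[1])  # e.g. "OBX-3" -> field 3
                value = fields[index] if index < len(fields) else ""
                if not ts_lookup(value.split("^")[0]):
                    problems.append(f"{spec}: '{value}' failed terminology check")
        return problems

    oru = "OBX|1|NM|718-7^Hemoglobin^LN||13.5|g/dL^g/dL^UCUM|12-16|N"
    # validate_oru(oru) returns an empty list if all checked fields resolve.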

 

2. Approaches to Scalability

How to improve message throughput

As larger countries implement OpenHIE, one important metric to consider is message throughput. Message volumes of > 250 messages/minute might be expected. This should influence the architectural questions that relate to message throughput.

 

The following two approaches are being discussed:

Applications access the TS in real time

In this scenario, the TS is fully integrated in the operational infrastructure of the HIE, and must be online and available any time other components are in operation. Applications don't maintain a copy of standard terminology, but query the TS in real time when needed. This is easiest in terms of overall terminology standardization and deployment, but adds some complexity in terms of network connectivity and the need for real-time terminology updates to a live system.

Applications use a curated copy of terminology

In this scenario, the TS is still the source of truth for all terminology, but it is not required to be online and available all the time. The TS would still be an integral part of disseminating terminology throughout the enterprise by creating curated copies of terminology for applications to consume and incorporate. The two important applications that would use these curated copies are the SHR and the point-of-care systems. One advantage of this curated-copy approach is that network infrastructure is not as critical to operations, since point-of-care systems could continue without a network connection to the central system, working from the latest update they had received. Another advantage is that updates by terminologists can be staged for deployment with less interference with the live system.

 

Real-time access optimized for certain pipelines

In this scenario, as with the real-time scenario, the TS is an integrated component of the HIE that is always online and available. Most applications interact with the TS directly; however, the pipelines in the HIE that have high-performance requirements are optimized by using cache servers and/or curated copies of the terminology (as in the second scenario). One example of such a high-performance pipeline is terminology validation for incoming encounters. One way to optimize this validation could be to place an in-memory cache (such as memcached) between the IOL and the TS. A validation query would then hit the cache first for a code, and only query the TS in the event of a cache miss. Since terminology tends to be static (in the sense that the content doesn't change multiple times per second, but rather periodically, over hours, days, or even months), caching may be highly suitable. A downside to this approach, however, is that when a change does occur on the TS, the cache will be out of date for a certain period of time.
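
A minimal sketch of this cache-aside pattern is shown below, using an in-process dictionary with a time-to-live in place of memcached; in a real deployment the cache and the TS lookup would both be network services, and the names and TTL value are assumptions.

    # Sketch of cache-aside terminology validation between the IOL and the TS (Python).
    # A dict with a TTL stands in for memcached; names and values are illustrative.
    import time

    CACHE_TTL_SECONDS = 3600   # assumed freshness window; tune to update frequency
    _cache = {}                # (system, code) -> (is_valid, stored_at)

    def query_ts(system: str, code: str) -> bool:
        """Placeholder for the real (slower) network call to the TS."""
        return True

    def is_valid_code(system: str, code: str) -> bool:
        key = (system, code)
        hit = _cache.get(key)
        if hit is not None:
            is_valid, stored_at = hit
            if time.time() - stored_at < CACHE_TTL_SECONDS:
                return is_valid                  # cache hit: no TS round trip
        is_valid = query_ts(system, code)        # cache miss: ask the TS
        _cache[key] = (is_valid, time.time())    # remember the answer
        return is_valid

    # Note: after a change on the TS, cached answers may be stale for up to
    # CACHE_TTL_SECONDS -- the trade-off described above.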

  

3. Change management

 How to handle life cycle of terms/codes

All standard Code Systems change over time. While most modern Code Systems no longer actually DELETE concepts/codes, many do change concept names and/or inactivate/retire codes. A retired code should not be used past its effective date. Thus the TS must maintain a history of all codes in a code system and be able to respond to queries based on the code's date or version. The types of queries used will typically vary according to the specific use cases of the HIE (or SHR). Validation of a new code entry, for example, would normally be performed against the most recent ("active") version of a Code System, but validation/conversion for a historical (longitudinal) query may depend on the date the code was originally entered into the system. It is generally not practical, or clinically correct, to convert historical codes en masse to their "modern" counterparts.
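
As a small illustration of such a date/version-aware query, the sketch below keeps a status history per code and resolves the status that was in force on a given date; the data shapes and the sample code system are invented for illustration.

    # Sketch of a version-aware status lookup (Python).  The TS keeps the full
    # status history of each code and answers "what was this code's status on
    # date X?".  The history entries below are invented for illustration.
    from datetime import date

    # History for one code: list of (effective_date, status), oldest first.
    CODE_HISTORY = {
        ("DEMO-CS", "1234"): [
            (date(2005, 1, 1), "active"),
            (date(2014, 7, 1), "retired"),
        ],
    }

    def status_as_of(system: str, code: str, as_of: date):
        """Return the code's status on a given date, or None if it did not exist."""
        status = None
        for effective, state in CODE_HISTORY.get((system, code), []):
            if effective <= as_of:
                status = state
        return status

    # Validating a new entry uses today's date; a longitudinal query uses the
    # date the code was originally recorded.
    # status_as_of("DEMO-CS", "1234", date(2010, 6, 1))  -> "active"
    # status_as_of("DEMO-CS", "1234", date.today())      -> "retired"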

How to mark terms that are deprecated or superseded

The TS will typically maintain a Status attribute on codes/concepts to determine their state. This Status will be associated with a date or version and will be updated when the Code System is updated as a result of a release from its SDO. In addition to updating the Code System history, these releases often invoke a workflow to update any existing mappings to (or from) the Code System (if such mappings are not supplied by the Code System developer). So-called "local" mappings are a good case in point: if any targets of a local mapping are retired, the workflow supports clinical review and curation of updates to the mapping to bring it in line with the new state of the Code System. The reverse process can occur with updates to the local terminology. Different TS implementations will typically address this workflow in different ways.
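
One way the retirement-driven part of this workflow might be expressed is sketched below: after a new release is loaded, local mappings whose targets have been retired are flagged for clinical review. The record shapes, field names, and codes are assumptions for illustration.

    # Sketch of the post-release mapping review step (Python): flag local
    # mappings whose targets were retired in the new Code System release so
    # terminologists can re-map them.  All structures here are assumptions.
    local_mappings = [
        {"local_code": "LAB-HGB", "target_system": "LOINC",
         "target_code": "718-7", "status": "current"},
        {"local_code": "LAB-XYZ", "target_system": "DEMO-CS",
         "target_code": "1234", "status": "current"},
    ]

    # Produced by the load process when the new release is applied.
    retired_in_new_release = {("DEMO-CS", "1234")}

    def flag_mappings_for_review(mappings, retired):
        """Mark mappings whose targets were retired and queue them for review."""
        review_queue = []
        for mapping in mappings:
            if (mapping["target_system"], mapping["target_code"]) in retired:
                mapping["status"] = "needs-review"
                review_queue.append(mapping)
        return review_queue

    # flag_mappings_for_review(local_mappings, retired_in_new_release)
    # leaves the first mapping untouched and queues the second for curation.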

Process for continual periodic updates to common coding systems (ICD-9, ICD-10, LOINC, RxNorm, SNOMED CT, etc.)

The TS is typically updated via an external load process that is unique to the TS implementation. Input data files can be taken directly from an SDO distribution, or post-processed by an application that converts the varied source formats into a standard load file format appropriate for the TS. Update scheduling is usually driven more by HIE operational procedures and policies than by the (greatly varying) distribution cycles of the SDOs. In complex environments, loads may be cycled through separate Q/A, Testing, and Production TS platforms to ensure data integrity.

  

4. Deployment

How terminology is disseminated throughout the enterprise

Even if applications access the TS in real time, a mechanism is still needed for exporting a current snapshot of terminology. External systems may need to consume a current snapshot for various purposes: point-of-care systems may need it to stay up to date, and external mapping applications may need to consume it to help automate the mapping process.

The following two non-exclusive approaches are being discussed:

Deployment via SFTP file transfer

This approach requires minimal technology and minimal monitoring. SFTP sites would be set up where interested parties could pull the current terminology snapshot as created by the terminologist. It is assumed that larger sections of terminology would be deployed this way; ad-hoc one-off queries would not be considered.

Deployment via API call

This approach requires a higher technological level from the requester, but offers more query options. The TS could respond to query requests for items like the following (a sketch of such calls follows the list):

  • date of last update to the TS
  • a list of all concepts that have been updated since a specific date
  • details for a battery-level concept and all its child elements
  • details for an order set and all its child elements
  • a list of all NSAID concepts
  • a list of all xxx concepts
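
The sketch below shows how a few of the query types listed above could look as REST-style calls. The endpoint paths and parameters are assumptions made for illustration; they do not describe an existing TS API.

    # Sketch of API-based deployment queries (Python).  Endpoint paths and
    # parameters are illustrative assumptions, not an existing TS interface.
    import requests

    TS_BASE_URL = "https://ts.example.org/api"  # hypothetical TS address

    def last_update_date():
        return requests.get(f"{TS_BASE_URL}/metadata/last-update", timeout=10).json()

    def concepts_updated_since(iso_date: str):
        return requests.get(f"{TS_BASE_URL}/concepts",
                            params={"updatedSince": iso_date}, timeout=30).json()

    def concept_with_children(concept_id: str):
        """Covers the battery-level and order-set cases: a parent plus its members."""
        return requests.get(f"{TS_BASE_URL}/concepts/{concept_id}",
                            params={"includeChildren": "true"}, timeout=30).json()

    def concepts_in_class(concept_class: str):
        """Covers 'all NSAID concepts'-style queries."""
        return requests.get(f"{TS_BASE_URL}/concepts",
                            params={"class": concept_class}, timeout=30).json()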

  

5. SHR's persistence model

Storing a single code or multiple codes

   

The OpenHIE SHR group has indicated that the SHR will store three kinds of data: coded data, text data, and text data with accompanying metadata (data about the text data). There have been discussions about the various ways to store coded data:

Store only one reference code in the SHR

This approach assumes that only one cardinal code would be stored in the SHR. Other codes, such as the original local code in the message and other equivalence codes, would not be stored.

...

 

...

  

In either of these models, in order to maintain clinical fidelity, the original ("verbatim") clinical entry must always be saved.
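
To make the discussion concrete, the sketch below shows one possible record shape that keeps a single reference code while always preserving the verbatim original entry; a multiple-code model would simply populate the extra equivalences. Field names are assumptions, not an agreed SHR data model.

    # Sketch of one possible shape for a coded entry in the SHR (Python).
    # Field names are illustrative assumptions, not an agreed OpenHIE model.
    from dataclasses import dataclass, field

    @dataclass
    class CodedEntry:
        verbatim_text: str        # the original clinical entry, always preserved
        original_system: str      # local/interface code system of the PoC
        original_code: str        # the code as received in the message
        reference_system: str     # reference terminology, e.g. LOINC
        reference_code: str       # the single cardinal code stored in the SHR
        # In a multiple-code model, further equivalences could be kept as well:
        other_codes: list = field(default_factory=list)

    entry = CodedEntry(
        verbatim_text="Hemoglobin 13.5 g/dL",
        original_system="LOCAL-LAB",
        original_code="HGB",
        reference_system="LOINC",
        reference_code="718-7",
    )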

 

6. Terminology Mapping

Terminology mapping is a fundamental task that the TS is expected to perform. This service is available to any registry in the HIE, as well as to the IOL. Point-of-care systems, however, are not expected to perform concept mappings against the TS before submitting clinical encounters: it is presumed that power and internet connectivity will not always be reliable at clinics and hospitals, and therefore data traffic should be minimized. However, the terminology used in the encounters will still need to be mapped to the appropriate reference terminology. There are several ways in which this mapping can take place:

  

1.   The mapping could be initiated by the IOL when an encounter is sent by a PoC system. This means that the TS would need to manage the PoC's interface terminology as well as the mappings to the reference terminology. The IOL takes responsibility for providing the SHR with a normalized encounter (see the sketch below).

2.   If the TS manages the interface terminology for the PoC systems, these systems can then use the TS to set up and maintain their own internal concept models (non-real-time). This may not be a suitable approach in a heterogeneous environment where several types of systems are used at the point of care, especially if these systems aren't maintained by the same organizations (the PoC is a "black box"), since it would be challenging for the TS to maintain the various terminology sets.
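
A sketch of option 1 follows: the IOL asks the TS to map each local (interface) code in a PoC encounter to its reference code, then forwards the normalized encounter to the SHR. All endpoint paths and field names are assumptions for illustration.

    # Sketch of the IOL-initiated mapping flow from option 1 (Python).
    # Endpoint paths and field names are illustrative assumptions.
    import requests

    TS_BASE_URL = "https://ts.example.org/api"    # hypothetical TS address
    SHR_BASE_URL = "https://shr.example.org/api"  # hypothetical SHR address

    def map_to_reference(source_system: str, source_code: str) -> dict:
        """Ask the TS for the reference-terminology equivalent of a local code."""
        resp = requests.get(f"{TS_BASE_URL}/mappings",
                            params={"sourceSystem": source_system,
                                    "sourceCode": source_code}, timeout=10)
        resp.raise_for_status()
        return resp.json()  # assumed to contain targetSystem / targetCode

    def normalize_and_submit(encounter: dict) -> None:
        """Replace local codes with reference codes, keeping the originals."""
        for obs in encounter["observations"]:
            mapping = map_to_reference(obs["system"], obs["code"])
            obs["originalSystem"], obs["originalCode"] = obs["system"], obs["code"]
            obs["system"], obs["code"] = mapping["targetSystem"], mapping["targetCode"]
        requests.post(f"{SHR_BASE_URL}/encounters", json=encounter, timeout=10)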

...