
Below is a list of assumptions about the terminology services architecture that the OHIE-Terminology Services community is debating and refining.

This is a DRAFT list and is under construction.

 

The OHIE Terminology Service (TS) is assumed to be the source of truth for all terminology, nomenclature, and coded data within the OpenHIE shared health record. It will play a central role in terminology standardization and implementation throughout the HIE. However, it is only one of the several components that comprise the OpenHIE, and there are various ways the other components could interact with the TS. Here are some areas that the OHIE-Terminology Services community has been discussing.

 

  1. TS Service Levels
    - Does the TS support only simple (unstructured) queries, or does it accept a structured message and potentially act on the message appropriately?

    It is assumed that the interoperability layer (IOL) will orchestrate much of the interaction with the TS.
    Incoming data destined for the SHR will need to be validated against existing terminology standards in the TS.
    Outgoing data destined for external consumers may need to be translated to other coding systems via the TS.
    So the TS may need to be consulted many times for a single incoming or outgoing message.
     

    For discussion purposes, consider that a typical patient encounter may contain, say, 5 to 20 individual concepts, with each
    concept possibly having a coded answer with corresponding units and a reference range that may need validation.
    Units and reference ranges are closely related to interface terminology rather than reference terminology, so where these
    reside could depend on which service model the TS uses.

    In an unstructured query service scenario, the IOL would make a call to the TS for each data item in the incoming message.
    The IOL would loop through these items and validate that terminology standardization had been achieved.
    In this scenario, the TS only has to respond to simple queries for terminology and does not need to understand
    the complexities of the message being processed.
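
    As a concrete illustration of this loop, the hedged sketch below shows an IOL-side routine that queries a TS lookup endpoint once per coded item in a message. The base URL, the endpoint path, the query parameters, and the response shape are all assumptions made for the example; OpenHIE does not define this interface.

```python
# Hypothetical sketch: IOL-side validation loop against a TS lookup API.
# The endpoint and response format are illustrative assumptions only.
import requests

TS_BASE_URL = "https://ts.example.org/api"  # placeholder TS address

def validate_coded_items(coded_items):
    """Check each (system, code) pair from a message against the TS."""
    failures = []
    for item in coded_items:
        resp = requests.get(
            f"{TS_BASE_URL}/codes/lookup",
            params={"system": item["system"], "code": item["code"]},
            timeout=5,
        )
        # Assume the TS answers 200 with {"active": true} for a known, active code.
        if resp.status_code != 200 or not resp.json().get("active", False):
            failures.append(item)
    return failures

# Example: coded items the IOL has extracted from an incoming message.
items = [
    {"system": "LOINC", "code": "718-7"},         # Hemoglobin
    {"system": "SNOMED CT", "code": "38341003"},  # Hypertension
]
invalid = validate_coded_items(items)
```

    Note that this approach costs one TS round trip per coded item, which is exactly the throughput concern taken up in section 2.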

    In a structured query service scenario, the incoming message could be passed off to the TS, which would perform predefined terminology validation protocols
    on the message. These predefined protocols would be based on message structure (HL7 v2, HL7 v3, etc.) and message type (ORU, CDA, CCD, etc.).
    An example protocol might be: if the message is an HL7 v2 ORU, validate OBX-3, OBX-5, OBX-6, and OBX-7. Where these protocols would reside will need to be
    determined. The terminology validation needed would be the same as in the unstructured query service scenario,
    but the structured service would not rely on the IOL and would instead do all the parsing necessary to validate the terminology in the message.
    The TS may also be required to append/integrate the standardized codes to the message and return the "normalized" message to the SHR.
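
    The sketch below shows what the example HL7 v2 ORU protocol above could look like. Plain string splitting is used only to keep the example self-contained; a real implementation would use a proper HL7 parser (e.g. the python-hl7 library), and the `lookup` callback is a stand-in for a TS query.

```python
def validate_obx_segments(message: str, lookup):
    """Apply the example protocol: check OBX-3, OBX-5, OBX-6 and OBX-7.

    `lookup(system, code)` stands in for a TS query and returns True
    if the code is known and active.
    """
    problems = []
    for segment in message.strip().split("\r"):
        fields = segment.split("|")
        if fields[0] != "OBX":
            continue
        fields += [""] * (8 - len(fields))  # pad short segments
        # OBX-3 (observation identifier) is coded as code^text^system.
        code, _, system = (fields[3].split("^") + ["", "", ""])[:3]
        if not lookup(system, code):
            problems.append(("OBX-3", fields[3]))
        # OBX-5 (value) needs terminology validation only when coded.
        if fields[2] in ("CE", "CWE"):
            vcode, _, vsystem = (fields[5].split("^") + ["", "", ""])[:3]
            if not lookup(vsystem, vcode):
                problems.append(("OBX-5", fields[5]))
        # OBX-6 (units) and OBX-7 (reference range) are interface-level
        # items; here we only verify that they are present.
        if not fields[6]:
            problems.append(("OBX-6", "missing units"))
        if not fields[7]:
            problems.append(("OBX-7", "missing reference range"))
    return problems

# Toy lookup table standing in for the TS.
known = {("LN", "718-7"), ("LN", "883-9")}
msg = "MSH|^~\\&|LAB\rOBX|1|NM|718-7^Hemoglobin^LN||13.2|g/dL|12-16"
print(validate_obx_segments(msg, lambda s, c: (s, c) in known))  # []
```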

    Much of the parsing and validation machinery needed for this is the same as what the IOL requires.
    The question becomes whether this terminology parsing and validation machinery should reside in the TS as well as in the IOL.


  2. Approaches to Scalability
    -How to improve message throughput
    As larger countries implement OpenHIE, one important metric to consider is message throughput.
    Message volumes greater than 250 messages/minute might be expected. This should influence the
    architectural questions that relate to message throughput.
    The following three approaches are being discussed.

    Applications access the TS in real time
     In this scenario, the TS is fully integrated in the operational infrastructure of the HIE, and must be online and available
    any time other components are in operation.   Applications don't maintain a copy of standard terminology, but query the
    TS in real time when needed.   This is easiest in terms of overall terminology standardization and deployment, but adds
    some complexity in terms of network connectivity and the need for real-time terminology updates to a live system.

    Applications use curated copy of terminology
     In this scenario, the TS is still the source of truth for all terminology,  but it is not required to be online and available all the time.
    It would be an integral part of disseminating terminology throughout the enterprise by creating curated copies of terminology that
    would be consumed and incorporated by applications.  The two important applications that would use these curated copies
    would be the SHR and the Point-of-Care systems.

    One advantage of this curated copy approach is that network infrastructure is not as critical to operations, since Point-of-Care systems
    could continue without a network connection to the central system.   They could continue with the latest update they had received from
    the central system.  Another advantage of this approach is that updates by terminologists can be staged for deployment with less
    interference with the live system.

    Real-time access optimized for certain pipelines
    In this scenario, as with the real-time scenario, the TS is an integrated component of the HIE that is always online and available. Most applications interact with the TS directly. However, the pipelines in the HIE that have high-performance requirements are optimized by using cache servers and/or curated copies of the terminology (as in the second scenario). One example of such a high-performance pipeline is terminology validation for incoming encounters. One way to optimize this validation could be to use an in-memory cache (such as memcached) placed between the IOL and the TS. A validation query would then hit the cache first for a code, and only query the TS in the event of a cache miss. Since terminology content tends to change only periodically (hours, days, or even months) rather than multiple times per second, caching may be highly suitable. A downside to this approach, however, is that when a change does occur on the TS, the cache will be out of date for a certain period of time.
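
    A minimal sketch of this cache-aside pattern follows. It uses a plain in-process dict with a TTL purely for illustration; a production deployment would use a shared cache such as memcached between the IOL and the TS. The TS query function and key format are assumptions.

```python
import time

CACHE_TTL_SECONDS = 3600   # terminology changes slowly, so a long TTL is reasonable
_cache = {}                # key -> (expiry_timestamp, value)

def query_ts(system, code):
    """Stand-in for a real (slower) TS lookup over the network."""
    return {"system": system, "code": code, "active": True}

def lookup_code(system, code):
    key = f"{system}|{code}"
    now = time.time()
    hit = _cache.get(key)
    if hit is not None and hit[0] > now:
        return hit[1]                              # cache hit: no TS round trip
    value = query_ts(system, code)                 # cache miss: ask the TS
    _cache[key] = (now + CACHE_TTL_SECONDS, value) # populate for next time
    return value

lookup_code("LOINC", "718-7")  # miss: queries the TS and fills the cache
lookup_code("LOINC", "718-7")  # hit: served from the cache
```

    The staleness downside noted above corresponds to the TTL window: after a TS update, entries either age out naturally or must be invalidated explicitly.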

  3. Change management
    -How to handle the life cycle of terms/codes.
    All standard Code Systems change over time. While most modern Code Systems no longer actually DELETE concepts/codes, many do change concept names and/or inactivate/retire codes. A retired code should not be used past its effective date. Thus a TS must maintain a history of all codes in a code system and be able to respond to queries based on the code's date or version. The types of queries used will typically vary according to the specific use cases of the HIE (or SHR). Validation of a new code entry, for example, would normally be performed against the most recent ("active") version of a Code System, but validation/conversion for a historical (longitudinal) query may depend on the date the code was originally entered into the system. It is generally not practical, or clinically correct, to convert historical codes en masse to their "modern" counterparts.
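
    The sketch below illustrates version-aware validation in this spirit. The data model (a list of dated status records per code) is an assumption for the example; real terminology servers model history in their own ways.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class StatusRecord:
    effective: date
    status: str  # e.g. "active" or "retired"

# Toy history: a code active from 2005, then retired in 2014.
history = {
    ("ICD-9-CM", "250.00"): [
        StatusRecord(date(2005, 1, 1), "active"),
        StatusRecord(date(2014, 10, 1), "retired"),
    ],
}

def status_as_of(system, code, as_of):
    """Return the status that was in force for a code on a given date."""
    current = None
    for rec in sorted(history.get((system, code), []), key=lambda r: r.effective):
        if rec.effective <= as_of:
            current = rec.status
    return current

# New entries validate against today's state; longitudinal queries use
# the date the code was originally recorded.
print(status_as_of("ICD-9-CM", "250.00", date(2013, 6, 1)))  # "active"
print(status_as_of("ICD-9-CM", "250.00", date(2020, 6, 1)))  # "retired"
```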

     -How to mark terms that are deprecated or superseded.
    The TS will typically maintain a Status attribute on codes/concepts to record their state. This Status will be associated with a date or version and will be updated when the Code System is updated as a result of a release from its SDO. In addition to updating the Code System history, these releases often invoke a workflow to update any existing mappings to (or from) the Code System (if such mappings are not supplied by the Code System developer). So-called "local" mappings are a good case in point. If any targets of a local mapping are retired, the workflow supports clinical review and curation of updates to the mapping to bring it in line with the new state of the Code System. The reverse process can occur with updates to the local terminology. Different TS implementations will typically address this workflow in different ways.
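
    As a small illustration of the retirement-driven part of this workflow, the hedged sketch below flags local mappings whose reference targets are no longer active, so a terminologist can review and remap them. The data shapes are assumptions for the example.

```python
local_to_reference = {
    # local code -> (reference system, reference code)
    "LAB-GLU": ("LOINC", "2345-7"),
    "DX-HTN-OLD": ("SNOMED CT", "999999999"),  # imagine this target was retired
}

retired_codes = {("SNOMED CT", "999999999")}

def mappings_needing_review(mappings, retired):
    """Flag local mappings whose reference targets are retired."""
    return [local for local, target in mappings.items() if target in retired]

print(mappings_needing_review(local_to_reference, retired_codes))
# ['DX-HTN-OLD'] -> queued for clinical review and curation
```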

 
    -Process for periodic updates to common coding systems (ICD-9, ICD-10, LOINC, RxNorm, SNOMED CT, etc.).
    The TS is typically updated via an external load process that is unique to the TS implementation. Input data files can be taken directly from an SDO distribution, or post-processed by an application that puts the varied source formats into a standard load file format appropriate for the TS. Update scheduling is usually driven more by HIE operational procedures and policies than by the (greatly varying) distribution cycles of the SDOs. In complex environments, loads may be cycled through separate QA, Testing, and Production TS platforms to ensure data integrity.
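
    A hedged sketch of such a post-processing step follows. It assumes a LOINC-style CSV as input and a generic pipe-delimited load file as output; both formats, and the column names used, are illustrative assumptions rather than a defined load interface.

```python
import csv

def build_load_file(sdo_csv_path, load_file_path, system_name):
    """Convert an SDO distribution CSV into a generic TS load file."""
    with open(sdo_csv_path, newline="") as src, open(load_file_path, "w") as dst:
        for row in csv.DictReader(src):
            # Map the SDO's column names onto the TS's assumed load schema:
            # system|code|preferred name|status
            dst.write("|".join([
                system_name,
                row["LOINC_NUM"],         # code
                row["LONG_COMMON_NAME"],  # preferred name
                row.get("STATUS", "ACTIVE"),
            ]) + "\n")

# build_load_file("Loinc.csv", "loinc_load.txt", "LOINC")
```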

  4. Deployment
    - How terminology is disseminated throughout enterprise
    Even if applications access the TS in real time, a mechanism is still needed for exporting a current snapshot of terminology.
    External systems may need to consume a current snapshot for various purposes. Point-of-Care systems may need it to
    keep up to date. External mapping applications may need to consume it to help automate the mapping process.
    The following two non-exclusive approaches are being discussed.

    Deployment via SFTP file transfer
    This approach requires minimal technology and minimal monitoring. SFTP sites would be set up where interested parties could
    pull the current terminology snapshot as created by the terminologist. It is assumed that larger sections of terminology would be
    deployed this way, and ad-hoc one-off queries would not be supported.
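
    A minimal sketch of the consumer side of this approach, using the paramiko library, is shown below. The host, credentials, and file paths are placeholders.

```python
import paramiko

def fetch_snapshot(host, username, password, remote_path, local_path):
    """Pull the published terminology snapshot from an SFTP site."""
    transport = paramiko.Transport((host, 22))
    try:
        transport.connect(username=username, password=password)
        sftp = paramiko.SFTPClient.from_transport(transport)
        sftp.get(remote_path, local_path)  # download the snapshot file
    finally:
        transport.close()

# fetch_snapshot("ts.example.org", "consumer", "secret",
#                "/exports/terminology_snapshot.csv",
#                "terminology_snapshot.csv")
```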

    Deployment via API call
    This approach requires a higher technological level from the requester, but offers more query options. The TS could respond
    to query requests for items like the following:
    - the date of the last update to the TS
    - a list of all concepts that have been updated since a specific date
    - details for a battery-level concept and all its child elements
    - details for an order set and all its child elements
    - a list of all NSAID concepts
    - a list of all xxx concepts
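
    The sketch below shows what a few of these queries could look like from the requester's side. The endpoints, parameters, and response formats are hypothetical; OpenHIE does not define this API.

```python
import requests

TS = "https://ts.example.org/api"  # placeholder base URL

# Date of the last update to the TS.
last_update = requests.get(f"{TS}/updates/latest", timeout=10).json()

# All concepts updated since a specific date.
changed = requests.get(
    f"{TS}/concepts", params={"updated_since": "2014-01-01"}, timeout=10
).json()

# Details for a concept and all its child elements
# (e.g. a battery or an order set).
panel = requests.get(
    f"{TS}/concepts/24331-1", params={"include": "children"}, timeout=10
).json()
```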

  5. SHR's persistence model
    - Storing a single code or multiple codes
    The OpenHIE SHR group has indicated that the SHR will store three kinds of data:
    coded data, text data, and text data with accompanying metadata (data about the text data).
    There have been discussions about the various ways to store coded data.

    Store only one reference code in the SHR
    This approach assumes that only one cardinal (reference) code will be stored in the SHR. Other codes, such as the original local code in the
    message and other equivalence codes, would not be stored.

    Store one reference code plus equivalence code(s) in the SHR
    This approach assumes that the one reference code would be stored, along with other equivalence code(s) and also the
    delivered local code as received in the message. Storing multiple codes offers a couple of advantages. First, in the unlikely
    event that a local code was mapped in error, it is much easier to fix the incorrect mapping when the delivered local code
    is part of the data stored in the SHR. Second, queries are much easier to perform when, say, the Ministry of Health uses one
    coding system while insurance companies use another. The data can be queried directly, without the need
    to employ crosswalk tables to arrive at the other coding system.

  In either model, in order to maintain clinical fidelity, the original ("verbatim") clinical entry must always be saved.
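
  A hedged sketch of the multiple-code persistence model, with the verbatim entry always retained, is shown below. The field names are assumptions for the example, not an OpenHIE schema.

```python
from dataclasses import dataclass, field

@dataclass
class Code:
    system: str
    code: str
    display: str = ""

@dataclass
class CodedEntry:
    verbatim_text: str    # the original clinical entry, always kept
    local_code: Code      # the code as delivered in the message
    reference_code: Code  # the single cardinal/reference code
    equivalents: list = field(default_factory=list)  # other coding systems

entry = CodedEntry(
    verbatim_text="Type 2 diabetes mellitus",
    local_code=Code("CLINIC-X", "DM2"),
    reference_code=Code("SNOMED CT", "44054006"),
    equivalents=[Code("ICD-10", "E11")],
)
```

  Under this model, a query in either the Ministry of Health's or an insurer's coding system becomes a direct filter on reference_code or equivalents, with no crosswalk step.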

  6. Point of Care Systems
    Terminology mapping is a fundamental task that the TS is expected to perform. This service is available to any registries in the HIE, as well as to the IOL. Point-of-Care systems, however, are not expected to perform concept mappings against the TS before submitting clinical encounters. It is presumed that power and internet connectivity will not always be reliable at these clinics and hospitals, and therefore data traffic should be minimized. However, the terminology used in the encounters will still need to be mapped to the appropriate reference terminology. There are several ways in which this mapping can take place:

    - The mapping could be initiated by the IOL when an encounter is sent by a PoC. This means that the TS would need to manage the PoC's interface terminology as well as the mappings to the reference terminology. The IOL takes the responsibility of providing the SHR with a normalized encounter.

    -- If the TS manages the interface terminology for the PoC systems, these systems can then use the TS to set up and maintain their own internal concept models (non-realtime).
    -- This may not be a suitable approach in a heterogeneous environment where several types of systems are used at the point of care, especially if these systems aren't maintained by the same organizations (the PoC is a "black box"), since it would be challenging for the TS to maintain the various terminology sets.

    - The PoC itself takes on the burden of mapping the codes it sends to the IOL by storing the reference terminology mappings in its own data model. These mappings are informed by the TS. Implementers supporting these PoC systems would need methods for periodically querying the TS for code updates or exports of terminology spaces (a sketch of such a periodic sync follows this list).
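
    The sketch below illustrates the third option: the PoC keeps reference-terminology mappings in its own data model and periodically asks the TS for changes. The endpoint and payload shapes are hypothetical.

```python
import requests

TS = "https://ts.example.org/api"  # placeholder; in practice routed via the IOL
local_mappings = {"DM2": ("SNOMED CT", "44054006")}  # local code -> reference
last_sync = "2014-01-01"

def sync_mappings():
    """Pull mapping changes since the last sync and apply them locally."""
    updates = requests.get(
        f"{TS}/mappings/changes", params={"since": last_sync}, timeout=10
    ).json()
    for change in updates:
        local_mappings[change["local_code"]] = (
            change["reference_system"], change["reference_code"]
        )
```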

    It is presumed that any outside access (including from PoC systems) will be routed through the IOL rather than to the TS directly, in order to ensure central security and auditing for the HIE.


