
Below is a list of assumptions about the terminology services architecture that the OHIE-Terminology Services community is debating and refining.

This is a DRAFT list and is under construction.

 

The Data Terminology Service (DTS) is assumed to be the source of truth for all terminology, nomenclature, and coded data within the OpenHIE shared health record (SHR). It will play a central role in terminology standardization and implementation throughout the HIE. However, it is only one of several components that comprise the OpenHIE, and there are various ways the other components could interact with the DTS. Here are some areas that the OHIE-Terminology Services community has been discussing.

 

  1. Dumb vs. Smart Service
    - Is the DTS a dumb service (told what to do) or a smart service (sees the message and knows what to do)?

    It is assumed that the interoperability layer (IOL) will orchestrate much of the interaction with the DTS.
    Incoming data destined for the SHR will need to be validated against existing terminology standards in the DTS.
    Outgoing data destined for external consumers may need to be translated to other coding systems via the DTS.
    So the DTS may need to be consulted many times for an incoming message or an outgoing message. 
     

    For discussion purposes, consider that a typical patient encounter may have, say, 5 to 20 individual concepts, with each concept possibly having a coded answer, corresponding units, and a reference range that may need validation.

    In the DTS dumb service scenario, the IOL would make a call to DTS for each data item in the incoming message.
    The IOL would loop through these items and validate that terminology standardization had been achieved.
    In this scenario, the DTS only has to respond to simple queries for terminology and does not need to understand
    the complexities of the message being processed.
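
    As a concrete illustration of the dumb-service loop, here is a minimal sketch in Python, assuming a hypothetical DTS lookup endpoint (the URL, path, and helper names are illustrative assumptions, not a published OpenHIE API):

        import requests

        DTS_BASE_URL = "http://dts.example.org/api"  # placeholder URL, an assumption

        def validate_item(system, code):
            """Ask the DTS whether a single code exists in the given coding system."""
            resp = requests.get(f"{DTS_BASE_URL}/codesystems/{system}/concepts/{code}")
            return resp.status_code == 200

        def validate_message_items(items):
            """IOL-side loop: one DTS call per coded item parsed from the message."""
            failures = []
            for system, code in items:  # e.g. [("LOINC", "718-7"), ("UCUM", "g/dL")]
                if not validate_item(system, code):
                    failures.append((system, code))
            return failures  # empty list means terminology standardization was achieved

    Note that with 5 to 20 concepts per encounter, each possibly carrying units and a reference range, this loop can mean dozens of DTS round trips per message.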
     
    In the DTS smart service scenario, the incoming message could be passed off to the DTS, which could perform predefined validation protocols on the message.
    These predefined protocols would be based on message structure (HL7 v2, HL7 v3, etc.) and message type (ORU, CDA, CCD, etc.).
    An example protocol might be: if the message is an HL7 v2 ORU, validate OBX-3, OBX-5, OBX-6, and OBX-7. The validation needed would be the
    same as in the dumb service scenario, but the smart service would not rely on the IOL; instead it would do all the parsing necessary to validate the
    terminology in the message.
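
    For contrast, a sketch of the smart-service side, assuming the DTS receives the raw HL7 v2 ORU message itself (the parsing is naive pipe-splitting for illustration, and validate_item is the hypothetical helper from the previous sketch):

        def extract_obx_fields(raw_hl7v2):
            """Naive HL7 v2 parse: collect the fields an ORU protocol would check.
            OBX-3 = observation identifier, OBX-5 = value, OBX-6 = units,
            OBX-7 = reference range."""
            rows = []
            for segment in raw_hl7v2.splitlines():
                fields = segment.split("|")
                if fields[0] == "OBX" and len(fields) > 7:
                    rows.append({"obx3": fields[3], "obx5": fields[5],
                                 "obx6": fields[6], "obx7": fields[7]})
            return rows

        def validate_oru(raw_hl7v2):
            """Predefined protocol: if HL7 v2 ORU, validate OBX-3 (and similarly 5, 6, 7)."""
            for obx in extract_obx_fields(raw_hl7v2):
                code = obx["obx3"].split("^")[0]      # CE field: code^text^coding system
                if not validate_item("LOINC", code):  # coding system assumed for the sketch
                    return False
            return True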

    Much of the parsing and validating machinery needed for this would be the same as what the IOL requires.
    The question becomes: where should this terminology parsing and validation machinery reside?


  2. Approaches to Scalability
    - How to improve message throughput
    As larger countries implement OpenHIE, one important metric to consider is message throughput.
    Message volumes of greater than 250 messages/minute might be expected. This should influence the
    architectural questions that relate to message throughput.
    The following three approaches are being discussed.

    Applications access DTS in real time
     In this scenario, the DTS is fully integrated in the operational infrastructure of the HIE, and must be online and available
    any time other components are in operation.   Applications don't maintain a copy of standard terminology, but query the
    DTS in real time when needed.   This is easiest in terms of overall terminology standardization and deployment, but adds
    some complexity in terms of network connectivity and the need for real-time terminology updates to a live system.

    Applications use curated copy of terminology
     In this scenario, the DTS is still the source of truth for all terminology,  but it is not required to be online and available all the time.
    It would be an integral part of disseminating terminology throughout the enterprise by creating curated copies of terminology that
    would be consumed and incorporated by applications.  The two important applications that would use these curated copies
    would be the SHR and the Point-of-Care systems.

    One advantage of this curated copy approach is that network infrastructure is not as critical to operations, since Point-of-Care systems
    could continue without a network connection to the central system.   They could continue with the latest update they had received from
    the central system.  Another advantage of this approach is that updates by terminologists can be staged for deployment with less
    interference with the live system.

    Real-time access optimized for certain pipelines
    In this scenario, as with the real-time scenario, the DTS is an integrated component of the HIE that is always online and available. Most applications will interact with the TS directly. However, the pipelines in the HIE that have high-performance requirements are optimized by using cache servers and/or curated copies of the terminology (as in the second scenario). One example of such a high-performance pipeline is terminology validation for incoming encounters. One way to optimize this validation could be to use an in-memory cache (such as memcached) placed between the IOL and the TS, so that a validation query hits the cache first and only queries the TS in the event of a cache miss.

    Since terminology tends to be static, in the sense that the content changes periodically (hours, days, or even months) rather than multiple times per second, caching may be highly suitable. A downside to this approach, however, is that when a change does occur on the TS, the cache will be out of date for a certain period of time.
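
    A sketch of the cache-first lookup described above: in production this role could be played by memcached, but to keep the sketch self-contained it uses an in-process dictionary with a time-to-live (validate_item is the hypothetical TS call from the earlier sketch, and the TTL value is an assumption):

        import time

        CACHE_TTL_SECONDS = 3600  # how long a validated code stays fresh; an assumption
        _cache = {}               # stand-in for memcached: {(system, code): (result, expiry)}

        def cached_validate(system, code):
            """Cache-first terminology validation: hit the cache, fall back to the TS."""
            key = (system, code)
            hit = _cache.get(key)
            if hit is not None and hit[1] > time.time():
                return hit[0]                     # cache hit: no TS round trip
            result = validate_item(system, code)  # cache miss: query the TS
            _cache[key] = (result, time.time() + CACHE_TTL_SECONDS)
            return result

    The TTL bounds how long the cache can serve stale answers after a change on the TS, which is exactly the staleness trade-off noted above.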

  3. Change management
    - How to handle the life cycle of terms.
    - How to mark terms that are deprecated or superseded.
    - Process for continual periodic updates to common coding systems (ICD-9, ICD-10, LOINC, RxNorm, SNOMED CT, etc.).
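
    One way to represent this life cycle is a status field plus a pointer to the superseding concept; a minimal sketch in Python (the field names are assumptions for illustration, not an OpenHIE schema):

        from dataclasses import dataclass
        from typing import Optional

        @dataclass
        class Concept:
            """Illustrative concept record carrying life-cycle metadata."""
            code: str
            system: str                          # e.g. "LOINC", "SNOMED CT"
            display: str
            status: str = "active"               # "active" | "deprecated" | "superseded"
            superseded_by: Optional[str] = None  # code of the replacing concept, if any
            valid_from: Optional[str] = None     # effective dates support periodic
            valid_to: Optional[str] = None       # updates to the coding systems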

  4. Deployment
    - How terminology is disseminated throughout the enterprise
    Even if applications access DTS in real time, a mechanism is still needed for exporting a current snapshot of terminology.
    External systems may need to consume a current snapshot for various purposes.  Point-of-Care systems may need it to
    keep up to date.   External mapping applications may need to consume it to help automate the mapping process.
    The following two non-exclusive approaches are being discussed.

    Deployment via SFTP file transfer
    This approach requires minimal technology and minimal monitoring. SFTP sites would be set up where interested parties could
    pull the current terminology snapshot as created by the terminologist. It is assumed that larger sections of terminology would be
    deployed this way, and ad-hoc one-off queries would not be supported.

    Deployment via API call
    This approach requires a higher technological level from the requester, but offers more query options.  The DTS could respond
    to query requests for items like the following:
    - the date of the last update to the DTS
    - a list of all concepts that have been updated since a specific date
    - details for a battery-level concept and all its child elements
    - details for an order set and all its child elements
    - a list of all NSAID concepts
    - a list of all xxx concepts
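
    A sketch of what such calls might look like from the consumer side, assuming a hypothetical REST interface on the DTS (the paths and parameters are illustrative assumptions, not a published OpenHIE API):

        import requests

        DTS_BASE_URL = "http://dts.example.org/api"  # placeholder URL, an assumption

        def last_update_date():
            """The date of the last update to the DTS."""
            return requests.get(f"{DTS_BASE_URL}/metadata/last-update").json()

        def concepts_updated_since(iso_date):
            """All concepts updated since a specific date, e.g. '2014-01-01'."""
            return requests.get(f"{DTS_BASE_URL}/concepts",
                                params={"updatedSince": iso_date}).json()

        def concept_with_children(code):
            """Details for a battery-level or order-set concept and its child elements."""
            return requests.get(f"{DTS_BASE_URL}/concepts/{code}",
                                params={"includeChildren": "true"}).json()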

  5. SHR's persistence model
    - Storing a single code or multiple codes
    The OpenHIE SHR group has indicated that the SHR will store three kinds of data:
    coded data, text data, and text data with accompanying metadata (data about the text data).
    There have been discussions about the various ways to store coded data.

    Store only 1 cardinal code in SHR
     This approach assumes that only 1 cardinal code will be stored in the SHR. Other codes, such as the original local code in the
     message and other equivalence codes, would not be stored.

     Store both 1 cardinal code plus equivalence code(s) in SHR
    This approach assumes that the one cardinal code would be stored, along with other equivalence code(s) and also the
    delivered local code as received in the message. Storing multiple codes offers a couple of advantages. First, in the unlikely
    event that a local code was mapped in error, it is much easier to fix the incorrect mapping when the delivered local code
    is part of the data stored in the SHR. Second, queries are much easier when, say, the Ministry of Health uses one
    coding system while insurance companies use another. The data can be queried directly, without the need
    to employ crosswalk tables to arrive at the other coding system.
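
    A sketch of what a multi-code record might look like under this second approach (the field names are assumptions for illustration, not the SHR's actual persistence model):

        from dataclasses import dataclass, field
        from typing import List

        @dataclass
        class CodedValue:
            system: str   # e.g. "LOINC", "ICD-10", or a facility's local system
            code: str
            display: str

        @dataclass
        class StoredObservation:
            """One coded data item as the SHR might persist it."""
            cardinal: CodedValue  # the canonical reference code
            local: CodedValue     # delivered local code, kept so bad mappings stay fixable
            equivalents: List[CodedValue] = field(default_factory=list)  # e.g. MoH and insurer codes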

  6. Point of Care Systems
    Terminology mapping is a fundamental task that the TS is expected to perform. This service is available to any registry in the HIE, as well as to the IOL. Point-of-care systems, however, are not expected to perform concept mappings against the TS before submitting clinical encounters. It is presumed that power and internet connectivity will not always be reliable at these clinics and hospitals, and therefore data traffic should be minimised. However, the terminology used in the encounters will still need to be mapped to the appropriate reference terminology. There are several ways in which this mapping can take place:

    - The mapping could be initiated by the IOL when an encounter is sent by a PoC. This means that the TS would need to manage the PoC's interface terminology as well as the mappings to the reference terminology. The IOL takes the responsibility of providing the SHR with a normalised encounter.

    -- If the TS manages the interface terminology for the PoC systems, these systems can then use the TS to set up and maintain their own internal concept models (non-realtime).
    -- This may not be a suitable approach in a heterogeneous environment where several types of systems are used at the point of care, and especially if these systems aren't maintained by the same organisations (the PoC is a "black box"), since it would be challenging for the TS to maintain the various terminology sets.

    - The PoC itself takes on the burden of mapping the codes it sends to the IOL by storing the reference terminology mappings in its own data model. These mappings are informed by the TS. Implementers supporting these PoC systems would need methods for periodically querying the TS for code updates or exports of terminology spaces, as in the sketch below.
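
    A sketch of such a periodic pull from the PoC side, reusing the hypothetical concepts_updated_since and last_update_date calls from the deployment section (the checkpoint file and the local update hook are also assumptions):

        import json, os

        CHECKPOINT_FILE = "last_sync.json"  # PoC-local record of the last successful pull

        def sync_terminology():
            """Pull only the concepts changed since the last sync into the local model."""
            since = "1970-01-01"
            if os.path.exists(CHECKPOINT_FILE):
                with open(CHECKPOINT_FILE) as f:
                    since = json.load(f)["since"]
            for concept in concepts_updated_since(since):
                apply_to_local_model(concept)  # hypothetical PoC-side update hook
            with open(CHECKPOINT_FILE, "w") as f:
                json.dump({"since": last_update_date()}, f)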

    It is presumed that any outside access (including from PoC systems) will be routed through the IOL rather than to the TS directly, in order to ensure central security and auditing for the HIE.


