
Some thoughts on the subjects of policy and legislative framework...

There will need to be an overall policy that the IL "does not talk to strangers" – this is really a policy around a nation being a good steward of the personal health information of its citizens. Such a policy is operationalized via a number of different technology pieces (e.g. ATNA, OpenID, OAuth, etc.). But, this is also the operationalization of a set of policies that a country will need to have in place to support the secure, authorized sharing of protected health information (PHI) for the purposes of care delivery. These policies need to describe a number of aspects of sharing PHI; such as the following.

What is the consent framework for the sharing of PHI? This is where the idea of "opt-in" versus "opt-out" comes into play. Are applications allowed to share PHI up to the SHR by default (opt-out)... or does a patient need to explicitly provide informed consent before this can happen (opt-in)? From a nuts-and-bolts standpoint, opt-out is the right answer. But one of Canada's largest provinces actually went "opt-in" at first (and later had to change its legislation because it was just too hard to do it that way for a population of over 8 million people).

How does a patient withdraw their consent to have their information shared? Can they finely articulate this consent (for example, to apply it to only some of their information... or to only allow sharing with some of their providers)? Again... there are a lot of things that appear technically feasible but, in fact, are not "sociotechnically" feasible. In my experience, the right answer is: a patient can be "all in" or "all out". That is to say, it is operationally do-able to give a particular client ID a single big on/off switch governing the sharing of his/her PHI – a "consent directive" that indicates: do not share anything with anyone. Importantly, this does not affect the posting of content up to the SHR... it affects the way the IL responds to queries against that SHR. Operationally – the consent to collect cannot be withdrawn, only the consent to view. (Again... one Canadian province tried its hand at supporting the withdrawal of consent to collect, but it was a disaster and they had to fall back to the do-able case of only supporting the withdrawal of consent to share.)
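This split – posting to the SHR is never blocked, but query responses are filtered – can be sketched in a few lines of Python. This is a hypothetical illustration; the names (`consent_registry`, `store_document`, `query_documents`) are invented for this sketch and do not come from any real IL implementation:

```python
# Hypothetical sketch of an all-or-nothing consent directive in the IL.
# All names here are invented for illustration.

consent_registry = {}  # client ID -> True (share) / False (do not share)

def store_document(shr, client_id, document):
    """Consent to collect cannot be withdrawn: posting to the SHR always proceeds."""
    shr.setdefault(client_id, []).append(document)

def query_documents(shr, client_id):
    """Consent to view CAN be withdrawn: the IL filters its query responses."""
    if not consent_registry.get(client_id, True):  # default True = opt-out model
        return []  # the single big on/off switch: share nothing with anyone
    return shr.get(client_id, [])
```

Note that the default of `True` in the registry lookup encodes the opt-out model: sharing happens unless a consent directive has been recorded.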

What about the medico-legal rights of providers? If a person is brought into an ER and life-saving care is required to save them, can a provider "break the glass" (BTG) and override a patient's consent directive? Can they see the patient's PHI so that care can be provided? All Canadian provinces have a BTG capability built in. This is a policy-based willingness to err on the side of care rather than on the side of privacy. It is also a nod to the right of clinicians to have their professional choices informed by the information about a patient that is necessary to support decision-making.
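A BTG override can be sketched as follows (hypothetical Python; the function and field names are invented for illustration). The key design point is that the override is always honoured but always loudly audited:

```python
# Hypothetical break-the-glass (BTG) sketch; all names invented for illustration.

def query_with_btg(shr, consent, audit_log, client_id, provider_id, break_glass=False):
    if consent.get(client_id, True):
        return shr.get(client_id, [])        # normal, consented access
    if break_glass:
        # Err on the side of care: honour the override, but leave a loud audit trail.
        audit_log.append({"event": "BREAK_GLASS",
                          "provider": provider_id,
                          "client": client_id})
        return shr.get(client_id, [])
    return []                                # consent directive in force, no BTG
```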

How is the sharing of PHI governed? Who is allowed to see it? This is where role-based access control (RBAC) may be leveraged to enable "classes" of care providers to have access to PHI but prevent others from seeing private content. In practice, this must be done at a very coarse level... if at all. The reason to consider providing full access to ALL care providers is that there are many teammates in a care team and almost all of them will require access to the PHI. Where to draw the line of access/no-access is truly difficult... and in any healthcare system, the primary imperative is health and healthcare. This means it is ALWAYS better to err on the side of care provision rather than on the side of privacy. Such ideas affect what happens when the IL is trying to determine whether access should be granted or not granted and the RBAC logic isn't providing a clear indication (e.g. the provider class is ambiguous or the class's privileges are not defined in the RBAC logic tree). From a process control standpoint – this is either a "fail open" or a "fail closed" design. Accepting that information is crucial to care, and that the primary imperative is care provision, a "fail open" design is likely to be preferred. But this will have to be reflected in national policy.
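The fail-open versus fail-closed choice can be illustrated with a small Python sketch (the role names and the `ROLE_PERMISSIONS` table are invented for illustration):

```python
# Hypothetical coarse-grained RBAC decision point; all names invented.

ROLE_PERMISSIONS = {
    "clinician": {"read_phi"},     # coarse class: full access to PHI
    "registration_clerk": set(),   # coarse class: no PHI access
}

def access_decision(role, permission="read_phi", fail_open=True):
    perms = ROLE_PERMISSIONS.get(role)
    if perms is None:
        # Ambiguous provider class / privileges not defined in the RBAC tree:
        # this is exactly where the fail-open vs. fail-closed policy applies.
        return fail_open
    return permission in perms
```

The single `fail_open` default is the place where national policy gets encoded into the IL's behaviour.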

What is the role of auditing in the context of policy and policy enforcement? One of the illusions of IT systems is that access permissions can be very finely defined. In truth, such finely defined permissions make a system pretty much unusable and they are a nightmare to maintain over time. It is, however, very easy – both technically and practically – to log who has accessed what PHI in the audit trails. (It is much easier than in the paper world, that's for sure!) The use of audit logs to enforce policy is actually very powerful. Whereas preventing unauthorized or inappropriate access to PHI is really hard (because of the "care imperative"), catching unauthorized or inappropriate access is relatively easy. The same sort of "outlier" algorithms that so readily identify credit card fraud may be used to quickly find patterns of non-policy-adherent PHI access. The key is to then be very clear about the penalties and to ENFORCE them. If the penalty for a privacy breach is light, then it will happen a lot. If the penalty is swift and stiff, then privacy policies will be respected. There was a case, early in the eHealth implementation in Ontario (Canada), when a famous musician was in hospital. Within a few days, over a dozen hospital staff were disciplined (and dismissed, I think) for breach of privacy of the PHI in the hospital records. It sent exactly the right signal.
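The "outlier" idea can be illustrated with a toy Python sketch (the field names are invented; a real implementation would consume ATNA audit records and use more sophisticated anomaly detection). Here, a client record touched by an unusually large number of distinct providers – the famous-musician pattern – is flagged for review:

```python
# Toy outlier detection over an audit log; field names invented for illustration.
from collections import defaultdict

def flag_suspicious_clients(audit_log, max_distinct_providers=10):
    """Flag client records accessed by unusually many distinct providers."""
    providers_by_client = defaultdict(set)
    for event in audit_log:
        providers_by_client[event["client"]].add(event["provider"])
    return [client for client, providers in providers_by_client.items()
            if len(providers) > max_distinct_providers]
```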

How does the IL do its part in enforcing PHI policy and legislation? The "don't talk to strangers" imperative is operationalized, in part, by profiles such as ATNA. But what about the "trust network"; how does the policy enforcement reach back into the POS system? The simple answer is: it doesn't. That's why it is called a trust network; the POS systems themselves are trusted by the IL to do what they need to do to appropriately secure PHI and to only provide PHI access to folks who are supposed to access it – including PHI that might be delivered to the POS via its connection to the IL. The place where this gets "tricky" is in answering the question – "how was this trust established in the first place?". Here is where there needs to be a policy context within which a POS is granted access to the IL and a technical context within which we "make it go". For the trust to be genuine, POS systems will need to be conformance-tested and this testing will need to be rigorous enough that the trust relationship between the IL and the POS is warranted (e.g. the POS can send well-formed, standards-based messages; the POS has appropriate login capabilities, so the IL can trust the submitted transactions really are from who the POS purports they're from; etc.). This will be seen as a barrier by some – but it is the only way for the trust network to be trustworthy, and it should be part of national policy that this trust is a thing of value and must be invested in.
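The "does not talk to strangers" admission check can be sketched as follows (hypothetical Python; in ATNA terms this trust is carried by mutual-TLS node authentication, and the fingerprint registry below is a stand-in for that handshake plus the conformance-testing gate):

```python
# Hypothetical "don't talk to strangers" check at the IL edge.
# TRUSTED_POS_NODES and the fingerprints are invented for illustration;
# entries would only be added after a POS passes conformance testing.

TRUSTED_POS_NODES = {
    "aa:bb:cc:dd": "district-emr-01",  # conformance-tested POS system
}

def admit_connection(cert_fingerprint):
    pos_id = TRUSTED_POS_NODES.get(cert_fingerprint)
    if pos_id is None:
        raise PermissionError("unknown node: the IL does not talk to strangers")
    return pos_id
```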


Where can we find information on some of these concepts and ideas? Here are some sources:

WHO document (2012) on the "legal" aspects of eHealth:

epSOS (EU) document on the legal framework of eHealth and sharing PHI:

DfID (UK) white paper on policy/legislation "harmonization" issues for implementing eHealth in Africa:

PHI privacy legislation example (Ontario, Canada):

