Hello! I’m Roy of Person Centered Tech. We know that you want to focus on your clients, so we provide articles, tools, and continuing education on how to best serve clients in the digital world.
(Sign up for other free articles addressing topics such as: telemental health, HIPAA, and practical technology tools!)
A common error in preparing a practice for HIPAA compliance — or just for protection from bad guys — is thinking only about how to prevent breaches from happening. A number of human cognitive biases contribute to this error, and it takes some forethought and experience to get past those biases and do the slightly harder thing: prepare for what you’ll do when you inevitably experience a security breach.
Security experts, and the HIPAA People themselves (that’s the Office for Civil Rights, or “OCR”), have noted many times that a safe and solid practice is one that is ready to respond effectively when a breach happens.
This is because, simply enough, breaches happen. Trying to ensure that they won’t is a fool’s errand. What works better is a combination of preventing the problem in the first place and also setting yourself up for success when it does happen.
Part of that success prep is knowing how HIPAA and your state view breaches and how they require you to handle breach notification. Notification means informing all interested parties when a breach occurs. We’ve never seen a law or standard that didn’t include informing affected clients when a breach occurs, but many also require that you inform some governmental body.
At the time of writing, 47 states and all US territories have their own breach notification rules. (Mintz Levin, 2016) The rules differ rather widely, but a general trend is the requirement to inform all affected clients and a government agency.
Most licensing boards have some form of breach notification rule. For example, they may require licensees to make a report to the board if records are permanently lost.
HIPAA has a rule called the Breach Notification Final Rule. We will describe it in detail below.
What Makes Something a Breach?
The answer is not always obvious. HIPAA’s definition of a breach is “…an impermissible use or disclosure… that compromises the security or privacy of the protected health information.”
But a breach is something that evolves, like so:
A breach starts out as an incident. An incident is any event that comes to your attention that could indicate that a breach occurred. It can be things like noticing that a client’s file folder has gone missing, having your computer or smartphone stolen, realizing that someone else has logged in to your email account, and the like.
The date on which an incident comes to your attention is important, because the breach notification clock starts counting down once you become aware of the incident.
An incident becomes a breach when either of the following happens:
- You discover that the incident was, in fact, a bona fide breach, or
- You are unable to prove that the incident is not a breach within some reasonable time period.
I want to emphasize item 2 above, because it indicates a very common error in thinking about security and compliance in our practices. When an incident occurs, you must demonstrate that it isn’t a breach. It’s not the other way around. No one has to prove that a breach happened once an incident has been discovered — the burden is on you. This will generally be true across state and licensing board laws, as well.
This fact has a great impact on how we prepare for security incidents. In the next section, we’re going to explore a few strategies that help us prevent the need for notification when breaches occur. All these strategies are made with the idea in mind that they have to provide us with the ability to demonstrate that a breach did not occur.
How Do I Demonstrate That a Breach Didn’t Occur?
The precise rules will vary depending on the specific law you’re dealing with. Some are more nuanced than others. A relatively consistent fact is that you’ll need to argue convincingly that the incident was not likely to have resulted in a compromise of the information involved in your breach.
Often, you will argue technical things. For example, you may argue that because of how you set up encryption and password policies for your lost computer, it’s not possible for the data on it to be breached.
Or your argument may be more circumstantial. E.g., you may argue that the file folder that was left out on a desk for 12 hours cannot have suffered a security breach because it could not have left the office suite during that time, no unauthorized people were in the office during that time, and it still had all the proper pieces of paper in it when the incident was discovered.
After an incident is discovered, we are allowed some amount of time to try to mitigate and investigate the incident. HIPAA gives us 60 days (that’s a long time!) before we have to make a final determination and, if necessary, report the breach. A lot of states give us less time than that, however. For example, California only gives businesses 15 days.
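To make that arithmetic concrete, here is a minimal Python sketch of a deadline tracker. The 60-day and 15-day windows are the figures mentioned above; the function and dictionary names are our own illustration, and nothing here is legal advice:

```python
from datetime import date, timedelta

# Illustrative notification windows, in days. The 60-day HIPAA window
# and the 15-day California example come from the discussion above;
# always confirm the current deadlines for your own jurisdiction.
NOTIFICATION_WINDOWS = {
    "HIPAA": 60,
    "California": 15,
}

def notification_deadlines(discovery_date):
    """Given the date an incident was discovered, return the last date
    by which each notification decision must be made."""
    return {
        jurisdiction: discovery_date + timedelta(days=days)
        for jurisdiction, days in NOTIFICATION_WINDOWS.items()
    }

deadlines = notification_deadlines(date(2024, 3, 1))
print(deadlines["HIPAA"])       # 2024-04-30
print(deadlines["California"])  # 2024-03-16
```

The clock starts at discovery, so the single most useful habit is recording that date the moment an incident comes to your attention; every later deadline follows from it.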
During this period of time, you might take actions to try to prevent a breach. For example, if your smartphone goes missing, you’ll use some of this time to try and track it down. If you’re able, you may use the remote wipe feature of the smartphone to wipe all of the PHI from it.
For another example: if it looks like a bad guy might have gotten into your email, you’ll use this time to change your email password and take any other available measures to lock any bad guys out. You’ll also use the time to get a technical expert to help you verify whether or not your email account was actually hacked or if it just looks like it was hacked.
To help us assess whether or not a breach has occurred, the HIPAA Breach Notification Final Rule requires us to use this 4-point assessment rubric. (Office of Civil Rights, n.d.) Note that this rubric does not always produce definitive answers. It just helps you make a judgment call, which you’ll need to be ready to defend if you’re ever challenged on it.
The HIPAA rubric has you document these pieces of info about the security incident:
1) The nature and extent of the protected health information involved, including the types of identifiers and the likelihood of re-identification.
So what information was misused or disclosed and can it be used to compromise a client’s privacy?
If an anonymous thief took your calendar book that contains only client initials, maybe you and your expert helper could decide that the likelihood of identifying those clients is low and there is no need for breach notification. If the calendar was taken by a client’s abusive spouse, however, we definitely have a breach (because the spouse can identify the client based on initials).
The kicker, though, is that you have to be able to determine for sure whether it was an anonymous thief or a client’s abuser. And you have to determine for sure that the calendar never found its way into anyone else’s hands. How would you discover such things? And how can you demonstrate that client initials are the only identifiers in the calendar book?
If you can’t verify what information was breached or which clients’ information was breached, you have to assume that the greatest reasonably possible breach occurred. As such, it is wise to have something in place that lets you know which clients’ information is kept where. Without that in place, you may end up having to report, out of an abundance of caution, large breaches that never actually occurred.
2) The unauthorized person who used the protected health information or to whom the disclosure was made
This speaks to the differences pointed out in item 1 between the anonymous thief and the abusive spouse. Another example would be a misdirected fax that ends up in the wrong doctor’s office. The unauthorized person in that case would be a doctor’s office staff member. That’s a lot less threatening to your PHI than an anonymous thief.
3) Whether the protected health information was actually acquired or viewed
A number of security incidents reflect a situation where someone who isn’t authorized might be able to view PHI. E.g. a folder left out on a desk, a lost laptop or smartphone, a computer screen that is visible to people in the waiting room, etc.
If you can demonstrate that no one actually acquired or viewed any PHI as part of your security incident, then you’re golden. No need to notify.
There are a couple ways to prepare for this assessment item ahead of time.
One is to set up ways for you to verify who accessed what information and when.
Keeping everything on cloud services is a great way to set yourself up for success on this assessment item. Think about two stories: in one story, you download all your emails onto your smartphone. One day, the smartphone gets stolen. How do you know whether or not the thief read the emails on your phone? You can’t. There’s no way.
Consider a second story: instead of downloading emails to your phone, you leave them up on the email server (in the “cloud”). Your smartphone app simply lets you view emails that are left up there, but doesn’t permanently download them. One day, your smartphone gets stolen. You can now simply go to your email service online and look at the access logs to see whether the thief has used the phone to access your emails through the cloud service. You can also lock the lost phone out of the email account and prevent the thief from accessing it in the future. The email service’s access logs then provide evidence that your emails were never viewed or acquired by an unauthorized party.
The second way to prepare for this assessment item is to take advantage of the safe harbor provision for full device encryption. HIPAA’s Breach Notification Final Rule includes an assumption that properly encrypted data is unviewable. (Office of Civil Rights, n.d.) That means that even if someone acquires the encrypted data, we can assume that the encryption keeps it protected from them, and so we can say that the data was never acquired or viewed. For details, see our article on using full device encryption to prevent the need for breach notification.
4) The extent to which the risk to the protected health information has been mitigated.
This assessment item reflects how well you did on mitigating the security incident. If you can demonstrate that you mitigated the incident enough to ensure that there is a low likelihood that any PHI was breached, you’ll put that demonstration here.
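If it helps to see the four factors side by side, the rubric above can be sketched as a simple worksheet. This is a hypothetical Python illustration: the class and field names are our own invention, and reducing each factor to yes/no understates the judgment involved in a real assessment:

```python
from dataclasses import dataclass

@dataclass
class IncidentAssessment:
    """A hypothetical worksheet for the four-factor breach assessment.
    The factor names mirror the rubric above; the True/False
    simplification is ours, since the real assessment is a judgment
    call you must be ready to defend."""
    identifiers_pose_low_risk: bool    # factor 1: nature/extent of the PHI
    recipient_poses_low_risk: bool     # factor 2: who received or used it
    phi_not_acquired_or_viewed: bool   # factor 3: actual acquisition/viewing
    risk_fully_mitigated: bool         # factor 4: mitigation of the risk

    def low_probability_of_compromise(self):
        # A conservative reading: every factor must weigh in your favor
        # before you conclude that notification is not required.
        return all([
            self.identifiers_pose_low_risk,
            self.recipient_poses_low_risk,
            self.phi_not_acquired_or_viewed,
            self.risk_fully_mitigated,
        ])

# Example: lost phone, but remote wipe succeeded and access logs show
# no PHI was ever viewed.
assessment = IncidentAssessment(True, True, True, True)
print(assessment.low_probability_of_compromise())  # True
```

We deliberately require all four factors to favor you before concluding that notification is unnecessary; since the burden of proof is on you, a tie should go toward notifying.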
You may or may not need to hire a specialist to help you do the 4-point assessment. Sometimes the results are obvious (either way). But if you aren’t qualified to judge, which is often the case for technical breaches, you should bring in an expert to help you make a decision.
It is always good to consult a knowledgeable lawyer in these situations, as well. We’ve seen a lot of breach stories here at Person Centered Tech, and the ones that involved an attorney generally went better than those that did not. Considering the costs that can accrue when practices botch their breach investigations and reporting, an attorney’s consultation is well worth the cost.
We become concerned, when we write articles like these, that they may create elevated levels of anxiety in readers. It is certainly more comforting to think about preventing problems and thus avoiding the anxiety of dealing with a breach.
We can tell you, however, that we can think of breaches as inevitable in the same way that we can think of parasuicidality among our clients as inevitable. No matter what population we work with, we always work with people in pain and some of them may find themselves wanting to self-harm. These situations are always stressful for us. But if we prepare ourselves, our practices, and our clients, we come out okay. The same can be said of security breaches.
So breathe, take your time, and get prepared. That’s the best anyone can ask of you!