SCGhealth Blog


We Fired Amazon's Alexa

Wednesday, February 21, 2018

By Clay Dubberly, Intern 

Amazon’s Alexa is drawing criticism from the healthcare industry, not because of a design error, but because of its passive listening capability. That function led Jennifer Searfoss, CEO of SCG Health, to ban Alexa from the company’s premises.

Alexa is an “intelligent personal assistant” capable of voice interaction, music playback, making to-do lists, setting alarms, streaming podcasts, playing audiobooks, and offering other real-time information.

Alexa works by listening for its wake word (its name), which cues it to analyze the command that follows; it then listens and responds to everything it hears afterward. You can ask it about the weather or unit conversions, or get help with shopping. It can even be used as an intercom.
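For readers who want a more concrete picture, here is a minimal sketch of the wake-word pattern in Python. Every name in it is a hypothetical stand-in for illustration, not Amazon’s actual software:

    def run_assistant_loop(record_chunk, detects_wake_word, send_to_cloud):
        """Conceptual wake-word loop (illustrative only).

        The microphone samples continuously; audio is only transmitted
        once the wake word is detected, and everything captured after
        that point is sent off-device for analysis.
        """
        while True:
            chunk = record_chunk()          # always listening locally
            if detects_wake_word(chunk):    # e.g., an on-device keyword spotter
                command = record_chunk()    # the request that follows "Alexa..."
                send_to_cloud(command)      # analyzed remotely; a response comes back

The key point of the pattern: the microphone never stops sampling. Only the transmission is gated by the wake word, and that always-on local listening is what worries privacy officers.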

In a medical environment, it could help physicians take notes, remotely monitor patients, or let patients ask health-related questions.

Passive listening and hacking: the downsides of Alexa

The problem is that Alexa listens to its surroundings at all times. Around the clock, it can pick up personal information, which is sent back to Amazon or could be intercepted by a hacker.

“There’s too much risk to be hacked,” Searfoss says. SCG Health used to have the device in its building, but “we kicked Alexa out of our office after considering the vulnerabilities of the passive listening technology.”

There isn’t just a “possibility” of being hacked; it’s a reality. There are already several documented instances of Alexa being compromised. One avenue is the “Dolphin Attack,” in which the device responds to frequencies humans are unable to hear.

In this type of attack, hackers shift a voice command to frequencies above 20,000 Hz and play it through another phone’s speaker. Humans can’t hear it, but the device’s microphone will pick it up. Another concern for users is that a compromised device looks no different from one that hasn’t been compromised.

After picking up the inaudible command, Alexa carries it out without the user’s permission. All that’s needed is a battery, a smartphone, an ultrasonic transducer and an amplifier, all readily sold online at a low price.

After a successful attempt, invaders can open your garage door (provided the right technology is installed) or place calls.
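For the technically curious, the signal processing behind a Dolphin Attack is ordinary amplitude modulation. The sketch below is conceptual only, not a working exploit (real demonstrations required specialized ultrasonic hardware and careful tuning), and the function and parameter names are ours, not from any published attack code:

    import numpy as np

    def ultrasonic_shift(command_audio, sample_rate=192_000, carrier_hz=25_000):
        """Amplitude-modulate a voice command onto a ~25 kHz carrier.

        Most people hear nothing above roughly 20 kHz, but microphone
        hardware is slightly nonlinear and can demodulate the command
        back into the audible band, where the wake-word detector hears it.
        """
        t = np.arange(len(command_audio)) / sample_rate  # time axis, in seconds
        carrier = np.cos(2 * np.pi * carrier_hz * t)     # inaudible carrier tone
        return (1 + command_audio) * carrier / 2         # classic AM waveform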

Another way Alexa can be hacked involves pre-installing software on the device that turns it into a wiretap, recording any sound it picks up to a computer at another location.

Forbes successfully tested this out. One disadvantage (to the hacker) is that the installation takes several hours of hands-on work, but the technique still poses a threat to anyone who buys an Alexa device from a secondhand source.

In one of those less-concerning instances when hacking is used for something good (or at least something funny), Alexa was hacked into a Big Mouth Billy Bass -- one of those wall-mounted fish that sings songs like “Don’t Worry Be Happy” or “Take Me To The River.”

Alexa isn’t HIPAA compliant. Here is how Amazon plans to fix it.

Another big concern for Amazon’s Alexa (as if being hacked wasn’t big enough) is that it’s not HIPAA compliant. As such, its use in healthcare is extremely limited.

The idea of having a device which could be recording patient data presents a clear threat: “It’s collecting info that has PII,” Ms. Searfoss says.

To move Alexa toward HIPAA compliance, Amazon recently hired a HIPAA compliance agent to tackle legal requirements such as business associate agreements (BAAs). The new hire is expected to help ensure that “technology and business processes meet [Amazon’s] HIPAA BAA requirements, as well as all applicable federal and state laws, regulations and standards.”

Some healthcare organizations have begun testing the device’s capabilities despite the risk. WebMD, for example, lets Alexa deliver its web content to users in their own homes. Beth Israel Deaconess Medical Center (BIDMC) ran a successful pilot in an inpatient setting (without actual patient data) and eventually plans to use Alexa in a clinical setting, but not until Amazon signs a BAA.

Boston Children’s Hospital (BCH) also experimented with using Alexa to give information to its clinical staff, but because it didn’t have a BAA, only non-identifiable health information was used. BCH also created an Alexa skill called KidsMD, which lets users ask for advice when their kids have a fever.

SCG Health will continue to stand strong and enforce its ban on Alexa -- at least until Amazon signs a business associate agreement.


Don’t come back! Ensuring that former employees can’t access HIPAA-protected confidential patient information

Wednesday, December 13, 2017

By Marla Durben Hirsch

One of the most common – and frustrating – ways that patient information is compromised in violation of HIPAA is by a provider’s own staff. The problem is so prevalent that the Department of Health and Human Services’ Office for Civil Rights (OCR) has issued a new alert about it. 

The alert, released in a newsletter November 30, focuses on the risks posed by former employees who still have the ability to access electronic protected health information (ePHI) “with evil motives.” 

“Effective identity and access management (IAM) policies and controls are essential to reduce the risks posed by these types of insider threats. IAM can include many processes, but most commonly would include the processes by which appropriate access to data is granted, and eventually terminated, by creating and managing user accounts. Making sure that user accounts are terminated, so that former workforce members don’t have access to data, is one important way IAM can help reduce risks posed by insider threats,” OCR says. 

OCR recommends a number of common sense steps that covered entities and business associates should take to protect patient information from former employees: 

  • Have standard procedures covering all action items to be completed when an individual leaves – these could be incorporated into a checklist. They should include notifying the IT department or a designated security officer when an individual should no longer have access to ePHI – when duties change, or when the person quits or is fired.
  • Consider using logs to document whenever access is granted (both physical and electronic), privileges increased, and equipment given to individuals. These logs can be used to document the termination of access and return of physical equipment.
  • Consider having alerts in place to notify the proper department when an account has not been used for a specified number of days. These alerts may be helpful in identifying accounts that should be permanently terminated (see the sketch after this list).
  • Terminate electronic and physical access as soon as possible.
  • De-activate or delete user accounts, including disabling or changing user IDs and passwords.
  • Have appropriate audit procedures in place. Appropriate audit and review processes confirm that procedures are actually being implemented, are effective, and that individuals are not accessing ePHI when they shouldn’t or after they leave.
  • Change the passwords of any administrative or privileged accounts that a former workforce member had access to.
  • Address physical access and remote access by implementing procedures to:
    • take back all devices and items permitting access to facilities (like laptops, smartphones, removable media, ID badges, keys);
    • terminate physical access (for example, change combination locks, security codes);
    • effectively clear or purge ePHI from personal devices and terminate access to ePHI from such devices if personal devices are permitted to access or store ePHI;
    • terminate remote access capabilities;
    • terminate access to remote applications, services, and websites such as accounts used to access third-party or cloud-based services.
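
As a rough illustration of the dormant-account alert item above, here is a minimal sketch in Python. The record layout and the 30-day threshold are our own assumptions for the example; OCR prescribes neither:

    from datetime import datetime, timedelta

    DORMANCY_LIMIT = timedelta(days=30)  # a policy choice; OCR sets no specific number

    def flag_dormant_accounts(accounts, now=None):
        """Return user IDs of active accounts with no recent logins.

        Each record is assumed (hypothetically) to look like:
        {"user": "jdoe", "last_login": datetime(...), "active": True}
        """
        now = now or datetime.now()
        return [a["user"] for a in accounts
                if a["active"] and now - a["last_login"] > DORMANCY_LIMIT]

    # Example: one stale account, one current one
    accounts = [
        {"user": "jdoe", "last_login": datetime(2017, 9, 1), "active": True},
        {"user": "asmith", "last_login": datetime.now(), "active": True},
    ]
    print(flag_dormant_accounts(accounts))  # ['jdoe']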

Alert leaves some gaps
While the alert provides practical advice, there are some issues it did not address: 

The legal consequence of dropping the ball. The alert doesn’t remind readers that unauthorized access by a current or former employee is presumed to be a breach of PHI reportable to HHS and likely a HIPAA violation if the entity hadn’t taken adequate steps – i.e., the ones listed in the alert – to protect the records. 

For instance, earlier this year Memorial Healthcare System in South Florida paid $5.5 million to settle allegations that it had violated HIPAA. The ePHI of more than 80,000 patients was impermissibly accessed by employees and disclosed to affiliated physician office staff using the login credentials of a former employee of an affiliated physician’s office. Memorial had failed to implement procedures for reviewing, modifying and/or terminating users’ right of access. It also failed to promptly detect the activity, even though it had identified the issue as a vulnerability. Some of these disclosures led to federal charges relating to selling the patient information and filing fraudulent tax returns. 

The need to include access control in security risk analyses. The alert doesn’t point out a fundamental tenet: the provider needs to know not only who has access and equipment but also what kind of risk that poses. Are smartphones encrypted? Can computers be remotely accessed? Are passwords changed regularly? It’s therefore important to include access control in your HIPAA-required security risk analysis.

Failure to comply can create additional headaches. Providers need to worry about more than HIPAA compliance. For example, a former employee of a perinatal medical practice who took a new job with a competitor hacked the computer of his former employer and took all of the patient information to create a direct mail campaign for the competitor. To add insult to injury, he wiped the patient information from the former employer’s system. He was convicted of the cybercrime and went to prison, but caused tremendous damage to the practice. 

Takeaway: Implement these safeguards, many of which are easy and cost-effective. If OCR is flagging a particular issue, that means it’s occurring often and is a priority for the agency.


OCR issues new warning about protecting patient information on mobile devices

Monday, November 13, 2017

By Marla Durben Hirsch, contributing writer

Tread carefully if you store, send, receive or transmit electronic protected health information (ePHI) via a laptop, iPhone or other portable electronic device. The Department of Health and Human Services’ Office for Civil Rights (OCR) has issued a new alert about the vulnerability of such information.

The alert, released in a newsletter October 31, notes that while mobile devices are convenient and easy to use, ePHI on them is particularly difficult to keep secure. The devices usually run on default settings, enabling them to connect to unsecured Wi-Fi, Bluetooth, cloud storage or file-sharing services, where others can access the data. Users commonly and inadvertently download malware or viruses, which can expose or corrupt the data on the device. And the devices themselves are frequently lost or stolen. 

For example, more than 27% of the breaches of 500 or more patient records archived on HHS’ HIPAA breach “wall of shame” were due to the loss or theft of a laptop or other portable device. That figure doesn’t even include potentially related breaches, such as emails containing ePHI that were sent from a mobile device to the wrong recipient or intercepted. 

“As mobile devices are increasingly and consistently used by covered entities and business associate[s] and their workforce members to store or access ePHI, it is important that the security of mobile devices is reviewed regularly, and modified when necessary, to ensure ePHI remains protected,” OCR says in the alert. 

OCR recommends that entities: 

  • Include mobile devices when conducting their HIPAA-required security risk analyses to identify vulnerabilities that could compromise patient data and take action to reduce any vulnerabilities or risks found.
  • Implement policies and procedures regarding the use of mobile devices in the workplace, especially when used to create, receive, maintain, or transmit ePHI.
  • Consider using Mobile Device Management (MDM) software to manage and secure mobile devices.
  • Install or enable automatic lock/logoff functionality.
  • Require authentication to use or unlock mobile devices.
  • Regularly install security patches and updates.
  • Install or enable encryption, anti-virus/anti-malware software, and remote wipe capabilities.
  • Use a privacy screen to prevent people close by from reading information on your screen.
  • Use only secure Wi-Fi connections.
  • Use a secure Virtual Private Network (VPN).
  • Reduce risks posed by third-party apps by prohibiting the downloading of third-party apps, using whitelisting to allow installation of only approved apps, securely separating ePHI from apps, and verifying that apps only have the minimum necessary permissions required.
  • Securely delete all ePHI stored on a mobile device before discarding or reusing the mobile device.
  • Train staff on how to securely use mobile devices. 
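
Several of these recommendations lend themselves to a simple periodic audit. The sketch below checks a device inventory against a few of the safeguards named above; the inventory format and field names are invented for illustration, not drawn from any MDM product:

    # Safeguards drawn from the OCR list above; field names are hypothetical.
    REQUIRED_CONTROLS = ("encrypted", "auto_lock", "remote_wipe", "patched")

    def audit_device(device):
        """Return the recommended safeguards a device record is missing."""
        return [c for c in REQUIRED_CONTROLS if not device.get(c)]

    inventory = [
        {"id": "laptop-01", "encrypted": True, "auto_lock": True,
         "remote_wipe": False, "patched": True},
        {"id": "phone-07", "encrypted": False, "auto_lock": True,
         "remote_wipe": True, "patched": False},
    ]
    for device in inventory:
        gaps = audit_device(device)
        if gaps:
            print(device["id"], "missing:", ", ".join(gaps))

In a real practice, a check like this would run against the MDM system’s inventory export as part of the security risk analysis.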

Don’t expect leniency on this one
OCR’s concern about protecting patient data on portable devices is not new. The agency has previously published resources on this topic, including videos, checklists and FAQs. But the fact that OCR felt a need to repeat itself indicates that it believes that entities are not “getting the memo” and are not taking proper precautions. 

It also shows that this is a front-burner issue for the agency, and that entities that fail to comply despite the plethora of repeated guidance are more likely to face harsher punishment. 

Takeaway: Entities that suffer a breach of ePHI related to a mobile device are not going to be able to defend themselves on the ground that there was no guidance to help them protect the data. Now’s a good time to assess your office’s use of portable devices and reduce the risk that patient information will be exposed.


This doctor’s lawsuit against a patient for defamation is worth some extra attention

Monday, October 30, 2017

By Marla Durben Hirsch, contributing writer

More patients than ever are turning to the Internet to share their provider experiences, causing more providers to consider how to deal with negative reviews. One of the latest lawsuits may provide some much-needed guidance in this area.

Ohio plastic surgeon Bahman Guyuron sued former patient Marisa User over negative reviews she posted anonymously on several websites after he performed her nose job. According to his lawsuit, User posted false and disparaging information that injured him and damaged his reputation. Among other things, she stated that he was untrustworthy and unprofessional, that her nose is now twice its original size, that there was no follow-up care, that Dr. Guyuron posts or encourages others to post false positive web reviews, and that there was no informed consent for the procedure. 

In his lawsuit, Dr. Guyuron asked for money damages, an injunction to keep User from posting about him on the web, and an order to remove the existing statements, which are still on the internet. 

User claims that the statements were either truthful or her opinion, so she’s not liable for defamation. 

What’s somewhat unique about this lawsuit is that it appears to be going to trial. Most of these lawsuits get settled or thrown out; in this case, however, the court in April refused to rule for either party as a matter of law, stating that there were too many questions that first needed to be hashed out. The trial is slated for early 2018. 

Whether a defamation lawsuit will be successful depends on the facts involved. Some of the most interesting questions in this one include:

  • Whether these particular posts are protected as free speech under the Constitution. Are User’s negative posts truthful? Did Dr. Guyuron really make her nose twice as large? Is the doctor really a liar or was that the patient’s opinion?
  • Whether Dr. Guyuron is a public figure. If so, it will be harder for him to demonstrate defamation since he needs to show that User had published her statements with “actual malice.” He’s arguably just a physician in a private practice. However, Dr. Guyuron maintains a website touting his expertise, accomplishments and affiliations. In addition, the website states that he’s one of the world’s top plastic surgeons, an internationally recognized teacher and innovator in plastic surgery, and author of several books. At what point does a physician cross the line and become a “public” figure?
  • Whether HIPAA will be an issue. Sometimes providers who try to defend themselves against a negative review reveal patient-identifying information, which can lead to a violation of HIPAA’s privacy rule. It’s okay if a patient discloses her own identifying information; it’s not if the physician does so without permission.

Takeaways:
Negative reviews can hurt a physician’s practice. Yet litigation is always a risky business, especially when there’s a jury trial, which Guyuron has requested. 

The best defense is prevention. Try to assess patient concerns early on before they fester. Some physicians provide surveys or other opportunities for patients to provide feedback before they’ve even left the office. 

If you do come across a negative post about you on the internet, you may be able to take action depending on what it says. For example, if it’s clear that it wasn’t posted by an actual patient, the website may remove it. If a patient has complained that you spent too much time entering data into your electronic medical record and not enough time face-to-face, perhaps private outreach and an apology will mollify him.

It may also be a good time to reassess whether there’s truth to the negative review and modify operations accordingly. For instance, if patients are posting that the front office is surly, that may be worth looking into.


Warning: responding to a patient’s online review could be more trouble than it’s worth

Monday, October 23, 2017

By Marla Durben Hirsch, contributing writer

It may be tempting to defend yourself against a negative review of your practice on Yelp, Facebook or other online forums by responding to it. However, in many instances that move can hurt a practice further.

Patients have always shared their experiences with physicians with friends and family. The internet enables these opinions to reach a much wider audience. In addition to online reviews of physicians posted by affiliated hospitals and health plans, consumers are increasingly posting on physician rating sites like ZocDoc and Rate MDs as well as general sites such as Angie’s List or an individual’s own blog.

A negative review can adversely affect a practice’s reputation and subsequent business.

But often responding to the post can make the problem worse.

One of the biggest problems is violating the Health Insurance Portability and Accountability Act’s (HIPAA) privacy rule. While patients aren’t subject to HIPAA, providers are, and in trying to defend themselves they often reveal identifying patient information in violation of the law. An analysis of Yelp ratings conducted last year by ProPublica and published in the Washington Post found more than 3,500 one-star reviews (the lowest rating) in which patients mention HIPAA or privacy. Those patients feel doubly slammed: first by the inadequate care they received, and then by the privacy violation incurred when the physician responded to the negative online post.

But HIPAA is not the only legal problem you may run into when responding to an online post. Other issues include:

Malpractice claims. Responding to a negative online post may increase the risk that the patient will sue you for providing substandard care. For example, a patient may have been content simply to bad-mouth a provider and leave it at that. But if the provider responds with “I didn’t recommend an MRI because it wasn’t necessary,” and it turns out that an MRI actually would have been helpful, the provider is stuck with a statement that can be used against him.

Defamation allegations. If the provider responds with something that the patient refutes, the patient may then decide to defend herself by suing the physician for defamation. Even if what you’ve posted is correct, you still need to spend the time and resources to defend yourself.

State privacy law violations. HIPAA provides no private right of action, so a consumer can’t sue a provider for an alleged violation; he can only file a complaint with the Department of Health & Human Services’ Office for Civil Rights. However, many state laws allow consumers to bring breach-of-privacy actions in court.

There are also nonlegal risks to responding publicly. For example, it brings the post back to the forefront. It can also further anger the patient, leading to more posts.

So, what should a physician do when dealing with a negative post on the internet? A lot depends on exactly what was posted. Here are some recommendations:

  • Ignore the negative post. Often one negative post is offset by positive ones, and the negative one gets buried.

  • Use the negative post to evaluate your practice. For instance, if there are several posts decrying the cleanliness of your facilities, you may want to address and resolve those issues.

  • Consider responding to the patient privately, if you know who it is. For instance, if a particular patient posted that she had to wait two hours in your waiting room, and you contact her to explain that your car broke down on the way to the office or you were called into emergency surgery that morning, she may be more forgiving and willing to amend the post or take it down.

  • Ask the site to delete the post. Most sites have no obligation to do so; however, if you have compelling evidence the site may acquiesce. For instance, one negative post about a doctor turned out to have been posted by a competitor, not a patient. The site removed the post.

  • Weigh any legal action on your part very carefully. The internet is rife with stories of physicians and dentists who have sued patients for defamation. Sometimes the provider is successful; sometimes the tactic backfires; it depends on what the patient posted. There’s also the cost of bringing a lawsuit, not to mention that the litigation itself can create negative publicity.

  • Protect yourself if a post raises a legal issue. For example, if a patient has alleged that you’ve committed malpractice, you may need to notify your insurance carrier.


Think Twice Before Using a Voice Assistant at Work

Monday, October 16, 2017

By Marla Durben Hirsch, contributing writer

Voice assistants like Apple’s Siri or Amazon’s Echo Dot can be convenient and easy to use, but tread very carefully if you’re going to use them in your practice, since they are fraught with risk.

A recent survey found that almost a fourth of physicians are already using a voice assistant for work-related reasons, including drug dosing queries, diagnostic information searches, communication and dictation. That number is expected to rise as the devices become more popular, but these tools have shortcomings that many physicians are not aware of, according to attorneys Elizabeth Litten and Michael Kline, with the law firm of Fox Rothschild in Princeton, New Jersey.

Some of these risks include:

Privacy. Since these are voice-powered devices, the physician’s query and the device’s response can easily be overheard by others, says Kline. If a physician provides so much patient-specific information that the patient becomes identifiable, speaks more loudly than necessary, or positions the device outside a private area, it could violate the Health Insurance Portability and Accountability Act (HIPAA) or state-law privacy rights.

Security. Most people don’t realize that voice assistants store users’ queries, making them subject to hacking. In addition, different voice assistants deploy different levels and types of security of the stored information, warns Litten. For instance, Siri keeps recordings and transcripts but ties them to random numbers, making the users more anonymous. Amazon’s Alexa is much less secure; it stores full transcripts which can be viewed by the user – as well as anyone else who can access the user’s account. “You’ve created data. Be aware this is another place where the data is stored,” says Litten.

There’s also the vulnerability of the devices themselves. Voice assistants are part of the “Internet of Things,” like smart TVs, wireless insulin pumps and baby monitors. Unfortunately, they are easily exploited by cybercriminals, who eavesdrop and collect information to use against users, disable or reprogram the devices, download malware that can then affect other electronic systems, and commit other wrongdoing. The FBI has issued several warnings about the security risks of internet-connected things, including one this past summer.

Unreliable, inaccurate or wrong content. Just because a voice assistant is programmed to search the internet does not mean that the information it locates and spits back is reliable and trustworthy. “You don’t know how good the information is and whether it’s validated,” warns Litten [for example, ask Siri to divide one by zero]. This is particularly concerning when asking for and relying on clinical information.

Medical record problems. Some voice assistants that input information automatically into the medical record may do so in a way, or in a location, that you don’t want or that is hard to find later. And some physicians neglect to enter clinical information received from a device into the patient’s medical record, rendering the record faulty or incomplete, warns Kline.

Malpractice concerns. Relying on questionable or wrong information provided by a voice assistant and failing to keep complete medical records are malpractice risks. And since the tools store past queries, they can be discoverable in malpractice litigation.

Six tips to protect your practice:

Some entities have banned the use of voice assistants in the workplace. However, an outright ban may not be feasible or desired; it may also be difficult to enforce, says Kline. If these devices will be allowed, at least take some steps to reduce your risks:

  • Understand the security of the voice assistant you want to use. Know how it stores and retrieves data, says Litten. If you have a choice, use one that’s more secure. For instance, the assistant in your electronic health record or iPhone may be more secure than Amazon’s Alexa.

  • Use the tool cautiously. Don’t rely on it for clinical information without corroboration, and don’t provide it with identifiable patient information. “Be careful how you frame inquiries,” suggests Litten.

  • Make sure that all physicians and staff understand the limitations and risks of using voice assistants in the office. Hopefully this will cause users to be more careful. For example, all searches should be viewed as nonprivate, says Litten.

  • Take steps to not violate HIPAA or state privacy and security rules. Include these devices as part of the practice’s overall HIPAA compliance efforts. For instance, don’t use the tools in a way that others can overhear the query or the response. “Don’t yell to Alexa and be a bigmouth. Use [HIPAA’s] elevator rule [and don’t discuss patient information in public],” says Litten.

  • Adopt the FBI’s suggestions to reduce the tool’s vulnerability. For example, change the password on the voice assistant from the manufacturer’s default password so that it’s less likely to be exploited; apply security update patches when applicable.

  • Document applicable clinical information into the medical record. “If it’s not documented it didn’t happen,” warns Kline.




SCG Health is a tradename of the Searfoss Consulting Group, LLC. You may reproduce materials available on this site for your own personal use and for noncommercial distribution. For more information, please read the Content Sharing Policy. Art & design by SCG Health. DISCLAIMER: You should consult an attorney for individual advice regarding a particular set of facts and circumstances. SCG Health reserves the right to change the information on this website without notice.