
SecurityMetrics Guide to HIPAA Compliance

What Healthcare Covered Entities and Business Associates Need to Know

This post contains part of the text from the SecurityMetrics Guide to HIPAA Compliance. To view the full text, download the PDF below.

Download the latest guide to HIPAA Compliance

Download now


Despite advances in security technology and increased governmental cybersecurity initiatives, attackers will not abandon their pursuit of patient data. Patient data is valuable. It can be used to file false claims, acquire prescription drugs, or receive medical care. Patient data often includes enough information to steal a person’s identity entirely, allowing criminals to open credit accounts, file fraudulent tax returns, or receive government-issued ID cards. 

This past year, healthcare entities accounted for 29.2% of reported data breaches.

In light of recent data breaches, it’s clear that the healthcare industry is less prepared for HIPAA compliance than patients would expect. HIPAA compliance, especially with the Security Rule, has never been more necessary as the value of patient data continues to rise on the dark web.

Far too often, it’s the simple, easy-to-correct things that go unnoticed and create vulnerabilities that lead to a data breach. Even healthcare organizations with layers of sophisticated IT defenses can be tripped up by an employee who opens an errant email or uses a less-than-complex password.

This guide is not intended to be a legal brief on all aspects of the HIPAA regulations. Rather, it approaches HIPAA from the perspective of a security analyst, focusing on how to protect electronic patient data. This guide will examine the policies, procedures, and security controls recommended to keep electronic patient data private and secure as described under HIPAA’s Privacy and Security Rules. It also discusses Breach Notification and Enforcement Rules.

This guide includes recommendations from experienced HIPAA audit professionals.



  • Executive Summary
  • How to Read This Guide
  • HIPAA Compliance Overview
  • Forensic Investigator Perspective


  • Security Rule Introduction
  • Risk Analysis and Risk Management Plan
  • PHI Destruction
  • PHI Encryption
  • HIPAA Compliant Emails
  • Mobile Device Security
  • Physical Security
  • HIPAA Compliant Firewalls
  • Wireless Networks (Wi-Fi)
  • System Configuration Standards
  • Secure User Access
  • Secure Remote Access
  • Logging and Log Management
  • Data Breach Prevention Tools
  • Vulnerability Scanning
  • Penetration Testing


  • Breach Notification Rule Introduction
  • Incident Response Plan Overview
  • Develop Your Incident Response Plan


  • Privacy Rule Introduction
  • Uses and Disclosures of PHI
  • Implement Privacy Rule Policies


  • HIPAA Documentation
  • HIPAA Training
  • HIPAA Audits
  • HIPAA Budget


  • Top-Down Security
  • Contributors
  • Terms and Definitions




SecurityMetrics conducted five surveys in 2018 to gather information specific to HIPAA’s Security, Breach Notification, and Privacy Rules.

We received responses from over 240 different healthcare professionals responsible for HIPAA compliance. These professionals primarily belonged to organizations with fewer than 500 employees, but these statistics are important for organizations of any size because most (if not all) healthcare organizations share patient data with smaller organizations (e.g., hospitals send patient data to specialty clinics). Whenever patient information is shared, the security of one organization could impact the security of the other, regardless of size. 


  • 60% of respondents train employees yearly; 8% train employees semi-annually; 10% never train employees; 13% don’t know how often they train employees.
  • 35% of respondents provide HIPAA Privacy Rule-related training; 31% provide HIPAA Security Rule-related training; 20% provide HIPAA Breach Notification Rule-related training.
  • 47% of respondents test employees on HIPAA-related training.


  • 46% of respondents don’t conduct a formal HIPAA risk analysis; 25% of respondents don’t know if they conduct a risk analysis.
  • 18% of organizations conduct a risk analysis at least annually.
  • 51% of respondents don’t have a formal risk management plan; 32% don’t know if they have a risk management plan.
  • 11% of organizations review their risk management plan at least annually.


  • 38% of respondents felt at least moderately prepared to handle a data breach; 12% of respondents said their organization was not at all prepared.
  • 29% of respondents don’t have any response plan policies in place (e.g., incident response plan, disaster recovery plan, business continuity plan).
  • 26% of respondents reviewed their response plan policies at least annually.
  • 45% of organizations test their incident response plan (e.g., tabletop exercises).


  • 54% of respondents encrypt patient data.
  • 58% of respondents delete or destroy sensitive data.
  • 20% of respondents don’t know if their organization uses multi-factor authentication; 40% don’t use multi-factor authentication.
  • 10% of organizations’ employees share ID credentials.
  • 78% of organizations have automatic timeouts/logouts enabled on all workstations.


  • 25% of organizations allow employees to use personal mobile devices to access patient data.
  • 13% of respondents have employees that use organization-owned mobile devices for non-office related activities (e.g., checking personal email, downloading apps).
  • 38% of respondents have a mobile device policy (e.g., BYOD policy).
  • 23% of respondents don’t use mobile encryption; 18% don’t know if they use mobile encryption.


  • 9% of organizations send emails containing unencrypted patient data; 9% don’t know if patient data sent in emails is encrypted.
  • 17% of organizations’ employees send patient data to patients; 28% send patient data to doctors outside of their network.
  • 39% of organizations send patient data through encrypted email services; 13% send patient data through patient portals.


  • 31% of respondents don’t know what firewall(s) their organization uses.
  • 60% of respondents use a security professional or third party to manage their network’s firewall(s).
  • 50% of respondents don’t know how often their firewall rules are reviewed.
  • 63% of respondents have third-party vendors respond to firewall notifications (e.g., logs, alerts).
  • 33% of organizations segment their network (e.g., through firewall segmentation, VLANs, SDN).


  • 58% of organizations store system logs.
  • 45% of organizations use log monitoring systems; 40% don’t know if they do.
  • 25% of respondents review these logs at least monthly; 50% don’t know how often logs are reviewed.
  • 8% of respondents use file integrity monitoring (FIM) software; 25% use intrusion detection or intrusion prevention systems (IDS/IPS).
  • 60% of organizations don’t know how often data breach prevention tool logs are reviewed.


  • 68% of respondents conduct vulnerability scans.
  • 11% of respondents conduct both internal and external vulnerability scans; 7% only conduct internal vulnerability scans; 39% only conduct external vulnerability scans.
  • 55% of organizations conduct vulnerability scans at least quarterly.


  • 24% of respondents perform penetration tests; 45% don’t know if they perform penetration tests.
  • 20% of respondents have third-party vendors perform penetration tests.
  • 8% of respondents perform penetration tests at least annually; 53% don’t know how often penetration tests are performed.
  • 14% of respondents perform network penetration tests.


Whether you’re a new employee with limited HIPAA knowledge, an experienced system administrator, or a compliance officer, our guide aims to help you secure your environment, become compliant with applicable HIPAA requirements, and protect the privacy and security of patient information. 

Depending on your background, role, and organization’s needs, some sections may be more useful to you than others. Rather than reading our guide cover to cover, we recommend using it as a resource for your HIPAA compliance efforts. 

Also, if you aren’t familiar with a term, please refer to the Terms and Definitions section at the end of this guide. 

The chart below outlines the skill levels necessary for policy and procedure implementation:



The Health Insurance Portability and Accountability Act of 1996 (HIPAA) is a United States federal law. It was primarily established to:

  • Improve portability and continuity of health insurance coverage. Portability means insurance coverage is maintained when an individual takes a job with a new employer.
  • Combat waste, fraud, and abuse in health insurance and health care delivery. This includes implementing the Privacy Rule, Security Rule, and Breach Notification Rule.
  • Promote the use of medical savings accounts by standardizing the amount that may be saved per person in a pre-tax savings account.
  • Improve access to long-term care services and coverage. This includes coverage of individuals with pre-existing conditions. 
  • Clarify tax deductions for employers and other tax revenue items.

HIPAA has come to be associated with the HIPAA Privacy and Security Rules. The HIPAA Act is composed of five parts (or titles). These align with the purposes for the law’s enactment in the previous list:

  • Title I: Health Care Access, Portability, and Renewability
  • Title II: Preventing Health Care Fraud and Abuse; Administrative Simplification; Medical Liability Reform
  • Title III: Tax-Related Health Provisions
  • Title IV: Application and Enforcement of Group Health Plan Requirements
  • Title V: Revenue Offsets

You might be more familiar with Title II of HIPAA, since this is where the privacy and security of patient data is described.

The chart below shows an example of how HIPAA requirements are broken down, illustrating where the HIPAA Rules are described.


The Privacy Rule establishes standards to protect an individual’s medical records and other protected health information (PHI). It concerns the uses and disclosures of PHI and defines the right for individuals to understand, access, and regulate how their medical information is used. 

The Privacy Rule strives to assure that an individual’s health information is properly protected. At the same time, it allows access to the information needed to ensure high-quality health care and to protect the public. The Privacy Rule strikes a balance that permits important uses of information, while protecting the privacy of people who require health care services. 


While the Privacy Rule outlines what information is to be protected, the Security Rule operationalizes the protections contained in the Privacy Rule by addressing the technical and non-technical safeguards that organizations must put in place to secure individuals’ ePHI. 

The Security Rule protects a subset of the information covered by the Privacy Rule. The Privacy Rule covers all individually identifiable health information in any form, while the Security Rule covers only the individually identifiable health information a covered entity creates, receives, maintains, or transmits in electronic form (ePHI). The Security Rule does not apply to PHI transmitted orally or in writing. 

HIPAA was established to help address the increased risks that arose when the health care industry began to move away from paper processes and rely more heavily on the use of electronic information systems to conduct administrative and clinically based functions.


The Breach Notification Rule provides instructions for dealing with an impermissible use or disclosure of protected health information. Collectively, the Privacy, Security, and Breach Notification Rules are known as the HIPAA Rules. 


HIPAA Rules are enforced by the HHS Office for Civil Rights (OCR). Covered entities (CE) and business associates (BA) that create, receive, transmit, and/or maintain Protected Health Information (PHI) in any way must be HIPAA compliant.

If organizations are breached and not compliant with HIPAA requirements, they can face serious financial consequences.

In general, fines, costs, and losses may include:

  • HHS settlements: up to $1.5 million per violation category, per year
  • Implementation of new systems and processes: Varies
  • On-going credit monitoring for affected patients: $10 per individual
  • Federal Trade Commission: $40,000 per violation
  • Class action lawsuits: $1,000 per record
  • State attorneys general: $150,000–$6.8 million

Based on the size of breach and severity of noncompliance, these estimates can vary widely. With the possibility of these and other consequences, it’s important to take HIPAA compliance seriously.


To start your compliance efforts, you need to know where you fit in with HIPAA requirements.

A covered entity is a health plan, healthcare clearinghouse, or healthcare provider that electronically transmits health information, such as doctors, dentists, pharmacies, health insurance companies, and company health plans. A member of a covered entity’s workforce is not considered a business associate, but a healthcare provider, health plan, or healthcare clearinghouse can be a business associate of another covered entity.

A business associate is a person or entity that performs certain functions that involve the use or disclosure of PHI. Business associates can be from IT, legal, actuarial, consulting, data aggregation, management, administrative, accreditation, and financial organizations. Some business associate functions include:

  • Claims processing or administration
  • Data analysis, processing, or administration
  • Utilization review
  • Quality Assurance
  • Billing
  • Benefit management
  • Practice management
  • Repricing

For example, a business associate could be a third-party administrator that assists a healthcare organization with claims processing, or a consultant who performs utilization reviews for a hospital.


The covered entities we work with often understand and follow most of the Privacy Rule. For example, privacy practices are usually posted throughout offices and hospitals, and workforce members are typically trained on uses and disclosures. However, many covered entities have gaps in their understanding and implementation of the Privacy Rule requirements. For example, it’s common for them to not implement or update business associate agreements (BAA), which are required for all relationships wherein a business associate creates, receives, maintains, and/or transmits PHI.

Even with a BAA in place, covered entities retain responsibility for how their business associates protect PHI. This is why it’s necessary to monitor your business associates’ privacy and security practices, and make sure you’re only sending business associates the minimal amount of data necessary to perform their assigned tasks.

Covered entities often struggle with Security Rule requirements (e.g., firewalls, secure remote access, encryption). To start addressing issues, find your patient data. Examine every single process data goes through, every computer it sits on, every person who touches it, and every technology that has access to it.

Next, complete a risk assessment and implement a risk management plan to address any discovered technological or physical vulnerabilities.

After addressing vulnerabilities, make sure to hold regular employee training, which will teach your staff how to best protect patient data. We recommend training employees on HIPAA compliance at least once a year, though you may want to break training up into monthly or quarterly sessions.

Continuous training will help your staff keep up with and remember HIPAA rules and regulations.


When it comes to responsibility, business associates sometimes think they’re exempt from HIPAA compliance, especially those who don’t consider themselves part of the healthcare industry.

However, the HHS considers any entity that creates, receives, transmits, and maintains PHI on behalf of a covered entity to be a business associate, and requires them to be HIPAA compliant. Business associates are legally bound to protect PHI by following the Security and Breach Notification Rules, and to follow the Privacy Rule established by the covered entity or entities with which they do business.

If your system handles PHI, it needs to be fully HIPAA compliant. This is why business associates should consider implementing network segmentation to separate the devices that interact with PHI from the rest of the network. Segmentation is one of the easiest ways to reduce the cost, effort, and time spent on getting your systems HIPAA compliant.

If your covered entity terminates your contract, you need to make sure that any PHI you have created, received, transmitted, and maintained is:

  • Returned to the covered entity
  • Protected by adequate safeguards and security
  • Not used or disclosed
  • When possible, permanently deleted

If it isn’t possible to return or destroy PHI, you will need to continue to protect the information from impermissible uses and disclosures.

You also need to assess your responsibilities concerning minimum necessary requirements, making sure to limit the amount of PHI you use, disclose, and request to the minimum amount necessary to accomplish the intended purpose. Specifically, every time you grant employee access to PHI and receive PHI from another organization or individual, ask yourself what the minimum amount of information is required to accomplish the requested task.



SecurityMetrics Forensic Investigators thoroughly analyze the environment of organizations that suspect or discover a data breach. Through forensic examination of the in-scope computer systems that handle PHI, data acquired from the breach site can reveal when and how the breach occurred, as well as the contributing vulnerabilities.


The window of compromise refers to the time between when an intruder accesses a critical network and when the breach is contained through security remediation. Based on data collected by SecurityMetrics Forensic Investigators from breaches discovered in 2018, organizations were vulnerable for an average of 166 days before an attacker compromised their system, and the average organization was vulnerable for 275 total days.

Nearly every organization will experience system attacks from a variety of sources.

Due to inherent security weaknesses in systems and technology, some organizations have system, environment, software, and website weaknesses that attackers can exploit from the day their environment is set up. In other cases, an organization becomes vulnerable because it fails to apply a security patch or makes system modifications without properly updating related security protocols.

Once attackers successfully compromised a network, they were able to capture sensitive data for an average of 237 days. This may be attributed to aggregation methods employed by data thieves. Attackers have been known to save patient data from malware scraping (or other tools), without using or selling the data for months to years.

Using this aggregation method prevents organizations from quickly identifying malicious account activity, which would expose the data breach much sooner and greatly limit the amount of patient data that attackers could acquire.

Often, it’s the small, easy-to-correct things that go unnoticed that lead to data compromise.


  • The average breached organization was vulnerable for 275 days.
  • Sensitive data was captured for an average of 127 days.
  • Sensitive data was exfiltrated for an average of 127 days.
  • 50% of organizations were breached through remote execution/injection.
  • 33% of organizations were breached internally (i.e., employee assisted).
  • 17% of organizations were breached through phishing emails.


  • Vulnerable: A weakness in a system, environment, software, or website that can be exploited by an attacker.
  • Captured: Data is being recorded, gathered, and/or stored by an unauthorized party.
  • Exfiltrated: Data is transferred out of a system without authorization.



Healthcare organizations often struggle to apply the Security Rule, as opposed to the Privacy or Breach Notification Rules. This is why PHI is often leaked or stolen from healthcare organizations that have not been properly following the Security Rule. The HHS OCR Breach Portal shows that over 1600 breaches since 2009 occurred because of electronic device misuse or loss (e.g., laptops, desktop computers, network servers).

According to the HHS, a major goal of the Security Rule is to protect the privacy of individuals’ electronic health information, while allowing organizations to adopt new technologies to improve the quality and efficiency of patient care.

For example, some HIPAA Security Rule requirements try to make it more difficult for attackers to install malware and other harmful viruses onto systems, such as:

  • §164.308(a)(5)(ii)(A) Install periodic security updates.
  • §164.308(a)(5)(ii)(B) Establish procedures for guarding against, detecting, and reporting malicious software (anti-virus).
  • §164.308(a)(5)(ii)(C) Enable logging and log alerting on critical systems.
  • §164.308(a)(5)(ii)(D) Establish password management procedures for creating, changing, and safeguarding passwords.
  • §164.308(a)(5)(i) Implement a security awareness and training program for all workforce members (including doctors and senior management).
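As a small illustration of the password management safeguard listed above, a screening check like the following can reject obviously weak passwords before they are created. The 12-character minimum and required character classes are our own illustrative assumptions; HIPAA leaves the specific parameters to each organization’s risk analysis:

```python
import re

# Illustrative password screening for a §164.308(a)(5)(ii)(D)-style policy.
# The minimum length and character-class rules below are assumptions, not
# values specified by HIPAA.
def is_acceptable_password(pw: str, min_length: int = 12) -> bool:
    """Return True if the password meets the length and complexity rules."""
    if len(pw) < min_length:
        return False
    # Require a lowercase letter, an uppercase letter, a digit, and a symbol.
    required = [r"[a-z]", r"[A-Z]", r"[0-9]", r"[^a-zA-Z0-9]"]
    return all(re.search(pattern, pw) for pattern in required)

print(is_acceptable_password("Winter2024!long"))  # True: long, mixed classes
print(is_acceptable_password("password"))         # False: short, no mix
```

A check like this is only one piece of password management; the procedures for creating, changing, and safeguarding passwords involve practices that code alone cannot enforce.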

The Security Rule was designed to accommodate healthcare organizations of all sizes and technical usage. The path to HIPAA compliance is different for everyone, and each organization must implement security controls that will effectively minimize their unique set of risks. This starts with a risk analysis.



The HHS states, “conducting a risk analysis is the first step in identifying and implementing safeguards that comply with and carry out the standards and implementation specifications in the Security Rule. Therefore, a risk analysis is foundational.” A risk analysis is a way to assess your organization’s potential vulnerabilities, threats, and risks to PHI.

Besides helping you know where vulnerabilities, threats, and risks are in your environment, a risk analysis will protect you in the event of a data breach or audit by the HHS. Organizations that have not conducted a thorough and accurate risk analysis can expect to be hit with severe financial penalties.

The purpose of the risk analysis is to help healthcare organizations document potential security vulnerabilities, threats, and risks.

The HHS has stated on multiple occasions they will make examples of healthcare organizations that put PHI at risk. Given the importance of a risk analysis, you may want to consider working with a HIPAA security expert to conduct a thorough risk analysis.

The HHS recommends that organizations follow industry-standard risk analysis protocols, such as NIST SP 800-30. Make sure that the following elements are in your risk analysis:

  • Scope analysis
  • Data collection
  • Vulnerabilities/threat identification
  • Assessment of current security measures
  • Likelihood of threat occurrence
  • Potential impact of threat
  • Risk level
  • Periodic review/update as needed


Detailed PHI flow diagrams (see example below) are vital for your risk analysis because they show how people, technology, and processes create, receive, transmit, and maintain PHI. Flow diagrams reveal where you need to focus security efforts and training.

Create a diagram that shows how PHI enters your network, the systems it touches as it flows through your network, and any point at which it may leave your network.

For example, patients fill out forms at hospitals, who send patient records to doctors’ offices, who then transfer medical records to pharmacies. Or patients might add sensitive information to third-party patient portals online, which then email a dentist receptionist, who then prints and stores it in a giant file cabinet.



In the PHI lifecycle, it’s important to identify where all PHI enters or is created. By doing this, you know exactly where to start with your security practices.

For PHI entry, think of both new and existing patient records. PHI can enter in many ways: patients filling out their own information on paper forms, the front desk taking messages for physicians, or business associates faxing you information about current or former patients.

Consider the following sample questions when determining where your electronic PHI is created and enters your environment:

  • Email: How many computers do you have, and who can log on to each computer?
  • Texts: How many mobile devices do you have, and who owns them?
  • EHR entries: How many staff members do you have entering data?
  • Faxes: How many fax machines do you have? Do they accept PHI?
  • Mail: How is incoming mail handled? Does it contain PHI?
  • New patient papers: How many papers are patients required to fill out, and where? Front desk? In the examination room?
  • Business associate communications: How do business associates communicate with you? Do they interact with PHI?
  • Databases: Do you receive marketing databases of potential patients to reach out to? What records and data do you enter into your database?
  • Websites: Do you accept PHI online?

You need to document where PHI is created, how it enters your environment, what happens once PHI enters, and how PHI exits.
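One lightweight way to capture this documentation is a simple inventory of PHI touchpoints. The field names and the sample entry below are illustrative assumptions, not a HIPAA-mandated format:

```python
from dataclasses import dataclass

# Minimal sketch of a PHI flow inventory; the fields and the sample row are
# hypothetical placeholders for your own environment's details.
@dataclass
class PhiTouchpoint:
    entry_point: str           # where PHI is created or enters
    systems: list[str]         # systems that store or transmit it
    who_has_access: list[str]  # people or roles that touch it
    exit_point: str            # how PHI leaves (transmission or destruction)

inventory = [
    PhiTouchpoint(
        entry_point="New patient intake form (front desk)",
        systems=["EHR system", "document scanner"],
        who_has_access=["front desk staff", "physicians"],
        exit_point="Paper shredded after scanning into EHR",
    ),
]

for t in inventory:
    print(f"{t.entry_point} -> {', '.join(t.systems)} -> {t.exit_point}")
```

An inventory like this doubles as input to the PHI flow diagram and to the risk analysis, since each row names the people, systems, and processes that need safeguards.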


You need to know exactly what happens to PHI after it enters your environment. Is it automatically stored in your electronic health record (EHR) or electronic medical record (EMR) system? Is it copied and transferred directly to a specific department (e.g., accounting, marketing)?

Additionally, you must record all hardware, software, devices, systems, and data storage locations that can access PHI.

Here are common places PHI is stored:

  • EHR/EMR systems
  • Mobile devices
  • Workstations
  • Email
  • Servers
  • Laptops
  • Computers
  • Applications
  • Calendar software
  • Operating systems
  • Encryption software
  • Shred bin containers
  • Non-approved storage locations
  • Wireless (networked) medical devices
  • Physical locations/storage (e.g., filing cabinets)


When PHI leaves your organization, it’s your job to ensure PHI is transmitted or destroyed in the most secure way possible. You and your business associate are responsible for how the business associate handles your PHI.

Here are some things to consider when PHI leaves your environment:

  • Business associates: Are you sending data through encrypted transmission? Are they? Is data sent to them kept at a minimum?
  • Email: What procedures are in place for how patients receive data?
  • Flash drives: What policies are in place?
  • Websites: Are patients able to access PHI online? How are those pages protected?

Once you understand these processes, you can find the gaps in your security and environment, and then properly secure all PHI.



One of the first steps in protecting PHI is determining how much of it you have, what types you have, where it can be found in your organization, what systems handle it, and who you disclose it to. You should take time to interview personnel to document those systems and who has access to them.

You are probably not aware of every task and situation that your workforce members encounter on a daily basis or every aspect of their individual jobs. Interviewing personnel is one of the best ways to get further insight into how you’re interacting with and using PHI on a regular basis. It may help you discover access to systems or certain disclosures that you were not aware of.

For example, we often see large data storage areas where patient data lies around unprotected, and staff members commonly create copies of patient data and leave the copies unattended.

Another common scenario is when IT staff doesn’t fully understand which system components ePHI is being stored on. When this happens, they can’t fully protect the data, which can and does lead to large breaches. Make sure that your IT staff fully understands how you use ePHI and where you are storing it.


SecurityMetrics Security Analyst | MSCIS | CISSP | CISA | QSA


The purpose of the risk analysis is to help healthcare organizations document potential security vulnerabilities, threats, and risks.

A vulnerability might be a flaw in system security controls that could lead to ePHI being improperly accessed. For example, let’s say you have a system that requires your employees to log in using a username and password. That would be a system security control. However, let’s imagine that you don’t have a good process in place for removing account access when an employee leaves the company. That lack of process is a vulnerability.

A threat is the person, group, or thing that could take advantage of a vulnerability. For example, what would happen if you have a disgruntled employee who leaves the company? They might want to get back into the system and obtain ePHI after they were terminated. That disgruntled employee is a threat.

Risk is determined by understanding the probability of a threat exploiting a vulnerability and combining this probability with the potential impact to your organization.

Thinking again about our disgruntled employee, how likely is it in your organization that someone will leave your organization and then gain improper access to ePHI, and what would be the impact to your organization if it happened? That exploit probability combined with exploit impact is your risk.



Consider these categories as you think about your vulnerabilities, threats, and risks:

  • Digital (e.g., weak passwords, shared ID credentials)
  • Physical (e.g., not shredding PHI, accessibility of facility)
  • Internal (e.g., workforce members)
  • External (e.g., hackers, thieves)
  • Environmental (e.g., fires, hurricanes, storms)
  • Negligent (e.g., unknowing employee, accidental loss)
  • Willful (e.g., disgruntled former employee, willfully disregarding risks)


It’s difficult—if not impossible—to find every weakness in your organization on your own. To take your security to the next level and to avoid weaknesses in your system, consider implementing additional security services such as:

  • Internal and external vulnerability scanning: Automated testing for weaknesses inside and outside your network.
  • Penetration testing: Live, hands-on testing of your system’s weaknesses and vulnerabilities.
  • Gap analysis: Consultation on where your gaps in security and compliance exist and what steps need to occur next to close them.


You need to decide what risks could impact your organization, your data, and ultimately, your patients. Risk ranking is a crucial part of your risk analysis that will eventually translate to your risk management plan.

To analyze your risk level, consider the following:

  • Probability: Just because a threat exists, doesn’t mean it will be able to take advantage of a vulnerability in your organization. For example, organizations in Florida and Maine technically could both be affected by a hurricane. However, Florida-based organizations have a higher hurricane risk level, because the likelihood of hurricane landfall is greater in Florida than in Maine.
  • Impact: How would a particular event affect your organization? While you don’t want any PHI to be accessed improperly, the impact to your business for one leaked record will be smaller than if hundreds or thousands of records are exposed. For example, while a computer screen might accidentally show PHI to a patient in the waiting room, it can only show one record at a time, while an attacker accessing your unsecured Wi-Fi could gain access to entire databases.
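The probability-and-impact analysis above can be sketched as a simple scoring exercise. The 1–5 scales, the example risks, and their ratings are illustrative assumptions, not values prescribed by HIPAA:

```python
# Illustrative risk ranking: score = likelihood x impact, each on a 1-5 scale.
# The example risks and their ratings are hypothetical placeholders.
def risk_score(likelihood: int, impact: int) -> int:
    """Combine likelihood and impact into a single score for ranking."""
    return likelihood * impact

risks = [
    # (description, likelihood 1-5, impact 1-5)
    ("Former employee retains system access", 3, 5),
    ("Unencrypted laptop lost or stolen", 4, 4),
    ("Hurricane damages on-site records", 1, 3),
]

# Rank the highest-scoring risks first so they can be addressed before
# the smaller ones.
ranked = sorted(risks, key=lambda r: risk_score(r[1], r[2]), reverse=True)

for description, likelihood, impact in ranked:
    print(f"{risk_score(likelihood, impact):>2}  {description}")
```

Ranked output like this feeds directly into a risk management plan, which addresses the greatest areas of risk first.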



In our work with individual entities, we find that many vulnerabilities and risks are missed because organizations attempt to perform a risk analysis using only in-house skills, a non-security professional, or an unqualified third party.

An in-house risk analysis can be a great first step toward HIPAA compliance, but if your staff is stretched too thin (as is typically the case), you probably won’t see accurate and thorough results. Additionally, IT staff are rarely trained to perform a formal risk analysis. Risk analysis is a skill set that requires extensive experience in information technology, business process flow analysis, and cybersecurity, so it is usually unrealistic to expect your staff to accomplish this for you.

A complete and thorough risk analysis is one of the best ways for you and your organization to make intelligent and informed business decisions. Without understanding your risk, how do you best decide where to put your resources?


SecurityMetrics Security Analyst | MSIS | QSA | CISSP


The risk analysis outcome, with its risk rankings, provides the basis for your risk management plan. The risk management plan is the step that works through issues discovered in the risk analysis and provides a documented instance proving your active acknowledgment (and correction) of PHI risks.

There are many ways to approach the Risk Management Plan, but the process will consist of three main steps:

  1. Plan how you will evaluate, prioritize, and implement security controls.
  2. Implement security measures that address the greatest areas of risk first.
  3. Test the security controls you’ve implemented and be sure to keep an eye out for new areas of risk.

The HIPAA Security Rule requires you to complete a risk analysis and risk management plan on a regular basis.


After a plan is created to address risk analysis concerns, it’s time to implement it. Starting with the top-ranked risks, identify the security measure that fixes each problem. For example, if your risk is that you still use Windows XP (an unsupported system with known vulnerabilities that cannot be patched), your security measure would be to update your computer operating system or work with your vendor to properly mitigate the identified risk.

Another important part of the risk management plan is documentation. In the event of an audit, HHS will want to see your risk management plan, its supporting documentation, and evidence of regular progress on addressing the items the plan identifies.

As far as HHS is concerned, if it’s not documented, it never happened.

Although specific items included in a Risk Management Plan vary, the following points are industry best practices:

  • Risk level: Each vulnerability discovered should be assigned a risk level, based upon the probability and impact of associated threats and vulnerabilities.
  • Security measures: You need to determine appropriate security measures and resolutions to mitigate each line item contained in your risk analysis.
  • Date completed: Including a completion date is great for both HHS documentation and your own records.
  • Assigned to: This is beneficial for all organizations, especially in instances where two or more people (e.g., doctor and office manager) are completing a risk management plan together.
  • Notes section: It’s helpful to include a comments section next to each requirement, especially what policy and procedure the item is associated with and how you’ll implement the task.
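The best-practice fields above can be captured in a simple record structure. The field names and example values below are assumptions for illustration, not a prescribed format.

```python
# Illustrative sketch of one risk management plan line item with the
# best-practice fields described above (risk level, security measure,
# completion date, assignee, notes). Field names are assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class PlanItem:
    vulnerability: str
    risk_level: str                       # e.g., "high", "medium", "low"
    security_measure: str                 # resolution that mitigates the risk
    assigned_to: str
    date_completed: Optional[str] = None  # ISO date once resolved
    notes: str = ""                       # related policy/procedure, how to implement

item = PlanItem(
    vulnerability="Workstations run an unsupported operating system",
    risk_level="high",
    security_measure="Upgrade the OS on all workstations",
    assigned_to="Office manager",
    notes="See the system maintenance policy for the upgrade procedure",
)
```

Keeping items in a structured form like this makes it easy to show HHS both the plan and the completion dates it asks for.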

Updating, implementing, and documenting your risk management plan should be an ongoing process, especially when new systems and processes are added to the PHI environment.



As you work on your risk management plan, place high priority on removing any unnecessary patient data. 

The first step to managing/deleting old data is deciding how long you need to keep the data. Many states have requirements on the amount of time that you must keep patient data. Organizations commonly maintain data for a minimum of a decade. If a patient has passed away, there will be additional requirements for data retention that must also be considered.

If you delete sensitive information (e.g., patient records, Social Security numbers) without properly wiping it, it’s still on your computer and accessible to attackers. Emptying the Recycle Bin or Trash doesn’t actually wipe files off your computer. It simply marks the file’s disk space as acceptable to overwrite and removes the file from the user’s view.

For the average user, these deleted files are impossible to retrieve because the operating system deletes the references to the file. While your computer can’t find that file for you anymore, the file still exists. For those with more advanced computer skills (e.g., hackers), this deleted data is still accessible by looking at the unallocated disk space.

Think of the Recycle Bin or Trash like putting sensitive documents in the trash can next to your desk. Individuals could easily retrieve these documents if they needed to; all they would need to do is pull them out of the trash can.

HHS regulations (45 CFR §164.310(d)(2)(i) and (ii)) state that “the HIPAA Security Rule requires that covered entities implement policies and procedures to address the final disposition of electronic PHI and/or the hardware or electronic media on which it is stored.”

The HHS has determined that for electronic PHI, overwriting (i.e., using software or hardware products to replace media contents with non-sensitive data) is the best way to securely delete sensitive patient data on systems still in use.

When thinking about how to permanently delete files off your network, don’t forget about any archived data, including:

  • Local backups
  • Cloud backups
  • External hard drive backups
  • CD or DVD backups
  • Email backups
  • FTP backups
  • Server backups
  • Mirror backups
  • Offsite backups



Most people know how to destroy physical sensitive data, such as by shredding, burning, or pulping, but when it comes to securely destroying electronic data, most healthcare professionals don’t know where to begin (e.g., options, tools, procedures).

If media is magnetic (e.g., tapes, hard drives), it should be degaussed or demagnetized. Make sure to use an appropriately sized and powered professional grade degausser to ensure no data recovery is possible. You can also physically destroy the media in an almost endless variety of ways. For example, one organization ground up their hard drives and dissolved them in a sulfuric acid solution.

If you plan to re-use or sell the media, use a repetitive overwrite method, also known as erasure or wiping. This is when you overwrite the data with randomized 1s and 0s. There are many free overwrite tools available, and most modern operating systems have features for securely deleting data.
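The repetitive overwrite idea can be sketched in a few lines. This is a minimal illustration assuming a plain file on a traditional magnetic disk; real wiping tools handle filesystem journaling, SSD wear-leveling, and verification, so treat this as a concept sketch, not a production eraser.

```python
# Minimal sketch of a repetitive-overwrite ("wiping") pass.
# Assumption: a plain file on a magnetic disk; journaling filesystems
# and SSDs may retain copies this approach cannot reach.
import os

def overwrite_and_delete(path: str, passes: int = 3) -> None:
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))  # overwrite contents with random bytes
            f.flush()
            os.fsync(f.fileno())       # force each pass to disk
    os.remove(path)                    # remove the directory entry last

# Usage: overwrite_and_delete("old_patient_export.csv")
```

This is why the guide recommends degaussing or physical destruction for media leaving your control: software overwriting only reliably covers the blocks the operating system lets you reach.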

If you use a solid-state drive or flash memory, you have several options. You can use the ATA Secure Erase command to wipe the drive, and some manufacturers supply software that performs secure erasure. However, the only sure way to destroy data on a solid-state drive or in flash memory is to physically destroy it.


SecurityMetrics Security Analyst | CISSP | QSA


If you need to keep PHI for any period of time, you must encrypt it. Encryption renders files useless to attackers by transforming them into indecipherable strings of characters that can only be read with the decryption key.

With this in mind, HIPAA requires healthcare entities to “implement a [method] to encrypt and decrypt electronic Protected Health Information” in requirement §164.312(a)(2)(iv). All electronic PHI that is created, received, transmitted, and maintained in systems and on work devices (e.g., mobile phone, laptop, desktop, flash drive, hard drive) must be encrypted. 

Some organizations argue that encryption is an Addressable requirement, and mistake this to mean that it is optional. However, if your organization determines that encryption is not feasible in your environment, you must document why and put other protections in place to protect PHI to the same degree as encryption would (or better). 

As a security organization, we view encryption as critical to all PHI stored or transmitted by your organization.

As previously mentioned, you need to make sure that you map out where PHI is created, when/where it enters your environment, how/where it is stored, and what happens to it after it exits your environment or organization.

Although HIPAA regulations don’t specify the necessary encryption, industry best practice is to use these encryption types: AES-128, AES-256, or better.

Due to the complexity of encryption rules, healthcare organizations often use third parties to ensure PHI encryption. This is partly because organizations should keep the tools for decryption on another device or at a separate location.


Historically, one of the largest reported threats to electronic PHI has been loss or theft of a physical device (e.g., a laptop). While employing adequate physical security and media movement procedures is the first line of defense to prevent these types of incidents, loss and theft still sometimes occur despite an organization’s best efforts.

Full disk encryption is the best way to protect you from penalties associated with a breach when a device is lost or stolen. The HITECH Act of 2009 modified the HIPAA Breach Notification Rule: if a device is lost or stolen and it can be proven that the data is unreadable, either through secure destruction or encryption, the loss is not reportable as a breach.

Full disk encryption for laptops and desktops is fairly easy to implement and usually comes with no additional cost, as most current operating systems come equipped with this capability.



Even though HIPAA regulations indicate that encryption is an addressable item (§164.312(a)(2)(iv), §164.312(e)(1), §164.312(e)(2)(ii)), HHS has made it very clear that encryption is viewed as required.

Sometimes, things you think are a valid method for encryption may be far from it. We have run into entities who produce a spreadsheet with PHI or other sensitive information in it, then say, “See, I encrypt it when I make the cell smaller and the numbers change to ‘###’.” Just to be clear, this is not encryption. The data is still there and easy to access even if you can’t see it.

There are three common data handling processes that are often confused: masking, hashing, and encrypting. Let me break them down for you:

  • Masking is hiding part of the data from view. It’s still there in clear text, you just can’t see all of it on the screen. You use this to hide parts of the patient information not needed by specific workforce members.
  • Hashing is running the data through a mathematical algorithm to change it into something indecipherable. You cannot undo a hashed value to get back to the original data. Generally, healthcare entities don’t hash PHI.
  • Encrypting is similar to hashing because data is run through a mathematical algorithm; however, you use an encryption key that has a paired decryption key. This way the data is safely stored, and the only way to see the data is by using the decryption key to unlock it. Currently, the strongest, most common encryption algorithm is AES-256. Whenever implementing encryption, always use the strongest algorithm your system can handle. Remember that many older algorithms are not acceptable (e.g., RC4, DES, 3DES).

You should have encryption anywhere PHI is stored so the data requires a decryption key to view it. Most computer systems can automatically handle encryption if they’re properly configured.
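The masking/hashing/encrypting distinction can be made concrete with a short sketch. The record number is invented, and the single-byte XOR "cipher" stands in for a real algorithm like AES-256 purely to show that encryption is reversible with a key — never use it for actual PHI.

```python
# Contrast masking, hashing, and encrypting on a made-up record number.
# Standard library only; XOR is a toy stand-in for AES, for illustration.
import hashlib

record = "MRN-4417-2290"

# Masking: data still exists in full elsewhere; only the display is hidden.
masked = "#" * (len(record) - 4) + record[-4:]

# Hashing: one-way; there is no way back to the original value.
hashed = hashlib.sha256(record.encode()).hexdigest()

# "Encrypting" (toy): reversible, but only with the key.
key = 0x42  # assumption: a single-byte demo key, nothing like a real AES key
encrypted = bytes(b ^ key for b in record.encode())
decrypted = bytes(b ^ key for b in encrypted).decode()
```

The point the analyst makes above falls out directly: only encryption both protects the stored data and lets an authorized key-holder recover it.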


SecurityMetrics Security Analyst | CISSP | QSA


Patient data needs to be encrypted, especially when you send it outside of your organization or across public networks within your organization. According to the HHS Breach Portal, about 15% of reported healthcare breaches have been caused by inadequate email encryption. Healthcare organizations must “implement a mechanism to encrypt electronic Protected Health Information whenever deemed appropriate” (requirement §164.312(e)(2)(ii)), such as when sending unencrypted PHI through unprotected email services (e.g., Gmail, Outlook).

Organizations can send PHI via email if it’s secure and encrypted. According to the HHS, “the Security Rule does not expressly prohibit the use of email for sending ePHI. However, the standards for access control, integrity and transmission security require covered entities to implement policies and procedures to restrict access to, protect the integrity of, and guard against unauthorized access to ePHI.”

Due to how interconnected email is and the difficulty of properly securing it with encryption, we strongly recommend avoiding the transmission of PHI via email whenever possible.

When possible, use patient portals to send information to patients. Covered entities should use secure file transfer protocol (SFTP) options for covered-entity-to-covered-entity or covered-entity-to-business associate communications.

As a general rule, free Internet-based web mail services (e.g., Gmail, Hotmail) are not considered secure for the transmission of PHI.

If you must use an Internet-based email service, make sure that this service signs a business associate agreement with you.

However, a BAA only goes so far, and ultimately, you are still responsible. The Omnibus Rule states the covered entity is still responsible for ensuring the business associate does their part to protect patient data. If found in violation of HIPAA, both parties are liable for fines. The BAA typically only discusses the business associate’s systems that touch PHI; you’re in charge of protecting the rest of the chain.




Make sure access to your email account is protected by a strong, complex password or passphrase. For example, your password should not be found in a dictionary in any language; it should be at least 10 characters long and contain upper- and lowercase letters, numbers, and special characters, or follow NIST guidance for password management.
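The complexity rule described above can be sketched as a simple check. This is an illustrative policy check, not an endorsement — note that current NIST guidance favors length and screening against breached-password lists over rigid composition rules.

```python
# Sketch of the "10+ characters with upper, lower, digit, and special
# character" rule described above. Example passwords are invented.
import string

def meets_policy(pw: str) -> bool:
    return (
        len(pw) >= 10
        and any(c.isupper() for c in pw)
        and any(c.islower() for c in pw)
        and any(c.isdigit() for c in pw)
        and any(c in string.punctuation for c in pw)
    )
```

A real deployment would also reject dictionary words in any language, as the text notes, which requires a wordlist check this sketch omits.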


Email disclaimers and confidentiality notices do not give you a free pass to send PHI-filled, unencrypted emails. That’s not their purpose. A disclaimer on your emails should merely inform patients and recipients that the information is PHI and should be treated as such.

Your legal department can assist with this verbiage. The key to remember is that no disclaimers will alleviate your responsibility to send PHI in a secure manner.



Emails sent on your own secure server do not typically need additional encryption measures during transmission, but be mindful of where these emails reside when they’re not in motion. Any email with PHI sitting on an employee’s computer or your email server needs to be encrypted at rest, and this encryption is not typically built into an email server or email client. Additionally, options like Outlook Web Access can easily leak PHI and are difficult to properly secure, which is why they should be avoided.


Do you have to encrypt an email if it’s going to another doctor? The answer is: yes. Any copy of that email that resides on your computer or your email server needs to be encrypted while it is at rest. In addition, any email that is sent to a doctor that is not in your office or on your own secure network and email server will need to have additional encryption measures in place to protect PHI in transit.

Remember, you’re in charge of proper encryption during transmission.


Doctors sometimes work on cases using home computers, and then they email the PHI back to their work email. Unless each of these emails is secured with encryption both at rest and in transmission, personal emails can open up your network to additional vulnerabilities.

Healthcare providers can exchange emails with patients and still be HIPAA compliant, as long as emails are sent securely.


Don’t send BCC emails containing PHI. If you need to send mass email messages, use a mail merge program or a HIPAA compliant service that creates a separate email for each recipient. The danger of using BCC is that email addresses aren’t reliably hidden from attackers, even when they’re part of a blind copy group.
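The "separate email for each recipient" approach can be sketched with the standard library. The server, sender, and addresses below are placeholders; in practice this loop would sit behind a HIPAA compliant mail service with transport encryption.

```python
# Sketch of sending one message per recipient instead of a single BCC
# blast, using stdlib smtplib/email. All names/addresses are placeholders.
import smtplib
from email.message import EmailMessage

def send_individually(server, sender: str, recipients: list[str],
                      subject: str, body: str) -> None:
    for rcpt in recipients:
        msg = EmailMessage()
        msg["From"] = sender
        msg["To"] = rcpt          # each recipient sees only their own address
        msg["Subject"] = subject
        msg.set_content(body)
        server.send_message(msg)  # a separate email per recipient

# Usage (placeholder host):
#   with smtplib.SMTP("mail.example.com") as s:
#       send_individually(s, "clinic@example.com", patient_list, "Reminder", text)
```

Because each message carries a single `To:` address, no recipient list exists to leak, which is the failure mode BCC is often (wrongly) trusted to prevent.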


If someone replies to your email, is this communication secure? Technically, that’s not your concern. HIPAA states that the entity or person conducting the transmission is the liable party. This means that if the replier is not a covered entity or business associate, it’s impossible for the replier to violate HIPAA. If the replier is a covered entity or business associate, the protection of that data is now their responsibility, not yours. As soon as you reply back, however, then you’re again liable for the security of that transmission.

For example, if a patient sends you an email containing PHI (e.g., treatment discussions) and you reply to this email, you’re now liable for protecting that data.


How do you protect messages initiated by patients? According to the HHS, the healthcare provider can assume (unless the patient has explicitly stated otherwise) that email communications are acceptable to the individual. Providers should assume the patient is not aware of the possible risks of using unencrypted email. The provider should alert the patient of those risks, and let the patient decide whether to continue email communications.

Remember, you must provide alternate secure methods of providing the information to the patient.


Due to the nature of email and the struggle to properly secure emails, we recommend avoiding sending emails whenever possible. Some alternatives include: patient portals, cloud-based email servers, and encrypted email services.


The use of patient portals is preferred for sending information to patients, and SFTP options are preferred for covered-entity-to-covered-entity or covered-entity-to-business-associate communications.

Patient portals are designed to let patients safely access their PHI online whenever necessary.

Not only do patient portals allow covered entities to securely communicate with other covered entities and business associates, they also allow patients to easily access their own information (e.g., medication information). Some portals even allow patients to contact their healthcare provider about questions, set up appointments, and request prescription refills.


You can also use a secure cloud-based email platform (e.g., Office365, Neo-Certified) that hosts a HIPAA compliant server. You should connect to the server via HTTPS (hypertext transfer protocol secure) so that you have an encrypted connection between you and your email server.

Unfortunately, this option does not control the email transmission from the cloud server to the recipient’s server or workstation. We only recommend this option when all senders and all recipients have accounts on the same cloud-based email service.


Encrypted email services (e.g., Brightsquid, Zixmail, and Paubox Encrypted Email) encrypt the message all the way from your workstation to the recipient’s workstation. If the recipient isn’t an email service client, the system will notify the recipient of the email; they can then connect securely to the encrypted email server to retrieve the message.


Like sending emails, using mobile devices requires additional security measures to make sure patient data is secure. Mobile devices often don’t have the same security policies as workstations and servers. Because of this, mobile devices may not be protected with technology like firewalls, encryption, or antivirus software.

In addition, when a healthcare provider uses their own personal smartphone or tablet to access patient data (i.e., BYOD procedures), these devices are vulnerable due to other apps on the device. With each downloaded app, the risk grows.

Think about others accessing that mobile device outside the office. For example, sometimes physicians, dentists, office managers, etc., let their kids play with their personal/work smartphone, and someone accidentally downloads a malicious app that can log the user’s keystrokes. The next time the doctor accesses patient data, that malware may steal the EHR/EMR system password.


Because of all these issues that come along with a Bring Your Own Device (BYOD) policy, you need to follow a few precautions in order to comply with HIPAA requirements and ensure patient data security.

The best mobile security practice is: don’t implement a BYOD strategy. That said, we realize that can be impractical.

Protecting and securing health information while using a mobile device is a healthcare provider’s responsibility. To address these concerns, consider using the National Institute of Standards and Technology (NIST) mobile guidelines for healthcare security engineers and providers.


There are some practices you should and shouldn’t follow with your patient data while using your mobile device. For example:

  • Accept OS and app updates regularly and quickly. Just like computers, mobile devices must be patched regularly to eliminate software and hardware vulnerabilities found after initial release.
  • Use discretion when downloading apps. Even if apps look legitimate, they may be infected with malware that could compromise patient data and cause a serious data breach.
  • Don’t jailbreak/root your device. Jailbreaking/rooting your device makes your device less secure. While this may let you do more with your device, it also leaves it more vulnerable to attacks.
  • Make sure the devices you plug your mobile device into (e.g., your home computer, work laptop) are secure. If your computer/network isn’t secure, it could act as a portal for hackers to gain access to your mobile device.
  • Implement a 10-character password/pin with a special character, letters, and numbers on your mobile device, when applicable.
  • Connect to your EHR/EMR system via secured remote access, either through a virtual private network (VPN) or using multi-factor authentication (MFA).
  • Encrypt your data. If you have sensitive data on your mobile device, make sure it’s encrypted.
  • Use mobile vulnerability scanning. A vulnerability scanner like SecurityMetrics Mobile for your mobile device can help you discover weaknesses.
  • Establish mobile device policies. Whether your organization owns the devices or your employees use their own devices, you need to establish security policies that address the use of mobile devices.
  • Train employees on mobile device policies and security best practices. Your employees should know about malware and take the right measures to avoid it.
  • Remote wipe devices immediately after they have been lost and/or stolen, when applicable. This process remotely erases the sensitive data on mobile devices.

Even though it can be hard to fit mobile devices into a traditional network or data security model, you need to consider them. It’s critical to include mobile devices in your information security planning.


If you can, avoid storing sensitive information on mobile devices to limit the threat of a data breach altogether.

Mobile encryption services are typically not as secure and reliable as encryption services for other devices (e.g., laptops) because most mobile devices themselves aren’t equipped with the most secure encryption. Plus, mobile technology is only as secure as a device’s passcode.

For example, Apple’s Data Protection API encrypts the built-in mail application on iPhones and iPads, but only after you enable a passcode. Encryption might not apply to calendars, contacts, texts, or anything synchronized with iCloud. Some third-party applications that use Apple’s Data Protection API are also encrypted, but this is rare.

If someone were to jailbreak your mobile device, information protected by the Data Protection API would remain encrypted only as long as the thief didn’t know the passcode. Android’s encryption works similarly, requiring a password to decrypt the device each time it’s unlocked. Additionally, if you back up your mobile device to your hard drive, ensure the backups are encrypted.

Although HIPAA regulations don’t specify the required encryption, industry best practice is to use AES-128 or AES-256 encryption (or better).



In addition to protecting your electronic PHI (ePHI), make sure to protect physical PHI. Over the years, SecurityMetrics Security Analysts have reported that many healthcare organizations don’t worry as much about their physical security. While they may address many foundational security issues, they’re likely to overlook details such as:

  • Unlocked office/storage doors
  • Open window blinds
  • Unattended reception desks
  • Lack of screen savers and privacy monitors
  • Theft of devices/hardware
  • Malware in left-behind devices (e.g., USB flash drives)

Employees may think physical security only applies after hours. However, most data thefts occur in the middle of the day, when staff is too busy with various assignments to notice someone walking out of the office with a server, work laptop, or phone.

The majority of physical data thefts take only minutes in planning and execution.

To help control physical threats, create a physical security policy that includes all rules and processes involved in preserving onsite business security. For example, if you keep confidential information, products, or equipment in the workplace, you should secure them in a locked area. If possible, limit outsider office access to one monitored entrance, and (if applicable) require non-employees to wear visitor badges at all times.

Don’t store sensitive information or documents in the open. For example, reception desks are often covered with information like passwords written on sticky notes, computers without privacy monitors, and patient records lying out in the open.

You also need to control employee access to sensitive areas, which must be related to an individual’s job function. To comply with this requirement, you must document:

  • Who has access to secure environments and why they need this access
  • What, when, where, and why devices are used
  • A list of authorized device users
  • Locations where the device is and is not allowed
  • What applications can be accessed on the device

Access documentation must be kept up to date, especially when individuals are terminated or their job role changes.

Keep an up-to-date inventory of all removable devices, including a list of authorized users, locations the device is assigned or is not allowed, and what applications are allowed to be accessed on the device.

Best practice is to not allow these devices to leave the office, but if they must, consider attaching external GPS tracking technology and installing/enabling remote wipe on all laptops, tablets, external hard drives, flash drives, and mobile devices.

In addition, make sure all workstations have an automated timeout/log out on computers and devices (e.g., a password-protected screen saver after a set amount of time). This helps discourage thieves from trying to access data from these workstations when employees aren’t there.


Most physical security risks can be prevented with little effort. Here are some suggestions:

  • While working on your risk analysis, look for physical security risks
  • Lock all office doors when not in use day and night
  • Require passwords to access computers and mobile devices
  • Use screen savers and privacy monitors on computers
  • Install and use blinds in all office windows
  • Keep logs of who goes in and out
  • Keep track of devices that go in and out of the office
  • Have policies in place for stolen equipment (e.g., remote wipe)
  • Encrypt your data or don’t store sensitive data on mobile devices
  • Establish an incident response plan
  • Train staff against social engineering
  • Limit access to PHI through role-based access
  • Have staff report suspicious people and devices
  • Make sure all reception desks protect PHI from prying eyes
  • Monitor sensitive areas with video cameras and store the video logs for appropriate durations


While you may understand how to protect sensitive information and your own proprietary data, your employees might not. That’s why regular security trainings are so important.

Social engineering is a serious threat to both small and large organizations. A social engineer uses social interaction to gain access to private areas, steal information, or perform malicious behavior. Employees fall for their tricks more often than you think.

For example, if someone walked into your office and said they were there to work on your network and needed you to lead them to the server room, would your employees think to stop and verify that person’s identity and purpose?

Train your employees to question everything. Establish a communication and response policy in case of suspicious behavior. Train employees to stop and question anyone who does not work for your organization, especially if an individual tries to enter the back office or network areas.

Employees should be trained and tested regularly, so they understand your organization’s security policies and procedures.


Network firewalls (e.g., hardware, software, and web application firewalls) are vital for your HIPAA compliance efforts. A firewall’s purpose is to filter potentially harmful Internet traffic to protect valuable PHI. Simply installing a firewall on your organization’s network perimeter doesn’t make you HIPAA compliant.


A hardware firewall (or perimeter firewall) is typically installed at the outside edge of an organization’s network to protect internal systems from malware and threat actors on the Internet. Hardware firewalls are also often used inside the environment to create isolated network segments and separate networks that have access to PHI from networks that don’t.

In summary, a hardware firewall protects environments from the outside world. For example, if an attacker tries to access your systems from the Internet, your hardware firewall should block them.

Pros:
  • Most robust security option
  • Protects an entire network
  • Can segment internal parts of a network

Cons:
  • Rules need to be carefully documented
  • Difficult to configure properly
  • Needs to be maintained and reviewed regularly


Many personal computers come with pre-installed software firewalls. This feature should be enabled and configured for any laptop computers that commonly connect to sensitive data networks. For example, if a receptionist accidentally clicks on a phishing email scam, their computer’s software firewall can help prevent malware from propagating through the corporate network, if properly configured.

Pros:
  • Protects mobile workers when outside the organizational network
  • Easier to maintain and control

Cons:
  • Doesn’t protect an entire network
  • Should not replace hardware firewalls for network segmentation
  • Fewer security options


A web application firewall (WAF) should be implemented in front of public-facing web applications to monitor, detect, and prevent web-based attacks. WAFs aren’t the same as network firewalls because they work at the application layer rather than the network layer and they specialize in one specific area: monitoring and blocking web-based traffic.

A WAF can protect web applications that are visible or accessible from the Internet. Your WAF must be up to date, generate audit logs, and either block cyberattacks or generate a cybersecurity alert if it suspects an imminent attack.

Pros:
  • Immediate response to web application security flaws
  • Protection for third-party modules used in web applications
  • Deployed as reverse proxies

Cons:
  • Requires more effort to set up
  • Possibly breaks critical business functions (if not careful)
  • May require some network re-configurations



After installation, you need to spend time configuring your firewall. The best way to configure your firewall is to restrict and control the flow of traffic as much as possible, specifically around networks with PHI access.

If your firewall isn’t configured and maintained properly, your network isn’t secure.

Depending on how complex your environment is, your organization may need many firewalls to ensure all systems are separated correctly. The more controls you have, the less chance an attacker has of getting through an unprotected Internet connection.

Take time to establish your firewall rules, or access control lists (ACLs). ACLs tell the firewall what traffic to permit and deny into and out of your network. Firewall rules typically let you whitelist, blacklist, or block certain websites or IP addresses. Some firewalls deny all access unless it’s explicitly specified in the rules.

If you don’t configure any ACLs, your firewall might allow all connections into or out of the network. Rules are what give firewalls their security power, which is why they must be constantly maintained and updated to remain effective. Remember, your firewall is your first line of defense, so dedicate time to making sure it’s set up correctly and functioning properly.
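To illustrate the default-deny approach, here’s a minimal Python sketch of how a firewall evaluates an ACL: traffic is matched against rules in order, and anything not explicitly allowed is denied. The rule format and addresses are hypothetical examples, not any vendor’s syntax.

```python
import ipaddress
from dataclasses import dataclass

@dataclass
class Rule:
    action: str   # "allow" or "deny"
    src: str      # source network in CIDR notation
    port: int     # destination port

def evaluate(rules, src_ip, port):
    """Return the action of the first matching rule; deny by default."""
    for rule in rules:
        if (ipaddress.ip_address(src_ip) in ipaddress.ip_network(rule.src)
                and rule.port == port):
            return rule.action
    return "deny"  # default-deny: anything not explicitly allowed is blocked

acl = [
    Rule("allow", "203.0.113.0/24", 443),  # hypothetical third-party support range
]

print(evaluate(acl, "203.0.113.10", 443))  # allow (whitelisted range and port)
print(evaluate(acl, "198.51.100.7", 443))  # deny (no rule matches)
```

The key property is the final `return "deny"`: a connection is blocked unless a rule explicitly permits it.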


No matter the size of your environment, things change over time. Firewall rules should be revised over the course of a few months when first implemented and reviewed at least every 6 months afterwards.

To find weaknesses in your network, use vulnerability scans and penetration tests. Regular vulnerability scans offer consistent, automated insight into your network security, while penetration tests are a more thorough way to examine network security.


  1. Create a firewall configuration standard: Before implementing settings and rules on the hardware, carefully document them, including hardware security settings, the ports/services needed for business, the justification for each rule, and both inbound and outbound traffic.
  2. Trust but verify: After implementing firewall rules and settings, test the firewall externally and internally (e.g., penetration tests, vulnerability scans) to confirm the settings are correct.
  3. Limit outbound traffic: We often worry about blocking inbound ports/services and forget that outbound traffic from inside the network should also be limited to just what is needed; this limits a hacker’s paths for exfiltrating data.
  4. Personal firewalls: Configure personal firewalls on mobile computing platforms to limit attack surfaces and minimize the propagation of malware on unsecured networks.
  5. Management: Unless it’s part of a secure managed firewall infrastructure, manage the firewall only from within your network and disable external management services.


Healthcare organizations often set up large flat networks, where everything inside the network can connect to everything else. They may have one firewall at the edge of their network, but that’s it. This is risky because the more places that have access to patient information, the higher your chances for a HIPAA violation or data breach.

Firewalls can be used to implement segmentation within an organization’s network. When networks with PHI access (e.g., EHR/EMR systems) are firewalled off from the rest of your day-to-day traffic, you can better ensure patient data is only sent to known and trusted sources.

For example, you install and configure a multi-interface firewall at your network’s edge (see example below). From there, you create an interface on the firewall solely dedicated to systems that create, receive, transmit, and maintain PHI. If no traffic is allowed into or out of this interface beyond what is strictly required, you have proper network segmentation.

Segmentation can be extremely tricky, especially for those without a technical security background. Consider having a security professional double-check all of your segmentation work.
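As a rough illustration of what segmentation means in practice, this Python sketch flags any traffic crossing between a guest segment and a PHI segment. The subnet assignments are hypothetical; real segmentation is enforced by the firewall itself, not application code.

```python
import ipaddress

# Hypothetical interface subnets on a multi-interface firewall
PHI_SUBNET   = ipaddress.ip_network("10.10.20.0/24")   # EHR/EMR systems
GUEST_SUBNET = ipaddress.ip_network("10.10.30.0/24")   # waiting-room Wi-Fi

def crosses_segment(src_ip: str, dst_ip: str) -> bool:
    """Flag traffic between the guest segment and the PHI segment."""
    src = ipaddress.ip_address(src_ip)
    dst = ipaddress.ip_address(dst_ip)
    return ((src in GUEST_SUBNET and dst in PHI_SUBNET)
            or (src in PHI_SUBNET and dst in GUEST_SUBNET))
```

With proper segmentation, any flow for which `crosses_segment` returns `True` should be blocked at the firewall.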



Large healthcare organizations typically have firewalls in place, at least at the perimeter of their network (e.g., hardware firewalls). But be careful when selecting firewalls; make sure they support the necessary configuration options to protect critical systems and provide segmentation between the networks that do and do not have PHI access.

Smaller organizations sometimes struggle to understand firewall basics, and they often don’t have the necessary in-house expertise to configure and manage them correctly and securely. If this is the case, a third-party service provider should be contracted to provide assistance, rather than simply deploying a default configuration and hoping for the best.

It may seem obvious, but leave as few holes as possible in your firewall. Rules should be as specific as possible for your network(s); don’t just allow access to all Internet connections. For example, if you have third parties that remotely support your network(s), limit their inbound access and the time-frames within which they can access your network. Then spend time reviewing your firewall rules and configuration.

Firewalls are the first (and often the only) line of defense, and strict attention needs to be given to the logs and alerts they generate. Often, the volume of log data can be overwhelming, so organizations don’t look through them.

But it’s important (and required) to review firewall logs in order to identify patterns and activity that indicate attempts to breach security. There are many good software packages available to help organizations deal with the volume of log data and to more easily pick out the important data that requires you to take action.

For firewall implementation and maintenance, remember to follow these three practices:

  • Write strict firewall rules.
  • Pay attention to what logs tell you.
  • Review firewall configurations frequently, adjust as necessary, and document everything.


SecurityMetrics Security Analyst | CISSP | QSA


Most healthcare organizations have wireless networks (i.e., Wi-Fi), with Wi-Fi access becoming a waiting room norm. The problem is many offices don’t have their Wi-Fi set up correctly with adequate encryption and network segmentation, turning this free patient amenity into a liability.

If you don’t segment guest networks from non-guest wireless networks with a firewall, you have probably already allowed impermissible disclosure of patient data and don’t even know it. Guest wireless networks should always be segmented from your non-guest wireless network by a firewall.

For example, if your Wi-Fi network name was DrSwenson, you should set up another Wi-Fi network exclusively for patients named DrSwensonGuest. Nurses, office managers, and physicians should only use DrSwenson, and patients should only use DrSwensonGuest. Both Wi-Fi networks should be secured.

In addition, make sure that only staff can connect to your non-guest network(s) with approved devices that follow your BYOD policies.



Security best practice is to set up your Wi-Fi with Wi-Fi Protected Access II (WPA2). Since 2006, WPA2 has been the most secure wireless encryption standard (despite the recent KRACK vulnerability). For additional protection, use a VPN to encrypt your Internet traffic.

Avoid using outdated wired equivalent privacy (WEP) encryption because it’s easy to compromise.


Another important safeguard is making sure the Wi-Fi password is secure. Don’t use the default password or username that comes with the wireless router.


Rogue wireless access points can give attackers unauthorized access to otherwise secure networks, letting them attack your network remotely. Consequently, it’s vital to scan for rogue access points, particularly any attached to your non-guest network, so they can be found and removed or secured.
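Conceptually, rogue-AP detection boils down to comparing a wireless survey against an approved inventory. A minimal sketch, with hypothetical BSSIDs (the radio MAC addresses of your approved access points):

```python
# Hypothetical approved-AP inventory keyed by BSSID
APPROVED_BSSIDS = {"aa:bb:cc:00:00:01", "aa:bb:cc:00:00:02"}

def find_rogues(observed_bssids):
    """Return BSSIDs seen during a wireless survey that aren't in inventory."""
    return sorted(set(observed_bssids) - APPROVED_BSSIDS)
```

In practice a wireless scanner or wireless intrusion detection system gathers the observed list; anything returned here warrants investigation.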




Any system with PHI access needs to be hardened before use; the goal of hardening a system is to remove any unnecessary functionality and to configure the system in a secure manner.

Organizations should address all known security vulnerabilities and be consistent with industry-accepted system hardening standards, such as those produced by the Center for Internet Security (CIS) and the National Institute of Standards and Technology (NIST).


Consistency is key when trying to maintain a secure environment. Once system hardening standards have been defined, it’s critical that they are applied to all systems in the environment in a consistent fashion.

After each system or device in your environment has been appropriately configured, you still aren’t done. Many organizations struggle to maintain standards over time, as new equipment and applications are introduced into the environment.

This is where it pays to maintain an up-to-date inventory of all types of devices, systems, and applications connected to PHI.

However, the list isn’t useful if it doesn’t reflect reality. Make sure someone is responsible for keeping the inventory current and based on what is actually in use. This way, applications and systems that are not approved to access PHI can be discovered and addressed.

Many organizations, especially larger ones, turn to one of the many system management software packages on the market to assist in gathering and maintaining this inventory. These applications are able to scan and report on hardware and software used in a network and can also detect when new devices are brought online.

These tools are often also able to enforce configuration and hardening options, alerting administrators when a system is not compliant with your internal standards.



Use industry-accepted configuration or hardening standards when setting up your servers, firewalls, and any system in scope for HIPAA. Examples of system hardening practices include disabling services and features you don’t use, uninstalling applications you don’t need, limiting systems to a single role, removing or disabling default accounts, and changing default passwords and other settings.

Permitting anything unnecessary to remain on a system opens you up to additional risk.

The key to system configuration and hardening is consistency. Once you have documented a standard that meets your environment’s requirements, make sure processes are in place to follow your standard as time goes on. Keep your standard and processes up to date to consider changes to your organization and requirements.

Automated tools can simplify the task of enforcing configuration standards, allowing administrators to quickly discover systems that are out of compliance.


SecurityMetrics Security Analyst | CISA | QSA


Application developers will never be perfect (and technology constantly changes), which is why updates that patch security holes are released frequently. Once a hacker knows they can get through a security hole, they often pass that knowledge on to the hacker community, who can then exploit the weakness until the patch is applied. Consistent and prompt security updates are crucial to your security posture.

Patch all critical components in your PHI flow pathway, including:

  • Internet browsers
  • Firewalls
  • Application software
  • EHR/EMR systems
  • Databases
  • Operating systems

Older Windows systems in particular can make it difficult for organizations to remain secure, especially when the manufacturer no longer supports a particular operating system or version (e.g., Windows XP, Windows Server 2003). Operating system updates often contain essential security enhancements specifically intended to correct recently exposed vulnerabilities. When organizations fail to apply such updates and patches to their operating systems, the vulnerability potential increases exponentially. Be vigilant about consistently updating the software associated with your system. Don’t forget about critical software installations.

To help you stay up to date, ask your software vendors to put you on their patch/upgrade email list.

The more systems, computers, and apps your organization has, the more potential weaknesses there are. Vulnerability scanning is one of the easiest ways to discover unpatched software holes that cybercriminals could exploit to gain access to and compromise an organization.


If you develop in-house applications, you must use very strict development processes and secure coding guidelines. Don’t forget to develop and test applications in accordance with industry accepted standards like the Open Web Application Security Project (OWASP).




This requirement is made up of two parts. The first part is system component and software patching, and the second part is software development.

System administrators have the responsibility to ensure all system components (e.g., servers, firewalls, routers, workstations) and software are updated with critical security patches within 30 days of when they’re released to the public. If not, these components and software are vulnerable to malware and security exploits.
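The 30-day window lends itself to simple tracking. This sketch (the patch names and dates are hypothetical) flags any critical patch released more than 30 days ago that hasn’t yet been applied:

```python
from datetime import date

PATCH_WINDOW_DAYS = 30  # critical patches should be applied within 30 days

def overdue_patches(pending_releases, today):
    """Return the names of pending patches older than the 30-day window.

    pending_releases maps patch name -> release date for patches
    not yet applied.
    """
    return [name for name, released in pending_releases.items()
            if (today - released).days > PATCH_WINDOW_DAYS]

pending = {"KB0001": date(2018, 1, 1), "KB0002": date(2018, 3, 1)}
print(overdue_patches(pending, date(2018, 3, 15)))  # ['KB0001']
```

A report like this, fed from your patch management system’s data, makes it easy to see at a glance which components have fallen out of compliance.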

One reason systems or software might be excluded from updates is because they simply weren’t able to communicate with the update server (e.g., WSUS, Puppet), possibly resulting from a network or system configuration change that inadvertently broke communication. It’s imperative that system administrators are alerted when security updates fail.

If there’s a legitimate reason an update can’t be applied, it must be documented. There are scenarios where a critical update isn’t appropriate for an environment and actually introduces issues when applied. This has happened in Cisco environments and emphasizes the importance of functionality and organizational testing prior to wide update deployment.

When developing software (e.g., web applications), it’s crucial that organizations adopt the OWASP standard. This standard will guide them in their web application development process, helping to enforce secure coding practices and keep software code free of exploitable vulnerabilities (e.g., cross-site scripting (XSS), SQL injection, insecure communications).

Insecure communications, for example, regularly evolve as exploits are discovered. SSL and early TLS are no longer considered acceptable forms of encryption when data is being transmitted over open, public networks.

Organizations need to embrace change control for their software development and system patching/updating. A proper change control process contains four requirements:

  1. All changes must have a documented explanation of what will be impacted by the change.
  2. All changes must have documented approval by authorized parties.
  3. Any changes to an organization’s production environment must undergo proper iterations of testing and QA before being released into production.
  4. The change control process must always include a back-out or rollback procedure in case the updates go awry.


SecurityMetrics Security Analyst | CISSP | CISA | QSA | PA-QSA | CISM



Unknown to many organizations, medical devices often ship with default passwords that never get changed. Most default passwords and settings are well known throughout hacker communities, and defaults can often be found via a simple Internet search.

When defaults are not changed, it provides attackers an easy gateway into a system. Changing vendor defaults on every system with exposure to patient data protects against unauthorized users.

In one SecurityMetrics forensic investigation, we discovered that a third-party IT vendor purposely left default passwords in place to facilitate easier future system maintenance. Default passwords might make it easier for IT vendors to support a system without having to learn a new password each time, but convenience is never a valid reason to forgo security, nor will it defray liability.


USERNAMES: admin, administrator, username, test, admin1, office, sysadmin, default, guest, public, 123456, user

PASSWORDS: 123456, passw0rd, password1, admin1234, monkey!, test1234, changeme!, letmein1234, qwerty, login


Even if default passwords are changed, a username and password that aren’t sufficiently complex make it that much easier for an attacker to gain access to an environment. An attacker may try a brute-force attack against a system, using an automated tool to enter thousands of password options within seconds until one works.

Remember, secure passwords should have at least 10 characters including an upper and lower-case letter, number, and special character, or it should follow current NIST guidance for passwords. Passwords that fall short of these criteria can easily be broken using a password-cracking tool. In practice, the longer a password is and the more characters it has, the more difficult it will be for an attacker to crack.
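The baseline above can be expressed as a simple validation check. This sketch implements the 10-character length and complexity criteria described (real deployments should also follow current NIST guidance, which emphasizes length and screening against known-breached passwords):

```python
import re

def meets_policy(password: str) -> bool:
    """Check the 10-character length/complexity baseline described above:
    at least one lower-case letter, upper-case letter, digit, and
    special character."""
    return (len(password) >= 10
            and re.search(r"[a-z]", password) is not None
            and re.search(r"[A-Z]", password) is not None
            and re.search(r"\d", password) is not None
            and re.search(r"[^A-Za-z0-9]", password) is not None)

print(meets_policy("Str0ng!Pass"))   # True
print(meets_policy("password123"))   # False (no upper-case or special char)
```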

You should also establish an account lockout that triggers after 6 consecutive failed login attempts within a 30-minute period. Requiring an administrator to manually unlock accounts prevents attackers from guessing hundreds of passwords consecutively. If an attacker only has 6 chances to guess the correct password, their attempts are far more likely to fail, and once locked out, they will move on to an easier target.
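The lockout logic described, 6 failures within a rolling 30-minute window, can be sketched like this (an in-memory illustration only; a real system would persist attempts and enforce the lock at the authentication layer):

```python
from collections import defaultdict, deque

MAX_ATTEMPTS = 6
WINDOW_SECONDS = 30 * 60  # 30-minute rolling window

_failures = defaultdict(deque)  # username -> timestamps of recent failures

def record_failure(user: str, now: float) -> bool:
    """Record a failed login; return True if the account should be locked."""
    window = _failures[user]
    window.append(now)
    # Drop failures that fell outside the 30-minute window
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    return len(window) >= MAX_ATTEMPTS
```

Because old failures age out of the window, a handful of honest typos spread across a day never locks the account, while a rapid brute-force burst does.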

Although organizations may have account credential policies in place (e.g., requiring a unique ID credential and complex password), employees often do not follow these policies.

Employees might have unique account credentials, but they often share them with other workforce members, assuming it’s acceptable to share usernames and passwords with colleagues who already have access within their system, such as nurses, providers, and receptionists.

For example, if a doctor has shared their credentials with their receptionist(s) to help with documentation or access information for patients, these employees don’t really have unique account credentials.




This requirement is all about having unique account information. For example, you must have your own unique ID and password on your laptop, with strong password cryptography. Don’t use generic accounts, shared group passwords, or generic passwords.

Today, we see broader adoption of multi-factor authentication even outside of the HIPAA realm, which is great for security. This can include your personal email, social media accounts, personal file sharing, and other services.

Security professionals recognize that passwords alone are no longer a great way to secure data; they’re simply not secure enough. But since passwords are still required, you need to set strong, long ones. A password should be at least 10 characters long and complex, with both alphabetic and numeric characters, or it should follow current NIST guidelines.

An easy way to remember complex passwords is by using pass phrases. Pass phrases are groups of words that might include spaces and punctuation (e.g., “We Never Drove Toward Vancouver?”). A pass phrase can contain symbols, upper- and lower-case letters, and doesn’t have to make sense grammatically. Pass phrases are generally easier to remember, but harder to crack than passwords.

In addition to strong pass phrases, password manager software can help you use different passwords for all of your accounts. Some password managers can even work across multiple devices by using a cloud-based service.

Use different passwords for every service used, so if one service gets compromised, it doesn’t bleed into other passwords for other sites and software. For example, if your social media account password is compromised, and you use the same password for your email, you could have a major security problem on your hands.


SecurityMetrics Security Analyst | MSCIS | CISSP | CISA | QSA



According to HIPAA requirement §164.312(a)(1), you’re required to have a role-based access control (RBAC) system, which grants access to PHI and systems to individuals and groups on a need-to-know basis. Configuring administrator and user accounts prevents exposing sensitive data to those who don’t have a need to know.

HIPAA requires a defined and up-to-date list of all roles with access to PHI. On this list, you should include each role, the definition of each role, access to data resources, current privilege level, and what privilege level is necessary for each person to perform normal responsibilities. Users must fit into one of the roles you outline.
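As an illustration of need-to-know access, this sketch maps roles to explicit permission sets and denies anything not granted. The role names and permission strings are hypothetical examples, not HIPAA-defined values:

```python
# Hypothetical role definitions: role -> set of permissions (need-to-know basis)
ROLES = {
    "receptionist": {"schedule:read", "schedule:write"},
    "staff_nurse":  {"schedule:read", "phi:read"},
    "provider":     {"schedule:read", "phi:read", "phi:write"},
}

def can(role: str, permission: str) -> bool:
    """Grant access only if the role explicitly includes the permission."""
    return permission in ROLES.get(role, set())

print(can("provider", "phi:write"))      # True
print(can("receptionist", "phi:read"))   # False: no need-to-know
```

The important property mirrors the requirement: an unknown role or an unlisted permission defaults to denial.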

User access isn’t limited to your normal office staff. It applies to anyone who needs access to your systems or the area behind the desk, like that IT professional you hired on the side to update your EHR/EMR software. You need to define and document what kind of user permissions they have.


  • Receptionist
  • Provider
  • Medical student
  • Staff nurse
  • Nursing manager
  • Third-party IT
  • Physician assistant
  • Night security
  • Specialist
  • Radiologist
  • Administrator
  • Dentist
  • Volunteer

Have a defined and up-to-date list of the roles with access to systems with PHI access.


Electronic systems access: Usernames are a great way to segment users by role. They also give you a way to track specific user activity. The first question to ask yourself is: does each staff member have a unique user ID? If not, that’s a great place to start.

Physical access: Make sure anyone not on your regular staff is escorted around the office by a staff member. For patients, don’t leave them unattended with logged-in equipment. For everyone else, document their name, the reason for being at your organization, where they’re from, and what they look like. If you haven’t worked with this person before, call the company and verify their name and physical description.



Remote access applications (e.g., GoToMyPC, LogMeIn, pcAnywhere, RemotePC) allow healthcare employees to work from home. Doctors often prefer to access patient data outside of the office, and some IT and billing teams use remote access to access the healthcare network offsite.

Remote access is great for workforce convenience, but it can cause security issues. Often, remote access isn’t implemented with adequate security controls, such as multi-factor authentication (e.g., a password plus an auto-generated SMS code).

Attackers target organizations that use remote access applications. These attacks are common because a vulnerable remote access application lets an attacker completely bypass your firewalls and gain direct access to office systems and patient data.

A remote access attack typically looks like the following:

  • Scan the Internet for vulnerable IP addresses
  • Run a password-cracking tool on each open port found
  • Upload malware
  • Copy PHI
  • Potentially use the compromised system to attack other computers or networks

HIPAA Security Rule §164.308(a)(4) and HIPAA Privacy Rule §164.508 require organizations to “develop and implement policies and procedures for authorizing ePHI access,” such as only allowing staff to have PHI access if they have proper authorization and need to access PHI.

The HHS further explains that organizations must “establish remote access roles specific to applications and business requirements. Different remote users may require different levels of access based on job function.” The HHS recommends that organizations using remote access should implement multi-factor authentication if employees access systems containing PHI.

If remote access application configuration only requires the user to enter a username and password, the application has been configured insecurely.


Remote access can be secure as long as it uses strong encryption and requires at least two independent methods of authentication. Be sure to enable strong/high encryption levels in your remote access configuration.

Configuring multi-factor authentication (also known as two-factor authentication) requires at least two of the following three factors:

  • Something the user knows (e.g., a username and password)
  • Something the user has (e.g., a cellphone, barcode, or RSA SecurID token)
  • Something the user is (e.g., a fingerprint, ocular scan, voiceprint, other biometric)

A few examples of effective multi-factor authentication include:

  • The remote user enters their username and password, and then must enter an authentication code that is given to them through an RSA token in their possession.

  • The remote user enters a password and biometric to log in to a smartphone or laptop. The individual then provides a single authentication factor (e.g., another password, digital certificate, signed challenge response) to connect to the corporate network.

Multi-factor authentication makes things difficult for attackers.

For example, if you implement a password and four-digit PIN sent through SMS to your phone, an attacker would have to learn your password and have your cell phone before being able to gain remote access to your systems.
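Many authenticator tokens and apps implement the time-based one-time password (TOTP) algorithm from RFC 6238 for the "something the user has" factor. This sketch generates the standard 6-digit code; the secret shown in the test is the RFC’s published test key, not a real credential:

```python
import base64
import hmac
import struct

def totp(secret_b32: str, timestamp: float, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based one-time password (HMAC-SHA-1, 30-second step)."""
    key = base64.b32decode(secret_b32)
    counter = int(timestamp) // step            # which 30-second window we're in
    msg = struct.pack(">Q", counter)            # counter as 8-byte big-endian
    digest = hmac.new(key, msg, "sha1").digest()
    offset = digest[-1] & 0x0F                  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)
```

The server and the user’s token compute the same code from a shared secret and the current time; stealing the password alone isn’t enough to log in.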



In this day and age, we want everything at our fingertips, including our work computers. Additionally, IT needs to be able to provide immediate support when issues arise with workstations or back-end servers. For these reasons, there are several tools available that give us or our IT staff instant remote access to computers anywhere in the world.

This access, while vital to providing efficient, quality care, opens the door to malicious individuals. Remote access services left open to the public and unsecured are quickly discovered by malicious groups, and within minutes these bad actors can use that single point of access to infiltrate an entire network.

For this reason, it is important to ensure all remote access is properly secured: use strong encryption, require multi-factor authentication, and restrict remote access to only the users and time-frames that need it.

By employing these controls, you will greatly improve your security posture and attackers will find you a much more difficult target.


SecurityMetrics Security Analyst | CISSP | CISA | PA-QSA | QSA



Event, audit, and access logging are all requirements for HIPAA compliance. HIPAA requires you to keep logs of each of your systems for a total of 6 years. These three HIPAA requirements apply to logging and log monitoring:

  • Log-in monitoring. Procedures for monitoring log-in attempts and reporting discrepancies.
  • Audit controls. Implement hardware, software, and procedural mechanisms that record and examine activity in information systems that contain or use PHI.
  • Information system activity review. Implement procedures to regularly review records of information system activity, such as audit logs, access reports, and security incident tracking reports.


System event logs are recorded tidbits of information about the actions taken on computer systems like firewalls, operating systems, office computers, EHR/EMR systems, and printers. The raw log files are also known as audit records, audit trails, or event logs.

Log monitoring systems oversee network activity, inspect system events, alert on suspicious activity, and store records of user actions that occur inside your systems. They’re like a watchtower, alerting you to risks and providing the data that tells you a breach has occurred.

Most systems and software generate logs, including operating systems, Internet browsers, workstations, anti-malware, firewalls, and IDS. Some systems with logging capabilities do not enable logging automatically, so make sure all systems have logging turned on. Others generate logs but don’t provide event log management. Know your systems’ capabilities and, if needed, install third-party log monitoring and management software.


From a security perspective, the purpose of a log alert is to act as a red flag when something bad is happening. Reviewing logs regularly helps identify malicious attacks on your system.

Organizations should review their logs daily to search for errors, anomalies, and suspicious activities. Then have a process in place to quickly respond to security anomalies.

Given the large amount of log data generated by systems, it’s impractical to manually review all logs each day. Log monitoring software takes care of this task by using rules to automate log review and only alert you about events that might reveal problems. Often this is done using real-time reporting software that alerts you via email or text when suspicious actions are detected.

Often, log monitoring software comes with default alerting templates to optimize monitoring and alerting functions. However, not everyone’s network and system designs are exactly the same, and it’s critical to take time to correctly configure your alerting rules during implementation.

Here are some event actions to consider when setting up your log management system rules:

  • Password changes
  • Unauthorized logins
  • Login failures
  • New login events
  • Malware detection
  • Malware attacks seen by IDS
  • Scans on your firewall’s open and closed ports
  • Denial of service attacks
  • Errors on network devices
  • File name changes
  • File integrity changes
  • Data exported
  • Running processes stopped
  • Shared access events
  • Disconnected events
  • New service installation
  • File auditing
  • New user accounts
  • Modified registry values
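A rules-based log monitor can be as simple as matching patterns against incoming lines. The patterns and labels below are hypothetical illustrations, not defaults from any particular product:

```python
import re

# Hypothetical alert rules: regex pattern -> alert label
ALERT_RULES = {
    r"authentication failure|failed password": "login-failure",
    r"new user account created": "new-account",
    r"malware detected": "malware",
}

def scan(lines):
    """Yield (alert_label, line) for each log line matching any rule."""
    for line in lines:
        for pattern, label in ALERT_RULES.items():
            if re.search(pattern, line, re.IGNORECASE):
                yield label, line
```

Real SIEM tools add correlation, thresholds, and alert delivery on top of this basic idea, which is why tuning the rules to your own environment matters so much.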

To take advantage of log management, look at your security strategy and make sure these steps are addressed:

  • Decide how and when to generate logs
  • Secure your stored logs so they aren’t maliciously altered by cybercriminals or accidentally altered by well-intentioned employees
  • Assign an employee you trust to review logs daily
  • Set up a team to review suspicious alerts
  • Spend time to create rules for alert generation (don’t just rely on a template)
  • Store logs for at least 6 years (or as long as legally required), with at least three months readily available
  • Frequently check log collection to identify necessary adjustments

Regular log monitoring means a quicker response time to security events and better security program effectiveness. Not only will log analysis and daily monitoring demonstrate your willingness to comply with HIPAA requirements, it will also help you defend against insider and outsider threats.


Audit logging and log monitoring is a commonly missed, but important step in every healthcare organization’s compliance effort. While it may appear that audit logging and log monitoring offer very little reward for the effort required, this is not true. Audit logging and log monitoring aren’t just for forensic purposes – they offer critical insight into ongoing attempts from attackers trying to penetrate your environment as well as suspicious activity from within.

Tools used for audit logging and log analysis are called security information and event management (SIEM) systems. Common items an effective SIEM will alert you to include virus and malware activity on workstations and servers, invalid remote access attempts, and suspicious user activity. These alerts can be indications of much larger problems and shouldn’t be ignored.

In years past, effective SIEM solutions were geared only toward large enterprise organizations. Today, SIEM solutions are available to fit organizations of all sizes and are much more affordable. Some are fully managed by the end user, and some are completely outsourced, requiring little effort to get set up and running.

Regardless of the complexity or size of the organization, effective audit logging and analysis are an important part of compliance. Remember that logs from all sources are important, including workstation and server operating systems, firewalls, network devices, applications and services, remote access software, anti-virus software, authentication systems, and intrusion detection/prevention systems.

SecurityMetrics Security Analyst | CISSP | CISA | PA-QSA | QSA





File integrity monitoring (FIM) software is a great companion to your malware prevention controls. New malware appears so frequently that you can’t rely on anti-virus software alone to protect your systems; it often takes months for a newly discovered malware signature to make it into the signature files that let anti-virus software detect it.

Configure FIM software to watch critical file directories, typically the areas of a computer's file system where critical files are located. When a file is changed, the FIM tool generates an alert that can be monitored.

Even if your anti-virus software cannot recognize the malware file's signature, FIM software will detect that files have been written to your computer and will alert you to check and make sure you know what those files are. If the change was known (like a system update), then you are fine. If not, chances are new malware has been added that could not be detected and can now be dealt with.

FIM can also be set up to check if web application code or files are modified by an attacker.

Here are examples of some places where FIM should be set up to monitor:

  • OS critical directories
  • Critical installed application directories
  • Web server and/or web application directories
  • User areas (if an employee-facing computer)
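A minimal sketch of the hashing approach FIM tools use, assuming a simple SHA-256 snapshot-and-compare cycle; real FIM products add scheduling, alerting, and tamper protection on top of this idea.

```python
import hashlib
import tempfile
from pathlib import Path

def snapshot(directory):
    """Map each file under `directory` to its SHA-256 digest."""
    return {
        str(p): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in Path(directory).rglob("*") if p.is_file()
    }

def diff(baseline, current):
    """Report files added, removed, or modified since the baseline snapshot."""
    return {
        "added": current.keys() - baseline.keys(),
        "removed": baseline.keys() - current.keys(),
        "modified": {p for p in baseline.keys() & current.keys()
                     if baseline[p] != current[p]},
    }

# Demo against a throwaway directory; point `snapshot` at real critical paths.
with tempfile.TemporaryDirectory() as d:
    target = Path(d) / "config.ini"
    target.write_text("setting=1")
    baseline = snapshot(d)
    target.write_text("setting=2")  # simulate tampering
    print(diff(baseline, snapshot(d))["modified"])
```

The same comparison, run on a schedule against the directories in the list above, is what produces the "known change vs. unknown change" alerts the text describes.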


One of the reasons data breaches are so prevalent is a lack of proactive, comprehensive security dedicated to monitoring system irregularities, such as intrusion detection systems (IDS) and intrusion prevention systems (IPS).

Using these systems can help identify a suspected attack and help you locate security holes in your network that attackers used. Without the knowledge derived from IDS logs, it can be very difficult to find system vulnerabilities and determine if patient data was accessed or stolen.

By setting up alerts on an IDS, you can be warned as soon as suspicious activity is identified and be able to significantly minimize compromise risk within your organization. You may even stop a breach in its tracks.

An IDS could help you detect a security breach as it’s happening in real time.

Also, forensic investigators (like the SecurityMetrics forensic team) can use information gleaned from client IDS tools, as well as all system audit logs, to investigate breaches.

Keep in mind that an IDS isn't preventive. Similar to a private investigator, an intrusion detection system doesn't interfere with what it observes. It simply follows the action, takes pictures, records conversations, and alerts its client.

For a more preventative measure, you might consider an intrusion prevention system, which also monitors networks for malicious activity, logs this information, and reports it, but can additionally prevent and block many of the intrusions it detects. Intrusion prevention systems can drop malicious packets, block traffic from the malicious source address, and reset connections.


In addition to these, you should have data loss prevention (DLP) software in place. DLP software watches outgoing data streams for sensitive or critical data formats that should not be sent through a firewall, and it blocks this data from leaving your system.

Make sure to properly implement it, so that your DLP knows where data is allowed to go, since if it’s too restrictive, it might block critical transmissions to third party organizations.
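As a simplified illustration of how DLP pattern matching works, the sketch below scans an outgoing message for sensitive data formats. The SSN and medical-record-number patterns are hypothetical and far cruder than production DLP detection.

```python
import re

# Illustrative patterns for data that should never leave the network unprotected.
# The MRN format is a hypothetical example; real DLP rules are tuned per organization.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "mrn": re.compile(r"\bMRN[- ]?\d{6,10}\b", re.IGNORECASE),
}

def blocked(message):
    """Return the names of sensitive patterns found in an outgoing message."""
    return [name for name, pat in PATTERNS.items() if pat.search(message)]

print(blocked("Patient SSN is 123-45-6789, MRN-0042731"))  # ['ssn', 'mrn']
print(blocked("Lunch meeting at noon"))                     # []
```

A real DLP deployment also needs an allow-list of approved destinations, which is exactly the tuning the paragraph above warns about.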



Here are the steps you should follow to correctly use an IDS:

  • First, purchase an IDS. There are a variety of different tools on the market and each tool will need to be carefully reviewed before a decision is made. I often ask my clients if they want a host intrusion detection system or network intrusion detection system. I advise that a combination of both should be used for any organization looking to take their security seriously. When choosing your IDS/IPS it’s best to get help from a security consultant and make sure that your security team is always involved.
  • Install it on the outside of your network to detect external attacks. Don’t just integrate your IDS to secure your EHR/EMR system. Using pivot attacks, cybercriminals can hack into unrelated and unprotected areas of your network and easily hop onto more secured areas of your network (like your EHR/EMR system) from there.
  • Tune your IDS. Initially, you'll find a large number of alerts, potentially numbering in the thousands out of the box, depending on your network. You need to remove outdated/deprecated rules and find out if your environment generates false positives. Things like replication mechanisms can be mistaken as attacks. You can also subscribe to rule sets from vendors like Proofpoint, which currently provides the popular "emerging threats" rule sets.
  • Don’t forget about internal attacks. Whether the threat is a fired workforce member who wants to get back at the organization or an attacker who plugs a malware-filled USB into an exam room computer after nonchalantly walking in the office, an internal IDS should be configured to detect internal activities to prevent an attack from the inside of your network.
  • Configure alerts. Configure your IDS to alert you as soon as suspicious activity occurs. Discuss what alerts should be configured with your security advisor, internal team, and vendor.
  • Form a task force. You need a team to manage your IDS. Whether it’s the responsibility of your data loss prevention team, IT team, or a mash up of security-related department heads, a group must be formed to take charge. Their activities could include identification of suspicious activity alerts, ensuring regularly scheduled IDS updates, incident response planning, and alert monitoring.
  • Constant alert monitoring. Many hospital IT departments may already have a network intrusion detection system in place, but they aren't regularly checking it. This is a huge mistake that can prevent a swift breach recovery. If you don't check your IDS, or alerts aren't being sent to you, you might as well not have an IDS.
  • Have an action plan. What happens when your IDS actually identifies an attack? You may also have an intrusion prevention system in place that may or may not be active and preventing illicit traffic. If not, your task force must form an action plan and follow your tested and approved incident response plan (e.g., how to identify the threat, which appropriate persons to notify, how to contain the threat).
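To make the alert-monitoring steps concrete, here is a hedged sketch that counts high-priority alerts per source address from Snort-style "fast" alert lines. The sample alerts are fabricated, and real triage involves far more context than a count.

```python
import re
from collections import Counter

# Fabricated Snort-style "fast" alert lines; a real deployment reads the alert log.
SAMPLE_ALERTS = [
    "01/10-03:12:44.123456 [**] [1:1000001:1] SCAN inbound probe [**] "
    "[Classification: Attempted Recon] [Priority: 2] {TCP} 203.0.113.7:4444 -> 10.0.0.5:445",
    "01/10-03:13:02.551210 [**] [1:1000002:1] Info event [**] "
    "[Classification: Misc activity] [Priority: 3] {UDP} 198.51.100.9:53 -> 10.0.0.5:5353",
]

ALERT = re.compile(r"\[Priority: (\d+)\].*?\{\w+\} ([\d.]+):\d+ ->")

def triage(lines, max_priority=2):
    """Count alerts at or above a severity cutoff (priority 1 is most severe) per source IP."""
    counts = Counter()
    for line in lines:
        m = ALERT.search(line)
        if m and int(m.group(1)) <= max_priority:
            counts[m.group(2)] += 1
    return counts

print(triage(SAMPLE_ALERTS))
```

A task force, as described above, would route counts like these into its incident response plan rather than a print statement.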


SecurityMetrics Security Analyst | CISSP | CISA | QSA | PA-QSA | CISM



Not only should you use security tools to monitor your systems in real time (e.g., logging), you also need to know your network environment and find weaknesses through tools like vulnerability scans.

Vulnerability scans assess computers, systems, and networks for security weaknesses (also known as vulnerabilities). These scans are typically automated and give an initial look at what could be exploited. Vulnerability scans can be initiated manually or on an automated basis. Scans typically take 1-3 hours to perform.

Vulnerability scans are a passive approach to vulnerability management because they don’t go beyond reporting on vulnerabilities that are detected. It’s up to the organization’s risk or IT staff to patch discovered weaknesses on a prioritized basis or confirm that a discovered vulnerability is a false positive (i.e., looks like a vulnerability but isn’t applicable to your environment), then re-run the scan until it passes.

Vulnerability scanning is considered by security experts to be one of the best ways to find potential vulnerabilities.

Pros:

  • Quick, high-level look at vulnerabilities
  • Very affordable compared to penetration testing
  • Automatic (can be automated to run weekly, monthly, and quarterly)

Cons:

  • False positives
  • Organizations must manually check each vulnerability before testing again
  • Does not confirm a vulnerability is possible to exploit


Because cybercriminals discover new ways to hack organizations daily, organizations are encouraged to regularly scan their systems. External vulnerability scans should be ongoing and/or completed at least quarterly to help locate vulnerabilities. You should also ensure an external vulnerability scan occurs when your system is changed or updated in any way.

After scan completion, the report will typically list the vulnerabilities found, with references for further research on each one. Some scanning services even offer directions on how to fix the problem. Remember that a vulnerability scan doesn't change your system or fix problems, so make sure you make any changes your system requires.

A vulnerability scan report reveals identified weaknesses, but reports sometimes include false positives. Sifting through real vulnerabilities and false positives can be a chore, but it's important to manually check each vulnerability to make sure you're not at risk.

Vulnerability scanning isn’t just about locating and reporting vulnerabilities. It’s also about establishing a repeatable and reliable process for fixing problems, based on risk and effort required.

Failing scan results that aren’t remediated render security precautions worthless.



Vulnerability scans can and should be run frequently (e.g., monthly or quarterly). These non-intrusive scans analyze all your internal and external ports for exploitable vulnerabilities.

Attackers constantly scan your systems looking for new vulnerabilities, so you should do the same. Any issues found should be remediated immediately and rescanned as quickly as possible.

Based on what I see when meeting with organizations, a high percentage of breaches could have been prevented through regular scanning and remediation.

Accountability must be part of your process, or IT fires will take priority over fixing potential vulnerabilities and remediation efforts will suffer. Upper management must be part of the escalation process if critical vulnerabilities are not addressed in a timely manner.


SecurityMetrics Security Analyst | CISSP | CISA | QSA | PA-QSA | CISM


Some people mistakenly think that vulnerability scanning is the same thing as a professional penetration test.

Here are the two biggest differences:

  • A vulnerability scan is automated, while a penetration test includes a live person actually digging into the complexities of your network and actively trying to exploit its vulnerabilities.
  • A vulnerability scan typically only identifies vulnerabilities at a high-level, while a penetration tester digs deeper to identify, then attempt to exploit vulnerabilities to get access to secure systems or stored sensitive data.

Vulnerability scans offer great weekly, monthly, or quarterly high level insight into your network security, while penetration tests are a more thorough way to deeply examine network security.

Do You Need a Penetration Test?

Find out Here



In addition to performing vulnerability scans, it’s strongly recommended that you perform penetration testing to identify vulnerabilities. Penetration testers analyze network environments, identify potential vulnerabilities, and try to exploit these vulnerabilities (or coding errors) just like a hacker would.

In simple terms, penetration testers ethically attempt to break into your organization’s network to find security holes.

Specifically, penetration testers will first run automated scans and then manually test these vulnerabilities. They can also test your employees, website, patient portal, and other Internet-facing networks and applications to see if there’s a way into your systems using common hacking tools or social engineering tactics. If found, the testers report these vulnerabilities to you with recommendations on how to better secure your systems and sensitive data.

Penetration testing is particularly helpful for organizations developing their own applications because it’s important to have code and system functions tested by an objective third party. This testing helps find vulnerabilities missed by developers.

Depending on your security needs, you may want to perform both an internal and external penetration test. An internal penetration test tests your systems from within your organizational network (i.e., from the perspective of someone inside your network). An external penetration test tests your network from the outside (i.e., from the perspective of a hacker over the Internet).

A penetration test is a thorough, live examination designed to exploit weaknesses in your system.

Typically, professional penetration test reports contain a long, detailed description of attacks used, testing methodologies, and suggestions for remediation. Make sure to take adequate time to address the penetration test report’s advice and fix the located vulnerabilities on a prioritized basis.

Pros:

  • Live, manual tests mean more accurate and thorough results
  • Rules out false positives

Cons:

  • Time (1 day to 3 weeks)
  • Cost (around $15,000 to $30,000)



You need to decide who will perform your penetration test (e.g. in-house or third party).

Penetration testers should be well versed in:

  • Black hat attack methodologies (e.g., remote access attacks, SQL injection)
  • Internal and external testing (i.e., perspective of someone within the network, perspective of a hacker over Internet)
  • Web front-end technologies (e.g., JavaScript, HTML)
  • Web application programming languages (e.g., Python, PHP)
  • Web APIs (e.g., REST, SOAP)
  • Network technologies (e.g., firewalls, IDS)
  • Networking protocols (e.g., TCP/UDP, SSL)
  • Operating systems (e.g., Linux, Windows)
  • Scripting languages (e.g., Python, Perl)
  • Testing tools (e.g., Nessus, Metasploit)
  • Segmentation testing

If you use an in-house penetration tester, they should use correct penetration testing methodologies (e.g., NIST 800-115, OWASP Testing Guide) when conducting your test. They also should be aware of prevalent vulnerabilities and threats in the industry, and design tests to check for these issues in your networks and applications accordingly.

If you hire a third party, make sure the penetration tester you select uses the correct methodology (e.g., good report structure, thorough testing) and that you act on the report they give you, addressing the issues they find.

Collect information for your penetration tester, such as: Have you experienced an exploit in the past 12 months (e.g., ransomware)? Did you make changes afterward? Share all of this information with your penetration tester so they can design tests to validate your changes.

Perform a penetration test at least yearly and after major network changes.

First, establish what your organization considers a major change. What might be a major change to a smaller organization is only a minor change for a large environment. For any organization size, if you bring in new hardware or start receiving patient data in a different way, this constitutes a major change.

Whenever major changes occur, you’ll want to perform a formal penetration test to see if that change added any new vulnerabilities, in addition to annual penetration tests.



The objective of a network penetration test is to identify security issues with the design, implementation, and maintenance of servers, workstations, and network services.

Commonly identified security issues include:

  • Misconfigured software, firewalls, and operating systems
  • Outdated software and operating systems
  • Insecure protocols


The objective of a segmentation check is to identify whether there’s access into a secure network because of a misconfigured firewall. Segmentation checks confirm whether segmentation was set up properly or not.

Commonly identified security issues include:

  • TCP access is allowed where it should not be
  • ICMP (ping) access is allowed where it should not be
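A basic TCP reachability probe, run from an out-of-scope segment against hosts that are supposed to be isolated, illustrates part of what a segmentation check does (a full check also covers UDP and ICMP). The host address and ports below are placeholders.

```python
import socket

def tcp_reachable(host, port, timeout=2.0):
    """Attempt a TCP connection; True means the segment boundary allows the traffic."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Run this FROM a host in an out-of-scope segment AGAINST hosts that should be
# isolated. The address and ports below are placeholders; substitute your own.
for host in ["127.0.0.1"]:
    for port in [22, 443, 1433, 3389]:
        if tcp_reachable(host, port, timeout=0.5):
            print(f"SEGMENTATION FAILURE: {host}:{port} is reachable")
```

Any port that answers from the wrong side of the boundary indicates a firewall rule that needs review.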


The objective of an application penetration test is to identify security issues resulting from insecure development practices in the design, coding, and publishing of the software.

Commonly identified security issues include:

  • Injection vulnerabilities (e.g., SQL injection, cross-site scripting (XSS), remote code execution)
  • Broken authentication (i.e., the log-in panel can be bypassed)
  • Broken authorization (i.e., low-level accounts can access high-level functionality)
  • Improper error handling
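To illustrate the first item, the sqlite3 sketch below contrasts a query built by string interpolation (injectable) with a parameterized query that treats input strictly as data.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE patients (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO patients VALUES (?, ?)", [(1, "Alice"), (2, "Bob")])

user_input = "1 OR 1=1"  # a classic injection payload

# Vulnerable: attacker-controlled input is pasted into the SQL text.
leaked = conn.execute(f"SELECT * FROM patients WHERE id = {user_input}").fetchall()
print(len(leaked))  # 2: the payload returns every row

# Safe: a parameterized query treats the input strictly as data.
safe = conn.execute("SELECT * FROM patients WHERE id = ?", (user_input,)).fetchall()
print(len(safe))  # 0: no id equals the literal string
```

An application penetration tester probes for exactly this difference, attempting payloads like the one above against every input the application accepts.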


The objective of a wireless penetration test is to identify misconfigurations of authorized wireless infrastructure and the presence of unauthorized access points.

Commonly identified security issues include:

  • Insecure wireless encryption standards
  • Weak encryption pass phrases
  • Unsupported wireless technology
  • Rogue/open access points


The objective of a social engineering assessment is to identify workforce members who don't properly authenticate individuals, follow processes, or validate potentially dangerous technologies. Any of these methods could allow an attacker to take advantage of staff and trick them into doing something they shouldn't.

Commonly identified issues include:

  • Employees clicking on malicious emails
  • Employees allowing unauthorized individuals onto the premises
  • Employees connecting a randomly discarded USB drive to their workstation



Many organizations don’t fully understand what a penetration test is, how it differs from vulnerability scanning, and what benefits it offers. A pen test will give you a holistic view of what your security system truly looks like. Companies and merchants with poor security practices across their environment leave themselves vulnerable. If a company has an immature network with un-patched systems, it’s likely that the desktop systems are probably in a similar state.

Network pen tests are a necessary part of a healthy security culture. And, don’t forget other types of pen tests like segmentation checks, application penetration tests and wireless penetration tests. It helps to think of your pen tests and vulnerability scans as a way to cover as much of your environment as possible. Diversify your tests and scans for a more robust security practice. Repeating tests is okay, but trying a new type of test will add even more security.


SecurityMetrics Security Analyst | CISSP | CISA | QSA | PA-QSA | CISM




Without proper care and upkeep of data security programs, organizations can easily go the way of recent data breach victims. Last year, healthcare organizations accounted for 29.2% of reported data breaches.

The HIPAA Breach Notification Rule (45 CFR §164.400-414) requires HIPAA covered entities and their business associates to provide notification following a breach of unsecured patient data.

If you're a covered entity, your notification statements must be sent to affected patients by first-class mail (or email, if the affected individuals agreed to receive notices electronically) as soon as reasonably possible, and no later than 60 days after breach discovery.

If contact information for 10 or more affected individuals is out of date or insufficient, or if the breach affects more than 500 residents of a state or jurisdiction, post the statement on your website for at least 90 days and/or provide notice in major print or broadcast media in affected areas.

Covered entities also need to notify the Secretary of HHS about the breach. If a breach affects fewer than 500 individuals, the covered entity may notify the Secretary of such breaches on an annual basis. But if a breach affects 500 or more individuals, covered entities must notify the Secretary of HHS within 60 days of discovering the breach (if not immediately).

If you're a business associate, notify affected covered entities immediately after discovering a data breach (and no later than 60 days after discovery). Identify each individual affected by the breach and send this information to all affected covered entities.
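The deadline arithmetic above can be sketched as follows. This is a simplified model of the rule's outer limits, not legal guidance; notify sooner whenever possible.

```python
from datetime import date, timedelta

# Simplified model of the Breach Notification Rule's outer time limit.
NOTIFICATION_WINDOW = timedelta(days=60)

def notification_deadline(discovered):
    """Latest date to send individual notifications after breach discovery."""
    return discovered + NOTIFICATION_WINDOW

def requires_media_notice(affected_residents):
    """Breaches affecting more than 500 residents of a state/jurisdiction need media notice."""
    return affected_residents > 500

print(notification_deadline(date(2024, 3, 1)))  # 2024-04-30
print(requires_media_notice(1200))              # True
```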

Covered entities are just as liable if their business associate is found to be in breach of HIPAA requirements.



Unfortunately, every organization will experience system attacks, and some of these attacks will succeed.

If your organization is breached, you may be liable for several of the following fines, losses, and costs:

  • HHS settlements: Up to $1.5 million/violation/year
  • State attorneys general: $150,000 – $7 million
  • Business associate changes: $5,000+
  • Lawyer fees: $5,000+
  • Technology repairs: $2,000+
  • Class action lawsuits: $1,000/record
  • Breach notification costs: $1,000+
  • Ongoing credit monitoring for affected patients: $10-$30/record
  • ID theft monitoring: $10-$30/record


To help minimize the impact of a data breach, establish a well-executed incident response plan, which can reduce fines, decrease negative press, and help you get back to normal operations more quickly.

If you’re properly following HIPAA requirements, you should already have an incident response plan prepared and your employees should be trained to quickly deal with a data breach. However, most organizations–large and small–don’t have their incident response plan and associated training adequately established.

Without an incident response plan, employees scramble to figure out what they’re supposed to do, and this is when mistakes can occur.



An incident response plan should be set up to address a suspected data breach in a series of phases. The incident response phases are:

  • Phase 1. Prepare
  • Phase 2. Identify
  • Phase 3. Contain
  • Phase 4. Eradicate
  • Phase 5. Recover
  • Phase 6. Review



Preparation often takes the most effort in your incident response planning, but it’s by far the most crucial phase to protect your organization. This phase includes the following steps:

  • Ensure your employees receive proper training regarding their incident response roles and responsibilities.
  • Develop and regularly conduct tabletop exercises (i.e., incident response drill scenarios) to evaluate your incident response plan.
  • Ensure that all aspects of your incident response plan (e.g., training, hardware and software resources) are approved and funded in advance.


Identification (or detection) is the process that determines whether you’ve actually been breached by looking for deviations from normal operations and activities. This is why technologies, preparation, and proper security are so important; otherwise, you may not know where your baseline is.

Organizations normally learn they’ve been breached in a few ways:

  1. The breach is discovered internally (e.g., review of intrusion detection system logs, alerting systems, system anomalies, anti-virus scan malware alerts).
  2. Law enforcement discovers the breach while investigating the sale of patient data.
  3. A complaint is filed with HHS, which investigates and discovers evidence of a breach.

It’s important to discover a data breach quickly, identify where it’s coming from, and pinpoint what it has affected.
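Detecting "deviations from normal operations" presupposes a baseline. As a toy example, a z-score check against historical activity counts; the counts below are hypothetical, and real detection systems use much richer baselines.

```python
from statistics import mean, stdev

def is_anomalous(history, today, z_threshold=3.0):
    """Flag a count that deviates more than z_threshold standard deviations from baseline."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today != mu
    return abs(today - mu) / sigma > z_threshold

# Hypothetical daily counts of after-hours record accesses over two weeks.
baseline = [4, 6, 5, 7, 5, 4, 6, 5, 6, 4, 5, 7, 6, 5]
print(is_anomalous(baseline, 6))   # False: an ordinary day
print(is_anomalous(baseline, 40))  # True: investigate
```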


When a healthcare organization becomes aware of a possible breach, it’s understandable to want to fix issues immediately.

However, without taking the proper steps and involving the right people, you could inadvertently destroy valuable forensic data. Forensic investigators use this data to determine how and when the breach occurred, as well as to devise a plan to prevent future attacks.

When you discover a breach, remember:

  • Don’t panic.
  • Don’t make hasty decisions.
  • Don’t wipe and re-install your systems (yet).
  • Contact your forensic investigator to help you contain the breach.

Steps to consider during containment and documentation: 

  • Stop the leakage of sensitive data as soon as possible
  • Unplug affected systems from the network, rebuild clean new systems, and keep the old systems offline. This is the best option if possible, since it allows a forensic investigator to evaluate untouched systems. It is easier to do in virtual server environments but can be costly otherwise.
  • If system replacement is not possible, the next main task is documentation: preserve as much information as possible for forensic analysis. If you know how to take a complete image of your system, do so. If you know where the malicious files are, copy that directory to a backup. Resort to screenshots or phone videos of system behavior as a last resort before taking any action that changes the systems.
  • Call in a professional forensic investigator to help learn about the breach. In some industries, this may be a required step–for example, when credit card data is stolen–but it’s always recommended to get forensic analysts involved, so you can develop better future processes.


After containing the incident, you need to find and modify policies, procedures, or technology that led to the breach.

Malware should be securely removed, systems should again be hardened and patched, and updates should be applied. Whether you do this internally or get help from a third party, make sure your eradication actions are thorough.

Your incident response plan needs to be put in motion immediately after learning about a suspected data breach.


Recovering from a data breach is the process of restoring and returning affected systems and devices back into your environment. During this time, it’s important to get your systems and organizational operations up and running again with confidence that your network will withstand the next cyberattack.

After a breach’s cause has been identified and eradicated, ensure all systems have been tested before you reintroduce the previously compromised systems into your production environment.


After your forensic investigation has concluded, meet with all incident response team members to discuss what everyone learned from the data breach, and review the events in preparation for a future attack.

This is when you’ll analyze everything about the breach. Afterwards, revise your incident response plan by determining what worked well and what failed.


Developing an incident response plan will help your organization handle a data breach quickly and efficiently while minimizing possible damage. This section will help you create your own incident response plan.


Start off by identifying and documenting where your organization keeps its crucial data assets (which should also be included in your risk analysis). You should assess what data would cause your organization to suffer heavy losses if it was stolen or damaged.

After identifying critical assets, prioritize them based on importance and highest risk, quantifying your asset values. This will help justify your security budget and show management what needs to be protected and why it’s essential to do so.
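A common way to quantify asset values is a rough annual loss expectancy: asset value times the likelihood of loss per year. The assets and figures below are purely illustrative.

```python
# Hypothetical figures: annualized risk = asset value x likelihood of loss per year.
assets = [
    {"name": "EHR database",       "value": 500_000, "loss_likelihood": 0.10},
    {"name": "billing system",     "value": 200_000, "loss_likelihood": 0.15},
    {"name": "guest Wi-Fi portal", "value": 20_000,  "loss_likelihood": 0.40},
]

def annualized_risk(asset):
    """Rough annual loss expectancy, used to rank where the security budget goes first."""
    return asset["value"] * asset["loss_likelihood"]

for asset in sorted(assets, key=annualized_risk, reverse=True):
    print(f'{asset["name"]}: ${annualized_risk(asset):,.0f}/year')
```

Ranked figures like these are exactly the kind of evidence that helps justify a security budget to management.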


Determine what risks and attacks are the greatest current threats against your systems. Your risk analysis should contain this information. Keep in mind that these will be different for every organization.

For organizations that process data online, improper coding could be their biggest risk. For healthcare organizations that offer Wi-Fi to their customers, their biggest risk may be Internet access. Other organizations may place a higher focus on ensuring physical security, while others may focus on securing their remote access applications.

Here are examples of a few possible risks:

  • External or removable media: Executed from removable media (e.g., flash drive, CD)
  • Attrition: Employs brute force methods (e.g., DDoS, password cracking)
  • Web: Executed from a site or web-based app (e.g., drive-by download)
  • Email security: Executed via email message or attachment (e.g., malware, Ransomware)
  • Impersonation: Replacement of something benign with something malicious (e.g., SQL injection attacks, rogue wireless access points)
  • Loss or theft: Loss or theft of computing device or media (e.g., laptop, smartphone)


If you don’t have established procedures to follow, a panicked employee may make detrimental security errors that could damage your organization.

Your data breach policies and procedures should include:

  • A baseline of normal activity to help identify breaches
  • How to identify and contain a breach
  • How to record information on the breach
  • Notification and communications plan
  • Defense approach
  • Employee training

Over time, you’ll need to adjust your policies according to your organization’s needs. Some organizations might require a more robust notification and communications plan, while others might need help from outside resources.

In any case, all organizations need to focus on employee training (e.g., your security policies and procedures).


Organize an incident response team that coordinates your organization’s actions after discovering a data breach. Your team’s goal should be to coordinate resources during a security incident to minimize impact and restore operations as quickly as possible.

Some of the necessary team roles are:

  • Team leader
  • Lead investigator
  • Communications leader
  • C-suite representative
  • IT director
  • Public relations
  • Documentation and timeline leader
  • Human resources
  • Legal representative
  • Breach response experts

Make sure your response team covers all aspects of your organization and that they understand their particular roles in the incident response plan. Each team member will bring a unique perspective to the table with a specific responsibility to manage the crisis.


Your incident response team won’t be effective without proper support and resources to follow your plan.

Security is not a bottom-up process. Management at the highest level (e.g., CEO, VP, CTO) must understand that security policies–especially your incident response plan–must be implemented from the top and be pushed down. This is true for organizations of any size, from dentist offices to multi-winged hospitals.

For larger organizations, executives need to be on board with your incident response plan. For smaller organizations, management needs to be ready for additional funding and resources dedicated to incident response.

When presenting your incident response plan, focus on how your plan will protect your patients’ data and benefit your organization.

The more effectively you present your goals, the easier it will be to obtain necessary funding to create, practice, and execute your incident response plan.


Just having an incident response plan isn’t enough. Employees need to be properly trained on your incident response plan and know what they’re expected to do in a data breach’s aftermath.

The regular routine of work makes it easy for employees to forget crucial security lessons and best practices.

Employees also need to understand their role in maintaining organizational security. To help them understand their responsibilities, regularly train employees on how to identify attacks, such as phishing emails, spear phishing, and social engineering attacks.

Test your employees through tabletop exercises (i.e., simulated, real-world situations led by a facilitator). Tabletop exercises play a vital role in your staff’s preparation for a data breach.

These exercises help familiarize your employees with their particular incident response roles by testing them through a potential hacking scenario.

After testing your employees, you can identify and address weaknesses in the incident response plan and help your staff see where they can improve, with no actual risk to your organization’s assets.


An incident response plan is only useful if it’s properly established and followed by employees. To help staff, regularly test their reactions through real-life simulations, also known as tabletop exercises. Tabletop exercises allow employees to learn about and practice their incident response roles when nothing is at stake, which can help you and your staff discover gaps in your incident response plan (e.g., communication issues).




When it comes to the HIPAA Privacy Rule, healthcare organizations often think they have everything covered. For the most part, this is true. You likely have your privacy practices posted throughout your workplace, and there are usually limited instances where employees leak PHI to the public (such as in football star Jason Pierre-Paul’s case).

However, if an organization or individual knowingly obtains or discloses PHI in violation of the HIPAA Privacy Rule, the penalty can be a fine of up to $50,000 and up to one year in prison. If the HIPAA Privacy Rule is violated under false pretenses, the penalties increase to a $100,000 fine and up to five years in prison.

For example, here are some common HIPAA Privacy Rule violations:

  • Employees post PHI on social media: Employees should never post patient photos or any patient information on any social media platform (e.g., Twitter, Facebook, LinkedIn).
  • Employees illegally access PHI: Employees should not be able to access PHI that they don’t need for patient care (e.g., accessing celebrity PHI).

With all the financial consequences and prevalence of HIPAA violations, you need to make sure you have adequate HIPAA Privacy Rule policies and procedures in place and that all relevant staff are trained and following your policies and procedures.

The Privacy Rule addresses appropriate PHI use and disclosure practices for healthcare organizations, as well as defines the right for individuals to understand, access, and regulate how their medical information is used.

The HIPAA Privacy Rule:

  • Clarifies and supports patient rights
  • Spells out administrative responsibilities
  • Discusses the need for and implementation of privacy policies and procedures
  • Details permissible uses and disclosures of patient data
  • Discusses written agreements between covered entities and business associates
  • Describes covered entity responsibilities to train workforce members and implement requirements regarding their use and disclosure of PHI



In healthcare, there are two basic types of patient records: designated records and legal health records. While these two record sets are fairly similar and often contain identical information, there are slight differences that you’ll need to gather from and for patients.


Designated records are medical and/or billing records that are maintained by or for a covered entity. These records are often used in part or in whole to make patient care decisions.

Designated record sets are:

  • Supportive of an individual’s right of access
  • Any PHI stored in any collected medium
  • Directly used in documenting healthcare status
  • Often housed in multiple systems and/or networks
  • Generally broader and more encompassing than legal health records

Designated record sets should also include information about: amendments, restrictions, and authorized access to patient data.


Legal health records are the official business and legal record for an organization, and they contain information about services provided by a healthcare provider to a patient.

Legal health records can and often do include similar PHI as the designated record set, though legal health records are used for different purposes. Specifically, legal health records are used to document and defend an organization’s care decisions.

Legal health records are often used for the following additional purposes:

  • Assist and inform an organization’s internal business decisions (e.g., administrative decisions, employee training)
  • Support decisions that were made in a patient’s care
  • Support revenue sought for third-party payers
  • Legally document the services and treatment provided to patients (e.g., patient’s condition, caregiver’s decisions)



Before sharing patient data, make sure you have thorough policies and procedures established on how you are allowed to use and disclose patient data. For example, you are required to disclose PHI in the following instances: when individuals (or their personal representatives) request the information, or when HHS undertakes a compliance investigation or review.

You are allowed (though not required) to use and disclose PHI without an individual’s authorization under the following situations:

  • PHI is disclosed to the patient
  • PHI is used for treatment, payment, or healthcare operations
  • PHI is incidentally used and disclosed (e.g., lobby communication with patients during emergency situations)
  • PHI is used or disclosed for the 12 national priority purposes:
  1. Required by law
  2. Public health activities
  3. Victims of abuse, neglect, or domestic violence
  4. Health oversight activities
  5. Judicial and administrative proceedings
  6. Law enforcement purposes
  7. Decedents
  8. Cadaveric organ, eye, or tissue donation
  9. Research
  10. Serious threat to health or safety
  11. Essential government functions
  12. Workers’ compensation

However, there are several exceptions to this rule. For example, organizations can use or disclose patient data for research purposes without patient authorization, if organizations follow approved research procedures.

Also, you typically must receive patient authorization to use and disclose PHI for marketing purposes, unless it fits within HIPAA-allowed use and disclosure exceptions.


Types of disclosures that require patient authorization are:

  • Psychotherapy notes (unless for treatment, payment, or healthcare operations)
  • Marketing (except for face-to-face communications)
  • Sale of PHI
  • Any other type of disclosure that is not incidental, for treatment, payment, healthcare operations, public health activities, to the secretary of the HHS, or required by law

Although an individual can authorize release of PHI for any reason, organizations should not establish normal business practices that require an individual’s authorization. Organizations may not require a patient to sign authorizations as a condition of:

  • Treatment (unless the treatment is creating PHI required by an authorized third party, such as court requests)
  • Payment
  • Enrollment in a health plan (unless the request is made prior to enrollment and is for underwriting or risk determination, not use of psychotherapy notes)
  • Eligibility of benefits

Individuals can revoke authorizations in writing at any time. However, if a covered entity has already released information based on the original authorization, the revocation wouldn’t apply.

Also, if the original authorization was obtained as a condition of gaining insurance, revocation wouldn’t be possible because the insurer has a right to use this information to contest a claim or the policy.

An authorization to release PHI must contain the following information:

  • A description identifying the specific information to be used
  • Employees’ names (or job roles) authorized to make the disclosure
  • Individual’s name or organization to whom the disclosure is being made
  • A description of each purpose for the disclosure
  • An expiration date or event relating to the individual or the purpose of the disclosure
  • An individual’s signature and signature date


If you use and/or disclose patient data for marketing purposes, you need to gain patient authorization. HIPAA defines marketing as “communication about a product or service that encourages recipients of the communication to purchase or use the product or service.”

There are a few exceptions to this rule:

  • Refill reminders or information about a prescribed drug
  • Treatment by healthcare providers (e.g., alternative treatments, therapies, providers, settings of care)
  • Health-related product or service provided by or part of a benefits plan (e.g., participating in a health plan or network, enhancing current health plan or network)
  • Case management or care coordination

If financial payment is received from a third party for making the communication, then patients need to give authorization before you contact or market to them (with the exception of refill reminders, where the payment may only cover the cost of making the communication).

If a third party is involved with financial payment, your authorization must say so.


Patients must be notified about your intent to use PHI in directory information, and they must be given an opportunity to object to being part of the directory.

Notification should happen at the first encounter and should also appear in your Notice of Privacy Practices (NPP). Include what information will be kept and to whom it can be disclosed.

Example directory information:

  • Name
  • Location within the facility
  • Condition, described in general terms that don’t relay specific medical information
  • Religious affiliation

In emergency circumstances, the opportunity for patients to object can be bypassed, but only if it follows and is consistent with a previously expressed permission or is in the patient’s best interest (which is determined by their healthcare provider).

Directory information can be disclosed to clergy members or other individuals who ask for the patient by name.


Types of uses and disclosures that don’t require an opportunity to agree or object:

  • Uses and disclosures required by law
  • Public health activities
  • About victims of abuse or neglect (or any crime)
  • Health oversight activities
  • Judicial and administrative proceedings (e.g., court orders, subpoenas, discovery requests, other lawful orders)
  • Law enforcement purposes (e.g., as required by law, court order, warrant, subpoena/grand jury subpoena, administrative request)
  • Correctional institutions
  • National security and intelligence activities
  • To avert a serious threat to health or safety
  • Government programs providing public benefits
  • Workers’ compensation
  • Identification or location purposes (though limited amount of information applies)
  • Decedents (e.g., coroners, medical examiners, funeral home directors, organ donating purposes)
  • Research purposes


Unlike other purposes for patient data usage, patient data can be used or disclosed without patient authorization if it’s for research purposes.

However, if you do disclose patient data without authorization, you must follow Institutional Review Board (IRB) or Privacy Board waiver conditions, which govern how research committees are established and how research may be performed.

With 16 different regulatory codes defining proper IRB establishment, compliance with IRB standards can be tricky. But if you follow the research basics, you should be fine.

First, make sure that your IRB has at least five members from a variety of professional backgrounds, which allows for adequate review of the research activities. Specifically, at least one member’s primary concern should be in scientific areas, another’s in non-scientific areas, and another member should not be affiliated with the organization (nor be a family member of a person connected with the organization).

Board members should be knowledgeable about:

  • Institutional commitments
  • Regulations
  • Applicable law
  • Standards of professional conduct and practice

To meet the waiver requirements for authorization, follow all IRB requirements. For example, you need to document the identity of the IRB and the date the waiver was approved, include a brief description of the PHI that is necessary, and include a statement that the waiver meets the requirements in this section.

The waiver must also establish that the use or disclosure of PHI involves no more than minimal risk to individuals, based on:

  • A plan to protect identifiers from improper use or disclosure
  • A plan to destroy identifiers (unless there’s a health, research, or legal justification for data retention)
  • Written assurances that PHI will not be re-used or disclosed, unless required by law

In addition, the waiver must establish that the research couldn’t feasibly be conducted without the waiver, and couldn’t feasibly be conducted without access to and use of PHI.

Your waiver should also include a statement that the waiver has been approved under normal or expedited procedures, including the signature of the IRB chair (or a chair-designated member).

Before starting research, the researcher must represent, either orally or in writing, that the PHI will be used only to prepare a research protocol and that no PHI will be removed from the covered entity during the review.

If the research involves deceased individuals, the researcher must represent, orally or in writing, that the PHI is sought solely for research on decedents and is necessary for that research. Covered entities can ask researchers to document the deaths of the individuals being studied.

PHI should be part of a limited data set with a proper data use agreement set in place. However, PHI can also be disclosed for research purposes with patient authorization.


Organizations aren’t allowed to use or disclose patient data outside of what is permitted or required. However, there are also specific instances where you are not allowed to use or disclose patient data.

First, you are not allowed to sell patient data, unless complying with requirement §164.508(a)(4). Sale of patient data means PHI disclosure by a covered entity or business associate, where they directly or indirectly receive compensation from or on behalf of whoever received the PHI.

Selling PHI does not include disclosure when used under the following example circumstances:

  • For public health purposes
  • For research purposes (where compensation is a reasonable fee that covers the cost of PHI preparation and transmission)
  • For treatment and payment purposes
  • For the sale, transfer, merger, or consolidation of all or part of a covered entity
  • To or by a business associate for services undertaken on behalf of a covered entity
  • To an individual
  • When required by law

Next, you aren’t allowed to use or disclose genetic information for underwriting purposes (regarding a health plan) unless this information will help determine:

  • Benefits, coverage, or deductible changes (e.g., deductible changes by completing a health risk assessment)
  • Premium or contribution amounts (e.g., discounts for participating in wellness program activities)
  • Application of pre-existing condition exclusion
  • Other activities related to the creation, renewal, or replacement of health insurance or benefits


Using or disclosing patient data for fundraising purposes requires patient notification and allowing them an opportunity to object. Your notification must be included in your NPP.

All communication must provide individuals with an opportunity to object, with objections not causing individuals undue burden or cost.

Covered entities may not condition treatment or payment on the decision to agree or object to the communications. An individual’s decision to object must be honored; though, you are allowed to let individuals opt back in to fundraising.

A covered entity can use or disclose the following information to a business associate (or similar organization) to raise funds for its own benefit:

  • Patient demographic information (e.g., name, address, contact info, age, gender, birth date)
  • Dates of healthcare provided
  • Department of service
  • Treating physician
  • Outcome information
  • Health insurance status

When using or disclosing PHI for fundraising purposes, individuals must be allowed an opportunity to object.



A large part of the Privacy Rule discusses minimum necessary requirements, which state that only those who need to access PHI to do their jobs should be able to access PHI; unless you have a specific need for the information, access must be restricted. For example, a receptionist likely doesn’t need to see a patient’s X-rays to do their job.


The HHS states “if a hospital employee is allowed to have routine, unimpeded access to patients’ medical records, where such access is not necessary for the hospital employee to do his job, the hospital is not applying the minimum necessary standard.”

It’s a covered entity’s responsibility to limit who within an organization has access to each specific part or component of PHI. The easiest way to take charge of the data is by creating individual user accounts on a network. Ideally, each user account in a network, EHR/EMR system, or computer system would be given certain privileges based on the user’s job and roles.

For example, a doctor’s privilege would allow them access to all PHI in their patient database because they need it to do their job, while an IT administrator would have restricted access to PHI because they’re not involved with patient care.
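As a rough illustration of role-based privileges, access checks can be expressed as a mapping from job roles to the data categories each role needs. The role names and data categories below are hypothetical, not taken from any particular EHR system:

```python
# Minimal sketch of role-based access to PHI. Role names and data
# categories are illustrative assumptions, not a real EHR schema.
ROLE_PERMISSIONS = {
    "physician": {"demographics", "clinical_notes", "lab_results"},
    "receptionist": {"demographics"},   # scheduling/contact info only
    "it_admin": set(),                  # maintains systems, not patient care
}

def can_access(role: str, data_category: str) -> bool:
    """Return True only if the role's job function requires the category."""
    return data_category in ROLE_PERMISSIONS.get(role, set())

print(can_access("physician", "lab_results"))     # True
print(can_access("receptionist", "lab_results"))  # False
```

A real system would enforce these checks inside the EHR/EMR or directory service itself, but the principle is the same: grant each account only the data categories its role requires, and deny everything else by default.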

The minimum necessary requirement also applies to the information shared externally with third parties and subcontractors.

Business associates often think their covered entity holds the sole responsibility of deciding how much data they receive. This is not the case. Both business associates and covered entities have a minimum necessary responsibility under HIPAA requirements.

Business associates should only accept and use the minimum amount of data necessary.

That means you can be fined by the HHS for misapplying (or completely disregarding) the minimum necessary requirement. For example, if you receive or demand more data than is necessary from covered entities, you could be fined for ignoring the rules.

To avoid these issues, covered entities and business associates should assess their responsibilities concerning minimum necessary data accordingly:

  • Covered entity responsibility: Determine what data is the minimum necessary to send and then only send that data and nothing else.
  • Business associate responsibility: Only accept and use the minimum necessary data, but if you receive too much data, let your CE know what data you need and can accept.

On the other hand, minimum necessary doesn’t apply in the following circumstances:

  • Disclosure to or request(s) made by a healthcare provider for treatment
  • Uses or disclosures made to the patient
  • Uses or disclosures that a patient has authorized
  • Disclosures made to the Secretary of HHS
  • Uses or disclosures required by law
  • Uses or disclosures required by other HIPAA regulations

By limiting PHI access to the smallest number of individuals possible, the likelihood of a breach or HIPAA violation decreases significantly.



The minimum necessary requirement is a key part of the HIPAA Privacy Rule. Its goal isn’t to encourage organizations to do the bare minimum, but rather for entities to use and disclose only the minimum amount of PHI necessary. Essentially, if you don’t need to use it or share it – don’t.

When I think of this requirement I envision having a secret recipe for soda. I’m never going to tell anyone that doesn’t absolutely need to know the recipe. If I do have to share it, I’m only going to share the parts I absolutely must – nothing more. And if I’m making it myself, I’m only going to get out the parts of the recipe I need so I don’t risk exposing more than I must. These principles are at the heart of almost every business and carry into many aspects of everyday life.

I recently had the opportunity to speak with a laboratory that designs and creates dental implants. While discussing what PHI they need to perform their function it became obvious that they did not need any PHI. I asked them what they collect and was shown a form requesting very basic information, none of which was PHI. They proceeded to show me several prescriptions from different offices, many of which included full names, photos with full names and many other personal details of the patient. These dentists were divulging their secret recipes to people who did not need it.

This experience highlighted, to me, the need to provide only the minimum necessary information to another organization. These same principles should be applied within the organization as well. Do the front desk staff require full access to patient histories? Does PHI need to be placed on an office-wide file share? Do you need to collect all the PHI you do? These are important questions every organization must ask themselves, then act on.


SecurityMetrics Security Analyst | CISSP | CISA | PA-QSA | QSA



If you need to use patient data for research, public health, and/or healthcare operations (e.g., comparative effectiveness studies, policies, assessments), make sure you properly de-identify PHI. Specifically, you need to make sure to remove all information that could identify an individual, such as the 18 PHI identifiers, which are:

  • Names
  • Geographic information (e.g., address, city, county, zip code, precinct)
  • Dates related to an individual (e.g., birth date, admission date, discharge date, death date, all ages over 89)
  • Phone number
  • Fax number
  • Email
  • Social security number (SSN)
  • Medical record number
  • Health plan beneficiary number
  • Account numbers
  • Certificate/license numbers
  • VIN and license plate numbers
  • Device IDs and serial numbers
  • URLs
  • IP address
  • Biometric identifiers
  • Full face photos and comparable images
  • Any other unique number, characteristic, or code

Once PHI has been adequately de-identified, it’s no longer protected by the Privacy Rule. This means that you can disclose this information to anyone without authorization. 

When using or disclosing de-identified PHI (or limited data sets), don’t share codes or other data that can be used to identify a patient.

Codes and other data used to re-identify de-identified PHI are themselves considered PHI if disclosed. However, such a code is not considered PHI if it is not derived from information about the patient and cannot be used to identify the patient without the re-identification mechanism (which must not be disclosed).
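As a minimal sketch of the idea (not a complete safe harbor implementation), de-identification amounts to stripping every identifier field from a record before release. The field names here are hypothetical; a real implementation must account for all 18 identifier categories listed above, including identifiers embedded in free text:

```python
# Hypothetical flat patient record. Real PHI lives in structured EHR
# fields and free-text notes, both of which must be covered.
IDENTIFIER_FIELDS = {
    "name", "street_address", "phone", "fax", "email", "ssn",
    "medical_record_number", "birth_date", "ip_address", "photo_url",
}

def deidentify(record: dict) -> dict:
    """Drop every field that appears in the identifier set."""
    return {k: v for k, v in record.items() if k not in IDENTIFIER_FIELDS}

record = {
    "name": "Jane Doe",
    "ssn": "000-00-0000",
    "birth_date": "1950-01-01",
    "diagnosis_code": "J45.909",
    "state": "UT",
}
print(deidentify(record))  # {'diagnosis_code': 'J45.909', 'state': 'UT'}
```

Note that this sketch only removes whole fields; values that indirectly identify a patient (e.g., ages over 89, zip codes in sparsely populated areas) also need handling before data counts as de-identified.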

You can also use a limited data set without patient authorization for the following purposes: healthcare operations, research, and public health. A limited data set is similar to de-identified data, except that a limited data set may still include the following information:

  • Geographic subdivisions smaller than a state
  • Elements of date except for year (e.g., birth date, admission date, death date, discharge date)
  • Ages over 89 and dates indicative of such age
  • Other unique identifying numbers, characteristics, or codes

If you disclose the limited data set outside of your organization, make sure to have a data use agreement in place with the organization receiving this data. Your data use agreement must include:

  • Permissible uses and disclosures
  • Authorized parties/organizations
  • Duty to safeguard PHI
  • Duty to report security incidents/impermissible disclosures
  • Agreement to not identify or contact the individuals referred to in the data

If this outside organization is one of your business associates, then your business associate agreement can be used as a data use agreement.


In addition to knowing how you can use and disclose data, make sure your organization implements a data retention policy. Start by deciding how long data needs to be kept and when it should be deleted. Specifically, you need to determine how long data needs to be stored for regulatory purposes.

HIPAA retention requirements recommend that you keep the data for at least 7 years; though individual states may require longer retention, typically at least 10 years.

If you decide to keep data for longer than 7 years (or however long your state requires), you need to protect PHI for 50 years after the patient has died. Due to these regulations, organizations often choose to destroy and/or delete data after 7 years (or according to their state regulations).
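As a sketch of how a retention policy might be operationalized, assuming the 7-year retention period and 50-year post-death protection window described above (substitute the periods your state and legal counsel actually require):

```python
from datetime import date

RETENTION_YEARS = 7       # assumed retention period; varies by state
POST_DEATH_YEARS = 50     # PHI remains protected 50 years after death

def add_years(d: date, years: int) -> date:
    # Shift Feb 29 to Feb 28 when the target year isn't a leap year.
    try:
        return d.replace(year=d.year + years)
    except ValueError:
        return d.replace(year=d.year + years, day=28)

def earliest_destruction_date(record_created: date) -> date:
    """First date the record may be destroyed under the retention policy."""
    return add_years(record_created, RETENTION_YEARS)

def protection_ends(date_of_death: date) -> date:
    """Date on which a decedent's PHI stops being protected."""
    return add_years(date_of_death, POST_DEATH_YEARS)

print(earliest_destruction_date(date(2020, 3, 1)))  # 2027-03-01
print(protection_ends(date(2020, 3, 1)))            # 2070-03-01
```

A calculation like this is only the mechanical part of a retention policy; the policy itself must also document who approves destruction and how destruction is performed and logged.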

As previously mentioned, permanently destroying electronic data may require a few different techniques, depending on how you want it done and whether you want to reuse the media on which the data is stored. Here are a few techniques to securely delete your data:


Overwriting replaces stored data with a sequence of 1’s; other methods write different binary sequences, often in multiple passes, to ensure all the data has been overwritten. Recoverable traces may still remain on the media, however, so this method may not be the most secure.
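As an illustrative sketch only (a single overwrite pass on an ordinary file; SSD wear leveling, journaling filesystems, and backups can all retain copies, so this is not a certified secure-erase method):

```python
import os

def overwrite_and_delete(path: str) -> None:
    """Overwrite a file in place with 1-bits, flush to disk, then delete."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        f.write(b"\xff" * size)   # every byte becomes 11111111
        f.flush()
        os.fsync(f.fileno())      # force the overwrite onto the device
    os.remove(path)

# Usage sketch with a throwaway file:
with open("phi_export.tmp", "wb") as f:
    f.write(b"sensitive data")
overwrite_and_delete("phi_export.tmp")
print(os.path.exists("phi_export.tmp"))  # False
```

Production sanitization should rely on purpose-built tooling and documented procedures rather than an ad hoc script like this one.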


This method is useful if you have magnetic tapes or hard drives. Degaussing uses a powerful magnet to erase data on magnetic media. It is particularly helpful if you want to reuse tapes; note, however, that degaussing a modern hard drive typically renders it unusable.


This is one of the most secure methods to permanently delete data. If you don’t plan to use the media again, it’s highly recommended you physically destroy it. You can go to organizations that have industrial-sized shredders to dispose of larger hardware.

Some types of media require physical destruction for secure data deletion. Solid state drives (SSD) and optical media like DVDs and CDs generally must be destroyed physically.


Compliance with the Privacy Rule might seem easy for healthcare organizations, but HIPAA Privacy Rule requirements cover various policies and procedures that may take up an entire shelf or filing cabinet, if not more.

To maintain HIPAA compliance, regularly update your policy and procedure documentation and ensure employees receive proper training.

However, policies and procedures aren’t just paperwork. They outline in writing what you promise to do to protect your patients’ privacy and medical data. In addition to having written policies, make sure that your policies and procedures are frequently updated and stored in a place where they can be easily disseminated to your staff.

Though there are numerous HIPAA Privacy Rule policies, make sure to include the following policies:

  • Notice of Privacy Practices
  • Accounting of disclosures
  • Amending patient records
  • Patient complaints
  • Business associate agreement


Most healthcare professionals are familiar with NPPs as part of HIPAA. Most patients have seen them, and most covered entities have them in place and know what they’re used for. But the most common errors involve failing to update how the organization handles a patient’s refusal to acknowledge receipt of privacy practices, and failing to keep all foreign language versions (e.g., Spanish NPPs) up to date.

NPPs are legal documents and are commonly created by organizations other than the entities themselves. NPPs are usually provided to healthcare organizations by insurance companies, malpractice attorneys, or sometimes a healthcare association. While there is nothing wrong with having NPPs supplied by external parties, they do need to accurately reflect your privacy practices and be updated when legal changes occur.

An example of why you need to regularly update your NPPs would be the change to requirements for uses of PHI for marketing purposes that the Omnibus Rule introduced in 2013. Some NPPs created before 2013 have marketing disclosure practices that would now be a violation of the new requirements.

All NPPs need to be displayed in a prominent location at your organization where a patient would encounter them. If you own a website, it must be published there as well. NPPs must be provided to the patient at their first encounter and an attempt to have the patient sign an acknowledgment of receipt form must be made.

A patient is not required to sign the acknowledgment form or waive any right under the Privacy Rule. If a patient refuses to sign, they cannot be denied any service or receive any retaliation as a result of their refusal to sign. When a patient refuses to sign, documentation should show that an attempt was made and the reason it was not accepted.

NPPs must contain how your organization intends to use and disclose PHI, what the individual’s information rights are, and how the individual can exercise their rights, including how to file a complaint to your organization or the Secretary of the HHS. NPPs should include what your legal duties are regarding this information, including a statement that your organization is legally required to protect the privacy of the information. NPPs must also contain contact information (e.g., phone number) for your Privacy Officer.


Patients can request an accounting of your disclosures of their PHI made in the last 6 years. They can receive one free accounting in a 12-month period, but after this request, you can charge patients a fee based on the cost of time and material used to provide this accounting.

You need to provide this information within 60 days of the request, unless you receive a 30-day extension by providing patients a written statement explaining your reasons for the delay and when they will receive your disclosure information.

Your accounting of disclosures must include the following information:

  • Date of disclosure
  • Frequency or number of disclosures made
  • Name and address of entity or person who received the PHI
  • Description of the PHI disclosed
  • Statement describing the purpose of the disclosure

If PHI disclosures were made for research purposes (involving data from more than 50 individuals), make sure to include the following information:

  • Name of research activity
  • Description of the research’s purpose and criteria used for selecting records
  • Description of the PHI disclosed
  • Date of disclosure period
  • Name, address, and phone number of organization that sponsored the research
  • Statement that an individual’s PHI may or may not have been disclosed for a particular protocol or other research activity

However, covered entities don’t have to provide an accounting of disclosures when the disclosure:

  • Did not require specific notification, authorization, or an opportunity to object (e.g., treatment, payment, healthcare operations)
  • Was made to the patient
  • Was made to business associates
  • Was formally authorized by the patient


Though patient records cannot have information removed, patients can request to make amendments to their healthcare records, which offers further explanation, clarification, or revision of health information.

If patients request an amendment, the covered entity should have patients fill out forms that include the following:

  • What dates need to be amended
  • What information is incorrect or incomplete
  • What is the reason for requesting the amendment
  • What should the information be amended to contain/look like
  • Authorization for the covered entity to notify the individuals or entities that the patient wants informed of the amendment

Covered entities have 60 days from receiving a patient’s request to take action, unless they obtain a 30-day extension by providing written notice to the individual detailing the reason for the delay and the date by which action will be taken. At a patient’s request, covered entities might also be contacted by other covered entities to amend patient records.

Whether or not you amend patient records upon request, inform the patient about your decision in a timely manner.

A covered entity can deny a patient’s request to amend health information for several reasons. For instance, covered entities can deny a request if the record: is not part of the Designated Record Set, was not created by the covered entity, is not part of their access rights under HIPAA requirement §164.524, or is reviewed and determined to be accurate and complete.

If you deny the request to amend a patient’s record, you must inform the patient that their amendment was denied and why you denied their request.

If covered entities approve a request to amend a patient’s record, they need to place the amendment in the record or reference a link to it. Make sure to notify: the patient, anyone the patient requests be notified, and anyone who would need to know the information to ensure the patient is not negatively affected.

Patients also need to know that they can submit a written statement that they disagree with the denial and have this statement included in their record. If they choose not to submit a statement of disagreement, their request for amendment and subsequent denial will still be included in their record.

In addition to this, make sure to also inform patients on how to file a complaint.


Patients (and any person on behalf of a patient) have the right to file a complaint if they believe their rights and information have been violated or breached in any way. They can choose to file a complaint with the covered entity directly and/or with the Secretary of the HHS.


If a patient wants to file a complaint with the covered entity, the covered entity must have the following in place:

  • A notice in their NPP about a patient’s rights to file a complaint
  • Contact information for the person with whom to file a complaint
  • Information on how to file a complaint when requested by a patient

While the covered entity has no obligation to investigate complaints, let alone within a specific time frame, it’s in your best interest to do so, both to avoid a complaint with the HHS and to maintain patient satisfaction and trust. Covered entities must also document all complaints received and their responses to them.

Covered entities are not allowed to intimidate or retaliate against a patient that files a complaint with either the covered entity or the HHS.

When complaints are filed with the covered entity, patients are not bound by any specific time frame.


If patients file a complaint with HHS/OCR, their complaints must be filed in writing within 180 days of the violation, or of when the patient reasonably should have known about the violation. In this complaint, patients must include the name of the complaint’s subject and a description of the violation. Patients should use the online OCR complaint tool.

The HHS will conduct a preliminary investigation of all complaints. Once a complaint is determined to be valid, the HHS will conduct a further investigation, which might lead to an audit (e.g., desk audit, onsite audit).




After the 2013 HIPAA Final Omnibus Rule, HIPAA compliance for both covered entities and business associates has become an even more important priority. The HIPAA Final Omnibus Rule requires covered entities to implement or update a business associate agreement when the business associate creates, receives, maintains, or transmits electronic patient information.

In these new or revised BAAs, covered entities, business associates, and subcontractors agree to share responsibility for patient data protection and breach notification. Here are a few examples of what should be included in your business associate agreement:

  • A minimum necessary policy: Business associates should not use more data than necessary.
  • Business associate’s permitted use of PHI: PHI should only be used to perform services for a covered entity, unless assurances of confidentiality are obtained or required by law.
  • Prohibited use of PHI: The business associate may not use PHI in any way that is not expressly permitted or that is expressly prohibited.
  • Covered entity’s responsibility: Covered entities should give business associates their current NPPs.
  • Appropriate safeguards to protect PHI: Establish and clarify security practices to best secure PHI.
  • Breach reporting: Business associates need to notify affected covered entities immediately after discovering a data breach.
  • Termination provisions: Conditions for termination and policies on how PHI should be protected, returned, and/or destroyed upon termination of contract.

Additionally, the HHS has made it clear that covered entities must obtain satisfactory assurance that each business associate safeguards patient data it receives or creates on behalf of a covered entity.

Covered entities must ensure their business associate complies with the terms of their BAA.

Whether compromised from within your system or a business associate’s system, your organization can be liable for up to $50,000 per violation per day as a result of any breach of your patient data. And that’s just HHS penalties. This doesn’t include civil action, cost of mitigation, and loss of patient trust that may come as the result of a data breach.

With these consequences in mind, remember that you should only share data with your business associates on a need-to-know basis. Regularly validate that they’re handling your patients’ PHI in a HIPAA compliant manner. This should keep your liability to a minimum.

Next, covered entities should do all they can to reduce risks by implementing a business associate compliance program. Such a program should gauge your liability by documenting what business associates do with your PHI; you can then help them work toward compliance.


Your business associate plan should evaluate all existing business associates’ security practices in order to help you address the riskiest vendors first. Then, risk and compliance managers should design, implement, and monitor a mass risk evaluation of business associate networks.

A plan that starts with the highest risk business associates and tracks related progress will help you prove your effort to address business associate compliance if the HHS decides to audit your organization.

After determining which business associates you use, make sure you have an adequate BAA in place with every business associate. Then you should identify all parties (e.g., business associates, subcontractors) that still need to comply with your BAA.

Next, ask your business associates for proof that they’ve completed a risk analysis and are up to date with their risk management plan. If they aren’t making HIPAA compliance efforts, either recommend a trusted source to help them or stop using their services.

Patient data is too valuable to deal with business associates that choose to ignore compliance and security best practices.

Next, classify business associates according to their use of patient data. Determine how much liability each business associate holds by asking a set of risk-evaluating questions, such as:

  • Is a business associate’s internal system connected to the Internet? If yes, are those external IPs scanned for vulnerabilities? Are internal or external penetration tests performed?
  • How does a business associate obtain PHI from you and what data is received?
  • What is the quantity of data received?
  • How is data stored, protected, backed up, and destroyed by a business associate?

After this quick risk snapshot, you will be able to clearly categorize individual risk levels and determine which business associates put your patient data at the highest risk. Based on the risk ranking from this preliminary risk analysis, you can decide how to customize compliance measures to help with business associate HIPAA compliance.
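As a rough illustration of how answers to questions like those above can feed a preliminary risk ranking, here is a minimal Python sketch. The question fields, point weights, vendor names, and thresholds are all hypothetical, not drawn from HIPAA or from this guide.

```python
# Hypothetical sketch: score business associates from answers to
# risk-evaluation questions. Weights and thresholds are illustrative only.

def risk_score(answers: dict) -> int:
    """Sum weighted points for each risky answer."""
    score = 0
    if answers.get("internet_connected") and not answers.get("external_scans"):
        score += 3  # internet-facing and unscanned is the riskiest combination
    if not answers.get("pen_tests"):
        score += 2
    if answers.get("phi_volume", "low") == "high":
        score += 3
    if not answers.get("encrypted_at_rest"):
        score += 2
    return score

def risk_level(score: int) -> str:
    """Map a numeric score to a coarse risk category."""
    if score >= 6:
        return "high"
    if score >= 3:
        return "medium"
    return "low"

vendors = {
    "billing-service": {"internet_connected": True, "external_scans": False,
                        "pen_tests": False, "phi_volume": "high",
                        "encrypted_at_rest": True},
    "shredding-vendor": {"internet_connected": False, "external_scans": False,
                         "pen_tests": False, "phi_volume": "low",
                         "encrypted_at_rest": True},
}

# Address the riskiest vendors first, as the plan above recommends.
ranked = sorted(vendors, key=lambda v: risk_score(vendors[v]), reverse=True)
for name in ranked:
    print(name, risk_level(risk_score(vendors[name])))
```

The exact scoring scheme matters less than having one: a repeatable, documented rubric is what lets you defend your prioritization if the HHS asks how you ranked your business associates.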

Remember that HIPAA regulations require you to take action if you know or believe a business associate is not HIPAA compliant (e.g., stop sending data to said business associate).

If a covered entity terminates a business associate contract, the business associate needs to follow the termination clause.

Basically, a business associate needs to make sure that any PHI they have received, created, or maintained is:

  • Returned to the covered entity
  • Protected by adequate safeguards and security
  • Not used or disclosed
  • Permanently deleted



Every covered entity that uses business associates is required to obtain assurances that those business associates treat patient data the way the covered entity and the HHS require. Whether you choose to personally audit each business associate or require documented data security procedures, take the initiative to secure the future of your organization and the safety of patient data.

As your business associates progress toward compliance, track their success to ensure an approved level of compliance. As the riskiest business associates reach compliance, begin to reach out to medium-risk business associates and start this process with them. Don’t forget to reevaluate every business associate’s plan and associated vulnerabilities each year.

Remember, sharing data with a business associate can lead to a large breach of your patient data. However, most people I speak with tell me, “I have BAAs in place, so I don’t need to worry. And even if they do end up getting breached, we have airtight agreements removing our liability.”

However, it’s not just about who’s the responsible party. When patient data is lost or stolen, your patients (and even your organization) could experience serious repercussions. Losing community trust can be devastating for your organization.


SecurityMetrics Security Analyst | MSCIS | CISSP | CISA | QSA

Need Security Consulting?

Request a Quote



If documentation is done correctly, it can create a baseline security standard for every process, workforce member, and system at your organization.

Without a recorded comparison of last year’s security plan, your future efforts become much more difficult.

Here are three reasons to keep proper documentation:

  1. Your future: If you document your HIPAA compliance efforts this year, you’re making next year’s job that much easier. This turns into less overall stress for you and your team because updating existing documentation is much easier than starting from scratch.
  2. Your legacy: If you change your job or role, documentation will give your successor a great view into the environment.
  3. The HHS: If the HHS comes knocking, proper documentation will show your compliance efforts. If you can prove how you’re working toward HIPAA compliance in your documentation, they will likely be more lenient. Remember to make sure you’re actually implementing the policies you document. If you haven’t implemented anything in your documentation, this is a major detriment to you, your PHI, and your organization.

A large part of your HIPAA compliance process should be spent on documentation.


Many organizations are confused about what exactly they should document and how they should document it. Generally speaking, you should record the who, what, when, where, how, and why of everything related to PHI in your environment. Documentation should demonstrate in writing where you are today, where you’ve progressed over the years, and what your plan is for the future.

Your documentation should answer questions like:

  • What is your security stance in general?
  • What are your risks and vulnerabilities?
  • How secure are your workstations?
  • Do your workforce members understand how to safeguard PHI?
  • What is the state of your location’s physical security?
  • How does BYOD factor into your security strategy?
  • What have you learned during your HIPAA compliance process?
  • Who are the responsible parties?
  • How are systems configured?
  • What are your authorization and approval processes?

To answer these broad questions, dive into the detailed answers of more specific and technical questions, such as:

  • Who holds your encryption keys, and how do you secure them? Where are encryption keys stored? What are those key holders’ responsibilities and role-based access levels?
  • Who has access to your firewalls? How are your firewalls configured? Which systems do your firewalls surround? Are your firewalls up to date? Do you have a change control process?
  • Do you use FTP or SFTP? How is it configured? Do you have vendor documentation for SFTP?
  • What are the roles and responsibilities of those that impact your PHI environment’s security? Do you have this information detailed for daily, weekly, monthly, quarterly, and yearly tasks (where applicable)?


HIPAA documentation requirements go far beyond policies and procedures. If you’re looking for ideas on what you should document at your organization, here’s a sample list to get you started:

  • HIPAA risk management plan
  • HIPAA risk analysis
  • PHI location documentation (e.g., a PHI flow diagram)
  • Notice of Privacy Practices (NPPs)
  • How you’ve eliminated third-party risks
  • Software development life cycles
  • Business associate agreements (BAA) and/or enforceable consent agreements (ECA)
  • How your environment is coping with identified vulnerabilities
  • Incident response plan (IRP)/breach response plan
  • Current/future goals and milestones
  • Explanation of unimplemented addressable implementation standards
  • Work desk procedures
  • Training logs
  • Complaint processes and procedures
  • List of authorized wireless access points
  • Inventory of all devices including physical location, serial numbers, and make/model
  • Electronic commerce agreements
  • Trading partner security requirements
  • Lists of vendors
  • Lists of employees and their access to systems
  • Diagram of your physical office, including exit locations
  • Disaster recovery book
  • Employee handbook
  • Policies and procedures for the Security, Breach Notification, and Privacy Rules


The biggest disservice you could do while meeting HIPAA documentation requirements is to spend weeks gathering paperwork, and then place it on a shelf until next year.

HIPAA documentation is only as useful as it is accurate.

Just like all of your other weekly activities, documentation should be an ongoing part of your entire business-as-usual security strategy. Try to examine and adjust at least one piece of documentation each week or as you make organizational updates. Don’t pile it into one day or one month at the end of the year.

Have a HIPAA Deadline?

Request a Quote


If you don’t give your workforce members specific rules and train them on those rules, they won’t be able to keep PHI secure. Or if employees are trained only once, they might forget policies.

Workforce member training and education will remind them that both privacy and security are important, and it will show them how to stop bad security behaviors.

HIPAA workforce member training also keeps workforce members aware of the most up-to-date security policies and practices. Threats to the healthcare industry are constantly changing, which means security practices should follow suit. If workforce members are only trained once, they probably won’t be able to keep up to date with your constantly changing security best practices and certainly won’t keep up with threats.

Workforce members are considered the weakest link in PHI security and HIPAA compliance by most security professionals.

You should train your staff regularly (e.g., monthly). Training doesn’t have to be lengthy and detailed. You can break training up into small, simple monthly sessions (e.g., 20-minute presentations), making procedures easier to remember and implement. For example, consider having specific training on the following topics:

  • Social media compliance
  • Password management
  • Acceptable uses and disclosures
  • Social engineering
  • Phishing emails
  • Physical security (e.g., workstations, active and passive medical devices)
  • Disposal of data, media, and equipment

Specifically, social media use has become even more prevalent. If employees irresponsibly use social media, their actions can easily lead to serious HIPAA violations. Make sure staff understand the consequences of not following your HIPAA policies.

For example, you can share the story of a nurse at Michigan’s Oakwood Hospital who wrote a Facebook post about a patient accused of killing a police officer. Although the nurse didn’t use the patient’s name or Social Security number, this was still a breach of the HIPAA Privacy Rule.


Implement a continuous training approach by working data security best practices into the messages that go to workforce members.

During new hire training, train new employees on HIPAA compliance and security best practices. Make security training part of the employee newsletter. Send regular emails that run through real-life compliance and security scenarios. Put security tips on bulletin boards.

Your educational campaigns should also remind your staff that HIPAA compliance doesn’t just happen within the walls of your organization. Hackers can steal information on the subway or by eavesdropping on a phone call at the grocery store. Even sharing too much information on social media can lead to a cybersecurity attack.

As you set up your training plan, consider the following tips:

  • Provide training as a mandatory part of new hire orientation
  • Require monthly training with all staff members or develop a weekly educational program (annual training isn’t enough)
  • Keep a repository of policies and procedures (keep these updated and inform staff of updates)
  • Develop a verification process to ensure training completion
  • Document dates and times when workforce members complete their training
  • Evaluate your training program effectiveness each quarter
  • Reduce costs by making training part of your comprehensive educational program
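The verification and documentation tips above can be sketched in code. Here is a minimal, hypothetical Python example of recording training completion dates and flagging workforce members who are overdue for monthly training; the member names, dates, and 31-day interval are illustrative, not requirements.

```python
# Hypothetical sketch: document training completion dates and flag
# workforce members overdue for monthly training.
from datetime import date

TRAINING_INTERVAL_DAYS = 31  # "monthly" cadence from the tips above

# Last documented training completion per workforce member (illustrative).
completions = {
    "a.nurse": date(2024, 5, 2),
    "b.tech": date(2024, 1, 15),
}

def overdue(as_of: date) -> list:
    """Members whose last documented training is older than the interval."""
    return sorted(
        member for member, last in completions.items()
        if (as_of - last).days > TRAINING_INTERVAL_DAYS
    )

print(overdue(date(2024, 5, 20)))
```

Even a simple log like this satisfies two goals at once: it drives the monthly training cadence, and it produces the dated completion records an auditor will ask to see.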

In addition to your training plan, make sure you have and follow appropriate sanctions for workforce members that do not comply with your policies and procedures.

The regular routine of work makes it easy for employees to forget crucial security information learned during trainings.



If you think your workforce members know how to secure patient data and what they’re required to do, you’re sadly mistaken. In fact, most HIPAA breaches originate from healthcare workforce members. Although most healthcare workers aren’t malicious, they often either forget security best practices, don’t know exactly what they’re required to do, or make mistakes that stem from their tendency to help others.

Unfortunately, many hackers will take advantage of human error to gain access to sensitive data. For example, thieves can only steal laptops if workforce members leave them in plain sight and unattended. Hackers often access networks because workforce members set up easy-to-guess passwords. Improper disposal only happens if staff decide to throw PHI away instead of shredding it. And the list goes on.

To help protect sensitive data, employees need to be given specific rules and regular training to know how to protect PHI. Regular training (e.g., brief monthly trainings) will remind them of the importance of security, especially keeping them up to date with current security policies and practices. Here are some tips to help get employees prepared:

  • Set monthly training meetings: Focus each month on a different aspect of data security, such as passwords, social engineering, and email phishing.
  • Give frequent reminders: Security reminders can be sent out in an email, newsletter, during stand-up meetings, and/or HIPAA security webinars that include tips for employees.
  • Train employees on new policies ASAP: Newly hired employees should be trained on security and HIPAA policies as quickly as possible.
  • Make training materials easily available: Intranet sites are a great way to provide access to training and policy information.
  • Create incentives: Reward your employees for being proactive in HIPAA compliance.
  • Regularly test employees: Create an environment where employees aren’t afraid to report suspicious behavior.
  • Leverage technology: Whenever possible, technical security controls should be put in place to provide a safety net in case training fails.


SecurityMetrics Security Analyst | MSCIS | CISSP | CISA | QSA


First, remember that HIPAA auditors are not your enemy; they want to help you make your organization more secure for your workforce members and your patients. But if you aren’t prepared, a government-mandated audit can quickly become a nightmare for you.


A HIPAA audit isn’t necessarily the result of a whistleblower or a possible HIPAA violation. It’s mainly for the OCR to assess and gain an understanding of how healthcare providers are doing in HIPAA compliance, and to see if any changes need to be made.

There are a few reasons why your organization may be audited. Here are the primary audit triggers:

  • Complaints: A customer, or even an employee can file a complaint with the HHS, which may lead to an audit.
  • Self-reported breach: If you’ve been breached, you have a much higher chance of being audited.
  • At random: The OCR conducts random audits on organizations to see how healthcare entities are doing with HIPAA compliance.

All covered entities and their business associates are eligible for a HIPAA audit.


When organizations undergo an OCR audit or investigation, OCR auditors often review documented policies and procedures, interview staff, and observe whether procedures are actually taking place. If all three of these factors don’t match requirements exactly, organizations may be issued hefty fines, tiered by level of culpability:

(A) Did not know
(B) Reasonable cause
(C)(i) Willful neglect - Corrected
(C)(ii) Willful neglect - Not corrected

If you’re like most healthcare organizations, you already have organizational policies in place. But they probably haven’t been reviewed or updated in years. Or perhaps you do have policies, but they haven’t been properly documented.


The OCR will do desk and onsite audits. These audits will look at compliance with specific requirements of the Security, Breach Notification, and/or Privacy Rules.

For the desk audit, selected entities will be sent an email, asking for documents and other data. Once you’ve submitted this information, be prepared for an onsite audit.

The onsite audits will involve someone going to your organization and examining how your organization is complying with HIPAA requirements. These audits will examine a broad scope of requirements from the HIPAA rules and will be much more comprehensive than a desk audit.

Auditees will then receive audit reports, and they can respond to any findings that were discovered in the audits. They will then receive a final report, which will describe how the audit was conducted, discuss any findings from the audit, and contain entity responses to the findings. This report should be provided 30 days after the auditee’s response.




This is probably one of the most important things to prepare for your audit. Having the proper documentation ready will make your audit go much faster and help you avoid costly penalties, which is why documentation has been mentioned so much throughout this guide.

HIPAA documentation isn’t something you can create overnight. Here are the top 5 pieces of documentation auditors look for:


Workforce members are likely the weakest link in your organization, so you should devote more time to training. And this training should all be written down. Training helps workforce members remember important security practices that keep PHI secure.

  • Regular training for all employees
  • Verification process to ensure training completion
  • High quality content presented during training
  • Training completion dates for each staff member
  • Evaluation process for training program effectiveness


Policies and procedures aren’t just paperwork. They outline what you promise to do to protect your patients’ medical data.

  • Privacy Rule policies (e.g., use and disclosure of PHI, NPPs)
  • Security Rule policies (e.g., password requirements, encryption, physical security)
  • Breach Notification policies (e.g., incident response plan, breach processes)
  • Frequent updates to policies and procedures
  • Where policies are stored and how they are disseminated to staff


Covered entities and business associates agree to share responsibility for patient data protection, but it’s still the primary responsibility of the covered entity to ensure PHI protection.

  • Recently signed agreements for all business associate relationships
  • Agreements updated to include Omnibus language
  • Satisfactory assurance that each business associate safeguards patient data
  • Business associate risk evaluation
  • Annual re-evaluation of contracts


A HIPAA risk analysis identifies potential security threats that put your patients’ data and your organization at risk.

  • Lists of employees with their PHI access levels
  • Network diagrams and flow diagrams of PHI in your environment
  • Lists of systems with PHI access (e.g., servers, workstations, laptops, active and passive medical devices)
  • Identified vulnerabilities, threats, and risks to patient data
  • Prioritized risks based on likelihood of occurrence and potential impact
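The last item above, prioritizing risks by likelihood of occurrence and potential impact, is a simple calculation. Here is a minimal Python sketch; the 1-5 scales, the multiplicative scoring, and the example risks are hypothetical conventions, not part of the HIPAA rules.

```python
# Hypothetical sketch: prioritize identified risks by likelihood and impact.
# The 1-5 scales and the example risks are illustrative only.

risks = [
    {"name": "Unencrypted laptops", "likelihood": 4, "impact": 5},
    {"name": "Weak workstation passwords", "likelihood": 5, "impact": 3},
    {"name": "Server room tailgating", "likelihood": 2, "impact": 4},
]

for r in risks:
    r["score"] = r["likelihood"] * r["impact"]  # simple multiplicative rating

# Highest-scoring risks go to the top of the risk management plan.
prioritized = sorted(risks, key=lambda r: r["score"], reverse=True)
print([r["name"] for r in prioritized])
```

The resulting ordered list is exactly what feeds the risk management plan described next: each risk, its assigned level, and the order in which you intend to address them.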


A HIPAA risk management plan is simply your outlined strategy for mitigating risks found in your risk analysis.

  • Organizational HIPAA goals
  • Each vulnerability and assigned risk level
  • HIPAA security control to-dos
  • Dates completed, and which employee completed it
  • Notes that address unimplemented guidelines


Conducting audits within your organization can help you find resolvable problems in your security before your audit. It’s best to do internal audits periodically to find new issues that may appear.

Organizations should engage a third-party security expert to help with conducting a proper security assessment. A security assessor will have experience in HIPAA (and many other security mandates) and will be able to see your organization from an external view (which is what malicious attackers are doing).

If you have time, conducting an internal audit is a good idea to find and resolve any problems before your onsite audit.


To comply with HIPAA requirements, you’ll need to spend money. The cost to meet these requirements depends entirely on your organization. Here are a few variables that will factor into the cost of your overall compliance:

  • Your organization type: Are you a hospital, business associate, electronic health information exchange (HIE), healthcare clearinghouse, or another type of healthcare provider? Each organization type will have varying amounts of PHI and varying risk levels.
  • Your organization size: Typically, the larger an organization is, the more vulnerabilities it has. More workforce members, more programs, more processes, more computers, more PHI, and more departments mean higher HIPAA costs.
  • Your organization’s culture: If data security is a top priority for upper management, increasing security costs probably isn’t a major internal struggle. In other cases, management is very hesitant to allocate budget to HIPAA because they don’t understand their organization’s security liabilities.
  • Your organization’s environment: There are many system aspects that can affect HIPAA compliance costs, such as the type of your medical devices, the brand(s) of your computers, the types of your firewalls, and the model(s) of your backend servers.
  • Your organization’s dedicated HIPAA workforce: Even with a dedicated HIPAA team, organizations usually require outside assistance and consulting to help them meet HIPAA requirements.

Having the proper security budget protects not just your organization but your patients as well.

The following are estimated HIPAA budgets:

  • Training and policy development; Risk Analysis and Risk Management Plan: varies based on where an organization stands in compliance and security
  • Vulnerability scanning; training and policy development; penetration testing; Risk Analysis and Risk Management Plan; onsite audit: $70K+, depending on your organization’s current environment

Keep in mind, this is far cheaper than paying for a data breach, which can easily cost anywhere from $180,000 to $8.3 million and above.



If you’re having problems communicating budgetary needs to management, conduct a risk analysis before starting the HIPAA process. NIST 800-30 is a good risk assessment protocol to follow.

At the end of this assessment, you’ll have an idea of the probability of a compromise, how much money might be lost if compromised, and the impact a breach might have on your organization.

Find a way to show how much a lack of security will cost your organization. For example, ask yourself, “If someone gains access through a designated system, this is how much it will affect our patients, hurt our ability to provide quality care, and cost our organization.”

Consider asking your accounting or marketing teams for help in delivering your budgetary needs in more bottom-line terms.

If possible, work with your HIPAA team to come up with the following information: security controls that need to be implemented, cost estimates, and how critical your team feels each control might be to your organization’s security.


SecurityMetrics Security Analyst | MSCIS | CISSP | CISA | QSA




Unless someone from management is tasked with overseeing the HIPAA efforts at your organization, HIPAA compliance won’t happen. In the SecurityMetrics HIPAA Security Rule Report, our data showed that the C-suite often believes it is 10% or even 20% more compliant with most HIPAA policies than the individuals who handle HIPAA tasks (i.e., IT, Compliance, and Risk Officers) believe it is.

Often, C-Suite members expect their staff to be fully compliant with HIPAA standards, but the IT, Compliance, and Risk workforce are not given adequate resources to implement security best practices.

For example, IT may not have the budget to implement adequate security. Some may try to look for free software to fill in security gaps, but this process can be expensive due to the time it takes to implement and manage. In some instances, we have seen that an IT department wanted their third-party auditor to purposely fail their compliance evaluations so they could prove that they needed a higher security budget. Obviously, it would have been better to focus on security from the top-down beforehand.

In some cases, these individuals do not have enough expertise to fully address specific aspects of HIPAA compliance (e.g., external vulnerability scans). This usually forces those in charge of HIPAA compliance to cut corners in their security measures or not even address the issues at all.

Security is not a bottom-up process; you can’t just tell IT to “get us compliant” because this checkbox attitude can lead to data breaches. Management at the highest level (e.g., CEO, VP, CTO, CIO) must understand that HIPAA initiatives should come from the top and be pushed down. If you are a C-level executive, you should be involved with budgeting, assisting, and promoting security best practices from the top level down to foster a strong security culture.

Management needs to be aware of their organization’s security needs and promote a culture of security and HIPAA compliance.



My experience is that executives aren’t listening to their staff about, or fully comprehending, their current compliance state, and staff don’t fully understand how to translate the HIPAA regulations into specific controls.

Moving forward, entities need to outsource or bring information security experts on-board to obtain solid security advice.

Budgets should have more emphasis on security. I’ve seen large organizations spend hundreds of thousands of dollars on new medical equipment, then balk at an important security tool costing only a few thousand dollars. Some make the argument that equipment saves lives or improves patient well-being.

But what happens if an attacker isn’t just looking to steal information? What happens if they get into medical devices and impact your patients’ healthcare experience or life?

Compliance officers need to better understand these risks, and then find ways to convey this information appropriately to the executive team. Often, a third party can help add credibility. Entities and the executives, in particular, must begin committing the appropriate budget and personnel resources to adequately secure PHI.


SecurityMetrics Security Analyst | MSCIS | CISSP | CISA | QSA

Free Data Security Education

Sign Up for Academy


Access Control List (ACL): A list of instructions for firewalls to know what to allow in and out of systems.

Advanced Encryption Standard (AES): Government encryption standard to secure sensitive electronic information.

Breach: An impermissible use or disclosure of Protected Health Information resulting in significant risk of financial, reputational, or other harm to the affected individual.

Business Associate (BA): A person or entity that performs certain functions that involve the use or disclosure of PHI (e.g., CPA, IT provider, billing services, coding services, laboratories).

Business Associate Agreement (BAA): A contract between a covered entity and business associate to safeguard PHI and comply with HIPAA.

Captured: Data recorded, gathered, and/or stored by an unauthorized party.

Chief Information Security Officer (CISO): Similar to a CSO, but with responsibility for IT rather than entity-wide security.

Chief Security Officer (CSO): Company position with responsibility for HIPAA compliance, PCI compliance, physical security, network security, and other security protocols.

Covered Entity (CE): A health plan, health care clearinghouse, or health care provider that electronically transmits health information (e.g., doctors, dentists, pharmacies, health insurance companies, company health plans).

Electronic Health Record (EHR): Digital chart that contains a patient’s comprehensive medical history from multiple healthcare providers. Often referred to as EHR system when discussing all patient charts.

Electronic Medical Record (EMR): Digital chart that contains a patient’s medical history from a single practice used for diagnosis and treatment. Often referred to as EMR system when discussing all patient charts.

Electronic Protected Health Information (ePHI): Health information sent or stored electronically, which is protected by the HIPAA Security Rule.

Exfiltrated: Data transferred out of a system without authorization.

Federal Information Processing Standards (FIPS): US federal government standards for computer security that are publicly announced (e.g., encryption standards).

File Integrity Monitoring (FIM): A way of checking software, systems, and applications in order to warn of potential malicious activity (i.e., when a file is changed).
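
The core of FIM can be sketched in a few lines: record a trusted hash of each file, then periodically re-hash and report anything that changed. This is an illustrative sketch, not a real FIM product; the function names are my own.

```python
import hashlib
from pathlib import Path

def file_hash(path: Path) -> str:
    """Return the SHA-256 digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def check_integrity(baseline: dict, paths: list) -> list:
    """Compare current file hashes against a trusted baseline.

    Returns the paths whose contents no longer match the baseline —
    the files a FIM tool would flag for investigation.
    """
    changed = []
    for path in paths:
        if file_hash(path) != baseline.get(str(path)):
            changed.append(str(path))
    return changed
```

A real FIM deployment also protects the baseline itself (an attacker who can rewrite the baseline can hide changes) and alerts on additions and deletions, not just modifications.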

File Transfer Protocol (FTP): An insecure protocol for transferring files between computers over the Internet. (See SFTP)

Firewall (FW): System designed to screen incoming and outgoing network traffic.

Health Information Technology for Economic and Clinical Health (HITECH Act): 2009 legislative act that, among other things, implements a series of fines to enforce HIPAA compliance and requires business associates to adhere to the same level of HIPAA compliance as covered entities. 

Health Insurance Portability and Accountability Act (HIPAA): A federal mandate that, among other things, requires organizations to keep patient data secure through a myriad of privacy and security procedures, policies, and actions.

Hypertext Transfer Protocol (HTTP): A method of communication between servers and browsers. (See HTTPS)

Hypertext Transfer Protocol Over Secure Socket Layer (HTTPS): A secure method of communication between servers and browsers. (See TLS)

Incident Response Plan (IRP): Policies and procedures to effectively limit the effects of a security breach.

Information Technology (IT): Anything relating to networks, computers, and programming, and the people that work with those technologies.

Intrusion Detection System/Intrusion Prevention System (IDS/IPS): A system used to monitor network traffic and report potential malicious activity.

Multi-factor Authentication (MFA): Authentication that requires at least two of three independent factor types to verify a computer or network user. The three possible factors are:

  • Something you know (e.g., a username and password)
  • Something you have (e.g., an RSA token or cellphone which gives you a new code for each login)
  • Something you are (e.g., fingerprint or iris scan)
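
The idea can be sketched as a login check that grants access only when two independent factors both verify. This is a simplified illustration (the function and parameters are hypothetical); real systems use a TOTP or push-based second factor rather than a pre-shared code.

```python
import hashlib
import hmac

def verify_login(stored_hash: bytes, salt: bytes, password: str,
                 expected_code: str, submitted_code: str) -> bool:
    """Grant access only if BOTH factors check out:
    something you know (password) and something you have (one-time code).
    """
    # Factor 1: knowledge — compare a salted PBKDF2 hash of the password.
    knows = hmac.compare_digest(
        hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000),
        stored_hash,
    )
    # Factor 2: possession — compare the one-time code sent to the user's device.
    has = hmac.compare_digest(expected_code, submitted_code)
    return knows and has
```

Note that both comparisons use `hmac.compare_digest`, a constant-time check, so an attacker cannot learn which factor failed from response timing.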

National Institute of Standards and Technology (NIST): Federal technology agency that assists in developing and applying technology, measurements, and standards.

National Vulnerability Database (NVD): A US government repository of publicly known vulnerabilities, maintained by NIST.

Network Access Control (NAC): Restricts which users and devices can connect to a computer network and what data, apps, and programs they can access once connected.

Office for Civil Rights (OCR): The federal organization responsible for enforcing HIPAA compliance.

Open Web Application Security Project (OWASP): A non-profit organization focused on software security improvement, often heard in the context of the “OWASP Top 10,” a list of the most critical web application security risks.

Protected Health Information (PHI): Information that can be linked to a particular person (i.e., past, present, or future health condition or healthcare provision) such as patient name, social security number, and medical history.

Risk: The likelihood a threat will trigger or exploit a vulnerability and the resulting impact on an organization.

Risk Analysis (RA): An assessment of the potential vulnerabilities, threats, and possible risk to the confidentiality, integrity, and availability of ePHI held by the covered entity or business associate.

Risk Management Plan (RMP): The strategy to implement security measures to reduce risks and vulnerabilities to a reasonable and appropriate level.

Role-Based Access Control (RBAC): The act of restricting users’ access to systems based on their role within the organization.

Secure File Transfer Protocol (SFTP): A secure protocol for transferring files that encrypts data in transit. (See FTP)

Secure Socket Layer (SSL): Internet security standard for encrypting the link between a website and a browser to enable transmission of sensitive information (predecessor to TLS). (See TLS)

Threat: The potential for a person, event, or action to exploit a specific vulnerability.

Transport Layer Security (TLS): A more secure Internet security standard for encrypting the link between a website and a browser to enable transmission of sensitive information. (See SSL)

Two-factor Authentication (TFA): (See MFA)

United States Department of Health and Human Services (HHS): The federal agency responsible for administering HIPAA and issuing its implementing regulations.

Virtual Private Network (VPN): Technical strategy for creating secure tunnels over the Internet.

Vulnerable: Describes a system, environment, software, or website with a weakness that can be exploited by an attacker. (See Vulnerability)

Vulnerability: A flaw or weakness in procedure, design, implementation, or security control that could result in a security breach.

Wi-Fi Protected Access (WPA): Security protocol designed to secure wireless computer networks. (See WPA2)

Wi-Fi Protected Access II (WPA2): A more secure version of WPA. (See WPA)

Wired Equivalent Privacy (WEP): An outdated and weak security algorithm for wireless networks.




We help customers close data security and compliance gaps to avoid data breaches. We provide managed data security services and are certified to help you achieve the highest data security and compliance standards.

We are a PCI certified Approved Scanning Vendor (ASV), Qualified Security Assessor (QSA), PCI Forensic Investigator (PFI), and managed security provider with 18 years of data security experience. From local shops to some of the world’s largest brands, we help all businesses achieve data security through managed services, compliance mandates (HIPAA, PCI, GDPR), and security assessments (HITRUST consulting and assessments). We have tested over 1 million systems for data security and compliance. We are privately held and are headquartered in Orem, Utah, where we maintain a Security Operations Center (SOC) and 24/7 multilingual technical support.

