
Navigating the Insider Threat Tool Landscape

Mitigating insider threats is a multifaceted challenge that involves the collection and analysis of data to identify threats posed by many different employee types (such as full-time, part-time, or contractors) with authorized access to assets such as people, information, technology, and facilities. The landscape of software and tools designed to aid in this process is almost as wide and varied as the problem itself, which leaves organizations with the challenge of understanding not only the complexities of insider threats, but also the wide array of tools and techniques that can assist with threat mitigation. This post explores some of the recommended features and functionality available through a combination of tools, as well as a proposed process for implementing and operating controls at an organization.

There are various features and functions to prevent, detect, deter, and respond to insider threats. The following list is not exhaustive, but provides several examples that should be considered as part of a robust and comprehensive insider threat prevention platform.

  • Preserve forensic artifacts in the event of litigation
  • Audit network and host-based activity
  • Monitor data and prevent it from leaving authorized locations
  • Correlate and resolve user and system entity activity across various data sources
  • Perform analysis on data being gathered in the form of rule-based alerting, statistical anomaly detection, or both, and prioritize those alerts
  • Generate data visualizations to aid in analysis
  • Manage and track the status and resolutions of cases/incidents
  • Analyze text-based data sources for sentiment and affect
  • Mask or anonymize sensitive information that is presented to analysts
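As a rough illustration of the alerting and prioritization bullets above, the following sketch combines a fixed rule threshold with a per-user statistical baseline. The users, counts, and thresholds are invented for the example and not drawn from any particular product:

```python
from statistics import mean, stdev

# Hypothetical daily file-download counts per user; the last value is "today."
history = {"alice": [4, 6, 5, 7, 5], "bob": [3, 4, 2, 3, 60]}

RULE_THRESHOLD = 50  # rule-based: flag any day above a fixed count
Z_CUTOFF = 2.0       # statistical: flag days far from the user's own baseline

def score_alerts(history):
    alerts = []
    for user, counts in history.items():
        baseline, spread = mean(counts[:-1]), stdev(counts[:-1])
        today = counts[-1]
        z = (today - baseline) / spread if spread else 0.0
        if today > RULE_THRESHOLD or z > Z_CUTOFF:
            alerts.append({"user": user, "count": today, "z": round(z, 1)})
    # Prioritize: largest deviation from the user's own baseline first.
    return sorted(alerts, key=lambda a: a["z"], reverse=True)

print(score_alerts(history))
```

Here bob's spike trips both the rule and the statistical check, while alice's normal variation generates no alert, which is the point of prioritization: analyst attention goes to the largest deviations first.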

The number of features to consider, however, does not necessarily correspond to the number of different tools required, nor does it mean that one or two tools can or should provide all of the desired functionality. There is a great deal of overlap in several critical functional areas that can be covered by some combination of Data Loss Prevention (DLP), Enterprise Asset Management (EAM), Security Incident and Event Management (SIEM), User Activity Monitoring (UAM), and User (and Entity) Behavior Analytics (UBA/UEBA) tools; this overlap can support a defense-in-depth strategy, but it should not force you to purchase multiple tools. Given this overlap, it is important to understand the capabilities offered by existing hardware and software before assuming that procurement is the only way to introduce new functionality. Despite many claims, there is no single tool that will solve a problem as complex as insider threat. Instead of taking a tool-centric approach, begin by enumerating the functions needed to mitigate the threat and work from there. Depending on the industry or government sector, an organization may be subject to different regulations or mandates for specific types of controls. It is crucial to understand the actual requirements that must be fulfilled to avoid over-buying a solution that contains unnecessary functionality. In the absence of outside requirements, start by adhering to the best practices outlined in the Common Sense Guide to Mitigating Insider Threats.

Figure 1 illustrates a potential iterative process for implementing and operating insider threat controls.

Figure 1: Proposed Insider Threat Control Implementation and Operation

The first step at the top of the diagram is to identify your critical assets and the threats to those assets in order to help prioritize and focus the control efforts. This is a vital first step, as it helps to temper the expectation that every single system and piece of data must be secured to the highest degree. As the adage goes, if everything is critical, then nothing is critical. Start by involving the right people in the decision-making process, such as an existing risk management group, process owners, and high-level management. Identify areas of work including business process and critical services that are paramount to achieve the organization’s mission and survival and focus the remaining steps of the implementation and operation process starting with those areas.

Step two is to implement a baseline set of security controls. It is recommended to start with the tools and capabilities that already exist within the organization. Again, identify any governing standards or regulations or begin by referencing accepted standards, such as the Insider Threat specific controls included in NIST SP 800-53 Revision 4 and above. For example, one might leverage an existing log collection mechanism from host operating systems with a particular focus on auditing user activity.
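As a minimal sketch of that baseline step, the snippet below scans collected login records and flags successful logins outside business hours. The log format and policy window are assumptions for illustration, not the output of any real collector:

```python
import re
from datetime import time

# Illustrative syslog-style lines; real formats vary by OS and collector.
log_lines = [
    "2018-05-01T09:12:03 host1 sshd: Accepted password for alice",
    "2018-05-01T23:47:10 host1 sshd: Accepted password for bob",
]

# Assumed policy window for "normal" interactive logins.
BUSINESS_HOURS = (time(7, 0), time(19, 0))
LOGIN = re.compile(r"T(\d\d):(\d\d):\d\d .*Accepted \w+ for (\w+)")

def after_hours_logins(lines):
    """Return users whose successful logins fall outside business hours."""
    flagged = []
    for line in lines:
        m = LOGIN.search(line)
        if not m:
            continue
        t = time(int(m.group(1)), int(m.group(2)))
        if not (BUSINESS_HOURS[0] <= t <= BUSINESS_HOURS[1]):
            flagged.append(m.group(3))
    return flagged

print(after_hours_logins(log_lines))  # ['bob']
```

The value here is not the trivial check itself but that it runs against logs the organization already collects, before any new procurement.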

Once the capabilities of the current toolset have been fully implemented, identify and fill any gaps with new (or modified) security controls. For instance, if there are no controls to prevent or log data movement across network or storage boundaries, then that gap might be filled by a DLP component.
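At its simplest, a DLP control of the kind described logs any movement that crosses an authorized-location boundary. The toy sketch below makes the idea concrete; the destinations, size limit, and record fields are all illustrative assumptions:

```python
# Hypothetical transfer records; a real DLP tool would tap network egress
# or removable-media events rather than a list of dicts.
transfers = [
    {"user": "alice", "dest": "fileserver.corp.example", "mb": 12},
    {"user": "bob", "dest": "personal-cloud.example.net", "mb": 850},
]

AUTHORIZED_DESTS = {"fileserver.corp.example", "backup.corp.example"}
SIZE_LIMIT_MB = 500  # assumed policy ceiling for a single transfer

def dlp_violations(records):
    """Log any movement that crosses an authorized-location boundary."""
    violations = []
    for r in records:
        reasons = []
        if r["dest"] not in AUTHORIZED_DESTS:
            reasons.append("unauthorized destination")
        if r["mb"] > SIZE_LIMIT_MB:
            reasons.append("exceeds size limit")
        if reasons:
            violations.append((r["user"], r["dest"], reasons))
    return violations

print(dlp_violations(transfers))
```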

The next step is to measure the effectiveness of the implemented controls. This measurement is not purely the number of actors that have been caught (which may not be greater than zero due to the low base rate of occurrence for insider incidents). However, one goal of this phase, in the absence of a high frequency of true positives, is to reduce false positives and false negatives. This step also includes measuring the control implementation and the effectiveness of the surrounding procedures, such as how long it takes to complete case investigations or how long it takes to engage or receive information from other parts of the organization. The final aspect of this step is to measure the coverage of controls and maximize the percent of machines, users, or data repositories that are actively monitored.
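The measurements described above reduce to a few simple ratios. The monthly counts below are invented for illustration; the point is the metrics, not the data:

```python
# Made-up monthly numbers for one control.
alerts_raised = 120
true_positives = 6            # alerts confirmed by investigation
known_false_negatives = 2     # incidents discovered later that raised no alert
monitored_hosts, total_hosts = 940, 1000

precision = true_positives / alerts_raised                          # alert quality
recall = true_positives / (true_positives + known_false_negatives)  # misses
coverage = monitored_hosts / total_hosts                            # control reach

print(f"precision={precision:.1%} recall={recall:.1%} coverage={coverage:.0%}")
```

Tracked over successive iterations of the loop, rising precision and coverage (with stable or rising recall) indicate that refinement is working even when true positives remain rare.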

Given these data and measurements, the final step to closing an iteration of the loop is to refine the controls and alerts in order to maximize effectiveness and maintain capabilities. The iterative and continuous nature of this approach is especially important when you are faced with things like changing organizational priorities, missions, personnel, technologies, and risks.

For more information, including a few potential software solutions for smaller organizations or new Insider Threat programs, see the Navigating the Insider Threat Tool Landscape: Low Cost Technical Solutions to Jump Start an Insider Threat Program[i] presentation at the 2018 Workshop on Research for Insider Threats as part of the 39th IEEE Symposium on Security and Privacy.

Please send questions, comments, or feedback to [email protected].

[i] Spooner, D., Costa, D., Silowash, G., & Albrethsen, M. Navigating the Insider Threat Tool Landscape: Low Cost Technical Solutions to Jump Start an Insider Threat Program. Software Engineering Institute, Carnegie Mellon University. To be published.


GDPR and Its Potential Impacts for Insider Threat Programs

The European Union’s General Data Protection Regulation (GDPR) is a regulation that concerns the processing of personal data by private organizations operating in the European Union, whether as employers or as service providers. While many organizations have focused their GDPR readiness efforts on managing the personal information of customers as data subjects, employees are also data subjects. This post will focus on an organization’s obligations to its EU employees (inclusive of contractors and trusted business partners, regardless of a formal contract) under GDPR.


GDPR goes into effect on Friday, May 25, which means that the two-year window for organizations to come into compliance is rapidly closing. GDPR impacts organizations conducting business in the EU (e.g., selling to customers in the EU and/or employing EU citizens) and is focused on the protection of EU citizens’ personal information. By extension, insider threat programs operating within the European Union or accessing data on EU citizens need to consider what the GDPR means for their operations.

Key vocabulary from GDPR that will assist in understanding includes:

  • Data subject is “a living individual to whom personal data relates.” A data subject could be a customer or employee.
  • Personal data is “any information relating to an identifiable person who can be directly or indirectly identified in particular by reference to an identifier.” While in the US we may be most concerned and familiar with Social Security Numbers as personal data, this definition can expand to include dynamic IP addresses in certain circumstances as they relate to citizens of the EU. If a dynamic IP address can be combined with other information held by a third party, like an ISP, to identify an individual, then it constitutes personal information.
  • Right to erasure or be forgotten applies most often to customer relationships with an organization, but data subjects have the right to request erasure of personal data if certain circumstances apply. For employee relationships, the most relevant circumstance is if an employee’s personal data may have been unlawfully processed or is no longer necessary for processing, e.g., an employee has exited an organization and their data is not needed by the insider threat program.
  • Right to rectification means that data subjects have the right to have inaccurate personal data be corrected. For organizations, this means employees can request both access and corrections to personal data collected on them.
  • Personal data breach is “a breach of security leading to the accidental or unlawful destruction, loss, alteration, unauthorized disclosure of, or access to, personal data transmitted, stored or otherwise processed.” The key difference here compared to more traditional understandings of a breach is that it includes “access,” so personal data breaches could also include scenarios where the data never leaves an organization.

In this blog post, we consider what the GDPR means for some of the best practices discussed in the Common Sense Guide to Mitigating Insider Threats, 5th Edition. There is not enough space in one blog post to review each of the 20 best practices, so we will discuss practices that have the most potential to be impacted by GDPR.

Practice 3: Clearly document and consistently enforce policies and controls.

Documentation of policies and controls is fundamental to the success of any insider threat program, particularly the standard operating procedures for information sharing across the organization. With GDPR, sharing information on employees, even within the confines of an organization, may come under more scrutiny.

Practice 6: Consider threats from insiders and business partners in enterprise-wide risk assessments.

Enterprise-wide risk assessments need to consider not only technologies, but personnel and processes. Third-party businesses and other links along the supply chain add to any organization’s threat landscape. In the context of GDPR, organizations will now need to consider adding confirmation of those business partners’ GDPR compliance to their due diligence research and contractual agreements.

Practice 7: Be especially vigilant regarding social media.

Although social media may serve as a valuable data source for insider threat risk assessments, use of such information may come into question. GDPR grants individuals the ‘right to be forgotten,’ which means that social media providers can, in some circumstances, be compelled to delete an individual’s data at their request. Organizations with EU employees, contractors, or trusted business partners may want to consider the extent to which they rely on social media as a data source and the likelihood that less social media data may be available for analysis in the future.

Practice 12: Deploy solutions for monitoring employee actions and correlating information from multiple data sources.

While security tools with correlation capabilities are still recommended, organizations will need to consider the implications for the storage of the information correlated by these tools. Given the potential for an employee to request access or corrections to the information collected on them, the data should be in a form that can be shared, edited, or even purged as needed. Consolidating information used by security tools into a more ‘user-friendly’ format may help not only insider threat analysts perform their analyses, but also allow privacy specialists to have more insight and input into the management of this information.
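One way to keep correlated monitoring data shareable, editable, and purgeable per data subject is to key records by a pseudonym rather than a raw identifier. The sketch below uses a keyed hash; the key handling, record layout, and function names are illustrative assumptions, not a prescribed design:

```python
import hashlib
import hmac

# Pseudonymization key; in practice this would be protected and rotated.
PSEUDONYM_KEY = b"illustrative-key-not-for-production"

def pseudonym(employee_id):
    """Stable keyed pseudonym so records can be found and purged later."""
    return hmac.new(PSEUDONYM_KEY, employee_id.encode(),
                    hashlib.sha256).hexdigest()[:16]

store = {}  # pseudonym -> list of monitoring events

def record_event(employee_id, event):
    store.setdefault(pseudonym(employee_id), []).append(event)

def export_for_subject(employee_id):
    """Supports an access or rectification request for one data subject."""
    return store.get(pseudonym(employee_id), [])

def erase_subject(employee_id):
    """Supports erasure, e.g., after an exit; returns records purged."""
    return len(store.pop(pseudonym(employee_id), []))

record_event("emp-1001", {"type": "usb-write", "mb": 300})
print(len(export_for_subject("emp-1001")))  # 1
```

Because every record for a subject hangs off one derived key, an access request, a correction, or a post-exit purge each touches a single, well-defined slice of the store.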

Practice 16: Define explicit security agreements for any cloud services, especially access restrictions and monitoring capabilities.

Security agreements for cloud services should already be part of an organization’s plan for working with such providers. Likewise, organizations should consider the risks associated with the countries where their data may be stored, e.g., the level of law enforcement or government access allowable to data without prior notice, cultural differences in what constitutes acceptable levels of privacy, etc. Under GDPR, cloud service agreements need to take into account the potential for any international transfers of data, what constitutes personal information in each region, and whether comparable levels of security are provided.

Practice 19: Close the doors to unauthorized data exfiltration.

Under GDPR, in some circumstances organizations can face penalties of up to €20 million or 4% of global annual revenue (not profit), whichever is higher, in the event of a personal data breach. Additionally, organizations have 72 hours to notify the relevant supervisory authority once they become aware of a breach. Preventing unauthorized data exfiltration may become more important than ever for some organizations, as failure to do so could cause significant financial impacts.

Practice 20: Develop a comprehensive employee termination procedure.

Organizations should identify what data on an employee is or is not subject to the right to be forgotten. Once this data has been identified for both current and former employees, organizations should consider how it might be documented as part of a termination procedure or exit process. Although current employees also have a right to erasure, this issue is perhaps more likely to emerge during the exit process. After an employee’s exit, an insider threat program may no longer have a need to process the employee’s personal information and should consider its deletion.

Final Thoughts

While this post is not intended to exhaustively cover all of the considerations an insider threat program must take into account with regard to GDPR compliance, we hope that it will serve as a starting point for future conversation among insider threat program practitioners. If your organization would like to share some of its experiences in managing GDPR considerations for insider threat programs, please contact us at [email protected].


Insider Threat Supply Chain Best Practices

This blog post outlines best practices for establishing an appropriate level of control to mitigate the risks involved in working with outside entities that support your organization’s mission. In today’s business landscape, organizations often rely on outside entities such as technology vendors, suppliers of raw materials, shared public infrastructure, and other public services. These outside entities are all part of the supply chain, which is a type of trusted business partner (TBP). However, they can also pose significant security risks.

Understanding the Problem

The CERT Division’s National Insider Threat Center (NITC) has found that over 15% of insider threat incidents were perpetrated by someone in the victim organization’s supply chain. Although even more incidents of this kind occur in the private sector, that figure demonstrates that the issue remains relevant in the government sector. A case example of a supply chain incident follows:

The insider was employed as a customer service representative by a TBP of the victim organization, which was responsible for handling the organization’s employees’ healthcare claims. The insider worked with three outsiders. While on site and during work hours, the insider used their access over six months to steal addresses of medical service providers from the organization’s database and manipulated the organization’s system to divert millions of dollars in payouts to fraudulent Medicare claims. The insider was not able to make all of the necessary data modifications alone, so the insider built a rapport with two employees who could, enabling the insider to carry out the scheme. The organization performed an internal audit and detected the fraud. The insider was arrested, convicted, and ordered to pay $89,000. The insider was sentenced to about 8 years of imprisonment and about 5 years of probation. The incident-related impact was $1.2 – $20 million.

By modeling the motivations, methods, and targets of the perpetrators in these incidents, it is possible to identify a set of best practices that can be used to develop and implement a mitigation strategy for supply chain risk management.

Mandates and Regulations

Several existing mandates and regulations provide organizations a given set of standards. Even if an organization is not legally required to follow them, these standards are a great starting point for developing robust and secure supply chain policies and procedures. To begin, your organization should consider how insiders might collude with someone in the supply chain or take advantage of weaknesses in supply chain processes and how that might affect your organization, and you should review existing policies and procedures with those repercussions in mind.

Here are a few examples of the available mandates and regulations your organization can use as a starting point: the International Organization for Standardization (ISO) 28000 series, ISO 20243, ISO/IEC 15408 Common Criteria, National Institute for Standards and Technology (NIST) SP 800-161, NIST SP 800-171, NIST 800-53, and the Defense Federal Acquisition Regulation Supplement (DFARS).

Best Practices

The list below outlines several best practices that are available to assist you with mitigating insider threat risk within the supply chain. You should revisit these practices on an annual basis as they might change over time.

  • Establish scope review, risk identification, and risk management for trusted insiders in the supply chain. To accomplish this, review and identify each supplier’s scope of activities and where it fits into your organization’s supply chain. You must also use any risk management and assessment activities conducted by your organization to identify and address specific risks and threats to critical assets and data that members of the supply chain might introduce.
  • Define and document the rules of engagement for the supplier’s operation within your organization’s daily activities by establishing supplier and organizational terms and conditions. Ensuring these rules are integrated into the contract between your organization and the supplier can provide protections for your organization if the supplier fails to follow the set terms and conditions.
  • Deploy a monitoring strategy that identifies criteria for monitoring supplier interactions and methods for identifying anomalies or deviations. Be sure to outline these criteria in the supplier and organizational defined terms and conditions.
  • Form effective relationships and communications strategies that are supported by all levels of your organization. These strategies are critical because TBP management focuses on establishing an appropriate level of controls to manage the risks that originate from or are related to the organization’s dependence on these external entities.
  • Make background screenings required for all supply chain providers to ensure that the supply chain adequately mitigates insider threat risk. The rigor of these screenings should be equal to those conducted by your organization, at a minimum. Be sure to consider all legal requirements when creating policies involving background screenings.
  • Develop a formal onboarding process that includes clear, formal, and codified agreements with suppliers as part of the initiation process to help your organization manage its resilience over the lifecycle of the relationship. Assign and update all appropriate points of contacts for both your organization and the supplier as necessary.
  • Ensure the Acceptable Use Policy (AUP), which informs employees of the proper use of the organization’s IT systems and services, is followed by supply chain personnel who have been granted access to the organization’s IT systems. You might need to put customized AUPs in place for those who have temporary or guest-level access.
  • Develop an intellectual property (IP) ownership right policy defining your organization’s ownership rights over IP created by TBPs. Documents such as non-disclosure agreements (NDAs), non-competes, and IP agreements should be required and enforced.
  • Make reporting of policy violations mandatory for all TBPs. These reports can include technical or physical security violations and should capture any violations that indicate insider risk. Violations should be reported immediately to an appointed point of contact at the organization (e.g., Insider Threat Program Manager or Corporate Security) through a defined process. A clearly articulated Supplier Code of Conduct should be put in place, and suppliers should be monitored for adherence.
  • Ensure that the appropriate mandates and regulations are reviewed and applied as necessary and that the best practices are put in place at your organization.
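The monitoring-strategy practice above, checking supplier interactions against the defined terms and conditions, can be sketched as follows. The supplier account, systems, and hours are illustrative assumptions, not taken from any real agreement:

```python
# Contractually defined scope per supplier account (illustrative).
supplier_terms = {
    "claims-vendor": {"systems": {"claims-db"}, "hours": range(8, 18)},
}

# Observed supplier-account activity (illustrative).
events = [
    {"account": "claims-vendor", "system": "claims-db", "hour": 10},
    {"account": "claims-vendor", "system": "payroll-db", "hour": 22},
]

def deviations(events, terms):
    """Flag supplier activity that falls outside its contracted terms."""
    findings = []
    for e in events:
        rules = terms.get(e["account"])
        if rules is None:
            findings.append((e["account"], "unknown supplier account"))
        elif e["system"] not in rules["systems"]:
            findings.append((e["account"], "system outside contracted scope"))
        elif e["hour"] not in rules["hours"]:
            findings.append((e["account"], "activity outside agreed hours"))
    return findings

print(deviations(events, supplier_terms))
```

Because the criteria mirror the contract, each finding maps directly to a term the supplier agreed to, which simplifies escalation through the defined reporting process.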


Insider threat remains a large part of an organization’s overall risk, and TBPs who are part of an organization’s supply chain account for a portion of insider threat incidents. The CERT Division’s National Insider Threat Center (NITC) at the Software Engineering Institute at Carnegie Mellon University has used its expansive incident corpus of over 1,000 empirically analyzed cases to identify nine best practices related to the prevention, detection, and response to insider threats within the supply chain. The best practices discussed above, along with the mandates and regulations, should be reviewed and applied as necessary to help reduce insider threat risk to the supply chain. Policies and procedures associated with insider threat risk should also be incorporated into the organization’s overall security framework.


Insiders and their Significant Others: Collusion, Motive, and Concealment

Insiders have been known to collude with others, both with coworkers (i.e., other insiders) and outsiders. In our previous post on insider collusion and its impact, we explored 395 insider incidents of collusion and found that insiders working with outside accomplices had a greater financial impact on their organization than those working with other insiders. When an insider works alone, or when an insider works with others within their organization, User Activity Monitoring (UAM) / User and Entity Behavior Analytics (UEBA) tools can identify one or more insiders engaging in anomalous or suspicious activity. When insiders are working together, further analysis can correlate that suspicious activity and provide insight into where data may have moved. But what insight do organizations have when an insider reaches out to others to commit a malicious act? In this post, we explore a subset of these insider-outsider collusion incidents that involve an insider’s significant other (i.e., current or former partners or spouses).

These individuals, while not employees of an organization, may have more access to an organization’s assets (e.g., facilities or employees) or be viewed with more trust than a typical ‘outsider’ by virtue of their association with an employee. It follows, then, that these outsiders have the potential to cause more damage. The goal in reviewing these incidents and sharing real examples from the CERT National Insider Threat Center (NITC) Insider Incident Corpus is to understand the complexity of circumstances that surround some insider threat incidents.


At least 28 incidents of an insider colluding with a significant other have been identified within the NITC Insider Incident Corpus. These incidents represent approximately 7% of insider incidents involving collusion. The incidents took place between 2000 and 2016, so it is likely that there are additional incidents that have not yet been recorded. Twenty-three (82%) involved fraud, and five (18%) involved theft of intellectual property. Three of the fraud incidents also involved the insider working with a coworker in addition to a significant other, as did three of the theft of intellectual property incidents.

  • An engineer and spouse colluded to steal trade secrets from the insider’s employer over the course of several years, ending in 2003. The insider and the spouse planned to start a competing business overseas. Their scheme was uncovered by the spouse’s employer, who discovered the trade secrets on their systems and alerted law enforcement as a result.
  • In 2009, a campus police officer obtained an enrollment list containing names, Social Security Numbers, and dates of birth of approximately 250 students. The insider provided this list to the spouse. The spouse then acquired fraudulent credit cards in the students’ names.
  • In 2012, an insider used authorized access to government-owned case management systems to obtain sealed investigation notes on individuals involved in organized crime. The insider would tell the spouse the names of those being investigated, who in turn notified those being investigated and received bribes or payments in exchange.


In 11 of the aforementioned incidents (39%), insiders were recruited by their significant other to commit malicious acts. The primary motivating factor was financial gain for the significant other.

  • For over three years, a customer service representative working for a tax collection agency disclosed confidential customer information to the significant other, a debt collector. The insider illegally used access to information systems to pass on the PII of individuals from whom the partner was attempting to collect debts.

Unlike other incidents where an insider is working with another outsider, like a friend or other relative, these incidents occasionally involve physical abuse and intimidation by the significant other.

  • In 2014, an insider at a health insurance provider stole patient PII after being abused by the spouse and intimidated into stealing patient information so that the spouse and others could file fraudulent tax returns. There are other similar incidents where an insider was pressured into committing fraud (e.g., obtaining PII or accessing accounts without authorization) within healthcare or banking and finance organizations by an abusive significant other.

Additionally, five insiders were indirectly motivated to commit malicious acts because of stressors or circumstances related to a spouse or significant other. In at least three incidents, an insider committed fraud after the spouse experienced job loss. In at least two other incidents, insiders had spouses unable to work and cited financial stress that resulted.

  • In 2016, an engineer attempted to sell trade secrets to an outsider. This insider was under financial strains related to a spouse with a chronic illness and a financially demanding extramarital affair.


Beyond explicit collusion and scheming between spouses, insiders have been known to use their spouses’ names or assets as a form of concealment.

  • In 2012, a research chemist was recruited by a coworker to take part in using the victim organization’s trade secrets to form a new business in a foreign country. The insider’s spouse represented the accomplices’ business interests in the foreign country where they intended to market the stolen IP. The insider then downloaded trade secrets and confidential information onto thumb drives and other portable storage devices to send to outsiders from a personal email account.
  • In 2013, a bank manager abused privileged access and lack of oversight to credit money into a significant other’s account that the insider had acquired from cash deposits.

Lessons for Organizations

Organizations may want to consider preventative or corrective measures that address scenarios like those discussed above. From a detection standpoint, establishing anonymous reporting mechanisms for coworkers to alert an organization to the insider threat posed by (or perhaps even the potential pressures imposed on) an individual may be valuable in these scenarios. These circumstances underscore the need for continuous monitoring to account for insiders’ new or developing conflicts of interest or relationships with suspicious individuals.

For other recommendations for your insider threat program, please refer to the CERT Division’s Common Sense Guide to Mitigating Insider Threats, 5th Edition, which is based on an analysis of over 1,000 incidents in the CERT Insider Threat Incident Corpus.


Substance Use and Abuse: Potential Insider Threat Implications for Organizations

In this blog post, I will discuss substance abuse as a potential precursor to increased insider threat and share statistics from the CERT National Insider Threat Center’s (NITC) Insider Incident Corpus on incidents that involved some type of substance use or abuse by the insider. I will discuss the prevalence of substance abuse in relation to insider threats and some of its impacts on organizations. Finally, I will outline some technical means of detecting employee substance abuse and share some best practices from the CERT Common Sense Guide to Mitigating Insider Threats.

Substance Abuse as a Precursor to Insider Threat

“Substance use disorders (SUDs) represent clinically significant impairment caused by the recurrent use of alcohol or other drugs (or both), including health problems, disability, and failure to meet major responsibilities at work, school, or home.”

Substance Abuse and Mental Health Services Administration, 2016

Substance use and abuse are potential precursors to insider threat. They could lead to concerning behaviors and both criminal and non-criminal acts against an organization. Insider incidents may include theft of intellectual property, sabotage, espionage, fraud, workplace violence, and non-malicious, accidental incidents. In these instances, insiders may commit malicious acts in order to procure money to support their habits or addictions or, due to the effects of the substances on their behavior, may commit acts of workplace violence. Substance use and abuse may also impact an insider’s cognitive abilities, leading to unintentional insider threats. These unintentional acts might include being more likely to click on phishing emails or misplace company equipment.

An example of how substance abuse can play a part in an insider incident can be seen in the following true story from the CERT National Insider Threat Center’s (NITC) Insider Incident Corpus:

The insider was a full-time branch manager for the victim organization, a bank. Over the course of approximately nine months, the insider removed over $270,000 from customer accounts and converted those funds for their personal use. The insider had developed a severe drug addiction to prescription pain medication and heroin over the course of their employment. Their financial gain was utilized to support their drug habit.

Prevalence of Substance Use and Abuse in CERT NITC’s Insider Incident Corpus

The CERT NITC Insider Incident Corpus contains records of over 1,600 actual insider incidents. A subset of 1,046 of these cases found in our Management and Education of Risk of Insider Threat (MERIT) database focuses on theft of intellectual property, fraud, and sabotage and contains detailed information regarding substances used and/or abused by an insider. Five percent of these insider incidents involved known substance use and/or abuse. Information regarding an insider’s substance use or abuse is not always readily available and is frequently not known unless it is disclosed in court proceedings. The incidence of insiders using or abusing substances has risen since 2010. According to the CERT NITC’s Insider Incident Corpus, there has been an increase from 1.1 insider cases involving substance abuse per year in the 20 years leading up to 2009 to an average of 4.4 cases per year from 2010 to 2016. The chart below shows that alcohol abuse was predominant in the subset of insider incidents analyzed.

Figure 1. Chart showing the prevalence of substance use and abuse in the MERIT subset of CERT NITC’s Insider Incident Corpus from 1999 to 2016.

There has been an increase in insiders committing fraud either to support their own opioid or other substance addiction or to profit from the addiction of others. The healthcare industry is seeing an influx of the latter type of fraud case, particularly from doctors who write prescriptions for opioid “painkillers” and defraud the health insurance system by billing for office visits whose only purpose is to exchange opioid prescriptions for cash. The FBI and other law enforcement organizations refer to this as a “pill mill.” This type of health care fraud will be explored in a future blog post, along with case examples from the CERT NITC Insider Incident Corpus.

Substance Abuse and Dependence: Potential Organizational Impacts

Substance abuse and dependence are rampant in the U.S. today, with the opioid crisis considered to be an epidemic. This epidemic, including addiction to heroin, prescription painkillers, and other opioids, is said to cost the U.S. around $80 billion a year in lost productivity, incarceration costs linked to addiction, health care, and treatment. According to the Centers for Disease Control and Prevention, between 1999 and 2016, over half a million people in the United States died from drug overdoses. More than half of these deaths were the result of an opioid overdose, and there was a significant spike in opioid deaths from 2010 to 2016. The Substance Abuse and Mental Health Services Administration’s (SAMHSA) annual National Survey on Drug Use and Health (NSDUH) for 2016 notes that 11.8 million people misused opioids in the previous year. Of those, 11.5 million misused opioid pain relievers. It is estimated that close to 75% of those with substance misuse disorders are in the work force.

Employees who misuse and abuse substances cost employers money and negatively impact those in the workforce around them. In one study from 2007, prescription opioid abuse was said to have cost employers over $25 billion. One can assume that as the opioid epidemic grows, these numbers will also increase. The National Council on Alcohol and Drug Dependence, Inc. has identified the following areas as potential impacts on organizations due to employee substance abuse, some of which are, at the very least, counterproductive workplace behaviors and, in the worst case, insider threats:

  • Tardiness/sleeping on the job
  • After-effects of substance use (hangover, withdrawal) affecting job performance
  • Poor decision making
  • Loss of efficiency
  • Theft
  • Increased likelihood of having trouble with co-workers/supervisors or tasks
  • Preoccupation with obtaining and using substances while at work, interfering with attention and concentration
  • Illegal activities at work including selling illicit drugs to other employees

Technical Detection and CSG Recommendations

Many of the effects of substance abuse may be detectable via technical means. Several examples of this technical detection that organizations may consider include:

  • Monitor badge records for tardiness, absences during the day, and missed work
  • Gather information from Human Resource Management systems that could provide information about recent disciplinary actions, security violations, and, where legally allowable, drug test results
  • Monitor for web searches concerning alleviating effects of withdrawal and procuring substances
  • Gather information from background checks, where legally allowable, to garner information about past substance related arrests (i.e., DUI) and significant financial stressors potentially stemming from drug abuse
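As an illustration of the first item above, a badge-record tardiness check could be sketched as follows. The record layout, the 9:00 a.m. expected start time, and the three-late-days threshold are all illustrative assumptions, not part of any particular badging system; in practice such a signal would be only one input, correlated with other data sources before any conclusion is drawn.

```python
from datetime import time

# Hypothetical badge record: (employee_id, date, first_badge_in).
# The 9:00 a.m. start time and three-day threshold are assumptions.
EXPECTED_START = time(9, 0)

def flag_tardiness(badge_records, threshold=3):
    """Return employee IDs whose first badge-in was late on more than
    `threshold` days -- a coarse signal to correlate with other data
    sources, never a basis for action on its own."""
    late_counts = {}
    for employee_id, _day, first_in in badge_records:
        if first_in > EXPECTED_START:
            late_counts[employee_id] = late_counts.get(employee_id, 0) + 1
    return {emp for emp, count in late_counts.items() if count > threshold}

records = [
    ("E100", "2018-03-01", time(9, 45)),
    ("E100", "2018-03-02", time(10, 10)),
    ("E100", "2018-03-05", time(9, 30)),
    ("E100", "2018-03-06", time(9, 50)),
    ("E101", "2018-03-01", time(8, 55)),
]
print(flag_tardiness(records))  # {'E100'}
```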

Organizations should work with their general counsel and/or human resources departments to support their employees facing substance use and abuse issues and work to mitigate malicious and unintentional insider threats by taking the following steps, many of which are outlined in the CERT Common Sense Guide for Mitigating Insider Threats, Fifth Edition:

  • Refer struggling employees to Employee Assistance Programs (EAPs) for support and possible referrals
  • Implement drug testing, including for prescription opioid medications, in compliance with all applicable laws and ensuring employee confidentiality
  • Conduct background checks to determine if an employee demonstrates financial insecurities or arrests that may be due to current or former substance use and abuse
  • Educate all employees on recognizing substance use and abuse disorders both in themselves and in co-workers
  • Provide a supportive work environment where people in need will seek help with difficult issues such as substance abuse

Substance use and abuse happens across all demographics. Organizations should work closely with their legal and human resources departments to implement practices and policies that address employee substance use and abuse in a manner that supports employees and the organization and maintains employee privacy.


CERT NITC Insider Threat Program Manager Certificate

Increasingly, organizations, including the federal government and industry, are recognizing the need to counter insider threats and are doing it through specially focused teams. The CERT Division National Insider Threat Center (NITC) offers an Insider Threat Program Manager certificate to help organizations build such teams and supports programs that are flexible, based on best practices, and tailored to the unique circumstances of individual organizations.
Insiders pose a substantial threat to organizations because they have the knowledge and access to proprietary systems, data, and facilities that allow them to bypass security measures through legitimate means. The nature of insider threats is different from other cybersecurity challenges; these threats require a different strategy for prevention and mitigation.

Background and Motivation

In January 2011, the federal Office of Management and Budget (OMB) released memorandum M-11-08, Initial Assessments of Safeguarding and Counterintelligence Postures for Classified National Security Information in Automated Systems. The memorandum announced the evaluation of the insider threat safeguards of government agencies. This action by the federal government highlights the pervasive and continuous threat to government and private industry from insiders, as well as the need for programs that mitigate this threat.

In October 2011, then President Obama signed Executive Order (E.O.) 13587, Structural Reforms to Improve the Security of Classified Networks and the Responsible Sharing and Safeguarding of Classified Information. The executive order requires all federal agencies that have access to classified information and systems to have a formal insider threat program.

In May 2016, the Department of Defense (DoD) released Change 2 to the National Industrial Security Program Operating Manual (NISPOM). This change, which came in the wake of a number of high-profile insider incidents involving government contractors, requires cleared federal government contractors to establish and maintain an insider threat program, meeting many of the requirements of E.O. 13587.

A formalized insider threat program as outlined in these documents provides an organization with a designated resource to address the problem of insider threat. Such a program sets the tone for the organization and creates a focal point for awareness about insider threats.

A successful insider threat program includes

  • enterprise-wide participation in developing, implementing, and operating the program
  • active senior leadership and executive management involvement and sponsorship
  • integrated data collection and analysis of both technical and non-technical (behavioral) indicators of potential insider threat activity
  • formal processes for response, communication, and escalation

Although both sets of requirements coming out of E.O. 13587 and the NISPOM focus on having an insider threat program that protects classified information and systems, it is widely recognized in the security community that a comprehensive, robust program should focus on all types of insider threat activity, beyond espionage and national security, integrating data from outside of classified networks and facilities. This means building a program to also deter, detect, and respond to activities by malicious and unintentional insiders that involve IT sabotage, intellectual property theft, fraud, unintentional disclosure of sensitive or proprietary or PII data, and acts of physical harm including workplace violence.

Certificate Components

The NITC Insider Threat Program Manager Certificate can help organizations satisfy the requirements of E.O. 13587 and the NISPOM, along with providing guidance on building a broader, enterprise-focused program. The certificate program content and guidance is based on

  • CERT NITC research, experience, and case analysis
  • National Insider Threat Task Force (NITTF) minimum standards
  • NISPOM requirements for insider threat

The certificate program has four components:

After successfully completing all four components of the certificate program, the participant is awarded an electronic professional certificate.

Program Topics

This certificate program helps participants understand

  • what is needed to build and operate an effective insider threat program
  • technical issues from a management perspective
  • problems and pitfalls to avoid
  • best practices where applicable
  • the importance of continued participation and buy-in from across the enterprise

The main audience for the certificate program is

  • current or potential insider threat program (InTP) managers
  • insider threat program team members

However, the certificate program may also be of interest to others who

  • interact and support an insider threat program team (e.g., IT, Information Security, Human Resources, Physical Security, Legal/Privacy, Risk Management, Contract Officers, Software Engineering, “data owners”)
  • want to learn more about implementing and operating an effective program

Upon completion of this certificate program, participants will be able to

  • identify the right people to involve in the planning and implementation of their InTP
  • propose options for implementing their InTP
  • plan the steps to build, implement, and operate their InTP
  • identify policies, procedures, and training within their organization that require enhancement related to insider threat issues

More information on this certificate program can be found at

Information on general NITC insider threat training can be found at


Head in the Clouds

The transition from on-premises information systems to cloud services represents a significant, and sometimes uncomfortable, new way of working for organizations. Establishing meaningful Service Level Agreements (SLAs) and monitoring the security performance of cloud service providers are two significant challenges. This post proposes that a process- and data-driven approach would alleviate these concerns and produce high-quality SLAs that reduce risk and increase transparency.
The National Institute of Standards and Technology (NIST) defines cloud computing as “a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources.” There are several variations of the cloud computing model. These are differentiated primarily by the level of control an organization retains over the shared computing resources. The public cloud is defined as a model in which all shared computing resources are operated by a third party on their premises. Expertise in cloud architecture, risk management, and supply chain risk management varies greatly. A study by Citrix found a general lack of knowledge about cloud concepts among corporate decision makers. This research noted that “first to mind when asked what ‘the cloud’ is, a majority responded it’s either an actual cloud, the sky, or something related to the weather.” This meteorological interpretation of cloud computing concepts is evidence of a potential knowledge gap in a vital element of enterprise risk management.

Service Level Agreements as Security Instruments

Information security management expectations and techniques need to evolve along with the service delivery model. Service Level Agreements (SLAs) are a prominent feature of this newly altered domain of information security. SLAs are the subject of a great deal of cloud security research. This is no surprise given the prominence of SLAs in the cloud computing model: they are the most practical monitoring mechanism available to most cloud service consumers. NIST defines a Service Level Agreement as “a binding agreement between the provider and customer of a cloud service” (SP 500-307, pg. 8). The SLA lifecycle is commonly described as having five phases: negotiation, implementation, monitoring, remediation, and renegotiation. Each phase of the lifecycle presents challenges to the cloud services consumer.

Negotiating SLAs can be a difficult task. This is especially true for smaller organizations attempting to negotiate with large cloud service providers. Standard, non-negotiated SLAs are, not surprisingly, advantageous to the provider and minimize penalties for non-compliance. The concept of dedicated security Service Level Agreements (SSLAs) is relatively new and still resisted by some providers.

Putting an organization’s vital information and data assets in the hands of a third party can be a source of new risks. One should not assume that cloud service providers are meeting security expectations simply because of their size and reputation. SLAs are the most impactful method by which information security practitioners can peer into the proprietary black box of the cloud. It is worth remembering the advice offered to me by one former CISO that “managing SLAs and doing security” are different activities. The objective needs to be improving SLAs to demonstrate tangible improvements rather than producing more complex charts or visually appealing graphs without meaning.

The Need for Trust in Cloud Relationships

Trust in the cloud is believing that the third-party provider will proactively manage security in the best interest of the consumer. Risks will be identified and mitigated without a need to revisit the SLAs in a time of crisis. This means applying judgment to new and unanticipated issues within the context of existing SLAs. The cloud service provider will operate with great transparency and not obscure information that may indicate that service levels have not been met. In this way, trust should be viewed as the next stage of the relationship after establishing justified confidence (see Figure 1: Evolution of Trust).


Figure 1: Evolution of Trust

Creating the Conditions for Trust Using SLAs

The need for high-quality SLAs is not obviated by trust. In fact, SLAs are essential to creating the conditions for trust. According to researchers Huang and Nicol, “‘trust, but verify’ is good advice for dealing with the relationship between cloud users and cloud service providers.” Huang and Nicol also identify three key attributes for placing trust in a cloud services provider: competency, goodwill, and consistency. This is an area of significant opportunity to improve security SLAs. They should facilitate trust by verifying that cloud service providers are exhibiting the attributes of competency, goodwill, and consistency.

A framework for the creation and management of SLAs would help to ensure a consistent level of quality and usefulness in SLAs. This framework must also be accessible to the target community of information security professionals. Academics lament the lack of measurement in cloud security SLAs, but typically offer only additional calculus as a solution. The framework needs to produce SLAs with sufficient quantitative data without overwhelming the user. SLAs need to be tied to security requirements important to the service consumer.

The software engineering and operational resilience communities have faced challenges similar to those of cloud consumers. Capability maturity models emerged as a solution to the need for a standard set of processes and metrics. CMMI for Outsourcing and the CERT Resilience Management Model were designed to be rigorous in process and applied broadly. These examples provide a useful path forward in the evolution of requirements management. Additionally, as securing public cloud services becomes more integrated with other facets of supply chain risk management, a framework containing comparable techniques will be of great value to the organization.

The framework proposed by this author draws on the concept of capability maturity in that a progression of measurable activities results in a desired end state (i.e., high-quality security SLAs that reduce risk and increase transparency). The SLA creation and management framework divides activities into two basic phases of a lifecycle: (1) Relationship Formation and (2) Ongoing Relationship Management. Each phase contains discrete processes, and the phases are linked (i.e., the output of Relationship Formation serves as input for Ongoing Relationship Management).

The Relationship Formation phase comprises (1) creation of a detailed service description, (2) translation of internal security requirements to cloud service provider requirements, (3) selection of specific SLA metrics and measurements, and (4) negotiation and agreement on specific SLAs with the cloud service provider. The Ongoing Management phase contains the following processes: (1) independently monitoring and verifying the security performance of the cloud service provider, (2) performing periodic service reviews with provider using the SLAs, (3) conducting root cause analysis as required on service security issues, (4) invoking penalties for SLA violations as required, (5) managing corrective actions to resolution as specified in the SLAs, and (6) capturing lessons learned to inform revisions of the SLAs (see Figure 2: SLA-Creation and Management Framework).
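The Ongoing Relationship Management processes above lend themselves to simple automation. The sketch below, using assumed metric names and targets (they are illustrative, not drawn from any standard or provider contract), shows the independent-monitoring step: comparing measured provider performance against negotiated SLA targets and surfacing violations for service review or root cause analysis.

```python
# Hypothetical negotiated SLA targets; names and values are assumptions.
SLA_TARGETS = {
    "patch_within_30_days_pct": 95.0,   # floor: % of critical patches applied on time
    "incident_notification_hours": 24,  # ceiling: max hours to notify the consumer
}

def check_sla_compliance(measurements):
    """Return the metrics that violate their targets for this review
    period, mapped to (observed, target) pairs."""
    violations = {}
    for metric, target in SLA_TARGETS.items():
        observed = measurements.get(metric)
        if observed is None:
            continue  # unmeasured this period; worth flagging separately
        # "_pct" metrics are floors; time-bound metrics are ceilings.
        if metric.endswith("_pct"):
            if observed < target:
                violations[metric] = (observed, target)
        elif observed > target:
            violations[metric] = (observed, target)
    return violations

period = {"patch_within_30_days_pct": 91.5, "incident_notification_hours": 20}
print(check_sla_compliance(period))  # {'patch_within_30_days_pct': (91.5, 95.0)}
```

Even a minimal check like this supports the framework’s later steps: each violation is a candidate for root cause analysis, penalty invocation, and lessons learned that feed SLA revision.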


Figure 2: SLA-Creation and Management Framework


Using the cloud requires the acquisition of new skills and methods. Establishing meaningful SLAs and monitoring the security performance of the cloud service provider are two significant challenges. This author proposes that a process- and data-driven approach would assist in alleviating these concerns and produce high-quality SLAs intended to reduce risk and increase transparency. The proposed SLA creation and management framework leverages concepts from capability maturity models to ensure rigor and accessibility. Application of the framework would help recast SLAs as a device to facilitate the development of trust. Security SLAs have migrated from the pages of contracts to the forefront of performance management.


7 Considerations for Cyber Risk Management

Each year brings new cybersecurity threats, breaches, and previously unknown vulnerabilities in established systems. Even with unprecedented vulnerabilities such as Spectre and Meltdown, the approach to dealing with the risks they pose is the same as ever: sound risk management with systematic processes to assess and respond to risks. This post offers seven considerations for cyber risk management.

What Is Cyber Risk Management?

The International Organization for Standardization (ISO) defines risk as the “effect of uncertainty on objectives.” Risk management is the ongoing process of identifying, assessing, and responding to risk. To manage risk, organizations should assess the likelihood and potential impact of an event and then determine the best approach to deal with the risks: avoid, transfer, accept, or mitigate. To mitigate risks, an organization must ultimately determine what kinds of security controls (prevent, deter, detect, correct, etc.) to apply. Not all risks can be eliminated, and no organization has an unlimited budget or enough personnel to combat all risks. Risk management is about managing the effects of uncertainty on organizational objectives in a way that makes the most effective and efficient use of limited resources.
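As a concrete illustration of assessing likelihood and potential impact, many organizations score each risk on simple ordinal scales and map the product to a response. The scales, thresholds, and response mapping below are illustrative assumptions, a common qualitative technique rather than anything prescribed by ISO or NIST; real programs tailor these to their risk appetite.

```python
# Likelihood and impact on a 1 (low) to 5 (high) scale; the thresholds
# and suggested responses are illustrative assumptions.
def risk_score(likelihood, impact):
    return likelihood * impact

def suggested_response(score):
    if score >= 15:
        return "mitigate"   # apply security controls
    if score >= 8:
        return "transfer"   # e.g., insurance, if avoidance is impractical
    return "accept"         # low enough to absorb

risks = {
    "phishing leading to credential theft": (4, 4),
    "data center flood": (1, 5),
}
for name, (likelihood, impact) in risks.items():
    score = risk_score(likelihood, impact)
    print(f"{name}: score {score}, response: {suggested_response(score)}")
```

The value of even a toy model like this is that it forces risks onto a common scale, making the trade-offs among avoid, transfer, accept, and mitigate explicit and comparable across the organization.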

A good risk management program should establish clear communications and situational awareness about risks. This allows risk decisions to be well informed, well considered, and made in the context of organizational objectives, such as opportunities to support the organization’s mission or seek business rewards. Risk management should take a broad view of risks across an organization to inform resource allocation, better manage risks, and enable accountability. Ideally, risk management helps identify risks early and implement appropriate mitigations to prevent incidents or attenuate their impact.

Essential Elements

Most risk management standards, such as those from ISO, COSO, and NIST, have common key processes. In its best practices for an enterprise risk management program, the Government Accountability Office (GAO) identified six essential elements:

Figure: The six essential elements of enterprise risk management identified by the GAO.

The first element, aligning enterprise risk management to goals and objectives, sets the foundation for the program by establishing the three pillars of enterprise cyber risk management: governance, risk appetite, and policy and procedure. Governance should include a body of risk-decision experts and decision makers using a framework of risk management processes that ensure engagement by key stakeholders (leaders, Authorizing Officials, and Risk Committee). Appetite for risks should be aligned to organizational goals and objectives. Policies and procedures communicate risk management expectations, risk definitions, and guidance throughout the enterprise. Once the risk management program is running, the remaining five elements continuously manage risk.

Seven Considerations for Cyber Risk Management

The following seven topics are well worth considering when planning a risk management program.

  1. Culture. Leaders should establish a culture of cybersecurity and risk management throughout the organization. By defining a governance structure and communicating intent and expectations, leaders and managers ensure appropriate leadership involvement, accountability, and training. That last one is critical: ongoing training is required to maintain expertise and deal with new risks.
  2. Information sharing. Security is a team sport. The right stakeholders must be aware of risks, particularly of cross-cutting and shared risks, and be involved in decision making. Communication processes should include thresholds and criteria for communicating about and escalating risks. The potential business impact of cyber risks should be made clear. Information-sharing tools, such as dashboards of relevant metrics, can keep stakeholders aware and involved.
  3. Priorities. All organizations have limited budget and staff. To prioritize risks and responses, you need information, such as trends over time, potential impact, time horizon for impact, and when a risk will likely materialize (near term, mid term, or long term). This information will enable comparisons of risks.
  4. Resilience. We can’t guarantee success in protecting against all risks. Risk management must also enable continuity of critical missions during and after disruptive or destructive events, including cyber attacks. Resilience is the emergent property of an entity that allows it to continue to operate and perform its mission under operational stress and disruption. Many organizations use the CERT Resilience Management Model (CERT-RMM) to manage and improve their operational resilience. The model includes Risk Management as one of its 26 process areas.
  5. Speed. When an organization is exposed to a risk, speedy response can minimize impact. Identifying risks early helps. Incident response and recovery depend on planning and preparation for incident management. Incident management plans should be exercised periodically.
  6. Threat environment. Cybersecurity does not always pay enough attention to the threat environment. Organizations should improve their intelligence into adversary capabilities (consider network security sensors and other reporting) while also accounting for risks from third parties (supply chain) and insider threats. Insiders, whether malicious or inadvertent (such as phishing victims), are the cause of most security problems.
  7. Cyber hygiene. Implementing basic cyber hygiene practices is a good starting point for cyber risk management. Cyber hygiene focuses on basic activities to secure infrastructure, prevent attacks, and reduce risks. The Center for Internet Security (CIS) has a list of 20 cybersecurity controls. The SEI recently released a baseline set of 11 cyber hygiene practices. When implementing hygiene practices, start by improving your knowledge of your own high-value services and assets. These require additional protection, including enhanced access controls and system monitoring. Read more in our blog post on cyber hygiene.

Prepared, Not Bullet Proof

With cyber risks continuing to grow, making good risk management decisions really matters. Rushing through decision making and always saying “no” are not the right answers. A better answer is to implement a consistent risk management program. Cyber events will still happen to your organization, but it will be better prepared to deal with them.

For more information about risk and resilience in your organization, see or contact me at [email protected].