
Scoping IT & OT Together When Assessing an Organization’s Resilience

The SEI engages with many organizations of various sizes and industries about their resilience. Those responsible for their organization's cybersecurity often tell us that their information technology (IT) and operational technology (OT) are too different to be assessed together. However, not accounting for both technologies could have serious implications for an organization's resilience. In this post, I'll explain why, and I'll describe the technology-agnostic tools the SEI uses to scope both IT and OT in resilience assessments.

IT and OT systems are distinct systems with their own cybersecurity priorities. In terms of the CIA Triad, IT generally prioritizes confidentiality and OT prioritizes availability. These priorities can drive how organizations deal with risks. However, when evaluating organizational resilience, what really matters is the interconnectedness of these two technologies and their criticality to the organization, because these factors drive the impact and likelihood of risk. The NotPetya and WannaCry attacks exploited these characteristics, traversing IT and OT networks and either bringing down or severely degrading the operations of major organizations.

Photo: Steag, Germany. Licensed under Creative Commons Attribution-Share Alike 3.0 Unported.

Even if you think IT and OT are apples and oranges, we can agree that many organizations depend on both IT and OT to operate. We can also agree that an organization’s ability to weather times of stress is critical to its customers, employees, and shareholders. It makes sense then that organizations should consider both IT and OT systems when determining operational resilience.

Resilience, Assessments, and the Importance of Scoping

To paraphrase the SEI’s CERT Resilience Management Model (CERT-RMM), operational resilience is an organization’s ability to manage the impact on assets and their related services due to realized risks associated with processes, systems, technology, the actions of people, or external events. In times of stress, a resilient business will be more likely to return to normal operation.

CERT-RMM proposes that organizations can achieve their optimal level of operational resilience through the effective communication and disposition of risks across a business’s many verticals. Crucially, CERT-RMM abstracts organizations to their services and all the assets that support them: people, information, facilities, and technology of any type.

The SEI has developed two effective assessment tools based on CERT-RMM that measure an organization’s operational resilience through the lens of cybersecurity: the Cyber Resilience Review (CRR), developed for the Department of Homeland Security, and the Cybersecurity Capability Maturity Model (C2M2), developed in partnership with industry representatives for the Department of Energy. Both assessments can be performed as a one-day self-assessment by the organization’s own subject matter experts (SMEs) or as part of facilitated workshops. The C2M2 is for the energy sector and more broadly assesses cybersecurity programs. The CRR is sector agnostic and focuses on an organization’s resilience management processes. Both assessments give organizations a repeatable tool to help determine their organizational resilience.

The assessments share a common set of CERT-RMM assumptions and methodologies. Both assessments focus on two aspects of the organization: (1) the organization's business objectives and (2) the protection and sustainment of assets that support those objectives. The organization itself determines the appropriate level of resilience and resources needed to achieve its objectives and efficiently meet regulatory requirements. The organization has the flexibility to assess the critical service or function regardless of the types of assets involved, and in a way that is consistent with its risk appetite.

Scoping, or determining what parts of the organization should be assessed, is key to the assessment’s success. The CRR scopes to a single “critical service,” and the C2M2 scopes to what it calls a “function.” The critical service or function being assessed should indeed be critical to the business: if this service failed or went away, your business would also fail. For example, a car manufacturer may want to focus on its car manufacturing line. Scoping the critical service or function too broadly will dilute the visibility afforded by the assessments. For more, see my colleague Andrew Hoover’s blog post about cyber resilience and the critical service.

Scoping the critical service or function allows the organization and the SMEs engaged in the assessment to clearly define the systems that are being assessed and in turn determine their overall resilience. Scoping also allows the organization to intelligently identify the level of risk associated with those systems. The organization can then prioritize its resources to close any identified gaps, one of the practice areas of cyber hygiene. Repeating the assessment against the same scope allows the organization to measure its performance over time.

IT and OT: Better When Assessed Together

For many organizations, IT and OT assets are both critical to survivability. We should be asking the same questions of both when determining organizational resilience. The answers to those questions might vary depending on whether they address IT or OT, but that should not preclude them being asked.

For example, both the CRR and C2M2 assessments ask about the practice of patching vulnerabilities. Patching IT is generally common and non-disruptive, but patching OT could be extremely rare and disruptive. To simply not ask the question because the IT or OT answers would be different could mask exposures to serious vulnerabilities. Exclusion of IT or OT assets from the assessment not only reduces the organization’s visibility into their support of the critical service or function, but it can also create an unwarranted sense of security.

The presence of the term “cyber” in both the Cyber Resilience Review and Cybersecurity Capability Maturity Model does not imply a limitation on the critical service or supporting assets in scope. Though not all assets inherently include a cyber component, they might be connected through a network. Excluding some of the networked assets from the measurement of the organization’s resilience casts considerable doubt on the efficacy of the measurement.

Having the right subject matter experts (SMEs) on hand is also important during the assessment. Even though IT and OT systems are subject to the same resilience questions, different SMEs may be needed to answer those questions appropriately.

The Emerging Convergence

As IT and OT become networked together more and more, their vulnerabilities and risks will become shared. Their combined impact on the resilience of the organization will become more complicated and potentially much greater. It has never been more critical to manage the resilience of an organization in the face of these impacts and act on, or at the very least be aware of, any gaps.

Read more about operational resilience or contact us about resilience assessments.


Insider Threats in the Federal Government (Part 3 of 9: Insider Threats Across Industry Sectors)

The CERT National Insider Threat Center (NITC) Insider Threat Incident Corpus contains over 2,000 incidents, which, as Director Randy Trzeciak writes, acts as the “foundation for our empirical research and analysis.” This vast data set shows us that insider incidents impact both the public and private sector, with federal government organizations being no exception. As Carrie Gardner introduced in the previous blog post in this series, federal government organizations fall under the NAICS Codes for the public administration category. Public administration, in this context, refers to a collection of organizations working primarily for the public benefit, including within national security. This blog post will cover insider incidents within federal government, specifically malicious, non-espionage incidents.

Figure 1: Public Administration subtypes

In part, the focus of this blog post is due to the high representation of federal government incidents within public administration, as can be seen in the chart above. The potential impact of insider incidents in the federal government is also greater because of the national security stakes. As such, federal government organizations are mandated by Executive Order 13587 to implement insider threat programs. While this requirement also applies to organizations within the defense industrial base per NISPOM Change 2, the mandate and NISPOM do not extend to state/local government and nonprofits. Since federal government organizations are under mandate to detect insider threats, it follows that more insider threats would be detected and eventually prosecuted there than in other Public Administration organizations. Additional information on state and local government insider threat incidents will be featured in a future post.

In total, we identified 77 non-espionage insider incidents where a federal government organization was both the victim organization and the direct employer. However, there were 34 additional incidents where a federal organization was impacted by an insider incident at another organization. By and large, these were incidents where a federal government organization had employed a consultant or contractor. These incidents also included an instance where a federal agency was indirectly victimized in the course of a Stolen Identity Refund Fraud (SIRF) scheme and another incident where a federal employee of another agency had authorized access to the victim organization's IT systems. Though organizations may understandably not consider a SIRF incident carried out by external actors to be an insider threat, they may consider any individuals with hands-on, authorized access to their systems to be insiders who can pose a threat, even if those individuals are employed elsewhere in government. In general, we advocate for identifying threats from trusted business partners in enterprise-wide risk assessments. While we wanted to give some context regarding these kinds of incidents, a deep dive into them is beyond the scope of this blog post.

Figure 2: Federal government victim organizations' relationships to insiders

Sector Overview

Insider incidents in the federal government rarely include IT Sabotage or Theft of Intellectual Property (IP). Few cases are categorized as IT Sabotage or Theft of IP in part because of the nature of the federal and national security operating environment: incidents involving classified information are typically considered (or prosecuted as) national security espionage.

Figure 3: Federal government insider incidents by case type

Given how few reported incidents involved IT Sabotage or Theft of IP, the following analysis focuses on incidents of Fraud and "Other Misuse." Incidents of Other Misuse by insiders can be described as those incidents that involve the unauthorized use of organizational devices, networks, and resources that are not better classified as Theft of IP, IT Sabotage, or Fraud. Representative examples of Other Misuse include the use of organizational resources for personal benefit, to violate the privacy of other individuals (e.g., obtaining access to colleagues' emails without consent or a proper business purpose), or to commit another kind of cyber-related crime (e.g., stalking or purchasing drugs), all of which violate organizational policies.

Sector Characteristics

Over half (60.8%) of incidents impacting federal organizations involved Fraud. This is not necessarily surprising given the well-documented efforts to combat fraud, waste, abuse, and mismanagement of federal funds; the Insider Threat Incident Corpus reflects those efforts. The Fraud statistics include only incidents where no other incident type was known and, for each attribute (i.e., Who, What, When, Where, How, Why), take into account only cases where that attribute was known.

Figure 4: Insider Fraud incidents in the federal government

Other Misuse was the second-most common form of insider threat in federal government. These incidents generally involved insiders abusing access to devices or systems for reasons not including financial gain (often the case in Fraud or Theft of IP) or revenge (typical in IT Sabotage). While these incidents are not typically related to an immediate loss of revenue or system availability, they have potential to cause reputational damage to or generate liability concerns for an organization.

Figure 5: Insider Other Misuse incidents in the federal government

Analysis

Insiders committing Fraud in the federal government tended to be in trusted positions and committed the incident during working hours. In federal government Fraud incidents where the financial impact was known (44 total), the median financial impact was between $75,712 and $317,551. Overall, in federal government insider incidents where impact was known (64 total), the median impact was between $0,000 and $144,195. For comparison, the median financial impact of a domestic insider threat incident, across all industries within the CERT Insider Threat Incident Corpus where financial impact is known, is between $95,200 and $257,500. Three Fraud incidents (9.4%) had a financial impact of $1 million or more.
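These medians are computed only over incidents where the impact was known, with each incident carrying a low and a high impact estimate. Here is a minimal sketch of that calculation, assuming a simplified, hypothetical record format (the NITC corpus schema is not public, so the field names below are illustrative):

```python
from statistics import median

# Hypothetical incident records: low/high financial-impact estimates,
# or None when the impact is unknown. Field names are illustrative.
incidents = [
    {"type": "Fraud", "impact_low": 50_000, "impact_high": 210_000},
    {"type": "Fraud", "impact_low": None, "impact_high": None},  # unknown impact
    {"type": "Other Misuse", "impact_low": 1_000, "impact_high": 9_000},
    {"type": "Fraud", "impact_low": 1_200_000, "impact_high": 2_500_000},
]

def median_impact_range(records, incident_type=None):
    """Median (low, high) impact pair, restricted to records where the
    impact is known, mirroring the "where impact was known" caveat."""
    known = [r for r in records
             if r["impact_low"] is not None
             and (incident_type is None or r["type"] == incident_type)]
    if not known:
        return None
    return (median(r["impact_low"] for r in known),
            median(r["impact_high"] for r in known))

print(median_impact_range(incidents, "Fraud"))  # (625000.0, 1355000.0)
```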

Perhaps the most notable feature of insider incidents within the federal government was how prevalent incidents of Other Misuse were compared to other sectors. Although one-third (33.3%) of the Other Misuse incidents in the CERT NITC Insider Threat Incident Corpus took place in federal government organizations, federal government incidents as a whole represent only about 8.0% of the malicious, non-espionage corpus. Insiders committing Other Misuse generally did so in the furtherance of an additional crime (e.g., stalking an individual using employer-owned information systems), which made motive clear in all but one incident. Given the small sample size of insiders committing Other Misuse (19), claims cannot yet be made about the significance of any of the patterns or statistics described above. Indeed, Other Misuse incidents may be composites of other distinct incident patterns, and the technical methods used may simply reflect that. Financial impact was unknown in most (84.2%) incidents of Other Misuse. Of course, insider incidents can have more than just a financial impact. In instances where an insider stalked an individual, the victim's personal safety may have been genuinely compromised. In at least one incident, where the insider had access to medical records, the insider's actions delayed the delivery of vital healthcare.

Final Thoughts

Although not every insider incident impacting the federal government rises to the level of national security espionage, these incidents are still worthy of attention and study. Looking at these insider incidents, where the cleared employees are among the most vetted and most trusted and have some of the most sensitive access to information, we can glean insights into cybersecurity best practices to mitigate insider threat, including the importance of being attuned to employee behavior. Organizations in all industries can (re)consider to whom they give access to information that can be used to harm the organization directly, vis-à-vis fraud, or indirectly by violating the privacy and safety of third parties. Overall, these incidents underscore the need for robust auditing functionality to identify when previously "good" employees start to go "bad."

Stay tuned for the next post, in which we spotlight the Financial Services sector, or subscribe to a feed of the Insider Threat blog to be alerted when any new post is available. For more information about the CERT National Insider Threat Center, or to provide feedback, please contact [email protected].


Classifying Industry Sectors: Our New Approach to an Industry Sector Taxonomy (Part 2 of 9: Insider Threats Across Industry Sectors)

As Randy Trzeciak mentioned in the first blog in this series, we are often asked about the commonalities of insider incidents for a particular sector. These questions invariably begin conversations about which sector-specific best practices and controls are best suited to address the common incident patterns faced by these organizations. To better address these questions, we decided to update our model for coding industry sectors1, that is, the classification system we use to organize the organizations in our insider threat database.

We decided to adopt a hierarchical system for classifying industry sectors to replace the flat classification system that we previously used. This allows us to report findings on broad sectors, such as transportation systems or communication systems, as well as narrower verticals within each sector, such as air transportation or telecommunications. The new classification system serves as the foundation for this blog series on insider threats across industry sectors.
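To illustrate the reporting benefit, here is a minimal sketch of how a two-tier taxonomy supports rolling incidents up from subsectors to sectors. The sector and subsector names come from the full taxonomy later in this post, but the data layout and function are illustrative assumptions, not our actual database schema.

```python
from collections import Counter

# Illustrative fragment of the two-tier taxonomy (full version below).
# The dict layout is an assumption, not the NITC's database schema.
TAXONOMY = {
    "Public Administration": [
        "Federal Government", "State Government", "Local Government",
        "Defense Industrial Base", "Correctional Facilities",
        "Postal Services", "Emergency Services",
    ],
    "Transportation and Support Services": [
        "Air", "Rail", "Water", "Truck", "Transit", "Pipeline",
        "Courier Services", "Supply Chain Services",
    ],
}

def tier1_for(subsector):
    """Roll a Tier 2 subsector up to its Tier 1 sector."""
    for sector, subsectors in TAXONOMY.items():
        if subsector in subsectors:
            return sector
    return None

# Incidents coded at the subsector level can be reported at either tier.
incident_subsectors = ["Federal Government", "Air", "Federal Government"]
print(Counter(tier1_for(s) for s in incident_subsectors))
# Counter({'Public Administration': 2, 'Transportation and Support Services': 1})
```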

In this post, we discuss why we transitioned to a hierarchical classification system. We also present the new system and explain its utility. We then describe what’s next for this series on highlighting insider threat trends across industry sectors.

Why Adopt a Hierarchical System?

Previously, we employed a modified version of the Department of Homeland Security Critical Infrastructure Sector classification system. Presidential Policy Directive 21 (PPD-21) identifies and describes sixteen sectors that contain critical assets or processes considered so vital to United States interests that "their incapacitation or destruction would have a debilitating effect on security, national economic security, national public health or safety, or any combination thereof."2 This system is widely used in identifying and describing such critical sectors and remains a standard for reporting findings for these industries.

As our incident corpus has grown, we found an increasing number of "miscellaneous" insider incidents that could not be appropriately classified with the flat taxonomy we formerly used. Examples of difficult-to-classify organizations included retail stores, entertainment businesses, and non-profit organizations. Additionally, we found that the flatness of the taxonomy prevented us from drilling down to collect and analyze incidents in specific verticals within the sectors. This is particularly problematic in the Financial Services sector, which has a large number of sub-organization types (such as banks, insurance, and other financial services) that are relatively diverse in terms of regulation.

Accordingly, we sought a new, multi-tiered classification system that was broader in coverage (to include sectors not designated as "critical") and had enough depth to enable us to scope our findings to more specific sectors as necessary. We also wanted to maintain compatibility with the DHS Critical Infrastructure taxonomy, as well as with government standard industry classification systems: the North American Industry Classification System (NAICS) and the Standard Industrial Classification (SIC) system. (NAICS was released in 1997 to replace the SIC standard, although some organizations still use SIC.3)

The New Classification System for Incident Analysis

As we mentioned earlier, our new classification system is hierarchical and designed to map to existing industry sector taxonomies, such as the PPD-21 critical infrastructure set or our previous industry sector model. Our new system is modeled after NAICS, a comprehensive classification system with six tiers and over 100 classes. Our modified version contains just two tiers, with 15 classes at the Tier 1 level and 70 classes at the Tier 2 level. Of the subsectors in Tier 2, 12 are not considered "critical infrastructure"; examples include legal and professional consulting services, religious institutions, and civic associations.

We modified the NAICS code primarily to reduce the complexity of the lower levels into a two-tiered approach that provides consistent specificity across the sectors. Additionally, we sought to eliminate any overlap between classes and any ambiguity in the class names. We wanted to adopt the standard NAICS classification for our purposes (without creating too much of a burden for our analysts) while preserving the sectors that we regularly report on (such as Federal Government versus State/Local Government).

Below is our full taxonomy. For a detailed enumeration of the differences between our new system and the NAICS code, please contact us at [email protected].

Tier I Sectors (with their Tier II Subsectors)

Agriculture and Mining
  • Agriculture and Forestry
  • Fishing and Hunting
  • Mining and Quarrying
  • Oil and Gas Extraction

Utilities
  • Energy (Electric Power, Natural Gas)
  • Water, Sewage, and Waste Collection
  • Nuclear (Power, Materials, Waste)
  • Waste Collection
  • Dams

Construction
  • Residential (Home Builder)
  • Non-Residential (Complexes and Offices)
  • Civil (Bridges, Roads, Etc.)
  • Architecture

Manufacturing (Minus Medical Equipment)
  • Food and Beverage
  • Chemical
  • Aerospace, Auto, Marine, and Machinery
  • Electronics
  • General Manufacturing

Trade
  • Retail Trade (Automotive, Clothing, Gas Stations, Health and Personal Care, Electronics and Appliances)
  • Wholesale Trade
  • E-Commerce

Transportation and Support Services
  • Air
  • Rail
  • Water
  • Truck
  • Transit
  • Pipeline
  • Courier Services
  • Supply Chain Services

Information Technology
  • Software Publishers & Web Developers
  • Telecommunications
  • IT, Data Processing, Hosting, Etc.

Finance and Insurance
  • Banks & Credit Unions
  • Insurance (Home, Auto, Life, Etc.)
  • Other Financial Services

Real Estate and Rental/Leasing
  • Real Estate Sales/Rentals
  • Warehousing & Storage
  • Automotive & Machinery Rental/Leasing

Religious Institutions, Charities, and Non-Profits
  • Religious Institution
  • Charity
  • Non-Profit
  • Civic Association

Professional Services
  • Legal
  • Consulting
  • Scientific Research and Development
  • Manual Labor and Related Services
  • Labor Unions
  • Business Services (Marketing, PR, Etc.)

Education
  • Elementary/High School
  • Colleges/Universities
  • Technical/Industry Training

Health Care and Social Assistance
  • Private Practice, Walk-In Clinics, At Home Care, Etc.
  • Diagnostics, Support Services, and Medical Manufacturing
  • Advocacy Services
  • Psychological Practice
  • Pharmacology
  • Hospital
  • Health Network
  • Health Care Insurance

Arts, Entertainment, Recreation, and Hospitality
  • Performing Arts & Spectator Sports
  • Museums & Historical Sites
  • Content Publishers
  • Hotels, Amusement, Gambling, Restaurants

Public Administration
  • Federal Government
  • State Government
  • Local Government
  • Defense Industrial Base
  • Correctional Facilities
  • Postal Services
  • Emergency Services

What’s Next?

In the next series of blog posts, we’ll highlight specific Tier 1 Sectors and Tier 2 Subsectors. We’ll explore insider incident trends in a specific industry sector. We’ll characterize threats by answering the 5W1H questions (Who? What? When? Where? Why? How?). We’ll chronicle story summaries of exemplar incidents and describe unique or interesting findings for the given sector. Additionally, we’ll contextualize each sector by discussing germane regulatory mechanisms (such as GLBA or HIPAA) that govern industry security practices to mitigate insider threats. As appropriate, we’ll identify applicable controls and resources to help reduce insider risk and increase your threat awareness.

Stay tuned for the next post, where we spotlight the Federal Government subsector, or subscribe to a feed of the Insider Threat blog to be alerted whenever a new post is available.

For more information about the CERT National Insider Threat Center, please contact [email protected].

Notes

1 “Industry sector” encompasses federal departments and agencies underneath the Public Administration industry sector.

2 Department of Homeland Security, Critical Infrastructure Sectors Website. https://www.dhs.gov/critical-infrastructure-sectors

3 Standard Industrial Classification. Wikipedia. https://en.wikipedia.org/wiki/Standard_Industrial_Classification


Is Compliance Compromising Your Information Security Culture?

Individual organizations spend millions per year complying with information security mandates, which tend to be either too general or too specific. However, organizations focusing solely on compliance miss the opportunity to strengthen their information security culture. This blog post will explain the benefits of information security culture and demonstrate how compliance with information security mandates may prevent organizations from achieving their full information security culture potential.


What is information security culture? Why does it matter?

There is no single, accepted definition for information security culture. Generally speaking, it can be defined as the “way [information security] things are done” at an organization. Information security practices are acculturated when everyone at an organization knows how to identify information security-related issues, knows what to do if they encounter an issue, and then responds to it appropriately. In other words, information security culture is an information security program manager’s dream.

There are many benefits associated with having a strong information security culture, aside from happy managers and better information security program management. Organizations with strong information security culture have employees who exhibit improved situational awareness and increased resistance to social engineering attacks. These employees are also more likely to have compliant intentions (i.e., they want to comply with their organization's policies) and are more likely to identify and report information security incidents when they see them.

Information security culture factors

One might expect compliance with an exhaustive standard, like PCI-DSS, to help an organization improve its information security culture. However, empirical research (see Further Reading) shows only a handful of factors contribute to an organization’s information security culture:

  • Senior management support
  • Information security policies
  • Information security training
  • Information security program management (compliance, monitoring, and auditing activities)

Unfortunately, information security mandates do not always incorporate these factors. And when they do, they often mandate them in a fashion that is unlikely to produce a strong cultural effect.

Issues with mandates and information security culture factors

In my master's thesis, Information Security Culture Factors Assessment of Federal Regulations and Private Standards, I examined a small collection of private standards (ISO 27001, PCI-DSS, NERC CIP) and federal regulations (HIPAA, GLBA, FISMA) to see how they mandate the aforementioned information security culture factors. In the end, I found significant differences between the document types: the federal regulations tended to be more interpretive, whereas the private standards tended to be more explicit. Overly generic or overly specific mandates can be problematic for information security culture because an organization that follows the letter of these mandates alone may miss the opportunity to enhance its culture. This concern relates specifically to observations I made about information security policies, training requirements, and executive position mandates.

Issue: Overly generic information security policy requirements

Most of the documents I analyzed require organizations to have information security policies, but the federal regulations contain few, if any, requirements for the contents of those policies. The regulations miss the opportunity to require organizations to incorporate culture-strengthening elements into their policy documents. Organizational culture research suggests companies with clearly articulated qualitative beliefs have stronger culture, so organizations should use their policies to express the vision for their information security program. This could mean including something, such as an affirmation of senior management’s support of the program, to demonstrate the importance of information security to the entire organization.

Issue: Overly specific training requirements

The private standards I analyzed mandate that organizations train users at specific intervals or for certain skills. Information security training is essential to strong information security culture, but organizational training literature indicates training programs should be adapted to suit organizational needs. This means organizations are more likely to achieve better outcomes, and stronger information security culture, when they implement customized training programs. For example, while an annual phishing training requirement may sound reasonable, the requirement may not be effective at all organizations. Organizations must be willing to train more often, and for more skills, if they expect to create an environment in which users can identify and respond to information security issues appropriately.

Issue: Mandating new executive roles

Some of the regulations analyzed require organizations to create specific senior leadership positions, like the Chief Information Security Officer (CISO) role. Organizations cannot have strong information security culture without senior management support, but there is little research on the positive effect that CISOs, and other designated information security executives, have on information security culture. One study suggests CISOs struggle with legitimacy issues and do not have a significant impact on their organization’s information security culture. Organizations should not rely solely on newly installed information security executives to create an immediate impact on their institution’s information security culture. Instead, they should also encourage established executives to assist in the cultural transformation process.

Compliance is the baseline, not the goal for information security culture

Compliance will always be an objective for information security programs, but a checklist approach to information security can only achieve, at best, a checklist culture. We cannot expect compliance with any mandate, private or federal, to help organizations develop strong information security culture because information security culture factors, like training, must be tailored to each environment.

Organizations interested in improving their information security culture are encouraged to adopt verified information security culture practices and to periodically assess their organization’s culture using validated tools, like the HAIS-Q, to determine if those practices are effective. Organizations are already spending considerable resources on their information security programs, so they should consider investing in their own culture if they want those programs to be more effective.

Further Reading

Security culture and the employment relationship as drivers of employees’ security compliance.

The Contributions of Information Security Culture and Human Relations to the Improvement of Situational Awareness.

Information security culture – state-of-the-art review between 2000 and 2013.

The Influence of Organizational Information Security Culture on Information Security Decision Making.

Shaping intention to resist social engineering through transformational leadership, information security culture and awareness.

Cyber Security Culture: Counteracting Cyber Threats Through Organizational Learning and Training.

Information security: management’s effect on culture and policy.

Impacts of Comprehensive Information Security Programs on Information Security Culture.

Improving the information security culture through monitoring and implementation actions illustrated through a case study.

Please contact the SEI to obtain a copy of my master’s thesis, Information Security Culture Factors Assessment of Federal Regulations and Private Standards.


Insider Threat Incident Analysis by Sector (Part 1 of 9)

Hello, I am Randy Trzeciak, Director of the CERT National Insider Threat Center (NITC). I would like to welcome you to the NITC blog series on insider threat incidents within various sectors. In this first post, I (1) describe the purpose of the series and highlight what you can expect to see during the series, and (2) review the NITC insider threat corpus, which is the foundation for our empirical research and analysis. Join us over this nine-part series as we explore specific issues pertaining to insider threat in depth. We hope you will follow along, and we encourage you to provide feedback about other sectors that we should analyze.

Since 2001, the NITC has been collecting incidents committed by insiders, both with malicious and non-malicious (unintentional) intent, that cause harm to organizations. To date, we have collected over 2,000 incidents and have broken them into categories based on commonalities or how the incidents tend to evolve over time. These categories include Information Technology Sabotage, Theft of Information (Intellectual Property), Fraud, National Security Espionage, Workplace Violence, Unintentional Incidents, and Other Misuse (e.g., Privacy Violations and Miscellaneous Incidents). Analyzing insider incidents by organizational impact informs mitigation strategies for organizations. Information about the NITC insider incident types can be found here.

At conferences, workshops, and training deliveries, people often ask NITC members about the uniqueness of insider incidents in particular sectors. These people hope to identify unique mitigation strategies they can implement in their insider risk/threat programs. This blog series will address that hope by presenting a common analysis framework and identifying data to be considered for developing behavioral and technical risk indicators; characteristics of the insiders perpetrating the incidents; organizational events, actions, and conditions that may have influenced insiders to cause harm; detection methods; and organizational impact (e.g., financial, operational, and health and safety).

Our blog series will analyze and summarize insider incidents in the following sector-specific categories: Federal Government, State and Local Government, Financial Services, Healthcare, Manufacturing, and Information Technology.

For more information about the CERT NITC, see sei.cmu.edu/go/insider-threat. We're eager to hear your thoughts, ideas, and suggestions for insider threat mitigation. If you have questions, want to learn about future data analysis efforts regarding our insider threat incident corpus, or want to suggest a topic for our future research or blog posts, please send an email to us at [email protected]. Stay tuned for the next post, in which we discuss in depth our new structure for analyzing sector-specific data, or subscribe to a feed of the Insider Threat blog to be alerted when any new post is available.


How CERT-RMM and NIST Security Controls Help Protect Data Privacy and Enable GDPR Compliance, Part 1: Identifying Personally Identifiable Information

The costs of the steady stream of data breaches and attacks on sensitive and confidential data continue to rise. Organizations are responding by making data protection a critical component of their leadership and governance strategies. The European Union’s recent General Data Protection Regulation (GDPR) adds layers of complexity to protecting the data of individuals in the EU and European Economic Area. Organizations are struggling to understand GDPR’s requirements, much less become compliant. In this series of blog posts, I’ll describe how to use the CERT Resilience Management Model (CERT-RMM) to approach GDPR compliance and, more fundamentally, data privacy.

The European Commission defines personal data to be “any information relating to an individual, whether it relates to his or her private, professional or public life.” Under GDPR, which went into effect on May 25, 2018, businesses are required not only to comply with requirements but to demonstrate their compliance.

At its core, GDPR is about risk, in this case risk to data privacy and security. Dealing with risk is not new to the many organizations that have used CERT-RMM's resilience-focused approach to establishing threat and incident management programs. However, in the face of GDPR's requirements to address data privacy risk (and with EU data subjects already filing access requests with U.S. organizations), most data owners don't know what to do. Fortunately, just as CERT-RMM can drive resilience activities at the threat and incident management level, it can do the same at the enterprise level for GDPR compliance.

Adapting CERT-RMM for Data Privacy

Data owners can respond to the requests for EU subject data they are currently receiving, but with no established baseline for normal GDPR compliance, they can only respond in an unsustainable, ad hoc manner. I believe organizations can use CERT-RMM to create process improvement efforts that achieve compliance with the GDPR. My goal is to adapt CERT-RMM for data privacy by creating a model view of CERT-RMM relationships that drive resilience activities at the enterprise, engineering, operations, and process management levels.

CERT-RMM provides a model of an organization that is resilient to disruption. The model has 26 process areas, each of which includes a mixture of specific and general goals and practices. Twelve of the process areas drive the resilience management of data subjects' information privacy. They establish and use requirements for protecting and sustaining data subjects' information and privacy, and they establish personally identifiable information (PII) as a key element in service delivery.

Table 1: CERT-RMM process areas related to data privacy

CERT-RMM Process Area Operational Resilience Management Area
Asset Definition and Management (ADM) Engineering
Controls Management (CTRL) Engineering
Resilience Requirements Management (RRM) Engineering
Service Continuity (SC) Engineering
Compliance Management (COMP) Enterprise Management
Organizational Training and Awareness (OTA) Enterprise Management
Risk Management (RISK) Enterprise Management
Access Management (AM) Operations
Incident Management and Control (IMC) Operations
External Dependencies Management (EXD) Operations
Vulnerability Analysis and Resolution (VAR) Operations
Monitoring (MON) Process Management

Privacy by Design

GDPR requires data owners to implement “Data protection by design and by default.” Organizations will need to design privacy into their policies, procedures, and systems from the inception of organizational services, products, and processes. The CERT-RMM process areas above provide a resilience-based approach to privacy by design, considering the nature, purposes, context, and scope of the processes and their implications.

As with privacy design principles generally, not all 12 process areas will apply in all use cases, and the use of some CERT-RMM process areas can conflict with the use of others. Ultimately, each process area is a cluster of related practices that, when implemented collectively, satisfy a set of goals for making improvement in that area. Choosing which ones to implement depends on the people, information, technology, facilities, and organizational culture involved.

Using CERT-RMM to Identify Personally Identifiable Information

The first step for an organization improving data privacy and complying with GDPR is to identify what it needs to protect: any information relating to an identified or identifiable natural person (data subject), special categories of personal data, the digital systems storing personal data, and the categories of data they hold. Once the organization knows how the data is used and what value it holds, it can decide how to protect it under the organization’s Risk Management program.
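The inventory step can be partially automated. Below is a minimal, illustrative sketch of scanning text for a few common identifier formats; real PII discovery under GDPR covers far more (names, online identifiers, special categories of data) and typically relies on dedicated discovery tooling, so the patterns and function here are assumptions for illustration only.

```python
import re

# Illustrative patterns for a few common identifier formats only.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "us_phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan_for_pii(text):
    """Return matches per category so an asset inventory can record
    which systems hold which categories of personal data."""
    return {name: pattern.findall(text)
            for name, pattern in PII_PATTERNS.items()}

sample = "Contact J. Doe at jdoe@example.com or 412-555-0123."
print(scan_for_pii(sample))
# {'email': ['jdoe@example.com'], 'us_phone': ['412-555-0123'], 'us_ssn': []}
```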

Organizations can use CERT-RMM to implement an asset and risk management strategy that provides complete visibility of their assets. The following 10 process areas guide organizations through the identification of PII:

Table 2: CERT-RMM process areas related to identification of PII

CERT-RMM Process Area Operational Resilience Management Area
Asset Definition and Management (ADM) Engineering
Controls Management (CTRL) Engineering
Resilience Requirements Management (RRM) Engineering
Service Continuity (SC) Engineering
Compliance Management (COMP) Enterprise Management
Organizational Training and Awareness (OTA) Enterprise Management
Risk Management (RISK) Enterprise Management
Access Management (AM) Operations
External Dependencies Management (EXD) Operations
Monitoring (MON) Process Management

The CERT-RMM process areas above can be used to map to the following Articles of GDPR:

  • Article 1: Subject-matter and objectives (data protection as a fundamental right)
  • Article 2: Material scope (processing of personal data wholly or partly)
  • Article 4: Definitions (information relating to an identified or identifiable natural person, or data subject)
  • Article 6: Lawfulness of processing (compliance with existing laws)
  • Article 8: Conditions applicable to child’s consent in relation to information society services (compliance with laws, policies, and regulations of data privacy for children)
  • Article 9: Processing of special categories of personal data
  • Article 24: Responsibility of the controller (ensure and demonstrate processing follows GDPR)
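One way to make this mapping actionable is a simple traceability structure that records which process areas support which articles. The sketch below is hypothetical: the pairings shown are illustrative examples only, and an organization would derive its own mapping during gap analysis.

```python
# Hypothetical traceability map from CERT-RMM process areas to the GDPR
# articles listed above; the pairings are illustrative, not definitive.
RMM_TO_GDPR = {
    "ADM":  ["Art. 2", "Art. 4"],              # defining personal-data assets
    "COMP": ["Art. 6", "Art. 8", "Art. 24"],   # demonstrating lawful processing
    "RISK": ["Art. 1", "Art. 9"],              # risk to special-category data
    "MON":  ["Art. 24"],                       # evidence processing follows GDPR
}

def articles_covered(process_areas):
    """GDPR articles touched by a chosen set of process areas."""
    return sorted({article
                   for pa in process_areas
                   for article in RMM_TO_GDPR.get(pa, [])})

print(articles_covered(["ADM", "RISK"]))
# ['Art. 1', 'Art. 2', 'Art. 4', 'Art. 9']
```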

Figure 1 shows the portion of the model view of CERT-RMM relationships that drive resilience activities specifically about identifying PII.


Figure 1: CERT-RMM relationships that drive resilience activities for identifying PII

Data Privacy Is Good Business

With data breaches at an all-time high, the time is now for organizations to identify and protect the privacy of all their data subjects and drive toward compliance with the GDPR. Failure to do so will lead to significant disruption of business. What's more, adhering to a process model, such as CERT-RMM, can ultimately help organizations and businesses attract and retain data subjects. In the case of the GDPR, compliance demonstrates the organization's investments in security, privacy, and usability.

By communicating how they handle data privacy, organizations can build trust with data subjects, differentiate themselves from competitors, and grow in the global marketplace. Organizations must look within and beyond their network to identify and protect all data subjects. We recommend application of CERT-RMM to address the process of data privacy and bridge the GDPR compliance gap.

In the next entry, I’ll show how to protect data privacy with CERT-RMM.


Challenges Facing Insider Threat Programs and Hub Analysts: Part 2 of 2

In the first post in this two-part series, we covered five unique challenges that impact insider threat programs and hub analysts. The challenges included lack of adequate training, competing interests, acquiring data, analyzing data, and handling false positives.

As you read the new challenges introduced in this post, ask yourself the same questions: 1) How many of these challenges are ones you are facing today? 2) Are there challenges in this list that lead to an “aha” moment? 3) Are there challenges you are facing that did not make the list? 4) Do you need assistance with combating any of these challenges? Let us know your answers and thoughts via email at [email protected].

Challenge #6: False Negatives

In some regards, a more difficult challenge than dealing with false positives is dealing with false negatives. A false negative occurs when someone is indeed a threat, but the analyst lets a key indicator pass without flagging or escalating it, or the anomaly detection algorithm fails to detect the behavior of concern. In some insider threat programs, this is one of the most devastating consequences that can arise. The balance often lies in reducing false positives to a manageable number through automation while human analysts review the remaining alerts to ensure that no threats slip through the door as false negatives.
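The trade-off is easy to see in miniature. The sketch below, using hypothetical alert scores and ground-truth labels, shows how raising an alerting threshold cuts analyst workload and false positives at the cost of false negatives:

```python
# Hypothetical labeled alerts: a detection score plus ground truth
# established after investigation. Scores and fields are illustrative.
alerts = [
    {"score": 0.91, "true_threat": True},
    {"score": 0.75, "true_threat": False},
    {"score": 0.40, "true_threat": False},
    {"score": 0.35, "true_threat": True},   # missed at higher thresholds
]

def triage_rates(alerts, threshold):
    """Count flagged alerts, false positives, and false negatives at a
    given score threshold."""
    flagged = [a for a in alerts if a["score"] >= threshold]
    return {
        "flagged": len(flagged),
        "false_positives": sum(not a["true_threat"] for a in flagged),
        "false_negatives": sum(a["true_threat"] for a in alerts
                               if a["score"] < threshold),
    }

print(triage_rates(alerts, 0.5))  # {'flagged': 2, 'false_positives': 1, 'false_negatives': 1}
print(triage_rates(alerts, 0.3))  # {'flagged': 4, 'false_positives': 2, 'false_negatives': 0}
```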

Quick Win #1: Ensure that you understand what the organization (or insider threat program designated approving authority) considers to be a risk. The organization should have completed a risk assessment at various points during the implementation and operation of its insider threat hub. Similarly, ensure that your critical assets list is updated and the organization has a firm understanding of what the “crown jewels” are and their associated protection requirements.

Quick Win #2: Incorporate tabletop exercises and mock scenarios into your insider threat hub to see if related indicators are noticed. These tools have the added benefit of better training analysts to more efficiently determine if an indicator is worthy of further investigation. Ensure that all players know what is considered a false negative and ensure they can balance the number of alerts to determine which events are most urgent and damaging in a timely fashion within the organization’s risk appetite.

Challenge #7: Measuring Effectiveness

How does an insider threat program measure success? How are insider threat analysts assessed? Is it simply based on the number of items cleared from whatever tool analysts are using? Are analysts measured by how many inquiries lead to an investigation? The challenge is coming up with fair and useful metrics that measure the effectiveness of the hub and the analysts that support it. We have seen situations where leadership has come to the insider threat program with the question, “How many bad guys did you catch today?” This problematic approach is further compounded by the fact that it takes time for an organization to properly set up its program, and many organizations are struggling to determine how to measure effectiveness. While many programs are able to protect critical assets and intellectual property, some organizational components may not directly see the benefits of a program and instead see it as a burden that requires additional data calls and analysis.

Quick Win #1: Leverage an internal resource or trusted third party to complete an insider threat program evaluation and/or an insider threat vulnerability assessment. This type of evaluation helps reduce risk to critical assets by determining the efficacy of your insider threat program.

Quick Win #2: Strive to determine the criteria for benchmarking or evaluating your insider threat program. This may require capturing certain baselines ahead of time. For example, you may consider basic metrics, such as the number of inquiries that led to investigations, number of alerts that were reviewed, number of false positives reduced, or any number of related criteria. However, more advanced metrics that are carefully constructed and reviewed often yield the best results and support for the insider threat program.

Challenge #8: Tools or Combination of Tools to Implement

Over the past few years, there has been an influx of new tools that claim to be the silver bullet in solving the insider threat problem. The difficulty for insider threat programs and their analysts is navigating the tool landscape. It is crucial for the insider threat program to understand how each tool it uses works and how the tools work together. Where are there gaps and overlaps between the different tools? What combination of tools works the best and why?

Quick Win #1: Partner with other organizations to exchange ideas and best practices when it comes to tools. Relatedly, attend conferences such as RSA that have multiple vendors available to demonstrate the latest and greatest tools.

Quick Win #2: Contact the SEI to discuss Needlestack, the new tool-testing environment at the National Insider Threat Center. The landscape of tools is expanding at a rapid pace and is often as wide and varied as insider threat programs themselves. We have done the legwork for you to help explore a variety of features and functionality through combinations of tools. Each insider threat program is different, and there is no silver-bullet solution; some programs require combinations of tools to create a defense-in-depth strategy. However, through our robust tool-testing environment, we can recommend categories of tools that would be a useful addition to your insider threat program.

Challenge #9: Malicious vs. Non-Malicious (Does it even matter?)

One of the biggest challenges facing insider threat programs is discerning whether an insider is acting maliciously or whether the threat was unintentional. This is an important distinction that could have a tremendous impact on policy, process, and training improvements. For some insider threat programs, there is no difference between malicious and non-malicious, as both impact an organization's ability to complete its mission. In fact, they argue that the intent of the employee should not factor into any decision to investigate, only to prosecute. It is also vital to view each potential concerning indicator in the appropriate context. Each of these threats can be equally devastating.

Quick Win #1: Review the SEI Common Sense Guide, especially Practice 9, "Incorporate malicious and unintentional insider threat awareness into periodic security training for all employees." This practice is useful because it can encourage employees to identify potential actions or ways of thinking that could lead to an unintentional event: for example, someone who is willing to take more risks than the norm, who multitasks and is therefore more likely to make mistakes, who posts large amounts of personal information on social media, or who has a general lack of attention to detail.

Quick Win #2: Review the SEI paper Unintentional Insider Threats: A Foundational Study. This paper is recommended because it examines the problem of unintentional insider threat and how it compares to and differs from malicious insider threat. It explores cases and frequencies of occurrence across several categories and presents potential mitigations and countermeasures.

Challenge #10: Navigating Privacy, Civil Liberties, Legal Issues, and the Impact of GDPR

It is imperative that insider threat analysts follow privacy, civil liberties, and legal guidance, including international considerations such as the General Data Protection Regulation (GDPR). There are many potential challenges that an insider threat program may need to consider. Below are a few interesting scenarios to illustrate that point. Think about how your organization and insider threat program would respond or want to respond to each of them. Do you have the right governance in place and policies defined so that the insider threat program staff knows what to do in each situation?

Scenario A: A manager suspects that her employee is watching basketball videos while at work. Furthermore, she suspects that he is leaving for two hours around lunch time. She asks the insider threat program to provide a report of his Internet usage and his badge-in and badge-out records. Should the insider threat hub provide this information to the manager?

Scenario B: The insider threat program has determined that an employee is the “victim” of a scam; perhaps the alerts show she is sending money via Western Union in the hopes of a multimillion dollar windfall. Is it the insider threat program’s responsibility to intervene?

Scenario C: The insider has been absent from work more frequently than normal and has been withdrawn from her peers when they previously attended after-work events together. The insider has also been updating her will during work hours. Should the insider threat program intervene? If yes, how would the staff do so in an appropriate manner?

Given these situations, it is imperative that the organization define its policy and determine how it will react to different situations before program operation begins. It will also need to be flexible to address new issues and concerns (e.g., discovering suicidal behavior) as the program grows and expands.

Quick Win #1: Always work closely with your privacy, civil liberties, and legal counsel. If you need further guidance, contact the National Insider Threat Task Force (NITTF).

Quick Win #2: Provide training to the insider threat analyst hub so its members understand what authorization they do and do not have. Establish written policy and ensure that the policy is followed according to the legal guidance.

Quick Win #3: Review the SEI blog post, “GDPR and Its Potential Impacts for Insider Threat Programs.” In this blog post, the author considers what the GDPR means for some of the best practices discussed in the Common Sense Guide to Mitigating Insider Threats, 5th Edition. The author covers the best practices that are most important or most impacted by the GDPR. As was the case in the first part of this blog series, we highly recommend that you consider each of these challenges and have the appropriate conversations with the members of the insider threat program and specifically those working with or in the hub.

Each of these challenges can be explored individually; however, it is the combination of these challenges that can derail an insider threat program if not addressed properly. Therefore, it is important that these challenges not linger and that they be resolved as soon as possible, involving as many insider threat program stakeholders as required.

We want to hear what you think. Please send questions, comments, or feedback to [email protected].


Challenges Facing Insider Threat Programs and Hub Analysts: Part 1 of 2

The purpose of this two-part blog series is to discuss ten challenges that often plague insider threat programs and, more specifically, the analysts working in insider threat hubs. I am in a unique position to discuss this area because I have many years of experience working directly with operational insider threat programs of varying maturity levels. Thus, I have a front-row vantage point from which to understand the challenges that analysts face on a daily basis. In this blog post, I will discuss the first five of these challenges and associated recommendations (e.g., quick wins) facing many organizations.

As you read this blog, think about these questions: (1) How many of these challenges are you facing today? (2) Are there any challenges on this list that lead to an "aha" moment? (3) Are there challenges that you are facing that did not make it onto this list? (4) Do you need assistance (from inside or outside your organization) with combating any of these challenges? Let us know your answers and thoughts via email at [email protected].

Challenge #1: Adequate Training

One important and often overlooked aspect is training the analysts to know what to look for in the data that is pushed or pulled into the hub. That data could consist of HR records, network activity, badge access, and a myriad of other information useful for the analyst to examine. Frequently, determining an indicator of concern can be thought of as finding a needle in a stack of needles. It is imperative that the insider threat program team set the tone, expectations, rules, and measures of success for the analysts to follow.

Another concern is the breadth of the insider threat problem, which encompasses the technical, behavioral science, and counterintelligence domains. Within the technical domain there are specialty areas such as networking, databases, modeling, statistics, etc. Given budget and hiring restrictions, it is difficult to hire for all of these separate positions. Thus, it is imperative to provide training that brings analysts up to speed on as many of the domains as possible.

Quick Win #1: Enroll in the new SEI Insider Threat Analyst Course.

Quick Win #2: Enroll in the NITTF Insider Threat Hub Operations Course.

Challenge #2: Competing Interests

Consider a situation in which the hub is made up of various analysts from different organizations working on different contracts, each with a different role and responsibility within the insider threat program team. It is important to understand that insider threat is a team sport and requires collaboration. Another concern is how to best handle a particular concerning event; the question of whose recommendation prevails is often known as the right of first refusal. For example, suppose the hub data shows a high frequency of printing, during off-hours, and immediately before foreign travel. The analysts with a cyber background might recommend a different course of action from those with a counterintelligence (CI) background. In simple terms, one type of analyst may think the best course of action is to disable access, notify management, and request that the employee of concern be terminated. However, another set of analysts may recommend that management take a wait-and-see approach, the rationale being to see what else the insider is capable of, whom else they might be colluding with, and whether there is a foreign nexus at play.

Quick Win #1: Create an insider threat playbook and action plan. This playbook should be developed before there is an incident to ensure that the processes are well understood, tested, and revised, and that it is clear who has the authority to act (the right of first refusal).

Quick Win #2: Review the SEI Common Sense Guide to Mitigating Insider Threat, Fifth Edition, focusing on the section “Organization Wide Participation.”

Challenge #3: Acquiring Data

It is quite difficult to perform insider threat detection without the necessary data in place. Often the data is obsolete or does not cover all networks or employees. Additionally, data is held closely by its owner, and breaking down the barriers to allow seamless sharing is a challenge. Analysts must know the process for data collection and data sharing, and they must delicately balance both the frequency and the amount of data they request. I have seen many situations where analysts requested a mountain of data but never actually used it. On the flip side, I have seen analysts who were hesitant to request information for fear of "rocking the boat," perhaps due to the culture of the organization.

Once data authorization is given, several subsequent issues arise: How will the data be secured in transit and at rest? Who has access to the data? How long is access granted? How often is the data updated? Is the data being pushed, pulled, or is it a hybrid approach? All of these questions should be discussed ahead of time with the insider threat program management office, legal/privacy, and the data owners to reduce the impact on the stakeholders.
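
To make these questions concrete, the answers for each data feed can be captured up front in a small, reviewable artifact. Below is a minimal Python sketch of that idea; every field name and value is a hypothetical illustration, not a prescribed schema.

```python
# Hypothetical sketch: recording the agreed terms of a data feed as
# structured fields so the hub, legal/privacy, and the data owner all
# review the same answers. Field names and values are illustrative.
from dataclasses import dataclass, field

@dataclass
class DataFeedAgreement:
    source: str                       # data owner / system of record
    transport: str                    # "push", "pull", or "hybrid"
    update_frequency: str             # e.g., "daily"
    encryption_in_transit: str        # e.g., "TLS 1.2+"
    encryption_at_rest: str           # e.g., "AES-256"
    authorized_roles: list = field(default_factory=list)
    access_review_interval_days: int = 90

hr_feed = DataFeedAgreement(
    source="HR records system",
    transport="push",
    update_frequency="daily",
    encryption_in_transit="TLS 1.2+",
    encryption_at_rest="AES-256",
    authorized_roles=["hub analyst", "hub lead"],
)
```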

Quick Win #1: Create a data sharing and handling agreement.

Quick Win #2: Secure leadership buy-in. Ensure that the appropriate leaders are in place to help establish and enforce your data sharing agreements. Relatedly, senior leadership should help pave the way by negotiating and promoting information sharing.

Challenge #4: Analyzing Data

Once the analysts have access to the data, an entirely new set of challenges may arise. Many organizations–either through commercial tools or in-house methods–strive to develop a ranking of the riskiest employees. The risk equation is fluid and consists of many different variables, such as clearance held (top secret), position (system admin), and account privileges (super admin). It also includes different risk indicators, such as frequent use of "bad" keywords, accessing blacklisted sites, accessing file shares and networks without a need to know, frequent printing, etc. With that said, how does the analyst calibrate the data to show the riskiest person in the organization? If an employee has numerous minor violations, is that score higher or lower than that of an employee with one egregious violation? Who is the employee of greater concern? Stated another way, is quantity or quality scored higher? Another challenge is making sense of the data. A particular employee of concern may be ranked high on a list of most anomalous users. All of this information should be analyzed against a baseline, which can be that same employee's previous computer usage or the usage of a peer doing the same type of job. The challenge is understanding why that particular person is anomalous and what a change in their baseline really means.
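
To make the calibration question concrete, here is a minimal Python sketch of a weighted risk score compared against an employee's own historical baseline. The indicator names, weights, and privilege multiplier are hypothetical illustrations, not a validated or endorsed model.

```python
# A minimal, illustrative risk-scoring sketch. Indicator names, weights, and
# the privilege multiplier are hypothetical, not a validated model.
from statistics import mean, stdev

WEIGHTS = {
    "bad_keyword_hit": 3.0,
    "blacklisted_site_visit": 4.0,
    "share_access_no_need_to_know": 8.0,
    "off_hours_printing": 5.0,
}

def risk_score(events, privilege_multiplier=1.0):
    """Weighted sum of indicator counts, scaled by position/privilege."""
    return privilege_multiplier * sum(
        WEIGHTS.get(name, 1.0) * count for name, count in events.items()
    )

def deviation_from_baseline(today, history):
    """Compare today's score to the employee's own historical scores."""
    if len(history) < 2 or stdev(history) == 0:
        return 0.0
    return (today - mean(history)) / stdev(history)

minor = risk_score({"bad_keyword_hit": 10})                  # 30.0
egregious = risk_score({"share_access_no_need_to_know": 1},
                       privilege_multiplier=2.0)             # 16.0 (super admin)
```

Under these assumed weights, ten minor keyword hits (30.0) outscore one egregious, privileged access violation (16.0); whether that ordering is right is precisely the calibration decision the hub must make.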

Quick Win #1: Ensure appropriate communication between all insider threat hub analysts to ensure that the decision is being made with all appropriate information.

Quick Win #2: Leverage technology (but don't rely on it completely) to help you make sense of the data you are seeing. Be cognizant of developing a baseline over time and comparing any deviations against it.

Challenge #5: False Positives

False positives resulting from the analysis of the data can be quite frustrating and time consuming for the analyst. Generally speaking, a false positive in the context of insider threat can be thought of as the system firing an alert when there is nothing malicious there. For example, consider an insider threat hub that uses a particular "bad" keyword list. Now suppose one of the words on the list is "DWI" (as in driving while intoxicated). Is the system going to generate an alert every time it encounters the word "bandwidth," which contains the substring "dwi"? The ability of the system and the analyst to reduce the number of false positives is paramount for success. However, this goal can conflict with the organization's risk appetite and its desire not to miss a single potential threat.

Quick Win #1: Review the blog post titled "Navigating the Insider Threat Tool Landscape."

A related step is to familiarize yourself with natural language processing, regular expressions, and other techniques to reduce false positives.
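
As a small illustration of the regular expression point, the sketch below shows how naive substring matching on "dwi" fires on "bandwidth," while a word-boundary pattern does not (the keyword and sample text are made up):

```python
# Illustrative only: reducing keyword false positives with word boundaries.
import re

text = "Please review the bandwidth report before the DWI awareness training."

# Naive substring matching fires because "banDWIdth" contains "dwi":
naive_hit = "dwi" in text.lower()          # True, a false positive

# A word-boundary pattern matches "DWI" only as a standalone term:
pattern = re.compile(r"\bDWI\b", re.IGNORECASE)
precise_hits = pattern.findall(text)       # ['DWI'], from the real mention
```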

Quick Win #2: Ensure that you understand what the organization (or the insider threat program's designated approving authority) considers acceptable risk. The organization should have completed a risk assessment at various points during the implementation and operation of its hub.

We recommend that you consider each of these challenges and have the appropriate conversations with the members of the insider threat program, especially those working with or in the hub.

Be sure to check back for part two of this blog series where I will be covering five additional challenges facing insider threat programs and hub analysts, including: (1) false negatives, (2) effectiveness measures, (3) use of insider threat tools, (4) types of insider incidents (malicious or unintentional), and (5) privacy, legal, civil liberty, and GDPR considerations.

Please send questions, comments, or feedback to [email protected].


Improving Cybersecurity Governance via CSF Activity Clusters

The National Institute of Standards and Technology (NIST) recently released version 1.1 of its Cybersecurity Framework (CSF). Organizations around the world–including the federal civilian government, by mandate–use the CSF to guide key cybersecurity activities. However, the framework's 108 subcategories can feel daunting. This blog post describes the Software Engineering Institute's recent efforts to group the 108 subcategories into 15 clusters of related activities, making the CSF more approachable for typical organizations. The post also gives example scenarios of how organizations might use the CSF Activity Clusters to facilitate more effective cybersecurity decision making.

Cybersecurity Governance and Org Charts

Setting up an organizational structure specifically for optimal cybersecurity governance and operations can be highly challenging. The level of cybersecurity success or failure is often as dependent on culture, leadership, experience, and personalities as it is on adherence to a line-and-box organization chart structure. Still, the structure is necessary, and spreading responsibility for 108 separate cybersecurity activities across that structure isn’t straightforward.

Decomposing the overall cybersecurity mission into a more standard set of components and focusing on the dependent relationships among those components can yield valuable insights that may help to inform organizational structure and improve cybersecurity effectiveness. We formed our standard set of cybersecurity components from the CSF because of its widespread use and scope of cybersecurity activities.

Methodology

Subcategory Dependencies

The underlying methodology used for the CSF relationship mappings and clustering is based on two complementary approaches. The first involves comparing each of the 108 CSF subcategories against one another (see Figure 1). We manually evaluated each of the 11,556 subcategory pairs for dependency, with four possible outcomes: x is dependent on y, vice versa, they are codependent, or there is no dependency. By "dependency," we mean either a subcategory chronologically preceding another or feeding data to another. For example, ID.GV-2 is dependent on ID.AM-6 because ID.AM-6, "cybersecurity roles and responsibilities are established," must precede ID.GV-2, "cybersecurity roles and responsibilities are coordinated and aligned with internal roles and external partners."
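
As a rough sketch of the scale of this exercise, the Python fragment below enumerates the ordered subcategory pairs and records one outcome for each; the placeholder IDs and outcome labels are ours for illustration, not part of the CSF.

```python
# Rough sketch: every ordered pair of the 108 subcategories receives one of
# four dependency outcomes. IDs and outcome labels are placeholders.
from itertools import permutations

subcategories = [f"SC-{i}" for i in range(1, 109)]   # stand-ins for the 108 IDs
pairs = list(permutations(subcategories, 2))
assert len(pairs) == 108 * 107                       # 11,556 ordered pairs

OUTCOMES = {"X_DEPENDS_ON_Y", "Y_DEPENDS_ON_X", "CODEPENDENT", "NONE"}

# One recorded outcome per pair, e.g., ID.GV-2 depends on ID.AM-6:
evaluations = {("ID.AM-6", "ID.GV-2"): "Y_DEPENDS_ON_X"}
```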


Figure 1. CSF subcategory relationship matrix

We then graphed the matrix of these dependency relationships, creating a visual representation of the connectedness of the subcategories. This approach is extremely useful for identifying functional dependencies and highlighting key interfaces between subcategories and groups of subcategories. However, it does not explicitly identify how subcategories and groups of subcategories are related from an organizational perspective, in terms of skillsets required, level of organizational responsibility, and functional similarity.

Organizational Similarity

Our second approach builds on the first by analyzing the organizational characteristics of the newly clustered CSF subcategories. My colleagues and I used a consensus-building discussion to refine the subcategory groupings, based on three questions:

  • To what extent do the outcomes described by the subcategories require similar personnel with similar skillsets?
  • To what extent do they require similar organizational authorities?
  • To what extent are the outcomes described by the subcategories "functionally" similar?

The consensus answers to these questions created a secondary graph of the subcategories, which was then overlaid onto the dependency graph. The resulting integrated viewpoint incorporates both approaches–dependency and organizational similarity–into a model that could provide a baseline for addressing governance tasks, such as assigning functions to organizations and ensuring key dependency interfaces are managed. The graph also indicates relationships between the clusters as well as relationships between the subcategories within each cluster.
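
A minimal sketch of this overlay idea follows, assuming the networkx library and made-up edge weights; it is illustrative only, not our actual tooling.

```python
# Illustrative sketch (not the SEI's actual tooling): overlay the dependency
# graph and the organizational-similarity graph, then derive candidate
# clusters with modularity-based community detection. Weights are made up.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

dependency = nx.Graph()      # undirected view of the dependency matrix
dependency.add_edge("ID.AM-6", "ID.GV-2", weight=1.0)
# ...one edge per dependent or codependent subcategory pair

similarity = nx.Graph()      # consensus answers to the three questions
similarity.add_edge("ID.AM-6", "ID.GV-2", weight=2.0)
# ...one weighted edge per organizationally similar pair

# Overlay: sum edge weights across the two graphs.
combined = nx.Graph()
for g in (dependency, similarity):
    for u, v, data in g.edges(data=True):
        w = data.get("weight", 1.0)
        if combined.has_edge(u, v):
            combined[u][v]["weight"] += w
        else:
            combined.add_edge(u, v, weight=w)

clusters = greedy_modularity_communities(combined, weight="weight")
```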

The 15 CSF Activity Clusters

We’re defining a cluster as a grouping of similar cybersecurity processes that support a common cybersecurity administrative or operational activity. We grouped the NIST CSF subcategories into the following cybersecurity Activity Clusters:

1. Environment and Mission: Activities that define the key purpose(s) of the enterprise and ensure that mission is properly communicated to the entire organization
2. Risk Program Definition: Risk appetite, assessment, and mitigation approaches to ensure both alignment with the organization's mission and cost-effective management of attendant risks
3. Cybersecurity Governance: Activities that ensure proper leadership, culture, sponsorship, and cybersecurity policies and procedures are in place
4. Awareness and Training: Awareness and skills necessary to achieve the desired cybersecurity outcomes
5. Asset Management: Activities that identify, document, and manage the organization's assets
6. External Dependencies Management: Activities necessary to effectively manage supply chain risk (e.g., contractual vehicles, third-party assessment)
7. Data Management: Engineering data access to maintain appropriate confidentiality, integrity, and availability
8. System Management: Technical activities, policies, and procedures to ensure systems are properly developed, configured, and managed throughout their lifecycle
9. Network Management: Engineering to ensure protection of the confidentiality, integrity, and availability of data in transit
10. Access Management: Activities to manage access, both physical and logical, to organizational assets
11. Vulnerability Management: Proper identification and management of cybersecurity vulnerabilities within the organization
12. Threat Information Management: Proper identification and understanding of threats to assets
13. Event and Incident Analysis: Activities to ensure the appropriate level of response commensurate with the risk represented by events and incidents
14. Monitoring and Detection: Proper event monitoring and analysis to maintain full-network cybersecurity situational awareness
15. Incident Response and Recovery: Activities to ensure that the response to incidents is commensurate with the defined organizational risk program as well as relevant policies and procedures

Use Cases

Here are two examples of how the CSF Activity Clusters could improve the effectiveness of organizational cybersecurity governance.

Example 1: Organizational Tension

By overlaying the CSF Activity Clusters onto its organizational chart, an enterprise could see how specific cybersecurity activities and groups are distributed within the organization. The mapping could reveal possible areas of decision-making tension. For example, if responsibilities for one cluster's subcategories are distributed across three or four individuals, management can better understand the challenges those individuals face in executing the subcategories soundly and consistently.
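
As a simple illustration, assuming a hypothetical mapping of cluster responsibilities pulled from an org chart, a few lines of Python could flag clusters whose activities span many groups:

```python
# Hypothetical sketch: flag clusters whose subcategories are owned by many
# different parts of the org chart, a possible source of decision-making
# tension. The cluster-to-owner mapping is invented for illustration.
cluster_owners = {
    "Access Management": ["Physical Security", "IT Operations", "HR"],
    "Vulnerability Management": ["Security Engineering"],
}

for cluster, owners in cluster_owners.items():
    distinct = set(owners)
    if len(distinct) >= 3:
        print(f"Possible tension: {cluster} spans {len(distinct)} groups")
```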

Example 2: Resource Allocation

By introducing and mapping other metadata (e.g., budget, headcount) against the CSF Activity Clusters and their inter- and intra-cluster relationships, an organization could better identify its distribution of governance-related resources. Such a mapping could inform a realignment of resources to increase the likelihood of effective CSF outcomes.
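
A similar hedged sketch, this time rolling invented budget and headcount figures up to the cluster level:

```python
# Hypothetical sketch: roll budget and headcount up to the cluster level to
# see how governance resources are actually distributed. Figures are invented.
resources = [
    {"cluster": "Monitoring and Detection", "budget": 400_000, "headcount": 4},
    {"cluster": "Awareness and Training",   "budget": 50_000,  "headcount": 1},
]

totals = {}
for r in resources:
    entry = totals.setdefault(r["cluster"], {"budget": 0, "headcount": 0})
    entry["budget"] += r["budget"]
    entry["headcount"] += r["headcount"]

total_budget = sum(t["budget"] for t in totals.values())
for cluster, t in sorted(totals.items()):
    print(f"{cluster}: {t['budget'] / total_budget:.0%} of budget, "
          f"{t['headcount']} staff")
```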

Conclusion

The CSF Activity Cluster concept allows organizations to look beyond the standard line-and-box organization charts to understand and address their governance challenges. It enables a resource perspective that can help organizations identify where and how to address those challenges and, ultimately, improve cybersecurity outcomes. Going forward, we are looking at ways to visualize the clusters and their relationships, as well as other possible applications of the concept.

Please let us know what you think! Write to us at [email protected].

The author would like to acknowledge the contributions of Doug Gardner, Carl Grant, Matt Trevors, and Mike Wigal to this effort.


Assets and Information in the Insider Threat Indicator Ontology

Insider threat programs can better implement controls and detect malicious insiders when they communicate indicators of insider threat consistently and in a commonly accepted language. The Insider Threat Indicator Ontology is intended to serve as a standardized expression of potential indicators of malicious insider activity.

This ontology is also a formalization of much of our team’s research on insider threat detection, prevention, and mitigation. It bridges the gap between natural language descriptions of malicious insiders, malicious insider activity, and machine-generated data that analysts and investigators use to detect behavioral and technical observables of insider activity. The ontology is a mechanism that multiple participants can use to share and test indicators of insider threat without compromising organization-sensitive data, thereby enhancing the data fusion and information sharing capabilities of the insider threat detection domain.

As researchers and practitioners have implemented the ontology, we have received feedback that they find it difficult to differentiate between the asset and information concepts. In particular, consistency problems arose in implemented models when actions were performed on information rather than on assets. For example, when an action is performed directly on an information object, the reasoner infers that the object is an asset. However, asset and information are disjoint classes. This post describes our design decisions and clarifies the distinction between these concepts.

An important design consideration for the Insider Threat Indicator Ontology was to model information and the documents, files, or databases that contain it. Given its intended applications, cyber observables to detect potential risk indicators (PRIs) to information assets were a major focus throughout the design of the ontology. From a cyber observable perspective, the PRIs on a database are different from PRIs on a file that is emailed over a network, even if it contains the same information.

This difference led to our team's decision to treat technology assets and information as separate things, allowing technology assets to be containers that hold information through the 'hasInformation' object property. So, actions related to information are always performed on assets, not directly on the information they contain. The following statement and figure depict an example of a file asset that serves as a container for a specific piece of trade secret information.

“The insider emailed a file containing trade secret information.”
Figure: A file asset serving as a container for trade secret information
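
To illustrate the pattern in code, here is a hedged sketch using the rdflib Python library; the namespace URI and all identifiers other than hasInformation are placeholders, not the ontology's published names.

```python
# Hedged sketch with rdflib: the 'hasInformation' container pattern. The
# namespace URI and all identifiers except hasInformation are placeholders.
from rdflib import Graph, Namespace, RDF

ITIO = Namespace("http://example.org/itio#")   # placeholder namespace
g = Graph()

g.add((ITIO.file1, RDF.type, ITIO.File))                      # the asset
g.add((ITIO.secret1, RDF.type, ITIO.TradeSecretInformation))  # the information
g.add((ITIO.file1, ITIO.hasInformation, ITIO.secret1))        # asset holds info

# The email action targets the asset, never the information directly:
g.add((ITIO.email1, RDF.type, ITIO.EmailAction))
g.add((ITIO.email1, ITIO.performedOn, ITIO.file1))            # assumed property
```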

There are multiple ways to use the ontology components, and this is an example of a design pattern that leverages the distinction between information and assets. For more examples, please see the Insider Threat Indicator Ontology.

We encourage those using the Insider Threat Indicator Ontology to provide feedback to us. Your ideas may identify potential design patterns as well as areas where our intended applications require clarification.

Please send questions, comments, or feedback to [email protected].