
Windows Event Logging for Insider Threat Detection

In this post, I continue my discussion on potential low-cost solutions to mitigate insider threats for smaller organizations or new insider threat programs. I describe a few simple insider threat use cases that may have been detected using Windows Event logging, and I suggest a low-effort solution for collecting and aggregating logs from Windows hosts.

Numerous publications and guides exist, including those from the NSA, Microsoft, and SANS, that explain how and why host-level logging should be used for Windows systems. This cybersecurity concept applies as much to insider threat detection and response as it does to general troubleshooting, intrusion detection, and incident response; it should not be overlooked as a valuable resource. This is particularly true considering that the implementation comes at no additional software licensing cost on top of the base operating system that you are already using. Many security information and event management systems (SIEMs) require additional management overhead, and they may even introduce additional attack vectors. However, Windows Event Forwarding and Collection provides a straightforward mechanism that can be used to centrally aggregate logs across Windows systems without installing additional client collection agents.

Consider the following insider incident:

A system administrator was dating another employee who was fired; the fired employee began sending management threatening emails demanding her reinstatement. Because of the threatening emails, the system administrator was fired as well. Before leaving, the insider created a backdoor administrator account, which he later used to attack the organization. The insider accessed the company’s servers several times after termination, deleted sensitive data, and shut down several machines. The insider was discovered via access logs tied to the backdoor account.

This case highlights the need to audit account creation and privileged group modification, both of which may lead to the creation of unauthorized access paths. The following table lists Windows security event IDs that pertain to account management, which includes activities such as creating and disabling user accounts and groups and modifying group permissions.

Event ID   Description
608        User Right Assigned
624        User Account Created
626        User Account Enabled
631        Security Enabled Global Group Created
632        Security Enabled Global Group Member Added
635        Security Enabled Local Group Created
636        Security Enabled Local Group Member Added
645        Computer Account Created
646        Computer Account Changed
648        Security Disabled Local Group Created
649        Security Disabled Local Group Changed
650        Security Disabled Local Group Member Added
653        Security Disabled Global Group Created
654        Security Disabled Global Group Changed
655        Security Disabled Global Group Member Added
658        Security Enabled Universal Group Created
659        Security Enabled Universal Group Changed
660        Security Enabled Universal Group Member Added
663        Security Disabled Universal Group Created
664        Security Disabled Universal Group Changed
665        Security Disabled Universal Group Member Added
4720       A user account was created
4727       A security-enabled global group was created
4731       A security-enabled local group was created
4744       A security-disabled local group was created
4749       A security-disabled global group was created
4754       A security-enabled universal group was created
4759       A security-disabled universal group was created
4783       A basic application group was created

(The three-digit event IDs apply to Windows Server 2003 and earlier; the 4000-series IDs apply to Windows Vista/Server 2008 and later.)

These types of security events should occur relatively infrequently on domain controllers and even more infrequently as local account or group modifications on workstations and servers. Depending on the frequency observed, it may be operationally feasible to configure an email alert for these types of activities. You can use the Windows Event Viewer on the Forwarded Events log on your collector (or even on individual servers) to create a task based on specific event IDs. Filter the log to locate an event for the desired ID, then right-click and select Attach Task To This Event. You can use this task method to call specific programs or scripts, such as a PowerShell script that sends a notification email to your security team.
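As a rough illustration of the triage step (not a substitute for the built-in task mechanism), the filtering logic can be sketched in a few lines of Python that scan an exported CSV of forwarded events for account-management event IDs. The column names, the sample data, and the subset of IDs shown here are illustrative assumptions for demonstration.

```python
import csv
from io import StringIO

# Illustrative subset of account-management security event IDs
# (Windows Vista/Server 2008 and later numbering).
ACCOUNT_MGMT_IDS = {4720, 4722, 4727, 4728, 4731, 4732,
                    4744, 4749, 4754, 4756, 4759, 4783}

def find_account_mgmt_events(csv_text):
    """Return (time, machine, id) tuples for account-management events.

    Assumes a CSV export with Id, TimeCreated, and MachineName columns,
    such as one produced by Get-WinEvent | Export-Csv.
    """
    alerts = []
    for row in csv.DictReader(StringIO(csv_text)):
        if int(row["Id"]) in ACCOUNT_MGMT_IDS:
            alerts.append((row["TimeCreated"], row["MachineName"], int(row["Id"])))
    return alerts

# Hypothetical sample export: a routine logon (4624), a new user account
# (4720), and a member added to a security-enabled local group (4732).
sample = """Id,TimeCreated,MachineName
4624,2019-04-01 08:15,WKSTN-01
4720,2019-04-01 08:17,DC-01
4732,2019-04-01 08:18,DC-01
"""

for when, machine, event_id in find_account_mgmt_events(sample):
    print(f"ALERT: event {event_id} on {machine} at {when}")
```

A script along these lines could be invoked by the attached task (or on a schedule) and extended to send a notification email to your security team.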

[Figure 1: Attach Task To This Event]

The following incident highlights the need to monitor printing activity, which is fairly straightforward to accomplish for Windows-based workstations and print servers:

An insider expressed disgruntlement to his co-workers about current organizational policies. He logged into a system and printed a sensitive document, which he then physically exfiltrated and mailed to an external party.

In this case study, the PrintService operational log could have been used to collect useful information, such as the title of the document that was printed, the user who printed it, the printer name, the total byte count, and the number of pages printed. You can readily enable this logging on centralized Windows print servers and user workstations by (1) opening the Event Viewer, (2) navigating to Applications and Services Logs > Microsoft > Windows > PrintService, (3) right-clicking Operational, and (4) selecting Enable Log.

[Figure 2: Enable Log]

After enabling the log, you will begin to see an event ID 307 entry for each print job submitted on the system.

[Figure 3: Event ID 307]

Unless your organization is very small or printing is minimal, it would be impractical to analyze these events individually. However, using just the information contained in this event type, you can do some interesting anomaly detection across all of these events based on page count and size, and you can do trivial keyword searching on the titles of the documents.
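To make that concrete, here is a minimal Python sketch of such screening, assuming the user, document title, and page count have already been parsed out of the event ID 307 records. The sample jobs, keyword list, and z-score threshold are all illustrative assumptions, not recommended values.

```python
import statistics

# Hypothetical print jobs parsed from PrintService event ID 307 entries:
# (user, document title, pages printed).
print_jobs = [
    ("alice", "Weekly status report", 3),
    ("bob", "Meeting agenda", 2),
    ("carol", "Customer list - CONFIDENTIAL", 5),
    ("dave", "Design spec", 4),
    ("eve", "Full database export", 250),
]

KEYWORDS = ("confidential", "proprietary", "export")

def flag_jobs(jobs, z_threshold=1.5):
    """Flag jobs with anomalous page counts or sensitive keywords in titles."""
    pages = [p for _, _, p in jobs]
    mean, stdev = statistics.mean(pages), statistics.pstdev(pages)
    flagged = []
    for user, title, p in jobs:
        z = (p - mean) / stdev if stdev else 0.0
        if z > z_threshold or any(k in title.lower() for k in KEYWORDS):
            flagged.append((user, title, p))
    return flagged

for user, title, pages in flag_jobs(print_jobs):
    print(f"REVIEW: {user} printed '{title}' ({pages} pages)")
```

In practice, the same screening could run over a CSV export of the forwarded events rather than a hard-coded list.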

If you are forwarding all of this log data to your Event Collector, you can use a few simple PowerShell commands to output it to a flat file as input to an anomaly detection or analysis pipeline. Specifically, you can use the Get-WinEvent cmdlet to connect locally or remotely to the Event Collector and then export the results using the Export-Csv cmdlet:

PS C:\Windows> Get-WinEvent -LogName "ForwardedEvents" `
-ComputerName wef-server -MaxEvents 100 | Export-Csv output.csv

If you deploy a more robust SIEM tool, this effort will not be lost since you will have taken the necessary steps to centralize your logging, allowing you to then deploy the SIEM tool’s event log collectors on your Event Collector servers instead of across all the systems in your enterprise.

Once you have enabled the desired event logs and implemented some sort of centralized collection mechanism, one of the next steps is to begin analyzing the data to provide meaningful and actionable intelligence and alerting. Stay tuned for more content from the CERT National Insider Threat Center, refer to our current publications (such as Analytic Approaches to Detect Insider Threats), or consider attending our instructor-led Insider Threat Analyst course.

Subscribe to our Insider Threat blog feed to be alerted when any new post is available. For more information about the CERT National Insider Threat Center, or to provide feedback, please contact [email protected].


The CERT Division’s National Insider Threat Center (NITC) Symposium

Addressing the Challenges of Maturing an Insider Threat (Risk) Program

On May 10, 2019, the Software Engineering Institute’s National Insider Threat Center (NITC) will host the 6th Annual Insider Threat Symposium, with this year’s theme, “Maturing Your Insider Threat (Risk) Program.” The purpose of the symposium is to bring together practitioners on the front lines of insider threat mitigation to discuss the challenges and successes of maturing their insider threat (risk) programs. You will have the opportunity to learn from others how to move beyond the initial operating capacity of your program.


This event will be open to the Department of Defense, U.S. and international governments, and private-sector insider threat communities, with presentations and panel sessions from government, industry, and academia. We anticipate over 225 security professionals will attend, evenly split across industry and government, with no participation by the media. This will be an ideal venue for honest and open discussions about the challenges facing organizations as they attempt to stand up and improve insider threat mitigation programs.

Our mission at the NITC is to assist in the development, implementation, and measurement of effective insider threat programs by performing research, modeling, analysis, and outreach to define socio-technical best practices that help organizations deter, detect, and respond to evolving insider threats.

May 10, 2019
8:00 – 8:30 am Registration
8:30 – 4:00 pm Symposium

NRECA Conference Center
4301 Wilson Blvd.
Arlington, VA


Registration to this event is free, but space is limited to the first 225 registrants. A continental breakfast and lunch will be provided.

Where to Stay:
Hotel accommodations in the Arlington, VA area

Preliminary Event Agenda:

8:30 – 8:45

Welcome / Introduction

· Mr. Randall Trzeciak, Director – CERT National Insider Threat Center

8:45 – 9:15

Community Updates

· OUSD(I) – Mr. Jeffrey Smith

· DoD Insider Threat Management and Analysis Center (DITMAC) – Ms. Delice-Nicole Bernhard

· National Insider Threat Task Force (NITTF) – Ms. Pamela Prewitt

· Intelligence and National Security Alliance (INSA) – Mr. Sandy MacIsaac

9:15 – 10:00

Facilitating Insider Threat Analysis Using OCTAVE FORTE

· Mr. Brett Tucker – Software Engineering Institute / CERT Division

· Mr. Randall Trzeciak – Software Engineering Institute / CERT Division

10:00 – 10:30

Keynote Address

· U.S. Representative Chrissy Houlahan (PA) – (INVITED)

10:30 – 10:45

Morning Break

10:45 – 11:30

Insider Threat Program Maturity Framework

· Ms. Pamela Prewitt – National Insider Threat Task Force (NITTF)

11:30 – 12:00

2019 Verizon Insider Threat Report

· Mr. John Grim – Senior Manager, Verizon Security Research

12:00 – 1:00

Lunch Break

1:00 – 1:30

Maturing an Insider Threat Program – An Industry Perspective

· Mr. Douglas Thomas – Director, CI Operations & Corporate Investigations, Lockheed Martin Corporation

1:30 – 2:15

Maturing an Insider Threat Program – Incorporating Behavioral Analytics

· Dr. Christopher Myers – Chief, Behavioral Science Division, National Geospatial-Intelligence Agency (INVITED)

2:15 – 2:30

Afternoon Break

2:30 – 3:15

Maturing an Insider Threat Program – Utilizing Machine Learning for Insider Anomaly Detection

· To Be Determined

3:15 – 3:45

Maturing an Insider Threat Program – A Government Perspective

· Mr. Andrew Jordan – Insider Threat Program Manager, Marine Corps Intelligence Activity

3:45 – 4:00

Closing Remarks

· Mr. Daniel Costa, Technical Team Lead – CERT National Insider Threat Center

We hope to see you at this important event on May 10th in Arlington, VA.


Are You Providing Cybersecurity Awareness, Training, or Education?

When I attend trainings, conferences, or briefings, I usually end up listening to someone reading slides about a problem. Rarely am I provided with any solutions or actions to remediate the problem. As a cybersecurity trainer with 17+ years of experience and a degree in education, I understand that developing a good presentation is a challenge in any domain. Fortunately for cybersecurity professionals, the National Institute of Standards and Technology (NIST) can help you choose which kind of presentation to give. This blog post will review the three types of presentations defined by NIST: awareness, training, and education.


What are you presenting?

You have to know whether you’re delivering a presentation for awareness, training, or education. Here are the definitions, according to NIST Special Publication (SP) 800-16, Information Technology Security Training Requirements: A Role- and Performance-Based Model.


Awareness presentations are intended to allow individuals to recognize IT security concerns and respond accordingly. – NIST SP 800-16

If the purpose of your briefing is to simply tell your audience about a topic or problem so that they can respond, you’re providing awareness. Provide the information and suggest actionable solutions for your audience.


Training strives to produce relevant and needed security skills and competency by practitioners of functional specialties other than IT security (e.g., management, systems design and development, acquisition, auditing). – NIST SP 800-16

Describe the new skills, provide practice (either guided or independent), and maybe even provide a checklist or job aid that will prompt the audience to use those new skills and abilities after they leave your presentation. Your checklist or job aid will improve not only that person’s work, but also the cybersecurity of their office and the transference of that skill to others within their organization.

If you want to change their normal behaviors, then you are providing training.


Education integrates all of the security skills and competencies of the various functional specialties into a common body of knowledge, adds a multi-disciplinary study of concepts, issues, and principles (technological and social), and strives to produce IT security specialists and professionals capable of vision and proactive response. – NIST SP 800-16

Education is generally associated with beginning a career or entering a new field. For example, a high school graduate or someone changing careers would attend a college or university to receive an education in cybersecurity. This audience must learn the breadth and depth of knowledge necessary to begin a successful career in the cybersecurity industry. Once on the job, they would receive job-specific training to focus their knowledge on successfully completing the tasks of their employment.


At the Software Engineering Institute and within Carnegie Mellon University, we provide awareness, training, and education to a variety of audiences. Knowing which to use in the right situation is important.

  • If your audience needs to know about a cybersecurity situation so they can devise a solution, you are providing awareness.
  • If you are trying to change your audience’s behavior or improve their knowledge, skills and abilities to improve their cybersecurity, you are providing training.
  • If you are trying to create well-rounded cybersecurity professionals who can take what they have learned, add it to other knowledge, and expand it to different situations to improve the overall body of knowledge of cybersecurity, you are providing education.

Here is my final piece of practical advice, especially when speaking to cybersecurity professionals: Your audiences should always leave with new information, a new way of operating, or a list of tasks to perform or complete. If you can do that, you can make a difference in the way your audience conducts cybersecurity and protects the information entrusted to their care.


Insider Threats in Entertainment (Part 8 of 9: Insider Threats Across Industry Sectors)

This post was co-authored by Carrie Gardner.

The Entertainment Industry is the next spotlight blog in the Industry Sector series. Movie and television producers have long entertained the public with insider threat dramas such as Jurassic Park, Office Space, and the more recent Mr. Robot. These dramas showcase the magnitude of damage that can result from incidents involving presumably good, trusted employees. Yet as we discuss in this post, movie producers and the entertainment industry are not immune from experiencing such incidents themselves.

According to a SelectUSA article, the Entertainment industry is expected to be valued at $830 billion by 2022, making this sector a prized target for malicious actors. With areas such as music, film, video gaming, theater, and hospitality, the industry comprises multiple sub-sectors, each requiring individual attention for identifying insider threats and preventing insider incidents.

Of the 26 Entertainment malicious insider threat incidents in our case corpus, we identified 26 related victim organizations. Within the 26 Entertainment organizations, we identified 18 organizations classified as “Hotels, Amusement, Gambling, and Restaurants,” and the remaining 8 are classified as “Content Publishers,” such as media producers for TV and web services. Perhaps surprisingly, two of the subsectors under Entertainment did not have any recorded insider incidents: “Performing Arts and Spectator Sports” and “Art, Museums, and Historical Sites.”

Bar chart of Entertainment Organizations Impacted by Insider Threat Incidents, 1996 to present. Hotels, Gambling, etc. organizations had 18 incidents. Content Publisher organizations had 8 incidents.

In addition to the 26 incidents where the organizations directly employed the insider, we identified 11 organizations involving a trusted business partner relationship (e.g., contractor or temporary employee).

Pie chart of Entertainment Victim Organization Relationship to Insider. In 11 organizations, or 30%, the insider was a trusted business partner. In 26 organizations, or 70%, the insider was a permanent employee.

Sector Overview

Insider incidents in the Entertainment sector span all three of the case types (fraud, IT sabotage, and theft of intellectual property [IP]) we used to analyze data in our Industry Sector blogs. The majority of the incidents affecting Entertainment organizations are fraud cases, which account for 61.5% of all incidents.

Bar chart of Entertainment Insider Incidents by Case Type. Fraud: 16. IP Theft: 5. IT Sabotage: 4. Fraud and Theft of IP: 1.

Sector Characteristics

Given how few reported incidents involved IT sabotage or theft of IP, the following table focuses on the 5W1H (Who? What? When? Where? Why? How?) of fraud incidents. These calculations exclude instances where the data was unknown.

Insider Fraud Incidents in the Entertainment Sector

Who? Over half (55.5%) of insiders were with the victim organization for five years or more. Over two-thirds (69.2%) of insiders had an authorized account and data access. Insiders ranged in age from 21 to over 51, with insiders in their twenties accounting for 30.7%, thirties 23%, forties 30.7%, and fifties just 15.3%. An overwhelming majority (89.5%) of the insiders were full-time employees, and most (90%) were current employees. Several insiders occupied management (33.3%), accounting (13.3%), or other non-technical positions (40%); some occupied multiple roles.

What? Entertainment fraud incidents generally targeted theft of money (66.6%) (e.g., cash in the cash register), followed by theft of customer data, such as customer credit cards (22.2%).

When? For the incidents where attack time was known (15 total), roughly one-third (33.3%) involved activity that occurred only during regular work hours, a small percentage (6%) involved activity only outside of regular hours, and the majority (60%) involved malicious activity both during and outside regular hours.

Where? In fraud incidents where attack location was known (15 total), nearly two-thirds (60%) involved both on-site and remote activity; the remainder (40%) involved only on-site access.

How? In the cases where the method was known, the techniques were fairly technical: more than two-thirds (66.6%) of insiders used a skimming device, and the remaining third (33.3%) used other, unspecified technical methods. Just over one-quarter (26.6%) received their fraudulent funds by wire transfer, and another quarter (26.6%) abused their access to obtain fraudulent funds.

Why? Unsurprisingly, as in most fraud cases, the motive for all 15 fraudsters was financial gain (100%).


The majority of insider incidents in the Entertainment sector occurred due to fraud motivated by financial gain. These insiders were usually with the company for over five years, had access to accounts and data, and were full-time employees. With most of the insiders in trusted positions, they had the means and methods to commit their crimes with relative ease.

It’s interesting that despite some of the insiders being employed in non-technical positions, two-thirds of them used skimming devices, a tool generally considered to be relatively technically sophisticated. In addition to using skimmers, the insiders tended to move their funds through wire transfers or they misused their access to move funds around.

Final Thoughts

Many movies and TV shows depict insider threat dramas, but the industry that produces them is not immune to the real thing. We identified incidents of fraud, IP theft, and sabotage across the industry, including among content publishers.

Stay tuned for the next post, in which we feature Cross-Sector Analysis, or subscribe to a feed of the Insider Threat blog to be alerted when any new post is available. For more information about the CERT National Insider Threat Center, or to provide feedback, please contact [email protected].

Entries in the “Insider Threats Across Industry Sectors” series:


Insider Threats in Healthcare (Part 7 of 9: Insider Threats Across Industry Sectors)

This post was co-authored by Carrie Gardner.

Next in the Insider Threats Across Industry Sectors series is Healthcare. As Healthcare-related information security conversations are predominantly driven by security and privacy concerns related to patient care and data, it’s important to recognize the magnitude of security lapses in this sector. Patients can face severe, permanent consequences from medical record misuse, alteration, or destruction. Medical record fraud via identity theft, classified simply as Fraud in our incident corpus, is one of the primary types of security incidents observed in this sector.

Defining and enforcing security and privacy protections in this sector is the Health Insurance Portability and Accountability Act of 1996 (HIPAA), which has since been expanded. The HIPAA Privacy Rule specifies data-access standards for personal health information (PHI) (i.e., who may access PHI). The HIPAA Security Rule defines requirements for ensuring that proper authentication and authorization policies and practices are in place for accessing electronic PHI in medical records.

In our National Insider Threat Center (NITC) Incident Corpus, we identified 88 malicious insider incidents impacting Healthcare organizations. These incidents do not include unintentional insider threats, such as an employee who accidentally left a laptop at a bus stop or sent an email containing PHI to an unintended recipient. The 88 malicious insider incidents map to 91 healthcare organizations that were directly victimized in the attack (i.e., some incidents have more than one direct victim organization). Of these victim organizations, Health Networks make up the largest subsector. Health Networks, also known as Integrated Health Systems, are networks of hospitals and private practices dedicated to bringing healthcare to a specific region.

Bar graph of Healthcare Organizations Impacted by Insider Threat Incidents, 1996 to present. The bars show the number of victim organizations by subsector. Health Network: 25. Diagnostics, Support Services, and Medical Manufacturing: 21. Private Practices, Walk-In Clinics, etc.: 20. Healthcare Insurance: 10. Pharmacology: 7. Hospitals: 6. Advocacy Services: 2.

In addition to the 91 direct victim organizations, 20 victim organizations indirectly employed the insider in some sort of trusted business partner relationship or non-regular full-time employment (e.g., contractors).

Pie chart of Healthcare Victim Organization Relationship to Insiders. 91 organizations, or 82%, employed the insider. 20 organizations, or 18%, did not directly employ the insider.

Sector Overview

Fraud is the most prevalent case type across all of the insider threat incidents within the Healthcare sector, occurring in some form in about 76% of all incidents. This is higher than the observed frequency of fraud across the entire NITC corpus (68%). Within these fraud cases, we generally see individuals with access to patient payment records taking advantage of their access to customer/patient data to create fraudulent assets, such as credit cards, in order to make a profit.

Bar chart of Insider Incidents within Healthcare by Case Type, 1996 to present. The bars show the number of incidents per case type. Fraud: 67. Theft of IP: 12. Sabotage: 8. Sabotage and Fraud: 1.

Sector Characteristics

Below is a summary of the Healthcare Fraud incidents that are contained within the NITC corpus.

Insider Fraud Incidents in Healthcare

Who? Most healthcare fraudsters began their malicious activities within their first five years of working for the organization (64.3%). A majority (78.2%) misused their authorized access (e.g., a privileged account or PII data access). Insiders were distributed fairly evenly across age groups: twenties (27.8%), thirties (25.9%), forties (31.5%), and fifty and older (14.8%). Nearly all of the healthcare insiders (82.0%) were full-time employees.

What? Over half (52.7%) of fraud incidents within the healthcare sector involved the theft of customer data, while 37.5% of incidents directly targeted financial assets (e.g., cash). When personally identifiable information (PII) was stolen, almost all of it was customer data (94.9%) rather than employee data (5.1%).

When? Of the incidents where the attack time was known, 70% took place solely during work hours; the other 30% took place both during and outside of work hours.

Where? Of the incidents where the location of the activity was known, a majority occurred only on site (72.7%). Some involved both on-site and remote activity (23.6%), and a couple involved only remote activity (3.6%).

How? Most incidents used rudimentary techniques. In almost half of the incidents, the insider either received or transferred funds (25.8%) and/or abused their privileged access (24.2%). In over a third of incidents (36.4%), the insider tried to conceal their activity in some manner, such as by modifying log files, using a compromised account, or creating an alias.

Why? More than three-quarters of the insider healthcare fraud incidents (84.8%) were motivated by the insider’s desire for financial gain. The only other stated motives were entitlement (e.g., the insider felt entitled to pay for time not worked) and the desire to gain a competitive business advantage, each of which appeared once.


Although Healthcare may be an industry defined by unique regulations (e.g., HIPAA), the statistics gathered for it are similar to the statistics gathered from the broader NITC corpus. For almost all of the insider fraud cases within healthcare, the insider followed a similar path of improperly using patient PII or PHI to acquire some asset in order to gain a profit.

Financial impact differs slightly from the Healthcare sector to the broader NITC corpus. From the incidents with a reported financial impact, eight healthcare organizations (11.6%) recorded a financial impact of greater than $1 million. A higher percentage of fraud incidents (16.9%) outside of the Healthcare sector in the NITC corpus recorded the same financial loss. Notably, we did not find a significant difference in high financial impact. This is noteworthy because, given the gravity of healthcare data and the legal and reputational penalties associated with a breach, we might expect a potentially higher frequency of significant financial loss for the Healthcare sector.

Final Thoughts

Healthcare information security should be of the utmost importance for administrators and IT staff alike. Identity theft is the most common misuse of patient data, and its victims can face severe consequences, including substantial medical debt.

To better protect healthcare organizations from insider threat incidents, it is suggested that organizations participate in an Information Sharing and Analysis Center (ISAC) to receive pertinent information and help propagate a collaborative security environment. In addition to participating in an ISAC, it is also suggested that organizations enforce least privilege concerning organizational roles and data access along with tracking and blocking data exfiltration.

Stay tuned for the next post, in which we spotlight the Entertainment sector. Or subscribe to a feed of the Insider Threat blog to be alerted when any new post is available. For more information about the CERT National Insider Threat Center, or to provide feedback, please contact [email protected].

Entries in the “Insider Threats Across Industry Sectors” series:


Top 5 Incident Management Issues

The CERT Division of the SEI has a history of helping organizations develop, improve, and assess their incident management functions. Frequently we discover that an organization’s primary focus is on security incident response, rather than the broader effort of security incident management. Incident response is just one step in the incident management lifecycle. In this blog post, we look at five recurring issues we regularly encounter in organizations’ Incident Management programs, along with recommended solutions. By discovering and resolving these issues, organizations can attain a better cybersecurity posture.

Incident Management Lifecycle

The incident management evaluation process we use is based on a number of known standards and guidelines from government and industry, such as the National Institute of Standards and Technology (NIST) Special Publications (SP) 800-61 Rev. 2 and 800-53 Rev. 4, DOD guidance, and our own internal research. Currently we evaluate organizations against a phased incident management lifecycle with associated categories and subcategories as described below.

  • PLAN focuses on implementing an operational and successful incident management program.
  • DEFEND relates to actions taken to prevent attacks, mitigate the impact of attacks that do occur, and remediate actual or potential malicious activity.
  • IDENTIFY includes proactively collecting information about current events, potential incidents, vulnerabilities, or other incident management functions.
  • ACT includes the steps taken to analyze, resolve, or mitigate an event or incident.
  • MAINTAIN focuses on preserving and improving the computer security incident response team (CSIRT) or incident management function itself.

The incident management lifecycle has five phases and subcategories. Phase 1 is Plan, with subcategories Establish Incident Management Program and Develop Tools/Processes. Phase 2 is Defend, with subcategories Risk Assessment, Operational Exercises, and Network Defense. Phase 3 is Identify, with subcategories Network and Systems Monitoring and Threat and Situational Awareness. Phase 4 is Act, with subcategories Reporting, Analysis, and Response. Phase 5 is Maintain, with subcategories Program Management, Development Technology, and Physical Security. The Maintain phase leads back to the Plan phase.

The Top 5 Issues in Incident Management

Based on our Incident Management Evaluations, we have discovered the most common issues encountered by organizations with deficient incident management programs. Understanding these problems can provide insights into better management of incidents before they become major security concerns.

(1) No list or database of critical assets

An absence of a database or list of critical assets is typically due to a lack of asset management processes and procedures. Without documentation of critical assets and data, an organization is less able to defend and protect them from potential attackers and other threats.


Recommendation: Develop an inventory of all critical assets and data. It is also important to establish and document processes for managing these lists, including processes for updates, reviews, and storage.
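As a minimal sketch of what one inventory entry and its review process might look like (the field names and the one-year review interval are illustrative assumptions, not a prescribed schema):

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class CriticalAsset:
    """One entry in a critical-asset inventory (illustrative fields only)."""
    name: str
    owner: str                 # accountable business owner
    data_classification: str   # e.g., "PII", "internal", "public"
    last_reviewed: date        # supports the documented review process
    dependencies: list = field(default_factory=list)

def needs_review(asset: CriticalAsset, today: date,
                 max_age_days: int = 365) -> bool:
    """Flag entries that have gone too long without review."""
    return (today - asset.last_reviewed).days > max_age_days
```

A periodic job could walk the inventory and report every asset for which `needs_review` returns true, turning the "updates and reviews" process into something checkable.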

(2) No insider threat program

The risk of a successful insider exploit in the organization will increase without an insider threat program. The loss or compromise of critical assets, personally identifiable information (PII), sensitive information, and other valuable assets from insider fraud, theft, sabotage, and acts of violence or terror may produce irreparable damage.


Recommendation: Develop a formalized insider threat program with defined roles and responsibilities. The program must have criteria and thresholds for conducting inquiries, referring to investigators, and requesting prosecution. For more information, see the Common Sense Guide to Mitigating Insider Threats, Fifth Edition.

(3) Operational exercises not conducted

Organizations that do not conduct operational exercises cannot practice standard operating procedures (SOPs) in a realistic environment. Gathering lessons learned, improving IM operations and procedures, and validating operations may also suffer.


Recommendation: Develop a formal process to perform operational exercises, capture lessons learned, and incorporate them into future exercise objectives and operational SOPs. For more information, see NIST SP 800-84 and NIST SP 800-61 Rev. 2.

(4) No operational security (OPSEC) program

Not having a formal operational security program can reduce awareness of sensitive information and operations. This lack of knowledge can lead to unintentional exposure of data about processes and procedures, along with the inability to properly handle, store, and transport sensitive data.


Recommendation: Establish a formal OPSEC program that covers sensitive information. The program should include policies for identifying, controlling, and handling sensitive information. The organization should also implement a policy for the storage, transport, and release of sensitive data. For more information, see NIST SP 800-61 Rev. 2 and NIST SP 800-53 Rev. 4.

(5) Documented plans and policies not developed

Not having developed plans and policies, such as an Incident Management Plan or a Communications Plan, can cause a number of problems. These include delayed response times due to missing stakeholder and staff contact details, improper escalation of incidents, and the creation of new issues.


Recommendation: Develop an Incident Management (IM) Plan that all stakeholders review during updates. Organizations should develop related policies and procedures, such as a Communications Plan and Information Management Plan. Specifically, they should develop, maintain, distribute, and test an organization-wide communications plan that lists groups (e.g., Information Technology, Human Resources, Legal, Public Affairs, and Physical Security), individuals, and the details of their functional roles and responsibilities as well as relevant contact information. The Information Management Plan should contain a schema of classification and appropriate labels. The plan should include policies or guidance on media relations and acceptable use. For more information, see Executive Order 12958 Classified National Security Information.
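The roster portion of a communications plan can be maintained as structured, testable data rather than a static document. A sketch under stated assumptions: the group names follow the examples above, while the role descriptions and contact addresses are placeholders, not recommendations:

```python
# Each entry maps a functional group to its role and points of contact.
# Contact details are placeholders; a real plan would list, distribute,
# and periodically test the actual roster.
COMMS_ROSTER = {
    "Information Technology": {"role": "technical containment and recovery",
                               "contacts": ["it-oncall@example.org"]},
    "Human Resources": {"role": "personnel actions in insider cases",
                        "contacts": ["hr-duty@example.org"]},
    "Legal": {"role": "legal review and law-enforcement referral",
              "contacts": ["legal-ir@example.org"]},
    "Public Affairs": {"role": "media relations per policy",
                       "contacts": ["pa-desk@example.org"]},
    "Physical Security": {"role": "facility access and response",
                          "contacts": ["secops@example.org"]},
}

def missing_contacts(roster: dict) -> list:
    """Groups with no listed point of contact -- a gap a plan test should catch."""
    return [group for group, info in roster.items() if not info["contacts"]]
```

Running `missing_contacts` as part of a periodic plan test is one way to catch the stale or absent contact details that delay incident response.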

Next Time

My next blog post will discuss how to develop an Incident Management Plan and a Communications Plan, both of which are part of any productive Incident Management Program. In the meantime, check out the SEI’s recently released Incident Management Capability Assessment. The capabilities it presents can provide a baseline or benchmark of incident management practices, which organizations can use to assess their current incident management function. You can also learn more about the SEI’s Incident Management Resources and our work in this area.

Posted on

Insider Threats in Information Technology (Part 6 of 9: Insider Threats Across Industry Sectors)

This blog post was co-authored by Carrie Gardner.

As Carrie Gardner wrote in the second blog post in this series, which introduced the Industry Sector Taxonomy, information technology (IT) organizations fall in the NAICS Code category professional, scientific, and technology. IT organizations develop products and perform services advancing the state of the art in technology applications. In many cases, these services directly impact the supply chain since many organizations rely on products and services from other organizations to perform and carry out their own business goals. This post covers insider incidents in the IT sector and focuses mainly on malicious, non-espionage incidents.

The CERT Insider Threat Incident Corpus has 60 incidents in Information Technology, with 63 victim organizations[1] spread across three main subsectors: Telecommunications, IT Data Processing, and Application Developers. Telecommunications organizations account for the largest share of insider incidents in the corpus for this sector. In one telecommunications incident, a contractor working for an Internet service provider (ISP) committed Sabotage by gaining administrator access and disabling the Internet connection for all customers for almost three weeks, costing the victim organization more than $65,000 to fix.

Bar chart of the number of IT organizations, by subsector, impacted by insider threat incidents, 1996 to the present. The Telecommunications subsector had 30 incidents. IT, Data Processing, Hosting, Etc. had 21 incidents. Software Publishers and Web Developers had 12 incidents.

Federal mandates put forth by EO 13587 and NISPOM Change 2 require the DoD, U.S. government agencies, law enforcement, and defense contractors that access or handle classified information to have insider threat programs that monitor IT systems for threats such as data exfiltration and sabotage. The absence of similar federal mandates for the non-cleared private sector leaves many organizations, including those in IT, without insider threat programs or insider threat security controls. These organizations may be more susceptible to insider attacks, and incidents may go undetected simply because of a lack of security awareness training about insider threats and their impacts.

Across the 60 IT insider incidents, we identified 81 impacted organizations, of which 63 (78%) were both the direct victim and the direct employer of the insider. The remaining 18 (22%) involved trusted business partner relationships in which the insider was a contractor or other non-regular full-time employee of the victim organization.

Pie chart of information technology victim organization relationship to insiders. 18 organizations, or 22%, did not directly employ the insider. 63 organizations, or 78%, employed the insiders.

Sector Overview

Insider incidents in the IT sector included IT Sabotage (36.67%), Fraud (21.67%), and Theft of IP (16.67%).

Bar chart of the number of insider incidents within IT by case type, 1996 to the present. Sabotage: 22. Fraud: 13. Theft of IP: 10. Fraud and Theft of IP: 7. Sabotage and Theft of IP: 5. Misuse: 3.

The remaining analysis focuses on Sabotage, the most frequent incident type.

Sector Characteristics

Over one third (36.67%) of incidents impacting IT organizations involved Sabotage. The statistics below include only incidents where the case type was solely Sabotage (22 incidents). Each attribute (i.e., Who, What, When, Where, How, Why) considers only cases where that attribute was known.

Who? A majority (71.4%) were former employees, and an overwhelming majority (80.0%) of the insiders were full-time employees while employed at the victim organization. Two-thirds (66.7%) of insiders were with the victim organization for less than a year. One-fifth (20.0%) of insiders were former employees whose access was not deactivated, and some (15.0%) of them had administrator or root privileges. Insiders were relatively young in age: teens (9.5%), twenties (38.1%), thirties (47.6%), and forties (4.8%). Most insiders occupied system administrator (31.8%), non-technical management (27.3%), or other technical (22.7%) positions. Some insiders occupied various positions, some including the aforementioned roles, throughout their tenure at the victim organization.

What? More than half (60.7%) of the targets in Sabotage incidents were networks or systems. Another common target was data: insiders deleted, modified, copied, or hid customer data (10.7%) and/or passwords (7.1%).

When? For the incidents where attack time was known (10), half (50.0%) involved insider malicious activity taking place only outside of work hours. Over a third (40.0%) involved malicious activity taking place only during work hours. Few Sabotage incidents (10.0%) took place both during and outside of work hours.

Where? Insiders primarily committed sabotage off-site using some type of remote access (81.0%). Few insiders committed sabotage on-site (9.5%) or took actions both on-site and remotely (9.5%).

How? Unlike fraudsters, insiders committing sabotage are usually in more technical roles and can harm systems by changing lines of software code. Few insiders sabotaged backups (17.6%), created an unauthorized account (17.6%), or used a keystroke logger (5.9%). Almost a third (30.0%) of insiders abused their privileged access or modified critical data (30.0%). A quarter (25.0%) received or transferred fraudulent funds.

Why? Unsurprisingly, in all 20 cases with a known motive, the insiders were seeking revenge (100.0%).
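Statistics like those above are computed only over incidents where the given attribute is known. A hypothetical sketch, with toy records standing in for real corpus data, illustrates that convention:

```python
# Toy incident records; None means the attribute is unknown for that incident.
# These values are illustrative only, not drawn from the CERT corpus.
incidents = [
    {"type": "Sabotage", "status": "former", "site": "remote"},
    {"type": "Sabotage", "status": "former", "site": "remote"},
    {"type": "Sabotage", "status": "current", "site": "on-site"},
    {"type": "Fraud", "status": "current", "site": None},  # site unknown
]

def pct(attr: str, value: str, records: list) -> float:
    """Share of records with attr == value, among records where attr is known."""
    known = [r for r in records if r.get(attr) is not None]
    return 100.0 * sum(r[attr] == value for r in known) / len(known)

sabotage = [r for r in incidents if r["type"] == "Sabotage"]
print(round(pct("site", "remote", sabotage), 1))  # 66.7
```

Because unknown values are dropped before dividing, each percentage can rest on a different denominator, which is why each attribute above is qualified with the number of cases where it was known.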


Insiders committing Sabotage in the IT sector tended to be in high-trust IT positions, such as those with administrator-level access and permissions. These insiders typically committed the incident outside of typical working hours. In insider Sabotage incidents where the financial impact was known and the victim organization directly employed the insider (17), the median financial impact was between $10,000 and $20,000. Overall, in IT insider incidents across all evaluated incident types where impact was known (63 total), the median impact was between $5,000 and $26,000. For comparison, the median financial impact of a domestic, malicious insider threat incident (across all industries within the CERT Insider Threat Incident Corpus where financial impact is known) is between $95,200 and $257,500. Six Sabotage incidents (9.5%) occurring within the IT sector had a financial impact of $1 million or more.

Final Thoughts

Reliance on the supply chain within the IT sector is growing rapidly, particularly in today’s popular business models. Most IT Sabotage incidents were conducted by employees who had the greatest privilege and trust, which is why the CERT Division’s Common Sense Guide to Mitigating Insider Threats (CSG), Fifth Edition recommends creating separation of duties and granting least privilege.

By thoroughly understanding motives and implementing effective behavioral and technical monitoring strategies, organizations can better prevent, detect, and respond to insider incidents, including Sabotage. The cases of Sabotage in the IT sector tell us that former employees possess knowledge that can do devastating harm. Some may retain access to an organization’s systems, and some may be motivated to seek revenge, a known factor in these incidents. Best practice 20 of the Common Sense Guide referenced above recommends that organizations implement better practices and procedures for employee separation and disabling access to organizational systems.

Stay tuned for the next post, which will spotlight the Healthcare Services sector, or subscribe to a feed of the Insider Threat blog to be alerted when any new post is available. For more information about the CERT National Insider Threat Center, or to provide feedback, please contact [email protected].

[1] For some events, there is a one-to-many mapping from a single incident to multiple victimized organizations that directly employed the insider.


Posted on

Insider Threats in State and Local Government (Part 5 of 9: Insider Threats Across Industry Sectors)

This post was co-authored by Drew Walsh.

Continuing our industry sector series, this blog post highlights insider threat trends in the State and Local Government subsector and explores distinct characteristics of fraud, the most common insider case type in the CERT Insider Threat Corpus for this subsector.

State and local governments, including emergency services, make up nearly half of the collected public administration insider incidents. Unlike the Federal Government subsector, there is no national requirement for state and local government organizations to have an insider threat mitigation capability.

Only 19 states require by statute that their state-run institutions maintain some level of cybersecurity readiness to protect their sensitive data. These statutes generally prescribe that state institutions implement and maintain security practices and procedures for data protection. Some of these state measures additionally mandate periodic security audits or employee awareness training. While these measures seem to be a good-faith effort to support information assurance, none of the statutes specifically acknowledge or address threats from authorized users–insider threats.

Some of the state and local government organizations that are victims of insider incidents include state departments of motor vehicles, courts, police departments, and health care programs. The graph below lists the breakdown of the 87 cases in the State and Local Government subsector in which organizations were direct victims of an insider threat attack.

Bar graph of the number of insider incidents impacting state and local departments and agencies of different types from 1996-present. State: 46. Local: 22. Emergency Services: 19.

In addition to the 87 incidents where the organizations involved directly employed the insider, we identified 16 incidents in the State and Local Government subsector that involved a Trusted Business Partner (e.g., contractors or temporary employees).

State and Local Government Victim Organization Relationship to Insiders. The pie chart shows 87 cases, or 84%, where the victim organization employed the insider and 16 cases, or 16%, where the victim organization did not directly employ the insider.

Sector Overview

The most frequent insider incident case type in the State and Local Government subsector is fraud, occurring in 77% of incidents. These findings are consistent with findings in the Federal Government subsector of public administration. In these fraud incidents, we see insiders with access to sensitive data, such as personally identifiable information (PII), attempting to illegally profit by selling the data or their authorization to handle sensitive data or systems.

For example, a Department of Motor Vehicles (DMV) clerk misused their access to create a fraudulent ID card and sell their access to sensitive systems. Similarly, the corpus contains instances where DMV clerks misused their access to scrape PII about individuals without a need to know and then sold that information for profit.

In both scenarios, the insider abused their authorization to impact the confidentiality or integrity of sensitive data or systems.

State and Local Insider Incidents by Case Type (2003-present). The bar chart shows the number of insider incidents by case type. Fraud: 67. Sabotage: 9. Miscellaneous: 8. Theft of IP: 2. Sabotage and Fraud: 1.

Sector Characteristics

We summarize the findings from these fraud incidents below. These statistics consider only incidents where the case type is exclusively fraud and the industry subsector is either State Government, Local Government, or Emergency Services.

Insider Fraud Incidents in State and Local Government

Who? More than half (63.4%) of insiders were with the victim organization for five years or more. A majority (86.79%) of insiders misused their account or data access; a small fraction (13.21%) did not use their authorized privileges but compromised or created another account. Insiders were fairly evenly distributed in age: twenties (25.0%), thirties (16.1%), forties (26.7%), and fifties (32.2%). An overwhelming majority (92.7%) of the insiders were full-time employees, and all (100.0%) were current employees. Several insiders occupied law enforcement (22.4%), management (16.4%), or other non-technical (44.8%) positions. Some insiders occupied more than one of the aforementioned roles.

What? Over half (53.5%) of the targets in fraud incidents were related to personally identifiable information (PII), including the theft of non-employee data (32 targets), employee data (4 targets), or law enforcement sensitive databases (2 targets). Other common targets were related to financial assets (17 targets) or physical property (2 targets).

When? For the incidents where the attack time was known (50 total), nearly all (98.0%) involved insider activity during work hours. Almost a quarter of the incidents (24.0%) also involved activity outside of work hours. Only one fraud incident took place solely outside of work hours.

Where? In fraud incidents where the attack location was known (57 total), most (98.2%) involved activity on-site. However, some (19.3%) of the incidents also involved remote access. Only one incident appeared to involve remote access only.

How? Technical methods included sabotaging backup tapes (1 incident), planting a logic bomb (1 incident), and installing a keylogger (1 incident), but most technical methods were rudimentary. Almost half of insiders abused their privileged access (46.2%) and/or received or transferred fraudulent funds (28.2%).

Why? As with the Federal Government subsector, the primary motive for the fraud cases was financial gain (89.6%). Other motives included recognition (1 incident) and benefiting a foreign entity (1 incident).


Incidents in the State and Local Government subsector appear to share many similarities with Federal Government incidents, such as attack patterns and insider objectives (i.e., fraud). When we look at the overall impact and the targeted assets, we notice some differences. Federal Government insiders target non-employee data (31.6%), passports and immigration databases (21.3%), or financial assets (9.6%). State and local government insiders target non-employee data (49.6%) and financial assets (15.0%) at much higher rates, with an additional focus on employee data (7.1%).

Final Thoughts

The majority of insider incidents in the State and Local Government subsector of public administration occur due to the insider’s authorized access to sensitive data. The unauthorized use of access can make it difficult for employers to differentiate activity that is potentially malicious from activity that is characteristic of typical job functions. Some specific best practices that organizations can use to mitigate insider threats include auditing employee activity such as database searches, monitoring the movement of monetary funds, and auditing the creation and modification of user accounts.
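The first of those practices, auditing employee database searches, can be approximated with a simple volume baseline. A minimal sketch, assuming per-employee daily lookup counts are already available from audit logs; the z-score test, thresholds, and field names are illustrative assumptions, not a vetted detection method:

```python
from statistics import mean, stdev

def flag_outliers(daily_queries: dict, z_threshold: float = 3.0) -> list:
    """Flag employees whose record-lookup volume is far above their peers'.

    daily_queries maps employee -> number of record lookups today.
    A z-score against the peer group is a crude baseline; a real program
    would also weigh need-to-know, time of day, and which records were read.
    """
    counts = list(daily_queries.values())
    mu, sigma = mean(counts), stdev(counts)
    if sigma == 0:  # everyone identical; nothing stands out
        return []
    return [emp for emp, n in daily_queries.items()
            if (n - mu) / sigma > z_threshold]

# Toy example: one clerk running ten times the typical number of lookups.
queries = {"clerk_a": 40, "clerk_b": 35, "clerk_c": 42,
           "clerk_d": 38, "clerk_e": 400}
print(flag_outliers(queries, z_threshold=1.5))  # ['clerk_e']
```

A flag from a check like this is only a starting point for an inquiry, since high query volume can also be a characteristic of legitimate job functions, which is exactly the differentiation problem described above.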

Stay tuned for the next post, in which we spotlight the Information Technology sector. Or subscribe to a feed of the Insider Threat blog to be alerted when any new post is available. For more information about the CERT National Insider Threat Center, or to provide feedback, please contact [email protected].

Posted on

Insider Threats in Finance and Insurance (Part 4 of 9: Insider Threats Across Industry Sectors)

This post was co-authored by Jonathan Trotman.

In the previous post of our series analyzing and summarizing insider incidents across multiple sectors, we discussed some of the mandates and requirements associated with federal government insider threat programs as well as documented insider threat incidents. In this post, we will discuss information security regulations and insider threat metrics based on Finance and Insurance incidents from our CERT National Insider Threat Center (NITC) Incident Corpus.

For context, Finance and Insurance refers to a collection of organizations working across various facets of financial services. The graph below shows incident counts in this sector.

Bar chart of the number of insider incidents impacting finance and insurance organizations, from 1996 to the present. Banks and credit unions had 190 incidents. Insurance had 14. Other financial services had 33.

With the statistics that follow, keep in mind that Banks and Credit Unions are far more represented than Insurance or Other Financial Services in the CERT NITC Incident Corpus.

In total, we identified 237 malicious, non-espionage insider incidents where a Finance and Insurance organization was both the victim organization and the direct employer. There were 148 additional incidents where a Finance and Insurance industry organization was impacted by a Trusted Business Partner (e.g., temporary employees, outsourced computer support, or cleaning services) or an insider incident at another organization. In one incident, more than 20 individuals (approximately half of them insiders and half of them outsiders) targeted more than a dozen Finance and Insurance organizations across multiple states as part of a Stolen Identity Refund Fraud (SIRF) scheme coordinated by an outsider-ringleader. This incident underscores the ubiquity of SIRF schemes within Finance and Insurance organizations and the threat of collusion with outsiders. Finance and Insurance organizations are well-served by getting involved with information sharing groups, such as the Financial Services Information Sharing and Analysis Center (FS-ISAC) to collaborate with others in the sector.

Pie graph of Finance and Insurance Victim Organization Relationship to Insiders. 237 victims, or 62%, employed the insider. 148 victims, or 38%, did not directly employ the insider.

Sector Overview

To understand the Finance and Insurance insider threat landscape, we first need to understand its regulations and their background check guidance:

  • 1999: The Gramm-Leach-Bliley Act (GLBA), also known as the Financial Modernization Act, requires that financial institutions take steps to ensure the security and confidentiality of customers’ personal and financial information. The Federal Trade Commission (FTC), as part of its implementation of GLBA, issued the Safeguards Rule. The Safeguards Rule in turn recommends, among other best practices, checking references and performing background checks as part of the hiring process for employees that would have access to customer information.
  • 2002: Section 404 of the Sarbanes-Oxley Act (SOX) requires an assessment of the internal control structure used to ensure financial statement account and disclosure accuracy, which involves detecting security breaches. It also implicitly necessitates employee background checks.
  • 2004: Guidance for reference and background checks is also provided by the Payment Card Industry Data Security Standards (PCI DSS) Council.

Taken together, these regulations would suggest that Finance and Insurance employees are vetted before they ever have system access. However, as we have demonstrated, insider threats still persist.

As the chart below shows, Fraud is the most frequent insider threat incident type for Finance and Insurance organizations (in the CERT Insider Incident Corpus), followed by Theft of Intellectual Property (IP) and IT Sabotage. In the four incidents where two case types were present, Fraud was one of them. Fraud is all too commonplace in the Finance and Insurance sector because employees have a high degree of access to money and sensitive financial data.

Bar graph of the number of Finance and Insurance Insider Incidents by Case Type. Fraud: 204. Theft of IP: 19. Sabotage: 10. Sabotage and Fraud: 3. Fraud and Theft of IP: 1.

Given how few reported incidents involved Theft of IP or IT Sabotage, the following analysis focuses on incidents of Fraud where the insider was employed by the Finance and Insurance victim organization.

Sector Characteristics

Nearly all (87.8%) of the insider incidents impacting Finance and Insurance organizations involved Fraud. The Fraud statistics below include only the 204 incidents where Fraud was the only known incident type. Each attribute (i.e., Who, What, When, Where, How, Why) represents only the cases where that attribute was known.

Who? Approximately half (49.5%) of the insiders had been employed by the victim organization for 5 years or more when they started their malicious activity. Insiders were primarily in their twenties (29.7%) or thirties (34.9%). Nearly all of the insiders were full-time employees (97.1%) and current employees (93.4%). Half of the insiders used an authorized account and data (50.0%), and over one-quarter were authorized, privileged users (26.4%). The most common roles occupied by insiders were management (32.8%), cashier (13.8%), executive (12.6%), accountant/bookkeeper (10.9%), and other non-technical positions (29.9%).

What? Over one-third (40.7%) of insiders targeted money in an electronic form. An additional subset of insiders (5.4%) targeted money in a physical form. Insiders also targeted electronic customer data (5.4%), which in turn could be used to commit identity theft.

When? For Fraud incidents where the attack time was known (179 total), nearly all (98.3%) involved insider activity during work hours. Nearly one-third (31.8%) of incidents also involved activity outside of work hours. Three incidents (1.7%) involved activity outside of work hours only.

Where? For Fraud incidents where the attack location was known (189 total), nearly all (99.5%) involved activity on-site. Over one-quarter (27.0%) also involved remote access. Only one incident (0.5%) appeared to involve remote access only.

How? Nearly one-third (32.8%) of insiders created or used a fraudulent asset to commit their attack. One-quarter (25.0%) of insiders created or used an alias over the course of their Fraud scheme. Nearly one-quarter (24.5%) of insiders made fraudulent purchases. Insiders also abused privileged access (20.6%), falsified information (20.1%), and modified critical data (16.2%).

Why? Unsurprisingly, nearly all of the insiders (97.8%) were motivated by the prospect of financial gain. Insiders were also motivated by gambling addiction (2.2%), family pressures (1.6%), competitive business advantage (1.1%), or a desire for recognition (1.1%). Some insiders (3.8%) had multiple motivations for committing Fraud.


Insiders committing Fraud in Finance and Insurance organizations tended to be more tenured employees, and many were in leadership positions. These insiders also had privileged access to information systems and customer personally identifiable information (PII) commensurate with that level of experience or role. In turn, these insiders exploited vulnerabilities in processes they were familiar with to access money or customer data. Nearly all of the insiders were motivated purely by financial gain, with a portion of those insiders also under financial stress from a gambling problem or family distress. In Finance and Insurance insider incidents where the financial impact was known (221 total), the median financial impact was between $98,137 and $268,403. Again, the median impact across the CERT Insider Threat Incident Corpus is between $95,200 and $257,500. The Finance and Insurance incidents, since they comprise the majority of the incidents collected, essentially define the median financial impact range of the corpus.

Final Thoughts

As we addressed earlier in this post, organizations subject to the Safeguards Rule are encouraged to perform background checks before hire. At the NITC, we also advocate for employee background checks. However, given the evidence around the tenure of insiders, recurring background checks are also advisable, particularly when an employee receives increased or new access to customer information. After all, we saw that managers and executives are also capable of committing Fraud.

However, SOX compliance can pose significant financial and implementation challenges for organizations, even 16 years after its passage. With those costs, organizations may not be performing background checks as frequently or extensively as is needed to address emerging threats from within.

If you work for a small organization that must comply with SOX, review the Securities and Exchange Commission’s Section 404 Guide for Small Business. Organizations small and large can also reference the Common Sense Guide to Mitigating Insider Threats, Fifth Edition, for additional best practices on how to improve their security posture.

For recommendations for your insider threat program, stay tuned for future blog posts related to your industry. Check back in a few weeks to read our next industry post on state and local government, or subscribe to a feed of the Insider Threat blog to be alerted when any new post is available. For more information about the CERT National Insider Threat Center, please contact [email protected].

Posted on

Scoping IT & OT Together When Assessing an Organization’s Resilience

The SEI engages with many organizations of various sizes and industries about their resilience. Those responsible for their organization’s cybersecurity often tell us that their information technology (IT) and operational technology (OT) are too different to be assessed together. However, not accounting for both technologies could have serious implications to an organization’s resilience. In this post I’ll say why, and I’ll describe the technology-agnostic tools the SEI uses to scope both IT and OT in resilience assessments.

IT and OT systems are distinct systems with their own cybersecurity priorities. In terms of the CIA Triad, IT generally prioritizes confidentiality and OT prioritizes availability. These priorities can drive how organizations deal with risks. However, when evaluating organizational resilience, what really matters are the interconnectedness of these two technologies and their criticality to the organization, because this drives the impact and likelihood of the risk. The NotPetya and WannaCry attacks exploited these characteristics, traversing IT and OT networks and either bringing down or severely degrading operations of major organizations.

Photo: Steag, Germany. Licensed under Creative Commons Attribution-Share Alike 3.0 Unported.

Even if you think IT and OT are apples and oranges, we can agree that many organizations depend on both IT and OT to operate. We can also agree that an organization’s ability to weather times of stress is critical to its customers, employees, and shareholders. It makes sense then that organizations should consider both IT and OT systems when determining operational resilience.

Resilience, Assessments, and the Importance of Scoping

To paraphrase the SEI’s CERT Resilience Management Model (CERT-RMM), operational resilience is an organization’s ability to manage the impact on assets and their related services due to realized risks associated with processes, systems, technology, the actions of people, or external events. In times of stress, a resilient business will be more likely to return to normal operation.

CERT-RMM proposes that organizations can achieve their optimal level of operational resilience through the effective communication and disposition of risks across a business’s many verticals. Crucially, CERT-RMM abstracts organizations to their services and all the assets that support them: people, information, facilities, and technology of any type.

The SEI has developed two effective assessment tools based on CERT-RMM that measure an organization’s operational resilience through the lens of cybersecurity: the Cyber Resilience Review (CRR), developed for the Department of Homeland Security, and the Cybersecurity Capability Maturity Model (C2M2), developed in partnership with industry representatives for the Department of Energy. Both assessments can be performed as a one-day self-assessment by the organization’s own subject matter experts (SMEs) or as part of facilitated workshops. The C2M2 is for the energy sector and more broadly assesses cybersecurity programs. The CRR is sector agnostic and focuses on an organization’s resilience management processes. Both assessments give organizations a repeatable tool to help determine their organizational resilience.

The assessments share a common set of CERT-RMM assumptions and methodologies. Both assessments focus on two aspects of the organization: (1) the organization’s business objectives and (2) the protection and sustainment of assets that support those objectives. The organization itself determines the appropriate level of resilience and resources needed to achieve its objectives and efficiently meet regulatory requirements. The organization has the flexibility to assess the critical service or function regardless of the types of assets that support it, and in a way that is consistent with its risk appetite.

Scoping, or determining what parts of the organization should be assessed, is key to the assessment’s success. The CRR scopes to a single “critical service,” and the C2M2 scopes to what it calls a “function.” The critical service or function being assessed should be genuinely critical to the business: if the service failed or went away, the business would fail with it. For example, a car manufacturer may want to focus on its car manufacturing line. Scoping the critical service or function too broadly will dilute the visibility afforded by the assessments. For more, see my colleague Andrew Hoover’s blog post about cyber resilience and the critical service.

Scoping the critical service or function allows the organization and the SMEs engaged in the assessment to clearly define the systems that are being assessed and in turn determine their overall resilience. Scoping also allows the organization to intelligently identify the level of risk associated with those systems. The organization can then prioritize its resources to close any identified gaps, one of the practice areas of cyber hygiene. Repeating the assessment against the same scope allows the organization to measure its performance over time.
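The asset abstraction behind this scoping can be sketched in code. The following is a minimal illustrative model, not part of CERT-RMM, the CRR, or the C2M2; all names and fields are assumptions chosen to mirror the four asset types (people, information, technology, facilities) and the car-manufacturing example above.

```python
from dataclasses import dataclass

# Hypothetical sketch of CERT-RMM-style scoping: one critical service
# and the mixed IT/OT assets that support it. All names are illustrative.

@dataclass
class Asset:
    name: str
    kind: str    # "people", "information", "technology", or "facilities"
    domain: str  # "IT", "OT", or "n/a" for non-cyber assets

@dataclass
class CriticalService:
    name: str
    assets: list

    def in_scope(self):
        # Every supporting asset belongs in the assessment scope,
        # regardless of whether it is IT, OT, or neither.
        return self.assets

    def by_domain(self, domain):
        return [a for a in self.assets if a.domain == domain]

line = CriticalService("car manufacturing line", [
    Asset("plant engineers", "people", "n/a"),
    Asset("production schedule database", "information", "IT"),
    Asset("PLC controllers", "technology", "OT"),
    Asset("assembly plant", "facilities", "n/a"),
])

# Scoping to the critical service keeps both IT and OT assets in view.
print([a.name for a in line.by_domain("OT")])  # ['PLC controllers']
print(len(line.in_scope()))                    # 4
```

The point of the sketch is that scope is defined by the service, and the asset list falls out of that definition; filtering by domain is a view on the scope, never a way to shrink it.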

IT and OT: Better When Assessed Together

For many organizations, IT and OT assets are both critical to survivability. We should be asking the same questions of both when determining organizational resilience. The answers to those questions might vary depending on whether they address IT or OT, but that should not preclude the questions from being asked.

For example, both the CRR and C2M2 assessments ask about the practice of patching vulnerabilities. Patching IT systems is generally routine and non-disruptive, but patching OT systems may be rare and highly disruptive. Declining to ask the question simply because the IT and OT answers would differ could mask exposure to serious vulnerabilities. Excluding IT or OT assets from the assessment not only reduces the organization’s visibility into how those assets support the critical service or function, but it can also create an unwarranted sense of security.
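The idea that the same question set spans both domains can be sketched as follows. This is an illustrative simplification under stated assumptions: the question wording and answer records are invented, not actual CRR or C2M2 practice text.

```python
# Illustrative sketch: the same assessment questions are asked of both
# IT and OT; only the answers differ. Wording here is hypothetical.

questions = [
    "Are vulnerability patches applied on a defined schedule?",
    "Is there an inventory of assets supporting the critical service?",
]

answers = {
    ("IT", questions[0]): "monthly maintenance window, non-disruptive",
    ("OT", questions[0]): "only during planned plant outages",
    ("IT", questions[1]): "yes, maintained in an asset database",
    ("OT", questions[1]): "partial; some legacy controllers undocumented",
}

# Every (domain, question) pair must be answered; skipping a domain
# outright would leave a blind spot that masks real exposure.
missing = [(d, q) for d in ("IT", "OT") for q in questions
           if (d, q) not in answers]
print("unassessed pairs:", missing)  # unassessed pairs: []
```

The differing answers are the valuable output; an empty `missing` list simply confirms that neither domain was excluded from the assessment.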

The presence of the term “cyber” in both the Cyber Resilience Review and Cybersecurity Capability Maturity Model does not imply a limitation on the critical service or supporting assets in scope. Though not all assets inherently include a cyber component, they might be connected through a network. Excluding some of the networked assets from the measurement of the organization’s resilience casts considerable doubt on the efficacy of the measurement.

Having the right subject matter experts on hand is also important during the assessment. Even though IT and OT systems are subject to the same resilience questions, different SMEs may be needed to answer them appropriately.

The Emerging Convergence

As IT and OT become increasingly networked together, their vulnerabilities and risks will be shared. Their combined impact on the resilience of the organization will become more complicated and potentially much greater. It has never been more critical to manage an organization’s resilience in the face of these impacts and to act on, or at the very least be aware of, any gaps.

Read more about operational resilience or contact us about resilience assessments.