
Keeping an Eye Out for Positive Risk

We commonly think of risks as having negative consequences. With each month bringing new cybersecurity threats, breaches, and vulnerabilities, sound risk management practices are necessary to protect your organization. However, when performing risk management, do organizations unnecessarily limit themselves by thinking of risks only in terms of negative effects and ignoring positive effects?

Risks Can Be Positive

The CERT Resilience Management Model (CERT-RMM) defines risk as “the combination of a threat and vulnerability (condition), the impact (consequence) on the organization if the vulnerability is exploited, and the presence of uncertainty.” Threats and vulnerabilities are inherently negative, and the impact of an exploited vulnerability, for many organizations, may cause the disruption of a high-value asset or service that negatively impacts the organization’s mission. Understandably, many organizations’ top priority is to address negative risk.

The International Organization for Standardization (ISO) has a broader definition of risk: “effect of uncertainty on objectives.” According to this definition, uncertainty can arise from conditions other than threats and vulnerabilities, and the effect can be negative or positive. The ISO definition encourages organizations to think about risk as a change in circumstances, its likelihood, and its impact, regardless of whether it is positive or negative.

Positive risk is often misunderstood or not considered. As the CERT-RMM and other standards describe, most organizations have historically applied risk management to control negative future outcomes. Yet it’s hard to deny that risk and opportunity go hand in hand. Advancement cannot be achieved without taking risk. Risk is essential to progress, and the outcome may be positive, negative, or some of both. Successful organizations learn how to balance the possible negative consequences of risk against the potential benefits of its associated opportunity. Organizational risk management activities would be unnecessarily limited by not accounting for the positive effects of risk.

Risk need not be defined as good or bad. Risk management is a proactive organizational practice for preparing for variation and the unexpected. Organizations that practice it are better prepared to mitigate adverse impacts and exploit favorable ones to achieve objectives, instead of just acting reactively. As ISO 31000:2018 puts it, “Risks emerge, change, and disappear as a project’s external and internal context changes. Risk management anticipates, detects, acknowledges, and responds to those changes and events in an appropriate and timely manner.”

A good risk management program should establish clear communications and situational awareness about all risks. This allows risk decisions to be well informed, well considered, and made in the context of organizational objectives, such as opportunities to support the organization’s mission and potential business rewards. Risk management should take a broad view of risks across an organization to inform resource allocation, better manage risks, and enable accountability.

Approaching Positive Risk

Many projects use different processes for minimizing the impact of a risk than for maximizing the impact of an opportunity. Not all risks can be eliminated, and no organization has an unlimited budget or enough personnel to address every risk, so management should define risk management activities that strike the best balance between the two.

For negative risks, organizations avoid, transfer, mitigate, and accept risk based on their risk tolerance and appetite. They try to ensure either the risk doesn’t occur or that when it occurs, it has little or no impact on the organization’s overall mission.

Organizations take the same approaches to positive risks but with a twist.

For opportunities, organizations should try to exploit, enhance, share, or accept the positive risk. Exploiting a positive risk means accepting the risk and realizing the positive effect. Enhancing means acting to increase the chance of the positive risk occurring, maximizing the opportunity. Sharing the risk allocates part of the ownership and responsibility to a third party; as with a negative risk, sharing tries to control the potential loss or gain. Lastly, accepting the risk, or doing nothing, is always an option, whether the risk is negative or positive. Often the risk is either highly unlikely to occur or its effect when realized is not significant; in such cases, organizations invest no resources and simply accept the risk, which may be the correct approach.

Conclusion

As organizations address today’s ever-changing challenges, they are trying to define risk management activities that balance their focus and resources to maximize their opportunities and minimize their challenges. Yet most organizational cultures place greater emphasis on protection against loss than on attainment of gain.

Do you think of risk as a negative outcome, or do you think of risk as preparing for the unexpected?

Too much of a good thing is possible if you are unprepared for it. That’s why it’s important to consider risk from both sides, positive and negative. For example, there may be such a thing as being too secure: if you have not been able to do innovative things because you spent your entire budget on security controls and tools, are you missing an opportunity?

What do you think? Do you track both positive and negative risks? Send us your thoughts on positive risk management to [email protected].

Consider making your risk management activities more robust by not only identifying effects that can have negative consequences but also modifying your practices as necessary to leverage opportunities.


High-Level Technique for Insider Threat Program’s Data Source Selection

This blog post discusses an approach that the CERT Division’s National Insider Threat Center developed to help insider threat programs develop, validate, implement, and share potential insider threat risk indicators (PRIs). The motivation behind our approach is to provide a broad, tool-agnostic framework that promotes sharing indicator details among your insider threat team personnel and other key stakeholders, such as Human Resources, Legal, and Information Technology, before diving directly into implementation or tool acquisition.

Figure 1: Insider Threat Indicator Sharing

To develop a new indicator, your team will start with a threat scenario describing a potential insider incident. These scenarios can be gathered from various sources, such as known issues within the organization, cases taken from organizations in the same field, or an internal tabletop exercise.

The next step is to break the threat scenario into a set of observables. Observables can be physical, behavioral, or technical actions that describe the state or activity the insider may be exhibiting. For each observable, identify the following four pieces of information (we call this the 4-tuple):

  1. data source(s)
  2. field within the data source
  3. analytic technique(s)
  4. response options

Let’s look at each element of the 4-tuple. The data source(s), the first element, specifies where to find details related to the observable. A sensor monitoring user or device activity, for instance, could be a technical data source logging information at (or inside) your organization’s firewall. An organization might have data sources such as data loss prevention tools, operating system security logs, physical security logs, and Human Resources management system information [1].

Since a data source may capture many different types of activities, the second element consists of the fields in the data source. Examining these fields narrows down the details needed to find events associated with the observable. For example, for a given observable you may want to look only for failed login attempts: Windows Security logs have a type field, and you could use the failed-login value to find failed logins. You could also include fields associated with the activity, such as the account name, machine, date, and time, to support the observable.

The next element is the analytic technique. This element describes how the fields in the data sources are analyzed when hunting for potential threats, and techniques vary. Above we mentioned matching a failed-login type to the Windows Security log’s type field–an example of a value-match analysis technique. You could also use pattern-matching techniques to learn whether a data source’s fields match a pattern, for example, using the activity’s timestamp field to show any activity that occurred within the last seven days.
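
As a rough illustration of these two techniques, here is a minimal PowerShell sketch that combines a value match and a time-window pattern match. It assumes a modern Windows host, where failed logons appear in the Security log as event ID 4625, and that the account name is the sixth event property; verify both against your environment.

# Minimal sketch: value match (event ID 4625, failed logon) plus a
# pattern match on time (last seven days). Run from an elevated session.
$since  = (Get-Date).AddDays(-7)
$events = Get-WinEvent -FilterHashtable @{
              LogName   = 'Security'    # data source
              Id        = 4625          # value match: failed logon
              StartTime = $since        # pattern match: last seven days
          } |
          Select-Object TimeCreated, MachineName,
              @{ Name = 'Account'; Expression = { $_.Properties[5].Value } }
$events | Format-Table -AutoSize        # fields that support the observable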

More sophisticated analysis techniques enable you to use multiple fields and trend data over time to look for statistical anomalies (outliers). The anomalies could be deviations from the insider’s baseline behavior, or you could compare the insider’s behavior against that of the insider’s peer group [2].
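
Dedicated tooling usually handles these analytics, but as a crude stand-in for baseline or peer-group comparison, a sketch like the following flags accounts whose failed-logon counts sit more than two standard deviations above the population mean. It assumes the $events variable from the sketch above.

# Crude outlier sketch: count failed logons per account, then flag accounts
# more than two standard deviations above the mean across all accounts.
$counts = $events | Group-Object Account | Select-Object Name, Count
$mean   = ($counts | Measure-Object -Property Count -Average).Average
$sigma  = [math]::Sqrt((($counts | ForEach-Object { [math]::Pow($_.Count - $mean, 2) }) |
              Measure-Object -Average).Average)
$counts | Where-Object { $_.Count -gt $mean + 2 * $sigma }      # candidate anomalies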

The analytic techniques vary in complexity and computational resources needed. The 4-tuple approach to indicator development surfaces this complexity early in the process and makes it explicit to the team. Members of the insider threat program team should perform a cost/benefit analysis before implementing the PRI.

The final element in the 4-tuple is the response the insider threat program will take if this indicator is detected, generating an alert for an insider threat analyst to triage. The response could range from simple alerting to taking some automatic action, such as disabling an account with suspicious activity. Using this approach, the team can use the response option to express how critical this indicator is to the organization.

As mentioned above, the 4-tuple is written at a high level to facilitate discussion among the team. Let’s make this high-level nature more concrete by looking at an example. Assume we want to develop a PRI from a scenario that includes the loading of unauthorized software. The observable in this example is the installation of unauthorized software. The 4-tuple is given in the following table.

Observable: Successful software installation attempt
Data Sources: Windows Event Logs
Fields: Software Name
Analytic Technique: Pattern match — is the software not in a list of approved software packages?
Response Options: Generate an alert (high); enable enhanced monitoring
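
As a hypothetical sketch of this PRI’s analytic, the following assumes MSI-based installs, which Windows records in the Application log as event ID 11707 (MsiInstaller); the approved-software list is a placeholder you would replace with your organization’s own.

# Hypothetical allowlist; a real program would pull this from asset management.
$approved = @('Microsoft Office', '7-Zip', 'Google Chrome')
Get-WinEvent -FilterHashtable @{ LogName = 'Application'; Id = 11707 } |
    Where-Object {
        $msg = $_.Message
        -not ($approved | Where-Object { $msg -like "*$_*" })   # no allowlist match
    } |
    Select-Object TimeCreated, MachineName, Message             # candidates for a (high) alert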

The 4-tuple approach provides enough detail that all the stakeholders can discuss the merits of the indicator. Human Resources and General Counsel are made aware of the data being requested. The technical teams can scope the complexity and the impact on system resources. Executive leadership understands the scenarios being monitored and the implications for the organization based on the response options.

Once the stakeholders agree the PRI is of concern to the organization and should be tracked, the insider threat team can begin the process of incorporating the appropriate data source(s), analytics, tools, and/or procedures to detect the indicator. Sometimes this means the insider threat team might simply need to update the configuration of a currently deployed tool. If the current tools can’t detect the indicator, the organization could begin an evaluation and acquisition process using the information gathered during the 4-tuple exercise as a guide for selecting a tool capable of finding the required PRIs.

With this approach, the insider threat program’s personnel and stakeholders can more effectively discuss the requirements of any indicator under consideration and the impact that incorporating it would have on the organization.

Stay tuned for more content from the CERT National Insider Threat Center, refer to our current publications (such as Analytic Approaches to Detect Insider Threats), or consider attending our instructor-led Insider Threat Analyst course.

Subscribe to our Insider Threat blog feed to be alerted when any new post is available. For more information about the CERT National Insider Threat Center, or to provide feedback, please contact [email protected].

[1] See the Common Sense Guide to Mitigating Insider Threats, Sixth Edition (https://resources.sei.cmu.edu/library/asset-view.cfm?assetid=540644) for a discussion of data source providers.

[2] See https://www.insaonline.org/an-assessment-of-data-analytics-techniques-for-insider-threat-programs/ for a detailed discussion of analytics.


Windows Event Logging for Insider Threat Detection

In this post, I continue my discussion on potential low-cost solutions to mitigate insider threats for smaller organizations or new insider threat programs. I describe a few simple insider threat use cases that may have been detected using Windows Event logging, and I suggest a low-effort solution for collecting and aggregating logs from Windows hosts.

Numerous publications and guides, including those from the NSA, Microsoft, and SANS, explain how and why host-level logging should be used for Windows systems. This cybersecurity concept applies as much to insider threat detection and response as it does to general troubleshooting, intrusion detection, and incident response; it should not be overlooked as a valuable resource. This is particularly true considering that the implementation comes at no additional software licensing cost on top of the base operating system you are already using. Many security information and event management (SIEM) systems require additional management overhead, and they may even introduce additional attack vectors. However, Windows Event Forwarding and Collection provides a straightforward mechanism for centrally aggregating logs across Windows systems without installing additional client collection agents.

Consider the following insider incident:

A system administrator was dating another employee who was fired; the fired employee began sending emails to management demanding her reinstatement, using threatening language. Because of the threatening emails, the system administrator was fired as well. Before leaving, the insider created a backdoor administrator account, which he later used to attack the organization. The insider accessed the company’s servers several times post termination, deleted sensitive data, and shut down several machines. The insider was discovered via access logs tied to the backdoor account.

This case highlights the need to audit account creation and privileged group modification, both of which may lead to the creation of unauthorized access paths. The following table lists Windows security event IDs that pertain to account management, which includes activities such as creating and disabling user accounts and groups and modifying group permissions.

Event ID – Description
608 – User Right Assigned
624 – User Account Created
626 – User Account Enabled
631 – Security Enabled Global Group Created
632 – Security Enabled Global Group Member Added
635 – Security Enabled Local Group Created
636 – Security Enabled Local Group Member Added
645 – Computer Account Created
646 – Computer Account Changed
648 – Security Disabled Local Group Created
649 – Security Disabled Local Group Changed
650 – Security Disabled Local Group Member Added
653 – Security Disabled Global Group Created
654 – Security Disabled Global Group Changed
655 – Security Disabled Global Group Member Added
658 – Security Enabled Universal Group Created
659 – Security Enabled Universal Group Changed
660 – Security Enabled Universal Group Member Added
663 – Security Disabled Universal Group Created
664 – Security Disabled Universal Group Changed
665 – Security Disabled Universal Group Member Added
4720 – A user account was created
4727 – A security-enabled global group was created
4731 – A security-enabled local group was created
4744 – A security-disabled local group was created
4749 – A security-disabled global group was created
4754 – A security-enabled universal group was created
4759 – A security-disabled universal group was created
4783 – A basic application group was created
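
To spot-check whether any of these events have occurred, you can query a handful of the modern IDs with a few lines of PowerShell. This is a sketch; adjust the IDs and scope to your environment.

# Pull recent account-management events (a few of the modern IDs above)
# from the local Security log; requires an elevated session.
$ids = 4720, 4727, 4731, 4754
Get-WinEvent -FilterHashtable @{ LogName = 'Security'; Id = $ids } -MaxEvents 50 |
    Select-Object TimeCreated, Id, Message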

These types of security events should occur relatively infrequently on domain controllers and even more infrequently as local account or group modifications on workstations and servers. Depending on the frequency observed, it may be operationally feasible to configure an email alert for these types of activities. You can use the Windows Event Viewer on the Forwarded Events log on your collector (or even on individual servers) to create a task based on specific event IDs. Filter the log to locate an event for the desired ID, then right-click and select Attach Task To This Event. You can use this task method to call specific programs or scripts, such as a PowerShell script that sends a notification email to your security team.
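
For example, the attached task could call a small PowerShell script along these lines. This is a sketch only; the script name, SMTP server, and addresses are placeholders.

# notify-secteam.ps1 -- hypothetical notification script for an attached task.
# The server and addresses below are placeholders, not real infrastructure.
param([string]$EventId = 'account management')
Send-MailMessage -SmtpServer 'smtp.example.org' `
    -From 'eventlog-alerts@example.org' `
    -To 'security-team@example.org' `
    -Subject "Windows event $EventId detected on the collector" `
    -Body "Review the Forwarded Events log for details of event ID $EventId."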

Figure 1: Attach Task To This Event

The following incident highlights the need to monitor printing activity, which is fairly straightforward to accomplish for Windows-based workstations and print servers:

An insider expressed disgruntlement to his co-workers about current organizational policies. He logged into a system and printed a sensitive document, which he then physically exfiltrated and mailed to an external party.

In this case study, the PrintService operational log could have been used to collect useful information, such as the title of the document that was printed, the user who printed it, the printer name, the total byte count, and the number of pages printed. You can readily enable this logging on centralized Windows print servers and user workstations by (1) opening the Event Viewer, (2) navigating to Applications and Services Logs > Microsoft > Windows > PrintService, (3) right-clicking Operational, and (4) selecting Enable Log.
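
If you prefer to script this step, the wevtutil set-log (sl) command offers a command-line equivalent of those GUI steps; run it from an elevated prompt.

# Enable the PrintService Operational log; /e:true sets the log's enabled flag.
wevtutil sl Microsoft-Windows-PrintService/Operational /e:true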

Figure 2: Enable Log

After enabling the log, you begin to see an event ID 307 for each print job submitted on the system.

Figure 3: Event ID 307

Unless your organization is very small or printing is minimal, it would be impractical to analyze these events individually. However, using just the information contained in this event type, you can do some interesting anomaly detection across all of these events based on page count and size, and you can do trivial keyword searching on the titles of the documents.
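
As a rough sketch of that keyword search, the following assumes event ID 307’s second parameter holds the document title and its eighth holds the page count; verify those positions against your own events. The keyword list is a placeholder.

# Hypothetical keyword match on print-job titles from PrintService event ID 307.
$keywords = @('confidential', 'salary', 'proprietary')          # placeholder terms
Get-WinEvent -FilterHashtable @{
        LogName = 'Microsoft-Windows-PrintService/Operational'; Id = 307 } |
    Where-Object {
        $doc = $_.Properties[1].Value                           # document title
        $keywords | Where-Object { $doc -like "*$_*" }
    } |
    Select-Object TimeCreated,
        @{ Name = 'Document'; Expression = { $_.Properties[1].Value } },
        @{ Name = 'Pages';    Expression = { $_.Properties[7].Value } }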

If you are forwarding all of this log data to your Event Collector, you can use a few simple PowerShell commands to output it to a flat file as input to an anomaly detection or analysis pipeline. Specifically, you can use the PowerShell Get-WinEvent Cmdlet to locally or remotely connect to the Event Collector and then export the results using the Export-Csv Cmdlet:

PS C:\Windows> Get-WinEvent -LogName "ForwardedEvents" `
    -ComputerName wef-server -MaxEvents 100 | Export-Csv output.csv

If you deploy a more robust SIEM tool, this effort will not be lost since you will have taken the necessary steps to centralize your logging, allowing you to then deploy the SIEM tool’s event log collectors on your Event Collector servers instead of across all the systems in your enterprise.

Once you have enabled the desired event logs and implemented some sort of centralized collection mechanism, one of the next steps is to begin analyzing the data to provide meaningful and actionable intelligence and alerting. Stay tuned for more content from the CERT National Insider Threat Center, refer to our current publications (such as Analytic Approaches to Detect Insider Threats), or consider attending our instructor-led Insider Threat Analyst course.

Subscribe to our Insider Threat blog feed to be alerted when any new post is available. For more information about the CERT National Insider Threat Center, or to provide feedback, please contact [email protected].


The CERT Division’s National Insider Threat Center (NITC) Symposium

Addressing the Challenges of Maturing an Insider Threat (Risk) Program

On May 10, 2019, the Software Engineering Institute’s National Insider Threat Center (NITC) will host the 6th Annual Insider Threat Symposium, with this year’s theme, “Maturing Your Insider Threat (Risk) Program.” The purpose of the symposium is to bring together practitioners on the front lines of insider threat mitigation to discuss the challenges and successes of maturing their insider threat (risk) programs. You will have the opportunity to learn from others how to move beyond your program’s initial operating capability.


This event will be open to the Department of Defense, U.S. and international governments, and public-sector insider threat communities, with presentations and panel sessions from government, industry, and academia. We anticipate over 225 security professionals will attend, evenly split across industry and government, with no participation by the media. This will be an ideal venue for honest and open discussions about the challenges facing organizations as they attempt to stand up and improve insider threat mitigation programs.

Our mission at the NITC is to assist in the development, implementation, and measurement of effective insider threat programs by performing research, modeling, analysis, and outreach to define socio-technical best practices that help organizations deter, detect, and respond to evolving insider threats.

Date:
May 10, 2019
8:00 – 8:30 am Registration
8:30 – 4:00 pm Symposium

Location:
NRECA Conference Center
4301 Wilson Blvd.
Arlington, VA

Registration:
https://insider-threat-symposium-2019.eventbrite.com/

Registration to this event is free, but space is limited to the first 225 registrants. A continental breakfast and lunch will be provided.

Where to Stay:
Hotel accommodations in the Arlington, VA area

Preliminary Event Agenda:

8:30 – 8:45 Welcome / Introduction

· Mr. Randall Trzeciak, Director – CERT National Insider Threat Center

8:45 – 9:15 Community Updates

· OUSD(I) – Mr. Jeffrey Smith

· DoD Insider Threat Management and Analysis Center (DITMAC) – Ms. Delice-Nicole Bernhard

· National Insider Threat Task Force (NITTF) – Ms. Pamela Prewitt

· Intelligence and National Security Alliance (INSA) – Mr. Sandy MacIsaac

9:15 – 10:00 Facilitating Insider Threat Analysis Using OCTAVE FORTE

· Mr. Brett Tucker – Software Engineering Institute / CERT Division

· Mr. Randall Trzeciak – Software Engineering Institute / CERT Division

10:00 – 10:30 Keynote Address

· U.S. Representative Chrissy Houlahan (PA) – (INVITED)

10:30 – 10:45 Morning Break

10:45 – 11:30 Insider Threat Program Maturity Framework

· Ms. Pamela Prewitt – National Insider Threat Task Force (NITTF)

11:30 – 12:00 2019 Verizon Insider Threat Report

· Mr. John Grim – Senior Manager, Verizon Security Research

12:00 – 1:00 Lunch Break

1:00 – 1:30 Maturing an Insider Threat Program – An Industry Perspective

· Mr. Douglas Thomas – Director, CI Operations & Corporate Investigations, Lockheed Martin Corporation

1:30 – 2:15 Maturing an Insider Threat Program – Incorporating Behavioral Analytics

· Dr. Christopher Myers – Chief, Behavioral Science Division, National Geospatial-Intelligence Agency (INVITED)

2:15 – 2:30 Afternoon Break

2:30 – 3:15 Maturing an Insider Threat Program – Utilizing Machine Learning for Insider Anomaly Detection

· To Be Determined

3:15 – 3:45 Maturing an Insider Threat Program – A Government Perspective

· Mr. Andrew Jordan – Insider Threat Program Manager, Marine Corps Intelligence Activity

3:45 – 4:00 Closing Remarks

· Mr. Daniel Costa, Technical Team Lead – CERT National Insider Threat Center

We hope to see you at this important event on May 10th in Arlington, VA.


A New Scientifically Supported Best Practice That Can Enhance Every Insider Threat Program!

(Or…”How This One Weird Thing Can Take Your Program to the Next Level!”)

The CERT National Insider Threat Center (NITC) continues to transition its insider threat research to the public through publications such as the Common Sense Guide to Mitigating Insider Threats (CSG), blog posts, and other research papers. We recently released an updated version of the CSG: the Common Sense Guide to Mitigating Insider Threats, Sixth Edition. In this post, I’ll highlight the new additions and updates: best-practice mappings to standards and more attention to workplace violence, monitoring, and privacy. I’ll also walk you through the new best practice on positive incentives in the workplace.

21 Best Practices

In the fifth edition of the CSG, we described 20 best practices that any organization can implement to help prevent, detect, or mitigate insider threats. The sixth edition describes 21 best practices. The new and revised best practices in the sixth edition are based on the latest research findings and case studies. The table below summarizes the best practices from the sixth edition of the CSG.

Table 1: The 21 Insider Threat Best Practices

Best Practice 21: Adopt Positive Incentives to Align Workforce with the Organization

Figure 2: Best Practice 21, Adopt Positive Incentives to Align the Workforce with the Organization

All groups within an organization, as shown above, are involved in the newest, capstone best practice: “Adopt positive incentives to align the workforce with the organization.” Best Practice 21 refers to workforce management practices that increase perceived organizational support as positive incentives because they attempt to entice (rather than force) an employee to act in the interests of the organization.

Enticing employees to act in the interests of the organization through positive incentives reduces the baseline insider threat risk. Positive incentives that align workforce values and attitudes with the organization’s objectives form a foundation on which to build traditional security practices that rely on forcing functions. The combination of incentives and forcing functions improves the effectiveness and efficiency of insider threat defense.

Best Practice 21 is derived from the research published in an SEI technical report: The Critical Role of Positive Incentives for Reducing Insider Threats. The research identified and analyzed three avenues for aligning the interests of the employee and the organization–job engagement, perceived organizational support, and connectedness with co-workers–to reduce the risk of an insider becoming a threat. The model developed from this research shows how these factors can encourage employees to act in the interests of the organization. One particularly strong outcome showed that as perceived organizational support went up, the risk of an insider incident went down (see figure below).

Figure 1: Negative Correlation Between Perceived Organizational Support and Insider Misbehavior

We adapted the key components of this research into Best Practice 21 in the Common Sense Guide to Mitigating Insider Threats, Sixth Edition.

This practice is related to Best Practice 5, “Anticipate and manage negative issues in the work environment,” and Best Practice 8, “Structure management and tasks to minimize insider stress and mistakes.” The difference is that Best Practice 21 focuses on using positive incentives to improve employee attitudes independent of whether a specific negative issue or insider stress exists or is even identifiable. In other words, positive incentives are proactive and reduce the frequency of insider incidents before they, or even their indicators, occur.

Best Practice 21, consistent with all the other best practices, contains the following sections:

  • Protective Measures
  • Challenges
  • Case Studies
  • Incident Analysis
  • Survey on Organizational Supportiveness and Insider Misbehavior
  • Quick Wins and High-Impact Solutions for All Organizations

Other New Features: EU-GDPR, Privacy, Workplace Violence, Standards Mapping

In the sixth edition, we also integrated new information into the other best practices to reflect aspects of the European Union’s General Data Protection Regulation (EU-GDPR); we paid special attention to issues surrounding insider threat and associated employee-monitoring concerns. In the sixth edition, we also interwove aspects of workplace violence prevention into many of the best practices. Finally, we updated mappings of the best practices to other relevant standards and added new mappings to the following:

  • NIST Cybersecurity Framework
  • Center for Internet Security Controls V7
  • National Insider Threat Task Force Program Maturity Framework
  • European Union General Data Protection Regulation (GDPR)

The table below shows an example of this mapping of best practices, using Best Practice 1, to security control standards.

Table 2: Example of Best Practice 1 Mapped to Security Control Standards

Looking Ahead: New Practices for New Threats

We continue to research new insider threat vectors and develop mitigation strategies for organizations to prevent, detect, and respond to these threats. We plan to incorporate these strategies into future versions of the CSG.

Additional Resources

We invite you to search for and read our blog series on CERT Best Practices to Mitigate Insider Threats and read our report titled The Critical Role of Positive Incentives for Reducing Insider Threats.

Subscribe to our Insider Threat blog feed to be alerted when any new post is available. For more information about the CERT National Insider Threat Center, or to provide feedback, please contact [email protected].


Are You Providing Cybersecurity Awareness, Training, or Education?

When I attend trainings, conferences, or briefings, I usually end up listening to someone read slides about a problem. Rarely am I offered solutions or actions to remediate the problem. As a cybersecurity trainer with more than 17 years of experience and a degree in education, I understand that developing a good presentation is a challenge in any domain. Fortunately for cybersecurity professionals, the National Institute of Standards and Technology (NIST) can help you choose which kind of presentation to give. This blog post reviews the three types of presentations defined by NIST: awareness, training, and education.


What are you presenting?

You have to know whether you’re delivering a presentation for awareness, training, or education. Here are the definitions, according to NIST Special Publication (SP) 800-16, Information Technology Security Training Requirements: A Role- and Performance-Based Model.

Awareness

Awareness presentations are intended to allow individuals to recognize IT security concerns and respond accordingly. – NIST SP 800-16

If the purpose of your briefing is to simply tell your audience about a topic or problem so that they can respond, you’re providing awareness. Provide the information and suggest actionable solutions for your audience.

Training

Training strives to produce relevant and needed security skills and competency by practitioners of functional specialties other than IT security (e.g., management, systems design and development, acquisition, auditing). – NIST SP 800-16

If you want to change your audience’s normal behaviors, you are providing training. Describe the new skills; provide practice, either guided or independent; and maybe even provide a checklist or job aid that will prompt the audience to use those new skills and abilities after they leave your presentation. A checklist or job aid improves not only that person’s work but also the cybersecurity of their office and the transfer of that skill to others within their organization.

Education

Education integrates all of the security skills and competencies of the various functional specialties into a common body of knowledge, adds a multi-disciplinary study of concepts, issues, and principles (technological and social), and strives to produce IT security specialists and professionals capable of vision and proactive response. – NIST SP 800-16

Education generally comes into play when someone is beginning or entering a new field. For example, a high school graduate or someone changing careers would attend a college or university to receive an education in cybersecurity. This audience must learn the breadth and depth of knowledge necessary to begin a successful career in the cybersecurity industry. Once on the job, they would receive job-specific training to focus their knowledge on successfully completing the tasks of their employment.

Conclusion

At the Software Engineering Institute and within Carnegie Mellon University, we provide awareness, training, and education to a variety of audiences. Knowing which to use in the right situation is important.

  • If your audience needs to know about a cybersecurity situation so they can devise a solution, you are providing awareness.
  • If you are trying to change your audience’s behavior or improve their knowledge, skills, and abilities to improve their cybersecurity, you are providing training.
  • If you are trying to create well-rounded cybersecurity professionals who can take what they have learned, add it to other knowledge, and expand it to different situations to improve the overall body of knowledge of cybersecurity, you are providing education.

Here is my final piece of practical advice, especially when speaking to cybersecurity professionals: Your audiences should always leave with new information, a new way of operating, or a list of tasks to perform or complete. If you can do that, you can make a difference in the way your audience conducts cybersecurity and protects the information entrusted to their care.


Insider Threats in Entertainment (Part 8 of 9: Insider Threats Across Industry Sectors)

This post was co-authored by Carrie Gardner.

The Entertainment Industry is the next spotlight blog in the Industry Sector series. Movie and television producers have long entertained the public with insider threat dramas such as Jurassic Park, Office Space, or the more recent Mr. Robot. These dramas showcase the magnitude of damage that can occur from incidents involving presumably good, trusted employees. As we discuss in this post, movie producers and the rest of the entertainment industry are not immune from such incidents.

According to a SelectUSA article, the Entertainment industry is expected to be valued at $830 billion by 2022, which makes this sector a prized target for malicious actors. With areas such as music, film, video gaming, theater, and hospitality, the industry comprises multiple subsectors, each requiring individual attention for identifying insider threats and preventing insider incidents.

Of the 26 malicious insider threat incidents in the Entertainment sector in our case corpus, we identified 26 related victim organizations. Of those 26 organizations, 18 are classified as “Hotels, Amusement, Gambling, and Restaurants,” and the remaining 8 as “Content Publishers,” such as media producers for TV and web services. Perhaps surprisingly, two subsectors under Entertainment had no recorded insider incidents: “Performing Arts and Spectator Sports” and “Art, Museums, and Historical Sites.”

Bar chart of Entertainment Organizations Impacted by Insider Threat Incidents, 1996 to present. Hotels, Gambling, etc. organizations had 18 incidents. Content Publisher organizations had 8 incidents.

In addition to the 26 incidents where the organizations directly employed the insider, we identified 11 organizations involving a trusted business partner relationship (e.g., contractor or temporary employee).

Pie chart of Entertainment Victim Organization Relationship to Insider. In 11 organizations, or 30%, the insider was a trusted business partner. In 26 organizations, or 70%, the insider was a permanent employee.

Sector Overview

Insider incidents in the Entertainment sector span all three of the case types (fraud, IT sabotage, and theft of intellectual property [IP]) we used to analyze data in our Industry Sector blogs. The majority of incidents affecting Entertainment organizations are fraud cases, which account for 61.5% of all incidents.

Bar chart of Entertainment Insider Incidents by Case Type. Fraud: 16. IP Theft: 5. IT Sabotage: 4. Fraud and Theft of IP: 1.

Sector Characteristics

Given how few reported incidents involved IT sabotage or theft of IP, the following table focuses on the 5W1H (Who? What? When? Where? Why? How?) of fraud incidents. These calculations exclude instances where the data was unknown.

Insider Fraud Incidents in the Entertainment Sector

Who? Over half (55.5%) of insiders were with the victim organization for five years or more, and over two-thirds (69.2%) had an authorized account and data access. Insiders ranged in age from 21 to over 51: twenties (30.7%), thirties (23%), forties (30.7%), and fifties (15.3%). An overwhelming majority (89.5%) of the insiders were full-time employees, and a majority (90%) were current employees. Several insiders occupied management (33.3%), accounting (13.3%), or other non-technical (40%) positions, and some occupied multiple roles.

What? Entertainment fraud incidents generally targeted money (66.6%) (e.g., cash in the cash register), followed by customer data, such as customer credit cards (22.2%).

When? For the incidents where attack time was known (15 total), roughly one-third (33.3%) involved activity only during regular work hours, a small percentage (6%) involved activity only outside of regular hours, and the majority (60%) involved malicious activity both during and outside regular hours.

Where? In fraud incidents where attack location was known (15 total), nearly two-thirds (60%) involved both on-site and remote activity; over a third (40%) involved only on-site access.

How? In the known cases, the methods used in fraud incidents were fairly technical. More than two-thirds (66.6%) of insiders used a skimming device, and the remaining third (33.3%) used other, unspecified technical methods. Just over one-quarter (26.6%) received their fraudulent funds by wire transfer, and another quarter (26.6%) abused their access to gain fraudulent funds.

Why? Unsurprisingly, as with most fraud cases, the motive for all 15 fraudsters was financial gain (100%).

Analysis

The majority of insider incidents in the Entertainment sector occurred due to fraud motivated by financial gain. These insiders were usually with the company for over five years, had access to accounts and data, and were full-time employees. With most of the insiders in trusted positions, they had the means and methods to commit their crimes with relative ease.

It’s interesting that despite some of the insiders being employed in non-technical positions, two-thirds of them used skimming devices, a tool generally considered to be relatively technically sophisticated. In addition to using skimmers, the insiders tended to move their funds through wire transfers or they misused their access to move funds around.

Final Thoughts

We see many movies and TV shows that depict insider threat dramas; the industry is not immune to the consequences. We identified incidents of fraud, IP theft, and sabotage across the industry, including with content publishers.

Stay tuned for the next post, in which we feature Cross-Sector Analysis, or subscribe to a feed of the Insider Threat blog to be alerted when any new post is available. For more information about the CERT National Insider Threat Center, or to provide feedback, please contact [email protected].



Insider Threats in Healthcare (Part 7 of 9: Insider Threats Across Industry Sectors)

This post was co-authored by Carrie Gardner.

Next in the Insider Threats Across Industry Sectors series is Healthcare. Because Healthcare-related information security conversations are predominantly driven by security and privacy concerns related to patient care and data, it’s important to recognize the magnitude of security lapses in this sector. Patients can face severe, permanent consequences from medical record misuse, alteration, or destruction. And medical record fraud via identity theft, known simply as Fraud in our incident corpus, is one of the primary types of security incidents observed in this sector.

Security and privacy protections in this sector are defined and enforced by the Health Insurance Portability and Accountability Act of 1996 (HIPAA), which has since been expanded. The HIPAA Privacy Rule specifies data-access standards for personal health information (PHI) (i.e., who may access PHI). The HIPAA Security Rule defines requirements for ensuring that proper authentication and authorization policies and practices are in place for accessing electronic PHI in medical records.

In our National Insider Threat Center (NITC) Incident Corpus, we identified 88 malicious insider incidents impacting Healthcare organizations. These incidents do not include unintentional insider threats, such as an insider who accidentally left a laptop at a bus stop or emailed PHI to an unintended party. The 88 malicious insider incidents map to 91 healthcare organizations that were directly victimized in the attack (i.e., some incidents have more than one direct victim organization). Of these victim organizations, Health Networks make up the largest subsector. Health Networks, also known as Integrated Health Systems, are networks of hospitals and private practices dedicated to bringing healthcare to a specific region.

Bar graph of Healthcare Organizations Impacted by Insider Threat Incidents, 1996 to present. The bars show the number of victim organizations by subsector. Health Network: 25. Diagnostics, Support Services, and Medical Manufacturing: 21. Private Practices, Walk-In Clinics, etc.: 20. Healthcare Insurance: 10. Pharmacology: 7. Hospitals: 6. Advocacy Services: 2.

In addition to the 91 direct victim organizations, 20 victim organizations indirectly employed the insider in some sort of trusted business partner relationship or non-regular full-time employment (e.g., contractors).

Pie chart of Healthcare Victim Organization Relationship to Insiders. 91 organizations, or 82%, employed the insider. 20 organizations, or 18%, did not directly employ the insider.

Sector Overview

Fraud is the most prevalent case type across all of the insider threat incidents within the Healthcare sector, occurring in some form in about 76% of all incidents, a higher frequency than across the entire NITC corpus (68%). Within these fraud cases, we generally see individuals with access to patient payment records taking advantage of their access to customer/patient data to create fraudulent assets, such as credit cards, in order to make a profit.

Bar chart of Insider Incidents within Healthcare by Case Type, 1996 to present. The bars show the number of incidents per case type. Fraud: 67. Theft of IP: 12. Sabotage: 8. Sabotage and Fraud: 1.

Sector Characteristics

Below is a summary of the Healthcare Fraud incidents that are contained within the NITC corpus.

Insider Fraud Incidents in Healthcare

Who? Most healthcare fraudsters began their malicious activities within their first five years of working for the organization (64.3%). A majority (78.2%) misused their authorized access (e.g., a privileged account or PII data access). Insiders were distributed fairly evenly across age groups: twenties (27.8%), thirties (25.9%), forties (31.5%), and fifty and older (14.8%). Most of the healthcare insiders (82.0%) were full-time employees.

What? Over half (52.7%) of fraud incidents within the healthcare sector involved the theft of customer data, while 37.5% directly targeted financial assets (e.g., cash). When personally identifiable information (PII) was stolen, almost all of it was customer data (94.9%) rather than employee data (5.1%).

When? Of the incidents where the attack time was known, 70% took place solely during work hours; the other 30% took place both during and outside of work hours.

Where? Of the incidents where the location of the activity was known, a majority occurred only on site (72.7%). Some involved both on-site and remote activity (23.6%), and a couple involved only remote activity (3.6%).

How? Most incidents used rudimentary techniques. In almost half of the incidents, the insider either received or transferred funds (25.8%) and/or abused their privileged access (24.2%). In over a third of incidents (36.4%), the insider tried to conceal the activity in some manner, such as by modifying log files, using a compromised account, or creating an alias.

Why? More than three-quarters of the insider healthcare fraud incidents (84.8%) were motivated by the insider’s desire for financial gain. The only other stated motives, each appearing once, were entitlement (e.g., the insider felt entitled to pay for time not worked) and the desire to gain a competitive business advantage.

Analysis

Although Healthcare may be an industry defined by unique regulations (e.g., HIPAA), the statistics gathered for it are similar to the statistics gathered from the broader NITC corpus. For almost all of the insider fraud cases within healthcare, the insider followed a similar path of improperly using patient PII or PHI to acquire some asset in order to gain a profit.

Financial impact differs slightly between the Healthcare sector and the broader NITC corpus. Of the incidents with a reported financial impact, eight healthcare organizations (11.6%) recorded a financial impact of greater than $1 million; a higher percentage of fraud incidents (16.9%) outside the Healthcare sector in the NITC corpus recorded the same financial loss. Notably, we did not find a significant difference in high financial impact. This is noteworthy because, given the sensitivity of healthcare data and the legal and reputational penalties associated with a breach, we might expect a higher frequency of significant financial loss for the Healthcare sector.

Final Thoughts

Healthcare information security should be of the utmost importance for administrators and IT staff alike. Identity theft is the most common misuse of patient data, and patients can face severe medical debt as a result.

To better protect healthcare organizations from insider threat incidents, we suggest that organizations participate in an Information Sharing and Analysis Center (ISAC) to receive pertinent information and help foster a collaborative security environment. We also suggest that organizations enforce least privilege for organizational roles and data access and track and block data exfiltration.

Stay tuned for the next post, in which we spotlight the Entertainment sector. Or subscribe to a feed of the Insider Threat blog to be alerted when any new post is available. For more information about the CERT National Insider Threat Center, or to provide feedback, please contact [email protected].

Entries in the “Insider Threats Across Industry Sectors” series:


Top 5 Incident Management Issues

The CERT Division of the SEI has a history of helping organizations develop, improve, and assess their incident management functions. Frequently we discover that an organization’s primary focus is on security incident response, rather than the broader effort of security incident management. Incident response is just one step in the incident management lifecycle. In this blog post, we look at five recurring issues we regularly encounter in organizations’ Incident Management programs, along with recommended solutions. By discovering and resolving these issues, organizations can attain a better cybersecurity posture.

Incident Management Lifecycle

The incident management evaluation process we use is based on a number of known standards and guidelines from government and industry, such as the National Institute of Standards and Technology (NIST) Special Publications (SP) 800-61 Rev. 2 and 800-53 Rev. 4, DOD guidance, and our own internal research. Currently we evaluate organizations against a phased incident management lifecycle with associated categories and subcategories as described below.

  • PLAN focuses on implementing an operational and successful incident management program.
  • DEFEND relates to actions taken to prevent attacks from occurring and mitigate the impact of attacks that do occur as well as fixing actual and potential malicious activity.
  • IDENTIFY includes proactively collecting information about current events, potential incidents, vulnerabilities, or other incident management functions.
  • ACT includes the steps taken to analyze, resolve, or mitigate an event or incident.
  • MAINTAIN focuses on preserving and improving the computer security incident response team (CSIRT) or incident management function itself.

The incident management lifecycle has five phases and subcategories. Phase 1 is Plan, with subcategories Establish Incident Management Program and Develop Tools/Processes. Phase 2 is Defend, with subcategories Risk Assessment, Operational Exercises, and Network Defense. Phase 3 is Identify, with subcategories Network and Systems Monitoring and Threat and Situational Awareness. Phase 4 is Act, with subcategories Reporting, Analysis, and Response. Phase 5 is Maintain, with subcategories Program Management, Development Technology, and Physical Security. The Maintain phase leads back to the Plan phase.

The Top 5 Issues in Incident Management

Based on our Incident Management Evaluations, we have discovered the most common issues encountered by organizations with deficient incident management programs. Understanding these problems can provide insights into better management of incidents before they become major security concerns.

(1) No list or database of critical assets

The absence of a database or list of critical assets is typically due to a lack of asset management processes and procedures. Without documentation of critical assets and data, an organization is less able to defend and protect those assets from potential attackers and other threats.

Recommendations

Develop an inventory of all critical assets and data. It is also important to establish and document processes for managing these lists, including processes for updates, reviews, and storage.

(2) No insider threat program

Without an insider threat program, the risk of a successful insider exploit in the organization increases. The loss or compromise of critical assets, personally identifiable information (PII), sensitive information, and other valuable assets through insider fraud, theft, sabotage, or acts of violence or terror may cause irreparable damage.

Recommendations

Develop a formalized insider threat program with defined roles and responsibilities. The program must have criteria and thresholds for conducting inquiries, referring to investigators, and requesting prosecution. For more information, see the Common Sense Guide to Mitigating Insider Threats, Fifth Edition.

(3) Operational exercises not conducted

Organizations that do not conduct operational exercises cannot practice standard operating procedures (SOPs) in a realistic environment. Gathering lessons learned, improving incident management operations and procedures, and validating operations may also suffer.

Recommendations

Develop a formal process to perform operational exercises, create lessons learned, and incorporate them into future exercise objectives and operational SOPs. Doing so will benefit the organization tremendously. For more information, see the NIST SP 800-84 and NIST SP 800-61 Rev. 2.

(4) No operational security (OPSEC) program

Not having a formal operational security program can reduce awareness of sensitive information and operations. This lack of knowledge can lead to unintentional exposure of data about processes and procedures, along with the inability to properly handle, store, and transport sensitive data.

Recommendations

Establish a formal OPSEC program that covers sensitive information. The program should include policies for identifying, controlling, and handling sensitive information. The organization should also implement a policy for the storage, transport, and release of sensitive data. For more information, see NIST SP 800-61 Rev. 2 and NIST SP 800-53 Rev. 4.

(5) Documented plans and policies not developed

Not having developed plans and policies, such as an Incident Management Plan or a Communications Plan, can cause a number of problems, including delayed response times due to missing stakeholder and staff contact details, improper escalation of incidents, and the creation of new issues.

Recommendation

Develop an Incident Management (IM) Plan that all stakeholders review during updates. Organizations should develop related policies and procedures, such as a Communications Plan and Information Management Plan. Specifically, they should develop, maintain, distribute, and test an organization-wide communications plan that lists groups (e.g., Information Technology, Human Resources, Legal, Public Affairs, and Physical Security), individuals, and the details of their functional roles and responsibilities as well as relevant contact information. The Information Management Plan should contain a schema of classification and appropriate labels. The plan should include policies or guidance on media relations and acceptable use. For more information, see Executive Order 12958 Classified National Security Information.

Next Time

My next blog post will cover how to develop an Incident Management Plan and a Communications Plan, both of which are part of any productive Incident Management Program. In the meantime, check out the SEI’s recently released Incident Management Capability Assessment. The capabilities it presents can provide a baseline or benchmark of incident management practices, which organizations can use to assess their current incident management function. You can also learn more about the SEI’s Incident Management Resources and our work in this area.


Insider Threats in Information Technology (Part 6 of 9: Insider Threats Across Industry Sectors)

This blog post was co-authored by Carrie Gardner.

As Carrie Gardner wrote in the second blog post in this series, which introduced the Industry Sector Taxonomy, information technology (IT) organizations fall in the NAICS code category of professional, scientific, and technical services. IT organizations develop products and perform services that advance the state of the art in technology. In many cases, these services directly impact the supply chain, since many organizations rely on products and services from other organizations to carry out their own business goals. This post covers insider incidents in the IT sector and focuses mainly on malicious, non-espionage incidents.

The CERT Insider Threat Incident Corpus has 60 incidents in Information Technology, with 63 victim organizations [1] spread across three main subsectors: Telecommunications, IT Data Processing, and Application Developers. Telecommunications organizations account for the largest share of insider incidents in the corpus. One specific example of a telecommunications incident involves a contractor working for an Internet service provider (ISP): the insider committed Sabotage by gaining administrator access and disabling the Internet connection of all customers for almost three weeks, costing the victim organization more than $65,000 to fix.

Bar chart of the number of IT organizations, by subsector, impacted by insider threat incidents, 1996 to the present. The Telecommunications subsector had 30 incidents. IT, Data Processing, Hosting, Etc. had 21 incidents. Software Publishers and Web Developers had 12 incidents.

Federal mandates put forth by EO 13587 and NISPOM Change 2 require the DoD, U.S. government agencies, law enforcement, and defense contractors that access or handle classified information to have insider threat programs that monitor IT systems for threats such as data exfiltration and sabotage. The absence of similar federal mandates for the non-cleared private sector leaves many organizations, including those in IT, without insider threat programs or insider threat security controls. These organizations may be more susceptible to insider attacks, and incidents may go undetected simply because of a lack of security awareness training about insider threats and their impacts.

Of the 60 IT insider incidents, we identified 81 organizations impacted by those incidents, of which 63 (78%) were both the direct victim and the direct employer of the insider. The remaining 18 (22%) organizations involved trusted business partner relationships in which the insider was a contractor or had non-regular full-time employment with the victim organization.

Pie chart of information technology victim organization relationship to insiders. 18 organizations, or 22%, did not directly employ the insider. 63 organizations, or 78%, employed the insiders.

Sector Overview

Insider incidents in the IT sector included IT Sabotage (36.67%), Fraud (21.67%), and Theft of IP (16.67%).

Bar chart of the number of insider incidents within IT by case type, 1996 to the present. Sabotage: 22. Fraud: 13. Theft of IP: 10. Fraud and Theft of IP: 7. Sabotage and Theft of IP: 5. Misuse: 3.

The remaining analysis focuses on Sabotage, the most frequent incident type.

Sector Characteristics

Over one third (36.67%) of incidents impacting IT organizations involved Sabotage. The statistics below include only incidents where the case type was solely Sabotage (22 incidents). Each attribute (i.e., Who, What, When, Where, How, Why) considers only cases where that attribute was known.

Who? A majority (71.4%) of insiders were former employees, and an overwhelming majority (80.0%) were full-time employees while employed at the victim organization. Two-thirds (66.7%) were with the victim organization for less than a year. One-fifth (20.0%) were former employees whose access was not deactivated, and some (15.0%) had administrator or root privileges. Insiders were relatively young: teens (9.5%), twenties (38.1%), thirties (47.6%), and forties (4.8%). Most insiders occupied system administrator (31.8%), non-technical management (27.3%), or other technical (22.7%) positions, and some occupied various positions, including the aforementioned roles, throughout their tenure at the victim organization.

What? More than half (60.7%) of the targets in Sabotage incidents were networks or systems. Another common target was data: insiders deleted, modified, copied, or hid customer data (10.7%) and/or passwords (7.1%).

When? For the incidents where attack time was known (10), half (50.0%) involved malicious activity only outside of work hours, over a third (40.0%) involved activity only during work hours, and few (10.0%) involved activity both during and outside of work hours.

Where? Insiders primarily committed sabotage off site using some type of remote access (81.0%). Few committed sabotage on site (9.5%) or acted both on site and remotely (9.5%).

How? Unlike fraudsters, insiders committing sabotage are usually in more technical roles and can harm systems by changing lines of software code. Few insiders sabotaged backups (17.6%), created an unauthorized account (17.6%), or used a keystroke logger (5.9%). Almost a third (30.0%) abused their privileged access or modified critical data (30.0%). A quarter (25.0%) received or transferred fraudulent funds.

Why? Unsurprisingly, of the 20 cases with a known motive, all of the insiders were seeking revenge (100.0%).

Analysis

Insiders committing Sabotage in the IT sector tended to be in high-trust IT positions, such as those with administrator-level access and permissions. These insiders typically committed the incident outside of typical working hours. In insider Sabotage incidents where the financial impact was known and the victim organization directly employed the insider (17), the median financial impact was between $10,000 and $20,000. Overall, in IT insider incidents and all evaluated incident types, where impact was known (63 total), the median impact was between $5,000 and $26,000. For comparison, the median financial impact of a domestic, malicious insider threat incident–across all industries within the CERT Insider Threat Incident Corpus where financial impact is known–is between $95,200 and $257,500. Six Sabotage incidents (9.5%) occurring within the IT sector had a financial impact of $1 million or more.

Final Thoughts

Reliance on the supply chain within the IT sector is growing rapidly, particularly in today’s popular business models. Most IT Sabotage incidents were conducted by employees who had the greatest privilege and trust, which is why the CERT Division’s Common Sense Guide to Mitigating Insider Threats (CSG), Fifth Edition recommends creating separation of duties and granting least privilege.

By thoroughly understanding motives and implementing effective behavioral and technical monitoring strategies, organizations can better prevent, detect, and respond to insider incidents, including Sabotage. The cases of Sabotage in the IT sector tell us that former employees may possess the knowledge to do devastating harm. Some may retain access to an organization’s systems, and some may be motivated to seek revenge, a known factor in these incidents. Best Practice 20 of the Common Sense Guide referenced above recommends that organizations implement better practices and procedures for employee separation and for disabling access to organizational systems.

Stay tuned for the next post, which will spotlight the Healthcare Services sector, or subscribe to a feed of the Insider Threat blog to be alerted when any new post is available. For more information about the CERT National Insider Threat Center, or to provide feedback, please contact [email protected].

[1] For some incidents, there is a one-to-many mapping from the incident to multiple victimized organizations that directly employed the insider.
