Derrick Spooner co-authored this post.

Because of the scope and scale of the insider threat, the SEI recommends that organizations adopt a use-case-based approach to insider risk mitigation. In such an approach, organizations iteratively deploy capabilities to prevent, detect, and respond to the greatest threats to their most critical assets. However, the tools modern insider threat programs rely on to collect and analyze data do not adapt themselves to the organization or its changing insider threat landscape. A sound testing environment for insider threat tools allows an organization to evaluate and deploy these capabilities quickly, responsively, and effectively. This blog post presents the motivation, design, implementation, and challenges associated with building an insider threat testing environment.

The Challenges of Insider Threat Tools

Tools used by insider threat programs require ongoing refinement and enhancement. They do not arrive preconfigured to a specific organization's security posture, risk appetite, and policies and procedures, nor do they automatically reconfigure themselves when these conditions change. A crucial aspect of building a successful insider threat program is testing to measure the effectiveness of insider threat tools and their configurations, such as how and when they generate alerts on risk indicators.

However, testing new tools or changing existing tools in an operational environment is fraught with issues, including the following:

  • potential negative impacts to an organization’s networks and systems by faulty or misconfigured tools
  • flawed measures of effectiveness caused by an inability to differentiate malicious activity from benign (baseline) activity
  • security concerns associated with granting tool vendors access to operational data

A Sound Testing Environment

To address these issues, organizations should use a controlled, isolated testing environment. At the CERT Division of the SEI, we have helped many insider threat programs develop these environments, and we have developed our own as a reference architecture. From these experiences, we derived a series of design principles and functional requirements to help organizations develop their own insider threat tool-testing capabilities.

Every insider threat testing environment should realistically portray the organization’s users and include activities and attributes such as the following:

  • sending and receiving email to and from internal and external recipients using enterprise and web-based personal email accounts
  • using multiple browsers to visit a variety of websites
  • creating, reading, updating, and deleting files
  • using services like cloud storage, removable media, virtual private networks, printers, scanners, and remote administration protocols (e.g., remote desktop protocol, or RDP, and Secure Shell, or SSH)
  • portraying a robust organizational structure complete with user personas, privileged and nonprivileged users, and a reporting hierarchy
  • capturing personnel events, such as performance appraisals, complaints, reprimands, policy violations, and job status (e.g., promotions, terminations, resignations)
  • following a realistic work schedule, including increased activity during peak hours, reduced activity during breaks, and consistent working and nonworking hours
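
As a concrete illustration, a user persona and its activity mix can be captured as data that drives the simulation. The following Python sketch is hypothetical; the field names, weights, and persona details are illustrative rather than drawn from any specific framework.

```python
# Hypothetical sketch: a persona definition driving simulated user activity.
from dataclasses import dataclass, field

@dataclass
class Persona:
    username: str
    department: str
    privileged: bool          # e.g., system administrator vs. standard user
    manager: str              # reporting hierarchy
    work_hours: tuple         # (start_hour, end_hour), local time
    activities: dict = field(default_factory=dict)  # activity -> relative weight

analyst = Persona(
    username="jdoe",
    department="Finance",
    privileged=False,
    manager="asmith",
    work_hours=(9, 17),
    activities={
        "email_internal": 0.30,   # send/receive enterprise email
        "email_webmail": 0.05,    # personal web-based email
        "web_browsing": 0.25,     # multiple browsers, varied sites
        "file_crud": 0.20,        # create/read/update/delete files
        "cloud_storage": 0.10,    # upload/download via cloud service
        "removable_media": 0.05,  # USB copy operations
        "remote_access": 0.05,    # RDP/SSH sessions
    },
)
```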

Recommended Approach

1. Implement a virtualized representation of an enterprise network. Such an approach allows the organization to conduct controlled, isolated testing and validation using a dedicated, small-scale testing architecture that is

  • highly configurable so that the organization can adjust variables (e.g., security policies, software providers, and user behavior baselines)
  • decoupled so that the data-generation technique does not depend on a specific collection tool or log output format
  • controlled and repeatable so that a sound experimental process can control all but the specific variables being tested
  • accessible so that stakeholders, vendors, and other third parties can be involved as required
  • comprehensive so that both technical and behavioral observables are supported
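
To make these properties tangible, the topology itself can be kept as declarative data so that variables are adjusted between runs rather than rebuilt by hand. The following sketch is a hypothetical example; the host names, policy toggles, and file path are assumptions for illustration.

```python
# Hypothetical declarative topology for a small-scale test enclave.
# Adjusting these values between runs exercises the "highly configurable"
# and "controlled and repeatable" properties described above.
topology = {
    "domain": "corp.test",            # isolated lab domain, never routable
    "workstations": 50,               # tens to hundreds of simulated users
    "servers": ["dc01", "mail01", "file01", "proxy01"],
    "security_policies": {
        "usb_blocked": False,         # toggle to test removable-media controls
        "dlp_enabled": True,
        "log_verbosity": "verbose",   # e.g., minimal | standard | verbose
    },
    "user_behavior_baseline": "profiles/finance_department.json",
}
```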

2. Enable testers to configure different parameters, such as network size, security controls, and logging verbosity. The testing environment should also generate data in a way that does not depend on a particular capture mechanism. Some testing environments instead generate data by directly emitting log output in a given format. For example, with data loss prevention (DLP) software, a robust tool-testing environment mimics the actual user-centric activities of sending or moving files rather than simply generating synthetic log entries that mimic the output of a specific DLP product. This approach keeps the environment flexible, allowing the organization to update software or change vendors for a particular class of tool.
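
The distinction can be sketched in a few lines of Python. The preferred approach performs the real user-centric action and lets whatever DLP agent is installed observe it; the brittle alternative fabricates a vendor-specific log line. The function names, paths, and log format below are hypothetical.

```python
# Decoupled approach (preferred): perform the real user-centric action and
# let the installed DLP agent, whatever its vendor, observe and log it.
import shutil

def simulate_file_exfil_attempt(src: str, removable_mount: str) -> None:
    """Copy a tagged document to a removable drive; the action itself is
    the test stimulus, independent of any particular log format."""
    shutil.copy2(src, removable_mount)

# Coupled approach (avoid): fabricating a vendor-specific log line means the
# test breaks the moment the organization changes DLP products.
def emit_synthetic_dlp_log(path: str) -> str:
    return f"DLP-ALERT|action=copy|target=usb|file={path}"  # brittle
```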

3. Programmatically simulate the user's actions. This approach makes specific tests measurable and repeatable against the known baseline of activities. In other words, the benign activity should be executed programmatically so that malicious or nonstandard activity can be inserted as a known delta from the baseline.
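
One way to realize this is to drive all benign activity from a seeded, deterministic schedule and insert the malicious event explicitly, so the delta from baseline is exactly known. The sketch below uses hypothetical stub actions; a real environment would map these to actual persona activities.

```python
import random
from typing import Optional

def perform_benign_action(rng: random.Random) -> None:
    """Placeholder: draw a weighted activity (email, browsing, file I/O)
    from the persona profile. Hypothetical stub."""
    pass

def perform_malicious_action() -> None:
    """Placeholder: e.g., a bulk copy of tagged files outside work hours."""
    pass

def run_simulation(seed: int, inject_at_step: Optional[int] = None) -> None:
    """Replay a deterministic benign baseline; optionally insert one
    nonstandard event at a known step so the delta from baseline is exact."""
    rng = random.Random(seed)           # same seed -> identical baseline run
    for step in range(1000):
        if step == inject_at_step:
            perform_malicious_action()  # the only deviation from baseline
        else:
            perform_benign_action(rng)

run_simulation(seed=7)                      # baseline-only control run
run_simulation(seed=7, inject_at_step=500)  # identical run plus one known delta
```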

4. Make the platform accessible to all those involved in the testing and analysis process, including employees, trusted business partners, and vendors. With complex insider threat systems, it is generally more expedient to enable vendors to install and configure their own products than to rely on the organization’s staff to learn how to use the tool before installation.

5. Configure the environment to enable a holistic representation of employees and other individuals, including both the technical logs of their activities and behavioral data such as human resources records, the organization's reporting structure, employee performance, and policy violations.
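
For example, a holistic view can be produced by joining each technical log event with the individual's behavioral and HR context through a common identifier. The record fields and values below are hypothetical.

```python
# Hypothetical join of a technical event with behavioral/HR context.
hr_records = {
    "jdoe": {"manager": "asmith", "last_appraisal": "meets",
             "violations": ["AUP-2024-017"]},
}

def enrich_event(event: dict) -> dict:
    """Attach HR context to a raw technical log event so analysts see
    one holistic picture of the individual."""
    context = hr_records.get(event["user"], {})
    return {**event, "hr_context": context}

alert = enrich_event({"user": "jdoe", "action": "usb_copy", "bytes": 2_500_000})
```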

Underlying Infrastructure

The testing environment can consist of virtualized servers, workstations, and network devices running within a large-scale virtualized network topology, either on premises or in a cloud provider's infrastructure. Virtualization provides several key benefits, including the following:

  1. The testing environment can run large-scale simulations on a reduced number of physical servers; the size of the topology is limited primarily by the hypervisor's CPU, RAM, and storage and can range from tens to hundreds of simulated users.
  2. Testers can satisfy repeatability design requirements by using built-in snapshot functionality. Snapshots of all systems should be taken before inserting malicious test data to preserve a known-good baseline state. Once a test has been performed and observed, the systems can be rolled back to their snapshots and continue producing baseline traffic (see the sketch after this list).
  3. Testers can deploy identical copies of the network topology to perform multiple tests in parallel.
  4. The environment can accurately portray live, Internet-like traffic using tools such as the SEI’s GreyBox and TopGen. These tools simulate traffic by (1) virtualizing the core-routing infrastructure and (2) serving locally cached, static copies of several hundred external sites.
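
As a minimal sketch of the snapshot workflow in item 2, the following assumes a KVM-based lab with the libvirt Python bindings installed; other hypervisors expose equivalent snapshot APIs. The VM names and connection URI are hypothetical.

```python
import libvirt

SNAPSHOT_XML = """
<domainsnapshot>
  <name>pre-test-baseline</name>
  <description>Known-good state before malicious data insertion</description>
</domainsnapshot>
"""

def snapshot_all(conn: libvirt.virConnect, hosts: list) -> None:
    """Take a named snapshot of every VM before a test run."""
    for name in hosts:
        dom = conn.lookupByName(name)
        dom.snapshotCreateXML(SNAPSHOT_XML, 0)

def rollback_all(conn: libvirt.virConnect, hosts: list) -> None:
    """Revert every VM to the pre-test snapshot after the run."""
    for name in hosts:
        dom = conn.lookupByName(name)
        snap = dom.snapshotLookupByName("pre-test-baseline", 0)
        dom.revertToSnapshot(snap, 0)

conn = libvirt.open("qemu:///system")   # local hypervisor connection
snapshot_all(conn, ["dc01", "mail01", "ws-finance-01"])
# ... run the test and observe the tools under evaluation ...
rollback_all(conn, ["dc01", "mail01", "ws-finance-01"])
```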

User Simulation

With this testing approach, users can be simulated using any number of frameworks that realistically mimic user behavior, such as the SEI’s GHOSTS, or by developing custom code. The user agents can browse copies of local sites instead of browsing directly to external sites. The environment may also need to simulate other Internet-like services, such as webmail and cloud storage. These services can be provided using open source or free tools, such as iRedMail and ownCloud.
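
Where custom code is used instead of a framework like GHOSTS, a browsing agent can be as simple as a loop that requests locally served site copies at human-like intervals. The hostnames and timing parameters below are assumptions for illustration.

```python
# Custom-code sketch of a browsing agent that visits locally served site
# copies (e.g., those hosted by TopGen) instead of the real Internet.
import random
import time

import requests

LOCAL_SITES = [
    "http://news.example.lab/",
    "http://webmail.example.lab/",     # e.g., an iRedMail instance
    "http://cloud.example.lab/",       # e.g., an ownCloud instance
]

def browse(session: requests.Session, steps: int, rng: random.Random) -> None:
    """Issue page requests at human-like intervals against local copies."""
    for _ in range(steps):
        url = rng.choice(LOCAL_SITES)
        session.get(url, timeout=10)
        time.sleep(rng.uniform(5, 60))  # dwell time between page views

browse(requests.Session(), steps=20, rng=random.Random(42))
```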

By using these local services, the environment maintains consistency in data sources and ensures that testing does not depend on operational cloud-based services that can vary between tests. Moreover, this approach enables the system state to be fully reverted between tests to eliminate all traces of prior activity.

Simulating a robust user population also requires developing artifacts, such as organizational charts, acceptable-use policies, personal-conduct policies, and data-classification schemes.

  • An organizational chart establishes a chain of management for all employees and organizes them into peer groups in the organization’s departments and business units.
  • The acceptable-use and personal-conduct policies codify the violations that are tracked and justify monitoring for behavioral precursors, personal predispositions, and other concerning behaviors.
  • Data classification schemes provide descriptions and justifications for tagging documents that may require enhanced monitoring and logging.
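
For instance, the organizational chart artifact can be generated programmatically so that peer groups and chains of management stay consistent with the simulated personas. The employee names below are hypothetical.

```python
# Hypothetical generator for a small organizational chart artifact:
# a chain of management plus peer groups by department.
from collections import defaultdict

employees = [
    ("asmith", "Executive", None),
    ("jdoe", "Finance", "asmith"),
    ("mlee", "Finance", "asmith"),
    ("rkim", "Engineering", "asmith"),
]

peer_groups = defaultdict(list)
for username, department, manager in employees:
    peer_groups[department].append(username)

def chain_of_management(username: str) -> list:
    """Walk the reporting hierarchy from an employee up to the top."""
    by_name = {u: m for u, _, m in employees}
    chain, current = [], by_name.get(username)
    while current is not None:
        chain.append(current)
        current = by_name.get(current)
    return chain

print(peer_groups["Finance"])          # ['jdoe', 'mlee']
print(chain_of_management("jdoe"))     # ['asmith']
```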

Training

This kind of testing environment can also be leveraged for training purposes. Most insider threat analyst training is done on the job using live environments or paper-based exercises. Using an operational environment may be effective for on-the-job training, but it presents challenges like those previously discussed. Paper-based exercises lack the realism of interactions with actual systems and log data.

The repeatable nature of a virtualized environment makes it an ideal setting for developing hands-on training for insider threat analysts. The platform can also provide remote access so that participation is not limited by geographic location. By injecting controlled threat scenarios into the platform, analysts can explore the impact of malicious behavior across the virtual organization's systems and sensors.

Conclusion

A formal insider threat program tasked with addressing increasing attacks and a changing mission scope should implement, as part of a formal program lifecycle, an environment for testing the efficacy of policies, programs, tools, and control changes. We propose a set of key requirements for a testing environment that provides the flexibility and repeatability needed to determine the efficacy of those elements without affecting mission operations.