Why Narrow Indicators of Compromise Don’t Cut It for Endpoint Monitoring

by Al Hartmann

March 23, 2015

5 min read

Indicator Breadth—Broad Versus Narrow

Comprehensive cyber attack reports typically include appendices listing detailed indicators of compromise. These tend to be quite narrow in scope, applying to a particular attack group as observed in a specific attack on a targeted organization over a limited time span. Often these narrow indicators are specific artifacts of an observed attack that may constitute definitive evidence of compromise by themselves. That gives them high specificity for that particular attack, but typically at the cost of low sensitivity to similar attacks with different artifacts.

Simply put, narrow indicators have very limited scope, which is why they exist by the billions and billions in enormous, continually growing databases of malware signatures, suspect network addresses, malicious file paths and registry keys, packet and file content snippets, intrusion detection rules, and so on. Ziften’s continuous endpoint monitoring solution aggregates a number of these third-party databases and threat feeds into the Ziften Knowledge Cloud to take advantage of known-artifact detection. This detection can be applied in real time as well as retrospectively. The latter is important given the short-lived nature of these artifacts, as attackers continually obfuscate the details of their attacks to frustrate this narrow IoC detection approach. This is why a continuous monitoring solution must archive monitoring results for months or years (corresponding to industry-reported typical attacker dwell times), to provide a sufficient lookback horizon.
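To make the retrospective idea concrete, here is a minimal sketch of matching archived endpoint records against a newly received indicator feed. The field names, hosts, and indicator values are illustrative assumptions, not Ziften's actual schema or data:

```python
from datetime import datetime, timedelta

# Hypothetical archived endpoint telemetry: (timestamp, host, artifact_type, value)
archive = [
    (datetime(2014, 11, 2), "host-17", "sha256", "ab12e9"),
    (datetime(2015, 1, 9),  "host-03", "ip",     "203.0.113.50"),
    (datetime(2015, 3, 1),  "host-17", "path",   r"C:\Windows\Temp\svch0st.exe"),
]

# Narrow IoCs newly published by a threat feed (type, value) pairs
iocs = {
    ("ip", "203.0.113.50"),
    ("path", r"C:\Windows\Temp\svch0st.exe"),
}

def retro_match(archive, iocs, now, lookback_days=365):
    """Flag archived events that match current IoCs within the lookback horizon."""
    horizon = now - timedelta(days=lookback_days)
    return [(ts, host, kind, val)
            for ts, host, kind, val in archive
            if ts >= horizon and (kind, val) in iocs]

hits = retro_match(archive, iocs, now=datetime(2015, 3, 23))
```

The point of the lookback parameter is exactly the dwell-time argument above: an indicator published today can still convict activity recorded months earlier, but only if that telemetry was retained.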

While of substantial detection value, narrow IoCs are largely ineffective at detecting new targeted attacks by skilled cyber adversaries. New target-specific attack code can be readily pre-tested against typical enterprise security products in lab environments to verify non-reuse of detectable artifacts. That is a prominent weakness of security products that function simply as black/white classifiers, i.e. those providing an explicit determination of malicious or benign. That detection approach is too easily evaded. And the enterprise you are defending is likely to be thoroughly pwned for months or years before detectable artifacts can be identified (after intensive investigation) for your specific attack instance.

In contrast to the ease with which attack artifacts can be obfuscated by common hacker toolsets, the characteristic strategies and techniques—the modus operandi—used by hackers have endured for decades. General methods such as weaponized documents and websites, vulnerability exploitation, new service installation, modification of sensitive directory and registry areas, module injection, new scheduled tasks, malicious scripting, memory and drive corruption, credentials compromise, and many others are broadly typical. Proper use of system logging and monitoring instrumentation can observe much of this characteristic attack activity, when appropriately combined with security analytics that focus on the highest-risk observations. This eliminates the potential for attackers to pre-validate the evasiveness of their attack code, since risk quantification is not black/white but nuanced shades of gray. In particular, all endpoint risk is relative and varying across any system/user population and time span, and that population (and its temporal dynamics) cannot be duplicated in any lab environment. The fundamental attacker evasion methodology is foiled.
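The "shades of gray" scoring idea can be sketched in a few lines. This is a toy illustration, not Ziften's analytics: it scores one hypothetical behavioral feature (new service installations per host) relative to the current population, so a host's risk depends on its peers rather than on any fixed malicious/benign threshold an attacker could pre-test against:

```python
from statistics import mean, stdev

# Hypothetical per-host behavioral feature: count of new service
# installations observed this week (illustrative numbers, not real data)
new_services = {
    "host-01": 0, "host-02": 1, "host-03": 0, "host-04": 0,
    "host-05": 2, "host-06": 0, "host-07": 9, "host-08": 1,
}

def relative_risk(observations):
    """Score each host relative to the population (a z-score),
    rather than against a fixed black/white threshold."""
    mu = mean(observations.values())
    sigma = stdev(observations.values()) or 1.0  # avoid divide-by-zero
    return {host: (v - mu) / sigma for host, v in observations.items()}

scores = relative_risk(new_services)
riskiest = max(scores, key=scores.get)  # the outlier relative to its peers
```

Because the scores are defined by the live population and its dynamics, an attacker cannot reproduce that baseline in a lab to tune code that stays below it.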

In future blogs we will examine Ziften endpoint risk analysis in more depth, as well as the important relation between endpoint security and endpoint management: “You can’t secure what you can’t manage, and you can’t manage what you don’t see.” Organizations get pwned because they have less oversight and control of their endpoint population than their adversaries have. Stay tuned…