Study Exposes Weaknesses Of Risk-Based Security

An audit by the US Department of Energy has identified a major security weakness in the way that organisations identify critical assets

A recent audit from the US Department of Energy’s Office of Inspector General painted a not-so-rosy picture of efforts to secure the US power grid. But it also highlighted something of a conundrum in the world of compliance: how to take a truly risk-based approach when organisations have an incentive to under-report risk.

Inside the report (PDF), the department states that its audit, conducted between October 2009 and November 2010, found that existing CIP (critical infrastructure protection) standards do not always include controls commonly recommended for protecting critical information systems. But another problem was much more basic: the standards did not include a clear definition of what constitutes a critical asset.

Clarity necessary

“When outlining what attributes should be considered when proposing reliability standards, the (Federal Energy Regulatory Commission) noted in Order 672…that CIP reliability standards should be clear and unambiguous regarding what is required and who is required to comply,” the report states. “The Commission noted that such clarity was necessary because users, owners and operators of the bulk electric system must know what they are required to do to maintain reliability. Despite this guidance, both Commission and NERC (North American Electric Reliability Corporation) officials stated that they believed entities were under-reporting the number of critical assets and associated critical cyber assets.”

For example, the DOE notes that in April 2009, then-NERC Chief Security Officer Michael Assante reported that only 29 percent of power generation owners and operators – and less than 63 percent of power transmission owners – identified at least one critical asset on a self-certification compliance survey. Subsequent filings by organisations have not shown significant improvement in the reporting of critical assets, despite the fact those assets could include such things as control centres and transmission substations, the report adds.

“Every so-called risk-based security plan starts with: ‘identify your critical assets’,” said Richard Stiennon, chief research analyst at IT-Harvest. “This never works in IT organisations because it requires someone to admit that the assets they are responsible (for) are not critical. Of course the DBAs (database administrators) say their Oracle database servers are critical, the email guys say email is critical, the web team says the web servers are critical. So you do not get the weighted differentiation you hoped for.”

When regulations are involved, there can be the opposite effect, as businesses look to avoid some of the costs associated with compliance, he said.

“If you have to disclose a breach of critical health care information or PII (personally identifiable information) immediately, suddenly none is critical,” he said. “If you have to archive critical communications, suddenly no communication is critical. This is why regulation based on risk does not work either.”

Differences of opinion

Risk-based regulation introduces the potential for differences of opinion when the risk rating of a particular asset is determined by the individual responsible for that asset, said Sumner Blount, director of product marketing, security and compliance at CA Technologies. Still, a one-size-fits-all approach, in which the risk of a given asset is not considered at all, is even worse.

“A balance is clearly needed,” he said. “Organisations need to evaluate asset importance based on clearly documented criteria, and the decision should be made by cross-functional, compliance-savvy teams rather than individual asset owners. Similarly, the definition and treatment of critical information or PII should not be up to one person…There are generally accepted definitions for this type of information for regulatory purposes, and where none exists, definitions should be developed by the team so as to avoid conflicts later on.”

In addition, the complexity and redundancy of controls should be related, to some extent, to the impact and likelihood of a situation that would cause the control to fail, Blount said. Some compliance controls, such as making sure administrators only have the rights they need, are essential due to both the likelihood and the potential impact of a violation. Other scenarios are much less likely and therefore don’t require the same type of strong controls, he added.
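Blount’s point amounts to a simple risk matrix: the rigour of a control scales with the likelihood and impact of the event it guards against. The sketch below is purely illustrative; the 1-to-5 scales, thresholds and control tiers are assumptions made for the example, not figures taken from the DOE report or any NERC CIP standard.

```python
# Illustrative only: a toy likelihood-times-impact model for deciding how much
# control rigour to apply. Scales, thresholds and tiers are assumptions for
# this sketch, not drawn from the DOE report or NERC CIP.

def control_rigour(likelihood: int, impact: int) -> str:
    """Map 1-5 likelihood and 1-5 impact ratings to a control tier."""
    score = likelihood * impact          # simple risk score, 1..25
    if score >= 15:
        return "strong controls (least privilege, monitoring, redundancy)"
    if score >= 8:
        return "standard controls"
    return "baseline controls"

# Excessive administrator rights are both likely and high-impact, so they land
# in the top tier; a low-likelihood, low-impact scenario does not.
print(control_rigour(likelihood=4, impact=5))   # strong controls ...
print(control_rigour(likelihood=2, impact=2))   # baseline controls
```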

“In short, risk-based compliance is like Churchill’s description of democracy – it’s one of the worst ways to approach compliance… except for all the other ways that have been tried,” he said.

Financial incentives

While Blount believes risk-based regulations have their place, Stiennon argued that regulation needs to move beyond such methodologies.

“They have not worked in IT security; they will not work in CIP,” he said. “Laws and regulations must supply real financial incentives. Instead of mandating password policies they should assign liability. Make a power generating utility liable for the damage caused by an outage from a cyber incident and they will find the resources to devote to IT security. They, along with their insurers, and bond raters, will quickly determine their risks.”

A vulnerability on an exposed machine is a higher priority than one on a machine that is not exposed, for example, he noted, just as a vulnerability that is being exploited by a worm or virus is a higher priority than one that requires a targeted attack to exploit.
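That ordering can be captured in a few lines of code. The following is a hypothetical scoring scheme that mirrors Stiennon’s reasoning, in which exposure and active exploitation each raise a vulnerability’s priority; the weights, field names and example data are assumptions for illustration, not an established scoring system such as CVSS.

```python
# Hypothetical prioritisation sketch: exposure and active exploitation raise
# a vulnerability's priority. Weights and field names are assumptions for the
# example, not a standard scoring system.

from dataclasses import dataclass

@dataclass
class Vulnerability:
    name: str
    exposed: bool             # reachable from outside the network
    actively_exploited: bool  # worm/virus or other in-the-wild exploitation

def priority(v: Vulnerability) -> int:
    """Higher number means fix sooner."""
    score = 1
    if v.exposed:
        score += 2            # exposed machines outrank internal-only ones
    if v.actively_exploited:
        score += 3            # active exploitation outranks targeted-only threats
    return score

vulns = [
    Vulnerability("internal-only flaw", exposed=False, actively_exploited=False),
    Vulnerability("exposed flaw, worm in the wild", exposed=True, actively_exploited=True),
]
for v in sorted(vulns, key=priority, reverse=True):
    print(priority(v), v.name)
```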

“Imagine a military commander using risk based management,” he said. “During a battle he would deploy his forces to protect the most valuable assets instead of where the enemy was penetrating his line.”