Application Security – Who Is Responsible?
If the software industry can’t make applications secure, should governments wade in with regulations? Peter Judge thinks not
When your car fails, you can sue the manufacturer. But if your software turns out not to be secure, you may have a lot more trouble. Because in the IT world, these things are all too often the user’s responsibility.
Last week, most of the people involved in the government of the UK were busy determining who’d do what in the new coalition government (or else who’d run the defeated Labour party). But one of our unelected representatives in the House of Lords found time to go a hundred yards across Parliament Square to weigh in on software security at the Westminster Conference Centre in Victoria Square.
Lord Toby Harris is a Labour peer (and a former leader of Haringey Council). He’s not involved in the government cuts which claimed Becta this week, so he opened Thursday’s International Secure Systems Development (ISSD) conference with a presentation about the need to make software more secure.
Where are the standards for software security?
Lord Harris used an analogy in his speech. Cars are required to pass safety checks, and drivers have to pass a test before driving them on roads which are built to national standards, he said. When users take a PC for a spin on the Internet, by contrast, they have no such standards, and if things go wrong have very little comeback against the people who built and sold the software they use.
The roads are kept safe by regulations, then. So was Lord Harris calling for similar regulations to make software secure? No, he told eWEEK Europe after his speech. Apart from anything else, the fact that the Internet is international would make such an idea difficult to administer.
“I wasn’t making specific demands for regulation but there should be an expectation on software developers,” he told us. “It shouldn’t just be the responsibility of the end user. There should be responsibility on all the others involved, including the system designers and developers.”
Lord Harris would like to see some sort of “kitemark” [after the British Standards Institution kitemark] agreed collectively by the industry – which would guarantee to the user that software met minimum security levels: “Some sort of accreditation built into the process.”
This might actually fit in with Conservative ideas for a Centre for Cyber Security – and the industry could probably be persuaded to come up with something along the lines Harris suggests. “Stopping these crimes is not an easy thing and certainly not a legislative issue,” according to Ian Moyse, EMEA channel director at security software provider Webroot. “Having a kitemark type system and encouraging UK companies to step up the security game should be encouraged across organisations of all sizes.”
A kitemark could have value
Other support for the idea emerged at the ISSD conference. Chris Wysopal of Veracode presented on the state of software security, starting from the depressing statistic that two-thirds of business software fails. Third parties are the Achilles' heel of software development, Wysopal said, but better checking during the development process could help.
“The world would be a better place if people checked on security before putting new stuff out there,” he said. “It would be tough to do it with laws, so I’m looking for a market-based approach.”
He thinks industry accreditation would have the benefit of being implemented more quickly. But how could it be paid for?
When companies pay so much money for software, it should be possible, somewhere in the development cycle, to invest in making that software properly secure. The problem at the moment is that there is no incentive to make that investment.
When software fails or is insecure, the developer, producer and seller rarely suffer. The small print of any contract protects them.
If there were some sort of accreditation for secure software, then at least vendors could earn extra revenue from it. If users came to trust the security kitemark, they would pay a premium for accredited software, and it would actually be worth vendors’ while to make software secure.
Wysopal suggests the process needs a third-party test for the security of software. By a strange coincidence, this is pretty much exactly what his company offers – but he has a point.
An accreditation for the security of software is only going to be any good if it is based on some reliable way to assess how secure that software is.