When your car fails, you can sue the manufacturer. But if your software turns out not to be secure, you may have a lot more trouble. Because in the IT world, these things are all too often the user’s responsibility.
Last week, most of the people involved in the government of the UK were busy determining who’d do what in the new coalition government (or else who’d run the defeated Labour party). But one of our unelected representatives in the House of Lords found time to go a hundred yards across Parliament Square to weigh in on software security at the Westminster Conference Centre in Victoria Square.
Lord Toby Harris is a Labour peer (and a former leader of Haringey Council). He’s not involved in the government cuts which claimed Becta this week, so he opened Thursday’s International Secure Systems Development (ISSD) conference with a presentation about the need to make software more secure.
Roads, then, are kept safe by regulation. So was Lord Harris calling for similar regulations to make software secure? No, he told eWEEK Europe after his speech. Apart from anything else, the international nature of the Internet would make such rules difficult to administer.
“I wasn’t making specific demands for regulation but there should be an expectation on software developers,” he told us. “It shouldn’t just be the responsibility of the end user. There should be responsibility on all the others involved, including the system designers and developers.”
Lord Harris would like to see some sort of “kitemark” [after the British Standards Institution kitemark] agreed collectively by the industry – which would guarantee to the user that software met minimum security levels: “Some sort of accreditation built into the process.”
This might actually fit in with Conservative ideas for a Centre for Cyber Security – and the industry could probably be persuaded to come up with something along the lines Harris suggests. “Stopping these crimes is not an easy thing and certainly not a legislative issue,” according to Ian Moyse, EMEA channel director at security software provider Webroot. “Having a kitemark type system and encouraging UK companies to step up the security game should be encouraged across organisations of all sizes.”
Other support for the idea emerged at the ISSD conference. Chris Wysopal of Veracode presented on the state of software security, starting from the depressing statistic that two-thirds of business software fails security testing. Third parties are the Achilles' heel of software development, Wysopal said, but better checking during the development process could help.
“The world would be a better place if people checked on security before putting new stuff out there,” he said. “It would be tough to do it with laws, so I’m looking for a market-based approach.”
He thinks industry accreditation would have the benefit of being implemented more quickly. But how could it be paid for?
When companies pay such a lot of money for software, it should be possible, somewhere in the development cycle, to put the investment in to make the software properly secure. The problem at the moment is that there is no incentive to make that investment.
When software fails or is insecure, the developer, producer and seller rarely suffer. The small print of any contract protects them.
If there were some sort of accreditation for secure software, vendors could at least earn extra revenue from it. If users came to trust the security kitemark, they would pay a premium for accredited software, and it would actually be worth vendors’ while to make their software secure.
Wysopal suggests the process needs a third-party test of the software’s security. By a strange coincidence, that is pretty much exactly what his company offers – but he has a point.
An accreditation for the security of software is only going to be any good if it is based on some reliable way to assess how secure that software is.
Comments
Could someone devise a new Trojan and then put a security kitemark on it to tempt users to download it? What would stop them?
Like the Verified by Visa nonsense.
You are so right.
Government regulations are clearly unworkable, but an industry-administered scheme would have much the same problems.
It is possible to take a secure hash value of an executable file to uniquely identify it. This hash could then be used to look up kitemarked apps in an online directory. A fake “kitemarked” app would have no entry in an online directory of kitemarked apps. This lookup could be automated.
Veracode has an online directory of apps that have passed our VerAfied testing and earned our kitemark.
http://www.veracode.com/directory
-Chris
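As a rough illustration of that hash-and-lookup idea, the sketch below (Python, using a made-up directory URL for illustration only – it is not based on Veracode’s actual service) computes a SHA-256 digest of an executable and asks a directory whether that digest is registered:

```python
import hashlib
import sys
import urllib.error
import urllib.request

# Hypothetical kitemark directory endpoint -- a stand-in for illustration only,
# not a real Veracode (or other) API.
DIRECTORY_URL = "https://kitemark.example.org/lookup/"


def sha256_of_file(path: str) -> str:
    """Return the SHA-256 digest of a file, read in chunks to handle large binaries."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()


def is_kitemarked(path: str) -> bool:
    """Check whether the executable's hash has an entry in the directory.

    A genuinely accredited build appears in the directory; a Trojan carrying
    a forged kitemark logo would produce a hash with no entry.
    """
    try:
        with urllib.request.urlopen(DIRECTORY_URL + sha256_of_file(path)) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False  # e.g. 404: no entry, so treat as unaccredited


if __name__ == "__main__":
    target = sys.argv[1]
    print(f"{target}: {'kitemarked' if is_kitemarked(target) else 'not in directory'}")
```

Because the digest changes if even one byte of the binary changes, a tampered or impostor build cannot reuse a genuine directory entry.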