by Chris Clearfield
Harvard Business Review, June 26, 2013
Cyber attacks, once primarily directed against networks to steal confidential information and wreak virtual havoc, have begun to expand and are now directly affecting the physical world. For example, the recent hacking of the Associated Press’s Twitter account by the Syrian Electronic Army and subsequent tweet about an explosion at the White House caused the U.S. stock market to decline almost 1% before the news was revealed as a hoax. In 2010 the computer worm Stuxnet was discovered and implicated in the attack that caused physical damage to centrifuges at Iranian nuclear enrichment facilities. In 2012 a hacker built and revealed a simple device that can open Onity-brand electronic locks (which secure over 4 million hotel room doors) without a key.
The growing Internet of Things — the connection of physical devices to the internet — will rapidly expand the number of connected devices integrated into our everyday lives. From connected cars and iPhone-controlled locks (several versions of which are in or close to production) to the hypothetical “smart fridge” that will one day order milk for me when I’ve run out, these technologies bring with them the promise of energy efficiency, convenience, and flexibility. They also have the potential to allow cyber attackers into the physical world in which we live as they seize on security holes in these new systems.
As consumer demand for connected devices increases (and projections from Cisco and others suggest that there will be 50 billion connected devices by 2020), traditional manufacturers will, en masse, become manufacturers of connected devices. As these devices become subject to the same cyber threats with which software developers have long been familiar, companies will need to take steps to integrate security considerations into their designs and design processes right from the start.
Train engineers to apply existing systems-engineering tools to security threats. Apart from those who work on specific niche applications, engineers who write software for embedded hardware systems don’t usually focus on security issues. For example, although Onity locks used a “secret” cryptographic key to prevent unauthorized access, the key was stored insecurely in the lock’s memory, allowing hackers to bypass the security and open the lock. And several models of networked security cameras, designed for remote streaming of real-time security footage over the internet, are vulnerable to remote hacking through a software flaw that exposes the video stream to unauthorized parties and compromises security and privacy. Educating engineers on common cyber threats and design paradigms that have evolved to mitigate attacks would allow them to integrate existing robust security protections into the systems-engineering practices that they already use to build reliable, stable systems.
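To make the design lesson concrete, here is a minimal, hypothetical sketch (not the actual Onity protocol) of the kind of well-established paradigm engineers could apply: a challenge-response check, in which the lock proves the opener knows the secret key without the raw key ever crossing the wire or being compared directly.

```python
import hmac
import hashlib
import os

# Hypothetical illustration only: a challenge-response scheme proves
# knowledge of a secret key without transmitting it, in contrast to a
# design where the key can simply be read out of the device's memory.

SECRET_KEY = os.urandom(32)  # provisioned into the lock at manufacture

def lock_issue_challenge() -> bytes:
    """The lock sends a fresh random nonce to the would-be opener."""
    return os.urandom(16)

def opener_respond(challenge: bytes, key: bytes) -> bytes:
    """An authorized opener answers with an HMAC over the challenge."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def lock_verify(challenge: bytes, response: bytes) -> bool:
    """Recompute the expected answer; compare in constant time to
    avoid leaking information through timing side channels."""
    expected = hmac.new(SECRET_KEY, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = lock_issue_challenge()
assert lock_verify(challenge, opener_respond(challenge, SECRET_KEY))
assert not lock_verify(challenge, b"\x00" * 32)  # wrong key is rejected
```

A scheme like this still requires protecting the key in hardware, but it removes the single point of failure of a readable, directly compared secret.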
Train engineers to incorporate security into products by using modular hardware and software designs, so that a breach in one area can’t take control over other parts of the system. Technologies like microkernels and hypervisors (which allow individual components to fail and be restarted without affecting other parts of the system) are already commonly used to increase the reliability of embedded systems. These technologies also isolate different parts of the system from one another in the event of a security breach. So, for example, if attackers remotely take control of a car’s infotainment system through an insecure music-streaming service or e-mail app, they won’t have access to the authentication or navigation application to change the car’s destination or order a remote pickup.
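The isolation principle can be sketched in a few lines. This is a toy, deny-by-default message broker — the component and message names are invented, and a real embedded system would enforce the boundary with a hypervisor or hardware memory protection rather than in a single process — but it shows the idea: each subsystem may only send the messages on its declared whitelist, so a compromised infotainment stack cannot reach the navigation controls.

```python
# Hypothetical compartmentalization sketch: components communicate only
# through a broker, which forwards an explicit whitelist of messages.
# Anything not on a sender's whitelist is denied by default.

ALLOWED = {
    "infotainment": {"audio.play", "audio.stop", "display.show"},
    "navigation":   {"route.set", "display.show"},
}

class MessageBroker:
    def __init__(self):
        self.handlers = {}

    def register(self, topic, handler):
        self.handlers[topic] = handler

    def send(self, sender, topic, payload):
        # Deny by default: a hijacked component cannot issue messages
        # outside the set it was designed to send.
        if topic not in ALLOWED.get(sender, set()):
            raise PermissionError(f"{sender!r} may not send {topic!r}")
        return self.handlers[topic](payload)

broker = MessageBroker()
broker.register("audio.play", lambda p: f"playing {p}")
broker.register("route.set", lambda p: f"routing to {p}")

broker.send("infotainment", "audio.play", "podcast")       # allowed
# broker.send("infotainment", "route.set", "anywhere")     # PermissionError
```

The design choice worth noticing is that permissions are declared per component, not checked ad hoc at each call site — a breach of one module buys the attacker only that module’s narrow interface.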
Use existing, open security standards where possible. Open security standards, whose details and implementation have been investigated and vetted by many experts, are more secure than proprietary solutions. Robust security is hard to achieve, and mistakes in proprietary approaches often manifest themselves only when a third party has succeeded in uncovering a security weakness. The internet is built on open standards. Technologies like TLS (which provides identification, encryption, and protection against eavesdropping) and OAuth (an open standard for authorization) provide secure, tested protocols. While choosing an established platform does remove direct control over some security design decisions, it is preferable to rolling one’s own custom solution, which will have been subject to less scrutiny and the input of fewer experts.
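In practice, adopting an open standard often means a few lines of glue around a vetted implementation. The sketch below, assuming a hypothetical device that phones home over the public internet, uses Python’s standard `ssl` module: the default context turns on certificate verification and hostname checking, decisions that a home-grown encryption layer would have to get right from scratch.

```python
import socket
import ssl

# Sketch: lean on the platform's vetted TLS stack (an open, widely
# reviewed standard) rather than a custom encryption scheme. The host
# name below is a placeholder for wherever a device would phone home.

def connect_securely(host: str, port: int = 443) -> str:
    # create_default_context() enables certificate verification and
    # hostname checking by default -- the secure settings come for free.
    context = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            # The certificate and hostname were verified during the
            # handshake; all traffic from here on is encrypted.
            return tls.version()  # e.g. "TLSv1.2" or "TLSv1.3"
```

The point is not the specific API but the division of labor: the device maker writes the application logic, while the hard cryptographic decisions stay inside a standard that thousands of experts have already attacked.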
Encourage a skeptical culture. In addition to incorporating security considerations into formal design processes, companies should encourage a skeptical culture in which intellectually diverse groups from different product teams review one another’s designs and give feedback about flaws, including those that affect security. One particularly useful approach is to designate internal specialists or external experts as devil’s advocates and make it their job to independently review, test, and try to break existing systems. Products built in a culture where skepticism is not just encouraged but formally ensured are not only more secure but generally more reliable as well.
The smart, connected fridge that will know when I’ve run out of milk and automatically place an order seems like a lovely, benign addition to my house. But when that fridge also has access to my credit card and can wirelessly unlock a door for a delivery person, it becomes less benign, especially if it depends on a security model designed for a fridge that only plugs into the power outlet. As cars, locks, cameras, and other traditionally unconnected products join the Internet of Things, cyber threats directed toward hardware will affect an increasing range of companies. For these companies, investing in robust, open security up front will be far less costly than deploying a proprietary system whose hidden flaws can harm customers, trigger costly product recalls, and damage the brand.