Whatever, put a chip in it
The careless days in IoT are numbered: Sebastian Floss explores the value of Security by Design for IoT devices, and offers 4 simple rules to get started.
As some of you will recognize, this headline has been borrowed from the infamous Twitter account @internetofshit. I like it for the unfortunate reason that, if you take a closer look at the growing market of Internet-connected devices, you will quickly come to the conclusion that it is really not the worst summary of what is going on in IoT right now.
And not just in the sense of stupid or silly additions to previously “offline” products. I am referring to the apparent mindset of companies jumping on the IoT bandwagon.
A few figures: According to the European Commission’s Eurobarometer for Cyber Security 2017, 69% of companies have no or only a basic understanding of their exposure to cyber risks. On the other hand, Gartner says we have more than 8 billion IoT devices out there in the wild. A study by HP conducted in 2015 found that 70% of the most commonly used IoT devices are being shipped with insecure defaults, a lack of encryption, and so forth. Even though that was three years ago, you can still do the math here.
Sure, consumers demand “smart products now!”, vendors are forced to react quickly and the easiest way of course is to take what you already have and “put a chip in it”. But why ignore security? Isn’t that becoming a selling point?
Well, first of all: No, it isn’t.
People might start to worry about cyber security as some abstract danger, but there are no benchmarks or other means of comparing products regarding cyber security. Features, on the other hand, can be compared easily, so there really is no incentive for a vendor to spend money on security. While it is easy to put the blame for the lack of IoT security solely on a bunch of cheapskate manufacturers, what’s truly flawed is placing responsibility for security solely in the hands of consumers.
It is basically a vicious circle: A new product gets released as “minimum viable”, carrying a big technical debt in security. The vendor wants to test its marketability at the least possible cost (a practice often misleadingly labeled agile development). Later, if the product achieves the desired success, it will get the attention of security researchers (or it just gets hacked by someone). Then, to avoid further damage to their brand, vendors will react and pay off their debt. But the same thing can and will happen again, even to the same product by the same manufacturer: Remember Intel’s disastrous last year? First the Management Engine, then Spectre and Meltdown. Yes, that’s not pure IoT, but you get the picture.
Admittedly, in the very long term this behavioral pattern could lead to consumers mistrusting new “smart” products, especially in niche markets, making it difficult for companies to launch new products when they don’t have a certain security reputation.
But let’s be honest: A lot has to happen before consumers arrive at this point.
I firmly believe that trying to raise consumer awareness is a rather futile endeavor. Unless vendors are forced to put labels on the box saying: “This device may try to attack your government or start mining bitcoins for organized crime”, we will not get consumers' attention.
As long as the product works as described, how is the consumer even supposed to know it poses a threat – especially if the threat is not immediately aimed at them!? We as consumers are not expected to tell if a device could be a safety risk, contains hazardous substances, or disturbs the Wi-Fi in the whole building. Here, in governmental regulations we trust!
But these regulations do not include cyber security. At least not yet.
What could possibly go wrong?
There will be governmental regulations. The question is to what extent. For some time now, the voices demanding regulatory measures regarding IoT security have been getting louder. Unfortunately but not surprisingly, there is no single solution in sight, since different stakeholders have different demands. When looking into current discussions in industry and politics, there are three major proposed approaches:
Mandatory minimum security
Since requirements need to be assessable, it is likely that there is going to be a list of specifications that can be easily met, like being forced to use a particular encryption standard or the banning of default passwords. But it will not be possible to assess the quality of the overall security implementation based on such standards. So, while a set of minimum security requirements can be easily set up by regulatory bodies and implemented by vendors, it provides the least amount of security.
Non-mandatory certifications
In its State of the Union 2017 press release, the EU mentions plans for a cyber security certification program: “[Vendors] will have to go through one single process in order to obtain a European certificate valid in all Member States. […] Finally, as the demand for more secure solutions is expected to rise worldwide, vendors and providers will also enjoy a competitive advantage to satisfy such a need”. Most likely, the marketing advantage of such certifications will wear off, especially since not every vendor will be able to afford them, so specific (niche) products will only be available “uncertified”. But what good is it to live in a fortress when some of the doors do not lock properly?
Software Liability
The third likely approach: Extending manufacturer liabilities, like establishing maximum reaction times to provide updates after a vulnerability has been discovered, with the imposition of fines for non-compliance. That means that during a product’s lifetime there will be costs for dealing with incidents. Easy to implement for governments, since we do have product liability laws that only need to be extended.
Solutions, anyone?
Since I am not a lawyer, and making assumptions about tech developments is dangerous in general, I do not know just what will be bestowed upon us in the end. But come what may, it will stir things up. Just think of the huge burden the GDPR is placing on companies right now – a perfect example of what happens when politics tries to solve what the markets missed out on.
What I do know, though, is that companies involved with IoT – vendors and users alike – should prepare themselves. Vendors, especially, need to start now if they do not want to be stuck with huge costs later. FOMO – the “fear of missing out” – should from now on apply to securing a product, too. For starters, it is time to get security into the heads of the people involved in software development. At present, even universities rarely teach classes on secure software development; if the topic comes up at all, it is treated as a mere side note. Make security a part of every software developer’s education, and it will become a given in the design process.
A product that has been designed for security from the ground up will need fewer alterations once we get laws or regulations, and it will pass certification more easily, too.
But the cost, think about the cost!
The only way to avoid additional costs is to start educating software developers. To prove the point that it really is not that hard to get started, here is a short excerpt from our secure development guidelines at ImagineOn: Four simple rules that do not cost any money if they are lived by. Don't get me wrong: I am not suggesting that this list is even remotely complete. I am claiming that if in the past every vendor had followed just a few basic rules like these, we would not be talking about IoT security here and now.
#1: Establish secure defaults
Minimize attack surfaces by disabling anything you do not need. Provide security by avoiding things like default passwords. Yet, remember: “Security at the expense of usability comes at the expense of security.”
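To make this concrete, here is a minimal sketch in C of one way to avoid a factory-wide default password: derive a unique setup password per device from random bytes. Everything here is illustrative and not from any particular product – the function name is hypothetical, and the `entropy` parameter stands in for bytes you would read from a hardware RNG or `/dev/urandom`.

```c
#include <stddef.h>

/* Hypothetical sketch: generate a per-device setup password instead of
   shipping the same hard-coded default on every unit. The caller supplies
   n random bytes (e.g., from a hardware RNG); ambiguous characters like
   'l', '1', 'O', '0' are left out of the alphabet for usability. */
int make_device_password(const unsigned char *entropy, size_t n,
                         char *out, size_t out_len)
{
    static const char alphabet[] =
        "abcdefghjkmnpqrstuvwxyzABCDEFGHJKMNPQRSTUVWXYZ23456789";

    if (out == NULL || entropy == NULL || out_len < n + 1)
        return -1;                 /* fail closed: buffer too small */

    for (size_t i = 0; i < n; i++)
        out[i] = alphabet[entropy[i] % (sizeof(alphabet) - 1)];
    out[n] = '\0';
    return 0;
}
```

The point is not the alphabet mapping but the shipping decision: the device leaves the factory with a secret that an attacker cannot look up in a manual.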
#2: Don’t talk to strangers
Neither trust your users to interact with your device in the desired manner nor trust the cloud service, backend, or whatever else you connect to. Every piece of data provided can be abused for malicious intent.
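As a sketch of what “don’t talk to strangers” means in code, here is a tiny frame handler in C that treats every incoming byte as hostile. The frame layout ([opcode][length][payload]) and all names are assumptions made up for this example, not any real protocol.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical sketch: parse an incoming command frame defensively.
   Assumed layout: [1 byte opcode][1 byte length][length bytes payload].
   Anything malformed is rejected before a single payload byte is used. */
enum { OP_SET_NAME = 0x01, MAX_NAME = 16 };

int handle_frame(const uint8_t *buf, size_t buf_len,
                 char *name_out, size_t name_out_len)
{
    if (buf == NULL || buf_len < 2)
        return -1;                 /* too short to even carry a header */

    uint8_t op  = buf[0];
    uint8_t len = buf[1];

    if ((size_t)len + 2 != buf_len)
        return -1;                 /* declared length must match the frame */
    if (op != OP_SET_NAME)
        return -1;                 /* unknown opcode: drop it, don't guess */
    if (len == 0 || len > MAX_NAME || (size_t)len + 1 > name_out_len)
        return -1;                 /* bound the copy before copying */

    memcpy(name_out, buf + 2, len);
    name_out[len] = '\0';
    return 0;
}
```

Note that the length field sent by the peer is checked against the actual buffer size before it is ever used – trusting that field as-is is exactly how classic IoT buffer overflows happen.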
#3: Security ≠ Secrecy
Avoid security by obscurity and keep things transparent. The black-box approach to protect intellectual property is acceptable, but be under no illusions: Any chip’s data can be read out, any software reverse-engineered.
#4: Accept it, it will be hacked
Develop your software with the ability to react to failure and be prepared to provide updates in case of a disclosed vulnerability. (The money-saving part here is to “be prepared”)
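One small, concrete piece of “being prepared” can be sketched as an update acceptance gate in C that refuses downgrades, so a fixed vulnerability cannot be reintroduced by replaying an old firmware image. The struct, field names, and the idea of a separate “security epoch” are assumptions for illustration; signature verification of the image is assumed to happen elsewhere.

```c
#include <stdint.h>

/* Hypothetical sketch: metadata attached to a firmware image.
   security_epoch is bumped whenever a vulnerability is fixed, so even an
   authentic older image can be refused after a disclosure. */
struct fw_info {
    uint32_t version;
    uint32_t security_epoch;
};

int accept_update(const struct fw_info *installed,
                  const struct fw_info *candidate,
                  uint32_t min_security_epoch)
{
    if (candidate->security_epoch < min_security_epoch)
        return -1;   /* known-vulnerable image: refuse the rollback */
    if (candidate->version <= installed->version)
        return -1;   /* refuse downgrades and same-version reflashes */
    return 0;
}
```

Designing this gate in from day one costs a few lines; retrofitting rollback protection onto devices already in the field is the expensive part.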
Admittedly, to follow these rules you do need to know what you are doing. So they will not help if you maintain a “Whatever, put a chip in it!” attitude.
But companies without any security expertise among their staff should, in any case, no longer be developing their own IoT hardware. Seek guidance from others, or buy the required parts and software from third parties. Go now – time’s a-wastin’!
Sebastian Floss is the founder of ImagineOn, a development studio for digital and electronic products and a consultancy for development and manufacturing processes. For nearly twenty years now his mission has been to help create more usable, reliable, and secure software for almost any kind of device. His personal expertise covers development in C/C++ as well as electronics prototyping and design for manufacturability.
Please note: The opinions expressed in Industry Insights published by dotmagazine are the author’s own and do not reflect the view of the publisher, eco – Association of the Internet Industry.