Security Specifications for the Layperson

IoT and computer security fears are at the forefront of the news cycle. Foreign hackers, domestic malfeasants, and government-sponsored entities are hacking your home. Your TV is now the world's most effective listening bug. Your wireless home controller is a beachhead. Your gaming console (or something more... personal) is sending your habits to its corporate master. It's a tough world in which to keep a low profile today.

The deeper we dive into this world, the more companies try to bamboozle customers into thinking they won't be the next mark. Classic quotes (we've all heard them) include:

  • Secured by MILITARY GRADE encryption
  • Protected using AES-128
  • Uses US GOVERNMENT APPROVED technology

The reality is that nobody should care about these claims, true or not. Or rather, the "real" reality is that most people aren't qualified to judge what these claims mean. How would a customer compare two devices' security specs?

Unraveling the Buzzwords

First, the words "Military Grade" are absurd in any security context. American top-secret communications are done using NSA Suite A algorithms. These algorithms are so sensitive that even their codenames are state secrets. So how can anyone claim to use "Military Grade" encryption with a straight face? Get out of here!

Second, it generally doesn't matter which symmetric cryptographic algorithm you use. We've discussed this in the past: symmetric crypto is only as secure as its keys. Pick the wrong block cipher mode, and you can leak lots of information. This isn't about some abstract mathematical definition of key strength (though some keys are weaker than others); it's about safeguarding the secret key material itself. If you key your device in plaintext over Zigbee, I hope nobody is eavesdropping.
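To make the "wrong mode" point concrete, here's a minimal sketch, assuming the pyca/cryptography package (the key and plaintext are invented for illustration): under ECB mode, identical plaintext blocks encrypt to identical ciphertext blocks, so an eavesdropper sees the structure of your traffic without ever touching the key.

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(16)  # a perfectly good AES-128 key

# Two identical 16-byte blocks back to back, e.g. a repeated command.
plaintext = b"TURN_LIGHT_ON!!!" * 2

encryptor = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
ciphertext = encryptor.update(plaintext) + encryptor.finalize()

# Under ECB, equal plaintext blocks yield equal ciphertext blocks.
print(ciphertext[:16] == ciphertext[16:32])  # True: structure leaks
```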

The paranoid may key their devices with a token that carries black (encrypted) key material over a trusted I/O path. Then you can start to make real assertions about the security of communications with a device. But this is unnecessary, and far too ponderous for the average user.

There is also the fact that we have trained consumers to compare arbitrary numbers when picking products. For example, one vendor may claim to use AES-256. The other mentions AES-128 on the back of the box. But the device that uses AES-256 uses a fixed IV, and the key is the lead engineer's dog's name padded out with NULs. Sure looks better to the consumer, though. Would my product be "more secure" if I used 512-bit RSA? Maybe it would look that way to a layperson, even though 512-bit RSA keys have been publicly factored. I sure as hell wouldn't buy it based on that, though!
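Here's a hypothetical sketch (again assuming pyca/cryptography; key and messages are made up) of why that fixed IV sinks the bigger-number advantage: reuse a CTR-mode nonce with AES-256, and XORing two ciphertexts cancels the keystream, handing an eavesdropper the XOR of the plaintexts with no key recovery at all.

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)      # a perfectly good AES-256 key...
fixed_iv = b"\x00" * 16   # ...ruined by a fixed IV/nonce

def encrypt(message: bytes) -> bytes:
    enc = Cipher(algorithms.AES(key), modes.CTR(fixed_iv)).encryptor()
    return enc.update(message) + enc.finalize()

c1 = encrypt(b"unlock the door!")
c2 = encrypt(b"ignore this one.")

# Same key + same nonce = same keystream. XOR of the ciphertexts
# equals the XOR of the plaintexts; the keystream cancels out.
xored = bytes(a ^ b for a, b in zip(c1, c2))
assert xored == bytes(a ^ b for a, b in
                      zip(b"unlock the door!", b"ignore this one."))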

But Does it Matter?

Popping the stack, let's look at this in the context of our previous discussions around BLE. Suppose you don't actually need to hide any data from prying eyes. What are you trying to achieve, then? Why even use AES at all?

When a device needs to ensure it is talking to the appropriate endpoint, an HMAC will get you further than encryption. An HMAC tells the receiver which key generated the message authentication code, and therefore who sent the message. Creative use of HMAC with a limited message space even allows you to not send the message at all. Of course, it has the same key-storage weaknesses that AES does.
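A minimal sketch of the idea using Python's standard library (the key, counter, and message are stand-ins): the device and controller share a key, the sender tags each message, and the receiver recomputes the tag to verify both the content and the sender's possession of the key.

```python
import hmac
import hashlib

shared_key = b"provisioned-at-pairing-time"  # stand-in for a real key

def tag(counter: int, message: bytes) -> bytes:
    # The counter prevents replaying an old, validly tagged message.
    return hmac.new(shared_key, counter.to_bytes(8, "big") + message,
                    hashlib.sha256).digest()

def verify(counter: int, message: bytes, received_tag: bytes) -> bool:
    # compare_digest avoids leaking the tag through timing differences.
    return hmac.compare_digest(tag(counter, message), received_tag)

t = tag(1, b"set_brightness:50")
assert verify(1, b"set_brightness:50", t)      # genuine message passes
assert not verify(1, b"set_brightness:99", t)  # tampered message fails
```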

For a consumer device, key storage is a lost cause - few will invest in secure enclaves for a $5 light bulb. Besides, a light bulb is, at worst, an annoyance if someone takes control of it. So why bother encrypting a bulb's state changes when an HMAC does the job perfectly well?
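And here's a sketch of that "don't even send the message" trick for the bulb (the names and two-state message space are invented for illustration): the sender transmits just the tag, and the receiver recomputes the tag for each candidate state to recover which one was meant - no plaintext, and no encryption, ever crosses the air.

```python
import hmac
import hashlib

shared_key = b"bulb-pairing-key"  # stand-in for a real key
STATES = (b"on", b"off")          # the entire message space

def send(counter: int, state: bytes) -> bytes:
    # Only the MAC goes over the air; the state itself is never sent.
    return hmac.new(shared_key, counter.to_bytes(8, "big") + state,
                    hashlib.sha256).digest()

def receive(counter: int, received_tag: bytes) -> bytes | None:
    # Try every possible state and see which one the tag authenticates.
    for state in STATES:
        expected = hmac.new(shared_key, counter.to_bytes(8, "big") + state,
                            hashlib.sha256).digest()
        if hmac.compare_digest(expected, received_tag):
            return state
    return None  # unknown sender or corrupted frame

assert receive(7, send(7, b"off")) == b"off"
```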

Helping Consumers Make the Right Choices

A challenge we all face in the security space is increasing the signal-to-noise ratio of the real security conversation. Snake-oil salesmen exist everywhere, especially in information security. Companies throw out buzzwords to sound secure without actually saying anything. Letters and numbers sound fancy and military-esque. Hell, use some military terminology and people will think they're in a sci-fi movie.

Of course, this is a multi-part solution: we need to hold vendors accountable for their implementations, which is equal parts incentive and education.

First, software developers generally are not well trained in building secure systems. This means that whatever gets built is ad hoc and often an afterthought, and the Dunning-Kruger effect takes over. Once you've implemented AES-128 in ECB mode, you're secure, right? Educational material about cryptography and how it works is crucial. Providing tools that cut down on optionality is necessary, too. You can't trust most engineers to decide on the right block cipher mode. How do you do key exchange when pairing a device? These are the places where most security models fall apart. Developers are also bad at determining what needs to be secure in the first place. Guidelines, education on available techniques, and a good suite of tools will help improve this situation.
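On the key-exchange question, one well-trodden answer is an ephemeral Diffie-Hellman exchange at pairing time. A hedged sketch using pyca/cryptography's X25519 follows (the info label and key length are illustrative choices, and a real pairing flow would also need to authenticate the exchanged public keys, e.g. via a PIN or an out-of-band channel, to stop a man in the middle):

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each side generates an ephemeral keypair and trades public keys.
# (Both sides live in one process here purely for demonstration.)
device_priv = X25519PrivateKey.generate()
phone_priv = X25519PrivateKey.generate()

# Each side combines its private key with the peer's public key...
device_shared = device_priv.exchange(phone_priv.public_key())
phone_shared = phone_priv.exchange(device_priv.public_key())

# ...then stretches the raw shared secret into a symmetric session key.
def session_key(shared: bytes) -> bytes:
    return HKDF(algorithm=hashes.SHA256(), length=32,
                salt=None, info=b"pairing-v1").derive(shared)

assert session_key(device_shared) == session_key(phone_shared)
```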

Second, vendors are churning out products whose major differentiator is price. A low price means fewer resources in the development cycle; paying for security audits and pen tests is out of the question. The only way to break this cycle is to make vendors liable for their failings. The cost of a data breach today is minimal: some light reputational harm, an exec (or two) who might leave the firm. Pay some lip service to how it won't happen again and you're off scot-free. In the aftermath, though, the users bear the pain. After a breach, fraudulent credit card charges, stolen identities, and persistent, targeted phishing all become a reality for these users.

Finally, we need to educate customers. Provide them with tools that let them make real decisions about security. Educate them about what security schemes mean. Teach them about the risk of devices stealing their personal information. We need to define a universal language for talking about security in a personal setting. Stripping away the buzzwords and replacing them with meaningful descriptions and iconography is the first step, along with disclosing how certain conveniences lead to different kinds of risk. And the constant lie of "our systems are completely secure" needs to stop - it just makes this all the more farcical.

This is difficult, but it must happen, and the relevant information must stop being buried in a 30-page EULA. Until then, users will continue to pay the heavy price of being kept in the dark about how insecure these systems really are.
