At WWDC in 2020, Apple announced Touch ID and Face ID authentication for the Web as a new feature in Safari.
On November 12, 2020, Apple released macOS Big Sur. In the hours after the release went live, somewhere in Apple's infrastructure an OCSP responder cried out in pain…
IoT and computer security fears are at the forefront of the news cycle. Foreign hackers, malfeasants from America, or government-sponsored entities are hacking your home...
We're starting to see Bluetooth Low Energy (BLE) show up everywhere. Fitness trackers, IoT doohickeys, deadbolt locks, and even security tokens are shipping with BLE. With your cell phone as the center of it all, things have never been more convenient.
Devices that provide 'local' APIs (i.e., services exposed to the local network) tend to be a lot easier to exploit. A buffer overflow here. A command injection there. Pre-authorization exploits abound! But devices that only listen to external services for commands tend to be harder targets.
Sometimes you just want to verify that the user has a secret. A secret comes in many forms - a key, a random value, a secret function. How could we verify a user has a secret without building a heavy, cryptographically secure channel?
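One lightweight direction is a challenge-response over the shared secret rather than transmitting the secret itself. A minimal Python sketch of that idea (illustrative only; the function names are made up, and this is not necessarily the scheme the post lands on):

```python
import hashlib
import hmac
import secrets

def make_challenge() -> bytes:
    # Verifier picks a fresh random nonce for every attempt.
    return secrets.token_bytes(16)

def prove(secret: bytes, challenge: bytes) -> bytes:
    # Prover answers with HMAC(secret, challenge) instead of revealing the secret.
    return hmac.new(secret, challenge, hashlib.sha256).digest()

def verify(secret: bytes, challenge: bytes, response: bytes) -> bool:
    expected = hmac.new(secret, challenge, hashlib.sha256).digest()
    # Constant-time comparison avoids leaking how many bytes matched.
    return hmac.compare_digest(expected, response)

shared_secret = secrets.token_bytes(32)
challenge = make_challenge()
assert verify(shared_secret, challenge, prove(shared_secret, challenge))
```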
Most cryptosystems rely on access to true random data. For public key schemes like RSA, you need two random primes to generate your keys. Let's look at how hard it can be to generate real random numbers.
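To make "real random" concrete, here is a short Python sketch (illustrative, not from the post) contrasting a clock-seeded PRNG with the operating system's entropy source:

```python
import random
import secrets
import time

# Weak: seeding a non-cryptographic PRNG with the clock. Anyone who can
# guess the approximate boot time can reproduce this "random" value.
weak = random.Random(int(time.time()))
predictable = weak.getrandbits(256)

# Better: draw from the OS entropy pool via a CSPRNG interface.
unpredictable = secrets.randbits(256)

print(hex(predictable))
print(hex(unpredictable))
```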
Firmware is made up of many layers. These are obvious: a bootloader, an RTOS, your application(s), etc. At startup you want to be able to guarantee the integrity of all that code.
We've discussed why you don't fix IVs for AES-CBC. We've touched on the limitations of only using symmetric keys in your application. We've even covered the challenges of protecting symmetric keys. But one sin we have not discussed is fixing your keys.
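The fixed-IV point is easy to demonstrate. A small Python sketch using the third-party cryptography package (the key, messages, and layout are made up for illustration):

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)
fixed_iv = bytes(16)  # the mistake: the same IV reused for every message

def encrypt_cbc(iv: bytes, plaintext: bytes) -> bytes:
    encryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    return encryptor.update(plaintext) + encryptor.finalize()

msg1 = b"SET TEMP=21 DEG " + b"ROOM=KITCHEN    "
msg2 = b"SET TEMP=21 DEG " + b"ROOM=GARAGE     "

ct1 = encrypt_cbc(fixed_iv, msg1)
ct2 = encrypt_cbc(fixed_iv, msg2)

# With a fixed IV, identical plaintext prefixes yield identical ciphertext
# prefixes, leaking message structure. A fresh random IV per message fixes it.
print(ct1[:16] == ct2[:16])  # True
```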
Public Key Cryptography simplifies authentication. We can use a public key to authenticate firmware updates signed with the private key. Everything seems pretty clear at this point. But we need to keep our keys secure! How can we approach that...
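As a rough sketch of what device-side verification can look like, in Python with the third-party cryptography package (Ed25519 and the function name are assumptions for illustration, not a prescription from the post):

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def firmware_is_authentic(public_key_bytes: bytes,
                          image: bytes,
                          signature: bytes) -> bool:
    # Only the public key ships on the device; the private key stays with
    # whoever builds and signs release images.
    public_key = Ed25519PublicKey.from_public_bytes(public_key_bytes)
    try:
        public_key.verify(signature, image)
        return True
    except InvalidSignature:
        return False
```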
It seems that Pokemon Go has taken the world by storm. Let's zero in on the issue of application capabilities, or permissions in Android parlance. And let's talk about asking for too much from the user.
Now that we know how the firmware is loaded, it's time to look at what the firmware looks like. For this attack to work, we need to be able to load our own code. Ideally, the device will continue to function as it was intended. How hard will this be?
We now know that a naive, hash-based approach has trivial weaknesses. HMAC on its own prevents image modification. But it's likely easy to steal the key for either scheme. If all devices use the same key, forging a compromised firmware image is easy. So what are our options?
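For reference, the HMAC variant is only a few lines in Python (names are illustrative); the catch is that everything rests on keeping device_key secret, and a single shared key extracted from one unit lets an attacker tag any image for the whole fleet:

```python
import hashlib
import hmac

def tag_image(device_key: bytes, image: bytes) -> bytes:
    # The build system appends HMAC(key, image); the device recomputes it.
    return hmac.new(device_key, image, hashlib.sha256).digest()

def image_is_valid(device_key: bytes, image: bytes, tag: bytes) -> bool:
    expected = tag_image(device_key, image)
    # Constant-time compare so timing doesn't leak the expected tag.
    return hmac.compare_digest(expected, tag)
```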
Nothing will leave your product more vulnerable than a badly designed firmware update process.
A large number of attacks on IoT devices rely on being able to write to memory that code can execute from. Dump your shellcode into a buffer. Overwrite the return pointer on the stack. Presto, you're running unauthorized code!
If there's one thing that is often screwed up, in all systems, it's cryptography.
Embedded systems security is a balancing act. On one hand, you need a comprehensive threat model; chances are, your device is in a malfeasant actor's hands. On the other, you have limited resources with which to defend against the wide range of attacks this opens up.