While the About Me page mentions that this blog does not have a posting schedule, I must say, I have been on a writing hiatus. Writer’s block, you could say. Since the start of summer last year I’ve wanted to add a decryption option to my LUKS boot drive that uses a USB key, in such a way that it would provide two-factor authentication using a keyfile.
Although there are several guides online covering an implementation with just a keyfile, building a method for two-factor authentication proved time-consuming and, as of yet, fruitless. Disappointed by this result, I decided it was time to stop trying to roll my own and throw some money at the problem, so I got myself two Yubikeys. However, these devices offer much more functionality than just a second factor for LUKS disk unlocking. Then again, is using this functionality not giving in to paranoia?
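For reference, the single-factor approach those guides describe boils down to a few cryptsetup commands. This is only a sketch: the device path `/dev/sda2` and the USB mount point `/mnt/usb` are assumptions, not my actual setup.

```shell
# Generate a random keyfile on the USB stick (assumed mounted at /mnt/usb):
dd if=/dev/urandom of=/mnt/usb/luks.key bs=512 count=4
chmod 0400 /mnt/usb/luks.key

# Enroll the keyfile in a free LUKS key slot on the encrypted volume
# (prompts for an existing passphrase first):
cryptsetup luksAddKey /dev/sda2 /mnt/usb/luks.key

# Unlock the volume with the keyfile instead of a passphrase:
cryptsetup open /dev/sda2 cryptroot --key-file /mnt/usb/luks.key
```

Note that this makes the USB stick a single factor, not a second one: anyone holding the stick can unlock the drive, which is exactly the limitation I was trying to engineer around.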
Who can you trust?
There are many great articles out there on trust in the software world. As I am only an enthusiast, and a learning one at that, I won’t claim that this one will be any good, but I do want to add my two cents. In the physical world, social bonds are built on trust. Not only that, trust must always be a two-way street there. Friendships and other meaningful relationships cannot be formed without both sides making themselves vulnerable. For the more you invest in a relationship, the more it will hurt you when it is destroyed. That sounds dramatic, but if you are lucky enough to have a very close friend, and I hope that all of you are, then imagine what they could do to hurt you today. How they could use the personal things they know about you to influence your life. How angry that would make you at them.
The same trust must be given to the software we use every day. After all, we leave a huge trail of personal data every time we read a blog on the internet or send an e-mail to a colleague. You trust that the software, and by extension the creators and maintainers of the software and the hardware it runs on, will not use that trail of data against you. Most worryingly of all: trust only goes one way when software is involved. The creators of the software do not have to trust you not to abuse their personal data.
If this sounds worrisome: it gets worse. The internet is full of Eves and Mallorys, preying on the valuable data of us simple Alices and Bobs. These attackers employ sophisticated means, and theft of your data can lead to phishing attacks, spam sent in your name, identity theft or even blackmail. You do not know of their existence and can never see them coming. By extension, you cannot trust any software that you use – even if you have written it yourself, compiled it yourself and are checking every single byte in memory during runtime – unless you also trust the designers of the chips that run your software. With Intel’s Management Engine, Spectre, Meltdown and related vulnerabilities and attack vectors, you clearly cannot do so.
Can you trust yourself?
Humans make mistakes all the time. My mistake is staying awake until 1 AM to write this blog post – inspiration strikes at the most inconvenient of times – but more serious mistakes are commonplace. Even experts in the cryptography and software development fields make mistakes; otherwise Heartbleed, for example, would never have been possible. Can you trust yourself to do any better, even at your very peak performance? No, that is not even up for consideration.
Who should you trust?
Security is all about trust. A president trusts his secret service agents, a country trusts its army for protection and most users place trust in their e-mail providers, devices and software manufacturers. The key to security must then be choosing who and what to place trust in, and how much. There are various levels of trust possible, applying to both hardware and software, ranging from blind trust to full paranoia, with the Stallman level just one step short of full paranoia.
Many people agree that going ‘full Stallman’ on the software that you use is impractical. I quite agree, and for the paranoia level even more so. At some point you have to start trusting, because you cannot verify everything, let alone create everything, you use. Even if you check the signature of every binary you ever run, do you trust the hardware and algorithms that created these signatures?
If a piece of software is open source and widely used, that is generally a good argument that it is safe to use. Or, at least, the chances of it being maliciously compromised are much smaller, because the chances of detection are much higher. Security through obscurity has never been a good practice, something that has been argued for centuries and is captured in the all-important Kerckhoffs’s principle, paraphrased:
[The design of a system] must not be required to be secret, and it must be able to fall into the hands of the enemy without inconvenience. [Only the key must be required to be kept confidential.]
In the end, then, we must rely on the expertise and skills of the people who create our software and hardware to keep the design of our systems open, and to protect them by evaluating possible security issues on a regular basis. In general this works quite well. Even though most hardware is completely closed source, and it doesn’t look like this will change in the near future, security flaws are found and patched eventually.
Who do you trust with your keys?
Following from this trust, we must then only protect our secret keys in such a way that only we can access them and no one else can, while we trust that the system the keys are used for is secure. This trust may be strengthened by a well-argued choice of system, but no system can ever be guaranteed to be flawless, not even the ‘military grade encryption’ so often cited in marketing.
So, where to store your keys? Well, you could of course write them down on a piece of paper, but doing so for a 4096-bit PGP key is going to be tedious and prone to error. Digital storage is really the only practical option. If you store them in a file on your computer, they might get stolen by malware, so that option is out. Storage on a computer without a network connection is not very practical either, and if you store them on a network-connected computer protected by a passphrase, even a strong one, they can still be stolen and cracked at some point in the future.
The advantage of writing a secret key on a piece of paper is that the piece of paper is a physical thing: it has to be physically stolen in order for the key to be compromised, which makes an attack impractical in most cases. The advantage of digital key storage is that it is easily accessible and may be protected by a password or some other encryption method. Fortunately, there is a solution that offers both advantages: a USB security key.
USB Security Keys
A USB security key is a physical object that you can carry with you at all times, yet it provides a convenient digital interface to use your keys. If designed properly – so something more than just an Arduino Nano that prints your keys to the serial connection every time you press the reset button, as I have heard proposed at one point – the secret keys should be very hard to extract from the device. All key operations are performed on the device itself, and the secret keys never leave it. Once added, it should be impossible even for the owner of the USB security key to extract their private keys.
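To make this concrete, here is roughly what the workflow looks like with GnuPG and a smartcard-capable security key. The key id `alice@example.org` is a placeholder, and the exact prompts vary per device, so take this as an illustration of the principle rather than a recipe.

```shell
# Open the key for editing; 'keytocard' moves the secret material onto
# the security key, leaving only a stub on disk:
gpg --edit-key alice@example.org
#   gpg> keytocard   # move the selected key to the card
#   gpg> save

# From now on, signing requires the physical device to be plugged in;
# the secret key itself never leaves the card:
gpg --armor --sign message.txt
```

This is the property the paragraph above describes: even you, the owner, can no longer read the private key back out, only ask the device to use it.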
These requirements are fulfilled by various USB security keys on the market, including the devices made by Yubico, the Nitrokey and the Google Titan. These three are only examples, but they do appear to be the most popular. Even if the design of a particular key you like is not open source, extracting a private key through a vulnerability – whether maliciously intended or an exploitable bug – would still require the physical device, adding a level of security that simply not having a security key cannot offer.
The place of the Yubikey
Recently I acquired a duo of Yubikey 4s for a relatively low price. There are disadvantages to the approach Yubico takes in creating these keys. They do not allow upgrading the firmware after manufacturing, for example, so possible bugs cannot be fixed. There is something to be said for this, as some exploits are indeed delivered via firmware updates, but having open source hardware and software like the Nitrokey is still the more secure option.
There is a mitigation, however. As mentioned, full trust in the manufacturer to keep your keys secure is not required: the keys can only be extracted from the physical device, and obtaining that is quite hard. The Yubikey therefore does have a place in the security space, though a device with an open source hardware and software design remains the more secure choice. But when you’re on a budget, or are just looking to try out the technology without falling deeper into the black hole that is paranoia in digital security, getting a Yubikey is a justifiable choice.
After-note: This post does not go into the details of what types of keys can be stored on USB security keys, as this differs from key to key. If you do decide to get a key, I highly recommend getting one with at least WebAuthn support for 2FA as well as support for GPG keys (which may be used for e-mail encryption and git commit signing, for example).