Encrypt Everything! (or maybe not)

This thread on misc@openbsd has got me miffed again.

Every time someone discovers how lax we have been about important things in our technology, someone gets excited and says, "Everyone must do <mitigation-technique-A />! Right away!"

And a lot of people do <mitigation-technique-A /> without understanding what they are doing.

And screw it up. So the world is not safer.

In fact, the world is less safe, because people think they are protected, and behave as if they were protected, when they are not.

Before we discuss information systems security, we need to start with valid assumptions.

When I was a kid of fifteen or so, my dad had some old books on systems design in his library. He was a chemical engineer sidetracked into linguistics, and a bit of a geek. :-)

I tried to read those books, and, of course, they were above my head. But I remember one thing that was basically assumed in those books, and was stated explicitly in a few places.

It is a fundamental property of all systems that you cannot make them unassailable.

You cannot make an invulnerable fortress, and you cannot make a system without vulnerabilities. Safety and security are a matter of cooperation, or they are a matter of illusion.

Usually both, unfortunately.

There were mathematical proofs in some of the books, somewhat corollary to the discussion of NP-completeness. Fundamental assumptions in those proofs include certain axioms that are empirically established. We have not been able to produce contradictions in several thousand years of trying. People keep trying, as we should, but the axioms should not be dropped casually.

(If you accept the existence of an immortal God, then the immortality breaks the most telling assumptions here. God can work NP-complete problems in deterministic order -- not deterministic time, obviously, that's where God breaks the assumptions, but deterministic order.)

You can improve the safety and security of systems within a certain practical context, sure. But you can't even make them perfectly secure within any real context of practice.

As long as you are mortal and less than God, the best you can do always relies on cooperation at some level.

(Relying on illusion is the worst you can do, of course, but people who dream of power seem to always go for illusion over real cooperation, and I think there is a reason for that. False reason, but reason.)

Setting aside the blather, it is known that encryption is a race. We can invent new ways to encrypt data, but as soon as enough people start using them that we catch the interest of the mathematicians, the mathematicians start finding ways to defeat them.

And the more data we give them to work with, the easier their job is.

So, why does the NSA want us to go wall-to-wall https?

Well?

Do you have a good answer to that?

No?

I didn't think so.

That said, yes, Microsoft is culpable for including software that didn't even try to use best practices of the time when they put Internet Explorer and Outlook into their OS products twenty years ago.

Monopolistic practices? Sure. We should have split the company up. Claiming that the judge's loose lips were some sort of excuse to cancel the plans for splitting Microsoft up, well, that was just politics as usual.

We should have brought charges of criminal negligence against them.

We should still bring charges of criminal negligence against them, because, instead of fixing the old OSes, they quit maintaining them.

The fact that we cannot yet reliably encrypt electronic messages (e-mail, http, etc.) and expect non-technically inclined recipients of our messages to be able to decrypt them is, well, a stain on the industry.

If I say just how bad it is, you'll think I'm just exaggerating, so I won't.

But wall-to-wall encryption can be worse than none at all, if it is all one encryption system. You are basically giving the mathematicians at the NSAs of the world all the data they could ask for to attack the encryption system with.

(Mind you, they have the source code. Hiding source code was never any use anyway.)

Ideally, we should encrypt only what needs to be encrypted, and leave the rest plaintext.

(It would help if plaintext weren't so vulnerable in and of itself, but that's a really big problem.)

Knowing what to encrypt requires understanding some fundamental things about security, like what is really valuable. Unfortunately, that's not a problem with a generalizable solution. (The blatherings above were not entirely specious.)

Another problem is that reliable encryption requires knowing your conversant: Who are you talking to?

How do you exchange encryption tokens without compromising your tokens, if you don't know who you are talking with?

(Yes, it's also a problem in public key protocols. Remember that question you regularly see about whether you really trust the server on the other end -- enough to store its public key? Strange that you are never asked where the key should be stored, or how it should be rated.)

Right now we have to rely on global certificate authorities to establish the identity of individuals. Why can't we see the inherent contradiction in that?

Do you really believe that some global institution should know who you are, and who all your friends are?

Or we have to rely on key-signing parties where we trade keys with people we've just met. That's not really a solution, either.

And we don't have a good way to bring the results of these non-solutions together in a meaningful way.

We don't really have a way, currently, to separate
  • the kind of trust we give to our OS provider
  • from the kind of trust we give to our bank  
  • from the kind of trust we give our network service providers 
  • from the kind of trust we shouldn't give the stores we visit
  • from the kind of trust we give our teachers
  • from the kind of trust we give our preachers
  • from the kind of trust we give our co-workers
  • from the kind of trust we give our friends 
  • from the kind of trust we give our siblings 
  • from the kind of trust we give our parents 
  • from the kind of trust we give our spouses
  • from the kind of trust we give our God(s).

Well, the last one doesn't really have anything to do with information technology, I guess -- at least not human information technology -- unless we indulge in thinking of the box on our desk as that miracle magic box of ancient lore. You know, the one that everyone now seems to think the people of Israel thought the Ark of the Covenant was (and maybe many of them did).

If we go wall-to-wall encryption, we currently have to assume that we know all of our conversants. And that really does require relying on Comodo, Symantec, GoDaddy, and their ilk to tell us that we really are talking with family members or friends, etc.

And most of us don't know how to split that up from the on-line shopping, the bank business, and the bringing work home.

Again, I'm going to refrain from connecting dots here to avoid people thinking I'm just a crackpot.

Now there is an apparent conundrum here. I raised this while talking about why you shouldn't put your bank PIN on a postcard.

If everything is plaintext except the most sensitive information, encryption raises a flag. If I send my bank PIN in an e-mail message, and the only thing encrypted is the PIN and the account number:
  Hey, Margot, here's my account number and PIN: Rtt!@!F&~!E14ABCDE>FGHIJ>AJIHGFEDCB1aZ_1HHDE&[
  It's encrypted with brandX and you remember my key from last year, right?
If the encryption is done correctly, and the key is not obvious, it will be a hard problem. (The above is a counter-example. Decrypt it in your head for fun if you are so inclined.)
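
For the curious, here is a minimal sketch of the kind of toy rotation cipher I mean. It is not the toy tool actually used for the examples in this post (that will have to wait for another post); the printable-ASCII alphabet and the offset are assumptions made purely for illustration, and nothing like this should ever protect a real secret.

  # A toy rotation cipher over printable ASCII -- an assumption for
  # illustration only, not the tool used for the examples in this post.
  PRINTABLE_LO, PRINTABLE_HI = 0x20, 0x7e   # ' ' through '~'
  SPAN = PRINTABLE_HI - PRINTABLE_LO + 1    # 95 printable characters

  def rotate(text, offset):
      """Shift every printable character by `offset`, wrapping around."""
      out = []
      for ch in text:
          code = ord(ch)
          if PRINTABLE_LO <= code <= PRINTABLE_HI:
              code = PRINTABLE_LO + (code - PRINTABLE_LO + offset) % SPAN
          out.append(chr(code))
      return "".join(out)

  def encrypt(text, key):
      return rotate(text, key)

  def decrypt(text, key):
      return rotate(text, -key)

  print(encrypt("my account number and PIN", 11))
  print(decrypt(encrypt("my account number and PIN", 11), 11))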

But, ...

If you use the same key for both, and especially if you include labels like "Account #" and "PIN", or even a space between the account number and the PIN, then an attacker who feeds the message to a program that tries billions of keys until it gets something that looks like an account number and a PIN has an idea when to stop.

And then it's just a matter of time. 
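
To make that concrete, here is a sketch of the attacker's side, with the toy rotation redefined so the snippet stands alone. With only 95 possible offsets the "billions of keys" collapses to a short loop, but the point is the stopping rule: the labels tell the program when it has found the right key. What "looks like" an account number and a PIN is, again, just an assumption for illustration.

  import re

  # Same assumed 95-character printable-ASCII rotation as the sketch above.
  LO, HI = 0x20, 0x7e
  SPAN = HI - LO + 1

  def rotate(text, offset):
      return "".join(
          chr(LO + (ord(c) - LO + offset) % SPAN) if LO <= ord(c) <= HI else c
          for c in text
      )

  # The stopping rule: anything that looks like a labeled account number
  # followed by a four-digit PIN.
  LOOKS_RIGHT = re.compile(r"Account #\s*[\d-]{8,}\s+PIN[:\s]*\d{4}")

  def crack(ciphertext):
      """Try every key; stop as soon as a trial decryption matches the pattern."""
      for key in range(SPAN):
          candidate = rotate(ciphertext, -key)
          if LOOKS_RIGHT.search(candidate):
              return key, candidate
      return None

  stolen = rotate("Account # 01234-56789 PIN 4321", 42)   # what the attacker captured
  print(crack(stolen))                                    # recovers the key and the plaintext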

It may seem counter-intuitive, but it would be better to leave the labels un-encrypted:
  Hey, Margot, my account number is ;<=>?8@ABCD8;DCBA@?>=< and my PIN is HHDE.
If the attacker didn't have clues from the first, he might guess that I just used a basic rotation cipher, play with the account number until he got something that looked like an account number, and then use that offset to decode the PIN with some confidence. The three-strikes-and-the-bank-locks-you-out rule helps you here, just a little. (Only a little -- the attacker doesn't really care whether the bank locks you out in most cases. He just starts on his next stolen number.)

If Margot knew in advance that you would use one key or method for the account number and another for the PIN, the attacker could have a lot of confidence about the number, but little about the PIN. (Unless you use something obvious for the PIN, like I did above, or like your birthdate or something.)

Now, if the whole thing were encrypted, the casual reader wouldn't have any way to know there was even a PIN in there, unless she were decrypting everything. (Reference the NSA's efforts to decrypt everything.)

This is the best argument for wall-to-wall encryption.

But it does assume that you wouldn't mind any cranks in any particular government or other large, well-funded organization decrypting your mail.

(If the US, or Russia, or Britain is able to do it, it's only a matter of time before they are all doing it, and pretty soon at least some of the criminal organizations and terrorist organizations are, too. Don't forget the principle of the arms race.)

When you send a letter, it is usually in an envelope. When you send a postcard, it usually is not. (I mentioned that rant above; I'll mention it again.)

If we had a default encryption that were reasonably hard for even ISPs to decrypt, that would be similar to a physical envelope. Not quite the same.

Steaming an envelope without leaving tracks takes some level of skill and patience. But no decryption method normally used would leave wrinkles or tears in the encrypted data packet, even when used carelessly.

But now we need two different hard methods of encryption, or, at least, two different keys -- one for the standard encryption that would be effectively putting a real envelope on the data, and one for encrypting bank information and other really sensitive data within the encrypted message.
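
Here is a sketch of what that layering might look like, assuming Python's third-party cryptography package, with its Fernet symmetric keys standing in for whatever key-exchange machinery we would really use. The names and the message format are made up for illustration.

  from cryptography.fernet import Fernet

  inner_key = Fernet.generate_key()   # shared only with someone entitled to the PIN
  outer_key = Fernet.generate_key()   # the routine "envelope" key shared with the recipient

  inner, outer = Fernet(inner_key), Fernet(outer_key)

  # Encrypt only the truly sensitive data with the inner key...
  sensitive = inner.encrypt(b"Account # 01234-56789 PIN 4321")

  # ...then wrap the whole message, sensitive blob and all, in the envelope.
  message = b"Hey, Margot, here is the account info: " + sensitive
  envelope = outer.encrypt(message)

  # The recipient opens the envelope with the outer key.  Only someone who
  # also holds the inner key can recover the account number and PIN.
  opened = outer.decrypt(envelope)
  print(opened)
  print(inner.decrypt(opened.split(b": ", 1)[1]))

Anyone in the middle sees only the envelope; a recipient who never had the inner key can read the chatter but not the bank details. That is the separation the two keys buy us.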

(It used to be hard enough to look at data on the wire that many people thought the electronic encoding was equivalent to one level of encryption. But any curious teenager with a slightly mathematical bent can read that. Having a computer that can receive a message is equivalent to having the tools to read ASCII- or Unicode-encoded messages. And it is part of the job of an ISP network administrator to be able to use those tools, so, even if your teenage son might not read your un-encrypted stuff, the BOFH sysad at your ISP might.)

Until we have standardizable methods for encrypting data within messages, it really is not in our best interest to encrypt whole messages.

I think I'll make an exception to my usual unwillingness to properly wrap up my posts.

Instead of putting all our energies into encrypting everything, let's get the rest of the fundamentals in order:
  • Let's focus first on reforming the identity systems, so we can have a chance to be sure we know whom we are talking with. 
  • Let's develop better messaging standards, that allow embedding encrypted data into our messages in a way that the recipient can know how to decrypt the embedded encryption, and that puts barriers up for non-recipients.
  • Let's figure out how to pass keys to our intended recipients reliably. (This requires teaching people to use out-of-band channels, such as "sneakernet". The electronic one-time-pads that some banks offer are a marginal step in the right direction, but currently have their own problems (hardcoding? sent in the mail?). I guess those rate a rant of their own, but not now.)
  • Let's develop specialized browsers, so we aren't using the same browser to talk to the bank that we use to watch videos on youtube or wherever.
  • Let's get the graphics hardware to support separating the graphics memory in the hardware, so that scraping screen buffers can't find other users' data. (Either get Intel out of the way or get them to behave, preferably both. We need real MMUs.)
  • Let's develop chroot-ed "sub-logins" (May I call them that?), so that when a rogue shopping site breaks the browser I'm browsing it with, it can't touch my encryption tokens database without serious follow-on escalations, and so that we can erase the whole browsing session when I think we need to. 
  • And then, when we have the rest of this going, the "envelope" encryption which is far more appropriate than the current panicked pretenses to wall-to-wall becomes meaningful. (I've pretty much described it above. Go back and re-read if you haven't understood what an enveloped encryption would look like.)
(And this rant has consumed one weekend. I'll have to post the code for the toy encryption tool I used above sometime.)

