In the battle over whether the government should require backdoors in cryptography products, the primary objection from those who actually know what they're talking about is that we're not smart enough to build in backdoors safely. That point is often met with skepticism or downright dismissal from the nannies and their useful idiots who think we'd be safe from terrorists if only it weren't for that pesky encryption. Sure, sure, we need it for banking, buying stuff from Amazon, and the thousands of other e-commerce things we've come to depend on, but why can't we just have a backdoor to protect us from the bad guys?
In a truly excellent post, Steven Bellovin, a cryptographer of some note, provides a compelling example of just how hard it is to get encryption right. The post is probably a bit too technical for Aunt Millie (although definitely not for Irreal readers), but the summary is understandable by anyone.
I won't give away the details, but the TL;DR is that a seemingly simple protocol that almost anyone would have convinced themselves was secure (and that was even proved secure mathematically) had a fatal flaw that went undetected for 17 years. This wasn't some homegrown crypto-thingy that someone whipped up in their basement. It was an actual peer-reviewed protocol that was vetted by the cryptographic community.
The lesson is clear. Even a very simple, transparent protocol that seemed obviously secure hid a fatal flaw for the better part of two decades. How, then, can we expect the hideously complex protocols we're using today to be understood well enough that they can be safely weakened?
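To make that concrete without giving away Bellovin's example, here's a minimal sketch of the general class of failure: a three-message handshake that looks obviously secure but can be quietly man-in-the-middled. This is a hypothetical toy protocol, not the one from Bellovin's post; the names and the "envelope" model of public-key encryption are illustrative assumptions.

```python
# Toy model: an encrypted message is an "envelope" that only the named
# recipient can open. No real crypto here -- just enough structure to
# trace a three-message mutual-authentication handshake:
#
#   1. A -> B: {A, Na}_pkB     "I'm Alice, here's my nonce"
#   2. B -> A: {Na, Nb}_pkA    "Your nonce back, plus mine"
#   3. A -> B: {Nb}_pkB        "And your nonce back"
#
# After step 3, B reasons: "only A could have read Nb, so I'm talking
# to A." The attack below shows why that reasoning fails.
from dataclasses import dataclass

@dataclass(frozen=True)
class Envelope:
    recipient: str   # only this party can open the envelope
    payload: tuple

def encrypt(recipient, *payload):
    return Envelope(recipient, payload)

def decrypt(me, env):
    assert env.recipient == me, f"{me} can't open {env.recipient}'s mail"
    return env.payload

def attack():
    Na, Nb = "nonce-A", "nonce-B"

    # Alice *willingly* starts a session with Mallory (step 1).
    m1 = encrypt("Mallory", "Alice", Na)

    # Mallory opens it and replays Alice's opening move to Bob, as Alice.
    claimed_id, na = decrypt("Mallory", m1)
    bob_sees = encrypt("Bob", claimed_id, na)

    # Bob believes Alice called and answers with both nonces (step 2).
    _, na_echo = decrypt("Bob", bob_sees)
    m2 = encrypt("Alice", na_echo, Nb)

    # Mallory can't open m2 (it's addressed to Alice), so she just
    # relays it. Alice sees Na come back and is satisfied: nothing in
    # the message says *who* answered.
    na_back, nb = decrypt("Alice", m2)
    assert na_back == Na

    # Alice completes *her* handshake, with Mallory (step 3)...
    m3 = encrypt("Mallory", nb)

    # ...handing Mallory exactly the proof Bob is waiting for.
    (nb_stolen,) = decrypt("Mallory", m3)
    (nb_check,) = decrypt("Bob", encrypt("Bob", nb_stolen))
    assert nb_check == Nb
    print("Bob authenticated 'Alice' -- but Mallory ran the session.")

attack()
```

The classic repair for this class of bug is to bind the responder's identity into the second message, so the initiator can tell who actually answered. The point is that nothing about the broken version looks wrong until someone shows you the attack.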
Meanwhile, Matthew Green uses the recent Juniper exploit to explain what happens when you introduce backdoors. Even though this was (presumably) not the work of the NSA, the attacker neatly repurposed the NSA’s infrastructure for their own backdoor. Expect more of the same if the FBI gets its way.
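For the curious, the mechanism Green describes is easy to model. Below is a toy analogue of a Dual_EC-style backdoored generator, done with modular arithmetic over a small prime instead of elliptic-curve points; the prime, the bases, and the secret exponent are all made up for illustration, and the real construction additionally truncates its output bits.

```python
# Toy analogue of a Dual_EC-style backdoored PRNG using modular
# arithmetic instead of elliptic-curve points. All parameters are
# made up; this is a sketch of the idea, not the real generator.

P = 2_147_483_647            # toy prime modulus (2**31 - 1); far too small
H = 16807                    # public base used to produce output

# The designer picks a secret d and publishes G = H**d mod P. To users,
# (H, G) are just two innocent-looking constants; d is the skeleton key
# relating them.
SECRET_D = 123_456_789
G = pow(H, SECRET_D, P)

class ToyDRBG:
    def __init__(self, seed):
        self.state = seed % P

    def next_output(self):
        out = pow(H, self.state, P)         # r  = H**s  (what callers see)
        self.state = pow(G, self.state, P)  # s' = G**s  (hidden next state)
        return out

def backdoor_predict(observed, d):
    """Knowing d, one observed output yields the NEXT state, because
       G**s = H**(d*s) = (H**s)**d = r**d (mod P)."""
    next_state = pow(observed, d, P)
    return pow(H, next_state, P)            # ...and hence the next output

rng = ToyDRBG(seed=0xDEADBEEF)
r0 = rng.next_output()                      # passively observed by attacker
print(backdoor_predict(r0, SECRET_D) == rng.next_output())   # True
```

Whoever knows d can turn a single observed output into the generator's next state, and therefore into every output that follows. Swap in a constant G derived from your own d, which on Green's telling is essentially what the Juniper attackers did with their own Q point, and the backdoor now answers to you.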