This is in praise of Peter Gutmann's book draft Engineering Security, and the title is inspired by his talk Everything You Never Wanted to Know about PKI but were Forced to Find Out.
Chances are high that any non-geek reader is already intimidated by the acronym PKI – when I shared the links above on LinkedIn, I was asked: Oh. Wait. What the %&$%^ is PKI??
This reaction is spot-on, as this post is more about usability and end users' perception of technology – despite, or rather because of, my having worked for more than 10 years at the geeky end of Public Key Infrastructure. In summary, PKI is a bunch (actually a ton) of standards that should allow for creating the electronic counterparts of signatures, passports, and data transferred in locked cabinets. Basically, it should solve all security issues.
The following images from Peter Gutmann's book might evoke some memories.
Security warnings designed by geeks look like this:

As a normal user, you might rather see this:

The funny thing was that I picked this book to take a break from books on psychology and return to the geeky stuff – and then I found myself back among all kinds of psychological biases and, for example, Kahneman's Prospect Theory.
What I appreciate in particular is the diverse range of systems and technologies considered – Apple, Android, UNIX, Microsoft, … – all evaluated agnostically, plus the diverse range of interdisciplinary research drawn upon. Now that's what I call true erudition with a modern touch. Above all, I enjoyed the conversational and irreverent tone – I have never before started reading a book for technical reasons and then been unable to put it down because it was so entertaining.
My personal summary – which resonates a lot with my experience – is:
In trying to make systems more secure, you might not only make them more unusable and obnoxious but also less secure.
A concise summary is also given in Gutmann's talk Things that Make Us Stupid. I particularly liked the ignition key as a real-world example of a device that is smart and easy to use, and that provides security as a by-product – very different from the interfaces of 'security software'.
Peter Gutmann is not at all siding with 'experts' who always chide end users for being lazy and dumb – for writing passwords down and sticking the post-its on their screens – and who state that all we need is more training and user awareness. Normal users use systems to get their job done, and they apply risk management in an intuitive way: should I waste time following an obnoxious policy, or should I clear that hurdle as quickly as possible and do what I am actually paid for?
Geeks are weird – that's a quote from the lecture slides linked above. Since Peter Gutmann is an academic computer scientist and obviously a down-to-earth practitioner with ample hands-on experience – which would definitely qualify him as a Geek God – his critique is all the more convincing. In the book he quotes psychological research which shows that geeks really do think differently (as per standardized testing of personality types). Geeks constitute a minority of people (7%) who tend to take decisions – such as Should I click that pop-up? – in the 'rational' manner that the simple and mostly wrong theories of decision making have proposed. One example Gutmann uses is testing for a basic understanding of logic, such as: Does 'All X are Y' imply 'Some X are Y'? Across cultures, the majority of people think this is wrong.
Normal people – and I think also geeks when they don't operate in geek mode, e.g. out in the wild rather than in their programmer's cave – fall for many so-called fallacies and biases.
Our intuitive decision-making engine runs on autopilot, and we are conditioned to click away EULAs, next-next-finish the dreaded install wizards, and dismiss pop-ups, including the warnings. As users we don't generate testable hypotheses or calculate risks but act unconsciously, based on our experience of what has worked in the past – and usually the click-away-anything approach works just fine. You would need US-Navy-style constant drilling to be alert enough not to fall for those fallacies. And this applies exactly to anonymous end users doing online banking on their home PCs.
Security indicators like padlocks and browser address bar colors change with every version of the popular browsers. Not even tech-savvy users can tell from those indicators whether they are 'secure' now. But here is what is extremely difficult: users would need to watch out for the lack of an indicator – one that is barely visible even when it is there. And we are – owing to confirmation bias – extremely bad at spotting the negative, the lack of something. Gutmann calls this the Simon Says problem.
It is intriguing to see how biases about what 'the others' – the users or the attackers – would do enter technical designs. For example, it is often assumed that a client machine or user that has authenticated itself is more trustworthy – and servers are more vulnerable to a malformed packet sent after successful authentication. In the Stuxnet attack, digitally signed malware (signed with stolen keys) was used – 'if it's signed it has to be secure'.
To make things worse, users are even conditioned for 'insecure' behavior: when banks use all kinds of fancy domain names to market their latest products, lure their users into clicking on links to those fancy sites in e-mails, and have them log on with their banking accounts via these sites, they train users to fall for phishing e-mails – despite the fact that the same e-mails half-heartedly warn against clicking arbitrary links in e-mails.
I believe that in relation to systems like PKI – which require you to run some intricate procedure only every few years (these are called ceremonies for a reason) – admins should also be considered 'users'.
I have spent many hours discussing proposed security features like Passwords need to be impossible to remember and never written down with people whose job it is to audit, draft policies, and read articles all day about what Gutmann calls conference-paper attacks. These are not the people who have to run systems, deal with helpdesk calls or costs, or handle requests from VIP users such as top-level managers who, on the one hand, are extremely paranoid about system administrators sniffing their e-mails, yet on the other hand need instant 24/7 support with recovery of encrypted e-mails. (This should be given a name, like the Top Managers' Paranoia Paradox.)
As a disclaimer I’d like to add that I don’t underestimate cyber security threats, risk management, policies etc. It is probably the current media hype on governments spying on us that makes me advocate a contrarian view.
I could back this up with tons of stories, many of them too good to be made up (but unfortunately NDA-ed): security geeks in the sense of 'designers' and 'policy authors' often underestimate the time and effort required to run their solutions on a daily basis. It is often the so-called trivial and simple things that go wrong, such as: the documentation of that intricate process to be run every X years cannot be found, the only employee who really knew about the interdependencies is long gone, or allegedly simple logistics fail (Now we are locked in the secret room to run the key ceremony… BTW, did anybody think of having the media ready to install the operating system on that highly secure, isolated machine?).
A large European PKI setup failed (it made headlines) because the sacred key of a root certification authority had been destroyed – which is the expected behavior of so-called Hardware Security Modules when they are tampered with, or at least when their sensors say so – and there was no backup. The companies running the project and running operations blamed each other.
I am not quoting this to make fun of others – I have made enough blunders myself. The typical response to this is: projects or operations have been badly managed, and you just need to throw more people and money at them to run secure systems in a robust and reliable way. This might be true, but it simply does not reflect the budget, time constraints, and lack of human resources that typical corporate IT departments have to deal with.
There is often a very real, palpable risk of trading off business continuity and availability (that is: safety) for security.
Again, I don't want to downplay the risks associated with broken algorithms and the NSA reading our e-mail. But as Peter Gutmann points out, cryptography is the last thing an attacker would target (even if a conference-paper attack had shown it to be broken) – the implementation of cryptography rather guides attackers along the lines of where not to attack. Just consider the spectacular recent 'hack' of a prestigious one-letter Twitter account, which was actually blackmail of the user after the attacker had gained control over the user's custom domain through social engineering – most likely of underpaid call-center agents who face the dilemma of meeting their customer-satisfaction numbers versus following whatever security awareness training they might have had.
Needless to say, encryption, smart cards, PKI etc. would not have prevented that type of attack.
Peter Gutmann says about himself that he is throwing rocks at PKIs, and I believe you can illustrate a particularly big problem using a perfect real-life metaphor: digital certificates are like passports or driver licenses for users – signed by a trusted agency.
Now consider the following: a user might commit a crime, and his driver license is seized. PKI's equivalent of that seizure is to have the issuing agency regularly publish a blacklist of all the bad guys. Police officers on the road need access to that blacklist in order to check drivers' legitimacy. What happens if a user isn't blacklisted but the blacklist publishing service is unavailable? The standard makes this check optional (like many other things – the norm when an ancient standard is retrofitted with security features), but let's assume the police app follows the recommendation of what it SHOULD do. If the list is unavailable, the user is considered an alleged criminal and has to exit the car.
You could also imagine something similar happening to train riders who have printed out an online ticket that cannot be validated (e.g. distinguished from a forgery) by the conductor due to a failure in the train's IT systems.
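In PKI terms this is the choice between hard-fail and soft-fail revocation checking. The following rough sketch (Python, using the pyca/cryptography package) illustrates just that decision – the CRL URL, the timeout, and the hard_fail flag are illustrative assumptions, and a real validator would also verify the CRL's signature and freshness:

# A rough sketch (not production code) of hard-fail vs. soft-fail revocation
# checking. It deliberately ignores CRL signature verification, caching and scope.

import urllib.request
from cryptography import x509

def certificate_is_acceptable(cert: x509.Certificate, crl_url: str,
                              hard_fail: bool = True) -> bool:
    """Return True if the certificate passes the revocation check."""
    try:
        der = urllib.request.urlopen(crl_url, timeout=5).read()
        crl = x509.load_der_x509_crl(der)
    except Exception:
        # The 'blacklist' service is unreachable or returned garbage.
        # hard-fail: the driver is treated as an alleged criminal and rejected.
        # soft-fail: wave them through - business continuity over security.
        return not hard_fail

    revoked = crl.get_revoked_certificate_by_serial_number(cert.serial_number)
    return revoked is None  # listed -> reject, not listed -> accept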
Any 'emergency' or 'incident' related to digital certificates that I was ever called upon to support was about false negatives: users blocked from doing what they need to do because of missing, misconfigured, or (temporarily) unavailable certificate revocation lists (CRLs). The most important question in PKI planning is typically how to work around or prevent inaccessible CRLs. I am aware of how petty this problem may appear to readers – what's the big deal in monitoring a web server? But have you ever noticed how many alerts (e.g. via SMS) a typical administrator gets – and how many of them are false alarms? When I ask what will happen if the PKI / the CRL signing / the web server breaks on Dec. 24 at 11:30 (in a European country), I am typically told to plan for at least a few days until recovery. This means that the revocation information on the blacklist will be stale, too, as CRLs are cached for performance reasons.
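One simple way to catch this in monitoring is to check the CRL's nextUpdate field well before it passes – a minimal sketch, again with pyca/cryptography; the cache path and the two-day warning threshold are made-up examples:

# Sketch of a CRL staleness check: warn well before nextUpdate passes.
# File path and threshold are illustrative only.

import datetime
from cryptography import x509

def crl_time_left(path: str) -> datetime.timedelta:
    """Time remaining until the cached CRL goes stale."""
    with open(path, "rb") as f:
        crl = x509.load_der_x509_crl(f.read())
    return crl.next_update - datetime.datetime.utcnow()

remaining = crl_time_left("/var/cache/pki/issuing-ca.crl")
if remaining < datetime.timedelta(days=2):
    print(f"CRL goes stale in {remaining} - re-sign and publish it now!")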
As you can imagine, most corporations rather tend to follow the reasonable approach of putting business continuity over security, so they want to make sure that a glitch in the web server hosting those blacklists will not stop 10,000 employees from accessing the wireless LAN, for example. Of course any weird standard can be worked around given infinite resources. The point I wanted to make is that these standards were designed with something totally different in mind – by PKI Theologians in the 1980s.
Admittedly though, digital certificates and cryptography make for a great playground for geeks. I think I was a PKI theologian myself many years ago, until I rather morphed into what I tongue-in-cheek call an anti-security consultant – trying to help users (and admins) keep on working despite new security features. I often advocated not using certificates at all, proposing alternative approaches that boiled the potential PKI project down to a few hours of work – against the typical consultant's mantra of making yourself indispensable in long-term projects and of designing black boxes the client will never be able to operate on his own. Not only because of the PKI overhead, but because the alternatives were just as secure – merely not as hyped.
So in summary I am recommending Peter Gutmann’s terrific resources (check out his Crypto Tutorial, too!) to anybody who is torn between geek enthusiasm for some obscure technology and questioning its value nonetheless.
I was looking forward to this post when I saw it announced in my email, and I'm glad I've read it now. I've spent considerable time in the past week trying to block my access to my former employer's data. The company is wildly paranoid that everyone is out to steal their ideas or open a competing business – we sign confidentiality agreements and non-compete clauses with our employment contracts, then become subject to many intrusive and punitive processes that monitor and interject upon our interactions with each other and with clients – and yet no one but me has done anything to remove my access to confidential and important company documents post-employment.
It is interestingly weird to encounter a business owner strongly convinced his staff is out to sabotage and steal from him, but so stingy with his IT budget that he's built his entire company on free Google Docs and Dropbox. Furthermore, the company policy is to require employees to pay for their own accounts and data storage upgrades when business needs exceed the free space, so we each come to "own" a piece of the server space in which this data is stored.
I know this is an unusually wide gap between paranoia and extravagant risk-taking, and probably more bizarre than the observations you share, but it does go to show how irrational people can be when it comes to security and money!
Thanks, Michelle, for this great comment! First, I am happy that this post of mine is interesting to people outside the “security community”. Second, thanks for sharing this great story! “I’ve spent considerable time in the past week trying to block my access to my former employer’s data.” … too good to be made up!!
I once worked with an equally paranoid, security-guidelines-and-policies-aware customer. Yet when we needed to exchange data, we had to use one of those open sharing services (not exactly Dropbox, but probably not "compliant" either). My conclusion – confirmed many times – is: if you lock down your systems in a way that prevents employees from doing their jobs (which might include sharing data with external consultants), then employees have to get creative. It seems in the case you describe it's even the same person who both creates the policies and then gets creative in circumventing them.
So true… and it is the same person creating and circumventing policy. We were subject to company-wide emails dictating a process change one day and then, within the week, subjected to a process moving in opposition to the prior dictate. Sometimes the rules changed before we got the memo. I suspect mental illness was a factor, but regardless of the "why" it does make a good story. I'm delighted with your observations; I will be giggling all day!
I lied to you. I said I would not read this post until I had finished the book but, as is often the case, curiosity got the better of me. I was not disappointed. Though I’m only halfway through the book I know you have captured the spirit of it perfectly. I am particularly grateful to you for both passing along the book reference as well as posting this blog.
Who would have thought that digital security could be made interesting? Wow!
This is all of particular interest to me right now as the eLearning project I am currently engaged with is all about data obfuscation (sometimes called data masking) – a very practical, user-friendly and secure approach to creating non-production data environments, as I am finding out.
Don’t be surprised to see another comment from me once I finish the book.
Thanks, Maurice – glad you like the book!! But it is a thick book – nearly 800 pages, so reading takes a while…
Creating realistic data for tests is important – there is nothing like real data! I have created test data (e.g. in databases) using scripts … but you miss the pesky specifics, e.g. special characters in names, and it is hard to create gigabytes of realistically random data.
There are a host of other issues too: dealing with foreign keys – not breaking DB relationships; creating realistic alternatives (e.g. you want the names to be reflective of the region, account numbers to look more or less right, and such); obfuscating consistently ("Elke Stangl" may appear in several different tables of the same database – client, student, next-of-kin, notes field, etc. – and you want to obfuscate each instance the same way). Last – and most important – you need to make sure you get all of the Personally Identifying Information; that part is hard with complex systems.
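One simple way to get that "obfuscate each instance the same way" property is keyed, deterministic pseudonymization – a minimal illustrative sketch; the secret key and the replacement name lists are made up, and real masking tools do far more (formats, regional distributions, referential integrity across whole schemas):

# Sketch of consistent (deterministic) pseudonymization: the same input
# always maps to the same fake value, so "Elke Stangl" is replaced
# identically in every table. Key and name lists are made-up examples.

import hashlib
import hmac

SECRET_KEY = b"rotate-and-protect-this-key"
FIRST_NAMES = ["Anna", "Jonas", "Maria", "Liam", "Sofia", "Noah"]
LAST_NAMES = ["Huber", "Gruber", "Bauer", "Wagner", "Steiner", "Maier"]

def pseudonymize_name(real_name: str) -> str:
    """Map a real name deterministically to a plausible fake name."""
    digest = hmac.new(SECRET_KEY, real_name.encode("utf-8"),
                      hashlib.sha256).digest()
    first = FIRST_NAMES[digest[0] % len(FIRST_NAMES)]
    last = LAST_NAMES[digest[1] % len(LAST_NAMES)]
    return f"{first} {last}"

# Repeated occurrences yield the same replacement, so joins and
# cross-table consistency survive the masking.
print(pseudonymize_name("Elke Stangl"))
print(pseudonymize_name("Elke Stangl"))  # identical output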
This is really interesting! Are you going to blog about this someday :-)? (*) I guess the database in question is somehow related to eLearning? (And I guess you can't share details because this is covered by the NDA ;-))
(*)Probably I am asking because I absolutely feel the urge to get a bit more technical at this blog :-D
A lot of it is under NDA, but I may have a bit of a go at it in a general sense.
Lots to digest in this post. I don't think I would call myself a geek, so I think that it is not logic that is stopping me from dealing with security issues. It's one of those things that tends to be interesting only if you or someone close to you has been subject to a breach. And even then it slips from your consciousness after a while. Also, the tools to deal with it are not always intuitive.
Absolutely – and I don't act in geek mode all the time either. According to the research presented in the book, the more totally rational you are, the more unhappy you are. Depressed people are closest to the ideal of the economic, risk-calculating automaton. So if you want to stay sane, you might more easily become a victim of internet fraud.
Well-designed tools should work in terms of "Defend, don't ask" (Gutmann gives some design examples of how this could be done better).
Facing ISO 27002 certification in the next couple of months, and having a very practical approach, I found this most interesting to read.
Thanks for the feedback – much appreciated, as I know you are a seasoned expert! I believe if you are familiar with 'compliance' and standards _and_ have a pragmatic approach, clients will love you!
What I have sometimes observed is that newcomers are more "academic" in trying to implement some framework – I think I was myself, in my first PKI designs. The art of implementing compliance is getting it done so that it fulfils the semi-legal requirements without drowning people in paper and in concepts too complicated to ever be implemented.
If you only know the theory, you still have to find your way in the practical aspects of the job. LDAP can be used both in a very practical way and in a completely absurd way … keep it simple …
love the “what the user sees” image. I’ve made many such dialogs disappear temporarily or permanently, too, mostly because I wanted info on how not to pay for a cool game. Luckily, those sites never asked for any banking or personal information…
There are even installation handbooks that provide step-by-step instructions on how to get rid of security pop-ups – for cases where vendors didn't want to certify their components. So I really believe users are not to blame.