The Orphaned Internet Domain Risk

I have clicked on the company websites of social media acquaintances, and something was not right: slight errors in formatting, encoding errors for special German characters.

Then I notice that some of the pages contain links to other websites that advertise products in a spammy way. However, the links to the spammy sites are embedded in these alleged company websites in a subtle way: using the (nearly) correct layout, or embedding the link in a ‘news article’ that also contains legitimate product information – content really related to the internet domain I am visiting.

Looking up whois information tells me that these internet domains are not owned by my friends anymore – consistent with what they say on their social media profiles. So how come they ‘gave’ their former domains to spammers? They did not, and they didn’t need to: spammers simply watch out for expired domains, seize them when they become available – and then reconstruct the former legit content from public archives and interleave it with their spammy messages.

The former content of legitimate sites is often available on the web archive. Here is the timeline of one of the sites I checked:

Clicking on the details shows:

  • Last display of legit content in 2008.
  • In 2012 and 2013 a generic message from the hosting provider was displayed: This site has been registered by one of our clients.
  • After that we see mainly 403 Forbidden errors – so the spammers don’t want their site to be archived – but at one point a screen capture of the spammy site was taken.

The new site shows the name of the former owner at the bottom, but an unobtrusive link has been added, indicating the new owner – a US-based marketing and SEO consultancy.

So my takeaway is: if you ever feel like decluttering your websites and freeing yourself of your useless digital possessions – and possibly also social media accounts – think twice. As soon as your domain or name is available, somebody might take it, and re-use and exploit your former content and possibly your former reputation for promoting their spammy stuff in a shady way.

This happened a while ago, but I know now it can get much worse: why only distribute marketing spam if you can distribute malware through channels still considered trusted? In this blog post Malwarebytes raises the question of whether such practices are illegal or not – it seems that question is not straightforward to answer.

Visitors do not even have to visit the abandoned domain explicitly to be served malware. I have seen some reports of abandoned embedded plug-ins turned into malicious zombies. Silly example: if you embed your latest tweets, Twitter goes out of business, and its domains are seized by spammers – your Follow Me icon might help to spread malware.

If a legit site runs third-party code, it needs to trust the authors of this code. For example, Equifax’ website recently served spyware:

… the problem stemmed from a “third-party vendor that Equifax uses to collect website performance data,” and that “the vendor’s code running on an Equifax Web site was serving malicious content.”

So if you run any plug-ins, embedded widgets, or the like – better check regularly whether the originating domain is still run by the expected owner; monitor your vendors often, and don’t run code you do not absolutely need in the first place. Don’t use embedded active badges if a simple link to your profile would do.
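As a sketch of what such a recurring check could look like: the snippet below wraps the standard whois command-line tool and warns if the registrant string no longer matches what you expect. The domain names and the expected owner strings are made-up placeholders.

```python
# Sketch: recurring check whether third-party domains you embed content from
# (widgets, badges, scripts) still list the expected owner in whois.
# Domains and expected registrant strings are made-up placeholders.
import subprocess

EXPECTED_OWNERS = {
    "example-widget-vendor.com": "Example Widget Vendor Inc.",
    "example-analytics.net": "Example Analytics Ltd.",
}

def whois_text(domain: str) -> str:
    """Return raw whois output (requires the 'whois' command-line tool)."""
    result = subprocess.run(["whois", domain], capture_output=True, text=True, timeout=60)
    return result.stdout

for domain, owner in EXPECTED_OWNERS.items():
    text = whois_text(domain).lower()
    if owner.lower() not in text:
        print(f"WARNING: {domain} no longer lists '{owner}' in whois - investigate!")
    else:
        print(f"OK: {domain}")
```

Run from a scheduler once a week this is crude – whois output formats vary wildly – but it is still better than discovering the change only through spammy links on a friend’s former site.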

Do a painfully boring inventory and assessment often – then you will notice how much work it is to manage these ‘partners’ and rather stay away from signing up and registering for too many services.

Update 2017-10-25: And as we speak, we learn about another example – the snatching of a domain used by Dell backup software preinstalled on PCs.

When I Did Social Engineering without Recognizing It

I planned to read something about history this summer.

Then I picked the history of hacking. My favorite was Kevin Mitnick’s autobiography – the very definition of a page-turner.

The book is free of hardcore technical jargon and written for geeks and a lay audience alike. Readers are introduced to the spirit of a hacker in the older sense of the word: Mitnick’s hacks were motivated by the thrill of exploring systems, but he never gained financially.

Kevin Mitnick successfully obtained the latest source code of cell phones, reports on security vulnerabilities in operating systems, and legitimate-looking birth certificates of deceased children to set up new identities – thanks to his combination of technical skills and mastery of social engineering. He got people to reveal corporate information they should not have. The pieces of information are seemingly innocuous in their own right – the name of a server, a corporate directory of employees – but they help the social engineer learn the lingo and pose as a trusted insider.

Computer-police

I adhere to the conventions re hackneyed images (Wikimedia).

I have often been called way too honest – and thus unlikely to get anywhere in life, professionally. So I asked myself:

Could I con people into breaking rules? The intuitive answer was of course No.

But then the following anecdote emerged from a dark corner of my mind.

A long time ago I had worked as an IT Infrastructure Manager – responsible for quite a colorful IT environment run partly by subversive non-official admins. I actually transitioned into that role from supporting some of the latter. One of the less delightful duties was to keep those subversive elements from building rogue websites and circumventing the bureaucratic corporate content management system – by purchasing internet domains like super-fancy-product-name.com and hosting these services where they figured I would not find them.

I also had to clean up legacy mess.

One time we had to migrate an internet domain hosted on behalf of Another Very Important Organization to one of their servers. Routine stuff, had the domain been under our control. But it was tied to a subversive website a department had once set up, working with an external marketing consultancy. The consulting company was – as per the whois records – the official owner of the domain.

Actually, the owner listed was not even that company but a person who had been employed by that company and was not working for them anymore. I consulted with the corporate lawyers; it would have been a legal knot hard to disentangle.

However, I had to transfer the stuff right now. Internet domains have a legal owner and an administrative and a technical contact. The person able to do the transfer is the latter but he or she must not do it unless instructed to do so.

I tracked down the technical contact and called him up. The tech-c’s phone number was public information, very easy to find back then – nowadays you might need a tiny bit of social engineering to obtain it.

I explained the whole case to him – the whole truth, in all detail. He was a helpful network administrator working for a small internet provider. Having to deal with a typical network admin’s predicament immediately built a kind of bond. This is one of the things that makes working in IT infrastructure management enjoyable – in a job where you are only noticed if something goes wrong. (The rest of the time you are scolded for needing too much money and employing too many people.)

The result was that the domain was technically transferred to the intended target organization’s server immediately. But: If somebody asks you how this has been done – it wasn’t me!

This is the same concluding remark uttered by an admin at another telco later – whom I had convinced to hand me a company’s password. That inquiry of mine and the reasons given were also true and legitimate, as I was acting on behalf of a client – the password owner.

In both cases there was a third party, a client or colleague or employer, who was quite happy with the results.

But there weren’t any formal checks involved – people did not ask me for a verifiable phone number to call me back, nor did they want to talk to my boss or to the client. Had I simply fabricated the stories, I would still have managed to get a domain transferred and to obtain a hosting customer’s password.

Rusty and Crusty Padlock

The psychologically interesting part of my job was that I didn’t have real power to tell departments what they must or must not do. I could just persuade them.

I think this is an aspect very common to many corporate jobs today – jobs with grand titles but just a bunch of feeble dotted lines to the rest of the corporate universe and its peripheral contractors’ satellites – some of which you never meet face-to-face.

Combine that with an intricate tangle of corporate guidelines and rules – many of them set up to enforce security and compliance. In some environments people hardly get their jobs done without breaking or bending a subset of those rules.

Social engineering, in some sense, is probably what keeps companies functioning at all.

5 Years Anniversary: When My Phone Got Hacked

I like to play with phones.

Phone, 1970s, Austria

This is the phone we inherited when we bought our house. I kept it as it is the same 1970s type of phone I grew up with. I have recently resurrected it and connected it to an analog port of our phone system in this makeshift fashion. Great ringer!

5 years ago my cell phone decided it wanted to play on its own. It participated in a TV vote – so the provider said, and the itemized bill proved. This was for a music show I wouldn’t even watch if somebody paid me to.

The bill showed that my phone sent SMSes every few seconds, faster than a human being would be able to type. At that time I had two mobile phones with the same number. Neither of them showed any SMSes sent at that time.

The costs amounted to about € 27,- but this was negligible in comparison to the opportunity costs of me spending considerable time preparing documentation for the provider – naively assuming that they would appreciate my input.

My arguments were:

  • Neither of my phones sent the SMSes – see the attached screenshots of messages sent. On the day in question I neither placed nor received any calls at all.
  • That evening nobody was in the house who might have sent these SMSes for fun or by accident. No kids, no drunk friends at a party. I even offered to show them my calendar, entries in my time-tracking software, or my driver’s logbook to prove I was at home.
  • Sure – I could have used another phone in addition to the two I had. But then I would have had to remove the SIM card from one of the primary phones and insert it into a hacker phone. For that I would have needed to turn the one phone off and the other one on – and this should show up in their log files. And I hadn’t turned off the phones for a long time.

Things I didn’t say but figured were obvious:

  • We are a business customer with typical bills amounting to hundreds of Euros per month. It did not make sense from a commercial perspective to invest time in researching an issue related to a loss of € 27,-.
  • I work in security myself, and I would have had more lucrative things to do than putting together that documentation. I was a friendly, patient researcher informing a company about a security issue privately rather than describing it on my blog.

It was all in vain, and none of it was obvious to them. Their reply was: The bill shows that you sent these SMSes. Period. They claimed to have done a technical investigation, yet it took just a few hours.

I appealed to the Austrian Regulatory Authority for Broadcasting and Telecommunications (RTR), which handles such issues. They said they could not do anything either.

One year later I found a news article about a similar case – calls that had allegedly been made in the middle of the night, every few seconds, and that customer wasn’t believed either. (For German readers: article from archive.org.)

How could my phone(s) have been hacked?

Many how-tos can be found on the internet for cloning a GSM SIM card if you have physical access to the original, given the proper tools.

Over-the-air cloning was an option for the sophisticated hacker 10 years ago, but at the security conference Black Hat 2013 a German researcher presented his findings on breaking SIM cards’ protection mechanisms. He is quoted as saying:

Give me any phone number and there is some chance I will, a few minutes later, be able to remotely control this SIM card and even make a copy of it.

I had also found a few hints at a Bluetooth-related hack, but I had been paranoid enough to never turn on Bluetooth for exactly such reasons – and I considered it absurd that some evil hacker was lurking in the fields behind our backyard trying to control my phone over Bluetooth … for the sole purpose of placing these votes.

Incidentally, some time later I had access to an itemized phone bill issued by the same provider to a client of mine.

On the other customer’s bill I found different uncommon phone numbers, in this case for other silly games – but the pattern was the same: a small amount of money spent on dubious services compared to the total bill. Isn’t that a perfect business model? Rip off business customers whose bills are likely to be much higher than the costs of the fraudulent calls and whose lengthy detailed bills will not be checked. I only discovered both incidents because I am quite obsessed with a semi-automated nerdy analysis of phone bills.
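For the curious: a minimal sketch of the kind of semi-automated analysis I mean, assuming the itemized bill can be exported as a CSV file. The column names and the thresholds are made up for illustration.

```python
# Sketch: flag suspicious bursts of SMSes to the same destination number.
# Assumes an exported itemized bill as CSV with (hypothetical) columns:
# timestamp (ISO format), destination, type ("SMS" or "CALL").
import csv
from collections import defaultdict
from datetime import datetime

BURST_WINDOW_SECONDS = 60   # this many messages within the window is suspicious
BURST_THRESHOLD = 5

def load_sms_records(path):
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if row["type"] == "SMS":
                yield datetime.fromisoformat(row["timestamp"]), row["destination"]

def find_bursts(records):
    by_destination = defaultdict(list)
    for ts, dest in records:
        by_destination[dest].append(ts)
    for dest, times in by_destination.items():
        times.sort()
        # slide a window of BURST_THRESHOLD consecutive messages over the timeline
        for i in range(len(times) - BURST_THRESHOLD + 1):
            span = (times[i + BURST_THRESHOLD - 1] - times[i]).total_seconds()
            if span <= BURST_WINDOW_SECONDS:
                yield dest, times[i], span
                break

for dest, start, span in find_bursts(load_sms_records("itemized_bill.csv")):
    print(f"Suspicious: {BURST_THRESHOLD} SMSes to {dest} within {span:.0f}s starting {start}")
```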

Of course I called the phone company again on behalf of my client, and we were again treated as clueless participants in online games trying to deny the obvious.

I am not such a hardcore phone phreak – so I am still looking for clues.

The only feasible explanations were:

  • Somebody doing that elaborate over-the-air hack that was – in 2009 – quite leading edge.
  • A manipulation of the data in the provider’s data center – that’s why I thought my inquiry could be helpful and I would not be treated as the most stupid phone user or as a liar.

But SMS spoofing probably does not require so elaborate a hack, as it seems to be surprisingly easy to make a text message appear to originate from another number. Many sites offer SMS spoofing for pranks and for legitimate marketing. This article describes a scenario involving a malicious user impersonating the subscriber with number 1112221111 and explains that

The larger problem is that the subscriber attached to the 1112221111 number is billed for the SMS message and is likely to balk at the incorrect charge.

(Yes. If the customer has a chance to balk.)

Now I am waiting for some offers from lawyers reading this who might want to help me fight for my € 27,- in the future. I promise this is going to be as exciting as a Michael Crichton movie.

German desk phone W48

Legendary German post-war era phone “W48” (Wikimedia). I am the proud owner of the same type of phone, though mine does not shine that nicely, and I lack such a suitable cloth.

I was tempted to add – alluding to my nostalgic images: Those were so much safer! But the history of phone phreaking actually shows that the ancient phone system had suffered from glaring vulnerabilities re-discovered again and again since the 1950s. What did they expect from a system that uses the same line for sending voice and control signals? Kids with perfect pitch, often blind, discovered how to whistle their way to free long-distance calls.

I celebrated my phone hacking anniversary by reading this book, which I can only give my highest recommendation:

Exploding the Phone:
The Untold Story of the Teenagers and Outlaws Who Hacked Ma Bell
by Phil Lapsley.

The blurb is apt: Before smartphones and iPads, before the Internet or the personal computer, a misfit group of technophiles, blind teenagers, hippies, and outlaws figured out how to hack the world’s largest machine: the telephone system.

The Strange World of Public Key Infrastructure and Certificates

An e-mail discussion related to my recent post on IT security has motivated me to ponder issues with Public Key Infrastructure once more. So I attempt – most likely in vain – to merge a pop-sci introduction to certificates with sort of an attachment to said e-mail discussion.

So this post might be opaque to normal users and too epic and introductory for security geeks. I apologise for the inconvenience.

I mentioned the failed governmental PKI pilot project in that post – a hardware security device destroyed the key and there was no backup. I would have made fun of this – had I not experienced so often that it is the so-called simple processes and logistics that go wrong.

Ponte Milvio love padlocks

I didn’t expect to find such a poetic metaphor for “security systems” rendered inaccessible. Padlocks at Ponte Milvio in Italy – legend has it that lovers attaching a padlock to the bridge and throwing the key into the water will be together forever.

When compiling the following I had in mind what I call infrastructure PKIs – company-internal systems to be used mainly for internal purposes and very often for use by devices rather than by humans. (Ah, the internet of things.)

Issues often arise due to a combination of the following:

  • Human project resources assigned to such projects are often limited.
  • Many applications simply demand certificates so you need to create them.

Since the best way to understand certificates is probably by comparing them to passports or driver’s licenses, I will nonetheless use one issued to me as a human life-form:

Digital Certificate

In Austria the chip cards used to identify you as a patient to medical doctors can also be used as digital ID cards. That is, the card’s chip also holds the cryptographic private key, and the related certificate ties your identity as a citizen to the corresponding public key. A certificate is a file digitally signed by a Certificate Authority, which in this case has the name a-sign-Token-03. The certificate can be downloaded here or searched for in the directory (German site).

Digital X.509 Certificate: Details

The public key related to my identity as a citizen (or rather to a database record representing me as a citizen). Like a passport, the certificate has an end of life and requires renewal.
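For fellow geeks, here is a minimal sketch – using Python’s cryptography package – of how the fields discussed here can be read from such a certificate file; the file name is a placeholder.

```python
# Sketch: read subject, issuer, validity period and public key from a certificate.
# "citizen-cert.pem" is a placeholder for a downloaded certificate in PEM format.
from cryptography import x509
from cryptography.hazmat.primitives import serialization

with open("citizen-cert.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

print("Subject:   ", cert.subject.rfc4514_string())   # the certified identity
print("Issuer:    ", cert.issuer.rfc4514_string())    # the signing CA, e.g. a-sign-Token-03
print("Not before:", cert.not_valid_before)
print("Not after: ", cert.not_valid_after)            # end of life -> renewal needed

public_pem = cert.public_key().public_bytes(
    serialization.Encoding.PEM,
    serialization.PublicFormat.SubjectPublicKeyInfo,
)
print(public_pem.decode())
```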

Alternatives to Hardware Security Modules

An HSM protects the sacred private key of the certification authority. It is often a computer running a locked-down version of an operating system, equipped with sensors that detect any attempt to access the key store physically – it should actually destroy the key rather than let an attacker gain access to it.

It allows for implementing science-fiction-style (… Kirk Alpha 2 … Spock Omega 3 …) split administration and provides strong key protection that cannot be achieved if the private key is stored in software – somewhere on the hard disk of the machine running the CA.

Captain Jean-Luc Picard transfers command of the USS Enterprise-D to Captain Edward Jellico

Yes, a key ceremony – the initiation of a certification authority – sometimes feels like that (memory-alpha.org). Here is the definitive list of Star Trek authorization codes.

Modern HSMs have become less cryptic in terms of usage, but still: it is a hardware device not used on a daily basis, and it requires additional training and management. Storage of physical items like the keys for unlocking the device and the corresponding password(s) is a challenge, as is keeping admins’ know-how up to date.

Especially for infrastructure CAs I propose a purely organizational split administration for offline CAs such as a Root CA: storing the key in software, but treating the whole CA machine as a device to be protected physically. You could store the private key of the Root CA, or the virtual machine running the Root CA server, on removable media (and at least one backup). The “protocol” provides split administration: e.g. one party has the key to the room, the other party has the password to decrypt the removable medium. Or the unencrypted medium is stored in a location protected by a third party – which in turn only allows the two parties to enter the room together.
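A minimal sketch of the “key in software” part, assuming Python’s cryptography package: the Root CA key is written to the removable medium protected by a passphrase, so one party can hold the medium and the other the passphrase. Key parameters and paths are illustrative only.

```python
# Sketch: store the Root CA private key password-protected on removable media.
# Party A keeps the medium (and the room key), party B keeps the passphrase.
# Key parameters, passphrase handling and paths are illustrative only.
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa

root_key = rsa.generate_private_key(public_exponent=65537, key_size=4096)

passphrase = b"known-only-to-party-B"   # in practice: entered interactively, never hard-coded

encrypted_pem = root_key.private_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PrivateFormat.PKCS8,
    encryption_algorithm=serialization.BestAvailableEncryption(passphrase),
)

with open("/media/removable/rootca-key.pem", "wb") as f:   # placeholder mount point
    f.write(encrypted_pem)
```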

But before any split administration is applied and risks are evaluated, it should be made sure that the overall security strategy does not look like this:

Steps to nowhere^ - geograph.org.uk - 666960

From the description on Wikimedia: The gate is padlocked, though the fence would not prevent any moderately determined person from gaining access.

You might have to question the holy order (hierarchy) and the security implemented at the lowest levels of CA hierarchies.

Hierarchies and Security

In the simplest case a certification authority issues certificates to end-entities – users or computers. More complex PKIs consist of hierarchies of CAs and thus tree-like structures. The theoretical real-world metaphor would be an agency issuing some license to a subordinate agency that issues passports to citizens.

Chain of certificates associated with this blog

Chain of certificates associated with this blog: *.wordpress.com is certified by Go Daddy Secure Certification Authority, which is in turn certified by Go Daddy Class 2 Certification Authority. The asterisk in the name makes it usable with any wordpress.com site – but it defies the purpose of denoting one specific entity.

The Root CA at the top of the hierarchy should be the most secure, as if it is compromised (that is: its private key has – probably – been stolen) all certificates issued anywhere in the tree should be invalidated.

However, this logic only makes sense:

  • if there is or will with high probability be at least a second Issuing CA – otherwise the security of the Issuing CA is as important as that of the Root CA.
  • if the only purpose of that Root CA is to revoke the certificate of the Issuing CA. The Root CA’s key is going to sign a blacklist referring to the Issuing CA. Since the Root should not revoke itself, the key signing the revocation list should be harder to compromise than the key of the to-be-revoked Issuing CA.

Certificate Chain

The certificate chain associated with my “National ID” certificate. Actually, these certificates stored on chipcards are invalidated every time the card (which serves another purpose primarily) is retired as a physical item. Invalidation of tons of certificates can create other issues I will discuss below.

Discussions of the design of such hierarchies focus a lot on the security of the private keys and cryptographic algorithms involved.

Yet the effective security of an infrastructure PKI – in terms of Who will be able to enroll for certificate type X (which in turn might entitle you to do Y)? – is often mainly determined by typical access control lists in databases or directory systems integrated with the PKI. Think of would-be subscribers logging on to a web portal or to a Windows domain in order to enroll for a certificate. Consider e.g. Windows Autoenrollment (licensed also by non-Windows CAs) or the Simple Certificate Enrollment Protocol used with devices.

You might argue that it should be a no-no to make allegedly weak software-credential-based authentication the only prerequisite for the issuance of certificates that are then considered strong authentication. However, this is one of the things that distinguish primarily infrastructure-focused CAs from, say, governmental CAs or “High Assurance” smartcard CAs that require a face-to-face enrollment process.

In my opinion certificates are often deployed because there is no other option to provide platform-independent authentication – as cumbersome as it may be to import a key and certificate into something like a printer box. Authentication based on something else might be as secure, considering all risks, but not as platform-agnostic. (For geeks: one of my favorites is 802.1x computer authentication via PEAP-TLS versus EAP-TLS.)

It is finally the management of group memberships or access control lists or the like that will determine the security of the PKI.

Hierarchies and Cross-Certification

It is often discussed whether it makes sense to deploy more intermediate levels in the hierarchy – each level associated with additional management effort. In theory you could delegate the management of a whole branch of the CA tree to different organizations, e.g. corresponding to continents in global organizations. Actually, I found that the delegation argument is often used for political reasons – which results in CA-per-local-fiefdom instead of the (in terms of performance much more reasonable) CA-per-continent.

I believe the most important reason to introduce the middle level is for (future) cross-certification: if an external CA cross-certifies yours, it issues a certificate to your CA:

Cross Certification

Cross Certification between two CA hierarchies, each comprising three levels. Within a hierarchy each CA issues a certificate for its subordinate CA (orange lines). In addition the middle-tier CAs in each hierarchy issue certificates to the Root CAs of the other hierarchy – effectively creating logical chains consisting of 4 CAs. Image credits mine.

Any CA on any level could in principle be cross-certified. It would be easiest to cross-certify the Root CA, but then the full tree of CAs subordinate to it will also be certified (for the experts: I am not considering name or other constraints here). If the cross-certificate is issued to a CA at an intermediate level, trust is limited to this branch.

Cross-certification constitutes a bifurcation in the CA tree, and its consequences can be as weird and sci-fi as this sounds. It means that two different paths exist that connect an end-entity certificate to different Root CAs. Which path is actually chosen depends on the application validating the certificate and the protocol involved in exchanging or collecting certificates.

In an SSL handshake (which happens if you access your blog via https://yourblog.wordpress.com, using the certificate with that asterisk) the web server is so kind as to send the full certificate chain – usually excluding the Root CA – to the client. So the path finally picked by the client depends on the chain the server knows, or on the chain that takes precedence at the server.
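If you want to see which chain a particular server actually sends, a quick – and admittedly crude – way is to wrap openssl s_client, as in the sketch below; the host name is just a placeholder.

```python
# Sketch: print the certificate chain a TLS server presents in the handshake,
# by wrapping "openssl s_client -showcerts". Host name is a placeholder.
import subprocess

host = "yourblog.wordpress.com"
proc = subprocess.run(
    ["openssl", "s_client", "-connect", f"{host}:443", "-servername", host, "-showcerts"],
    input=b"", capture_output=True, timeout=30,
)

for line in proc.stdout.decode(errors="replace").splitlines():
    stripped = line.strip()
    # s_client prints each chain element as a numbered "s:" (subject) line
    # followed by an "i:" (issuer) line
    if stripped.startswith("i:") or " s:" in line:
        print(stripped)
```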

Cross-certification is usually done by CAs considered external, and it is expected that an application in the external world sees the path chaining to the External CAs.

Tongue-in-cheek I had once depicted the world of real PKI hierarchies and their relations as:

CA hierarchies in the real world.

CA hierarchies in the real world. Sort of. Image credits mine.

Weird things can happen if a web server is available on an internal network and accessible by the external world (…via a reverse proxy. I am assuming there is no deliberate termination of the SSL connection at the proxy – what I call a corporate-approved man-in-the-middle attack). This server knows the internal certificate chain and sends it to the external client – which does not trust the corresponding internal-only Root CA. But the chain sent in the handshake may take precedence over any other chain found elsewhere so the client throws an error.

How to Really Use “Cross-certification”

As confusing as cross-certification is, it can be used in a peculiar way to solve other PKI problems – those with applications that cannot deal with the validation of a hierarchy at all, or that can only deal with a one-level hierarchy. This is interesting in particular in relation to devices such as embedded industry systems or iPhones.

Assuming that the needed certificates can be safely injected into the right devices, and that you really know what you are doing, the whole pesky PKI hierarchy can be circumvented by providing an alternative Root CA certificate for the CA at the bottom of the hierarchy:

The real, full-blown hierarchy is:

  1. The Root CA issues a root certificate for the Root CA (itself). It contains key 1234.
  2. The Root CA issues a certificate to Some Other CA, related to key 5678.

… then the shortcut hierarchy for “dumb devices” looks like:

  1. Some Other CA issues a root certificate to itself, i.e. to a Subject named Some Other CA. The public key listed in this certificate is 5678, the same as in certificate (2) of the full-blown hierarchy.

Client certificates can then be validated via either chain – the long chain including several levels, or the short one consisting of a single CA only. Thus if certificates have been issued by the full-blown hierarchy they can be “dumbed down” for devices by creating the “one-level hierarchy” in addition.
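A minimal sketch of that shortcut, assuming you hold Some Other CA’s key pair and use Python’s cryptography package: you build an additional self-signed certificate whose Subject and Issuer are both Some Other CA and which carries the CA’s existing public key. Names, lifetime and the freshly generated key are placeholders – in reality you would of course load the CA’s real key.

```python
# Sketch: give "Some Other CA" an additional self-signed (root-style) certificate
# for its existing key pair, so dumb devices can treat it as a one-level hierarchy.
# Names, lifetime and key are placeholders.
from datetime import datetime, timedelta
from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

some_other_ca_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Some Other CA")])

self_signed = (
    x509.CertificateBuilder()
    .subject_name(name)                          # Subject = Some Other CA
    .issuer_name(name)                           # Issuer  = Some Other CA (self-issued)
    .public_key(some_other_ca_key.public_key())  # the same key as in the full hierarchy
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.utcnow())
    .not_valid_after(datetime.utcnow() + timedelta(days=3650))
    .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
    .sign(some_other_ca_key, hashes.SHA256())
)

print(self_signed.subject.rfc4514_string(), "==", self_signed.issuer.rfc4514_string())
```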

Names and Encoding

In the chain of certificates, the Issuer field in the certificate of the Subordinate CA needs to be the same as the Subject field of the Root CA – just as the Subject field in my National ID certificate contains my name and the Issuer field that of the signing CA. And it depends on the application how names will be checked. In a global world, names are not simple ASCII strings anymore, and encoding matters.

Certificates are based on an original request sent by the subordinate CA, and this request most often contains the name – the encoded name. I have sometimes seen CAs change the encoding of the names when issuing the certificates, or reshuffle the components of the name – the order of tags like organization and country. An application may accept that or not, and the reasons for rejection can be challenging to troubleshoot if the application is running in a blackbox-style device.
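A small sketch of the chaining check, again with the cryptography package: compare the Subordinate CA’s Issuer with the Root CA’s Subject, and dump the raw DER bytes of both names – the string forms can look identical while the encodings differ. The file names are placeholders.

```python
# Sketch: check that the Subordinate CA's Issuer equals the Root CA's Subject,
# and inspect the raw DER bytes, where encodings (PrintableString vs. UTF8String) may differ.
# File names are placeholders.
from cryptography import x509

with open("subca.pem", "rb") as f:
    sub_ca = x509.load_pem_x509_certificate(f.read())
with open("rootca.pem", "rb") as f:
    root_ca = x509.load_pem_x509_certificate(f.read())

print("Issuer of Sub CA :", sub_ca.issuer.rfc4514_string())
print("Subject of Root  :", root_ca.subject.rfc4514_string())
print("Names match      :", sub_ca.issuer == root_ca.subject)

# The string forms can look identical while the DER encodings differ:
print(sub_ca.issuer.public_bytes().hex())
print(root_ca.subject.public_bytes().hex())
```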

Revocation List Headaches

Certificates (X.509) can be invalidated by adding their respective serial numbers to a blacklist. This list is – or actually: may be – checked by relying parties. So full-blown certificate validation comprises collecting all certificates in the chain up to a self-signed Root CA (Subject = Issuer) and then checking each blacklist signed by each CA in the chain for the serial number of the entity one level below:

Certificate Validation

Validation of a certificate chain (“path”). You start from the bottom and locate both CA certificates and the revocation lists via URLs in each subordinate certificate. Image credits mine.
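A minimal sketch of the blacklist lookup for one link in that chain, using Python’s cryptography package: fetch the CRL from the URL embedded in the certificate and search it for the certificate’s serial number. The file name is a placeholder, and only the first HTTP distribution point is considered.

```python
# Sketch: check one certificate's serial number against the CRL published by its issuer.
# "leaf.pem" is a placeholder; only the first HTTP CRL distribution point is used.
from urllib.request import urlopen
from cryptography import x509
from cryptography.x509.oid import ExtensionOID

with open("leaf.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

cdp = cert.extensions.get_extension_for_oid(ExtensionOID.CRL_DISTRIBUTION_POINTS).value
url = next(
    name.value
    for point in cdp
    if point.full_name
    for name in point.full_name
    if name.value.startswith("http")
)

crl = x509.load_der_x509_crl(urlopen(url).read())
revoked = crl.get_revoked_certificate_by_serial_number(cert.serial_number)
print("REVOKED" if revoked is not None else "not on the blacklist")
```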

The downside: if the CRL isn’t available at all, applications following the recommended practices will, for example, deny network access to thousands of clients. With infrastructure PKIs that means that e.g. access to the WLAN or remote access via VPN will fail.

This makes desperate PKI architects (or rather the architects accountable for the application requiring certificate-based logon) build all kinds of workarounds, such as switching off CRL checking in case of an emergency or configuring grace periods. Note that this is all heavily application-dependent and has to be figured out and documented individually, for emergencies, for all VPN servers, exotic web servers, Windows domain controllers etc.

A workaround is imperative if a very important application is dependent on a CRL issued by an “external” certificate provider. If I used my Austrian digital ID card’s certificate for logging on to server X, that server would need to have a valid version of this CRL, which only lives for 6 hours.

Certificate Revocation List

A Certificate Revocation List (CRL) looks similar to a certificate. It is a file signed by the Certification Authority that also signed the certificates that might be invalidated via that CRL. From downloading this CRL frequently I conclude that a current version is published every hour – so there are 5 hours of overlap.

The predicament is that CRLs may be cached for performance reasons. Thus if you publish short-lived CRLs frequently you might face “false negative” outages due to operational issues (web server down…) but if the CRL is too long-lived it does not serve its purpose.

Ideally, CRLs would be valid for a few days, but a current CRL would be published, say, every day, AND you could delete the CRL at the validating application every day. That’s exactly how I typically try to configure it. VPN servers, for example, have long allowed deleting the CRL cache, and Windows has had a supported way to do that since Vista. This allows for reasonable continuity while revocation information is still current.

If you cannot control the CRL issuance process, one workaround is pro-active fetching of the CRL – provided it is published with an overlap, that is, the next CRL is published while the current one is still valid – and mirroring the repository in question.
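A sketch of such a pro-active fetch-and-mirror job, meant to be run from a scheduler well before the cached copy expires; the URL and the mirror path are placeholders.

```python
# Sketch: mirror an externally issued CRL pro-actively, long before the local copy expires.
# URL and mirror path are placeholders; run this e.g. hourly from a scheduler.
from datetime import datetime, timezone
from urllib.request import urlopen
from cryptography import x509

CRL_URL = "http://example-ca.example/crl/issuing-ca.crl"   # hypothetical distribution point
MIRROR_PATH = "/var/www/mirror/issuing-ca.crl"

data = urlopen(CRL_URL, timeout=30).read()
crl = x509.load_der_x509_crl(data)

now = datetime.now(timezone.utc).replace(tzinfo=None)
remaining = crl.next_update - now
print(f"CRL valid until {crl.next_update} ({remaining} left)")

if remaining.total_seconds() <= 0:
    raise SystemExit("Fetched CRL is already expired - keep the previous copy and alert!")

with open(MIRROR_PATH, "wb") as f:
    f.write(data)
```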

As an aside: it is more difficult than it sounds to give internal machines access to a “public” external URL. Machines do not necessarily use the proxy server configured for users (which causes false positive test results – Look, I tested it by accessing it in the browser and it works), and/or machines in the servers’ network are not necessarily allowed to access “the internet”.

CRLs might also simply be too big – for some devices with limited processing capabilities. Some devices of a major vendor used to refuse to process CRLs larger than 256kB. The CRL associated with my sample certificate is about 700kB:

LDAP CDP URL

How the revocation list is located – via a URL embedded in the certificate. For the experts: OCSP is supported, too, and it is the recommended method. However, considering older devices, it might be necessary to resort to CRLs.

CRL Details - Blacklist

The actual blacklist part of the CRL. The scrollbar is misleading – the list contains about 20.000 entries (best viewed with openssl or Windows certutil).

Emergency Revocation List

In case anything goes wrong – HSM inaccessible, passwords lost, datacenter 1 flooded and backup datacenter 2 destroyed by a meteorite – there is one remaining option to keep PKI-dependent applications happy:

Prepare a revocation list in advance whose end of life (NextUpdate date) is after the end of validity of the CA certificate. In contrast to any backup of key material, this CRL can be “backed up” by pasting the Base64 string into the documentation, as it does not contain sensitive information.

In an emergency this CRL will be published to the locations embedded in certificates. You will never be able to revoke anything anymore as CRLs might be cached – but business continuity is secured.

Emergency CRL

An Emergency CRL for my home-grown CA. It seems 9999 days is the maximum I can use with Windows certutil. Actually, the question of How many years should the lifetime be so that I will not be bothered anymore until retirement? comes up often in relation to all kinds of validity dates.
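Instead of certutil, such an emergency CRL could also be prepared with Python’s cryptography package – a hedged sketch: an empty revocation list signed by the CA key, with a NextUpdate far beyond the CA certificate’s end of life. The CA name, key file and lifetime are placeholders.

```python
# Sketch: prepare an "emergency CRL" in advance - empty, but with a NextUpdate
# beyond the CA certificate's end of validity. CA name, key file and lifetime are placeholders.
from datetime import datetime, timedelta
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.x509.oid import NameOID

with open("rootca-key.pem", "rb") as f:
    ca_key = serialization.load_pem_private_key(f.read(), password=None)

builder = (
    x509.CertificateRevocationListBuilder()
    .issuer_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "My Home-Grown Root CA")]))
    .last_update(datetime.utcnow())
    .next_update(datetime.utcnow() + timedelta(days=9999))   # outlives the CA certificate
)
# No revoked certificates are added - this CRL only keeps validation alive in an emergency.

emergency_crl = builder.sign(private_key=ca_key, algorithm=hashes.SHA256())

# Paste this Base64 block into the documentation; it contains no sensitive material.
print(emergency_crl.public_bytes(serialization.Encoding.PEM).decode())
```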

What I Never Wanted to Know about Security but Found Extremely Entertaining to Read

This is in praise of Peter Gutmann‘s book draft Engineering Security, and the title is inspired by his talk Everything You Never Wanted to Know about PKI but were Forced to Find Out.

Chances are high that any non-geek reader is already intimidated by the acronym PKI – sharing the links above on LinkedIn I have been asked Oh. Wait. What the %&$%^ is PKI??

This reaction is spot-on, as this post is more about usability and the perception of technology by end-users – despite, or because, I have worked for more than 10 years at the geeky end of Public Key Infrastructure. In summary, PKI is a bunch (actually a ton) of standards that should allow for creating the electronic counterparts of signatures, of issuing passports, of transferring data in locked cabinets. Basically, it should solve all security issues.

The following images from Peter Gutmann’s book might evoke some memories.

Security warnings designed by geeks look like this:

Peter Gutmann, Engineering Security, certificate warning - What the developers wrote

Peter Gutmann, Engineering Security, book draft, available at https://www.cs.auckland.ac.nz/~pgut001/pubs/book.pdf, p.167. Also shown in Things that Make us Stupid, https://www.cs.auckland.ac.nz/~pgut001/pubs/stupid.pdf, p.3.

As a normal user, you might rather see this:

Peter Gutmann, Engineering Security, certificate warning - What the user sees

Peter Gutmann, Engineering Security, book draft, available at https://www.cs.auckland.ac.nz/~pgut001/pubs/book.pdf, p.168.

The funny thing was that I picked this book to take a break from books on psychology and return to the geeky stuff – and then I was back to all kinds of psychological biases and Kahneman’s Prospect Theory for example.

What I appreciate in particular is the diverse range of systems and technologies considered – Apple, Android, UNIX, Microsoft, …, all evaluated agnostically – plus the diverse range of interdisciplinary research taken into account. Now that’s what I call true erudition with a modern touch. Above all, I enjoyed the conversational and irreverent tone – I have never before started reading a book for technical reasons and then been unable to put it down because it was so entertaining.

My personal summary – which resonates a lot with my experience – is:
In trying to make systems more secure you might not only make them more unusable and obnoxious, but also more insecure.

A concise summary is also given in Gutmann’s talk Things that Make Us Stupid. I liked in particular the ignition key as a real-world example of a device that is smart and easy to use, providing security as a by-product – very different from the interfaces of ‘security software’.

Peter Gutmann is not at all siding with ‘experts’ who always chide end-users for being lazy and dumb – writing passwords down and sticking the post-its on their screens – and who state that all we need is more training and user awareness. Normal users use systems to get their job done, and they apply risk management in an intuitive way: should I waste time following an obnoxious policy, or should I try to pass that hurdle as quickly as possible to do what I am actually paid for?

Geeks are weird – that’s a quote from the lecture slides linked above. Since Peter Gutmann is an academic computer scientist and obviously a down-to-earth practitioner with ample hands-on experience – which would definitely qualify him as a Geek God – his critique is even more convincing. In the book he quotes psychological research which shows that geeks really do think differently (as per standardized testing of personality types). Geeks constitute a minority of people (7%) who tend to make decisions – such as Should I click that pop-up? – in a ‘rational’ manner, as the simple and mostly wrong theories on decision making have proposed. One example Gutmann uses is testing for a basic understanding of logic, such as Does ‘All X are Y’ imply ‘Some X are Y’? Across cultures, the majority of people think that this is wrong.

Normal people – and I think also geeks when they don’t operate in geek mode, e.g. in the wild, not in their programmer’s cave – fall for many so-called fallacies and biases.

Our intuitive decision-making engine runs on autopilot, and we get conditioned to click away EULAs, next-next-finish the dreaded install wizards, or click away pop-ups, including the warnings. As users we don’t generate testable hypotheses or calculate risks but act unconsciously, based on our experience of what has worked in the past – and usually the click-away-anything approach works just fine. You would need US Navy-style constant drilling in order to be alert enough not to fall for those fallacies. This does exactly apply to anonymous end users using their home PCs to do online banking.

Security indicators like padlocks and browser address bar colors change with every version of popular browsers. Not even tech-savvy users are able to tell from those indicators whether they are ‘secure’ now. But here is what is extremely difficult: users would need to watch out for the lack of an indicator (one that’s barely visible when it is there). And we are – owing to confirmation bias – extremely bad at spotting the negative, the lack of something. Gutmann calls this the Simon Says problem.

It is intriguing to see how biases about what ‘the others’ – the users or the attackers – would do enter technical designs. For example, it is often assumed that a client machine or user who has authenticated itself is more trustworthy – and servers are more vulnerable to a malformed packet sent after successful authentication. In the Stuxnet attack, digitally signed malware (signed with stolen keys) was used – ‘if it’s signed it has to be secure’.

To make things worse, users are even conditioned for ‘insecure’ behavior: when banks use all kinds of fancy domain names to market their latest products, lure their users into clicking on links to those fancy sites in e-mails, and have them log on with their banking user accounts via these sites, they train users to fall for phishing e-mails – despite the fact that the same e-mails half-heartedly warn against clicking arbitrary links in e-mails.

I want to stress that I believe that, in relation to systems like PKI – which require you to run some intricate procedure only every few years (these are called ceremonies for a reason), but then it is extremely critical – admins should also be considered ‘users’.

I have spent many hours discussing proposed security features like Passwords need to be impossible to remember and never written down with people whose job it is to audit, draft policies, and read articles all day on what Gutmann calls conference-paper attacks. These are not the people who have to run systems, deal with helpdesk calls or costs, or handle requests from VIP users such as top-level managers who on the one hand are extremely paranoid about system administrators sniffing their e-mails, yet on the other hand need instant 24/7 support with the recovery of encrypted e-mails. (This should be given a name, like the Top Managers’ Paranoia Paradox.)

As a disclaimer I’d like to add that I don’t underestimate cyber security threats, risk management, policies etc. It is probably the current media hype on governments spying on us that makes me advocate a contrarian view.

I could back this up with tons of stories, many of them too good to be made up (but unfortunately NDA-ed): security geeks – ‘designers’ and ‘policy authors’ – often underestimate the time and effort required to run their solutions on a daily basis. It is often the so-called trivial and simple things that go wrong, such as: the documentation of that intricate process to be run every X years cannot be found, or the only employee who really knew about the interdependencies is long gone, or allegedly simple logistics go wrong (Now we are locked in the secret room to run the key ceremony… BTW did anybody think of having the media ready to install the operating system on that highly secure isolated machine?).

A large European PKI setup failed (it made headlines) because the sacred key of a root certification authority had been destroyed – which is the expected behavior of so-called Hardware Security Modules when they are tampered with, or at least when the sensors say so – and there was no backup. The companies running the project and running operations blamed each other.

I am not quoting this to make fun of others, although the typical response here is to state that projects or operations have been badly managed and you just need to throw more people and money at them to run secure systems in a robust and reliable way. This might be true, but it simply does not reflect the budget, time constraints, and lack of human resources that typical corporate IT departments have to deal with.

There is often a very real, palpable risk of trading off business continuity and availability (that is: safety) for security.

Again, I don’t want to downplay the risks associated with broken algorithms and the NSA reading our e-mail. But as Peter Gutmann points out, cryptography is the last thing an attacker would target (even if a conference-paper attack had shown it is broken) – the implementation of cryptography rather guides attackers along the lines of where not to attack. Just consider the spectacular recent ‘hack’ of a prestigious one-letter Twitter account, which actually involved blackmailing the user after the attacker had gained control over the user’s custom domain through social engineering – most likely of underpaid call-center agents who faced the dilemma of meeting their numbers in terms of customer satisfaction versus following the security awareness training they might have had.

Needless to say, encryption, smart cards, PKI etc. would not have prevented that type of attack.

Peter Gutmann says about himself that he is throwing rocks at PKIs, and I believe you can illustrate a particularly big problem using a perfect real-life metaphor: digital certificates are like passports or driver’s licenses to users – signed by a trusted agency.

Now consider the following: a user might commit a crime and his driver’s license is seized. PKI’s equivalent of that seizure is to have the issuing agency publish a blacklist regularly, listing all the bad guys. Police officers on the road need to have access to that blacklist in order to check drivers’ legitimacy. What happens if a user isn’t blacklisted but the blacklist publishing service is not available? The standard makes this check optional (as it does many other things, which is the norm when an ancient standard is retrofitted with security features), but let’s assume the police app follows the recommendation of what it SHOULD do. If the list is unavailable, the driver is considered an alleged criminal and has to exit the car.

You could also imagine something similar happening to train riders who have printed out an online ticket that cannot be validated (e.g. distinguished from a forgery) by the conductor due to a failure in the train’s IT systems.

Any ’emergency’ / ‘incident’ related to digital certificates that I was ever called upon to support was related to false negatives – users being blocked from doing what they needed to do because of missing, misconfigured, or (temporarily) unavailable certificate revocation lists (CRLs). The most important question in PKI planning is typically how to work around or prevent inaccessible CRLs. I am aware of how petty this problem may appear to readers – what’s the big deal in monitoring a web server? But have you ever noticed how many alerts (e.g. via SMS) a typical administrator gets – and how many of them are false alarms? When I ask what will happen if the PKI / CRL signing / the web server breaks on Dec. 24 at 11:30 (in a European country), I am typically told that we need to plan for at least some days until recovery. This means that the revocation information on the blacklist will be stale, too, as CRLs can be cached for performance reasons.

As you can imagine, most corporations rather tend to follow the reasonable approach of putting business continuity over security, so they want to make sure that a glitch in the web server hosting those blacklists will not stop 10.000 employees from accessing the wireless LAN, for example. Of course any weird standard can be worked around given infinite resources. The point I wanted to make is that these standards have been designed with something totally different in mind, by PKI Theologians in the 1980s.

Admittedly though, digital certificates and cryptography are a great playground for geeks. I think I was a PKI theologian myself many years ago, until I morphed into what I tongue-in-cheek call an anti-security consultant – trying to help users (and admins) keep on working despite new security features. I often advocated not using certificates and proposed alternative approaches, boiling the potential PKI project down to a few hours of work – against the typical consultant’s mantra of trying to make yourself indispensable in long-term projects and of designing black boxes the client will never be able to operate on his own. Not only because of the PKI overhead, but because the alternatives were as secure – just not as hyped.

So in summary I am recommending Peter Gutmann’s terrific resources (check out his Crypto Tutorial, too!) to anybody who is torn between geek enthusiasm for some obscure technology and questioning its value nonetheless.

Rusty Padlock

No post on PKI, certificates and keys would be complete without an image like this. I found the rusty one particularly apt here. (Wikimedia, user Garretttaggs)