The Orphaned Internet Domain Risk

I have clicked through the company websites of some social media acquaintances, and something was not right: slight errors in formatting, encoding errors in special German characters.

Then I noticed that some of the pages contain links to other websites that advertise products in a spammy way. However, the links to the spammy sites are embedded in these alleged company websites in a subtle way: using the (nearly) correct layout, or embedding the link in a ‘news article’ that also contains legitimate product information – content actually related to the internet domain I am visiting.

Looking up whois information tells me that these internet domains are no longer owned by my friends – consistent with what they actually say on their social media profiles. So how come they have ‘given’ their former domains to spammers? They did not, and they did not need to: spammers simply need to watch out for expired domains, seize them when they become available – and then reconstruct the former legitimate content from public archives and interleave it with their spammy messages.
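
A possible defensive counterpart: watch the whois records of domains you care about yourself. Here is a minimal sketch, using only Python’s standard library and the plain WHOIS protocol (RFC 3912). The whois server and the Registry Expiry Date field shown are the ones used for .com domains – other registries differ, and example.com is just a placeholder:

```python
import socket

def whois(domain: str, server: str = "whois.verisign-grs.com") -> str:
    """Plain WHOIS query (RFC 3912): TCP port 43, domain name plus CRLF."""
    with socket.create_connection((server, 43), timeout=10) as sock:
        sock.sendall((domain + "\r\n").encode())
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode(errors="replace")

# Print the expiry date of a (placeholder) .com domain worth monitoring.
for line in whois("example.com").splitlines():
    if "Registry Expiry Date" in line:
        print(line.strip())
```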

The former content of legitimate sites is often available on the web archive. Here is the timeline of one of the sites I checked:

Clicking on the details shows:

  • Last display of legit content in 2008.
  • In 2012 and 2013, a generic message from the hosting provider was displayed: This site has been registered by one of our clients.
  • After that we see mainly 403 Forbidden errors – so the spammers don’t want their site to be archived – but at one point a screen capture of the spammy site was taken.
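
Such a timeline can also be pulled programmatically, via the Wayback Machine’s CDX API. A minimal sketch (example.com is a placeholder for the domain in question):

```python
import json
import urllib.request

# List one snapshot per year for a domain, with the HTTP status code the
# archive's crawler saw - 200s, redirects, and the telltale 403s.
url = ("http://web.archive.org/cdx/search/cdx"
       "?url=example.com&output=json"
       "&fl=timestamp,statuscode&collapse=timestamp:4")
rows = json.load(urllib.request.urlopen(url))
for timestamp, status in rows[1:]:   # rows[0] is the header row
    print(timestamp[:4], status)
```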

The new site shows the name of the former owner at the bottom, but an unobtrusive link has been added, indicating the new owner – a US-based marketing and SEO consultancy.

So my takeaway is: if you ever feel like decluttering your websites and freeing yourself of your useless digital possessions – and possibly also of social media accounts – think twice: as soon as your domain or name is available, somebody might take it, and re-use and exploit your former content, and possibly your former reputation, for promoting their spammy stuff in a shady way.

This happened a while ago, but I know now it can get much worse: why only distribute marketing spam if you can distribute malware through channels still considered trusted? In this blog post Malwarebytes raises the question of whether such practices are illegal or not – it seems that question is not straightforward to answer.

Visitors do not even have to visit the abandoned domain explicitly to be served malware. I have seen some reports of abandoned embedded plug-ins turned into malicious zombies. A silly example: if you embed your latest tweets, Twitter goes out of business, and its domains are seized by spammers – your Follow Me icon might help to spread malware.

If a legitimate site runs third-party code, its operators need to trust the authors of that code. For example, Equifax’ website recently served spyware:

… the problem stemmed from a “third-party vendor that Equifax uses to collect website performance data,” and that “the vendor’s code running on an Equifax Web site was serving malicious content.”

So if you run any plug-ins, embedded widgets, or the like – better check regularly whether the originating domain is still run by the expected owner; monitor your vendors often; and don’t run code you do not absolutely need in the first place. Don’t use embedded active badges if a simple link to your profile would do.
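
As a starting point for such a check, you can list every third-party host a page pulls active content from – each of them is then a candidate for the whois lookup sketched above. A minimal sketch using only Python’s standard library (the page URL is a placeholder):

```python
from html.parser import HTMLParser
from urllib.parse import urlparse
from urllib.request import urlopen

class EmbedFinder(HTMLParser):
    """Collect external hosts referenced by scripts, frames, and images."""
    def __init__(self, own_host: str):
        super().__init__()
        self.own_host = own_host
        self.hosts = set()

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "iframe", "img", "embed"):
            for name, value in attrs:
                if name == "src" and value:
                    host = urlparse(value).netloc
                    if host and host != self.own_host:
                        self.hosts.add(host)

page = "https://example.com/"          # placeholder: the page you want to audit
finder = EmbedFinder(urlparse(page).netloc)
finder.feed(urlopen(page).read().decode(errors="replace"))
for host in sorted(finder.hosts):
    print(host)                        # each of these is a whois check away
```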

Do the painful, boring inventory and assessment regularly – then you will notice how much work it is to manage these ‘partners’, and you will rather stay away from signing up and registering for too many services.

Update 2017-10-25: And as we speak, we learn about another example – the snatching of a domain used by Dell backup software preinstalled on PCs.

Give the ‘Thing’ a Subnet of Its Own!

To my surprise, the most clicked post ever on this blog is this:

Network Sniffing for Everyone:
Getting to Know Your Things (As in Internet of Things)

… a step-by-step guide to sniffing the network traffic of your ‘things’ contacting their mothership, plus a brief introduction to networking. I wanted to show how you can trace your networked devices’ traffic without any specialized equipment – just by being creative with what many users might already have: turning a Windows PC into a router with Internet Connection Sharing.

Recently, an army of captured things took down part of the internet, and this reminded me of that post. No, this is not one more gloomy article about the Internet of Things. I just needed to use this Internet Sharing feature for the very purpose it was actually invented for.

The Chief Engineer had finally set up the perfect test lab for programming and testing freely programmable UVR16x2 control systems (successor of the UVR1611). But this test lab was in a spot not equipped with wired ethernet, and the control unit’s data logger and ethernet gateway, the so-called CMI (Control and Monitoring Interface), only has a LAN interface and no WLAN.

So an ages-old test laptop was revived to serve as a router (improving its ecological footprint in passing): this notebook connects to the standard ‘office’ network via WLAN; this wireless connection is thus the internet connection that can be shared with a device connected to the notebook’s LAN interface, e.g. via a cross-over cable. As explained in detail in the older article, the router-laptop then allows for sniffing the traffic – but above all it allows the ‘thing’ to connect to the internet at all.

This is the setup:

Using a notebook with Internet Connection Sharing enabled as a router to connect the CMI (UVR16x2's ethernet gateway) to the internet

The router laptop is automatically configured with IP address 192.168.137.1 and, as a DHCP server, hands out addresses in the 192.168.137.x network, while its WLAN adapter uses an IP address provided by the internet router (indicated here as the commonly used 192.168.0.x addresses). If Windows 10 is used on the router notebook, you might need to re-enable ICS after a reboot.
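
If you prefer a scriptable sniffer over a GUI tool on the router notebook, here is a minimal sketch using the third-party scapy package (it needs administrator rights, and on Windows also Npcap; the interface name below is an assumption – adjust it to your machine):

```python
from scapy.all import sniff   # pip install scapy

def show(packet):
    # One-line summary per packet: addresses, protocol, ports.
    print(packet.summary())

# Capture only the 'thing' traffic on the ICS subnet, on the LAN interface.
sniff(iface="Ethernet", filter="net 192.168.137.0/24", prn=show, store=False)
```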

The control unit is connected to the CMI via CAN bus – so the combination of test laptop, CMI, and UVR16x2 control unit is similar to the setup used for investigating CAN monitoring recently.

The CMI ‘thing’ is tucked away in a private subnet dedicated to it, and it cannot be accessed directly from any ‘Office PC’ – except the router PC itself. A standard office PC (green) effectively has to access the CMI via the same ‘cloud’ route as an Internet User (red). This makes the setup a realistic test for future remote support – when the CMI plus control unit has been shipped to its proud owner and is configured on the final local network.

The private subnet setup is also a simple workaround in case several things cannot get along with each other: for example, an internet TV service flooded the CMI’s predecessor BL-NET with packets that were hard to digest – so BL-NET refused to work until it was rebooted. Putting the sensitive device in a private subnet – using a ‘spare part’ router – solved the problem.

The Chief Engineer's quiet test lab for testing and programming control units

What I Never Wanted to Know about Security but Found Extremely Entertaining to Read

This is in praise of Peter Gutmann‘s book draft Engineering Security, and the title is inspired by his talk Everything You Never Wanted to Know about PKI but were Forced to Find Out.

Chances are high that any non-geek reader is already intimidated by the acronym PKI – sharing the links above on LinkedIn, I was asked: Oh. Wait. What the %&$%^ is PKI??

This reaction is spot-on, as this post is more about usability and the perception of technology by end-users – despite, or because of, the fact that I have worked for more than 10 years at the geeky end of Public Key Infrastructure. In summary, PKI is a bunch (actually a ton) of standards that should allow for creating the electronic counterparts of signatures, of passports, of data transferred in locked cabinets. Basically, it should solve all security issues.

The following images from Peter Gutmann’s book might evoke some memories.

Security warnings designed by geeks look like this:

Peter Gutmann, Engineering Security, certificate warning - What the developers wrote

Peter Gutmann, Engineering Security, book draft, available at https://www.cs.auckland.ac.nz/~pgut001/pubs/book.pdf, p.167. Also shown in Things that Make us Stupid, https://www.cs.auckland.ac.nz/~pgut001/pubs/stupid.pdf, p.3.

As a normal user, you might rather see this:

Peter Gutmann, Engineering Security, certificate warning - What the user sees

Peter Gutmann, Engineering Security, book draft, available at https://www.cs.auckland.ac.nz/~pgut001/pubs/book.pdf, p.168.

The funny thing was that I picked this book to take a break from books on psychology and return to the geeky stuff – and then I was back to all kinds of psychological biases and, for example, Kahneman’s Prospect Theory.

What I appreciate in particular is the diverse range of systems and technologies considered – Apple, Android, UNIX, Microsoft, … – all evaluated agnostically, plus the diverse range of interdisciplinary research taken into account. Now that’s what I call true erudition with a modern touch. Above all, I enjoyed the conversational and irreverent tone – never before have I started reading a book for technical reasons and then been unable to put it down because it was so entertaining.

My personal summary – which resonates a lot with my experience – is:
In trying to make systems more secure, you might not only make them more unusable and obnoxious, but also more insecure.

A concise summary is also given in Gutmann’s talk Things that Make Us Stupid. I particularly liked the ignition key as a real-world example of a device that is smart and easy to use, and that provides security as a by-product – very different from the interfaces of ‘security software’.

Peter Gutmann is not at all siding with the ‘experts’ who always chide end-users for being lazy and dumb – for writing passwords down and sticking the post-its on their screens – and who state that all we need is more training and user awareness. Normal users use systems to get their job done, and they apply risk management in an intuitive way: should I waste time following an obnoxious policy, or should I try to get past that hurdle as quickly as possible and do what I am actually paid for?

Geeks are weird – that’s a quote from the lecture slides linked above. Since Peter Gutmann is an academic computer scientist and obviously a down-to-earth practitioner with ample hands-on experience – which would definitely qualify him as a Geek God – his critique is even more convincing. In the book he quotes psychological research which proves that geeks really do think differently (as per standardized testing of personality types). Geeks constitute a minority of people (7%) who tend to take decisions – such as Should I click that pop-up? – in a ‘rational’ manner, as the simple and mostly wrong theories of decision making have proposed. One example Gutmann uses is testing for a basic understanding of logic, such as: Does ‘All X are Y’ imply ‘Some X are Y’? Across cultures, the majority of people think that it does not.
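
Formally – granting the traditional assumption of syllogistic logic that at least one X actually exists – the inference is valid:

```latex
% All X are Y; and (traditional existential import) at least one X exists;
% therefore some X are Y:
\forall x\,\bigl(X(x)\rightarrow Y(x)\bigr),\;\; \exists x\,X(x)
\;\;\vdash\;\; \exists x\,\bigl(X(x)\wedge Y(x)\bigr)
```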

Normal people – and I think also geeks when they don’t operate in geek mode, e.g. when out in the wild rather than in their programmer’s cave – fall for many so-called fallacies and biases.

Our intuitive decision-making engine runs on autopilot, and we get conditioned to click away EULAs, to next-next-finish the dreaded install wizards, and to click away pop-ups, including the warnings. As users we don’t generate testable hypotheses or calculate risks, but act unconsciously, based on our experience of what has worked in the past – and usually the click-away-anything approach works just fine. You would need US Navy-style constant drilling in order to be alert enough not to fall for those fallacies. And this is exactly what we expect of anonymous end users doing online banking on their home PCs.

Security indicators like padlocks and browser address bar colors change with every version of the popular browsers. Not even tech-savvy users can tell from those indicators whether they are ‘secure’ now. But here is what is extremely difficult: users would need to watch out for the lack of an indicator – one that is barely visible even when it is there. And we are – owing to confirmation bias – extremely bad at spotting the negative, the lack of something. Gutmann calls this the Simon Says problem.

It is intriguing to see how biases about what ‘the others’ – the users or the attackers – would do enter technical designs. For example, it is often assumed that a client machine or user that has authenticated itself is more trustworthy – so servers are more vulnerable to a malformed packet sent after successful authentication. And in the Stuxnet attack, digitally signed malware (signed with stolen keys) was used – ‘if it’s signed, it has to be secure’.

To make things worse, users are even conditioned to ‘insecure’ behavior: when banks use all kinds of fancy domain names to market their latest products, lure their users into clicking on links to those fancy sites in e-mails, and have them log on with their banking accounts via these sites, they train users to fall for phishing e-mails – despite the fact that the same e-mails half-heartedly warn about clicking arbitrary links in e-mails.

I want to stress that I believe that in relation to systems like PKI – which require you to run some intricate procedure only every few years (these are called ceremonies for a reason), but then it is extremely critical – admins, too, should be considered ‘users’.

I have spent many hours discussing proposed security features like Passwords need to be impossible to remember and never written down with people whose job it is to audit, draft policies, and read articles all day on what Gutmann calls conference-paper attacks. These are not the people who have to run systems, deal with helpdesk calls or costs, or handle requests from VIP users such as top-level managers – who on the one hand are extremely paranoid about system administrators sniffing their e-mails, yet on the other hand need instant 24/7 support with the recovery of encrypted e-mails. (This should be given a name, like the Top Managers’ Paranoia Paradox.)

As a disclaimer I’d like to add that I don’t underestimate cyber security threats, risk management, policies, etc. It is probably the current media hype about governments spying on us that makes me advocate a contrarian view.

I could back this up with tons of stories, many of them too good to be made up (but unfortunately NDA-ed): security geeks – in terms of ‘designers’ and ‘policy authors’ – often underestimate the time and effort required to run their solutions on a daily basis. It is often the so-called trivial and simple things that go wrong, such as: the documentation of that intricate process to be run every X years cannot be found; the only employee who really knew about the interdependencies is long gone; allegedly simple logistics go wrong (Now we are locked in the secret room to run the key ceremony… By the way, did anybody think of having the media ready to install the operating system on that highly secure, isolated machine?).

A large European PKI setup failed (it made headlines) because the sacred key of a root certification authority had been destroyed – which is the expected behavior of so-called Hardware Security Modules when they are tampered with, or at least when their sensors say so – and there was no backup. The companies running the project and those running operations blamed each other.

I am not quoting this to make fun of others, although the typical response here is to state that such projects or operations were badly managed, and that you just need to throw more people and money at them to run secure systems in a robust and reliable way. This might be true, but it simply does not reflect the budgets, time constraints, and lack of human resources that typical corporate IT departments have to deal with.

There is often a very real, palpable risk of trading off business continuity and availability (that is: safety) for security.

Again, I don’t want to downplay the risks associated with broken algorithms and the NSA reading our e-mail. But as Peter Gutmann points out, cryptography is the last thing an attacker would target (even if a conference-paper attack has shown it to be broken) – the implementation of cryptography rather guides attackers along the lines of where not to attack. Just consider the spectacular recent ‘hack’ of a prestigious one-letter Twitter account: the attacker actually blackmailed the user after gaining control over the user’s custom domain through social engineering – most likely of underpaid call-center agents who had to face the dilemma of meeting their numbers in terms of customer satisfaction versus following whatever security awareness training they might have had.

Needless to say, encryption, smart cards, PKI etc. would not have prevented that type of attack.

Peter Gutmann says about himself that he is throwing rocks at PKIs, and I believe you can illustrate a particularly big problem using a perfect real-life metaphor: to users, digital certificates are like passports or driver’s licenses – signed by a trusted agency.

Now consider the following: a user commits a crime and his driver’s license is seized. PKI’s equivalent of that seizure is for the issuing agency to publish a blacklist regularly, listing all the bad guys. Police officers on the road need access to that blacklist in order to check drivers’ legitimacy. What happens if a user isn’t blacklisted, but the blacklist publishing service is unavailable? The standard makes this check optional (like many other things – the norm when an ancient standard is retrofitted with security features), but let’s assume the police app follows the recommendation of what it SHOULD do: if the list is unavailable, the user is considered an alleged criminal and has to exit the car.

You could also imagine something similar happening to train riders who have printed out an online ticket that cannot be validated (e.g. distinguished from a forgery) by the conductor due to a failure in the train’s IT systems.

Any ’emergency’ or ‘incident’ related to digital certificates that I was ever called upon to support with was about legitimate users being wrongly blocked from doing what they needed to do – because of missing, misconfigured, or (temporarily) unavailable certificate revocation lists (CRLs). The most important question in PKI planning is typically how to prevent or work around inaccessible CRLs. I am aware of how petty this problem may appear to readers – what’s the big deal in monitoring a web server? But have you ever noticed how many alerts (e.g. via SMS) a typical administrator gets – and how many of them are false alarms? When I ask what will happen if the PKI / the CRL signing service / the web server breaks on Dec. 24 at 11:30 (in a European country), I am typically told that we need to plan for at least some days until recovery. This means that the revocation information on the blacklist will be stale, too, as CRLs can be cached for performance reasons.
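
To make that dilemma concrete, here is a minimal sketch of a relying party’s CRL check, using the third-party cryptography package; the CRL URL and the serial number are made up for illustration. The interesting branch is the first one – what to do when the blacklist cannot be fetched at all:

```python
import datetime
import urllib.request
from cryptography import x509   # pip install cryptography

CRL_URL = "http://pki.example.com/ca.crl"   # hypothetical CRL distribution point
SERIAL = 0x1234                             # hypothetical certificate serial number

def revocation_status(crl_url: str, serial: int) -> str:
    try:
        der = urllib.request.urlopen(crl_url, timeout=5).read()
    except OSError:
        # The classic real-world incident: the blacklist is unreachable.
        # Fail closed and you lock legitimate users out; fail open and
        # revocation is meaningless. The standard leaves this to you.
        return "CRL unavailable - fail open or fail closed?"
    crl = x509.load_der_x509_crl(der)
    if crl.next_update and crl.next_update < datetime.datetime.utcnow():
        return "CRL is stale - cached copy past its next_update"
    if crl.get_revoked_certificate_by_serial_number(serial) is not None:
        return "certificate is revoked"
    return "certificate is not on the blacklist"

print(revocation_status(CRL_URL, SERIAL))
```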

As you can imagine, most corporations rather follow the reasonable approach of putting business continuity over security, so they want to make sure that a glitch in the web server hosting those blacklists will not stop 10,000 employees from accessing the wireless LAN, for example. Of course, any weird standard can be worked around given infinite resources. The point I wanted to make is that these standards were designed with something totally different in mind – by PKI Theologians in the 1980s.

Admittedly though, digital certificates and cryptography are a great playground for geeks. I think I was a PKI Theologian myself many years ago, until I morphed into what I tongue-in-cheek call an anti-security consultant – trying to help users (and admins) to keep on working despite new security features. I often advocated for not using certificates at all, proposing alternative approaches that boiled the potential PKI project down to a few hours of work – against the typical consultant’s mantra of making yourself indispensable in long-term projects and of designing black boxes the client will never be able to operate on their own. Not only because of the PKI overhead, but because the alternatives were just as secure – only not as hyped.

So in summary, I recommend Peter Gutmann’s terrific resources (check out his Crypto Tutorial, too!) to anybody who is torn between geek enthusiasm for some obscure technology and questioning its value nonetheless.

Rusty Padlock

No post on PKI, certificates, and keys would be complete without an image like this. I found the rusty one particularly apt here. (Wikimedia, user Garretttaggs)

Cyber Security Satire?

I am a science fiction fan. In particular, I am a fan of movies featuring Those Lonesome Nerds who are capable of controlling this planet’s critical infrastructure – from their gloomy basements.

But is it science fiction? In the year Die Hard 4.0 was released, a classified video was recorded – showing an electrical generator dying from a cyber attack.

Fortunately, “Aurora” was just a test attack against a replica of a power plant:

Now some of you know that the Subversive El(k)ement calls herself a Dilettante Science Blogger on Twitter.

But here is an epic story to be unearthed, and it would take a novelist to do it justice. I can imagine the long-winded narrative unfolding: of people who cannot use their showers or toilets anymore after the blackout; of sinister hackers sending their evil commands into the command centers of that intricate blood circulation of our society we call The Power Grid. Of course they use smart meters to start their attack.

Unfortunately, my feeble attempts at dipping my toes into novel writing were crushed before I even got started: this novel already exists – in German. I will inform you when it has been translated – either into an English novel or directly into a Hollywood movie script.

As I am probably not capable of writing a serious thriller anyway, I would rather go for dark satire.

Douglas Adams covered so many technologies in The Hitchhiker’s Guide to the Galaxy – existing and imagined ones – but he did not elaborate much on intergalactic power transmission. So here is room for satire.

What if our Most Critical Infrastructure were attacked not by sinister hacker nerds, but by our smart systems’ smartness – er, dumbness? (Or their operators’.)

(To all you silent readers and idea grabbers out there: don’t underestimate the cyber technology I have built into this mostly harmless wordpress.com blog: I know all of you who are reading this, and if you exploit this idea instead of me, I will time-travel back and forth and ruin your online reputation.)

That being said, I start crafting the plot:

As Adams probably drew his inspiration for the Vogons and InfiniDim Enterprises from his encounters with corporations and bureaucracy, I will extrapolate my cyber security nightmare from an anecdote:

Consider a programmer (a geek – sorry for the redundant information!) trying to test his code. (And sorry for the gender stereotype: as a geekess, I am allowed to do this. It could be a female geek also!)

The geek’s code should send messages to other computers in a Windows domain. (‘Domain’ is a technical term, not some geeky reference to the Dominion or the like.) He is using net send. For Generation Y-ers and other tablet and smartphone freaks: this is like social media status message junk, lacking images.

But our geek protagonist makes a small mistake: he does not send the test command to his test computer only – but to “EUROPE”. This nearly refers to the whole continent: actually, it addresses all computers in all European subsidiaries of a true Virtual Cyber Empire.

Fortunately, modern IT networks are built on nearly AI-powered devices called switches, which made the cyber attack peter out at the borders of That Large City.

How could we turn this into a story about an attack on the power grid, adding your typical ignorant, non-tech, sensationalist writer’s clichéd ideas:

  1. A humanoid life-form (or a flawed android testing his emotions chip) is tinkering with a sort of Hello World! command – sent to The Whole World, literally.
  2. The attack is just a glitch, an unfortunate concatenation of events, launched in an unrelated part of cyberspace – e.g. by a command displayed on a hacker’s screen in a YouTube video. Or it was launched from the gas grid.
  3. The Command of Death spreads pandemically over the continent, replicating itself more efficiently than cute cat videos on social networks.
Circuit Breaker 115 kV

Any pop-sci article related to the power grid needs to show off some infrastructure like that (Circuit Breaker, Wikimedia)

I contacted my agent immediately.

Shattering my enthusiasm, she told me:

This is not science-fiction – this is simply boring. Something like that happened recently in a small country in the middle of Europe.

According to this country’s news, a major power blackout was barely avoided in May 2013. Engineers needed to control the delicate balance of power supply and demand manually, as the power grid’s control system had been flooded with gibberish – data that could not be interpreted.

The alleged originator of these commands was a gas transmission system operator in the neighboring country. This company was testing a new control system and tried to poll all of its meters for a status update. Somehow the command found its way from the gas grid to the European power grid and was replicated.

_________________________

Update – bonus material, making of: for the first time I felt the need to tell this story twice – in German and in English. This is not a translation, but rather two different versions in parallel universes. German-speaking readers – this is the German instance of the post.