Hacking

I am joining the ranks of self-proclaimed productivity experts: Do you feel distracted by social media? Do you feel that too much scrolling of feeds transforms your mind – in a bad way? Solution: Go find an online platform that will put your mind in a different state. Go hacking on hackthebox.eu.

I have been hacking boxes over there for quite a while – and obsessively. I really wonder why I did not try to attack something much earlier. It’s funny, as I have been into IT security for a long time – ‘infosec’, as it seems to be called now – but I was always a member of the Blue Team, a defender: hardening Windows servers, building Public Key Infrastructures, always learning about attack vectors … but never really testing them extensively myself.

Earlier this year I was investigating the security of some things. They were black boxes to me, and I figured I finally needed to learn about some offensive tools – so I set up a Kali Linux machine. Then I searched for the best way to learn about these tools, and I read articles and books about pentesting. But I had no idea if these ‘things’ were vulnerable at all, or where to start. So I figured: Maybe it is better to attack something made vulnerable intentionally? There are vulnerable web applications, and you can download vulnerable virtual machines … but then I remembered the posts about hackthebox I had seen some months ago:

As an individual, you can complete a simple challenge to prove your skills and then create an account, allowing you to connect to our private network (HTB Labs) where several machines await for you to hack them.

Back then I had figured I would not pass this entry challenge nor hack any of these machines. It turned out otherwise, and it has been a very interesting experience so far – to learn about pentesting tools and methods on the fly. It has all been new, yet familiar in some sense.

Once I had been a so-called expert for certain technologies or products. But very often I became that expert by effectively reverse engineering the product a few days before I showed off that expertise. I had the exact same mindset and methods that are needed to attack the vulnerable applications of these boxes. I believe that in today’s world of interconnected systems, rapid technological change, [more buzz words here], every ‘subject matter expert’ is often actually reverse engineering – rather than applying knowledge acquired by proper training. I had certifications, too – but typically I never attended a course; I just took the exam after I had learned on the job.

On a few boxes I could use in-depth knowledge about protocols and technologies I had long-term experience with, especially Active Directory and Kerberos. However, I did not find those boxes easier to own than, e.g., the Linux boxes where everything was new to me. With Windows boxes I focussed too much on things I knew, and overlooked the obvious. On Linux I was just a humble learner – and it seemed this made me find the vulnerability or misconfiguration faster.

I felt like time-travelling back to when I started ‘in IT’, back in the late 1990s. Now I can hardly believe that I went directly from staff scientist in a national research center to down-to-earth freelance IT consultant – supporting small businesses. With hindsight, I knew so little both about business and about how IT / Windows / computers are actually used in the real world. I tried out things, I reverse engineered, I was humbled by what remained to be learned. But on the other hand, I was delighted by how many real-life problems – for whose solution people were eager to pay – could be solved pragmatically by knowing only 80%. Writing academic papers had felt more like aiming at 130% all of the time – and before that, you have to beg governmental entities to pay for it. Some academic colleagues were upset by my transition to the dark side, but I never saw this chasm: Experimental physics was about reverse engineering natural black boxes – and sometimes about reverse engineering your predecessor’s enigmatic code. IT troubleshooting was about reverse engineering software. Theoretically it is all about logic and just zeros and ones, and you should be able to track down the developer who can explain that weird behavior. But in practice, as a freshly minted consultant without any ‘network’ you can hardly track down that developer in Redmond – so you make educated guesses and poke around the system.

I also noted eerie coincidences: In the months before being sucked into hackthebox’s black hole, I had been catching up on Python, C/C++, and PowerShell – for productive purposes, for building something. But all of that is very useful now, for using or modifying exploits. In addition, I realize that my typical console applications for simulations and data analysis are quite similar ‘in spirit’ to typical exploitation tools. Last year I also learned about design patterns and best practices in object-oriented software development – and I was about to overdo it. Maybe it’s good to throw in some Cowboy Coding for good measure!

But above all, hacking boxes is simply addictive in a way that cannot be fully explained. It is like reading novels about mysteries and secret passages. Maybe this is what computer games are to some people. Some commentators say that machines on pentesting platforms are more Capture-the-Flag-like (CTF) than real-world pentesting. It is true that some challenges have a ‘story line’ that takes you from one solved puzzle to the next one. To some extent a part of the challenge has to be fabricated, as there are no real users to social engineer. But there are very real-world machines on hackthebox, e.g. requiring you to escalate from one ‘item’ in a Windows domain to another.

And if you have ever seen what stuff is stored in clear text in the real world, or what passwords might be used ‘just for testing’ (and never changed) – then even the artificial guess-the-password challenges do not appear that unrealistic. I want to emphasize that I am not the one to make fun of weak test passwords and the like at all. More often than not I was the one whose job was to get something working / working again, under pressure. Sometimes it is not exactly easy to ‘get it working’ quickly, in an emergency, and at the same time consider all security implications of the ‘fix’ you have just applied – by thinking like an attacker. hackthebox is an excellent platform to learn that, so I cannot recommend it enough!

An article about hacking is not complete if it lacks a clichéd stock photo! I am searching for proper hacker’s attire now – this was my first find!

Infinite Loop: Theory and Practice Revisited.

I’ve unlocked a new achievement as a blogger, or reached a new milestone as a life-form: a dinosaur telling the same old stories over and over again.

I started drafting a blog post, as I have been doing for a while now: I do it in my mind only, twist and turn it for days or weeks – until I am ready to write it down in one go. Today I wanted to release a post called On Learning (2) or the like. I knew I had written an early post with a similar title, so I expected this to be a loosely related update. But then I checked the old On Learning post: I found not only the same general ideas but the same autobiographical anecdotes I wanted to use now – even in the same order.

In 2014 I had looked back on being both a teacher and a student for the greater part of my professional life, and the patterns were always the same – be the field physics, engineering, or IT security. I had written that post after a major update of our software for analyzing measurement data. This update had required me to acquire new skills, which was a delightful learning experience. I tried to reconcile very different learning modes: ‘book learning’ about so-called theory, including learning for the joy of learning, and solving problems hands-on based on the minimum knowledge absolutely required.

It seems I like to talk about the Joys of Theory a lot – I have meta-posted about theoretical physics in general, more than once about general relativity as an example, and about computer science. I searched for posts about hands-on learning now – there aren’t any. But every post about my own research and work chronicles this hands-on learning in a non-meta, explicit way. These are the posts listed on the heat pump / engineering page, the IT security / control page, and some of the physics posts about the calculations I used in my own simulations.

Now that I am wallowing in nostalgia and scrolling through my old posts, I feel there is one possibly new insight: Whenever I used knowledge to achieve a result that I really needed to get some job done, I think of this knowledge as emerging from hands-on tinkering and from self-study. I once read that many seasoned software developers said the same in a survey about their background: They checked ‘self-taught’ despite having university degrees or professional training.

This holds for the things I had learned theoretically – be it in a classroom or via my morning routine of reading textbooks. I learned about differential equations, thermodynamics, numerical methods, heat pumps, and about object-oriented software development. Yet when I actually have to do all that, it is always like re-learning it in a more pragmatic way – even if the ‘class’ was very ‘applied’, not much time had passed since learning, and I had taken exams. This is even true for the archetype of all self-studied disciplines – hacking. Doing it – like here – white-hat-style 😉 – is always a self-learning exercise, and reading about pentesting and security happens in an alternate universe.

The difference between these learning modes is maybe not only in ‘the applied’ versus ‘the theoretical’ – it is your personal stake in the outcome that matters: Skin In The Game. A project done by a group of students for the final purpose of passing a grade is not equivalent to running this project for your client or for yourself. The point is not whether the student project is done for a real-life client, or whether the task as such makes sense in the real world. The difference is whether it feels like an exercise in a gamified system, or whether the result will matter financially / ‘existentially’ – as when you try to impress your future client or employer, or use the project results to build your own business. The major difference is in weighing risks and rewards, efforts and long-term consequences. Even ‘applied hacking’ in Capture-the-Flag-like contests is different from real-life pentesting. It makes all the difference if you just lose ‘points’ and miss the ‘flag’, or if you inadvertently take down a production system and violate your contract.

So I wonder if the Joy of Theoretical Learning is to some extent due to its risk-free nature. As long as you just learn about all those super interesting things because you want to know – it is innocent play. Only when you finally touch something in the real world, and touching things has hard consequences – only then do you know if you are truly ‘interested enough’.

Sorry, but I told you I would post stream-of-consciousness-style now and then 🙂

I think it is OK to re-use the image of my beloved pre-1900 physics book I used in the 2014 post:

The Orphaned Internet Domain Risk

I have clicked on company websites of social media acquaintances, and something is not right: Slight errors in formatting, encoding errors for special German characters.

Then I notice that some of the pages contain links to other websites that advertise products in a spammy way. However, the links to the spammy sites are embedded in these alleged company websites in a subtle way: using the (nearly) correct layout, or embedding the link in a ‘news article’ that also contains legit product information – content really related to the internet domain I am visiting.

Looking up whois information tells me that these internet domains are not owned by my friends anymore – consistent with what they actually say on their social media profiles. So how come they ‘have given’ their former domains to spammers? They did not, and they did not need to: Spammers simply need to watch out for expired domains, seize them when they become available – and then reconstruct the former legit content from public archives and interleave it with their spammy messages.
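If you want to catch this before it happens to a domain you care about, the watch-out-for-expiry part is easy to automate. A minimal sketch, assuming the third-party python-whois package and a placeholder domain list – any whois client would do just as well:

```python
import datetime
import whois  # third-party package: python-whois

def days_until_expiry(domain: str):
    record = whois.whois(domain)
    expiry = record.expiration_date
    if isinstance(expiry, list):  # some registries return several dates
        expiry = min(expiry)
    if expiry is None:  # whois data can be incomplete
        return None
    return (expiry - datetime.datetime.now()).days

for domain in ["example.com"]:  # placeholder: your own or your friends' domains
    days = days_until_expiry(domain)
    if days is None:
        print(f"{domain}: no expiry date in whois data")
    elif days < 60:
        print(f"{domain}: expires in {days} days - renew, or let go deliberately!")
    else:
        print(f"{domain}: {days} days left")
```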

The former content of legitimate sites is often available on the web archive – you can even query it programmatically, as sketched below the list. Here is the timeline of one of the sites I checked:

Clicking on the details shows:

  • Last display of legit content in 2008.
  • In 2012 and 2013 a generic message from the hosting provider was displayed: This site has been registered by one of our clients
  • After that we see mainly 403 Forbidden errors – so the spammers don’t want their site to be archived – but at one point a screen capture of the spammy site was taken.
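As an aside, such a timeline can be reconstructed programmatically. A minimal sketch against the Wayback Machine’s public availability API – the domain and date below are placeholders:

```python
import json
import urllib.parse
import urllib.request

def closest_snapshot(url: str, timestamp: str = "20080101"):
    """Return the archived snapshot closest to the given YYYYMMDD timestamp."""
    query = urllib.parse.urlencode({"url": url, "timestamp": timestamp})
    api = "https://archive.org/wayback/available?" + query
    with urllib.request.urlopen(api) as resp:
        data = json.load(resp)
    return data.get("archived_snapshots", {}).get("closest")

snap = closest_snapshot("example.com")  # placeholder domain
if snap:
    print(snap["timestamp"], snap["url"])  # when and where the content was archived
else:
    print("No snapshot found.")
```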

The new site shows the name of the former owner at the bottom, but an unobtrusive link has been added, indicating the new owner – a US-based marketing and SEO consultancy.

So my takeaway is: If you ever feel like decluttering your websites and freeing yourself of your useless digital possessions – and possibly also of social media accounts – think twice: As soon as your domain or name is available, somebody might take it, and re-use and exploit your former content, and possibly your former reputation, for promoting their spammy stuff in a shady way.

This happened a while ago, but I know now it can get much worse: Why only distribute marketing spam if you can distribute malware through channels still considered trusted? In this blog post Malwarebytes raises the question whether such practices are illegal or not – it seems that question is not straightforward to answer.

Visitors do not even have to visit the abandoned domain explicitly to be served malware. I have seen some reports of abandoned embedded plug-ins turned into malicious zombies. Silly example: If you embed your latest tweets, Twitter goes out of business, and its domains are seized by spammers – your Follow Me icon might help to spread malware.

If a legit site runs third-party code, it needs to trust the authors of this code. For example, Equifax’s website recently served spyware:

… the problem stemmed from a “third-party vendor that Equifax uses to collect website performance data,” and that “the vendor’s code running on an Equifax Web site was serving malicious content.”

So if you run any plug-ins, embedded widgets, or the like – better check regularly whether the originating domain is still run by the expected owner, monitor your vendors; and don’t run code you do not absolutely need in the first place. Don’t use embedded active badges if a simple link to your profile would do.
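Even the first step – knowing which third-party domains your pages pull content from – is worth automating. A minimal, standard-library-only sketch; the page URL is a placeholder:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse
import urllib.request

class EmbeddedSources(HTMLParser):
    """Collects the domains of script/iframe/img tags with an external src."""
    def __init__(self):
        super().__init__()
        self.domains = set()

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "iframe", "img"):
            src = dict(attrs).get("src")
            if src and urlparse(src).netloc:
                self.domains.add(urlparse(src).netloc)

page = "https://your-blog.example"  # placeholder URL: your own site
with urllib.request.urlopen(page) as resp:
    html = resp.read().decode("utf-8", errors="replace")

parser = EmbeddedSources()
parser.feed(html)
for domain in sorted(parser.domains - {urlparse(page).netloc}):
    print("third-party source:", domain)  # check ownership of each one
```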

Do a painful, boring inventory and assessment often – then you will notice how much work it is to manage these ‘partners’, and you will rather stay away from signing up and registering for too many services.

Update 2017-10-25: And as we speak, we learn about another example – snatching a domain used for a Dell backup software, preinstalled on PCs.

Give the ‘Thing’ a Subnet of Its Own!

To my surprise, the most clicked post ever on this blog is this:

Network Sniffing for Everyone:
Getting to Know Your Things (As in Internet of Things)

… a step-by-step guide to sniffing the network traffic of your ‘things’ contacting their mothership, plus a brief introduction to networking. I wanted to show how you can trace your networked devices’ traffic without any specialized equipment, being creative with what many users might already have: turning a Windows PC into a router with Internet Connection Sharing.

Recently, an army of captured things took down part of the internet, and this reminded me of that post. No, this is not one more gloomy article about the Internet of Things. I just needed to use this Internet Connection Sharing feature for the very purpose it was actually invented for.

The Chief Engineer had finally set up the perfect test lab for programming and testing freely programmable UVR16x2 control systems (successor of the UVR1611). But this test lab was in a spot not equipped with wired ethernet, and the control unit’s data logger and ethernet gateway – the so-called CMI (Control and Monitoring Interface) – only has a LAN interface and no WLAN.

So an ages-old test laptop was revived to serve as a router (improving its ecological footprint in passing): This notebook connects to the standard ‘office’ network via WLAN; this wireless connection is thus the internet connection that can be shared with a device connected to the notebook’s LAN interface, e.g. via a cross-over cable. As explained in detail in the older article, the router-laptop then allows for sniffing the traffic – but above all it allows the ‘thing’ to connect to the internet at all.

This is the setup:

Using a notebook with Internet Connection Sharing enabled as a router to connect the CMI (UVR16x2's ethernet gateway) to the internet

The router-laptop is automatically configured with IP address 192.168.137.1 and hands out addresses in the 192.168.137.x network as a DHCP server, while using an IP address provided by the internet router for its WLAN adapter (indicated here as the commonly used 192.168.0.x addresses). If Windows 10 is used on the router-notebook, you might need to re-enable ICS after a reboot.
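Since all of the thing’s traffic passes through the router-laptop, it can be inspected right there. The older article covers sniffing with dedicated tools; just as an illustration, here is a minimal sketch using the third-party scapy package, assuming the 192.168.137.0/24 ICS subnet described above (on Windows, scapy additionally needs Npcap and administrator rights):

```python
from scapy.all import sniff  # third-party package: scapy

def show(pkt):
    # One line per packet: source, destination, protocol summary.
    print(pkt.summary())

# BPF filter: only traffic to or from the private 'thing' subnet.
sniff(filter="net 192.168.137.0/24", prn=show, store=False)
```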

The control unit is connected to the CMI via CAN bus – so the combination of test laptop, CMI, and UVR16x2 control unit is similar to the setup used for investigating CAN monitoring recently.

The CMI ‘thing’ is tucked away in a private subnet dedicated to it, and it cannot be accessed directly from any ‘Office PC’ – except from the router PC itself. A standard office PC (green) effectively has to access the CMI via the same ‘cloud’ route as an internet user (red). This makes the setup a realistic test for future remote support – when the CMI plus control unit has been shipped to its proud owner and is configured on the final local network.

The private-subnet setup is also a simple workaround in case several things cannot get along well with each other: For example, an internet TV service flooded the CMI’s predecessor BL-NET with packets that were hard to digest – so BL-NET refused to work without yet another reboot. Putting the sensitive device in a private subnet – using a ‘spare part’ router – solved the problem.

The Chief Engineer's quiet test lab for testing and programming control units

What I Never Wanted to Know about Security but Found Extremely Entertaining to Read

This is in praise of Peter Gutmann’s book draft Engineering Security, and the title is inspired by his talk Everything You Never Wanted to Know about PKI but were Forced to Find Out.

Chances are high that any non-geek reader is already intimidated by the acronym PKI – when sharing the links above on LinkedIn, I was asked: Oh. Wait. What the %&$%^ is PKI??

This reaction is spot-on, as this post is more about usability and end-users’ perception of technology – despite, or because of, my having worked for more than 10 years at the geeky end of Public Key Infrastructure. In summary, PKI is a bunch (actually a ton) of standards that should allow for creating the electronic counterparts of signatures, of passports, and of transferring data in locked cabinets. Basically, it should solve all security issues.

The following images from Peter Gutmann’s book might invoke some memories.

Security warnings designed by geeks look like this:

Peter Gutmann, Engineering Security, certificate warning - What the developers wrote

Peter Gutmann, Engineering Security, book draft, available at https://www.cs.auckland.ac.nz/~pgut001/pubs/book.pdf, p.167. Also shown in Things that Make us Stupid, https://www.cs.auckland.ac.nz/~pgut001/pubs/stupid.pdf, p.3.

As a normal user, you might rather see this:

Peter Gutmann, Engineering Security, certificate warning - What the user sees

Peter Gutmann, Engineering Security, book draft, available at https://www.cs.auckland.ac.nz/~pgut001/pubs/book.pdf, p.168.

The funny thing was that I picked this book to take a break from books on psychology and return to the geeky stuff – and then I was back to all kinds of psychological biases and, for example, Kahneman’s Prospect Theory.

What I appreciate in particular is the diverse range of systems and technologies considered – Apple, Android, UNIX, Microsoft, …, all evaluated agnostically – plus the diverse range of interdisciplinary research taken into account. Now that’s what I call true erudition with a modern touch. Above all, I enjoyed the conversational and irreverent tone – never before have I started reading a book for technical reasons and then been unable to put it down because it was so entertaining.

My personal summary – which resonates a lot with my experience – is:
In trying to make systems more secure, you might not only make them more unusable and obnoxious, but also more insecure.

A concise summary is also given in Gutmann’s talk Things that Make Us Stupid. I liked in particular the ignition key as a real-world example of a device that is smart and easy to use, providing security as a by-product – very different from the interfaces of ‘security software’.

Peter Gutmann is not at all siding with ‘experts’ who always chide end-users for being lazy and dumb – for writing passwords down and sticking the post-its on their screens – and who state that all we need is more training and user awareness. Normal users use systems to get their job done, and they apply risk management in an intuitive way: Should I waste time following an obnoxious policy, or should I try to pass that hurdle as quickly as possible and do what I am actually paid for?

Geeks are weird – that’s a quote from the lecture slides linked above. Since Peter Gutmann is an academic computer scientist and obviously a down-to-earth practitioner with ample hands-on experience – which would definitely qualify him as a Geek God – his critique is even more convincing. In the book he quotes psychological research which proves that geeks really do think differently (as per standardized testing of personality types). Geeks constitute a minority of people (7%) who tend to take decisions – such as Should I click that pop-up? – in a ‘rational’ manner, as the simple and mostly wrong theories on decision making have proposed. One example Gutmann uses is testing for a basic understanding of logic, such as: Does ‘All X are Y’ imply ‘Some X are Y’? Across cultures the majority of people think that this is wrong.

Normal people – and I think also geeks when they don’t operate in geek mode, e.g. in the wild, not in their programmer’s cave – fall for many so-called fallacies and biases.

Our intuitive decision-making engine runs on autopilot, and we get conditioned to click away EULAs, to next-next-finish the dreaded install wizards, and to click away pop-ups, including the warnings. As users we don’t generate testable hypotheses or calculate risks but act unconsciously, based on our experience of what has worked in the past – and usually the click-away-anything approach works just fine. You would need US-navy-style constant drilling in order to be alert enough not to fall for those fallacies. This does exactly apply to anonymous end users using their home PCs to do online banking.

Security indicators like padlocks and browser address bar colors change with every version of the popular browsers. Not even tech-savvy users are able to tell from those indicators whether they are ‘secure’ now. But what is extremely difficult: Users would need to watch out for the lack of an indicator (that is barely visible even when it is there). And we are – owing to confirmation bias – extremely bad at spotting the negative, the lack of something. Gutmann calls this the Simon Says problem.

It is intriguing to see how biases about what ‘the others’ – the users or the attackers – would do enter technical designs. For example, it is often assumed that a client machine or user that has authenticated itself is more trustworthy – so servers are more vulnerable to a malformed packet sent after successful authentication. In the Stuxnet attack, digitally signed malware (signed with stolen keys) was used – ‘if it’s signed, it has to be secure’.

To make things worse, users are even conditioned toward ‘insecure’ behavior: When banks use all kinds of fancy domain names to market their latest products, lure their users into clicking on links to those fancy sites in e-mails, and have them log on with their banking user accounts via these sites, they train users to fall for phishing e-mails – despite the fact that the same e-mails half-heartedly warn about clicking arbitrary links in e-mails.

I believe that in relation to systems like PKI – which require you to run some intricate procedure only every few years (these are called ceremonies for a reason), but where that procedure is extremely critical – admins should also be considered ‘users’.

I have spent many hours discussing proposed security features like Passwords need to be impossible to remember and never written down with people whose job it is to audit, draft policies, and read articles about what Gutmann calls conference-paper attacks all day. These are not the people who have to run systems, deal with helpdesk calls or costs, or handle requests from VIP users such as top-level managers – who, on the one hand, are extremely paranoid about system administrators sniffing their e-mails, yet on the other hand need instant 24/7 support with recovery of encrypted e-mails. (This should be given a name, like the Top Managers’ Paranoia Paradox.)

As a disclaimer I’d like to add that I don’t underestimate cyber security threats, risk management, policies etc. It is probably the current media hype on governments spying on us that makes me advocate a contrarian view.

I could back this up by tons of stories, many of them too good to be made up (but unfortunately NDA-ed): Security geeks – in the sense of ‘designers’ and ‘policy authors’ – often underestimate the time and effort required to run their solutions on a daily basis. It is often the so-called trivial and simple things that go wrong, such as: The documentation of that intricate process to be run every X years cannot be found, or the only employee who really knew about the interdependencies is long gone, or allegedly simple logistics go wrong (Now we are locked in the secret room to run the key ceremony… BTW, did anybody think of having the media ready to install the operating system on that highly secure, isolated machine?).

A large European PKI setup failed (it made headlines) because the sacred key of a root certification authority had been destroyed – which is the expected behavior of so-called Hardware Security Modules when they are tampered with, or at least when their sensors say so – and there was no backup. The companies running the project and running operations blamed each other.

I am not quoting this to make fun of others – I have made enough blunders myself. The typical response to such stories is: Projects or operations have been badly managed, and you just need to throw more people and money at them to run secure systems in a robust and reliable way. This might be true, but it simply does not reflect the budgets, time constraints, and lack of human resources that typical corporate IT departments have to deal with.

There is often a very real, palpable risk of trading off business continuity and availability (that is: safety) for security.

Again, I don’t want to downplay the risks associated with broken algorithms and the NSA reading our e-mail. But as Peter Gutmann points out, cryptography is the last thing an attacker would target (even if a conference-paper attack has shown it is broken) – the implementation of cryptography rather guides attackers along the lines of where not to attack. Just consider the spectacular recent ‘hack’ of a prestigious one-letter Twitter account: the attacker actually blackmailed the user after having gained control over the user’s custom domain through social engineering – most likely of underpaid call-center agents, who had to face the dilemma of meeting their numbers in terms of customer satisfaction versus following the security awareness training they might have had.

Needless to say, encryption, smart cards, PKI etc. would not have prevented that type of attack.

Peter Gutmann says about himself that he is throwing rocks at PKIs, and I believe you can illustrate a particularly big problem using a perfect real-life metaphor: Digital certificates are like passports or driver’s licenses for users – signed by a trusted agency.

Now consider the following: A user might commit a crime, and his driver’s license is seized. PKI’s equivalent of that seizure is to have the issuing agency regularly publish a blacklist, listing all the bad guys. Police officers on the road need access to that blacklist in order to check drivers’ legitimacy. What happens if a user isn’t blacklisted but the blacklist publishing service is not available? The standard makes this check optional (as it does many other things, which is the norm when an ancient standard is retrofitted with security features), but let’s assume the police app follows the recommendation of what it SHOULD do. If the list is unavailable, the user is considered an alleged criminal and has to exit the car.

You could also imagine something similar happening to train riders who have printed out an online ticket that cannot be validated (e.g. distinguished from a forgery) by the conductor due to a failure in the train’s IT systems.

Any ’emergency’ / ‘incident’ related to digital certificates I was ever called upon to support was related to false negatives: blocking users from doing what they needed to do, because of missing, misconfigured, or (temporarily) unavailable certificate revocation lists (CRLs). The most important question in PKI planning is typically how to work around or prevent inaccessible CRLs. I am aware of how petty this problem may appear to readers – what’s the big deal in monitoring a web server? But have you ever noticed how many alerts (e.g. via SMS) a typical administrator gets – and how many of them are false alarms? When I ask what will happen if the PKI / the CRL signing / the web server breaks on Dec. 24 at 11:30 (in a European country), I am typically told that we need to plan for at least some days until recovery. This means that the revocation information on the blacklist will be stale, too, as CRLs can be cached for performance reasons.
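To make the planning question concrete: every CRL-checking client contains, somewhere, a decision like the one sketched below. This is a minimal illustration, not any product’s actual logic – it assumes the third-party cryptography package, and the URL and serial number are placeholders. A real client would also verify the CRL’s signature and its nextUpdate time:

```python
import urllib.request
from cryptography import x509  # third-party package: cryptography

CRL_URL = "http://pki.example.com/root-ca.crl"  # placeholder

def is_revoked(serial: int, fail_closed: bool = True) -> bool:
    try:
        with urllib.request.urlopen(CRL_URL, timeout=5) as resp:
            crl = x509.load_der_x509_crl(resp.read())
    except Exception:
        # The planning decision in a nutshell: treat an unreachable blacklist
        # as 'block everyone' (secure, but locks out 10,000 employees) or as
        # 'nobody is revoked' (business continues, revocation is moot).
        return fail_closed
    return crl.get_revoked_certificate_by_serial_number(serial) is not None

print(is_revoked(0x1234))  # placeholder serial number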

As you can imagine, most corporations rather tend to follow the reasonable approach of putting business continuity over security: They want to make sure that a glitch in the web server hosting those blacklists will not stop 10,000 employees from accessing the wireless LAN, for example. Of course any weird standard can be worked around, given infinite resources. The point I wanted to make is that these standards have been designed with something totally different in mind – by PKI Theologians in the 1980s.

Admittedly though, digital certificates and cryptography make for a great playground for geeks. I think I was a PKI theologian myself many years ago, until I morphed into what I call – tongue-in-cheek – an anti-security consultant: trying to help users (and admins) to keep on working despite new security features. I often advocated against using certificates and proposed alternative approaches, boiling the potential PKI project down to a few hours of work – against the typical consultant’s mantra of trying to make yourself indispensable in long-term projects and of designing black boxes the client will never be able to operate on their own. Not only because of the PKI overhead, but because the alternatives were just as secure – only not as hyped.

So in summary I am recommending Peter Gutmann’s terrific resources (check out his Crypto Tutorial, too!) to anybody who is torn between geek enthusiasm for some obscure technology and questioning its value nonetheless.

Rusty Padlock

No post on PKI, certificates, and keys would be complete without an image like this. I found the rusty one particularly apt here. (Wikimedia, user Garretttaggs)

Cyber Security Satire?

I am a science fiction fan. In particular, I am a fan of movies featuring Those Lonesome Nerds who are capable of controlling this planet’s critical infrastructure – from their gloomy basements.

But is it science fiction? In the year Die Hard 4.0 was released, a classified video was recorded – showing an electrical generator dying from a cyber attack.

Fortunately, “Aurora” was just a test attack against a replica of a power plant:

Now some of you know that the Subversive El(k)ement calls herself a Dilettante Science Blogger on Twitter.

But here is an epic story to be unearthed, and it would take a novelist to do it justice. I can imagine the long-winded narrative unfolding – of people who cannot use their showers or toilets anymore after the blackout. Of sinister hackers sending their evil commands into the command centers of that intricate blood circulation of our society we call The Power Grid. Of course they use smart meters to start their attack.

Unfortunately, my feeble attempts at dipping my toes into novel writing were crushed before I even got started: This novel does exist already – in German. I will inform you when it has been translated – either into an English novel or directly into a Hollywood movie script.

As I am probably not capable of writing a serious thriller anyway, I would rather go for dark satire.

Douglas Adams did cover so many technologies in The Hitchhiker’s Guide to the Galaxy – existing and imagined ones – but he did not elaborate much on intergalactic power transmission. So here is room for satire.

What if our Most Critical Infrastructure were attacked not by sinister hacker nerds but by our smart systems’ smartness – er, dumbness? (Or their operators’.)

(To all you silent readers and idea grabbers out there: Don’t underestimate the cyber technology I have built into this mostly harmless wordpress.com blog: I know all of you who are reading this, and if you are going to exploit this idea instead of me, I will time-travel back and forth and ruin your online reputation.)

That being said, I start crafting the plot:

As Adams probably drew his inspiration from his encounters with corporations and bureaucracy when describing the Vogons and InfiniDim Enterprises, I will extrapolate my cyber security nightmare from an anecdote:

Consider a programmer (a geek. Sorry for the redundant information!) trying to test his code. (Sorry for the gender stereotype. As a geekess I am allowed to do this. It could be a female geek also!)

The geek’s code should send messages to other computers in a Windows domain. ‘Domain’ is a technical term here, not some geeky reference to the Dominion or the like. He is using net send. Generation Y-ers and other tablet and smartphone freaks: This is like social media status message junk, lacking images.

But our geek protagonist makes a small mistake: He does not send the test command to his test computer only – but to ‘EUROPE’. This nearly refers to the whole continent; actually, it addresses all computers in all European subsidiaries of a true Virtual Cyber Empire.

Fortunately, modern IT networks are built on nearly AI-powered devices called switches, which made the cyber attack peter out at the borders of That Large City.

How could we turn this into a story about an attack on the power grid, adding your typical ignorant, non-tech, sensationalist writer’s clichéd ideas:

  1. A humanoid life-form (or a flawed android testing his emotions chip) is tinkering with a sort of Hello World! command – sent to The Whole World, literally.
  2. The attack is just a glitch, an unfortunate concatenation of events, launched in an unrelated part of cyberspace – e.g. by a command displayed on a hacker’s screen in a YouTube video. Or it was launched from the gas grid.
  3. The Command of Death spreads pandemically over the continent, replicating itself more efficiently than cute cat videos on social networks.
Circuit Breaker 115 kV

Any pop-sci article related to the power grid needs to show off some infrastructure like that (Circuit Breaker, Wikimedia)

I contacted my agent immediately.

Shattering my enthusiasm, she told me:

This is not science-fiction – this is simply boring. Something like that happened recently in a small country in the middle of Europe.

According to this country’s news, a major power blackout had barely been avoided in May 2013. Engineers needed to control the delicate balance of power supply and demand manually, as the power grid’s control system had been flooded with gibberish – data that could not be interpreted.

The alleged originator of these commands was a gas transmission system operator in a neighboring country. This company was testing a new control system and tried to poll all of its meters for a status update. Somehow the command found its way from the gas grid to the European power grid and was replicated.

_________________________

Update – bonus material – making of: For the first time I felt the need to tell this story twice – in German and in English. This is not a translation, but rather different versions in parallel universes. German-speaking readers: this is the German instance of the post.