Infinite Loop: Theory and Practice Revisited.

I’ve unlocked a new achievement as a blogger, or reached a new milestone as a life-form: I have become a dinosaur telling the same old stories over and over again.

I started drafting a blog post, as I have been doing for a while now: I draft it in my mind only, twist and turn it for days or weeks – until I am ready to write it down in one go. Today I wanted to release a post called On Learning (2) or the like. I knew I had written an early post with a similar title, so I expected this to be a loosely related update. But then I checked the old On Learning post: I found not only the same general ideas but the same autobiographical anecdotes I wanted to use now – even in the same order.

In 2014 I had looked back on being both a teacher and a student for the greater part of my professional life, and the patterns were always the same – be the field physics, engineering, or IT security. I had written that post after a major update of our software for analyzing measurement data. This update had required me to acquire new skills, which was a delightful learning experience. I tried to reconcile very different learning modes: ‘book learning’ about so-called theory, including learning for the joy of learning, and solving problems hands-on based on the minimum knowledge absolutely required.

It seems I like to talk about The Joys of Theory a lot – I have meta-posted about theoretical physics in general (more than once), about general relativity as an example, and about computer science. I searched for posts about hands-on learning now – there aren’t any. But every post about my own research and work chronicles this hands-on learning in a non-meta, explicit way. These are the posts listed on the heat pump / engineering page, the IT security / control page, and some of the physics posts about the calculations I used in my own simulations.

Now that I am wallowing in nostalgia and scrolling through my old posts, I feel there is one possibly new insight: Whenever I used knowledge to achieve a result that I really needed to get some job done, I think of this knowledge as emerging from hands-on tinkering and from self-study. I once read that many seasoned software developers said the same in a survey about their background: they checked ‘self-taught’ despite having university degrees or professional training.

This holds for the things I had learned theoretically – be it in a classroom or via my morning routine of reading textbooks. I learned about differential equations, thermodynamics, numerical methods, heat pumps, and about object-oriented software development. Yet when I actually have to do all that, it is always like re-learning it in a more pragmatic way, even if the ‘class’ was very ‘applied’, not much time had passed since learning, and I had taken exams. This is even true for the archetype of all self-studied disciplines – hacking. Doing it – like here – white-hat-style 😉 – is always a self-learning exercise, and reading about pentesting and security happens in an alternate universe.

The difference between these learning modes lies maybe not only in ‘the applied’ versus ‘the theoretical’ – it is your personal stake in the outcome that matters: Skin In The Game. A project done by a group of students for the final purpose of getting a grade is not equivalent to running this project for your client or for yourself. The point is not whether the student project is done for a real-life client, or whether the task as such makes sense in the real world. The difference is whether it feels like an exercise in a gamified system, or whether the result will matter financially / ‘existentially’ – as when you try to impress your future client or employer, or use the project results to build your own business. The major difference is in weighing risks and rewards, efforts and long-term consequences. Even ‘applied hacking’ in Capture-the-Flag-like contests is different from real-life pentesting. It makes all the difference if you just lose ‘points’ and miss the ‘flag’, or if you inadvertently take down a production system and violate your contract.

So I wonder if the Joy of Theoretical Learning is to some extent due to its risk-free nature. As long as you just learn about all those super interesting things because you want to know – it is innocent play. Only when you finally touch something in the real world, and touching things has hard consequences – only then do you know if you are truly ‘interested enough’.

Sorry, but I told you I would post stream-of-consciousness-style now and then 🙂

I think it is OK to re-use the image of my beloved pre-1900 physics book I used in the 2014 post:

Where Are the Files? [Winsol – UVR16x2]

Recently somebody asked me where the log files are stored. This question is more interesting than it seems.

We are using the freely programmable controller UVR16x2 (and its predecessor UVR1611) …

… and their Control and Monitoring Interface – CMI: The CMI is a data logger and runs a web server. It logs data from the controllers (and other devices) via CAN bus – I have demonstrated this in a contrived example recently, and described the whole setup in this older post.

IT / smart home nerds asked me why there are two ‘boxes’ when other solutions use a ‘single box’ as both controller and logger. I believe separating these functions is safer and more secure: A logger / web server should not be vital to run the controller, and any issues with these auxiliary components must not impact the controller’s core functions.

Log files are stored on the CMI in a proprietary format, and they can be retrieved via HTTP using the software Winsol. Winsol lets you visualize data for one or more days, zoom in, define views etc. – and data can be exported as CSV files. This is the tool we use for reverse engineering hydraulics and control logic (German blog post about remote hydraulics surgery):

In the latest versions of Winsol, log files are per default stored in the user’s profile on Windows:
C:\Users\[Username]\Documents\Technische Alternative\Winsol

I had never paid much attention to this; I had always changed that path in the configuration to make backup and automation easier. The current question about the log files’ location was actually about how I managed to make different users work with the same log files.

The answer might not be obvious because of the historical location of the log files:

Until some version of Winsol in use in 2017, log files were stored in the Program Files folder – or at least Winsol tried to use that folder. Windows does not allow this anymore for security reasons.

If Winsol is upgraded from an older version, settings might be preserved. I did my tests with Winsol 2.07 upgraded from an earlier version. I am a bit vague about versions as I did not test different upgrade paths in detail. My point is: users of control systems’ software tend to be conservative when it comes to changing a running system – an older ‘logging PC’ with an older or upgraded version of Winsol is not an unlikely setup.

I started debugging on Windows 10 with the new security feature Controlled Folder Access enabled. CFA, of course, did not know Winsol and considered it an unfriendly app … it had to be white-listed.

Then I was curious about the default log file folders, and I saw this:

In the Winsol file picker dialogue (to the right) the log folders seem to be in the Program Files folder:
C:\Program Files\Technische Alternative\Winsol\LogX
But in Windows Explorer (to the left) there are no log files at that location.

What does Microsoft Sysinternals Process Monitor say?

There is a Reparse Point, and the file access is redirected to the folder:
C:\Users\[User]\AppData\Local\VirtualStore\Program Files\Technische Alternative\Winsol
Selecting this folder directly in Windows Explorer shows the missing files:
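
If you want to check quickly which files ended up where, you can compare both folders – a minimal sketch (Python; assuming the default paths shown above, adjust to your user and version):

```python
import os

# Legacy writes to protected folders may be silently redirected ('virtualized')
# to a per-user VirtualStore folder. List both locations side by side.
app_path = r"C:\Program Files\Technische Alternative\Winsol\LogX"
virtual_path = os.path.join(
    os.environ["LOCALAPPDATA"],         # C:\Users\[User]\AppData\Local
    "VirtualStore",
    os.path.relpath(app_path, "C:\\"),  # Program Files\Technische Alternative\Winsol\LogX
)

for label, path in (("Program Files", app_path), ("VirtualStore", virtual_path)):
    files = os.listdir(path) if os.path.isdir(path) else []
    print(f"{label}: {path} ({len(files)} files)")
```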

This location can be re-configured in Winsol to allow different users to access the same files (Disclaimer: Perhaps unsupported by the vendor…)

And there are also some truly user-specific configuration files in the user’s profile, in
C:\Users\[User]\AppData\Roaming\Technische Alternative\Winsol

Winsol.xml stores, for example, the list of ‘clients’ (logging profiles) that are included in automated processing of log files, and cookie.txt is the logon cookie for access to the online logging portal provided by Technische Alternative. If you absolutely want to switch Windows users *and* switch logging profiles often *and* keep those in sync, you have to tinker with Winsol.xml, e.g. by editing it with a script (Disclaimer again: Unlikely to be a supported way of doing things ;-))
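
A rough sketch of such a script (Python; the real schema of Winsol.xml is not documented here – the element name ‘Client’ below is purely an assumption, so inspect your own file first):

```python
import xml.etree.ElementTree as ET
from pathlib import Path

# Hypothetical sketch - adjust element names after inspecting your own Winsol.xml.
xml_path = Path.home() / "AppData/Roaming/Technische Alternative/Winsol/Winsol.xml"
tree = ET.parse(xml_path)

# Assumption: one element per logging profile ('client').
for client in tree.getroot().iter("Client"):
    print("Profile found:", client.attrib)
    # ... modify the profile list here before switching users ...

tree.write(xml_path, encoding="utf-8", xml_declaration=True)
```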

As a summary, I describe the steps required to migrate Winsol’s configuration to a new PC and prepare it for usage by different users (a script sketch follows the list).

  • Install the latest version of Winsol on the target PC.
  • If you use Controlled Folder Access on Windows 10: Exempt Winsol as a friendly app.
  • Copy the contents of C:\Users\[User]\AppData\Roaming\Technische Alternative\Winsol from the user’s profile on the old machine to the new machine (user-specific config files).
  • If the log file folder shows up at a different path on the two machines – for example when using the same folder via a network share – edit the path in Winsol.xml or configure it in General Settings in Winsol.
  • Copy your existing log data to this new path. LogX contains the main log files, Infosol contains clients’ data. The logging configuration for each client, e.g. the IP address or portal name of the logger, is included in the setup.xml file in the root of each client’s folder.
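
If you migrate machines more than once, the copy steps can be scripted. A minimal sketch (Python 3.8+; the backup folder layout and the data path D:\WinsolData are just examples for illustration):

```python
import os
import shutil

# Source: a backup of the old machine's Winsol folders (layout is an example).
backup = r"E:\winsol-backup"

# Step 1: user-specific config files go back into the roaming profile.
roaming = os.path.join(os.environ["APPDATA"], "Technische Alternative", "Winsol")
shutil.copytree(os.path.join(backup, "Roaming"), roaming, dirs_exist_ok=True)

# Step 2: log data (LogX, Infosol incl. each client's setup.xml) go to the new path.
data_path = r"D:\WinsolData"
for folder in ("LogX", "Infosol"):
    shutil.copytree(os.path.join(backup, folder),
                    os.path.join(data_path, folder), dirs_exist_ok=True)
```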

Note: If you skip some Winsol versions when migrating/upgrading, the structure of the files might have changed – be careful! The last time that happened was at the end of 2016, and Data Kraken had to re-configure some tentacles.

Bots, Like This! I am an Ardent Fan of HTTPS and Certificates!

This is an experiment in Machine Learning, Big Data, Artificial Intelligence, whatever.

But I need a proper digression first.

Last autumn, I turned my back on social media and went offline for a few days.

There, in that magical place, the real world was offline as well. A history-of-physics museum had to be opened, just for us.

The sign says: Please call XY and we open immediately.

Scientific instruments of the past have a strange appeal – steampunk-y, artisanal, timeless. But I could not have enjoyed it had I not locked down the gates of my social media fortresses before.

Last year ‘improved’ bots and spammers seem to have invaded WordPress. Did their vigilant spam filters feel a disturbance of the Force? My blog had been open for anonymous comments for more than 5 years, but I finally had to restrict access. Since last year every commenter needs to have one manually approved comment.

But how to get attention if I block the comments? Spam your links by Liking other blogs. Anticipate that clickers will be very dedicated: Clicking on your icon only takes the viewer to your Gravatar profile; the profile shows a link to the actual spammy website.

And how to pick suitable – likeable – target blog posts? Use your sophisticated artificial intelligence: If you want to sell SSL certificates (!) pick articles that contain key words like SSL or domain – like this one. BTW, I take the ads for acne treatment personally. Please stick to marketing SSL certificates. Especially in the era of free certificates provided by Let’s Encrypt.

Please use a different image for your different gravatars. You have done rather well when spam-liking the post on my domains and HTTPS, but what was on your mind when you found my post on hijacking orphaned domains for malvertizing?

Did statements like this attract the army of bots?

… some of the pages contain links to other websites that advertize products in a spammy way.

So what do I need to do to make you all like this post? Should I tell you that I have a bunch of internet domains? That I migrated my non-blogs to HTTPS last year? That WordPress migrated blogs to HTTPS some time ago? That they use Let’s Encrypt certificates now, just as the hosting provider of my other websites does?

[Perhaps I should quote ‘SSL’ and ‘TLS’, too.]

Or should I tell you that I once made a fool of myself by publishing my conspiracy theories – about how Google ditched my blog from their index? While I had actually missed that you need to add the HTTPS version as a separate item in Google Webmaster Tools?

So I desperately need help with Search Engine Optimization and Online Marketing. Google shows me ads for their free online marketing courses on Facebook all the time now.

Or I need help with HTTPS (TLS/SSL) – embarrassing, as for many years I did little else than implement Public Key Infrastructures and troubleshoot certificates. I am still debugging all kinds of weird certificate chaining and browser issues. The internet is always a little bit broken, says Sir Tim Berners-Lee.

[Is X.509 certificate a good search term? No, too nerdy, I guess.]

Or maybe you are more interested in my pioneering Search Term Poetry and Spam Poetry. I need new raw material.

Like this! Like this! Like this!

Maybe I am going to even approve a comment and talk to you. It would not be the first time I fail the Turing test on this blog.

Don’t let me down, bots! I count on you!

Update 2018-02-13: So far, this post has been a success. The elkemental blog has not seen this many likes in years … and right now I noticed that the omnipresent suit bot has also started to market solar energy and to like my related posts!

Update 2018-02-18: They have not given up yet – we welcome another batch of bots!


Update 2018-04-01: They have become more subtle – now they spam-like comments – albeit (sadly) not the comments on this article. Too bad I don’t display the comment likes – only I see them in the admin console 😉


The Orphaned Internet Domain Risk

I have clicked on company websites of social media acquaintances, and something is not right: Slight errors in formatting, encoding errors for special German characters.

Then I notice that some of the pages contain links to other websites that advertize products in a spammy way. However, the links to the spammy sites are embedded in these alleged company websites in a subtle way: using the (nearly) correct layout, or embedding the link in a ‘news article’ that also contains legit product information – content really related to the internet domain I am visiting.

Looking up whois information tells me that these internet domains are not owned by my friends anymore – consistent with what they actually say on their social media profiles. So how come they ‘have given’ their former domains to spammers? They did not, and they didn’t need to: Spammers simply need to watch out for expired domains, seize them when they are available – and then reconstruct the former legit content from public archives, and interleave it with their spammy messages.
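
You can check this yourself: whois is just plain text over TCP port 43 (RFC 3912). A minimal sketch for .com domains (the Verisign registry server and its ‘No match for’ reply are specific to .com; other TLDs use other servers):

```python
import socket

def whois(domain, server="whois.verisign-grs.com"):
    """RFC 3912: send the query, read the plain-text reply from port 43."""
    with socket.create_connection((server, 43), timeout=10) as sock:
        sock.sendall((domain + "\r\n").encode())
        reply = b""
        while chunk := sock.recv(4096):
            reply += chunk
    return reply.decode(errors="replace")

answer = whois("example.com")
if "No match for" in answer:
    print("Domain is free - anybody could seize it!")
else:
    print([line.strip() for line in answer.splitlines()
           if "Registry Expiry Date" in line])
```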

The former content of legitimate sites is often available on the web archive. Here is the timeline of one of the sites I checked:

Clicking on the details shows:

  • Last display of legit content in 2008.
  • In 2012 and 2013 a generic message from the hosting provider was displayed: ‘This site has been registered by one of our clients’.
  • After that we see mainly 403 Forbidden errors – so the spammers don’t want their site to be archived – but at one time a screen capture of the spammy site had been taken.

The new site shows the name of the former owner at the bottom, but an unobtrusive link has been added, indicating the new owner – a US-based marketing and SEO consultancy.

So my takeaway is: If you ever feel like decluttering your websites and freeing yourself of your useless digital possessions – and possibly also social media accounts – think twice: As soon as your domain or name is available, somebody might take it, and re-use and exploit your former content and possibly your former reputation for promoting their spammy stuff in a shady way.

This happened a while ago, but I know now it can get much worse: Why only distribute marketing spam if you can distribute malware through channels still considered trusted? In this blog post Malwarebytes raises the question whether such practices are illegal or not – it seems that question is not straightforward to answer.

Visitors do not even have to visit the abandoned domain explicitly to get served malware. I have seen some reports of abandoned embedded plug-ins turned into malicious zombies. Silly example: If you embed your latest tweets, Twitter goes out of business, and its domains are seized by spammers – your Follow Me icon might help to spread malware.

If a legit site runs third-party code, they need to trust the authors of this code. For example, Equifax’ website recently served spyware:

… the problem stemmed from a “third-party vendor that Equifax uses to collect website performance data,” and that “the vendor’s code running on an Equifax Web site was serving malicious content.”

So if you run any plug-ins, embedded widgets or the like – better check regularly if the originating domain is still run by the expected owner; monitor your vendors often, and don’t run code you do not absolutely need in the first place. Don’t use embedded active badges if a simple link to your profile would do.

Do a painful, boring inventory and assessment often – then you will notice how much work it is to manage these ‘partners’, and you will rather stay away from signing up and registering for too many services.
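
As a starting point for such an inventory, you can at least list the third-party hosts a page pulls scripts and widgets from – a small sketch using only Python’s standard library (example.com is a placeholder for your own site):

```python
from html.parser import HTMLParser
from urllib.parse import urlparse
from urllib.request import urlopen

class ThirdPartyFinder(HTMLParser):
    """Collect external hosts referenced by script, iframe, and img tags."""
    def __init__(self, own_host):
        super().__init__()
        self.own_host = own_host
        self.hosts = set()

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "iframe", "img"):
            host = urlparse(dict(attrs).get("src") or "").netloc
            if host and host != self.own_host:
                self.hosts.add(host)

url = "https://example.com/"          # placeholder - your own site goes here
finder = ThirdPartyFinder(urlparse(url).netloc)
finder.feed(urlopen(url).read().decode("utf-8", errors="replace"))
print(sorted(finder.hosts))           # verify each host still has the expected owner
```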

Update 2017-10-25: And as we speak, we learn about another example – snatching a domain used for a Dell backup software, preinstalled on PCs.

Other People Have Lives – I Have Domains

These are just some boring update notifications from the elkemental Webiverse.

The elkement blog has recently celebrated its fifth anniversary, and the punktwissen blog will turn five in December. Time to celebrate this – with new domain names that say exactly what these sites are – the ‘elkement.blog‘ and the ‘punktwissen.blog‘.

Actually, I wanted to get rid of the ads on both blogs, and with the upgrade came a free domain. WordPress has a detailed cookie policy – and I am showing it dutifully using the respective widget, but they have to defer to their partners when it comes to third-party cookies. I only want to worry about research cookies set by Twitter and Facebook, but not by ad providers, and I am also considering removing the social media sharing buttons and the embedded tweets. (Yes, I am thinking about this!)

On the websites under my control I went full dinosaur: the server sends only non-interactive HTML pages to the client, not requiring any client-side activity. I have now got rid of the last half-hearted usage of a session object and the respective cookie, and I have never used any social media buttons or other tracking.

So there are no login data or cookies to protect – and yet I finally migrated all sites to HTTPS.

It is a matter of principle: I of all website owners should use HTTPS. For 15 years I have been planning and building Public Key Infrastructures and troubleshooting X.509 certificates.

But of course I fear Google’s verdict: They announced long ago that HTTPS would be considered a positive ranking signal by their search engine. Pages not using HTTPS will be tagged as insecure with more and more terrifying icons – e.g. HTTP-only pages with login buttons already display a crossed-out padlock in Firefox. In the past years I migrated a lot of PKIs from SHA1 to SHA256 to fight the first wave of Insecure icons.

Finally Let’s Encrypt has started a revolution: Free SSL certificates, based on domain validation only. My hosting provider uses a solution based on Let’s Encrypt – a reverse proxy that does the actual HTTPS. I only had to re-target all my DNS records to the reverse proxy – it would have been very easy had it not been for all my already existing URL rewriting and tweaking and redirecting. I also wanted to keep the option of still using HTTP in the future for tests and special scenarios (like hosting a revocation list), so I decided on redirecting myself in the application(s) instead of using the offered automated redirect. But a code review and clean-up now and then can never hurt 🙂 For large complex sites the migration to HTTPS is anything but easy.
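
For the record, the application-side redirect logic is simple. A minimal sketch as WSGI middleware (Python – my actual sites use a different stack, and the exempted path /crl is just an example; query strings are omitted for brevity):

```python
# Minimal WSGI middleware: redirect HTTP requests to HTTPS,
# except for paths that must stay on HTTP (e.g. a revocation list).
HTTP_ONLY_PREFIXES = ("/crl",)  # example path - adjust to your setup

def https_redirect(app):
    def middleware(environ, start_response):
        scheme = environ.get("wsgi.url_scheme", "http")
        path = environ.get("PATH_INFO", "/")
        if scheme == "http" and not path.startswith(HTTP_ONLY_PREFIXES):
            host = environ.get("HTTP_HOST", environ["SERVER_NAME"])
            start_response("301 Moved Permanently",
                           [("Location", "https://" + host + path)])
            return [b""]
        return app(environ, start_response)
    return middleware
```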

In case I ever forget which domains and host names I use, I just need to check out this list of Subject Alternative Names again:

(And I have another certificate for the ‘test’ host names that I need for testing the sites themselves and also for testing various redirects ;-))
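
Instead of digging through the hosting console, you can also pull the SAN list from the live site – a minimal sketch using Python’s standard ssl module:

```python
import socket
import ssl

def san_list(host, port=443):
    """Fetch the server certificate and return its DNS Subject Alternative Names."""
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    return [value for kind, value in cert.get("subjectAltName", ()) if kind == "DNS"]

print(san_list("elkement.blog"))
```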

WordPress.com also uses Let’s Encrypt (Automattic is a sponsor), and the SAN elkement.blog is lumped together with several other blog names, allegedly the ones which needed new certificates at about the same time.

It will be interesting to see what the consequences for phishing websites will be. Malicious websites will look trusted, as they are issued certificates automatically – but revoking a certificate might provide another method for invalidating a malicious website.

Anyway, special thanks to the WordPress.com Happiness Engineers and support staff at my hosting provider Puaschitz IT. Despite all the nerdiness displayed on this blog I prefer hosted / ‘shared’ solutions when it comes to my own websites because I totally like it when somebody else has to patch the server and deal with attacks. I am an annoying client – with all kinds of special needs and questions – thanks for the great support! 🙂

Give the ‘Thing’ a Subnet of Its Own!

To my surprise, the most clicked post ever on this blog is this:

Network Sniffing for Everyone:
Getting to Know Your Things (As in Internet of Things)

… a step-by-step guide to sniffing the network traffic of your ‘things’ contacting their mothership, plus a brief introduction to networking. I wanted to show how you can trace your networked devices’ traffic without any specialized equipment, by being creative with what many users might already have: turning a Windows PC into a router with Internet Connection Sharing.

Recently, an army of captured things took down part of the internet, and this reminded me of this post. No, this is not one more gloomy article about the Internet of Things. I just needed to use this Internet Sharing feature for the very purpose it was actually invented.

The Chief Engineer had finally set up the perfect test lab for programming and testing freely programmable UVR16x2 control systems (successor of UVR1611). But this test lab was in a spot not equipped with wired ethernet, and the control unit’s data logger and ethernet gateway, the so-called CMI (Control and Monitoring Interface), only has a LAN interface and no WLAN.

So an ages-old test laptop was revived to serve as a router (improving its ecological footprint in passing): This notebook connects to the standard ‘office’ network via WLAN; this wireless connection is the internet connection that can be shared with a device connected to the notebook’s LAN interface, e.g. via a cross-over cable. As explained in detail in the older article, the router-laptop then allows for sniffing the traffic – but above all it allows the ‘thing’ to connect to the internet at all.

This is the setup:

Using a notebook with Internet Connection Sharing enabled as a router to connect the CMI (UVR16x2's ethernet gateway) to the internet

The router laptop is automatically configured with IP address 192.168.137.1 and hands out addresses in the 192.168.137.x network as a DHCP server, while using an IP address provided by the internet router for its WLAN adapter (indicated here as commonly used 192.168.0.x addresses). If Windows 10 is used on the router-notebook, you might need to re-enable ICS after a reboot.
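
If the ‘thing’ does not announce its address, you can probe the ICS subnet for it – a quick sketch, assuming the device answers on HTTP as the CMI’s web server does (sequential probing is slow but keeps the code simple):

```python
import socket

# Probe the Internet Connection Sharing subnet for web servers.
# ICS hands out addresses in 192.168.137.0/24; .1 is the router-laptop itself.
for i in range(2, 255):
    host = f"192.168.137.{i}"
    try:
        with socket.create_connection((host, 80), timeout=0.2):
            print("Web server found at", host)
    except OSError:
        pass
```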

The control unit is connected to the CMI via CAN bus – so the combination of test laptop, CMI, and UVR16x2 control unit is similar to the setup used for investigating CAN monitoring recently.

The CMI ‘thing’ is tucked away in a private subnet dedicated to it, and it cannot be accessed directly from any ‘Office PC’ – except from the router PC itself. A standard office PC (green) effectively has to access the CMI via the same ‘cloud’ route as an Internet User (red). This makes the setup a realistic test for future remote support – when the CMI plus control unit has been shipped to its proud owner and is configured on the final local network.

The private subnet setup is also a simple workaround in case several things cannot get along well with each other: For example, an internet TV service flooded the CMI’s predecessor BL-NET with packets that were hard to digest – so BL-NET refused to work without a further reboot. Putting the sensitive device in a private subnet – using a ‘spare part’ router – solved the problem.

The Chief Engineer's quiet test lab for testing and programming control units

Internet of Things. Yet Another Gloomy Post.

Technically, I work with Things, as in the Internet of Things.

As outlined in Everything as a Service, many formerly ‘dumb’ products – such as heating systems – become part of service offerings. A vital component of the new services is the technical connection of the Thing in your home to that Big Cloud. It seems every energy-related system has got its own Internet Gateway now: Our photovoltaic generator has one, our control unit has one, and the successor of our heat pump would have one, too. If vendors don’t bundle their offerings soon, we’ll end up with substantial electricity costs for powering a lot of separate gateways.

Experts have warned for years that the Internet of Things (IoT) comes with security challenges. Many Things’ owners still keep default or blank passwords, but the most impressive threat, in my opinion, is not hacking individual systems: Easily hacked things can be hijacked to serve as zombie clients in a botnet and launch a joint Distributed Denial of Service attack against a single target. Recently the blog of renowned security reporter Brian Krebs was taken down, most likely as an act of revenge by DDoSers (crime is now offered as a service as well). The attack – a tsunami of more than 600 Gbps – was described as one of the largest the internet had seen so far. Hosting provider OVH was subject to a record-breaking Tbps attack – launched via captured … [cue: hacker movie cliché] … cameras and digital video recorders on the internet.

I am about the millionth blogger ‘reporting’ on this, nothing new here. But the social media news about the DDoS attacks collided with another social media micro-outrage in my mind – about seemingly unrelated IT news: HP had to deal with not-so-positive reporting about its latest printer firmware changes and related policies – when printers started to refuse to work with third-party cartridges. This seems to be a legal issue, or has been presented as such, and I am not interested in that aspect here. What I find interesting is the clash of requirements: After the DDoS attacks many commentators said IoT vendors should be held accountable. They should be forced to update their stuff. On the other hand, end users should remain owners of the IT gadgets they have bought, so the vendor has no right to inflict any policies on them and restrict the usage of their devices.

I can relate to both arguments. One of my main motivations ‘in renewable energy’ or ‘in home automation’ is to make users powerful and knowledgeable owners of their systems. On the other hand, I have been ‘in security’ for a long time. And chasing firmware for IoT devices can be tough for end users.

It is a challenge to walk this tightrope gracefully: A printer may traditionally be considered an item we own, whereas the internet router provided by the telco is theirs. So we can tinker with the printer’s inner workings as much as we want, but we must not touch the router and have to let the telco do their firmware updates. But old-school devices are given more ‘intelligence’ and need to be connected to the internet to provide additional services – like that printer that lets you print from your smartphone easily (yes, but only if you register it at the printer manufacturer’s website first). In addition, our home is not really our castle anymore. Our computers aren’t protected by the telco’s router / firmware all the time; we work in different networks or in public places. All the Things we carry with us, someday smart wearable technology, will check in to different wireless and mobile networks – so their security bugs had better be fixed in time.

If IoT vendors are to be held accountable and update their gadgets, they have to be given the option to do so. But if the device’s host tinkers with it, firmware upgrades might stall. In order to protect themselves from legal prosecution, vendors need to state in contracts that they are determined to push security updates and that you cannot interfere with that. Security can never be enforced by technology only – for a device located at the end user’s premises.

It is a horrible scenario – and I am not sure if I refer to hacking or to the proliferation of even more bureaucracy and over-regulation, which should protect us from hacking but will add more hurdles for would-be start-ups that dare to sell hardware.

Theoretically a vendor should be able to separate the security-relevant features from nice-to-have updates. For example, in a similar way, in smart meters the functions used for metering (subject to metering law) should be separated from ‘features’ – the latter being subject to remote updates while the former must not be. Sources told me that this is not an easy thing to achieve, at least not as easy as presented in the meters’ marketing brochure.

Linksys's Iconic Router

That iconic Linksys router – sold for more than 10 years (and a beloved test device of mine). Still popular because you could use open source firmware. Something that new security policies might seek to prevent.

If hardware security cannot be regulated, there might be more regulation of internet traffic. Internet Service Providers could be held accountable for removing compromised devices from their networks, for example after having notified the end user several times. Or smaller ISPs might be cut off by upstream providers. Somewhere in the chain of service providers we will have to deal with more monitoring and regulation, and in one way or another the playful days of the earlier internet (romanticized with hindsight, maybe) are over.

When I saw Krebs’ site going offline, I wondered what small businesses should do in general: His site is now DDoS-protected by Google’s Project Shield, a service offered to independent journalists and activists, after his former pro-bono host could not deal with the load without affecting paying clients. So one of the Siren Servers I have commented on critically so often came to the rescue! A small provider will not be able to deal with such attacks.

WordPress.com should be well-protected, I guess. I wonder if we will all end up hosting our websites at such major providers only, or ‘blog’ directly to Facebook, Google, or LinkedIn (now part of Microsoft) to be safe. I had advised against self-hosting WordPress myself: If you miss security updates you might jeopardize not only your website, but also others using the same shared web host. If you live on a platform like WordPress or Google, you will complain from time to time about limited options or feature updates you don’t like – but you don’t have to care about security. I compare this to avoiding legal issues as an artisan selling hand-made items via Amazon or the like, in contrast to having to update your own shop’s business logic after every change in international tax law.

I have no conclusion to offer. Whenever I read news these days – on technology, energy, IT, anything in between, The Future in general – I feel reminded of this tension: Between being an independent neutral netizen and being plugged in to an inescapable matrix, maybe beneficial but Borg-like nonetheless.