The Future of Small Business?

If I were asked which technology or ‘innovation’ has had the most profound impact on the way I work, I would answer: working remotely – with clients and systems I hardly ever see.

20 years ago I played with modems, cumbersome dial-in, and Microsoft’s NetMeeting. Few imagined then that remote work would one day be the new normal. Today I am reading about Industry 4.0, 3D printing, the Internet of Things, and how every traditional company has to compete with Data Krakens like Google and Amazon. Everything will be offered as a service, including heating. One consequence: Formerly independent craftsmen become preferred partners or subcontractors of large companies – of vendors of smart heating solutions. Creative engineering is replaced by calling the Big Vendor’s hotline. Human beings cover the last mile that robots or software cannot deal with – yet.

Any sort of customization, consulting, support, and systems integration might be automated in the long run: Clients will use an online configurator and design their systems, and possibly print them out at home. Perhaps someday our clients will print out their heat exchangers from a blueprint generated on Kraken’s website, instead of using our documentation to build them.

Allowing you to work remotely also allows everybody else in the world to do so, and you might face global competition once the barriers of language and culture have been overcome (by using ubiquitous US culture and ‘business English’). Large IT service providers have actually considered turning their consulting and support staff into independent contractors and letting them compete globally – using an online bidding platform. Well-known Data Krakens match clients and freelancers, and I’ve seen several start-ups that aspire to become the next matching Kraken platform for computer / tech support. Clients will simply not find you if you are not on the winning platform. Platform membership becomes as important as having a website or an entry in a business directory.

One seemingly boring and underappreciated point that works enormously in favor of the platforms is bureaucracy: As a small business you have to deal with many rules and provisions, set forth by large entities – governments, big clients, big vendors. Some of those rules are conflicting, and meeting them all in the best possible way does not allow for much creativity. Krakens’ artificial intelligence – and their lawyers and lobbyists – might be able to fend off bureaucracy better than a freelancer. If you want to sell things to clients in different countries, you had better defer the legally correct setup of the online shop to the Kraken Platform, which deals with the intricacies of ever-evolving international tax law – while you become their subcontractor or franchisee. In return, you will dutifully sign the Vendor’s Code of Conduct every year, and follow the logo guidelines when using Kraken’s corporate identity.

In my gloomy post about Everything as a Service I came to the conclusion that we – small businesses who don’t want to grow and become start-ups aspiring to Krakenhood themselves – will either work as the Kraken’s hired hands, or …

… a lucky few will carve out a small niche and produce or customize bespoke units for clients who value luxurious goods for the sake of uniqueness or who value human imperfection as a fancy extra.

My personal credo is rather a very positive version of this quote, minus the cynicism. I am happy as a small business owner. This is just a single data point, and I don’t have a self-consistent theory on this. But I have Skin in this Game, so I share my anecdotes and some of the things I learned.

Years ago I officially declared my retirement from IT Security and global corporations – to plan special heat pump systems for private homeowners instead. Today we indeed work on such systems, and the inside joke of doing this remote-only – ‘IT-style’ – has become routine. Clients find us via our blog, which is sometimes mistaken for a private fun blog and whose writing feels like that. I have to thank Kraken Google, begrudgingly. A few of my Public Key Infrastructure clients insisted on hiring me again despite my declarations of looming ignorance in all things IT. All this allows for very relaxed, self-marketing-pressure-free collaborations.

  • I try to stay away from, or move farther away from, anything strictly organized, standardized, or ‘platform-mediated’. Agreements are made by handshake. I don’t submit any formal applications or replies to Requests for Proposals.
  • “If things do not work without a written contract, they don’t work with a contract either.”
  • I hardly listen to business experts, especially if they try to give well-meant, but unsolicited advice. Apply common sense!
  • Unspectacular time-tested personal business relationships beat 15 minutes of fame any time.
  • My work has to speak for itself, and ‘marketing’ has to be a by-product. I cannot compete with companies who employ people full-time for business development.
  • The best thing to protect your inner integrity is to know and to declare what you do not want and what you would never do. Removing the absolute negatives leaves a large area of positive background, and contrary to the mantra of specific ‘goals’, this approach lets you discover unexpected upsides. This is Nassim Taleb’s Via Negativa – and any career or business advice that speaks to me revolves around that.
  • There is no such thing as the True Calling or the One and Only Passion – I like the notion of a Portfolio of Passions. I think you come to enjoy what you learn to be good at – not the other way around.
  • All this is the result of years of experimenting in a ‘hyperspace of options’ – there is no shortcut. I have to live with the objection that I have just been lucky, but I can say that I made many conscious decisions whose ‘goal’ was to increase the number of options rather than to narrow them down (Taleb’s Optionality).

So I will finally quote Nassim Taleb, who nailed it as usual – in his Facebook post about The New Artisan:

Anything you do to optimize your work, cut some corners, squeeze more “efficiency” out of it (and out of your life) will eventually make you hate it.

I have bookmarked this link for a while – because sometimes I need to remind myself of all the above.

Taleb states that an Artisan …

1) does things for existential reasons,
2) has some type of “art” in his/her profession, stays away from most aspects of industrialization, combines art and business in some manner (his decision-making is never fully economic),
3) has some soul in his/her work: would not sell something defective or even of compromised quality because what people think of his work matters more than how much he can make out of it,
4) has sacred taboos, things he would not do even if it markedly increased profitability.

… and I cannot agree more. I have lots of Sacred Taboos, and they have served me well.

Other People Have Lives – I Have Domains

These are just some boring update notifications from the elkemental Webiverse.

The elkement blog has recently celebrated its fifth anniversary, and the punktwissen blog will turn five in December. Time to celebrate this – with new domain names that says exactly what these sites are – the ‘elkement.blog‘ and the ‘punktwissen.blog‘.

Actually, I wanted to get rid of the ads on both blogs, and with the upgrade came a free domain. WordPress has a detailed cookie policy – and I am showing it dutifully using the respective widget, but they have to defer to their partners when it comes to third-party cookies. I only want to worry about research cookies set by Twitter and Facebook, not by ad providers, and I am also considering removing the social media sharing buttons and the embedded tweets. (Yes, I am thinking about this!)

On the websites under my control I went full dinosaur: The server sends only non-interactive HTML pages to the client, not requiring any client-side activity. I have now got rid of the last half-hearted usage of a session object and the respective cookie, and I have never used any social media buttons or other tracking.

So there are no login data or cookies to protect, but I finally migrated all sites to HTTPS anyway.

It is a matter of principle: I of all website owners should use HTTPS. For 15 years I have been planning and building Public Key Infrastructures and troubleshooting X.509 certificates.

But of course I fear Google’s verdict: They announced long ago that HTTPS would be considered a positive ranking signal by their search engine. Pages not using HTTPS will be tagged as insecure with more and more terrifying icons – e.g. HTTP-only pages with login buttons already display a crossed-out padlock in Firefox. In the past years I migrated a lot of PKIs from SHA1 to SHA256 to fight the first wave of Insecure icons.

Finally Let’s Encrypt has started a revolution: Free SSL certificates, based on domain validation only. My hosting provider uses a solution based on Let’s Encrypt – a reverse proxy that does the actual HTTPS. I only had to re-target all my DNS records to the reverse proxy – it would have been very easy had it not been for all my already existing URL rewriting and tweaking and redirecting. I also wanted to keep the option of still using HTTP in the future for tests and special scenarios (like hosting a revocation list), so I decided on redirecting myself in the application(s) instead of using the offered automated redirect. But a code review and clean-up now and then can never hurt 🙂 For large complex sites the migration to HTTPS is anything but easy.
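For the curious, this kind of app-level redirect can be sketched as a minimal WSGI middleware in Python – my sites don’t actually run this code, and the exempt path is made up for illustration. Behind a TLS-terminating reverse proxy, the original scheme usually arrives in the X-Forwarded-Proto header, and you redirect unless the path is one you deliberately keep on plain HTTP (like a revocation list):

```python
def https_redirect_app(app, exempt_prefixes=("/crl",)):
    """WSGI middleware: redirect plain-HTTP requests to HTTPS,
    except for paths that must stay reachable over HTTP
    (e.g. a certificate revocation list)."""
    def middleware(environ, start_response):
        # Behind a TLS-terminating reverse proxy, the original scheme
        # usually arrives in the X-Forwarded-Proto header.
        proto = environ.get("HTTP_X_FORWARDED_PROTO",
                            environ.get("wsgi.url_scheme", "http"))
        path = environ.get("PATH_INFO", "/")
        if proto != "https" and not any(path.startswith(p)
                                        for p in exempt_prefixes):
            host = environ.get("HTTP_HOST", "localhost")
            start_response("301 Moved Permanently",
                           [("Location", "https://" + host + path)])
            return [b""]
        return app(environ, start_response)
    return middleware
```

The nice part of doing this in the application rather than at the proxy is exactly the exemption list: you keep full control over which URLs stay HTTP-reachable.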

In case I ever forget which domains and host names I use, I just need to check out this list of Subject Alternative Names again:

(And I have another certificate for the ‘test’ host names that I need for testing the sites themselves and also for testing various redirects ;-))

WordPress.com also uses Let’s Encrypt (Automattic is a sponsor), and the SAN elkement.blog is lumped together with several other blog names, allegedly the ones which needed new certificates at about the same time.

It will be interesting to see what the consequences for phishing websites will be. Malicious websites will look trustworthy, as they are issued certificates automatically – but revoking a certificate might provide another method for invalidating a malicious website.

Anyway, special thanks to the WordPress.com Happiness Engineers and support staff at my hosting provider Puaschitz IT. Despite all the nerdiness displayed on this blog I prefer hosted / ‘shared’ solutions when it comes to my own websites because I totally like it when somebody else has to patch the server and deal with attacks. I am an annoying client – with all kinds of special needs and questions – thanks for the great support! 🙂

Mr. Bubble Was Confused. A Cliffhanger.

This year we experienced a record-breaking January in Austria – the coldest in 30 years. Our heat pump system produced 14 m3 of ice in the underground tank.

The volume of ice is measured by Mr. Bubble, the winner of The Ultimate Level Sensor Casting Show run by the Chief Engineer last year:

The classic, analog level sensor was very robust and simple, but required continuous human intervention:

Level sensor: The old way

So a multitude of prototypes had been evaluated …

Level sensors: The precursors

The challenge was to measure small changes in level as 1 mm corresponds to about 0,15 m3 of ice.

Mr. Bubble uses a flow of bubbling air in a tube; the measured pressure increases linearly with the distance of the liquid level from the nozzle:
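The measurement principle can be sketched in a few lines (my own illustration in Python, not the actual controller code): the pressure needed to push bubbles out of the nozzle equals the hydrostatic pressure of the water column above it, so the level follows linearly from the measured pressure:

```python
RHO_WATER = 1000.0   # kg/m3, density of water
G = 9.81             # m/s2, gravitational acceleration

def level_above_nozzle(pressure_pa):
    """Height of the water column above the nozzle (m): the pressure
    needed to push out bubbles equals rho * g * h."""
    return pressure_pa / (RHO_WATER * G)

def pressure_change_for(delta_level_m):
    """Pressure change (Pa) corresponding to a given level change."""
    return RHO_WATER * G * delta_level_m
```

Since 1 mm of level corresponds to about 0,15 m3 of ice, the sensor has to resolve pressure changes of just under 10 Pa.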


Mr. Bubble is fine and sane as long as ice is growing monotonically: Ice grows from the heat exchanger tubes into the water, and the heat exchanger does not float up due to buoyancy, as it is attached to the supporting construction. The design makes sure that not-yet-frozen water can always ‘escape’ to higher levels to make room for growing ice. Finally Mr. Bubble lives inside a hollow cylinder of water inside a block of ice. As long as all the ice is covered by water, Mr. Bubble’s calculation is correct.

But when ambient temperature rises and the collector harvests more energy than needed by the heat pump, melting starts at the heat exchanger tubes. The density of ice is smaller than that of water, so the water level in Mr. Bubble’s hollow cylinder is below the surface level of ice:

Mr. Bubble is utterly confused and literally driven over the edge – having to deal with this cliff of ice:

When ice is melted, the surface level inside the hollow cylinder drops quickly, as the diameter of the cylinder is much smaller than the width of the tank. So the alleged volume of ice perceived by Mr. Bubble seems to drop extremely fast and out of proportion: 1 m3 of ice is equivalent to 93 kWh of energy – the energy our heat pump would need on an extremely cold day. On an ice-melting day, the heat pump needs much less, so a drop of more than 1 m3 per day is an artefact.
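The 93 kWh figure follows directly from the latent heat of fusion of water – a quick back-of-envelope check (my own round numbers: ~334 kJ/kg, and 1000 kg of water per cubic metre of ice formed):

```python
LATENT_HEAT_J_PER_KG = 334e3   # latent heat of fusion of water, J/kg
RHO_WATER = 1000.0             # kg/m3

def energy_per_m3_of_ice_kwh():
    """Heat released when 1 m3 of water freezes, in kWh."""
    joules = LATENT_HEAT_J_PER_KG * RHO_WATER
    return joules / 3.6e6      # 1 kWh = 3.6e6 J
```

This gives about 92,8 kWh – consistent with the ~93 kWh quoted above.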

As long as there are ice castles on the surface, Mr. Bubble keeps underestimating the volume of ice. When it gets colder, ice grows again, and its growth is then overestimated via the same effect. Mr. Bubble amplifies the oscillations in growing and shrinking of ice.

In the final stages of melting a slab-with-a-hole-like structure ‘mounted’ above the water surface remains. The actual level of water is lower than it was before the ice period. This is reflected in the raw data – the distance measured. The volume of ice output is calibrated not to show negative values, but the underlying measurement data do:

Only when finally all ice has been melted – slowly and via thermal contact with air – then the water level is back to normal.

In the final stages of melting, parts of the suspended slab of ice may break off, and the small floating icebergs can then confuse Mr. Bubble, too:

So how can we picture the true evolution of ice during melting? I am simulating the volume of ice, based on our measurements of air temperature. To be detailed in a future post – this is my cliffhanger!

>> Next episode.

Give the ‘Thing’ a Subnet of Its Own!

To my surprise, the most clicked post ever on this blog is this:

Network Sniffing for Everyone:
Getting to Know Your Things (As in Internet of Things)

… a step-by-step guide to sniff the network traffic of your ‘things’ contacting their mothership, plus a brief introduction to networking. I wanted to show how you can trace your networked devices’ traffic without any specialized equipment but being creative with what many users might already have, by turning a Windows PC into a router with Internet Connection Sharing.

Recently, an army of captured things took down part of the internet, and this reminded me of that post. No, this is not one more gloomy article about the Internet of Things. I just needed to use this Internet Sharing feature for the very purpose it was actually invented for.

The Chief Engineer had finally set up the perfect test lab for programming and testing freely programmable UVR16x2 control systems (successor of the UVR1611). But this test lab was in a spot not equipped with wired Ethernet, and the control unit’s data logger and ethernet gateway, the so-called CMI (Control and Monitoring Interface), only has a LAN interface and no WLAN.

So an ages-old test laptop was revived to serve as a router (improving its ecological footprint in passing): This notebook connects to the standard ‘office’ network via WLAN; this wireless connection is thus the internet connection that can be shared with a device connected to the notebook’s LAN interface, e.g. via a cross-over cable. As explained in detail in the older article, the router-laptop then allows for sniffing the traffic – but above all it allows the ‘thing’ to connect to the internet at all.

This is the setup:

Using a notebook with Internet Connection Sharing enabled as a router to connect the CMI (UVR16x2's ethernet gateway) to the internet

The router laptop is automatically configured with IP address 192.168.137.1 and hands out addresses in the 192.168.137.x network as a DHCP server, while using an IP address provided by the internet router for its WLAN adapter (indicated here as commonly used 192.168.0.x addresses). If Windows 10 is used on the router-notebook, you might need to re-enable ICS after a reboot.
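You can sanity-check the addressing with Python’s ipaddress module – a small sketch (the 192.168.137.0/24 range is the one Windows ICS assigns by default; the office network 192.168.0.0/24 and the thing’s lease address are just illustrative):

```python
import ipaddress

# The fixed ICS address and subnet on the shared LAN interface:
ics_net = ipaddress.ip_network("192.168.137.0/24")
router = ipaddress.ip_address("192.168.137.1")

# Illustrative addresses: the office WLAN range, and a DHCP lease
# handed out by the router-laptop to the 'thing':
office_net = ipaddress.ip_network("192.168.0.0/24")
thing = ipaddress.ip_address("192.168.137.50")

assert router in ics_net
assert thing in ics_net and thing not in office_net
```

The two subnets do not overlap, which is exactly what isolates the ‘thing’ from the office PCs.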

The control unit is connected to the CMI via CAN bus – so the combination of test laptop, CMI, and UVR16x2 control unit is similar to the setup used for investigating CAN monitoring recently.

The CMI ‘thing’ is tucked away in a private subnet dedicated to it, and it cannot be accessed directly from any ‘Office PC’ – except the router PC itself. A standard office PC (green) effectively has to access the CMI via the same ‘cloud’ route as an Internet User (red). This makes the setup a realistic test for future remote support – when the CMI plus control unit has been shipped to its proud owner and is configured on the final local network.

The private subnet setup is also a simple workaround in case several things cannot get along well with each other: For example, an internet TV service flooded the CMI’s predecessor BL-NET with packets that were hard to digest – so BL-NET refused to work without a further reboot. Putting the sensitive device in a private subnet – using a ‘spare part’ router – solved the problem.

The Chief Engineer's quiet test lab for testing and programming control units

And Now for Something Completely Different: Rotation Heat Pump!

Heat pumps for space heating are all very similar: Refrigerant evaporates, pressure is increased by a scroll compressor, refrigerant condenses, pressure is reduced in an expansion valve. *yawn*

The question is:

Can a compression heat pump be built in a completely different way?

Austrian start-up ECOP did it: They invented the so-called Rotation Heat Pump.

It does not have a classical compressor, and the ‘refrigerant’ does not undergo a phase transition. A pressure gradient is created by centrifugal forces: The whole system rotates, including the high-pressure (heat sink) and low-pressure (heat source) heat exchanger. The low-pressure part of the system is positioned closer to the axis of rotation, and heat sink and heat source are connected at the axis (using heating water). The system rotates at up to 1800 revolutions per minute.
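How much pressure difference can rotation alone build up? A rough estimate (entirely my own sketch with guessed parameters – ECOP’s actual gas mixture, rotor radius, and temperatures are not taken from their material) uses the barometric formula for an isothermal ideal gas in solid-body rotation:

```python
import math

R_GAS = 8.314   # J/(mol*K), universal gas constant

def pressure_ratio(molar_mass_kg, rpm, radius_m, temperature_k):
    """Pressure at radius r relative to the axis, for an isothermal
    ideal gas in solid-body rotation:
        p(r) / p(0) = exp(M * w^2 * r^2 / (2 * R * T))"""
    w = 2.0 * math.pi * rpm / 60.0   # angular velocity, rad/s
    return math.exp(molar_mass_kg * w**2 * radius_m**2
                    / (2.0 * R_GAS * temperature_k))

# Guessed illustration values: xenon (0.131 kg/mol), 1800 rpm,
# 0.5 m radius, 300 K – gives a ratio of roughly 1.26.
ratio = pressure_ratio(0.131, 1800, 0.5, 300.0)
```

Heavier gases and larger radii raise the exponent quadratically, which hints at why a mixture of noble gases and a sizeable rotor are attractive.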

A mixture of noble gases is used in a Joule (Brayton) process, driven in a cycle by a ventilator. Gas is compressed and thus heated up; then it is cooled at constant pressure and energy is released to the heat sink. After expanding the gas, it is heated up again at low pressure by the heat source.

In the textbook Joule cycle, a turbine and a compressor share a common axis: The energy released by the turbine is used to drive the compressor. This is essential, as compression and expansion energies are of the same order of magnitude, and both are considerably larger than the net energy difference – the actual input energy.

In contrast to that, a classical compression heat pump uses a refrigerant that is condensed while releasing heat and then evaporated again at low pressure. There is no mini-turbine to reduce the pressure but only an expansion valve, as there is not much energy to gain.

This explains why Rotation Heat Pumps absolutely have to have compression efficiencies of nearly 100%, compared to, say, the 85% efficiency of a scroll compressor in a heat pump used for space heating:

Some numbers for a Joule process (from this German ECOP paper): On expansion of the gas 1200 kW are gained, but 1300 kW are needed for compression – if there were no losses at all. So the net input power is 100 kW. But if the efficiency of the compression is reduced from 100% to 80%, about 1600 kW are needed, and thus a net input power of 500 kW results – five times the power compared to the ideal compressor! The coefficient of performance would plummet from 10 to 2,3.
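The sensitivity argument is easy to reproduce with these round numbers (a sketch; the efficiencies are free parameters, and the paper’s 500 kW figure presumably includes expansion losses as well):

```python
def net_input_kw(eta_compression, eta_expansion=1.0,
                 compression_kw=1300.0, expansion_kw=1200.0):
    """Net drive power of the cycle: real compression work
    minus the work recovered from real expansion."""
    return compression_kw / eta_compression - expansion_kw * eta_expansion

ideal = net_input_kw(1.0)   # 1300 - 1200 = 100 kW
lossy = net_input_kw(0.8)   # 1625 - 1200 = 425 kW, more than 4x the ideal
```

With roughly 94% expansion efficiency on top, the sketch lands near the quoted 500 kW – the point being that small component losses multiply the net input power, because two large numbers nearly cancel.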

I believe these challenging requirements are why Rotation Heat Pumps are ‘large’ and built for industrial processes. In addition to the high COP, this heat pump is also very versatile: Since there are no phase transitions, you can pick your favorite corner of the thermodynamic state diagram at will: This heat pump works for very different combinations of temperatures of the hot target and the cold source.

Same Procedure as Every Autumn: New Data for the Heat Pump System

October – time for updating documentation of the heat pump system again! Consolidated data are available in this PDF document.

In the last season there were no special experiments – like last year’s Ice Storage Challenge or using the wood stove. Winter was rather mild, so we needed only ~16.700kWh for space heating plus hot water heating. In the coldest season so far – 2012/13 – the equivalent energy value was ~19.700kWh. The house is located in Eastern Austria, has been built in the 1920s, and has 185m2 floor space since the last major renovation.

(More cross-cultural info: I use thousands dots and decimal commas.)

The seasonal performance factor was about 4,6 [kWh/kWh] – thus the electrical input energy was about 16.700kWh / 4,6 ~ 3.600kWh.

Note: Hot water heating is included and we use flat radiators requiring a higher water supply temperature than the floor heating loops in the new part of the house.

Heating season 2015/2016: Performance data for the 'ice-storage-/solar-powered' heat pump system

Red: Heating energy ‘produced’ by the heat pump – for space heating and hot water heating. Yellow: Electrical input energy. Green: Performance Factor = Ratio of these energies.

The difference of 16.700kWh – 3.600kWh = 13.100kWh was provided by ambient energy, extracted from our heat source – a combination of underground water/ice tank and an unglazed ribbed pipe solar/air collector.
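The season’s bookkeeping in one place – just the arithmetic from the numbers above:

```python
heating_energy_kwh = 16700.0   # heat delivered: space heating + hot water
spf = 4.6                      # seasonal performance factor

electrical_input_kwh = heating_energy_kwh / spf                  # ~3.600 kWh
ambient_energy_kwh = heating_energy_kwh - electrical_input_kwh   # ~13.100 kWh
```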

The solar/air collector has delivered the greater part of the ambient energy, about 10.500kWh:

Heating season 2015/2016: Energy harvested from air by the collector versus heating-energy

Energy needed for heating per day (heat pump output) versus energy from the solar/air collector – the main part of the heat pump’s input energy. Negative collector energies indicate passive cooling periods in summer.

Peak Ice was 7 cubic meters, after one cold spell of weather in January:

Heating season 2015/2016: Temperature of ambient air, water tank (heat source) and volume of water frozen in the tank.

Ice is formed in the water tank when the energy from the collector is not sufficient to power the heat pump alone, when ambient air temperatures are close to 0°C.

Last autumn’s analysis of the economics is still valid: Natural gas is only a third as expensive as electricity, but with a performance factor well above three, heating costs with this system are lower than they would be with a gas boiler.
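A rough version of that cost comparison (the price ratio is from the text; the gas boiler efficiency of 0.95 is my assumption for illustration):

```python
def heating_cost_ratio(spf, price_ratio_el_to_gas=3.0, gas_boiler_eta=0.95):
    """Cost per kWh of heat from the heat pump, relative to a gas boiler
    (in units of the gas price)."""
    heat_pump = price_ratio_el_to_gas / spf   # electricity cost per kWh heat
    gas = 1.0 / gas_boiler_eta                # gas cost per kWh heat
    return heat_pump / gas

ratio = heating_cost_ratio(4.6)   # about 0.62: the heat pump wins
# break-even sits at spf = 3 * 0.95 = 2.85, so 'well above three' is safe
```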

Is there anything that has changed gradually over all these years and that does not primarily depend on climate? We have reduced the energy for hot tap water heating by tweaking the water heating schedule gradually: Water is heated up once per day and as late as possible, to avoid cooling off the hot storage tank during the night.

We have now started the fifth heating season. This also marks the fifth anniversary of the day we switched on the first ‘test’ version 1.0 of the system, one year before version 2.0.

It’s been about seven years since the first numerical simulations, four years since I was asked if I was serious in trading in IT security for heat pumps, and one year since I tweeted:

Internet of Things. Yet Another Gloomy Post.

Technically, I work with Things, as in the Internet of Things.

As outlined in Everything as a Service many formerly ‘dumb’ products – such as heating systems – become part of service offerings. A vital component of the new services is the technical connection of the Thing in your home to that Big Cloud. It seems every energy-related system has got its own Internet Gateway now: Our photovoltaic generator has one, our control unit has one, and the successor of our heat pump would have one, too. If vendors don’t bundle their offerings soon, we’ll end up with substantial electricity costs for powering a lot of separate gateways.

Experts have warned for years that the Internet of Things (IoT) comes with security challenges. Many Things’ owners still keep default or blank passwords, but the most impressive threat, in my opinion, is not the hacking of individual systems: Easily hacked things can be hijacked to serve as zombie clients in a botnet and launch a joint Distributed Denial of Service attack against a single target. Recently the blog of renowned security reporter Brian Krebs was taken down, most likely as an act of revenge by DDoSers (crime is now offered as a service as well). The attack – a tsunami of more than 600 Gbps – was described as one of the largest the internet had seen so far. Hosting provider OVH was subject to a record-breaking Tbps attack – launched via captured … [cue: hacker movie cliché] … cameras and digital video recorders on the internet.

I am about the millionth blogger ‘reporting’ on this, so nothing new here. But the social media news about the DDoS attacks collided with another social media micro outrage in my mind – about seemingly unrelated IT news: HP had to deal with not-so-positive reporting about its latest printer firmware changes and related policies – when printers started to refuse to work with third-party cartridges. This seems to be a legal issue or has been presented as such, and I am not interested in that aspect here. What I find interesting is the clash of requirements: After the DDoS attacks many commentators said IoT vendors should be held accountable. They should be forced to update their stuff. On the other hand, end users should remain owners of the IT gadgets they have bought, so the vendor has no right to inflict any policies on them and restrict the usage of the devices.

I can relate to both arguments. One of my main motivations ‘in renewable energy’ or ‘in home automation’ is to make users powerful and knowledgable owners of their systems. On the other hand I have been ‘in security’ for a long time. And chasing firmware for IoT devices can be tough for end users.

It is a challenge to walk the tightrope really gracefully here: A printer may be traditionally considered an item we own, whereas the internet router provided by the telco is theirs. So we can tinker with the printer’s inner workings as much as we want, but we must not touch the router and have to let the telco do their firmware updates. But old-school devices are being given more ‘intelligence’ and need to be connected to the internet to provide additional services – like that printer that allows you to print from your smartphone easily (yes, but only if you register it at the printer manufacturer’s website first). In addition, our home is not really our castle anymore. Our computers aren’t protected by the telco’s router / firmware all the time; we work in different networks or in public places. All the Things we carry with us, someday smart wearable technology, will check in to different wireless and mobile networks – so their security bugs had better be fixed in time.

If IoT vendors are to be held accountable and update their gadgets, they have to be given the option to do so. But if the device’s owner tinkers with it, firmware upgrades might stall. In order to protect themselves from legal prosecution, vendors need to state in contracts that they are determined to push security updates and that you cannot interfere with that. Security can never be enforced by technology only – not for a device located at the end user’s premises.

It is a horrible scenario – and I am not sure if I refer to hacking or to the proliferation of even more bureaucracy and over-regulation that should protect us from hacking but will add more hurdles for would-be start-ups that dare to sell hardware.

Theoretically a vendor should be able to separate the security-relevant features from nice-to-have updates. For example, in a similar way, in smart meters the functions used for metering (subject to metering law) should be separated from ‘features’ – the latter being subject to remote updates while the former must not. Sources told me that this is not an easy thing to achieve, at least not as easy as presented in the meters’ marketing brochure.

Linksys's Iconic Router

That iconic Linksys router – sold for more than 10 years (and a beloved test device of mine). Still popular because you can use open source firmware – something that new security policies might seek to prevent.

If hardware security cannot be regulated, there might be more regulation of internet traffic. Internet Service Providers could be held accountable for removing compromised devices from their networks, for example after having notified the end user several times. Or smaller ISPs might be cut off by upstream providers. Somewhere in the chain of service providers we will have to deal with more monitoring and regulation, and in one way or another the playful days of the earlier internet (romanticized with hindsight, maybe) are over.

When I saw Krebs’ site going offline, I wondered what small businesses should do in general: His site is now DDoS-protected by Google’s Project Shield, a service offered to independent journalists and activists, after his former pro-bono host could not deal with the load without affecting paying clients. So one of the Siren Servers I have commented on critically so often came to the rescue! A small provider will not be able to deal with such attacks.

WordPress.com should be well-protected, I guess. I wonder if we will all end up hosting our websites at such major providers only, or ‘blog’ directly to Facebook, Google, or LinkedIn (now part of Microsoft) to be safe. I had advised against self-hosting WordPress myself: If you miss security updates you might jeopardize not only your website, but also others using the same shared web host. If you live on a platform like WordPress or Google, you will complain from time to time about limited options or feature updates you don’t like – but you don’t have to care about security. I compare this to avoiding legal issues as an artisan selling hand-made items via Amazon or the like, in contrast to having to update your own shop’s business logic after every change in international tax law.

I have no conclusion to offer. Whenever I read news these days – on technology, energy, IT, anything in between, The Future in general – I feel reminded of this tension: Between being an independent neutral netizen and being plugged in to an inescapable matrix, maybe beneficial but Borg-like nonetheless.