Simulations: Levels of Consciousness

In a recent post I showed these results of simulations for our heat pump system:

I focused on the technical details – this post will be more philosophical.

What is a ‘simulation’ – as opposed to simplified calculations of monthly or yearly average temperatures or energies? The latter are provided by tools used by governmental agencies or standardization bodies – allowing for a comparison of different systems.

In a true simulation the time intervals are so small that you catch all ‘relevant’ changes of the system. If a heating system is turned on for one hour, then turned off again, the time slot needs to be smaller than one hour. I argued before that calculating meaningful monthly numbers requires incorporating data that had been obtained before by measurements – or by true simulations.

For our system, the heat flow between ground and the water/ice tank is important. In our simplified sizing tool – which is not a simulation – I use average numbers. I validated them by comparing with measurements: The contribution of ground can be determined indirectly; by tallying all the other energies involved. In the detailed simulation I calculate the temperature in ground as a function of time and of distance from the tank, by solving the Heat Equation numerically. Energy flow is then proportional to the temperature gradient at the walls of the tank. You need to make assumptions about the thermal properties of ground, and a simplified geometry of the tank is considered.
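For illustration, here is a minimal sketch of that kind of calculation – an explicit finite-difference solution of the radial heat equation around a cylindrical tank. All material properties, the grid, and the boundary temperatures are made-up placeholder values, not the parameters of the actual simulation:

```python
import numpy as np

# Assumed, illustrative values - not the actual parameters of the system:
alpha = 1.0e-6                 # thermal diffusivity of ground [m2/s]
k = 2.0                        # thermal conductivity of ground [W/(m K)]
r = np.linspace(1.0, 6.0, 51)  # radial grid: tank wall to 'far' ground [m]
dr = r[1] - r[0]
dt = 0.4 * dr**2 / alpha       # time step respecting the stability limit

T = np.full_like(r, 10.0)      # ground initially at 10 degC everywhere
T_wall = 0.0                   # tank wall held at 0 degC (ice is forming)

for _ in range(5000):
    T[0] = T_wall              # boundary condition at the tank wall
    # radial Laplacian: d2T/dr2 + (1/r) * dT/dr
    lap = ((T[2:] - 2 * T[1:-1] + T[:-2]) / dr**2
           + (T[2:] - T[:-2]) / (2 * dr * r[1:-1]))
    T[1:-1] += alpha * dt * lap  # T[-1] keeps the far-field temperature

# Heat flux density into the tank ~ temperature gradient at the wall:
q_wall = k * (T[1] - T[0]) / dr  # [W/m2], positive: ground heats the tank
```

After the grid has relaxed for a while, the gradient at the wall gives the heat flow from ground into the tank – positive here, as the warmer far-field ground feeds the 0°C wall.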

Engineering / applied physics in my opinion is about applying a good-enough approach in order to solve one specific problem. It’s about knowing your numbers and their limits. It is tempting to get carried away by nerdy physics details, and to focus on simulating what you know exactly – forgetting that there are huge error bars because of unknowns.

This is the hierarchy I keep in mind:

On the lowest level is the simulation of physics, that is: modelling how ‘nature’ and the system’s components react to changes in the previous time slot. Temperatures change because energy flows, and energy flows because of temperature differences. The heat pump’s output power depends on heating water temperature and brine temperature. Energy of the building is ‘lost’ to the environment via heat conduction; heat exchangers immersed in tanks deposit energy there or retrieve it. I found that getting the serial connection of heat exchangers right in the model was crucial, and it required a self-consistent calculation of three temperatures at the same point in time, rather than trying to ‘follow the brine around’. I used the information on average brine temperatures obtained by this method to run a simplified version of the simulation using daily averages only – for estimating the maximum volume of ice over two decades.
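To make the ‘self-consistent’ idea concrete, here is a toy version: three unknown brine temperatures in a serial loop (after the heat pump, after the collector, after the tank), found by solving one small linear system at a single instant instead of chasing the brine around the loop. Effectivenesses, the capacity flow, and the evaporator power are invented numbers for illustration only:

```python
import numpy as np

# Assumed, illustrative values - not our system's actual parameters:
C = 2000.0       # brine capacity flow m_dot * c_p [W/K]
P_src = 6000.0   # power extracted by the heat pump's evaporator [W]
eps_col = 0.4    # collector heat exchanger effectiveness
eps_tank = 0.6   # tank heat exchanger effectiveness
T_air = 5.0      # ambient air temperature [degC]
T_tank = 2.0     # water tank temperature [degC]

# Unknowns: T0 (after heat pump), T1 (after collector), T2 (after tank)
A = np.array([
    [1.0, 0.0, -1.0],            # T0 = T2 - P_src / C
    [-(1 - eps_col), 1.0, 0.0],  # T1 = T0 + eps_col * (T_air - T0)
    [0.0, -(1 - eps_tank), 1.0], # T2 = T1 + eps_tank * (T_tank - T1)
])
b = np.array([-P_src / C, eps_col * T_air, eps_tank * T_tank])
T0, T1, T2 = np.linalg.solve(A, b)

# Sanity check: collector gain + tank gain must equal evaporator power
gain_collector = C * (T1 - T0)
gain_tank = C * (T2 - T1)
```

The energy balance closes by construction – the combined heat picked up in collector and tank equals what the evaporator extracts.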

So this means you need to model your exact hydraulic setup, or at least you need to know which features of your setup are critical and worth modelling in detail. But the same also holds for the second level, the simulation of control logic. I try to mirror production control logic as far as possible: This code determines how pumps and valves will react, depending on the system’s prior status. Both in real life and in the simulation, threshold values and ‘hystereses’ are critical: You start to heat if some temperature falls below X, but you only stop heating once it has risen above X plus some Delta. Typical brine-water heat pumps always provide approximately the same output power, so you control operating time and buffered heating energy. If Delta for heating the hot water buffer tank is too large, the heat pump’s performance will suffer: The Coefficient of Performance of the heat pump decreases with increasing heating water temperature. Changing an innocuous parameter can change results a lot, and the ‘control model’ should be given the same vigilance as the ‘physics model’.
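The hysteresis logic itself fits in a few lines – the set point and dead band below are placeholders, not our production values:

```python
# Minimal sketch of the hysteresis described above: start heating when
# the temperature falls below the set point X, stop only once it has
# risen above X + Delta. Threshold values are illustrative assumptions.
def update_heating(is_on, temp, setpoint=50.0, delta=5.0):
    """Return the new heating state for the current buffer temperature."""
    if not is_on and temp < setpoint:
        return True            # fell below X -> start heating
    if is_on and temp > setpoint + delta:
        return False           # rose above X + Delta -> stop heating
    return is_on               # inside the dead band: keep the old state
```

Inside the dead band the state simply persists – this is what prevents rapid on/off cycling around a single threshold.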

Control units can be tweaked at different levels: ‘Experts’ can change the logic, but end users can change non-critical parameters, such as set point temperatures. We don’t restrict expert access in systems we provide the control unit for. But it makes sense to require extra input for the expert level – to prevent accidental changes.

And here we enter level 3 – users’ behavior. We humans cannot resist trying to outsmart the controller.

[Life-form in my home] always sets the controller to ‘Sun’. [little sun icon indicating manually set parameters]. Can’t you program something so that nothing actually changes when you pick ‘Sun’?

With heat pumps utilizing ground or water sources – ‘built’ storage repositories with limited capacity – unexpected and irregular system changes are critical: You have to size your source in advance. You cannot simply order one more lorry load of wood pellets or oil if you ‘run out of fuel’. If the source of ambient energy is depleted, the heat pump will finally refuse to work below a certain source temperature. The heat pump’s rated power has to match the heating demands and the size of the source exactly. It also must not be oversized, in order to avoid turning the compressor on and off too often.

Thus you need good estimates for peak heat load and yearly energy needs, and models should include extreme weather (‘physics’) but also erratic users’ behaviour. The more modern the building, the more important spikes in hot tap water usage get in relation to space heating. A vendor of wood pellet stoves told me that delivering peak energy for hot water – used in private bathrooms that match spas – is a greater challenge today than delivering space heating energy. Energy certificates of modern buildings take into account huge estimated solar and internal energy gains – calculated according to standards. But the true heating power needed on a certain day will depend on the strategy or automation home owners use when managing their shades.

Typical gas boilers are oversized (in terms of kW rated power) by a factor of 2 or more in Germany, but with heat pumps you need to be more careful. However, this also means that heat pump systems cannot and should not be planned for rare peak demands, such as: 10 overnight guests wanting to shower in the morning one after the other, on an extremely cold day, or heating up the building quickly after the temperature has been lowered during a leave of absence.

The nerdy answer is that a smart home would know when your vacation ends and start heating up well in advance. Not sure what to do about the showering guests as in this case ‘missing’ power cannot be compensated by more time. Perhaps a gamified approach will work: An app will do something funny / provide incentives and notifications so that people wait for the water to heat up again. But what about planning for renting a part of the house out someday? Maybe a very good AI will predict what your grandchildren are likely to do, based on automated genetics monitoring.

The challenge of simulating human behaviour is ultimately governed by constraints on resources – such as the size of the heat source: Future heating demands and energy usage are unknown, but the heat source has to be sized today. If the system is ‘open’ and connected to a ‘grid’ in a convenient way, problems seem to go away: You order whatever you need, including energy, any time. The opposite is planning for true self-sufficiency: I once developed a simulation for an off-grid system using photovoltaic generators and wind power – for a mountain shelter. They had to meet tough regulations and hygienic standards like any other restaurant, e.g. using ‘industry-grade’ dishwashers needing 10kW of power. In order to provide that by solar power (plus battery) you needed to make an estimate of the number of guests likely to visit … and thus of how many people would go hiking on a specific day … and thus maybe of the weather forecast. I tried to factor in the ‘visiting probability’ based on the current weather.
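In its crudest form, the ‘visiting probability’ idea could look like this – every number here (guest capacity, per-guest energy, probabilities per weather category) is a made-up placeholder, not data from the original project:

```python
# Hypothetical sketch: expected daily electricity demand of an off-grid
# shelter as a function of the weather forecast. All numbers are
# illustrative assumptions, not values from the actual simulation.
def expected_daily_load_kwh(weather):
    base_kwh = 8.0                   # lighting, fridge, control systems
    max_guests = 120                 # capacity of the shelter
    kwh_per_guest = 0.35             # dishwasher and cooking share
    visit_probability = {'sunny': 0.9, 'mixed': 0.5, 'storm': 0.1}[weather]
    return base_kwh + max_guests * visit_probability * kwh_per_guest
```

The weather-dependent factor is the whole point: the same infrastructure has to cover a sunny-Sunday crowd and an empty stormy day.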

I think many of these problems can be ‘resolved’ by recognizing that they are first world problems. It takes tremendous effort – in terms of energy use or systems’ complexity – to obtain 100% availability and to cover all exceptional use cases. You would need the design heat load only for a few days every decade. On most winter days a properly sized heat pump is operating for only 12 hours. The simple, low-tech solution would be to accept the very, very rare intermittent 18,5°C room temperature, mitigated by proper clothing. Accepting a 20-minute delay of your shower solves the hot water issue. An economic analysis can reveal the (most likely very small) trade-off of providing exceptional peak energy by a ‘backup’ electrical heating element – or by using that wood stove that you installed ‘as a backup’ but mostly for ornamental reasons, because it is dreadful to fetch the wood logs when it is really cold.

But our ‘modern’ expectations and convenience needs are also reflected in regulations. Contractors are afraid of being sued by malicious clients who (quote) sit next to their heat pump and count its operating cycles – to compare the numbers with the ones to be ‘guaranteed’. In a weather-challenged region at more than 2.000 meters altitude people need to steam clean dishes and use stainless steel instead of wood – where wooden plates have been used for centuries. I believe that regulators are as prone as anybody else to falling into the nerdy trap described above: You monitor, measure, calculate, and regulate in detail the things that you can measure, and because you can measure them – not because these things were top priorities or had the most profound impact.

Still harvesting energy from air - during a record-breaking cold January 2017

The Future of Small Business?

If I were asked which technology or ‘innovation’ has had the most profound impact on the way I work, I would answer: working remotely – with clients and systems I hardly ever see.

20 years ago I played with modems, cumbersome dial-in, and Microsoft’s NetMeeting. Few imagined then that remote work would one day be the new normal. Today I am reading about Industry 4.0, 3D printing, the Internet of Things, and how every traditional company has to compete with Data Krakens like Google and Amazon. Everything will be offered as a service, including heating. One consequence: Formerly independent craftsmen become preferred partners or subcontractors of large companies – of vendors of smart heating solutions. Creative engineering is replaced by calling the Big Vendor’s hotline. Human beings cover the last mile that robots or software cannot deal with – yet.

Any sort of customization, consulting, support, and systems integration might be automated in the long run: Clients will use an online configurator and design their systems, and possibly print them out at home. Perhaps someday our clients will print out their heat exchangers from a blueprint generated on Kraken’s website, instead of using our documentation to build them.

Allowing you to work remotely also allows everybody else in the world to do so, and you might face global competition once the barriers of language and culture have been overcome (by using ubiquitous US culture and ‘business English’). Large IT service providers have actually considered turning their consulting and support staff into independent contractors and letting them compete globally – using an online bidding platform. Well-known Data Krakens match clients and freelancers, and I’ve seen several start-ups that aspire to become the next matching Kraken platform for computer / tech support. Clients will simply not find you if you are not on the winning platform. Platform membership becomes as important as having a website or an entry in a business directory.

One seemingly boring and underappreciated point that works enormously in favor of the platforms is bureaucracy: As a small business you have to deal with many rules and provisions, set forth by large entities – governments, big clients, big vendors. Some of those rules are conflicting, and meeting them all in the best possible way does not allow for much creativity. Krakens’ artificial intelligence – and their lawyers and lobbyists – might be able to fend off bureaucracy better than a freelancer. If you want to sell things to clients in different countries, you had better defer the legally correct setup of the online shop to the Kraken Platform, which deals with the intricacies of ever-evolving international tax law – while you become their subcontractor or franchisee. In return, you will dutifully sign the Vendor’s Code of Conduct every year, and follow the logo guidelines when using Kraken’s corporate identity.

In my gloomy post about Everything as a Service I came to the conclusion that we – small businesses who don’t want to grow into start-ups aspiring at Krakenhood themselves – will either work as the Kraken’s hired hands, or …

… a lucky few will carve out a small niche and produce or customize bespoke units for clients who value luxurious goods for the sake of uniqueness or who value human imperfection as a fancy extra.

My personal credo is rather a very positive version of this quote minus the cynicism. I am happy as a small business owner. This is just a single data-point, and I don’t have a self-consistent theory on this. But I have Skin in this Game so I share my anecdotes and some of the things I learned.

Years ago I officially declared my retirement from IT Security and global corporations – to plan special heat pump systems for private home owners instead. Today we indeed work on such systems, and the inside joke of doing this remote-only – ‘IT-style’ – has become routine. Clients find us via our blog that is sometimes mistaken for a private fun blog and whose writing feels like that. I have to thank Kraken Google, begrudgingly. A few of my Public Key Infrastructure clients insisted on hiring me again despite my declarations of looming ignorance in all things IT. All this allows for very relaxed, and self-marketing-pressure-free collaborations.

  • I try to stay away, or move farther away, from anything strictly organized, standardized, or ‘platform-mediated’. Agreements are made by handshake. I don’t submit any formal applications or replies to Requests for Proposals.
  • “If things do not work without a written contract, they don’t work with a contract either.”
  • I hardly listen to business experts, especially if they try to give well-meant, but unsolicited advice. Apply common sense!
  • Unspectacular time-tested personal business relationships beat 15 minutes of fame any time.
  • My work has to speak for itself, and ‘marketing’ has to be a by-product. I cannot compete with companies who employ people full-time for business development.
  • The best way to protect your inner integrity is to know and to declare what you do not want and what you would never do. Removing the absolute negatives leaves a large area of positive background, and contrary to the mantra of specific ‘goals’, this approach lets you discover unexpected upsides. This is Nassim Taleb’s Via Negativa – and any career or business advice that speaks to me revolves around that.
  • There is no such thing as the True Calling or the One and Only Passion – I like the notion of a Portfolio of Passions. I think you get to enjoy what you are learning to be good at – not the other way around.
  • All this is the result of years of experimenting in a ‘hyperspace of options’ – there is no shortcut. I have to live with the objection that I have just been lucky, but I can say that I made many conscious decisions whose ‘goal’ was to increase the number of options rather than to narrow them down (Taleb’s Optionality).

So I will finally quote Nassim Taleb, who nailed it as usual – in his Facebook post about The New Artisan:

Anything you do to optimize your work, cut some corners, squeeze more “efficiency” out of it (and out of your life) will eventually make you hate it.

I have bookmarked this link for a while – because sometimes I need to remind myself of all the above.

Taleb states that an Artisan …

1) does things for existential reasons,
2) has some type of “art” in his/her profession, stays away from most aspects of industrialization, combines art and business in some manner (his decision-making is never fully economic),
3) has some soul in his/her work: would not sell something defective or even of compromised quality because what people think of his work matters more than how much he can make out of it,
4) has sacred taboos, things he would not do even if it markedly increased profitability.

… and I cannot agree more. I have lots of Sacred Taboos, and they have served me well.

Other People Have Lives – I Have Domains

These are just some boring update notifications from the elkemental Webiverse.

The elkement blog has recently celebrated its fifth anniversary, and the punktwissen blog will turn five in December. Time to celebrate this – with new domain names that say exactly what these sites are: ‘elkement.blog’ and ‘punktwissen.blog’.

Actually, I wanted to get rid of the ads on both blogs, and with the upgrade came a free domain. WordPress has a detailed cookie policy – and I am showing it dutifully using the respective widget, but they have to defer to their partners when it comes to third-party cookies. I only want to worry about research cookies set by Twitter and Facebook, but not by ad providers, and I am also considering removing the social media sharing buttons and the embedded tweets. (Yes, I am thinking about this!)

On the websites under my control I went full dinosaur: the server sends only non-interactive HTML pages to the client, not requiring any client-side activity. I have now got rid of the last half-hearted usage of a session object and the respective cookie, and I have never used any social media buttons or other tracking.

So there are no login data or cookies to protect, and yet I finally migrated all sites to HTTPS.

It is a matter of principle: I of all website owners should use HTTPS. For 15 years I have been planning and building Public Key Infrastructures and troubleshooting X.509 certificates.

But of course I fear Google’s verdict: They announced long ago that HTTPS is considered a positive ranking signal by their search engine. Pages not using HTTPS will be tagged as insecure with more and more terrifying icons – e.g. HTTP-only pages with login buttons already display a struck-through padlock in Firefox. In the past years I migrated a lot of PKIs from SHA1 to SHA256 to fight the first wave of Insecure icons.

Finally, Let’s Encrypt has started a revolution: free SSL certificates, based on domain validation only. My hosting provider uses a solution based on Let’s Encrypt – a reverse proxy that does the actual HTTPS. I only had to re-target all my DNS records to the reverse proxy – it would have been very easy had it not been for all my existing URL rewriting, tweaking, and redirecting. I also wanted to keep the option of still using HTTP in the future for tests and special scenarios (like hosting a revocation list), so I decided on redirecting myself in the application(s) instead of using the offered automated redirect. But a code review and clean-up now and then can never hurt 🙂 For large, complex sites the migration to HTTPS is anything but easy.
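A do-it-yourself redirect of this kind might look like the sketch below – the exception list for things that should stay on plain HTTP (such as a revocation list) is hypothetical, and the real application logic was of course tangled up with all that rewriting:

```python
# Hypothetical sketch of an application-level HTTPS redirect with an
# exception list for paths that must remain reachable over plain HTTP,
# e.g. a certificate revocation list. Path names are made up.
HTTP_ALLOWED_PREFIXES = ('/crl/',)

def https_redirect(scheme, host, path):
    """Return a (status, headers) pair, or None if no redirect is needed."""
    if scheme == 'https':
        return None                  # already secure
    if any(path.startswith(p) for p in HTTP_ALLOWED_PREFIXES):
        return None                  # keep e.g. revocation lists on HTTP
    return ('301 Moved Permanently',
            [('Location', 'https://' + host + path)])
```

The exception list is the reason for doing this in the application rather than accepting the hoster’s blanket redirect.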

In case I ever forget which domains and host names I use, I just need to check out this list of Subject Alternative Names again:

(And I have another certificate for the ‘test’ host names that I need for testing the sites themselves and also for testing various redirects ;-))

WordPress.com also uses Let’s Encrypt (Automattic is a sponsor), and the SAN elkement.blog is lumped together with several other blog names, allegedly the ones which needed new certificates at about the same time.

It will be interesting to see what the consequences for phishing websites will be. Malicious websites will look trusted, as they are issued certificates automatically, but revoking a certificate might provide another method of invalidating a malicious website.

Anyway, special thanks to the WordPress.com Happiness Engineers and support staff at my hosting provider Puaschitz IT. Despite all the nerdiness displayed on this blog I prefer hosted / ‘shared’ solutions when it comes to my own websites because I totally like it when somebody else has to patch the server and deal with attacks. I am an annoying client – with all kinds of special needs and questions – thanks for the great support! 🙂

Mr. Bubble Was Confused. A Cliffhanger.

This year we experienced a record-breaking January in Austria – the coldest in 30 years. Our heat pump system produced 14m3 of ice in the underground tank.

The volume of ice is measured by Mr. Bubble, the winner of The Ultimate Level Sensor Casting Show run by the Chief Engineer last year:

The classic, analog level sensor was very robust and simple, but required continuous human intervention:

Level sensor: The old way

So a multitude of prototypes had been evaluated …

Level sensors: The precursors

The challenge was to measure small changes in level as 1 mm corresponds to about 0,15 m3 of ice.

Mr. Bubble uses a flow of bubbling air in a tube; the measured pressure increases linearly with the distance of the liquid level from the nozzle:
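Mr. Bubble’s conversion chain can be sketched as follows: bubbler pressure → height of the water column above the nozzle → level rise → ice volume, using the calibration from above (1 mm of level change corresponds to about 0,15 m3 of ice). The clamping to non-negative values mirrors the real system’s calibrated output; the numbers are illustrative:

```python
# Sketch of the pressure-to-ice-volume conversion. The calibration
# factor follows the text (1 mm ~ 0.15 m3 of ice); reference pressure
# and densities are illustrative assumptions.
RHO_WATER = 1000.0    # kg/m3
G = 9.81              # m/s2
M3_ICE_PER_MM = 0.15  # depends on tank geometry

def level_above_nozzle_m(pressure_pa):
    """Hydrostatic pressure -> height of water column above the nozzle."""
    return pressure_pa / (RHO_WATER * G)

def ice_volume_m3(pressure_pa, reference_pressure_pa):
    """Ice volume inferred from the level rise since the reference state."""
    rise_mm = 1000.0 * (level_above_nozzle_m(pressure_pa)
                        - level_above_nozzle_m(reference_pressure_pa))
    return max(0.0, rise_mm * M3_ICE_PER_MM)  # never report negative ice
```

A level rise of 10 mm over the ice-free reference would thus be reported as 1,5 m3 of ice.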


Mr. Bubble is fine and sane as long as ice is growing monotonically: Ice grows from the heat exchanger tubes into the water, and the heat exchanger does not float due to buoyancy, as it is attached to the supporting construction. The design makes sure that not-yet-frozen water can always ‘escape’ to higher levels to make room for growing ice. Finally Mr. Bubble lives inside a hollow cylinder of water inside a block of ice. As long as all the ice is covered by water, Mr. Bubble’s calculation is correct.

But when ambient temperature rises and the collector harvests more energy than needed by the heat pump, melting starts at the heat exchanger tubes. The density of ice is smaller than that of water, so the water level in Mr. Bubble’s hollow cylinder is below the surface level of ice:

Mr. Bubble is utterly confused and literally driven over the edge – having to deal with this cliff of ice:

When ice is melted, the surface level inside the hollow cylinder drops quickly as the diameter of the cylinder is much smaller than the width of the tank. So the alleged volume of ice perceived by Mr. Bubble seems to drop extremely fast and out of proportion: 1m3 of ice is equivalent to 93kWh of energy – the energy our heat pump would need on an extremely cold day. On an ice melting day, the heat pump needs much less, so a drop of more than 1m3 per day is an artefact.
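The 93kWh figure is simply the latent heat of fusion of one cubic meter of water:

```python
# Quick check: freezing 1 m3 of water (about 1000 kg) releases the
# latent heat of fusion, roughly 334 kJ per kg.
mass_kg = 1000.0
latent_heat_j_per_kg = 334e3
energy_kwh = mass_kg * latent_heat_j_per_kg / 3.6e6  # J -> kWh
print(round(energy_kwh, 1))  # ~92.8, i.e. roughly 93 kWh
```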

As long as there are ice castles on the surface, Mr. Bubble keeps underestimating the volume of ice. When it gets colder, ice grows again, and its growth is then overestimated via the same effect. Mr. Bubble amplifies the oscillations in growing and shrinking of ice.

In the final stages of melting a slab-with-a-hole-like structure ‘mounted’ above the water surface remains. The actual level of water is lower than it was before the ice period. This is reflected in the raw data – the distance measured. The volume of ice output is calibrated not to show negative values, but the underlying measurement data do:

Only when all the ice has finally melted – slowly, via thermal contact with air – is the water level back to normal.

In the final stages of melting, parts of the suspended slab of ice may break off, and the resulting small floating icebergs can confuse Mr. Bubble, too:

So how can we picture the true evolution of ice during melting? I am simulating the volume of ice, based on our measurements of air temperature. To be detailed in a future post – this is my cliffhanger!

>> Next episode.

Give the ‘Thing’ a Subnet of Its Own!

To my surprise, the most clicked post ever on this blog is this:

Network Sniffing for Everyone:
Getting to Know Your Things (As in Internet of Things)

… a step-by-step guide to sniffing the network traffic of your ‘things’ contacting their mothership, plus a brief introduction to networking. I wanted to show how you can trace your networked devices’ traffic without any specialized equipment, by being creative with what many users might already have: turning a Windows PC into a router with Internet Connection Sharing.

Recently, an army of captured things took down part of the internet, and this reminded me of this post. No, this is not one more gloomy article about the Internet of Things. I just needed to use this Internet Sharing feature for the very purpose it was actually invented.

The Chief Engineer had finally set up the perfect test lab for programming and testing freely programmable UVR16x2 control systems (successor of the UVR1611). But this test lab was in a spot not equipped with wired ethernet, and the control unit’s data logger and ethernet gateway – the so-called CMI (Control and Monitoring Interface) – only has a LAN interface, no WLAN.

So an ages-old test laptop was revived to serve as a router (improving its ecological footprint in passing): This notebook connects to the standard ‘office’ network via WLAN; this wireless connection is thus the internet connection that can be shared with a device connected to the notebook’s LAN interface, e.g. via a cross-over cable. As explained in detail in the older article, the router-laptop then allows for sniffing the traffic – but above all it allows the ‘thing’ to connect to the internet at all.

This is the setup:

Using a notebook with Internet Connection Sharing enabled as a router to connect the CMI (UVR16x2's ethernet gateway) to the internet

The router laptop is automatically configured with IP address 192.168.137.1 and hands out addresses in the 192.168.137.x network as a DHCP server, while using an IP address provided by the internet router for its WLAN adapter (indicated here as commonly used 192.168.0.x addresses). If Windows 10 is used on the router-notebook, you might need to re-enable ICS after a reboot.
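The addressing can be double-checked with Python’s ipaddress module – the CMI’s address below is just an example DHCP lease, not the real one:

```python
import ipaddress

# The two networks from the setup above: the ICS subnet handed out by
# the router-notebook, and the standard 'office' network.
ics_net = ipaddress.ip_network('192.168.137.0/24')
office_net = ipaddress.ip_network('192.168.0.0/24')

router_lan = ipaddress.ip_address('192.168.137.1')  # fixed by ICS
cmi = ipaddress.ip_address('192.168.137.20')        # example DHCP lease

assert router_lan in ics_net and cmi in ics_net
assert cmi not in office_net  # the CMI is not on the office network
```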

The control unit is connected to the CMI via CAN bus – so the combination of test laptop, CMI, and UVR16x2 control unit is similar to the setup used for investigating CAN monitoring recently.

The CMI ‘thing’ is tucked away in a private subnet dedicated to it, and it cannot be accessed directly from any ‘Office PC’ – except the router PC itself. A standard office PC (green) effectively has to access the CMI via the same ‘cloud’ route as an Internet User (red). This makes the setup a realistic test for future remote support – when the CMI plus control unit has been shipped to its proud owner and is configured on the final local network.

The private subnet setup is also a simple workaround in case several things cannot get along well with each other: For example, an internet TV service flooded CMI’s predecessor BL-NET with packets that were hard to digest – so BL-NET refused to work without a further reboot. Putting the sensitive device in a private subnet – using a ‘spare part’ router – solved the problem.

The Chief Engineer's quiet test lab for testing and programming control units

And Now for Something Completely Different: Rotation Heat Pump!

Heat pumps for space heating are all very similar: Refrigerant evaporates, pressure is increased by a scroll compressor, refrigerant condenses, pressure is reduced in an expansion valve. *yawn*

The question is:

Can a compression heat pump be built in a completely different way?

Austrian start-up ECOP did it: They invented the so-called Rotation Heat Pump.

It does not have a classical compressor, and the ‘refrigerant’ does not undergo a phase transition. A pressure gradient is created by centrifugal forces: The whole system rotates, including the high-pressure (heat sink) and low-pressure (heat source) heat exchangers. The low-pressure part of the system is positioned closer to the rotation axis, and heat sink and heat source are connected at the axis (using heating water). The system rotates at up to 1800 revolutions per minute.
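For a feeling of why such a high speed is needed: for an ideal gas in isothermal equilibrium in the rotating frame, the pressure ratio between radius r and the axis grows as exp(M·ω²·r²/(2RT)). The gas, temperature, and radius in this back-of-the-envelope sketch are assumptions for illustration, not ECOP’s actual design data:

```python
import math

# Back-of-the-envelope centrifugal pressure build-up for an ideal gas
# in isothermal equilibrium: p(r)/p(0) = exp(M * w^2 * r^2 / (2 R T)).
# Gas, temperature, and speed are assumed, illustrative values.
M = 0.040   # kg/mol, an argon-like noble gas
R = 8.314   # J/(mol K), universal gas constant
T = 300.0   # K
rpm = 1800.0
w = rpm * 2 * math.pi / 60  # angular velocity [rad/s]

def pressure_ratio(r):
    """Pressure at radius r relative to the rotation axis."""
    return math.exp(M * w**2 * r**2 / (2 * R * T))
```

Even at 1800 rpm the ratio at a radius of one meter is only of order 1,3 in this toy model – which illustrates why the machine has to spin fast and be fairly large to build up a useful pressure difference.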

A mixture of noble gases is used in a Joule (Brayton) process, driven in a cycle by a ventilator. Gas is compressed and thus heated up; then it is cooled at constant pressure and energy is released to the heat sink. After expanding the gas, it is heated up again at low pressure by the heat source.

In the textbook Joule cycle, a turbine and a compressor share a common axis: The energy released by the turbine is used to drive the compressor. This is essential, as compression and expansion energies are of the same order of magnitude, and both are considerably larger than the net energy difference – the actual input energy.

In contrast to that, a classical compression heat pump uses a refrigerant that is condensed while releasing heat and then evaporated again at low pressure. There is no mini-turbine to reduce the pressure but only an expansion valve, as there is not much energy to gain.

This explains why the Rotation Heat Pump absolutely has to have compression efficiencies of nearly 100%, compared to, say, the 85% efficiency of a scroll compressor in a heat pump used for space heating:

Some numbers for a Joule process (from this German ECOP paper): On expansion of the gas, 1200kW are gained, but 1300kW are needed for compression – if there were no losses at all. So the net input power is 100kW. But if the efficiency of the compression is reduced from 100% to 80%, about 1600kW are needed, and thus the net input power is 500kW – five times the power compared to the ideal compressor! The coefficient of performance would plummet from 10 to 2,3.
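The sensitivity is easy to replay: expansion and compression energies are both much larger than their difference, so a modest compressor inefficiency multiplies the net input. The simplest possible model below – only the compressor degraded to 80%, the turbine still ideal – already yields a net input of about 425kW; the paper’s figure of ~500kW presumably accounts for further losses:

```python
# Figures from the German ECOP paper cited above; the lossy case is a
# deliberate simplification (only the compressor is degraded).
w_expansion = 1200.0    # kW recovered on expansion (ideal)
w_compression = 1300.0  # kW needed for compression (ideal)

net_ideal = w_compression - w_expansion        # 100 kW net input
net_lossy = w_compression / 0.8 - w_expansion  # compressor at 80 %
```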

I believe these challenging requirements are why Rotation Heat Pumps are ‘large’ and built for industrial processes. In addition to the high COP, this heat pump is also very versatile: Since there are no phase transitions, you can pick your favorite corner of the thermodynamic state diagram at will: This heat pump works for very different combinations of temperatures of the hot target and the cold source.

Same Procedure as Every Autumn: New Data for the Heat Pump System

October – time for updating documentation of the heat pump system again! Consolidated data are available in this PDF document.

In the last season there were no special experiments – like last year’s Ice Storage Challenge or using the wood stove. Winter was rather mild, so we needed only ~16.700kWh for space heating plus hot water heating. In the coldest season so far – 2012/13 – the equivalent energy value was ~19.700kWh. The house is located in Eastern Austria, has been built in the 1920s, and has 185m2 floor space since the last major renovation.

(More cross-cultural info:  I use thousands dots and decimal commas).

The seasonal performance factor was about 4,6 [kWh/kWh] – thus the electrical input energy was about 16.700kWh / 4,6 ~ 3.600kWh.

Note: Hot water heating is included and we use flat radiators requiring a higher water supply temperature than the floor heating loops in the new part of the house.

Heating season 2015/2016: Performance data for the 'ice-storage-/solar-powered' heat pump system

Red: Heating energy ‘produced’ by the heat pump – for space heating and hot water heating. Yellow: Electrical input energy. Green: Performance Factor = Ratio of these energies.

The difference of 16.700kWh – 3.600kWh = 13.100kWh was provided by ambient energy, extracted from our heat source – a combination of underground water/ice tank and an unglazed ribbed pipe solar/air collector.
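This bookkeeping is simple enough to replay in two lines:

```python
# Energy bookkeeping from the text: heat output and seasonal performance
# factor determine the electrical input; ambient energy is the difference.
heat_output_kwh = 16700.0
spf = 4.6
electrical_kwh = heat_output_kwh / spf          # ~3630, quoted as ~3600
ambient_kwh = heat_output_kwh - electrical_kwh  # ~13070, quoted as ~13100
```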

The solar/air collector has delivered the greater part of the ambient energy, about 10.500kWh:

Heating season 2015/2016: Energy harvested from air by the collector versus heating-energy

Energy needed for heating per day (heat pump output) versus energy from the solar/air collector – the main part of the heat pump’s input energy. Negative collector energies indicate passive cooling periods in summer.

Peak Ice was 7 cubic meters, after one cold spell of weather in January:

Heating season 2015/2016: Temperature of ambient air, water tank (heat source) and volume of water frozen in the tank.

Ice is formed in the water tank when the energy from the collector is not sufficient to power the heat pump alone, when ambient air temperatures are close to 0°C.

Last autumn’s analysis of economics is still valid: Natural gas costs only about a third as much as electricity per kWh, but with a performance factor well above three, heating costs with this system are lower than they would be with a gas boiler.
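The break-even argument in numbers – the gas price below is an arbitrary placeholder; only the ratio of the prices matters:

```python
# Sketch of the cost comparison: a heat pump beats a gas boiler once its
# seasonal performance factor exceeds roughly the electricity/gas price
# ratio (times the boiler's efficiency). Prices are illustrative.
gas_price = 0.07          # EUR per kWh of gas (assumed placeholder)
el_price = 3 * gas_price  # electricity ~three times as expensive
heat_kwh = 16700.0        # yearly heating energy, as above

cost_gas = heat_kwh / 0.95 * gas_price  # boiler at 95 % efficiency
cost_hp = heat_kwh / 4.6 * el_price     # heat pump at SPF 4.6
```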

Is there anything that changed gradually during all these years and does not primarily depend on climate? We reduced the energy for hot tap water heating by tweaking the water heating schedule gradually: Water is heated up once per day and as late as possible, to avoid cooling off the hot storage tank during the night.

We have now started the fifth heating season. This also marks the fifth anniversary of the day we switched on the first ‘test’ version 1.0 of the system, one year before version 2.0.

It’s been about seven years since the first numerical simulations, four years since I was asked if I was serious about trading in IT security for heat pumps, and one year since I tweeted: