# Cooling Potential

I had an interesting discussion about the cooling potential of our heat pump system – in a climate warmer than ours.

Recently I’ve shown data for the past heating season, including passive cooling performance:

After the heating season, the tank temperature is limited to 10°C as long as possible – the collector is bypassed in the brine circuit (‘switched off’). But from the beginning of May, the tank temperature starts to rise nonetheless, as the tank is heated by the surrounding ground.

Daily cooling energy hardly exceeds 20kWh, so the average cooling power is always well below 1kW. This is much lower than the design peak cooling load – the power you would need to cool the rooms to 20°C at noon on a hot summer day (roughly 10kW for our house).

The blue spikes are single dots for a few days, and they make the curve look more impressive than it really is: We could use about 600kWh of cooling energy – compared to about 15.000kWh for space heating. (Note that I am from Europe – I use decimal commas and thousands dots :-))

There are three ways of ‘harvesting cold’ with this system:

(1) When water in the hygienic storage tank (for domestic hot water) is heated up in summer, the heat pump extracts heat from the underground tank.

Per summer month the heat pump needs about 170kWh of ambient input energy from the cold tank – to produce an output heating energy of about 7kWh per day – 0,3kW on average for two persons, just in line with ‘standards’. This means that nearly all the passive cooling energy we used was ‘produced’ by heating hot water.

You can see the effect on the cooling power available during a hot day here (from this article on passive cooling in the hot summer of 2015):

Blue arrows indicate hot water heating time slots – for half an hour a cooling power of about 4kW was available. But to keep the room temperature at somewhat bearable levels, it was crucial to cool ‘low-tech style’ as well – by opening the windows during the night (Vent).

(2) If nights in late spring and early summer are still cool, the underground tank can be cooled via the collector during the night.

In the last season we gained about 170kWh in total that way – so only as much as one month of hot water heating yields. The effect also depends on control details: if you start cooling early in the season, when you ‘actually do not really need it yet’, you can harvest more cold because of the higher temperature difference between tank and cold air.

(3) You keep the cold or ice you ‘create’ during the heating season.

The set point tank temperature for summer is a trade-off between saving as much cooling energy as possible and keeping the Coefficient of Performance (COP) reasonably high in summer, too – when the heat sink temperature is 50°C because the heat pump only heats hot tap water.

20°C is the maximum heat source temperature allowed by the heat pump vendor. The temperature difference to the set point of 10°C translates to about 300kWh (only) for 25m3 of water. But cold is also transferred to ground and thus the effective store of cold is larger than the tank itself.
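The ‘about 300kWh’ figure can be checked against the heat capacity of water – a quick sanity check using the tank volume and temperature span given above (density and specific heat are standard values, not from the text):

```python
# Sensible heat stored in 25 m3 of tank water over a 10 K span
# (between the 10°C set point and the 20°C maximum source temperature).
volume_m3 = 25.0      # tank volume, as quoted in the text
density = 1000.0      # kg/m3, water
c_water = 4.19        # kJ/(kg K), specific heat of water
delta_T = 10.0        # K

energy_kWh = volume_m3 * density * c_water * delta_T / 3600.0
print(f"{energy_kWh:.0f} kWh")  # about 290 kWh – 'about 300kWh (only)'
```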

What are the options to increase this seasonal storage of cold?

• Turning the collector off earlier. To store as much ice as possible, the collector could even be turned off while still in space heating mode – as we did during the Ice Storage Challenge 2015.
• Active cooling: The store of passive cooling energy is limited – our large tank only contains about 2.000kWh even if frozen completely; if more cooling energy is required, there has to be a cooling backup. Some brine/water heat pumps[#] have a 4-way valve built into the refrigeration cycle, so the roles of evaporator and condenser can be reversed: the room is cooled and the tank is heated up. In contrast to passive cooling, even the luke-warm tank and the surrounding ground are useful then. The cooling COP would be fantastic because of the low temperature difference between source and sink – it might actually be so high that you need special hydraulic precautions to limit it.
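The ~2.000kWh limit for the frozen tank can be estimated from the latent heat of fusion – a rough sketch assuming the full 25m3 of tank water freezes (so it gives an upper bound rather than the usable figure):

```python
# Latent heat released if the whole 25 m3 water tank froze completely.
mass_kg = 25.0 * 1000.0   # 25 m3 of water
h_fusion = 334.0          # kJ/kg, latent heat of fusion of ice

latent_kWh = mass_kg * h_fusion / 3600.0
print(f"{latent_kWh:.0f} kWh")  # about 2300 kWh – same order as ~2.000kWh
```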

The earlier / the more often the collector is turned off to create ice for passive cooling, the worse the heating COP will be. On the other hand, the more cold you save, the more economical cooling becomes later:

1. Because the active cooling COP (or EER[*]) will be higher and
2. Because the total cooling COP summed over both cooling phases will be higher as no electrical input energy is needed for passive cooling – only circulation pumps.

([*] The COP is the ratio of output heating energy and electrical input energy, and the EER – energy efficiency ratio – is the ratio of output cooling energy and electrical input energy. Using kWh as the unit for all energies and assuming condenser and evaporator are completely ‘symmetrical’, the EER of a heat pump used ‘in reverse’ is its heating COP minus 1.)
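The ‘minus 1’ is just first-law bookkeeping: the condenser rejects the evaporator heat plus the compressor work. A minimal sketch (the COP value is an arbitrary assumption):

```python
# Energy balance of an idealized, 'symmetrical' heat pump:
# Q_hot = Q_cold + W  =>  EER = Q_cold / W = Q_hot / W - 1 = COP - 1
def eer_from_cop(cop):
    return cop - 1.0

W = 1.0              # kWh electrical input
cop = 4.5            # assumed heating COP
Q_hot = cop * W      # heat delivered at the condenser
Q_cold = Q_hot - W   # heat taken up at the evaporator
print(Q_cold / W)    # 3.5 = COP - 1
```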

So there would be four distinct ways / phases of running the system in a season:

1. Standard heating using collector and tank. In a warmer climate, the tank might not even be frozen yet.
2. Making ice: At the end of the heating season the collector might be turned off to build up ice for passive cooling. In case of an ‘emergency’ / unexpected cold spell, the collector could be turned on intermittently.
3. Passive cooling: After the end of the heating season, the underground tank cools the buffer tank (via its internal heat exchanger spirals that contain cool brine), which in turn cools the heating floor loops turned ‘cooling loops’.
4. When passive cooling power is not sufficient anymore, active cooling could be turned on. The bulk volume of the buffer tank is cooled now directly with the heat pump, and waste heat is deposited in the underground tank and ground. This will also boost the underground heat sink just right to serve as the heat source again in the upcoming heating season.

In both cooling phases the collector could be turned on in colder nights to cool the tank. This will work much better in the active cooling phase – when the tank is likely to be warmer than the air in the night. Actually, night-time cooling might be the main function the collector would have in a warmer climate.

___________________________________

[#] That seems to apply mainly/only to domestic brine-water heat pumps from North American or Chinese vendors; they offer the reversing valve as a common option. European vendors rather offer a so-called Active Cooling box – a cabinet that can be nearly as large as the heat pump itself. It contains a bunch of valves and heat exchangers that allow for ‘externally’ swapping the connections of condenser and evaporator to heat sink and source, respectively.

# Reverse Engineering Fun

Recently I read a lot about reverse engineering – in relation to malware research. I, for one, simply wanted to get ancient and hardly documented HVAC engineering software to work.

The software in question should have shown a photo of the front panel of a device – knobs and displays – augmented with the current system’s data, and you could have played with settings to ‘simulate’ the control unit’s behavior.

I tested it on several machines, to rule out some typical issues quickly: Will it run on Windows 7? Will it run on a 32bit system? Do I need to run it as Administrator? None of that helped. I actually saw the application’s user interface coming up once, on the Win 7 32bit test machine I had not started in a while. But I could not reproduce the correct start-up, and in all other attempts on all other machines I just encountered an error message … that used an Asian character set.

I poked around the files and folders the application uses. There were some .xls and .xml files, and most text was in the foreign character set. The Asian error message was a generic Windows dialogue box: you cannot select the text within it directly, but the whole contents of such error messages can be copied using Ctrl+C. Pasting it into Google Translate told me:

Failed to read the XY device data file

Checking the files again, there was an xydevice.xls file, and I wondered if the relative path from exe to xls did not work, or if it was an issue with permissions. The latter was hard to believe, given that I had simply copied the whole bunch of files, my user having the same (full) permissions on all of them.

I started Microsoft Sysinternals Process Monitor to check if the application was groping in vain for the file. It found the file just fine in the right location:

Immediately before accessing the file, the application looped through registry entries for Microsoft JET database drivers for Office files – the last one it probed was msexcl40.dll – a database driver for accessing Excel files.

There is no obvious error in this dump: The xls file was closed before the Windows error popup was brought up; so the application had handled the error somehow.

I had been tinkering a lot myself with database drivers for Excel spreadsheets, Access databases, and even text files – so that looked like a familiar engineering software hack to me 🙂 On start-up the application created a bunch of XML files – I saw them once, right after I saw the GUI once in that non-reproducible test. As far as I could decipher the content in the foreign language, the entries were taken from that problematic xls file which contained a formatted table. It seemed that the application was using a sheet in the xls file as a database table.

What went wrong? I started the Windows debugger WinDbg (part of the Debugging Tools for Windows). I tried to go to the next unhandled or handled exception, and I saw again that it stumbled over msexcl40.dll:

But here was finally a complete and googleable error message in nerd speak:

Unexpected error from external database driver (1).

This sounded generic and I was not very optimistic. But this recent Microsoft article was one of the few mentioning the specific error message – an overview of operating system updates and fixes, dated October 2017. It describes exactly the observed issue with using the JET database driver to access an xls file:

Finally my curious observation of the non-reproducible single successful test made sense: when I started the exe on the Win 7 test client, this computer had been started for the first time in ~3 months; it was old and slow, and it was just processing Windows Updates – so at the first run the software had worked because the deadly Windows Update had not been applied yet.

Also the ‘2007 timeframe’ mentioned was consistent – as all the application’s executable files were nearly 10 years old. The recommended strategy is to use a more modern version of the database driver, but Microsoft also states they will fix it again in a future version.

So I did not get the software to run, as I obviously cannot fix somebody else’s compiled code – but I could provide the exact information needed by the developer to repair it.

But the key message in this post is that it was simply a lot of fun to track this down 🙂

# Simulating Life-Forms (2): Cooling Energy

I found this comprehensive research report:
Energy Use in the Australian Residential Sector 1986–2020 (June 2008)

There are many interesting results – and the level of detail is impressive: The authors modelled the energy used per appliance type, by e.g. factoring in how building types change slowly over time or by modelling the development of TV sets and their usage. Occupancy factors for buildings are determined from assumptions about typical usage profiles called Stay At Home, At Work or Night Owl.

I zoom in on simulating and predicting usage of air conditioning and thus cooling energy:

They went to great lengths to simulate the behavior of home owners in order to model the operation of air conditioning, and thus total cooling energy for a season, for a state, or the whole country.

The authors investigated the official simulation software used for rating buildings (from …part2.pdf):

In the AccuRate software, once cooling is invoked the program continues to assume that the occupant is willing to tolerate less than optimal comfort conditions and will therefore terminate cooling if in the absence of such cooling the internal temperature would not rise above the summer neutral temperature noted in Table 57, +2.5°C plus allowances for humidity and air movement as applicable. While this may be appropriate for rating purposes, it is considered to be an unlikely form of behaviour to be adopted by householders in the field and as such this assumption is likely to underestimate the potential space cooling demand. This theory is supported by the survey work undertaken by McGreggor in South Australia.

This confirms what I am saying all the time: the more modern a building is – or, generally nowadays, given ‘modern’ home owners’ requirements – the more important it would be to actually simulate humans’ behavior, on top of the physics and the control logic.

The research study also points out e.g. that AC usage has been on the rise, because units got affordable, modern houses are built with less focus on shading, and home owners demand higher standards of comfort. Ducted cooling systems that cover the cooling load of the whole house are being implemented, and they replace systems for cooling single zones only. Those ducted systems have a rated output cooling power greater than 10kW – so the authors (and it seems Australian governmental decision makers) are worried about the impact on the stability of the power grid on hot days [*].

Once AC has been turned on for the first time in the hot season, home owners don’t switch it off again when the theoretical ‘neutral’ summer temperature is reached, but keep it on and try to maintain a lower temperature (22-23°C) that is roughly constant irrespective of the temperature outside. So small differences in actual behavior cause huge error bars in total cooling energy for a season:

The impact of this resetting of the cooling thermostat operation was found to be significant. A comparison was undertaken between cooling loads determined using the AccuRate default thermostat settings and the modified settings as described above. A single-storey brick veneer detached dwelling with concrete slab on ground floor and ceiling insulation was used for the comparison. The comparison was undertaken in both the Adelaide and the Darwin climate zones. In Adelaide the modified settings produced an increased annual cooling load 64% higher than that using the AccuRate default settings.
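A toy model illustrates this sensitivity. The temperature profile below is synthetic (not data from the report), and ‘cooling demand’ is crudely taken as the positive part of outdoor temperature minus set point, summed over a hot day – enough to show how a few degrees of set point change the total:

```python
import math

# Synthetic diurnal profile: 28°C mean, +/-8 K amplitude, peak mid-afternoon.
t_out = [28.0 + 8.0 * math.sin(2 * math.pi * (h - 9) / 24) for h in range(24)]

def demand(set_point):
    # Crude proxy for cooling demand: degree-hours above the set point.
    return sum(max(t - set_point, 0.0) for t in t_out)

d_neutral = demand(25.5)  # a 'neutral' rating-style set point (assumed)
d_actual = demand(22.5)   # what home owners actually maintain
print(f"relative increase: {d_actual / d_neutral - 1:.0%}")
```

With these made-up numbers the lower set point already increases the daily degree-hours by half – the same order as the 64% found in the report's comparison.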

The report also confirms my anecdotal evidence: In winter (colder regions) people heat rooms to higher temperatures than ‘expected’; in summer (warmer regions) people want to cool to a lower temperature:

This is perhaps not surprising, de Dear notes that: “preferred temperature for a particular building did not necessarily coincide with thermal neutrality, and this semantic discrepancy was most evident in HVAC buildings where preference was depressed below neutrality in warm climates and elevated above neutrality in cold climates (ie people preferred to feel cooler than neutral in warm climates, and warmer than neutral in cold climates)” (Richard de Dear et al 1997, P xi).

I noticed that the same people who (over-)heat their rooms to 24°C in winter might want to cool to 20°C in summer. In central Europe AC in private homes has been uncommon, but I believe it is on the rise, too – also because home owners have become accustomed to a certain level of cooling when working in typical office buildings.

My conclusion is (yet again) that you cannot reliably ‘predict’ cooling energy. It’s already hard to do so for heating energy for low energy houses, but nearly impossible for cooling energy. All you can do – from a practical / system’s design perspective – is to make sure that there is an ‘infinite’ source of cooling energy available.

_________________________________

[*] Edit: And it actually happened in February 2017.

# Entropy and Dimensions (Following Landau and Lifshitz)

Some time ago I wrote about volumes of spheres in multi-dimensional phase space – as needed in integrals in statistical mechanics.

The post was primarily about the curious fact that the ‘bulk of the volume’ of such spheres is contained in a thin shell beneath their hyperspherical surfaces. The trick to calculating something reasonable is to spot expressions you can Taylor-expand in the exponent.

Large numbers ‘do not get much bigger’ if multiplied by a factor, to be demonstrated again by Taylor-expanding such a large number in the exponent; I used this example:

Assuming N is about $10^{25}$, its natural logarithm is about 58, and $Ne^N = e^{\ln(N)+N} = e^{58+10^{25}}$ – the 58 can be neglected compared to N itself.
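This is easy to check numerically – working with logarithms, since $e^N$ itself is far beyond floating-point range:

```python
import math

N = 1e25
# ln(N * e**N) = ln(N) + N: compare the two terms instead of computing e**N.
print(math.log(N))      # about 57.6
print(math.log(N) / N)  # about 5.8e-24 – ln(N) is utterly negligible
```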

However, in the real world numbers associated with locations and momenta of particles come with units. Calling the unit ‘length’ in phase space $R_0$ the large volume can be written as $aN{(\frac{r}{R_0})}^N = ae^{\ln{(N)} + N\ln{(\frac{r}{R_0})}}$, and the impact of an additional factor N also depends on the unit length chosen.

I did not yet mention the related issues with the definition of entropy. In this post I will follow the way Landau and Lifshitz introduce entropy in Statistical Physics, Volume 5 of their Course of Theoretical Physics.

Landau and Lifshitz introduce statistical mechanics top-down, starting from fundamental principles and from Hamiltonian classical mechanics: no applications, no definitions of ‘heat’ and ‘work’, no historical references needed for motivation. Classical phenomenological thermodynamics is only introduced after they are done with the statistical foundations. Both entropy and temperature are defined – these are useful fundamental properties spotted in the mathematical derivations and thus deserve special names. They cover both classical and quantum statistics in a small number of pages – LL’s style has been called terse or elegant.

The behaviour of a system with a large number of particles is encoded in a probability distribution function in phase space, a density. In the classical case this is a continuous function of phase-space co-ordinates. In the quantum case you consider distinct states – whose energy levels are densely packed together though. Moving from classical to quantum statistics means to count those states rather than to integrate the smooth density function over a volume. There are equivalent states created by permutations of identical particles – but factoring in that is postponed and not required for a first definition of entropy. A quasi-classical description is sufficient: using a minimum cell in phase space, whose dimensions are defined by Planck’s constant h that has a dimension of action – length times momentum.

Entropy as statistical weight

Entropy S is defined as the logarithm of the statistical weight $\Delta \Gamma$ – the number of quantum states associated with the part of phase space used by the (sub)system. (Landau and Lifshitz use the concept of a – still large – subsystem embedded in a larger volume most consistently, in order to avoid reliance on the ergodic hypothesis, as mentioned in the preface.) In the quasi-classical view the statistical weight is the volume in phase space occupied by the system divided by the size of the minimum unit cell defined by Planck’s constant h. Denoting momenta by p, positions by q, and using $\Delta p$ and $\Delta q$ as shortcuts for products over all dimensions – with s degrees of freedom…

$S = \log \Delta \Gamma = \log \frac {\Delta p \, \Delta q}{(2 \pi \hbar)^s}$

An example from solid state physics: if the system is considered a rectangular box in the physical world, possible quantum states related to vibrations can be visualized in terms of the standing waves that ‘fit’ into the box. The statistical weight would then single out the bunch of states the system actually ‘has’ / ‘uses’ / ‘occupies’ in the long run.

Different sorts of statistical functions are introduced, and one reason for writing this article is to emphasize the difference between them: The density function associates each point in phase space – each possible configuration of a system characterized by the momenta and locations of all particles – with a probability. These points are also called microstates. Taking into account the probabilities to find a system in any of these microstates gives you the so-called macrostate characterized by the statistical weight: How large or small a part of phase space the system will use when watched for a long time.

The canonical example is an ideal gas in a vessel: The most probable spatial distribution of particles is to find them spread out evenly; the most unlikely configuration is to have them concentrated in (nearly) the same location, like one corner of the box. The density function assigns probabilities to these configurations. As the even distribution is so much more likely, the $\Delta q$ part of the statistical weight would cover all of the physical volume available. The statistical weight function has to obtain a maximum value in the most likely case, in equilibrium.

The significance of energies – and why there are logarithms everywhere.

Different sufficiently large subsystems of one big system are statistically independent – as their properties are defined by their bulk volume rather than by their surfaces interfacing with other subsystems – and the larger the volume, the larger the ratio of volume to surface. Thus the probability density function for the combined system – as a function of the momenta and locations of all particles in the total phase space – has to be equal to the product of the densities for each subsystem. Denoting the classical density with $\rho$ and adding a subscript for the set of momenta and positions referring to a subsystem:

$\rho(q,p) = \rho_1(q_1,p_1) \rho_2(q_2,p_2)$

(Since these are probability densities, the actual probability is always obtained by multiplying with the differential(s) $dqdp$).

This means that the logarithm of the composite density is equal to the sum of the logarithms of the individual densities. This is the root cause of logarithms showing up everywhere in statistical mechanics.

A mechanical system of particles is characterized by only 7 ‘meaningful’ additive integrals: energy, momentum, and angular momentum – they add up when you combine systems, in contrast to all the other millions of integration constants that would appear when solving the equations of motion exactly. Momentum and angular momentum are not that interesting thermodynamically, as one can change to a frame moving and rotating with the system (LL also cover rotating systems). So energy remains as the integral of outstanding importance.

From counting states to energy intervals

What we want is to relate entropy to energy, so assertions about numbers of states covered need to be translated to statements about energy and energy ranges.

LL denote the probability to find a system in (micro-)state n with energy $E_n$ as $w_n$ – the quantum equivalent of density $\rho$. The logarithm of $w_n$ has to be a linear function of the energy $E_n$ of this micro-state, as per the additivity just mentioned above – so $w_n$ depends on n only via $E_n$, and thus LL omit the subscript n for w:

$w_n = w(E_n)$

(They omit any symbol wherever possible to keep their notation succinct ;-))

A thermodynamic system has an enormous number of (mechanical) degrees of freedom. Fluctuations are small as per the law of large numbers in statistics, and the probability to find a system with a certain energy can be approximated by a sharp, delta-function-like peak at the system’s energy E. So in thermal equilibrium its energy has a very sharp peak. The system occupies a very thin ‘layer’ of thickness $\Delta E$ in phase space – around the hypersurface that characterizes its average energy E.

Statistical weight $\Delta \Gamma$ can be considered the width of the related function: Energy-wise broadening of the macroscopic state $\Delta E$ needs to be translated to a broadening related to the number of quantum states.

We change variables, so the connection between Γ and E is made via the derivative of Γ with respect to E. E is an integral, statistical property of the whole system, and the probability for the system to have energy E in equilibrium is $W(E)dE$. E is not discrete, so this is again a probability density. It is capital W now – in contrast to $w_n$, which says something about the ‘population’ of each quantum state with energy $E_n$.

A quasi-continuous number of states per energy Γ is related to E by the differential:

$d\Gamma = \frac{d\Gamma}{dE} dE$.

As E peaks so sharply and the energy levels are packed so densely, it is reasonable to use the function (small) w but calculate it for an argument value E. Capital W(E) is a probability density as a function of total energy; small w(E) is a function of discrete energies denoting states – so it has to be multiplied by the number of states in the range in question:

$W(E)dE = w(E)d\Gamma$

Thus…

$W(E) = w(E)\frac{d\Gamma}{dE}$.

The delta-function-like functions (of energy or states) have to be normalized, and the widths $\Delta E$ and $\Delta \Gamma$ multiplied by the respective heights W and w taken at the average energy $E_\text{avg}$ have to be 1, respectively:

$W(E_\text{avg}) \Delta E = 1$
$w(E_\text{avg}) \Delta \Gamma = 1$

(… and the ‘average’ energy is what is simply called ‘the’ energy in classical thermodynamics).

So $\Delta \Gamma$ is inversely proportional to the probability of the most likely state (of average energy). This can also be concluded from the quasi-classical definition: if you imagine a box full of particles, the least probable state is equivalent to all particles occupying a single cell in phase space. The probability for that is smaller than the probability to find the particles evenly distributed over the whole box by a factor of (size of the unit cell) over (size of the box) … which is exactly the inverse of $\Delta \Gamma$.

The statistical weight is finally:

$\Delta \Gamma = \frac{d\Gamma(E_\text{avg})}{dE} \Delta E$.

… the broadening in $\Gamma$, proportional to the broadening in $E$

The more familiar (?) definition of entropy

From that, you can recover another familiar definition of entropy, perhaps the more common one. Taking the logarithm…

$S = \log (\Delta \Gamma) = -\log (w(E_\text{avg}))$.

As log w is linear in E, the averaging over E can be extended to the whole log function. Then the definition of ‘averaging over states n’ can be used: multiply the value for each state n by its probability $w_n$ and sum up:

$S = - \sum_{n} w_n \log w_n$.

… which is the first statistical expression for entropy I had once learned.
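The two definitions agree, as a quick numerical check shows – e.g. for a uniform distribution over $\Delta \Gamma$ equally probable states (a minimal sketch):

```python
import math

delta_gamma = 1000                      # assumed number of occupied states
w = [1.0 / delta_gamma] * delta_gamma   # uniform probabilities w_n

S_sum = -sum(p * math.log(p) for p in w)  # -sum over n of w_n ln(w_n)
S_log = math.log(delta_gamma)             # ln(DeltaGamma)
print(S_sum, S_log)  # both about 6.9
```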

LL do not introduce Boltzmann’s constant k here

It is effectively set to 1 – so entropy is defined without a reference to k. k is only mentioned in passing later: in case one wishes to measure energy and temperature in different units. But there is no need to do so, if you define entropy and temperature based on first principles.

Back to units

In a purely classical description based on the volume in phase space instead of the number of states, there would be no cell of minimum size, and instead of the statistical weight we would simply have this volume. But then entropy would be calculated in a very awkward unit, the logarithm of action. Every change of the unit for measuring volumes in phase space would result in an additive constant – the deeper reason why entropy in a classical context is only defined up to such a constant.

So the natural unit called $R_0$ above should actually be Planck’s constant taken to the power defined by the number of particles.

Temperature

The first task to be solved in statistical mechanics is to find a general way of formulating a proper density function small $w_n$ as a function of energy $E_n$. You can either assume that the system has a clearly defined energy upfront – the system lives on an ‘energy hypersurface in phase space’ – or you can consider it immersed in a larger system, later identified with a ‘heat bath’, which causes the system to reach thermal equilibrium. These two concepts are called the micro-canonical and the canonical distribution (or Gibbs distribution), and the actual distribution functions don’t differ much because the energy peaks so sharply also in the canonical case. It’s in that type of calculation that those hyperspheres are actually needed.

Temperature as a concept emerges from a closer look at these distributions, but LL introduce it upfront from simpler considerations: It is sufficient to know that 1) entropy only depends on energy, 2) both are additive functions of subsystems, and 3) entropy is a maximum in equilibrium. You divide one system into two subsystems. The total change in entropy has to be zero as entropy is at a maximum (in equilibrium), and what energy $dE_1$ leaves one system has to be received as $dE_2 = -dE_1$ by the other system. Taking a look at the total entropy S as a function of the energy of one subsystem:

$0 = \frac{dS}{dE_1} = \frac{dS_1}{dE_1} + \frac{dS_2}{dE_1} =$
$= \frac{dS_1}{dE_1} + \frac{dS_2}{dE_2} \frac{dE_2}{dE_1} =$
$= \frac{dS_1}{dE_1} - \frac{dS_2}{dE_2}$

(using $\frac{dE_2}{dE_1} = -1$, as $dE_2 = -dE_1$)

So $\frac{dS_x}{dE_x}$ has to be the same for each subsystem x. Cutting one of the subsystems in two, you can use the same argument again. So there is one very interesting quantity that is the same for every subsystem – $\frac{dS}{dE}$. Let’s call it 1/T, and let’s call T the temperature.

# The Impact of Collector Size

Recently I presented the usual update of our documentation of system and measurement data. The PDF document contains consolidated numbers for each year and month of operations:

Total output heating energy (incl. hot tap water), electrical input energy (incl. brine pump), and their ratio – the performance factor. Seasons always start at Sept. 1, except the first season, which started in Nov. 2011. For ‘special experiments’ that had an impact on the results see the text and the PDF linked above.

It is finally time to tackle the fundamental questions:

What is the impact of the size of the solar/air collector?

or

What is the typical output power of the collector?

In 2014 the Chief Engineer had rebuilt the collector so that you can toggle between using 12m2 or the full 24m2:

TOP: Full collector – hydraulics as in seasons 2012, 2013. Active again since Sept. 2017. BOTTOM: Half of the collector, used in seasons 2014, 15, and 16.

Do we have data for seasons we can compare in a reasonable way – seasons that (mainly) differ by collector area?

We disregard seasons 2014 and 2016 – we had to get rid of a nearly 100-year-old roof truss and only heated the ground floor with the heat pump.

Attic rebuild project – point of maximum destruction – generation of fuel for the wood stove.

Season 2014 was atypical anyway because of the Ice Storage Challenge experiment.

Then seasonal heating energy should be comparable – so we don’t consider the cold seasons 2012 and 2016.

Remaining warm seasons: 2013 – where the full collector was used – and 2015 (half collector). The whole house was heated with the heat pump; heating energies and ambient energies were similar – and performance factors were basically identical. So we checked the numbers for the ice months Dec/Jan/Feb. Here a difference can be spotted, but it is far less dramatic than expected. For half the collector:

• Collector harvest is about 10% lower
• Performance factor is lower by about 0,2
• Brine inlet temperature for the heat pump is about 1,5K lower

The upper half of the collector is used, as indicated by hoarfrost.

It was counter-intuitive, and I scrutinized Data Kraken to check it for bugs.

But actually we had forgotten that we predicted this years ago: simulations show the trend correctly, and it suffices to do some basic theoretical calculations. You only need to know how to represent a heat exchanger’s power in two different ways:

Power is either determined by the temperature of the fluid when it enters and exits the exchanger tubes …

[1]   (T_brine_outlet – T_brine_inlet) * flow_rate * specific_heat

… but power can also be calculated from the heat energy flow from brine to air – over the surface area of the tubes:

[2]   delta_T_brine_air * Exchange_area * some_coefficient

Delta T is an average over the whole exchanger length (actually a logarithmic average, but an arithmetic average is good enough for typical parameters). Some_coefficient is a parameter that characterizes heat transfer per area or per length of a tube, so Exchange_area * some_coefficient could also be called the total heat transfer coefficient.
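The two representations can be put into a few lines of code – a minimal sketch with made-up illustrative parameters, not measured values from our system:

```python
# Two ways to express a heat exchanger's power.
# All parameters below are illustrative, not measured system data.

RHO_BRINE = 1030.0   # kg/m3, glycol-water mixture (approximate)
CP_BRINE = 3800.0    # J/(kg K), specific heat of brine (approximate)

def power_from_brine_temps(t_outlet, t_inlet, flow_rate_m3_h):
    """Equation [1]: power carried by the brine flow, in W."""
    flow_kg_s = flow_rate_m3_h / 3600.0 * RHO_BRINE
    return (t_outlet - t_inlet) * flow_kg_s * CP_BRINE

def power_from_delta_t(delta_t_brine_air, area_m2, coeff_w_m2k):
    """Equation [2]: heat flow over the exchanger surface, in W."""
    return delta_t_brine_air * area_m2 * coeff_w_m2k

# Example: brine warms from -5C to -3C at 1.5 m3/h ...
p1 = power_from_brine_temps(-3.0, -5.0, 1.5)
# ... which, for a 24 m2 collector with a heat transfer coefficient
# of 25 W/(m2 K), requires this mean brine-air temperature difference:
delta_t = p1 / (24.0 * 25.0)
print(f"power = {p1:.0f} W, required delta T = {delta_t:.1f} K")
```

Both expressions describe the same power; equating them is what lets you solve for the unknown temperatures later on.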

If several heat exchangers are connected in series, their powers are not independent as they share common temperatures of the fluid at the intersection points:

The brine circuit connecting heat pump, collector and the underground water/ice storage tank. The three ‘interesting’ temperatures before/after the heat pump, collector and tank can be calculated from the current power of the heat pump, ambient air temperature, and tank temperature.

When the heat pump is off in ‘collector regeneration mode’, the collector and the heat exchanger in the tank necessarily transfer heat at the same power per equation [1] – as one’s brine inlet temperature is the other one’s outlet temperature, the flow rate is the same, and so is the specific heat (whose temperature dependence can be ignored).

But powers can also be expressed by [2]: Each exchanger has a different area, a different heat transfer coefficient, and different mean temperature difference to the ambient medium.

So there are three equations…

• 1 equation of type [1]: the power carried by the brine flow, shared by both exchangers.
• 2 equations of type [2], one with specific parameters for collector and air, the other for the heat exchanger in the tank.

… and from those the three unknowns can be calculated: brine inlet temperature, brine outlet temperature, and harvesting power. All is simple and linear; it is no big surprise that collector harvesting power is proportional to the temperature difference between air and tank. The warmer the air, the more you harvest.
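These linear relations can be solved in a few lines – a minimal sketch with made-up coefficients (not our system’s actual parameters), using the arithmetic-mean approximation for delta T:

```python
# Sketch of the 'heat pump off' case: collector and tank exchanger
# in series share one brine loop. Coefficients are illustrative.

def harvesting_power(t_air, t_tank, f_collector, f_tank):
    """Collector power in regeneration mode, in W.
    f_* are total heat transfer coefficients (area x coefficient) in W/K.
    With the arithmetic-mean approximation, both exchangers see the same
    mean brine temperature, and [1]/[2] reduce to:
        P = (t_air - t_tank) * f1*f2 / (f1 + f2)"""
    f_total = f_collector * f_tank / (f_collector + f_tank)
    return (t_air - t_tank) * f_total

def brine_temperatures(t_air, t_tank, f_collector, f_tank, flow_w_per_k):
    """Collector inlet/outlet temperatures plus power, given the brine
    'flow capacity' (flow rate x density x specific heat) in W/K."""
    p = harvesting_power(t_air, t_tank, f_collector, f_tank)
    t_mean = t_air - p / f_collector   # equation [2] for the collector
    spread = p / flow_w_per_k          # equation [1]
    return t_mean - spread / 2, t_mean + spread / 2, p

# Illustrative numbers: mild air at 10C, tank at 2C,
# both exchangers with a total coefficient of 600 W/K:
t_in, t_out, p = brine_temperatures(10.0, 2.0, 600.0, 600.0, 1600.0)
print(f"P = {p:.0f} W, brine {t_in:.2f} -> {t_out:.2f} C")
```

As a consistency check: the tank sees the same mean brine temperature of 6°C, so it also absorbs (6 − 2) × 600 = 2400 W.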

The combination of coefficient factors is the ratio of the product of the total coefficients and their sum, like: $\frac{f_1 f_2}{f_1 + f_2}$ – the inverse of the sum of inverses.

This formula shows what one might have guessed intuitively: If one of the factors is much bigger than the other – if one of the heat exchangers is already much ‘better’ than the other – then it does not help to make the better one even better. In the denominator, the smaller number in the sum can be neglected before and after optimization, the superior properties always cancel out, and the ‘bad’ component fully determines performance. (If one of the ‘factors’ is zero, total power is zero.) Examples for ‘bad’ exchangers: If the heat exchanger tubes in the tank are much too short or if a flat plate collector is used instead of an unglazed collector.

On the other hand, if you make a formerly ‘worse’ exchanger much better, the ratio will change significantly. If both exchangers have properties of the same order of magnitude – which is what we design our systems for – optimizing one will change things for the better, but never linearly, as effects always cancel out to some extent (you increase numbers in both numerator and denominator of the fraction).

So there is no ‘rated performance’ in kW or kW per area you could attach to a collector. Its effective performance also depends on the properties of the heat exchanger in the tank.

But there is a subtle consequence to consider: The smaller collector can deliver the same energy and thus ‘has’ twice the power per area. However, air temperature is given, and [2] must hold: In order to achieve this, the delta T between brine and air necessarily has to increase. So brine will be a bit colder and thus the heat pump’s Coefficient of Performance will be a bit lower. Over a full season – including the warm periods when only hot water is heated – the effect is less pronounced, but we see a more significant change in performance data and brine inlet temperature for the ice months of the respective seasons.

# The Orphaned Internet Domain Risk

I have clicked on company websites of social media acquaintances, and something is not right: Slight errors in formatting, encoding errors for special German characters.

Then I notice that some of the pages contain links to other websites that advertise products in a spammy way. However, the links to the spammy sites are embedded in these alleged company websites in a subtle way: using the (nearly) correct layout, or embedding the link in a ‘news article’ that also contains legit product information – content really related to the internet domain I am visiting.

Looking up whois information tells me that these internet domains are not owned by my friends anymore – consistent with what they actually say on their social media profiles. So how come they ‘have given’ their former domains to spammers? They did not, and they didn’t need to: Spammers simply need to watch out for expired domains, seize them when they are available – and then reconstruct the former legit content from public archives, and interleave it with their spammy messages.

The former content of legitimate sites is often available on the web archive. Here is the timeline of one of the sites I checked:

Clicking on the details shows:

• Last display of legit content in 2008.
• In 2012 and 2013 a generic message from the hosting provider was displayed: This site has been registered by one of our clients
• After that we see mainly 403 Forbidden errors – so the spammers don’t want their site to be archived – but at one time a screen capture of the spammy site had been taken.

The new site shows the name of the former owner at the bottom but an unobtrusive link had been added, indicating the new owner – a US-based marketing and SEO consultancy.

So my takeaway is: If you ever feel like decluttering your websites and freeing yourself of your useless digital possessions – and possibly also social media accounts – think twice: As soon as your domain or name is available, somebody might take it, and re-use and exploit your former content and possibly your former reputation for promoting their spammy stuff in a shady way.

This happened a while ago, but I know now it can get much worse: Why only distribute marketing spam if you can distribute malware through channels still considered trusted? In this blog post Malwarebytes raises the question if such practices are illegal or not – it seems that question is not straightforward to answer.

Visitors do not even have to visit the abandoned domain explicitly to be served malware. I have seen some reports of abandoned embedded plug-ins turned into malicious zombies. Silly example: If you embed your latest tweets, Twitter goes out of business, and its domains are seized by spammers – your Follow Me icon might help to spread malware.

If a legit site runs third-party code, it needs to trust the authors of this code. For example, Equifax’ website recently served spyware:

… the problem stemmed from a “third-party vendor that Equifax uses to collect website performance data,” and that “the vendor’s code running on an Equifax Web site was serving malicious content.”

So if you run any plug-ins, embedded widgets, or the like – better check regularly whether the originating domain is still run by the expected owner; monitor your vendors, and don’t run code you do not absolutely need in the first place. Don’t use embedded active badges if a simple link to your profile would do.

Do a painful, boring inventory and assessment regularly – then you will notice how much work it is to manage these ‘partners’, and you will rather stay away from signing up and registering for too many services.
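Such an inventory can be kick-started with a small script. This is a minimal sketch (page content and domain names are hypothetical) that lists the external domains a page pulls active content from – it only sees static src attributes, not dynamically injected code:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class ThirdPartyAudit(HTMLParser):
    """Collect external domains a page loads active content from."""
    ACTIVE_TAGS = {"script", "iframe", "embed", "object"}

    def __init__(self, own_domain):
        super().__init__()
        self.own_domain = own_domain
        self.domains = set()

    def handle_starttag(self, tag, attrs):
        if tag not in self.ACTIVE_TAGS:
            return
        for name, value in attrs:
            if name in ("src", "data") and value:
                domain = urlparse(value).netloc
                if domain and domain != self.own_domain:
                    self.domains.add(domain)

# Hypothetical page embedding two third-party widgets:
page = '''<html><body>
<script src="https://widgets.example-cdn.com/badge.js"></script>
<iframe src="https://platform.example-social.com/follow"></iframe>
<img src="/local/logo.png">
</body></html>'''

audit = ThirdPartyAudit("my-company.example")
audit.feed(page)
print(sorted(audit.domains))
# -> ['platform.example-social.com', 'widgets.example-cdn.com']
```

Each domain on that list is a ‘partner’ whose ownership you would have to re-check periodically.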

Update 2017-10-25: And as we speak, we learn about another example – snatching a domain used for a Dell backup software, preinstalled on PCs.

# Data for the Heat Pump System: Heating Season 2016-2017

I update the documentation of measurement data [PDF] about twice a year. This post is to provide a quick overview for the past season.

The PDF also contains the technical configuration and sizing data. Based on typical questions from an ‘international audience’ I add a summary here plus some ‘cultural’ context:

Building: The house is a renovated, nearly 100-year-old building in Eastern Austria: a typical so-called ‘Streckhof’ – an elongated, former small farmhouse. Some details are mentioned here. Heating energy for space heating of two storeys (185m2) and hot water is about 17.000-20.000kWh per year. The roof / attic had been rebuilt in 2008, and the facade was thermally insulated. However, the major part of the house has no underground level, so most energy is lost via the ground. Heating only the ground floor (75m2) with the heat pump reduces heating energy only by 1/3.
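For readers used to specific heating demand figures, a back-of-envelope conversion of the numbers above (note they include hot water, so space heating alone is somewhat lower):

```python
# Back-of-envelope: specific heating demand per heated floor area.
# 185 m2 and 17,000-20,000 kWh/year are the figures quoted above.
floor_area_m2 = 185.0
for annual_kwh in (17000.0, 20000.0):
    specific = annual_kwh / floor_area_m2
    print(f"{annual_kwh:.0f} kWh/year -> {specific:.0f} kWh per m2 and year")
```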

Climate: This is the sunniest region of Austria – the lowlands of the Pannonian Plain bordering Hungary. We have Pannonian ‘continental’ climate with low precipitation. Normally, monthly average temperatures in winter are only slightly below 0°C in January, and weeks of ‘ice days’ in a row are very rare.

Heat energy distribution and storage (in the house): The renovated first floor has floor loops while at the ground floor mainly radiators are used. Wall heating has been installed in one room so far. A buffer tank is used for the heating water as this is a simple ‘on-off’ heat pump always operating at about its rated power. Domestic hot water is heated indirectly using a hygienic storage tank.

Heating system: An off-the-shelf, simple brine-water heat pump uses a combination of an unglazed solar-air collector and an underground water tank as a heat source. Energy is mainly harvested from rather cold air via convection.

Addressing often asked questions: Off-the-shelf = same type of heat pump as used with geothermal systems. Simple: Not smart, not trying to be the universal energy management system – the smartness is in our own control unit and the logic for managing the heat source(s). Brine: A mixture of glycol and water (similar to the fluid used with flat solar thermal collectors) = antifreeze, as the temperature of the brine is below 0°C in winter. The tank is not a seasonal energy storage but a buffer for days or weeks. In this post hydraulics is described in detail, as well as typical operating conditions throughout a year. Both tank and collector are needed: The tank provides a buffer of latent energy during ‘ice periods’ and allows to harvest more energy from air, but the collector actually provides about 75% of the total ambient energy the heat pump needs in a season.

Tank and collector are rather generously sized in relation to the heating demands: about 25m3 of water (total volume +10% freezing reserve) and 24m2 of collector area.

The overall history of data documented in the PDF also reflects ongoing changes and some experiments, like heating the first floor with a wood stove, toggling the effective area of the collector used between 50% and 100%, or switching off the collector to simulate a harsher winter.

Data for the past season

Finally we could create a giant ice cube naturally. 14m3 of ice had been created in the coldest January in 30 years. The monthly average temperature was -3,6°C, 3 degrees below the long-term average.
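As a plausibility check, the latent heat released by freezing this much water can be estimated (back-of-envelope, assuming textbook values of ~334 kJ/kg for the heat of fusion and ~917 kg/m3 for the density of ice – not a measured value from our system):

```python
# Latent energy released when 14 m3 of ice forms in the tank.
ICE_DENSITY = 917.0   # kg/m3 (approximate)
LATENT_HEAT = 334e3   # J/kg, heat of fusion of water (approximate)

ice_volume_m3 = 14.0
mass_kg = ice_volume_m3 * ICE_DENSITY
energy_kwh = mass_kg * LATENT_HEAT / 3.6e6  # J -> kWh
print(f"{energy_kwh:.0f} kWh of latent heat")
```

So the ice cube buffered on the order of 1.200kWh of ambient energy – roughly twelve of those ~100kWh January heating days.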

(Re the oscillations of the ice volume, see here and here.)

We heated only the ground floor in this season and needed 16.600 kWh (incl. hot water) – about the same heating energy as in the previous season. On the other hand, we also used only half of the collector – 12m2. The heating water inlet temperature for the radiators was only 37°C even in January.

For the first time the monthly performance factor was well below 4. The performance factor is the ratio of output heating energy and input electrical energy for heat pump and brine pump. In central Europe we measure both energies in kWh 😉 The overall seasonal performance factor was 4,3.
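The definition translates directly into a back-of-envelope calculation from the seasonal numbers quoted above:

```python
def performance_factor(heating_energy_kwh, electrical_energy_kwh):
    """Ratio of output heating energy to input electrical energy
    (heat pump plus brine pump) - both in kWh."""
    return heating_energy_kwh / electrical_energy_kwh

# 16,600 kWh of heat at a seasonal performance factor of 4.3
# imply roughly this much electrical input for the season:
electrical = 16600.0 / 4.3
print(f"{electrical:.0f} kWh electrical input")
print(f"check: PF = {performance_factor(16600.0, electrical):.1f}")
```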

The monthly performance factor is a bit lower again in summer, when only hot water is heated (and thus the heat pump’s COP is lower because of the higher target temperature).
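The effect of the target temperature on COP can be illustrated with the ideal (Carnot) limit – real heat pumps reach only a fraction of it, so the numbers below are an upper bound for the trend, not our measured COP:

```python
def carnot_cop(t_hot_c, t_cold_c):
    """Ideal (Carnot) COP of a heat pump between two temperatures,
    given in Celsius: T_hot / (T_hot - T_cold) in Kelvin."""
    t_hot_k = t_hot_c + 273.15
    t_cold_k = t_cold_c + 273.15
    return t_hot_k / (t_hot_k - t_cold_k)

# Same 0C heat source, but hot water needs a higher target
# temperature than floor heating - the ideal COP drops markedly:
print(f"floor heating 35C: {carnot_cop(35.0, 0.0):.1f}")
print(f"hot water 50C:     {carnot_cop(50.0, 0.0):.1f}")
```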

Per day we needed about 100kWh of heating energy in January, while the collector could not harvest that much:

In contrast to the season of the Ice Storage Challenge, the month before this season’s ice period (Dec. 2016) was not too collector-friendly either. But when the ice melted again, we saw the usual large energy harvests. Overall, the collector could not contribute the full ‘typical’ 75% of ambient energy this season.

(Definitions, sign conventions explained here.)

But there was one positive record, too. In the hot summer of 2017 we consumed the highest cooling energy so far – about 600kWh. The floor loops are used for passive cooling; the heating buffer tank is used to transfer heat from the floor loops to the cold underground tank. In ‘colder’ summer nights the collector is in turn used to cool the tank, and every time hot tap water is heated up the tank is cooled, too.

Of course the available cooling power is just a small fraction of what an AC system designed for the theoretical cooling load would provide. However, this moderate cooling is just what – for me – makes the difference between unbearable and OK on really hot days with more than 35°C peak ambient temperature.