Entropy and Dimensions (Following Landau and Lifshitz)

Some time ago I wrote about volumes of spheres in multi-dimensional phase space – as needed in integrals in statistical mechanics.

The post was primarily about the curious fact that the ‘bulk of the volume’ of such spheres is contained in a thin shell beneath their hyperspherical surfaces. The trick to calculating something reasonable is to spot expressions you can Taylor-expand in the exponent.

Large numbers ‘do not get much bigger’ when multiplied by a factor, as can be demonstrated again by Taylor-expanding such a large number in the exponent; I used this example:

Assuming N is about 10^{25}, its natural logarithm is about 58, and Ne^N = e^{\ln(N)+N} = e^{58+10^{25}}, so 58 can be neglected compared to N itself.
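
As a quick sanity check of that claim, here is a minimal Python sketch (my own illustration, not part of the original post): since e^N itself cannot be represented in floating point, it only compares the two terms in the exponent.

```python
# Compare ln(N) and N in the exponent of N*e^N = e^(ln N + N).
# e^N itself overflows any float, so we only work with the exponents.
import math

N = 1e25
ln_N = math.log(N)                                        # ~ 57.6, the '58' mentioned above
print(f"ln(N) = {ln_N:.1f}")
print(f"relative correction ln(N)/N = {ln_N / N:.1e}")    # ~ 5.8e-24 -> negligible
```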

However, in the real world numbers associated with locations and momenta of particles come with units. Calling the unit of ‘length’ in phase space R_0, the large volume can be written as aN{(\frac{r}{R_0})}^N = ae^{\ln{(N)} + N\ln{(\frac{r}{R_0})}}, and the impact of an additional factor N also depends on the unit length chosen.

I did not yet mention the related issues with the definition of entropy. In this post I will follow the way Landau and Lifshitz introduce entropy in Statistical Physics, Volume 5 of their Course of Theoretical Physics.

Landau and Lifshitz introduce statistical mechanics top-down, starting from fundamental principles and from Hamiltonian classical mechanics: no applications, no definitions of ‘heat’ and ‘work’, and no historical references needed for motivation. Classical phenomenological thermodynamics is only introduced after they are done with the statistical foundations. Both entropy and temperature are defined – these are useful fundamental properties spotted in the mathematical derivations and thus deserve special names. They cover both classical and quantum statistics in a small number of pages – LL’s style has been called terse or elegant.

The behaviour of a system with a large number of particles is encoded in a probability distribution function in phase space, a density. In the classical case this is a continuous function of the phase-space co-ordinates. In the quantum case you consider distinct states – whose energy levels are densely packed, though. Moving from classical to quantum statistics means counting those states rather than integrating the smooth density function over a volume. There are equivalent states created by permutations of identical particles – but factoring that in is postponed and not required for a first definition of entropy. A quasi-classical description is sufficient: using a minimum cell in phase space, whose dimensions are defined by Planck’s constant h, which has the dimension of action – length times momentum.

Entropy as statistical weight

Entropy S is defined as the logarithm of the statistical weight \Delta \Gamma – the number of quantum states associated with the part of phase space used by the (sub)system. (Landau and Lifshitz use the concept of a – still large – subsystem embedded in a larger volume most consistently, in order to avoid reliance on the ergodic hypothesis, as mentioned in the preface.) In the quasi-classical view the statistical weight is the volume in phase space occupied by the system divided by the size of the minimum unit cell defined by Planck’s constant h. Denoting momenta by p, positions by q, and using \Delta p and \Delta q as shortcuts covering multiple dimensions equivalent to s degrees of freedom…

S = \log \Delta \Gamma = \log \frac {\Delta p \Delta q}{(2 \pi \hbar)^s}

An example from solid state physics: if the system is considered a rectangular box in the physical world, possible quantum states related to vibrations can be visualized in terms of possible standing waves that ‘fit’ into the box. The statistical weight would then single out the bunch of states the system actually ‘has’ / ‘uses’ / ‘occupies’ in the long run.

Different sorts of statistical functions are introduced, and one reason for writing this article is to emphasize the difference between them: The density function associates each point in phase space – each possible configuration of a system characterized by the momenta and locations of all particles – with a probability. These points are also called microstates. Taking into account the probabilities of finding the system in any of these microstates gives you the so-called macrostate, characterized by the statistical weight: how large or small a part of phase space the system will use when watched for a long time.

The canonical example is an ideal gas in a vessel: The most probable spatial distribution of particles is to find them spread out evenly; the most unlikely configuration is to have them concentrated in (nearly) the same location, like one corner of the box. The density function assigns probabilities to these configurations. As the even distribution is so much more likely, the \Delta q part of the statistical weight would cover all of the physical volume available. The statistical weight has to attain its maximum value in the most likely case, in equilibrium.

The significance of energies – and why there are logarithms everywhere

Different sufficiently large subsystems of one big system are statistically independent – as their properties are defined by their bulk volume rather than their surfaces interfacing with other subsystems – and the larger the volume, the larger the ratio of volume to surface. Thus the probability density function for the combined system – as a function of the momenta and locations of all particles in the total phase space – has to be equal to the product of the densities for each subsystem. Denoting the classical density by \rho and adding a subscript for the set of momenta and positions referring to a subsystem:

\rho(q,p) = \rho_1(q_1,p_1) \rho_2(q_2,p_2)

(Since these are probability densities, the actual probability is always obtained by multiplying with the differential(s) dqdp).

This means that the logarithm of the composite density is equal to the sum of the logarithms of the individual densities. This is the root cause of having logarithms show up everywhere in statistical mechanics.
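
Written out in the notation just introduced:

\ln \rho(q,p) = \ln \rho_1(q_1,p_1) + \ln \rho_2(q_2,p_2)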

A mechanical system of particles is characterized by only 7 ‘meaningful’ additive integrals: energy, momentum, and angular momentum – they add up when you combine systems, in contrast to all the other millions of integration constants that would appear when solving the equations of motion exactly. Momentum and angular momentum are not that interesting thermodynamically, as one can change to a frame moving and rotating with the system (LL also cover rotating systems). So energy remains as the integral of outstanding importance.

From counting states to energy intervals

What we want is to relate entropy to energy, so assertions about numbers of states covered need to be translated to statements about energy and energy ranges.

LL denote the probability to find a system in (micro-)state n with energy E_n as w_n – the quantum equivalent of the density \rho . The logarithm of w_n has to be a linear function of the energy E_n of this microstate, as per the additivity just mentioned above; w_n thus depends on the state n only via its energy, and LL omit the subscript n for w:

w_n = w(E_n)

(They omit any symbol they can to keep their notation succinct ;-))

A thermodynamic system has an enormous number of (mechanical) degrees of freedom. Fluctuations are small as per the law of large numbers in statistics, and the probability to find a system with a certain energy can be approximated by a sharp, delta-function-like peak at the system’s energy E. So in thermal equilibrium its energy has a very sharp peak. It occupies a very thin ‘layer’ of thickness \Delta E in phase space – around the hypersurface that characterizes its average energy E.

The statistical weight \Delta \Gamma can be considered the corresponding width in terms of states: the energy-wise broadening \Delta E of the macroscopic state needs to be translated into a broadening related to the number of quantum states.

We change variables, so the connection between Γ and E is made via the derivative of Γ with respect to E. E is an integral, statistical property of the whole system, and the probability for the system to have energy E in equilibrium is W(E)dE. E is not discrete, so this is again a probability density. It is capital W now – in contrast to w_n, which says something about the ‘population’ of each quantum state with energy E_n.

The quasi-continuous number of states Γ (as a function of energy) is related to E by the differential:

d\Gamma = \frac{d\Gamma}{dE} dE.

As E peaks so sharply and the energy levels are packed so densely, it is reasonable to use the function (small) w but calculate it for an argument value E. Capital W(E) is a probability density as a function of total energy; small w(E) is a function of the discrete energies denoting states – so it has to be multiplied by the number of states in the range in question:

W(E)dE = w(E)d\Gamma

Thus…

W(E) = w(E)\frac{d\Gamma}{dE}.

The delta-function-like functions (of energy or states) have to be normalized: the widths ΔE and ΔΓ multiplied by the respective heights W and w, taken at the average energy E_\text{avg}, have to equal 1:

W(E_\text{avg}) \Delta E = 1
w(E_\text{avg}) \Delta \Gamma = 1

(… and the ‘average’ energy is what is simply called ‘the’ energy in classical thermodynamics).

So \Delta \Gamma is inversely proportional to the probability of the most likely state (of average energy). This can also be concluded from the quasi-classical definition: If you imagine a box full of particles, the least probable state is equivalent to all particles occupying a single cell in phase space. The probability for that is (size of the unit cell) over (size of the box) times smaller than the probability to find the particles evenly distributed over the whole box … which is exactly the definition of \Delta \Gamma.
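
A minimal sketch of the same kind of comparison, with made-up numbers and assuming N independently distributed particles; the ratio itself is far beyond floating point, so the sketch works with its logarithm:

```python
# How much less likely is 'all N particles in one unit cell' compared to
# 'particles spread over the whole box'? Work with logarithms only.
import math

N = 1e25            # number of particles (illustrative)
V_box = 1.0         # box volume, arbitrary units
V_cell = 1e-30      # 'minimum cell' volume in the same units (illustrative)

# For independent particles each sits in the cell with probability V_cell/V_box,
# so the log of the probability ratio is N * ln(V_cell/V_box).
log_ratio = N * math.log(V_cell / V_box)
print(f"ln(P_concentrated / P_spread_out) = {log_ratio:.2e}")   # about -6.9e26
```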

The statistical weight is finally:

\Delta \Gamma =  \frac{d\Gamma(E_\text{avg})}{dE} \Delta E.

… the broadening in \Gamma , proportional to the broadening in E

The more familiar (?) definition of entropy

From that, you can recover another familiar definition of entropy, perhaps the more common one. Taking the logarithm…

S = \log (\Delta \Gamma) = -\log (w(E_\text{avg})).

As \log w is linear in E, taking it at the average energy is the same as averaging \log w itself over the distribution. Then the definition of ‘averaging over states n’ can be used: multiply the value for each state n by its probability w_n and sum up:

S = - \sum_{n} w_n \log w_n.

… which is the first statistical expression for entropy I had once learned.
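
A minimal numerical check (my own toy example, not from LL): for a uniform distribution over \Delta \Gamma equally likely states the sum reduces to \log \Delta \Gamma , consistent with the first definition above.

```python
# Gibbs entropy -sum w_n ln(w_n) for two toy distributions.
import math

def entropy(w):
    return -sum(p * math.log(p) for p in w if p > 0)

# Uniform distribution over 1000 states: entropy equals ln(1000).
delta_gamma = 1000
uniform = [1 / delta_gamma] * delta_gamma
print(entropy(uniform), math.log(delta_gamma))    # both ~ 6.9078

# A toy Boltzmann-like distribution w_n ~ exp(-E_n) (temperature set to 1):
energies = [0.0, 0.5, 1.0, 1.5, 2.0]
weights = [math.exp(-E) for E in energies]
Z = sum(weights)
print(entropy([w / Z for w in weights]))          # smaller than ln(5) = 1.609
```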

LL do not introduce Boltzmann’s constant k here

It is effectively set to 1 – so entropy is defined without reference to k. k is only mentioned in passing later: in case one wishes to measure energy and temperature in different units. But there is no need to do so if you have defined entropy and temperature based on first principles.

Back to units

In a purely classical description based on the volume in phase space instead of the number of states, there would be no cell of minimum size, and instead of the statistical weight we would simply have this volume. But then entropy would be measured in a very awkward ‘unit’ – the logarithm of an action (to some power). Every change of the unit for measuring volumes in phase space would result in an additive constant – the deeper reason why entropy in a classical context is only defined up to such a constant.
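
In formulas, denoting two hypothetical choices of the reference cell volume by C_1 and C_2:

\log \frac{\Delta p \Delta q}{C_1} = \log \frac{\Delta p \Delta q}{C_2} + \log \frac{C_2}{C_1}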

So the natural unit called R_0 above should actually be related to Planck’s constant, taken to a power defined by the number of degrees of freedom.

Temperature

The first task to be solved in statistical mechanics is to find a general way of formulating a proper density function, small w_n, as a function of the energy E_n. You can either assume that the system has a clearly defined energy upfront – the system lives on an ‘energy hypersurface’ in phase space – or you can consider it immersed in a larger system, later identified with a ‘heat bath’, which causes the system to reach thermal equilibrium. These two concepts are called the micro-canonical and the canonical distribution (or Gibbs distribution), and the actual distribution functions don’t differ much, because the energy peaks so sharply in the canonical case, too. It is in that type of calculation that those hyperspheres are actually needed.

Temperature as a concept emerges from a closer look at these distributions, but LL introduce it upfront from simpler considerations: It is sufficient to know that 1) entropy only depends on energy, 2) both are additive functions of subsystems, and 3) entropy is a maximum in equilibrium. You divide one system into two subsystems. The total change in entropy has to be zero as this is a maximum (in equilibrium), and the energy dE_1 that leaves one subsystem has to be received as dE_2 = -dE_1 by the other. Taking a look at the total entropy S as a function of the energy of one subsystem:

0 = \frac{dS}{dE_1} = \frac{dS_1}{dE_1} + \frac{dS_2}{dE_1} =
= \frac{dS_1}{dE_1} + \frac{dS_2}{dE_2} \frac{dE_2}{dE_1} =
= \frac{dS_1}{dE_1} - \frac{dS_2}{dE_2}

So \frac{dS_x}{dE_x} has to be the same for each subsystem x. Cutting one of the subsystems in two, you can use the same argument again. So there is one very interesting quantity that is the same for every subsystem – \frac{dS}{dE}. Let’s call it 1/T, and let’s call T the temperature.
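
A minimal numerical illustration with a toy model of my own (entropies S_i = c_i \ln E_i , not anything from LL): maximizing the total entropy over the split of a fixed total energy indeed makes the derivatives dS_i/dE_i equal.

```python
# Two subsystems with toy entropies S_i(E_i) = c_i * ln(E_i) share a fixed total energy.
# At the entropy maximum the derivatives dS_i/dE_i (i.e. 1/T) coincide.
import math

c1, c2 = 3.0, 7.0           # illustrative constants
E_total = 10.0

def S_total(E1):
    return c1 * math.log(E1) + c2 * math.log(E_total - E1)

# Crude scan for the maximum:
E1_best = max((E_total * k / 10000 for k in range(1, 10000)), key=S_total)

print(E1_best)                                   # ~ 3.0: energy splits as c1 : c2
print(c1 / E1_best, c2 / (E_total - E1_best))    # dS1/dE1 ~ dS2/dE2 ~ 1.0 -> a common 1/T
```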

The Collector Size Paradox

Recently I presented the usual update of our system documentation and measurement data. The PDF document contains consolidated numbers for each year and month of operations:

Total output heating energy (incl. hot tap water), electrical input energy (incl. brine pump), and their ratio – the performance factor. Seasons always start on Sept. 1, except the first season, which started in Nov. 2011. For ‘special experiments’ that had an impact on the results see the text and the PDF linked above.

It is finally time to tackle the fundamental questions:

What is the impact of the size of the solar/air collector?

or

What is the typical output power of the collector?

In 2014 the Chief Engineer had rebuilt the collector so that you can toggle between 12m2 and the full 24m2.

TOP: Full collector – hydraulics as in seasons 2012 and 2013. Active again since Sept. 2017. BOTTOM: Half of the collector, used in seasons 2014, 2015, and 2016.

Do we have data for seasons we can compare in a reasonable way – seasons that (mainly) differ by collector area?

We disregard seasons 2014 and 2016 – we had to get rid of a nearly 100-year-old roof truss and only heated the ground floor with the heat pump.

Attic rebuild project – point of maximum destruction – generation of fuel for the wood stove.

Season 2014 was atypical anyway because of the Ice Storage Challenge experiment.

Then seasonal heating energy should be comparable – so we don’t consider the cold seasons 2012 and 2016.

Remaining warm seasons: 2013 – where the full collector was used – and 2015 (half collector). The whole house was heated with the heat pump; heating energies and ambient energies were similar – and performance factors were basically identical. So we checked the numbers for the ice months Dec/Jan/Feb. Here a difference can be spotted, but it is far less dramatic than expected. For half the collector:

  • Collector harvest is about 10% lower
  • Performance factor is lower by about 0,2
  • Brine inlet temperature for the heat pump is about 1,5K lower

The upper half of the collector is used, as indicated by hoarfrost.

It was counter-intuitive, and I scrutinized Data Kraken to check it for bugs.

But actually we had forgotten that we had predicted this years ago: Simulations show the trend correctly, and it suffices to do some basic theoretical calculations. You only need to know how to represent a heat exchanger’s power in two different ways:

Power is either determined by the temperature of the fluid when it enters and exits the exchanger tubes …

[1]   (T_brine_outlet – T_brine_inlet) * flow_rate * specific_heat

… but power can also be calculated from the heat energy flow from brine to air – over the surface area of the tubes:

[2]   delta_T_brine_air * Exchange_area * some_coefficient

Delta T is an average over the whole exchanger length (actually a logarithmic average, but using an arithmetic average is good enough for typical parameters). Some_coefficient is a parameter that characterizes heat transfer per area or per length of a tube, so Exchange_area * Some_coefficient could also be called the total heat transfer coefficient.

If several heat exchangers are connected in series their powers are not independent as they share common temperatures of the fluid at the intersection points:

The brine circuit connecting heat pump, collector and the underground water/ice storage tank. The three ‘interesting’ temperatures before/after the heat pump, collector and tank can be calculated from the current power of the heat pump, ambient air temperature, and tank temperature.

When the heat pump is off in ‘collector regeneration mode’ the collector and the heat exchanger in the tank necessarily transfer heat at the same power  per equation [1] – as one’s brine inlet temperature is the other one’s outlet temperature, the flow rate is the same, and also specific heat (whose temperature dependence can be ignored).

But powers can also be expressed by [2]: Each exchanger has a different area, a different heat transfer coefficient, and different mean temperature difference to the ambient medium.

So there are three equations…

  • Power for each exchanger as defined by [1]
  • 2 equations of type [2], one with specific parameters for collector and air, the other for the heat exchanger in the tank.

… and from those the three unknowns can be calculated: brine inlet temperature, brine outlet temperature, and harvesting power. All is simple and linear; it is not a big surprise that collector harvesting power is proportional to the temperature difference between air and tank. The warmer the air, the more you harvest.
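
Here is a minimal sketch of that calculation in Python; all parameters (UA values, flow rate, temperatures) are made up, and the arithmetic mean brine temperature is used as in the text:

```python
# Collector and tank heat exchanger in series, heat pump off ('regeneration mode').
# Unknowns: collector brine outlet temperature T_hot, brine inlet temperature T_cold,
# and the harvesting power P. Solve the three linear equations [1] and [2]x2 from above.
import numpy as np

T_air, T_tank = 5.0, 0.0            # ambient air and tank temperature [deg C]
m_dot_cp = 1500.0                   # flow_rate * specific_heat [W/K]
UA_coll, UA_tank = 800.0, 1200.0    # total heat transfer coefficients [W/K]

# Unknown vector x = [T_hot, T_cold, P]:
A = np.array([
    [-m_dot_cp,   m_dot_cp,  1.0],  # [1] P = flow_rate*specific_heat*(T_hot - T_cold)
    [UA_coll/2,  UA_coll/2,  1.0],  # [2] P = UA_coll*(T_air - (T_hot + T_cold)/2)
    [-UA_tank/2, -UA_tank/2, 1.0],  # [2] P = UA_tank*((T_hot + T_cold)/2 - T_tank)
])
b = np.array([0.0, UA_coll * T_air, -UA_tank * T_tank])

T_hot, T_cold, P = np.linalg.solve(A, b)
print(T_hot, T_cold, P)  # P = UA_coll*UA_tank/(UA_coll+UA_tank) * (T_air - T_tank) = 2400 W
```

The resulting power already contains the combination of coefficients discussed next.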

The combination of coefficients is the ratio of the product of the total coefficients and their sum: \frac{f_1 f_2}{f_1 + f_2} – the inverse of the sum of inverses.

This formula shows what one might have guessed intuitively: If one of the factors is much bigger than the other – if one of the heat exchangers is already much ‘better’ than the other – then it does not help to make the better one even better. In the denominator, the smaller number in the sum can be neglected before and after optimization, the superior properties always cancel out, and the ‘bad’ component fully determines performance. (If one of the ‘factors’ is zero, total power is zero.) Examples of ‘bad’ exchangers: heat exchanger tubes in the tank that are much too short, or a flat plate collector used instead of an unglazed collector.

On the other hand, if you make a formerly ‘worse’ exchanger much better, the ratio will change significantly. If both exchangers have properties of the same order of magnitude – which is what we design our systems for – optimizing one will change things for the better, but never linearly, as effects always cancel out to some extent (you increase numbers in both parts of the fraction).
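
A few lines with invented numbers make the diminishing returns explicit:

```python
# Combined coefficient f1*f2/(f1+f2): improving the already 'better' exchanger
# barely helps, improving the 'worse' one does.
def combined(f1, f2):
    return f1 * f2 / (f1 + f2)

f_good, f_bad = 1000.0, 100.0
print(combined(f_good, f_bad))        # ~ 90.9  -> dominated by the bad exchanger
print(combined(2 * f_good, f_bad))    # ~ 95.2  -> doubling the good one: ~5% gain
print(combined(f_good, 2 * f_bad))    # ~166.7  -> doubling the bad one: ~83% gain
```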

So there is no ‘rated performance’ in kW or kW per area you could attach to a collector. Its effective performance also depends on the properties of the heat exchanger in the tank.

But there is a subtle consequence to consider: The smaller collector can deliver the same energy and thus ‘has’ twice the power per area. However, air temperature is given, and [2] must hold: in order to achieve this, the delta T between brine and air necessarily has to increase. So the brine will be a bit colder and thus the heat pump’s Coefficient of Performance will be a bit lower. Over a full season – including the warm periods when only hot water is heated – the effect is less pronounced, but we see a more significant change in performance data and brine inlet temperature for the ice months in the respective seasons.

The Orphaned Internet Domain Risk

I have clicked on company websites of social media acquaintances, and something is not right: Slight errors in formatting, encoding errors for special German characters.

Then I notice that some of the pages contain links to other websites that advertise products in a spammy way. However, the links to the spammy sites are embedded in these alleged company websites in a subtle way: using the (nearly) correct layout, or embedding the link in a ‘news article’ that also contains legit product information – content really related to the internet domain I am visiting.

Looking up whois information tells me that these internet domains are not owned by my friends anymore – consistent with what they actually say on their social media profiles. So how come they ‘have given’ their former domains to spammers? They did not, and they didn’t need to: spammers simply need to watch out for expired domains, seize them when they become available – and then reconstruct the former legit content from public archives and interleave it with their spammy messages.

The former content of legitimate sites is often available on the web archive. Here is the timeline of one of the sites I checked:

Clicking on the details shows:

  • Last display of legit content in 2008.
  • In 2012 and 2013 a generic message from the hosting provider was displayed: This site has been registered by one of our clients
  • After that we see mainly 403 Forbidden errors – so the spammers don’t want their site to be archived – but at one time a screen capture of the spammy site had been taken.

The new site shows the name of the former owner at the bottom but an unobtrusive link had been added, indicating the new owner – a US-based marketing and SEO consultancy.

So my takeaway is: If you ever feel like decluttering your websites and freeing yourself of your useless digital possessions – and possibly also social media accounts – think twice: As soon as your domain or name is available, somebody might take it, and re-use and exploit your former content and possibly your former reputation for promoting their spammy stuff in a shady way.

This happened a while ago, but I know now it can get much worse: Why only distribute marketing spam if you can distribute malware through channels still considered trusted? In this blog post Malwarebytes raises the question whether such practices are illegal or not – it seems that question is not straightforward to answer.

Visitors do not even have to visit the abandoned domain explicitly to be served malware. I have seen some reports of abandoned embedded plug-ins turned into malicious zombies. Silly example: if you embed your latest tweets, Twitter goes out of business, and its domains are seized by spammers – your Follow Me icon might help to spread malware.

If a legit site runs third-party code, they need to trust the authors of this code. For example, Equifax’s website recently served spyware:

… the problem stemmed from a “third-party vendor that Equifax uses to collect website performance data,” and that “the vendor’s code running on an Equifax Web site was serving malicious content.”

So if you run any plug-ins, embedded widgets or the like – better check regularly if the originating domain is still run by the expected owner, monitor your vendors often, and don’t run code you do not absolutely need in the first place. Don’t use embedded active badges if a simple link to your profile would do.

Do a painful, boring inventory and assessment often – then you will notice how much work it is to manage these ‘partners’, and rather stay away from signing up and registering for too many services.

Update 2017-10-25: And as we speak, we learn about another example – snatching a domain used for a Dell backup software, preinstalled on PCs.

Data for the Heat Pump System: Heating Season 2016-2017

I update the documentation of measurement data [PDF] about twice a year. This post is to provide a quick overview for the past season.

The PDF also contains the technical configuration and sizing data. Based on typical questions from an ‘international audience’ I add a summary here plus some ‘cultural’ context:

Building: The house is a renovated, nearly 100-year-old building in Eastern Austria: a typical so-called ‘Streckhof’ – an elongated, former small farmhouse. Some details are mentioned here. Heating energy for space heating of two storeys (185m2) and hot water is about 17.000-20.000kWh per year. The roof / attic had been rebuilt in 2008, and the facade was thermally insulated. However, the major part of the house has no underground level, so most energy is lost via the ground. Heating only the ground floor (75m2) with the heat pump reduces heating energy only by 1/3.

Climate: This is the sunniest region of Austria – the lowlands of the Pannonian Plain bordering Hungary. We have Pannonian ‘continental’ climate with low precipitation. Normally, monthly average temperatures in winter are only slightly below 0°C in January, and weeks of ‘ice days’ in a row are very rare.

Heat energy distribution and storage (in the house): The renovated first floor has floor loops, while on the ground floor mainly radiators are used. Wall heating has been installed in one room so far. A buffer tank is used for the heating water, as this is a simple ‘on-off’ heat pump always operating at about its rated power. Domestic hot water is heated indirectly using a hygienic storage tank.

Heating system: An off-the-shelf, simple brine-water heat pump uses a combination of an unglazed solar-air collector and an underground water tank as a heat source. Energy is mainly harvested from rather cold air via convection.

Addressing often asked questions: Off-the-shelf = same type of heat pump as used with geothermal systems. Simple: not smart, not trying to be the universal energy management system, as the smartness is in our own control unit and logic for managing the heat source(s). Brine: a mixture of glycol and water (similar to the fluid used with flat solar thermal collectors) = antifreeze, as the temperature of the brine is below 0°C in winter. The tank is not a seasonal energy storage but a buffer for days or weeks. In this post the hydraulics is described in detail, as well as typical operating conditions throughout a year. Both tank and collector are needed: the tank provides a buffer of latent energy during ‘ice periods’ and allows for harvesting more energy from air, but the collector actually provides about 75% of the total ambient energy the heat pump needs in a season.

Tank and collector are rather generously sized in relation to the heating demands: about 25m3 volume of water (total volume +10% freezing reserve) and 24m2 collector area.

The overall history of data documented in the PDF also reflects ongoing changes and some experiments, like heating the first floor with a wood stove, toggling the effective area of the collector used between 50% and 100%, or switching off the collector to simulate a harsher winter.

Data for the past season

Finally we could create a giant ice cube naturally. 14m3 of ice had been created in the coldest January in 30 years. The monthly average temperature was -3,6°C, 3 degrees below the long-term average.

(Re the oscillations of the ice volume, see here and here.)

We heated only the ground floor in this season and needed 16.600 kWh (incl. hot water) – about the same heating energy as in the previous season. On the other hand, we also used only half of the collector – 12m2. The heating water inlet temperature for the radiators was just 37°C, even in January.

For the first time the monthly performance factor was well below 4. The performance factor is the ratio of output heating energy and input electrical energy for heat pump and brine pump. In middle Europe we measure both energies in kWh 😉 The overall seasonal performance factor was 4,3.

The monthly performance factor is a bit lower again in summer, when only hot water is heated (and thus the heat pump’s COP is lower because of the higher target temperature).

Per day we needed about 100kWh of heating energy in January, while the collector could not harvest that much:

In contrast to the season of the Ice Storage Challenge, the month before the ‘challenge’ (Dec. 2016) was not too collector-friendly either. But when the ice melted again, we saw the usual large energy harvests. Overall, the collector could not contribute the full ‘typical’ 75% of ambient energy this season.

(Definitions, sign conventions explained here.)

But there was one positive record, too. In the hot summer of 2017 we consumed the highest cooling energy so far – about 600kWh. The floor loops are used for passive cooling; the heating buffer tank is used to transfer heat from the floor loops to the cold underground tank. In ‘colder’ summer nights the collector is in turn used to cool the tank, and every time hot tap water is heated up, the tank is cooled, too.

Of course the available cooling power is just a small fraction of what an AC system sized for the theoretical cooling load would provide. However, this moderate cooling is just what – for me – makes the difference between unbearable and OK on really hot days with more than 35°C peak ambient temperature.

Computers, Science, and History Thereof

I am reading three online resources in parallel – on the history and the basics of computing, computer science, software engineering, and the related culture and ‘philosophy’. An accidental combination I find most enjoyable.

Joel on Software: Joel Spolsky’s blog – a collection of classic essays. What every developer needs to know about Unicode. New terms like Astronaut Architects and Leaky Abstractions. How to start a self-funded software company, how to figure out the price of software, how to write functional specifications. Bringing back memories of my first encounters with Microsoft VBA. He has the best examples – Martian Headsets to explain web standards.

The blog started in 1999 – rather shortly after I had entered the IT industry. So it is an interesting time capsule, capturing technologies and trends I was sort of part of – including the relationship with one large well-known software company.

Somewhere deep in Joel’s blog I found references to another classic; it was in advice on how to show passion as an applicant for a software developer job. Tell them how reading this moved you to tears:

Structure and Interpretation of Computer Programs. I think I have found the equivalent to Feynman’s Physics Lectures in computer science! I have hardly ever read a textbook or attended a class that was both so philosophically insightful and useful in a hands-on, practical way. Using Scheme (Lisp) as an example, important concepts are introduced step-by-step, via examples, viewed from different perspectives.

It was amazing how far you can get with purely Functional Programming. I did not even notice that they had not used a single assignment (Data Mutation) until far into the course.

The quality of the resources made available for free is incredible – which holds for all the content I am praising in this post: Full textbook, video lectures with transcripts, slides with detailed comments. It is also good to know and reassuring that despite the allegedly fast paced changes of technology, basic concepts have not changed that much since decades.

But if you are already indulging in nostalgic thoughts why not catch up on the full history of computing?

Creatures of Thought. A sublime, book-like blog on the history of computing – starting with the history of telephone networks and telegraphs, covering computing machines – electro-mechanical or electronic – related and maybe underappreciated hardware components like the relay, and including biographic vignettes of the heroes involved.

The author’s PhD thesis (available for download on the About page) covers the ‘information utility’ vision that was ultimately superseded by the personal computer. This is an interesting time capsule for me as well, as this story ends about where my personal journey started – touching personal PCs in the late 1980s, but having been taught the basics of programming via sending my batch jobs to an ancient mainframe.

From such diligently done history of engineering I can only learn not to rush to any conclusions. There are no simple causes and effects, or unambiguous stories about who invented what and who was first. It’s all subtle evolution and meandering narratives, randomness and serendipity. Quoting from the post that indicates the beginning of the journey, on the origins of the electric telegraph:

Our physics textbooks have packaged up the messy past into a tidy collection of concepts and equations, eliding centuries of development and conflict between competing schools of thought. Ohm never wrote the formula V = IR, nor did Maxwell create Maxwell’s equations.

Though I will not attempt to explore all the twists and turns of the intellectual history of electricity, I will do my best to present ideas as they existed at the time, not as we retrospectively fit them into our modern categories.

~

Phone, 1970s, Austria

The kind of phone I used at the time when the video lectures for Structure and Interpretation of Computer Programs had been recorded and when I submitted my batch jobs of Fortran code to be compiled. I have revived the phone now and then.

 

Tinkering, Science, and (Not) Sharing It

I stumbled upon this research paper called PVC polyhedra:

We describe how to construct a dodecahedron, tetrahedron, cube, and octahedron out of pvc pipes using standard fittings.

In particular, if we take a connector that takes three pipes each at 120 degree angles from the others (this is called a “true wye”) and we take elbows of the appropriate angle, we can make the edges come together below the center at exactly the correct angles.

A pivotal moment: What you consider tinkering is actually research-paper-worthy science. Here are some images from the Chief Engineer’s workbench.

The supporting constructions of our heat exchangers are built from standard parts connected at various angles:

The final result can be a cuboid for holding meandering tubes:

… or cascaded prisms with n-gon basis – for holding spirals of flexible tubes:

The implementation of this design is documented here (a German post whose charm would be lost in translation unless I wanted to create Internet Poetry).

But I also started up my time machine – in order to find traces of my polyhedra research in the early 1980s. From photos and drawings of the three-dimensional crystals in mineralogy books I figured out how to draw two-dimensional maps of maximally connected surface areas. I cut out the map, and glued together the remaining free edges. Today I would be made redundant by Origami AI.

I filled several shelves with polyhedra of increasing number of faces, starting with a tetrahedron and culminating with this rhombicosidodecahedron. If I recall correctly, I cheated a bit with this one and created some of the pyramids as completely separate items.

I think this was a rather standard hobby for the typical nerdy child, among things like growing crystals from solutions of toxic chemicals, building a makeshift rotatable telescope tripod from scraps, or verifying the laws of optics using prisms and lenses from ancient dismantled devices.

The actually interesting thing is that this photo is the only trace of any of these hobbies. In many years after creating this stuff – and destroying it again – I never thought about documenting it. Until today. It seems we weren’t into sharing these days.

Simulations: Levels of Consciousness

In a recent post I showed these results of simulations for our heat pump system:

I focused on the technical details – this post will be more philosophical.

What is a ‘simulation’ – as opposed to simplified calculations of monthly or yearly average temperatures or energies? The latter are provided by tools used by governmental agencies or standardization bodies – allowing for a comparison of different systems.

In a true simulation the time intervals are so small that you catch all ‘relevant’ changes of a system. If a heating system is turned on for one hour, then turned off again, the time slot needs to be smaller than one hour. I argued before that calculating meaningful monthly numbers requires incorporating data that had been obtained before by measurements – or by true simulations.

For our system, the heat flow between ground and the water/ice tank is important. In our simplified sizing tool – which is not a simulation – I use average numbers. I validated them by comparing with measurements: the contribution of the ground can be determined indirectly, by tallying all the other energies involved. In the detailed simulation I calculate the temperature in the ground as a function of time and of distance from the tank, by solving the Heat Equation numerically. The energy flow is then proportional to the temperature gradient at the walls of the tank. You need to make assumptions about the thermal properties of the ground, and a simplified geometry of the tank is considered.
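
For illustration, here is a heavily simplified sketch of such a calculation: 1D heat conduction only, an explicit finite-difference scheme, and invented material parameters; the actual model behind the post is more detailed.

```python
# Temperature in the ground as a function of time and distance from the tank wall,
# solved with an explicit finite-difference scheme for the 1D heat equation.
# The energy flow into the tank is proportional to the temperature gradient at the wall.
import numpy as np

alpha = 0.7e-6               # thermal diffusivity of ground [m^2/s] (illustrative)
k = 1.5                      # thermal conductivity [W/(m K)] (illustrative)
dx, dt = 0.05, 600.0         # grid spacing [m], time step [s]
assert alpha * dt / dx**2 < 0.5    # stability criterion of the explicit scheme

x = np.arange(0.0, 5.0 + dx, dx)   # distance from the tank wall [m]
T = np.full_like(x, 10.0)          # initial ground temperature [deg C]
T_wall, T_far = 0.0, 10.0          # icy tank wall and undisturbed ground far away

for _ in range(int(30 * 24 * 3600 / dt)):        # simulate 30 days
    T[0], T[-1] = T_wall, T_far                  # boundary conditions
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])

flux_into_tank = k * (T[1] - T[0]) / dx          # W per m^2 of tank wall
print(f"heat flux from ground into tank after 30 days: {flux_into_tank:.1f} W/m2")
```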

Engineering / applied physics in my opinion is about applying a good-enough-approach in order to solve one specific problem. It’s about knowing your numbers and their limits. It is tempting to get carried away by nerdy physics details, and focus on simulating what you know exactly – forgetting that there are huge error bars because of unknowns.

This is the hierarchy I keep in mind:

On the lowest level is the simulation of the physics, that is: modelling how ‘nature’ and the system’s components react to changes in the previous time slot. Temperatures change because energy flows, and energy flows because of temperature differences. The heat pump’s output power depends on heating water temperature and brine temperature. Energy of the building is ‘lost’ to the environment via heat conduction; heat exchangers immersed in tanks deposit energy there or retrieve it. I found that getting the serial connection of heat exchangers right in the model was crucial, and it required a self-consistent calculation of three temperatures at the same point in time, rather than trying to ‘follow the brine around’. I used the information on average brine temperatures obtained by this method to run a simplified version of the simulation using daily averages only – for estimating the maximum volume of ice over two decades.

So this means you need to model your exact hydraulic setup, or at least you need to know which features of your setup are critical and worth modelling in detail. But the same also holds for the second level, the simulation of the control logic. I try to mirror the production control logic as far as possible: this code determines how pumps and valves will react, depending on the system’s prior status. Both in real life and in the simulation, threshold values and ‘hystereses’ are critical: You start to heat if some temperature falls below X, but you only stop heating once it has risen above X plus some Delta. Typical brine-water heat pumps always provide approximately the same output power, so you control the operating time and buffer the heating energy. If the Delta for heating the hot water buffer tank is too large, the heat pump’s performance will suffer: the Coefficient of Performance of the heat pump decreases with increasing heating water temperature. Changing an innocuous parameter can change the results a lot, and the ‘control model’ should be given the same vigilance as the ‘physics model’.
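
A minimal sketch of such a hysteresis rule (illustrative only, not the production control code; set point and Delta are invented):

```python
# Start heating the hot water buffer below the set point X, stop only above X + Delta.
def hysteresis(T_buffer, heating_on, set_point=45.0, delta=5.0):
    """Return the new on/off state of the heat pump for the buffer tank."""
    if T_buffer < set_point:
        return True                 # below X: switch on (or keep running)
    if T_buffer > set_point + delta:
        return False                # above X + Delta: switch off
    return heating_on               # in between: keep the previous state

state = False
for T in [46, 44, 47, 49, 51, 48]:  # buffer cools down, is heated up, overshoots
    state = hysteresis(T, state)
    print(T, state)
```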

Control units can be tweaked at different levels: ‘Experts’ can change the logic, but end users can change non-critical parameters, such as set point temperatures. We don’t restrict expert access in systems we provide the control unit for. But it makes sense to require extra input for the expert level – to prevent accidental changes.

And here we enter level 3 – users’ behavior. We humans are bad at trying to outsmart the controller.

[Life-form in my home] always sets the controller to ‘Sun’. [little sun icon indicating manually set parameters]. Can’t you program something so that nothing actually changes when you pick ‘Sun’?

With heat pumps utilizing ground or water sources – ‘built’ storage repositories with limited capacity – unexpected and irregular system changes are critical: You have to size your source in advance. You cannot simply order one more lorry load of wood pellets or oil if you ‘run out of fuel’. If the source of ambient energy is depleted, the heat pump will finally refuse to work below a certain source temperature. The heat pump’s rated power has to match the heating demands and the size of the source exactly. It also must not be oversized, in order to avoid turning the compressor on and off too often.

Thus you need good estimates for peak heat load and yearly energy needs, and models should include extreme weather (‘physics’) but also erratic users’ behaviour. The more modern the building, the more important spikes in hot tap water usage get in relation to space heating. A vendor of wood pellet stoves told me that delivering peak energy for hot water – used in private bathrooms that rival spas – is a greater challenge today than delivering space heating energy. Energy certificates of modern buildings take into account huge estimated solar and internal energy gains – calculated according to standards. But the true heating power needed on a certain day will depend on the strategy or automation home owners use when managing their shades.

Typical gas boilers are oversized (in terms of kW rated power) by a factor of 2 or more in Germany, but with heat pumps you need to be more careful. However, this also means that heat pump systems cannot and should not be planned for rare peak demands, such as: 10 overnight guests wanting to shower in the morning one after the other on an extremely cold day, or heating up the building quickly after the temperature had been lowered during an absence.

The nerdy answer is that a smart home would know when your vacation ends and start heating up well in advance. Not sure what to do about the showering guests as in this case ‘missing’ power cannot be compensated by more time. Perhaps a gamified approach will work: An app will do something funny / provide incentives and notifications so that people wait for the water to heat up again. But what about planning for renting a part of the house out someday? Maybe a very good AI will predict what your grandchildren are likely to do, based on automated genetics monitoring.

The challenge of simulating human behaviour is ultimately governed by constraints on resources – such as the size of the heat source: Future heating demands and energy usage are unknown, but the heat source has to be sized today. If the system is ‘open’ and connected to a ‘grid’ in a convenient way, problems seem to go away: You order whatever you need, including energy, any time. The opposite is planning for true self-sufficiency: I once developed a simulation for an off-grid system using photovoltaic generators and wind power – for a mountain shelter. They had to meet tough regulations and hygienic standards like any other restaurant, e.g. using ‘industry-grade’ dishwashers needing 10kW of power. In order to provide that by solar power (plus battery) you needed to make an estimate of the number of guests likely to visit … and thus of how many people would go hiking on a specific day … and thus maybe of the weather forecast. I tried to factor in the ‘visiting probability’ based on the current weather.
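
A toy sketch of that last idea, with all numbers invented: the load estimate is weighted by a weather-dependent visiting probability.

```python
# Expected dishwashing energy for the shelter, given the weather forecast.
visit_probability = {"sunny": 0.9, "cloudy": 0.5, "rainy": 0.15}   # assumed probabilities
max_guests = 60                       # assumed capacity of the shelter
kwh_per_guest = 0.2                   # assumed dishwashing energy per guest [kWh]

def expected_dishwashing_energy(weather: str) -> float:
    return visit_probability[weather] * max_guests * kwh_per_guest

for weather in visit_probability:
    print(weather, expected_dishwashing_energy(weather), "kWh")
```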

I think many of these problems can be ‘resolved’ by recognizing that they are first world problems. It takes tremendous effort – in terms of energy use or system complexity – to obtain 100% availability and to cover all exceptional use cases. You would need the design heat load only for a few days every decade. On most winter days a properly sized heat pump is operating for only 12 hours. The simple, low-tech solution would be to accept the very, very rare intermittent 18,5°C room temperature, mitigated by proper clothing. Accepting a 20-minute delay of your shower solves the hot water issue. An economical analysis can reveal the (most likely very small) trade-off of providing exceptional peak energy by a ‘backup’ electrical heating element – or by using that wood stove that you installed ‘as a backup’ but mostly for ornamental reasons, because it is dreadful to fetch the wood logs when it is really cold.

But our ‘modern’ expectations and convenience needs are also reflected in regulations. Contractors are afraid of being sued by malicious clients who (quote) sit next to their heat pump and count its operating cycles – to compare the numbers with the ones ‘guaranteed’. In a weather-challenged region at more than 2.000 meters altitude, people need to steam-clean dishes and use stainless steel instead of wood – where wooden plates have been used for centuries. I believe that regulators are as prone as anybody else to fall into the nerdy trap described above: You monitor, measure, calculate, and regulate in detail the things that you can measure – and because you can measure them – not because these things are top priorities or have the most profound impact.

Still harvesting energy from air - during a record-breaking cold January 2017