Simulating Life-Forms (2): Cooling Energy

I found this comprehensive research report:
Energy Use in the Australian Residential Sector 1986–2020 (June 2008)
(several PDFs for download, click the link Energy Use… to display them)

There are many interesting results – and the level of detail is impressive: The authors modelled the energy used per appliance type, e.g. by factoring in how building types change slowly over time, or by modelling the development of TV sets and their usage. Occupancy factors for buildings are determined from assumptions about typical usage profiles called Stay At Home, At Work or Night Owl.

I zoom in on simulating and predicting usage of air conditioning and thus cooling energy:

They went to great lengths to simulate the behavior of homeowners in order to model the operation of air conditioning, and thus the total cooling energy for a season, for a state, or the whole country.

The authors investigated the official simulation software used for rating buildings (from …part2.pdf):

In the AccuRate software, once cooling is invoked the program continues to assume that the occupant is willing to tolerate less than optimal comfort conditions and will therefore terminate cooling if in the absence of such cooling the internal temperature would not rise above the summer neutral temperature noted in Table 57, +2.5°C, plus allowances for humidity and air movement as applicable. While this may be appropriate for rating purposes, it is considered to be an unlikely form of behaviour to be adopted by householders in the field and as such this assumption is likely to underestimate the potential space cooling demand. This theory is supported by the survey work undertaken by McGreggor in South Australia.

This confirms what I have been saying all along: The more modern a building is – or, more generally, given ‘modern’ homeowners’ requirements nowadays – the more important it becomes to actually simulate the occupants’ behavior, on top of the physics and the control logic.

The research study also points out that AC usage has been on the rise because units have become affordable, modern houses are built with less focus on shading, and homeowners demand higher standards of comfort. Ducted cooling systems that cover the cooling load of the whole house are being installed, replacing systems that cool single zones only. Those ducted systems have a rated cooling output greater than 10 kW – so the authors (and, it seems, Australian governmental decision makers) are worried about the impact on the stability of the power grid on hot days [*].

Once AC has been turned on for the first time in the hot season, homeowners don’t switch it off again when the theoretical ‘neutral’ summer temperature is reached; they keep it on and try to maintain a lower temperature (22–23°C) that is about constant irrespective of the temperature outside. So small differences in actual behavior cause huge error bars in the total cooling energy for a season:

The impact of this resetting of the cooling thermostat operation was found to be significant. A comparison was undertaken between cooling loads determined using the AccuRate default thermostat settings and the modified settings as described above. A single-storey brick veneer detached dwelling with concrete slab on ground floor and ceiling insulation was used for the comparison. The comparison was undertaken in both the Adelaide and the Darwin climate zones. In Adelaide the modified settings produced an increased annual cooling load 64% higher than that using the AccuRate default settings.

The report also confirms my anecdotal evidence: In winter (colder regions) people heat rooms to higher temperatures than ‘expected’; in summer (warmer regions) people want to cool to a lower temperature:

This is perhaps not surprising, de Dear notes that: “preferred temperature for a particular building did not necessarily coincide with thermal neutrality, and this semantic discrepancy was most evident in HVAC buildings where preference was depressed below neutrality in warm climates and elevated above neutrality in cold climates (ie people preferred to feel cooler than neutral in warm climates, and warmer than neutral in cold climates)” (Richard de Dear et al 1997, P xi).

I noticed that the same people who (over-)heat their rooms to 24°C in winter might want to cool to 20°C in summer. In Central Europe AC in private homes has been uncommon, but I believe it is on the rise there, too – not least because homeowners have become accustomed to a certain level of cooling from working in typical office buildings.

My conclusion is (yet again) that you cannot reliably ‘predict’ cooling energy. It’s already hard to do so for the heating energy of low-energy houses, and nearly impossible for cooling energy. All you can do – from a practical / systems-design perspective – is to make sure that there is an ‘infinite’ source of cooling energy available.

_________________________________

[*] Edit: And it actually happened in February 2017.

Entropy and Dimensions (Following Landau and Lifshitz)

Some time ago I wrote about volumes of spheres in multi-dimensional phase space – as needed in integrals in statistical mechanics.

The post was primarily about the curious fact that the ‘bulk of the volume’ of such spheres is contained in a thin shell beneath their hyperspherical surfaces. The trick for calculating something reasonable is to spot expressions you can Taylor-expand in the exponent.

Large numbers ‘do not get much bigger’ if multiplied by a factor, to be demonstrated again by Taylor-expanding such a large number in the exponent; I used this example:

Assuming N is about 10^{25}, its natural logarithm is about 58; in Ne^N = e^{\ln(N)+N} = e^{58+10^{25}} , the 58 can be neglected compared to N itself.

However, in the real world, numbers associated with locations and momenta of particles come with units. Calling the unit ‘length’ in phase space R_0, the large volume can be written as aN{(\frac{r}{R_0})}^N = ae^{\ln{(N)} + N\ln{(\frac{r}{R_0})}} , and the impact of an additional factor N also depends on the unit length chosen.

I did not yet mention the related issues with the definition of entropy. In this post I will follow the way Landau and Lifshitz introduce entropy in Statistical Physics, Volume 5 of their Course of Theoretical Physics.

Landau and Lifshitz introduce statistical mechanics top-down, starting from fundamental principles and from Hamiltonian classical mechanics: no applications, no definitions of ‘heat’ and ‘work’, no historical references needed for motivation. Classical phenomenological thermodynamics is only introduced after they are done with the statistical foundations. Both entropy and temperature are defined – these are useful fundamental properties spotted in the mathematical derivations and thus deserving special names. They cover both classical and quantum statistics in a small number of pages – LL’s style has been called terse or elegant.

The behaviour of a system with a large number of particles is encoded in a probability distribution function in phase space, a density. In the classical case this is a continuous function of phase-space co-ordinates. In the quantum case you consider distinct states – whose energy levels are densely packed, though. Moving from classical to quantum statistics means counting those states rather than integrating the smooth density function over a volume. There are equivalent states created by permutations of identical particles – but factoring that in is postponed, and not required for a first definition of entropy. A quasi-classical description is sufficient: using a minimum cell in phase space, whose dimensions are defined by Planck’s constant h, which has the dimension of an action – length times momentum.

Entropy as statistical weight

Entropy S is defined as the logarithm of the statistical weight \Delta \Gamma – the number of quantum states associated with the part of phase space used by the (sub)system. (Landau and Lifshitz use the concept of a – still large – subsystem embedded in a larger volume most consistently, in order to avoid reliance on the ergodic hypothesis, as mentioned in the preface.) In the quasi-classical view the statistical weight is the volume in phase space occupied by the system, divided by the size of the minimum unit cell defined by Planck’s constant h. Denoting momenta by p, positions by q, and using \Delta p and \Delta q as shortcuts for the multiple dimensions corresponding to s degrees of freedom…

S = \ln \Delta \Gamma = \ln \frac {\Delta p \Delta q}{(2 \pi \hbar)^s}

An example from solid state physics: if the system is considered a rectangular box in the physical world, possible quantum states related to vibrations can be visualized in terms of the standing waves that ‘fit’ into the box. The statistical weight would then single out the bunch of states the system actually ‘has’ / ‘uses’ / ‘occupies’ in the long run.
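
Just to make this definition concrete, here is a toy sketch in Python (my own illustration, not from LL; the particle mass and the spreads Δq and Δp are order-of-magnitude assumptions only):

import math

hbar = 1.0545718e-34  # reduced Planck constant [J s]

def entropy_quasi_classical(delta_p, delta_q, s):
    """Entropy as the log of the statistical weight for s degrees of
    freedom, assuming the same spread delta_p * delta_q per degree.
    Taking the log per degree first avoids overflow for large s."""
    return s * math.log(delta_p * delta_q / (2 * math.pi * hbar))

# A nitrogen-like molecule confined to a 1 m box at room temperature;
# the momentum spread ~ sqrt(m k T) is an order-of-magnitude estimate.
m, k, T = 4.7e-26, 1.38e-23, 300.0
delta_p = math.sqrt(m * k * T)                     # ~ 1.4e-23 kg m/s
print(entropy_quasi_classical(delta_p, 1.0, s=1))  # ~ 24 per degree of freedom
print(entropy_quasi_classical(delta_p, 1.0, s=3))  # three degrees of freedom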

Different sorts of statistical functions are introduced, and one reason for writing this article is to emphasize the difference between them: The density function associates each point in phase space – each possible configuration of the system, characterized by the momenta and locations of all particles – with a probability. These points are also called microstates. Taking into account the probabilities of finding the system in any of these microstates gives you the so-called macrostate, characterized by the statistical weight: how large or small a part of phase space the system will use when watched for a long time.

The canonical example is an ideal gas in a vessel: The most probable spatial distribution of particles is to find them spread out evenly; the most unlikely configuration is to have them concentrated in (nearly) the same location, like one corner of the box. The density function assigns probabilities to these configurations. As the even distribution is so much more likely, the \Delta q part of the statistical weight would cover all of the physical volume available. The statistical weight has to attain its maximum value in the most likely case, in equilibrium.

The significance of energies – and why there are logarithms everywhere.

Different sufficiently large subsystems of one big system are statistically independent – as their properties are defined by their bulk volume rather than by the surfaces interfacing with other subsystems – and the larger the volume, the larger the ratio of volume to surface. Thus the probability density function for the combined system – as a function of the momenta and locations of all particles in the total phase space – has to be equal to the product of the densities for each subsystem. Denoting the classical density by \rho and adding a subscript for the set of momenta and positions referring to a subsystem:

\rho(q,p) = \rho_1(q_1,p_1) \rho_2(q_2,p_2)

(Since these are probability densities, the actual probability is always obtained by multiplying with the differential(s) dqdp).

This means that the logarithm of the composite density is equal to the sum of the logarithms of the individual densities. This is the root cause of logarithms showing up everywhere in statistical mechanics.

A mechanical system of particles is characterized by only 7 ‘meaningful’ additive integrals: energy, momentum and angular momentum – they add up when you combine systems, in contrast to all the other millions of integration constants that would appear when solving the equations of motion exactly. Momentum and angular momentum are not that interesting thermodynamically, as one can change to a frame moving and rotating with the system (LL also cover rotating systems). So energy remains as the integral of outstanding importance.

From counting states to energy intervals

What we want is to relate entropy to energy, so assertions about numbers of states covered need to be translated to statements about energy and energy ranges.

LL denote the probability to find a system in (micro-)state n with energy E_n as w_n – the quantum equivalent of the density \rho . As per the additivity just mentioned above, the logarithm of w_n has to be a linear function of the energy E_n; in particular, w_n depends on n only via E_n, and thus LL omit the subscript n for w:

w_n = w(E_n)

(They omit any symbol whenever possible to keep their notation succinct ;-))

A thermodynamic system has an enormous number of (mechanical) degrees of freedom. Fluctuations are small as per the law of large numbers in statistics, and the probability to find a system with a certain energy can be approximated by a sharp, delta-function-like peak at the system’s energy E. So in thermal equilibrium its energy has a very sharp peak. It occupies a very thin ‘layer’ of thickness \Delta E in phase space – around the hypersurface that characterizes its average energy E.

Statistical weight \Delta \Gamma can be considered the width of the related function: Energy-wise broadening of the macroscopic state \Delta E needs to be translated to a broadening related to the number of quantum states.

We change variables, so the connection between Γ and E is made via the derivative of Γ with respect to E. E is an integral, statistical property of the whole system, and the probability for the system to have energy E in equilibrium is W(E)dE . E is not discrete, so this is again a probability density. It is capital W now – in contrast to w_n , which says something about the ‘population’ of each quantum state with energy E_n .

A quasi-continuous number of states per energy Γ is related to E by the differential:

d\Gamma = \frac{d\Gamma}{dE} dE.

As E peaks so sharply and the energy levels are packed so densely, it is reasonable to use the function (small) w but calculate it for an argument value E. Capital W(E) is a probability density as a function of total energy; small w(E) is a function of the discrete energies denoting states – so it has to be multiplied by the number of states in the range in question:

W(E)dE = w(E)d\Gamma

Thus…

W(E) = w(E)\frac{d\Gamma}{dE}.

The delta-function-like functions (of energy or states) have to be normalized, and the widths ΔΓ and ΔE multiplied by the respective heights W and w taken at the average energy E_\text{avg} have to be 1, respectively:

W(E_\text{avg}) \Delta E = 1
w(E_\text{avg}) \Delta \Gamma = 1

(… and the ‘average’ energy is what is simply called ‘the’ energy in classical thermodynamics).

So \Delta \Gamma is inversely proportional to the probability of the most likely state (of average energy). This can also be concluded from the quasi-classical definition: If you imagine a box full of particles, the least probable configuration is equivalent to all particles occupying a single cell in phase space. The probability for that is (size of the unit cell) over (size of the box) times smaller than the probability of finding the particles evenly distributed over the whole box … which is exactly the definition of \Delta \Gamma .

The statistical weight is finally:

\Delta \Gamma =  \frac{d\Gamma(E_\text{avg})}{dE} \Delta E.

… the broadening in \Gamma , proportional to the broadening in E

The more familiar (?) definition of entropy

From that, you can recover another familiar definition of entropy, perhaps the more common one. Taking the logarithm…

S = \ln (\Delta \Gamma) = -\ln (w(E_\text{avg})).

As \ln w is linear in E, the averaging over E can be extended to the whole logarithm: \ln w(E_\text{avg}) is the average of \ln w(E_n) . Then the definition of ‘averaging over states n’ can be used: multiply the value for each state n by the probability w_n and sum up:

S = - \sum_{n} w_n \ln w_n.

… which is the first statistical expression for entropy I had once learned.
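
A minimal numeric sketch of this formula (my own illustration, not from LL): for a toy system with four states, the uniform distribution maximizes -\sum_n w_n \ln w_n , in line with the equilibrium argument above.

import math

def entropy(w):
    """Gibbs entropy -sum w_n ln w_n of a normalized distribution."""
    assert abs(sum(w) - 1.0) < 1e-12
    return -sum(p * math.log(p) for p in w if p > 0)

uniform = [0.25, 0.25, 0.25, 0.25]  # all 4 states equally likely
peaked  = [0.97, 0.01, 0.01, 0.01]  # almost certainly in one state

print(entropy(uniform))  # ln 4 ~ 1.386 - the maximum for 4 states
print(entropy(peaked))   # ~ 0.168 - much smaller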

LL do not introduce Boltzmann’s constant k here

It is effectively set to 1 – so entropy is defined without a reference to k. k is only mentioned in passing later: in case one wishes to measure energy and temperature in different units. But there is no need to do so if you define entropy and temperature based on first principles.

Back to units

In a purely classical description, based on the volume in phase space instead of the number of states, there would be no cell of minimum size, and instead of the statistical weight we would simply have this volume. But then entropy would be calculated in a very awkward unit, the logarithm of an action. Every change of the unit for measuring volumes in phase space would result in an additive constant – the deeper reason why entropy in a classical context is only defined up to such a constant.

So the natural unit called R_0 above should actually be Planck’s constant, taken to a power defined by the number of degrees of freedom.

Temperature

The first task to be solved in statistical mechanics is to find a general way of formulating a proper density function, small w_n , as a function of energy E_n . You can either assume that the system has a clearly defined energy upfront – the system lives on an ‘energy hypersurface’ in phase space – or you can consider it immersed in a larger system, later identified with a ‘heat bath’, which causes the system to reach thermal equilibrium. These two concepts are called the micro-canonical and the canonical distribution (or Gibbs distribution), and the actual distribution functions don’t differ much, because the energy peaks so sharply in the canonical case as well. It’s in that type of calculation that those hyperspheres are actually needed.

Temperature as a concept emerges from a closer look at these distributions, but LL introduce it upfront from simpler considerations: It is sufficient to know that 1) entropy depends only on energy, 2) both are additive functions of subsystems, and 3) entropy is a maximum in equilibrium. You divide one system into two subsystems. The total change in entropy has to be zero, as entropy is at a maximum (in equilibrium), and whatever energy dE_1 leaves one subsystem has to be received as dE_2 by the other. Taking a look at the total entropy S as a function of the energy of one subsystem, and using dE_2/dE_1 = -1 (the total energy is fixed):

0 = \frac{dS}{dE_1} = \frac{dS_1}{dE_1} + \frac{dS_2}{dE_1} =
= \frac{dS_1}{dE_1} + \frac{dS_2}{dE_2} \frac{dE_2}{dE_1} =
= \frac{dS_1}{dE_1} - \frac{dS_2}{dE_2}

So \frac{dS_x}{dE_x} has to be the same for each subsystem x. Cutting one of the subsystems in two, you can use the same argument again. So there is one very interesting quantity that is the same for every subsystem – \frac{dS}{dE} . Let’s call it 1/T, and let’s call T the temperature.
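
A quick numeric check of this argument (a sketch of mine, using toy ideal-gas-like entropies S_i = c_i \ln E_i that are not meant to model any specific system): scanning how a fixed total energy is split between two subsystems, the total entropy peaks exactly where the derivatives dS_i/dE_i coincide.

import math

# Toy entropies S_i(E_i) = c_i * ln(E_i), hence dS_i/dE_i = c_i / E_i.
c1, c2, E_total = 3.0, 5.0, 10.0

best = max(
    (i * 0.001 for i in range(1, 10000)),   # candidate values for E1
    key=lambda E1: c1 * math.log(E1) + c2 * math.log(E_total - E1),
)
print(best)                   # ~ 3.75 = E_total * c1 / (c1 + c2)
print(c1 / best)              # dS1/dE1 at the maximum ...
print(c2 / (E_total - best))  # ... equals dS2/dE2: both ~ 0.8 = 1/T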

Spheres in a Space with Trillions of Dimensions

I don’t venture into speculative science writing – this is just about classical statistical mechanics; actually about a special mathematical aspect. It was one of the things I found particularly intriguing in my first encounters with statistical mechanics and thermodynamics a long time ago – a curious feature of volumes.

I was mulling over how to ‘briefly motivate’ the calculation below in a comprehensible way, a task I might have failed at years ago already, when I tried to use illustrations and metaphors (here and here). When introducing the ‘kinetic theory’ in thermodynamics, often the pressure of an ideal gas is calculated first, by considering averages over the momenta transferred by particles hitting the wall of a container. This is rather easy to understand, but still sort of an intermediate view – between phenomenological thermodynamics, which does not explain the microscopic origin of properties like energy, and ‘true’ statistical mechanics. The latter makes use of a phase space whose number of dimensions is proportional to the number of particles. One cubic meter of gas contains ~10^25 molecules. Each possible state of the system is depicted as a point in so-called phase space: A point in this abstract space represents one possible system state. For each (point-like) particle, 6 numbers are added to a gigantic vector – 3 for its position and 3 for its momentum (mass times velocity) – so the space has ~6 × 10^25 dimensions. Thermodynamic properties are averages taken over the state of one system watched for a long time, or over a lot of ‘comparable’ systems starting from different initial conditions. At the heart of statistical mechanics are distribution functions that describe how a set of systems described by such gigantic vectors evolves. This function is like the density of an incompressible fluid in hydrodynamics. I resorted to using the metaphor of a jelly in hyperspace before.

Taking averages means multiplying the ‘mechanical’ property by the density function and integrating over the space where these functions live. The volume of interest is a generalized N-ball, defined as the volume within a generalized sphere. A ‘sphere’ is the surface of all points at a certain distance (‘radius’ R) from an origin

x_1^2 + x_2^2 + ... + x_N^2 = R^2

(x_n being the co-ordinates in phase space and assuming that all co-ordinates of the origin are zero). Why a sphere? Because states are ordered or defined by energy, and larger energy means a greater ‘radius’ in phase space. It’s all about rounded surfaces enclosing each other. The simplest example for this is the ellipse of the phase diagram of the harmonic oscillator – more energy means a larger amplitude and a larger maximum velocity.

And here is finally the curious fact I actually want to talk about: Nearly all the volume of an N-ball with so many dimensions is concentrated in an extremely thin shell beneath its surface. Then an integral over a thin shell can be extended over the full volume of the sphere without adding much, while making integration simpler.

This can be seen immediately from plotting the volume of a sphere over its radius: The volume of an N-ball is always equal to some numerical factor times the radius to the power of the number of dimensions. In three dimensions the volume is the traditional, honest volume proportional to r^3; in two dimensions the ‘ball’ is a circle, and its ‘volume’ is its area. In a realistic thermodynamic system, the volume is then proportional to r^N with a very large N.

The power function r^N turns more and more into an L-shaped function with increasing exponent N. The volume increases enormously just by adding a small additional layer to the ball. In order to compare the function for different exponents, both ‘radius’ and ‘volume’ are shown in relation to their respective maximum values, R and R^N.
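
To see the L-shape in numbers rather than in a plot, a few lines of Python suffice (my own sketch; the radius grid and the exponents are arbitrary choices):

# Normalized volume (r/R)^N of an N-ball, tabulated for r/R = 0.1 ... 1.0:
for N in (3, 10, 100, 1000):
    row = [round((r / 10) ** N, 4) for r in range(1, 11)]
    print(f"N={N:5d}:", row)
# For N=1000 the normalized volume is essentially 0 until r/R gets very
# close to 1: nearly all of the volume sits just beneath the surface.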

The interesting layer ‘with all the volume’ is certainly much smaller than the radius R, but of course it must not be too small to contain something. How thick the substantial shell has to be can be found by investigating the volume in more detail – using a ‘trick’ that is needed often in statistical mechanics: Taylor expanding in the exponent.

A function can be replaced by its tangent if it is sufficiently ‘straight’ at this point. Mathematically it means: If dx is added to the argument x, then the function at the new point, f(x + dx), can be approximated by f(x) + [the slope df/dx] * dx. The next, higher-order term is proportional to the curvature, the second derivative – then the function is replaced by a 2nd-order polynomial. Joseph Nebus has recently published a more comprehensible and detailed post about how this works.

So the first terms of this so-called Taylor expansion are:

f(x + dx) = f(x) + dx{\frac{df}{dx}} + {\frac{dx^2}{2}}{\frac{d^2f}{dx^2}} + ...

If dx is small higher-order terms can be neglected.

In the curious case of the ball in hyperspace we are interested in the ‘remaining volume’ V(r – dr). This should be small compared to V(r) = ar^N (a being the uninteresting constant numerical factor) after we remove a layer of thickness dr containing the substantial ‘bulk of the volume’.

However, trying to expand the volume V(r – dr) = a(r – dr)^N, we get:

V(r - dr) = V(r) - a \, dr N r^{N-1} + a{\frac{dr^2}{2}}N(N-1)r^{N-2} - ...
= ar^N(1 - N{\frac{dr}{r}} + {\frac{N(N-1)}{2}}({\frac{dr}{r}})^2 - ...)

But this is not exactly what we want: It is finally not an expansion, a polynomial, in (the small) ratio of dr/r, but in Ndr/r, and N is enormous.

So here’s the trick: 1) Apply the definition of the natural logarithm ln:

V(r - dr) = ae^{N\ln(r - dr)} = ae^{N\ln(r(1 - {\frac{dr}{r}}))}
= ae^{N(\ln(r) + \ln(1 - {\frac{dr}{r}}))}
= ar^N e^{N\ln(1 - {\frac{dr}{r}})} = V(r)e^{N\ln(1 - {\frac{dr}{r}})}

2) Spot a function that can be safely expanded in the exponent: the natural logarithm of 1 plus something small, -dr/r. So we can expand near 1: The derivative of ln(x) is 1/x (thus equal to 1 near x = 1), and ln(1) = 0. So ln(1 – x) is about –x for small x:

V(r - dr) \simeq V(r)e^{N(0 - {\frac{dr}{r}})} = V(r)e^{-N{\frac{dr}{r}}}

3) Re-arrange fractions …

V(r - dr) = V(r)e^{-\frac{dr}{(\frac{r}{N})}}

This is now the remaining volume, after the thin layer dr has been removed. It is small in comparison with V(r) if the exponential function is small, thus if {\frac{dr}{(\frac{r}{N})}} is large or if:

dr \gg \frac{r}{N}

Summarizing: The volume of the N-dimensional hyperball is contained mainly in a shell dr below the surface if the following inequalities hold:

{\frac{r}{N}} \ll dr \ll r

The second inequality is needed to state that the shell is thin – and to allow for the expansion in the exponent; the first is needed to make the shell thick enough so that it contains something.
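
A numeric sanity check of these inequalities (a sketch of mine): peeling off a layer of thickness dr = k \cdot r/N leaves a fraction of about e^{-k} of the volume, independent of N – computed via the logarithm so that a huge N causes no overflow.

import math

def remaining_fraction(N, dr_over_r):
    """Fraction of an N-ball's volume left after peeling a layer dr:
    (1 - dr/r)^N, evaluated via the logarithm to survive huge N."""
    return math.exp(N * math.log(1.0 - dr_over_r))

N = 10**6
for k in (0.1, 1.0, 10.0):  # shell thickness dr = k * r/N
    print(k, remaining_fraction(N, k / N))
# k=0.1 -> ~0.905, k=1 -> ~0.368 (1/e), k=10 -> ~4.5e-05:
# a shell only a few r/N thick already contains nearly all the volume.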

This might help to ‘visualize’ a closely related non-intuitive fact about large numbers, like e^N: If you multiply such a number by a factor, ‘it does not get that much bigger’ in a sense – even if the factor is itself a large number:

Assuming N is about 10^{25}, its natural logarithm is about 58 and…

Ne^N = e^{\ln(N)+N} = e^{58+10^{25}}

… 58 can be neglected compared to N itself. So a multiplicative factor becomes something to be neglected in a sum!
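
The same comparison in a few lines of Python – working with logarithms, since e^N itself is far beyond anything floating point numbers can hold:

import math

N = 1e25
# N * e^N = e^(ln N + N); compare the two terms in the exponent:
print(math.log(N))      # ~ 57.6
print(math.log(N) / N)  # ~ 5.8e-24: the factor N shifts the exponent
                        # by an utterly negligible relative amount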

I used a plain number – base e – deliberately, as I am obsessed with units. ‘r’ in phase space would be associated with a unit incorporating lots of lengths and momenta. Note that I use the term ‘dimensions’ in two slightly different, but related ways here: One is the mathematical dimension of (an abstract) space; the other is about cross-checking the physical units in case a ‘number’ is something that can be measured – like meters. The co-ordinate numbers in the vector refer to measurable physical quantities. Applying the definition of the logarithm just to r^N would result in a dimensionless number N side-by-side with something that has the dimension of a logarithm of the unit.

Using r – a number with dimensions of length – as base, it has to be expressed as a plain number, a multiple of the unit length R_0 (like ‘1 meter’). So comparing the original volume of the ball a{(\frac{r}{R_0})}^N to one a factor of N bigger …

aN{(\frac{r}{R_0})}^N = ae^{\ln{(N)} + N\ln{(\frac{r}{R_0})}}

… then ln(N) can be neglected as long as \frac{r}{R_0} is not extreeeemely tiny. Using the same argument as for base e above, we are on the safe side (and can neglect factors) if r is of about the same order of magnitude as the ‘unit length’ R_0 . The argument about negligible factors is an argument about plain numbers – and those ‘don’t exist’ in the real world, as one could always decide to measure the ‘radius’ in units of, say, 10^{-30} ‘meters’, which would make the original absolute number small and thus the additional factor non-negligible. One might save the argument by saying that we would always use units that sort of match the typical dimensions (size) of a system.

Saying everything in another way: If the volume of a hyperball ~rN is multiplied by a factor, this corresponds to multiplying the radius r by a factor very, very close to 1 – the Nth root of the factor for the volume. Only because the number of dimensions is so large, the volume is increased so much by such a small increase in radius.

As the ‘bulk of the volume’ is contained in a thin shell, the total volume is about the product of the surface area and the thickness of the shell dr. The N-ball is bounded by a ‘sphere’ with one dimension less than the ball. Increasing the volume by a factor means that the surface area and/or the thickness have to be increased by factors whose product yields the volume increase factor. dr scales with r, and thus does not change much – the two inequalities derived above still hold. Most of the volume factor ‘goes into’ the factor for increasing the surface. ‘The surface becomes the volume.’

This was long-winded. My excuse: Richard Feynman, too, took great pleasure in explaining the same phenomenon in different ways. In his lectures you can hear him speak to himself when he says something along the lines of: Now let’s see if we really understood this – let’s try to derive it in another way…

And above all, he says (in a lecture that is more about math than about physics)

Now you may ask, “What is mathematics doing in a physics lecture?” We have several possible excuses: first, of course, mathematics is an important tool, but that would only excuse us for giving the formula in two minutes. On the other hand, in theoretical physics we discover that all our laws can be written in mathematical form; and that this has a certain simplicity and beauty about it. So, ultimately, in order to understand nature it may be necessary to have a deeper understanding of mathematical relationships. But the real reason is that the subject is enjoyable, and although we humans cut nature up in different ways, and we have different courses in different departments, such compartmentalization is really artificial, and we should take our intellectual pleasures where we find them.

___________________________________

Further reading / sources: Any theoretical physics textbook on classical thermodynamics / statistical mechanics. I am just re-reading mine.

And Now for Something Completely Different: Rotation Heat Pump!

Heat pumps for space heating are all very similar: Refrigerant evaporates, pressure is increased by a scroll compressor, refrigerant condenses, pressure is reduced in an expansion valve. *yawn*

The question is:

Can a compression heat pump be built in a completely different way?

Austrian start-up ECOP did it: They  invented the so-called Rotation Heat Pump.

It does not have a classical compressor, and the ‘refrigerant’ does not undergo a phase transition. A pressure gradient is created by centrifugal forces: The whole system rotates, including the high-pressure (heat sink) and low-pressure (heat source) heat exchangers. The low-pressure part of the system is positioned closer to the rotation axis, and heat sink and heat source are connected at the axis (using heating water). The system rotates at up to 1800 revolutions per minute.

A mixture of noble gases is used in a Joule (Brayton) process, driven in a cycle by a fan. Gas is compressed and thus heated up; then it is cooled at constant pressure and energy is released to the heat sink. After expansion, the gas is heated up again at low pressure by the heat source.

In the textbook Joule cycle, a turbine and a compressor share a common axis: The energy released by the turbine is used to drive the compressor. This is essential, as compression and expansion energies are of the same order of magnitude, and both are considerably larger than the net energy difference – the actual input energy.

In contrast to that, a classical compression heat pump uses a refrigerant that is condensed while releasing heat and then evaporated again at low pressure. There is no mini-turbine to reduce the pressure but only an expansion valve, as there is not much energy to gain.

This explains why the Rotation Heat Pump absolutely has to have compression efficiencies of nearly 100%, compared to, say, the 85% efficiency of a scroll compressor in a heat pump used for space heating:

Some numbers for a Joule process (from this German ECOP paper): On expansion of the gas 1200 kW are gained, but 1300 kW are needed for compression – if there were no losses at all. So the net input power is 100 kW. But if the efficiency of the compression is reduced from 100% to 80%, about 1600 kW are needed, and thus a net input power of about 500 kW – five times the power compared to the ideal compressor! The coefficient of performance would plummet from 10 to 2.3.
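
Redoing this arithmetic in a few lines (a sketch of mine: the expansion and compression powers are the figures quoted from the ECOP paper, while the 1000 kW heat output is my assumption, chosen to reproduce the ideal COP of 10):

# Back-of-envelope Joule-cycle arithmetic:
expansion_kw   = 1200.0  # power recovered on expansion (ECOP paper)
compression_kw = 1300.0  # power for ideal, lossless compression (ECOP paper)
heat_out_kw    = 1000.0  # assumed heat output, matching COP 10 at 100 kW

for eff in (1.0, 0.8):
    actual_compression = compression_kw / eff
    net_input = actual_compression - expansion_kw
    print(f"eff={eff:.0%}: net input {net_input:.0f} kW, "
          f"COP ~ {heat_out_kw / net_input:.1f}")
# eff=100%: net input 100 kW, COP ~ 10.0
# eff=80%:  net input 425 kW, COP ~ 2.4 (the paper's rounded figures:
#           about 500 kW and a COP of 2.3)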

I believe these challenging requirements are the reason why Rotation Heat Pumps are ‘large’ and built for industrial processes. In addition to the high COP, this heat pump is also very versatile: Since there are no phase transitions, you can pick your favorite corner of the thermodynamic state diagram at will: This heat pump works for very different combinations of temperatures of the hot target and the cold source.

Re-Visiting Carnot’s Theorem

The proof by contradiction used in physics textbooks is one of those arguments that appear surprising, then self-evident, then deceptive in its simplicity. You – or maybe only: I – cannot resist turning it over and over in your head again, viewing it from different angles.

tl;dr: I just wanted to introduce the time-honored tradition of ASCII text art images to illustrate Carnot’s Theorem, but this post got out of hand when I mulled over how to refute an erroneous counter-argument. As there are still research papers being written about Carnot’s efficiency, I feel vindicated for writing a really long post though.

Carnot‘s arguments prove that there is a maximum efficiency of a thermodynamic heat engine – a machine that turns heat into mechanical energy. He gives the maximum value by evaluating one specific, idealized process, and then proves that a machine with higher efficiency would give rise to a paradox. The engine uses part of the heat available in a large, hot reservoir of heat and turns it into mechanical work and waste heat – the latter dumped to a colder ‘environment’ in a 4-step process. (Note that while our modern reformulation of the proof by contradiction refers to the Second Law of Thermodynamics, Carnot’s initial version was based on the caloric theory.)

The efficiency of such an engine η – mechanical energy per cycle over input heat energy – only depends on the two temperatures (More details and references here):

\eta_\text{carnot} = \frac {T_1-T_2}{T_1}

These are absolute temperatures in Kelvin; this universal efficiency can be used to define what we mean by absolute temperature.

I am going to use ‘nice’ numbers. To make ηcarnot equal to 1/2, the hot temperature is
T1 = 273°C = 546 K, and the colder ‘environment’ has T2 = 0°C = 273 K.

If this machine is run in reverse, it uses mechanical input energy to ‘pump’ energy from the cold environment to the hot reservoir – it is a heat pump using the ambient reservoir as a heat source. The Coefficient of Performance (COP, ε) of the heat pump is heat output over mechanical input, the inverse of the efficiency of the corresponding engine. εcarnot is 2 for the temperatures given above.
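
In code, with the ‘nice’ temperatures of this post (a trivial sketch of mine):

def eta_carnot(t_hot, t_cold):
    """Carnot engine efficiency; temperatures in Kelvin."""
    return (t_hot - t_cold) / t_hot

T1, T2 = 546.0, 273.0          # the 'nice' numbers used in this post
print(eta_carnot(T1, T2))      # 0.5
print(1 / eta_carnot(T1, T2))  # 2.0 - the corresponding heat pump COP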

If we combine two such perfect machines – an engine and a heat pump, both connected to the hot space and to the cold environment, their effects cancel out: The mechanical energy released by the engine drives the heat pump which ‘pumps back’ the same amount of energy.

In the ASCII images energies are translated to arrows, and the number of parallel arrows indicates the amount of energy per cycle (or power). For each device, the number of arrows flowing in and out is the same; energy is always conserved. I am viewing this from the heat pump’s perspective, so I call the cold environment the source, and the hot environment the room.

Neither of the heat reservoirs is heated or cooled in this ideal case, as the same amount of energy flows from and to each of the heat reservoirs:

|----------------------------------------------------------|
|         Hot room at temperature T_1 = 273°C = 546 K      |
|----------------------------------------------------------|
           | | | |                         | | | |
           v v v v                         ^ ^ ^ ^
           | | | |                         | | | |
       |------------|                 |---------------|
       |   Engine   |->->->->->->->->-|   Heat pump   |
       |  Eta = 1/2 |->->->->->->->->-| COP=2 Eta=1/2 |
       |------------|                 |---------------|
             | |                             | |
             v v                             ^ ^
             | |                             | |
|----------------------------------------------------------| 
|        Cold source at temperature T_2 = 0°C = 273 K      | 
|----------------------------------------------------------|

If either of the two machines works less than perfectly, in tandem with a perfect machine, everything is still fine:

If the engine is far less than perfect and has an efficiency of only 1/4 – while the heat pump still works perfectly – more of the engine’s heat energy input is now converted to waste heat and diverted to the environment:

|----------------------------------------------------------|
|         Hot room at temperature T_1 = 273°C = 546 K      |
|----------------------------------------------------------|
           | | | |                           | |  
           v v v v                           ^ ^  
           | | | |                           | |  
       |------------|                 |---------------|
       |   Engine   |->->->->->->->->-|   Heat pump   |
       |  Eta = 1/4 |                 | COP=2 Eta=1/2 |
       |------------|                 |---------------|
            | | |                             |
            v v v                             ^
            | | |                             |
|----------------------------------------------------------| 
|        Cold source at temperature T_2 = 0°C = 273 K      | 
|----------------------------------------------------------|

Now two net units of energy flow from the hot room to the environment (summing up the arrows to and from the devices):

|----------------------------------------------------------|
|         Hot room at temperature T_1 = 273°C = 546 K      |
|----------------------------------------------------------|
                              | |                                
                              v v                                
                              | | 
                     |------------------|
                     |   Combination:   |
                     | Eta=1/4 COP=1/2  |
                     |------------------|                            
                              | |                              
                              v v                              
                              | |                             
|----------------------------------------------------------| 
|        Cold source at temperature T_2 = 0°C = 273 K      | 
|----------------------------------------------------------|

Using a real-life heat pump with a COP of 3/2 (< 2) together with a perfect engine …

|----------------------------------------------------------|
|         Hot room at temperature T_1 = 273°C = 546 K      |
|----------------------------------------------------------|
          | | | |                             | | | 
          v v v v                             ^ ^ ^ 
          | | | |                             | | |
       |------------|                 |-----------------|
       |   Engine   |->->->->->->->->-|    Heat pump    |
       |  Eta = 1/2 |->->->->->->->->-|     COP=3/2     |
       |------------|                 |-----------------|
            | |                                 |
            v v                                 ^
            | |                                 |
|----------------------------------------------------------| 
|        Cold source at temperature T_2 = 0°C = 273 K      | 
|----------------------------------------------------------|

… causes again a non-paradoxical net flow of one unit of energy from the room to the environment.

In the most extreme case, a poor heat pump (not worth the name) with a COP of 1 just translates mechanical energy into heat energy 1:1. This is a resistive heating element, a heating rod – and net heat fortunately flows from hot to cold, without paradoxes:

|----------------------------------------------------------|
|         Hot room at temperature T_1 = 273°C = 546 K      |
|----------------------------------------------------------|
            | |                                |   
            v v                                ^   
            | |                                |   
       |------------|                 |-----------------|
       |   Engine   |->->->->->->->->-|   'Heat pump'   |
       |  Eta = 1/2 |                 |     COP = 1     |
       |------------|                 |-----------------|
             |                                 
             v                                 
             |                                 
|----------------------------------------------------------| 
|        Cold source at temperature T_2 = 0°C = 273 K      | 
|----------------------------------------------------------|

The textbook paradox is encountered when an ideal heat pump is combined with an allegedly better-than-possible engine, e.g. one with an efficiency:

ηengine = 2/3 (> ηcarnot = 1/2)

|----------------------------------------------------------|
|         Hot room at temperature T_1 = 273°C = 546 K      |
|----------------------------------------------------------|
           | | |                           | | | |
           v v v                           ^ ^ ^ ^
           | | |                           | | | |
       |------------|                 |---------------|
       |   Engine   |->->->->->->->->-|   Heat pump   |
       |  Eta = 2/3 |->->->->->->->->-| COP=2 Eta=1/2 |
       |------------|                 |---------------|
             |                               | |
             v                               ^ ^
             |                               | |
|----------------------------------------------------------| 
|        Cold source at temperature T_2 = 0°C = 273 K      | 
|----------------------------------------------------------|

The net effect / heat flow is then:

|----------------------------------------------------------|
|        Hot room at temperature T_1 = 273°C = 546 K       | 
|----------------------------------------------------------| 
                             | 
                             ^ 
                             | 
                   |------------------| 
                   |   Combination:   | 
                   | Eta=3/2; COP=1/2 | 
                   |------------------| 
                             | 
                             ^ 
                             | 
|----------------------------------------------------------| 
|       Cold source at temperature T_2 = 0°C = 273 K       | 
|----------------------------------------------------------|

One unit of heat would flow from the environment to the room, from the colder to the warmer body without any other change being made to the system. The combination of these machines would violate the Second Law of Thermodynamics; it is a Perpetuum Mobile of the Second Kind.

If the heat pump has a higher COP than the inverse of the perfect engine’s efficiency, a similar paradox arises, and again one unit of heat flows in the forbidden direction:

|----------------------------------------------------------|
|         Hot room at temperature T_1 = 273°C = 546 K      |
|----------------------------------------------------------|
            | |                             | | |
            v v                             ^ ^ ^
            | |                             | | |
       |------------|                 |---------------|
       |   Engine   |->->->->->->->->-|   Heat pump   |
       |  Eta = 1/2 |                 |    COP = 3    |
       |------------|                 |---------------|
             |                               | |
             v                               ^ ^
             |                               | |
|----------------------------------------------------------| 
|        Cold source at temperature T_2 = 0°C = 273 K      | 
|----------------------------------------------------------|

A weird question: Can’t we circumvent the paradoxes if we pair the impossible superior devices with poorer ones (of the reverse type)?

|----------------------------------------------------------|
|         Hot room at temperature T_1 = 273°C = 546 K      |
|----------------------------------------------------------|
           | | |                             | |  
           v v v                             ^ ^  
           | | |                             | |  
       |------------|                 |---------------|
       |   Engine   |->->->->->->->->-|   Heat pump   |
       |  Eta = 2/3 |->->->->->->->->-|    COP = 1    |
       |------------|                 |---------------|
             |                                
             v                                
             |                                
|----------------------------------------------------------| 
|        Cold source at temperature T_2 = 0°C = 273 K      | 
|----------------------------------------------------------|

Indeed: If the COP of the heat pump (= 1) is smaller than the inverse of the (impossible) engine’s efficiency (3/2), there will be no apparent violation of the Second Law – one unit of net heat flows from hot to cold.

An engine with low efficiency 1/4 would ‘fix’ the second paradox involving the better-than-perfect heat pump:

|----------------------------------------------------------|
|         Hot room at temperature T_1 = 273°C = 546 K      |
|----------------------------------------------------------|
           | | | |                          | | |
           v v v v                          ^ ^ ^
           | | | |                          | | |
       |------------|                 |---------------|
       |   Engine   |->->->->->->->->-|   Heat pump   |
       |  Eta = 1/4 |                 |     COP=3     |
       |------------|                 |---------------|
            | | |                            | |
            v v v                            ^ ^
            | | |                            | |
|----------------------------------------------------------| 
|        Cold source at temperature T_2 = 0°C = 273 K      | 
|----------------------------------------------------------|

But we cannot combine heat pumps and engines at will, just to circumvent the paradox – one counter-example is sufficient: Any realistic engine combined with any realistic heat pump – plus all combinations of those machines with ‘worse’ ones – has to result in a net flow from hot to cold …

The Second Law identifies such ‘sets’ of engines and heat pumps that will all work together nicely. It’s easier to see this when all examples are condensed into one formula:

The heat extracted in total from the hot room – Q1 – is the difference between the heat used by the engine and the heat delivered by the heat pump, both of which are defined in relation to the same mechanical work W:

Q_1 = W\left (\frac{1}{\eta_\text{engine}}-\varepsilon_\text{heatpump}\right)

This is also automatically equal to Q2, the heat delivered to the cold source, as another quick calculation shows – or by just considering that energy is conserved: Some heat goes into the combination of the two machines; part of it – W – flows internally from the engine to the heat pump. But no part of the input Q1 can be lost, so the output of the combined machine has to match the input. Energy ‘losses’ such as energy due to friction will flow to either of the heat reservoirs: If an engine is less than perfect, more heat will be wasted to the environment; and if the heat pump is less than perfect, a greater part of the mechanical energy will be translated to heat only 1:1. You might even be lucky: Some part of the heat generated by friction might end up in the hot room.

As Q1 has to be ≥ 0 according to the Second Law, the performance numbers have to be related by this inequality:

\frac{1}{\eta_\text{engine}}\geq\varepsilon_\text{heatpump}

The equal sign is true if the effects of the two machines just cancel each other.

If we start from a combination of two perfect machines (ηengine = 1/2 = 1/εheatpump) and increase either ηengine or εheatpump, this condition would be violated and heat would flow from cold to hot without efforts.
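
This formula makes it easy to check all the ASCII-art scenarios above in one go (a sketch of mine): a negative Q1 means net heat flows into the hot room for free – a paradox.

def q1_per_work(eta_engine, cop_heatpump):
    """Net heat drawn from the hot room per unit of mechanical work W:
    Q1/W = 1/eta - epsilon. Negative values violate the Second Law."""
    return 1.0 / eta_engine - cop_heatpump

scenarios = [
    (0.5,   2.0),  # two perfect machines:            0    -> effects cancel
    (0.25,  2.0),  # poor engine, perfect pump:       2    -> allowed
    (0.5,   1.5),  # perfect engine, real pump:       0.5  -> allowed
    (2 / 3, 2.0),  # impossible engine, perfect pump: -0.5 -> paradox!
    (0.5,   3.0),  # perfect engine, impossible pump: -1   -> paradox!
]
for eta, cop in scenarios:
    print(f"eta={eta:.2f}, COP={cop:.1f}: Q1/W = {q1_per_work(eta, cop):+.2f}")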

But an engine with an efficiency of 1 would also work happily with the worst heat pump, one with a COP of 1. No paradox would arise at first glance – as 1/1 ≥ 1:

|----------------------------------------------------------|
|         Hot room at temperature T_1 = 273°C = 546 K      |
|----------------------------------------------------------|
             |                                |   
             v                                ^   
             |                                |   
       |------------|                 |-----------------|
       |   Engine   |->->->->->->->->-|   'Heat pump'   |
       |   Eta = 1  |                 |      COP=1      |
       |------------|                 |-----------------|
                                               
                                               
                                               
|----------------------------------------------------------| 
|        Cold source at temperature T_2 = 0°C = 273 K      | 
|----------------------------------------------------------|

What’s wrong here?

Because of conservation of energy, ε is always greater than or equal to 1; so the set of valid combinations of machines, all consistent with each other, is defined by:

\frac{1}{\eta_\text{engine}}\geq\varepsilon_\text{heatpump}\geq1

… for all efficiencies η and COPs / ε of machines in a valid set. The combination η = ε = 1 is still not ruled out immediately.

But if the alleged best engine (in a ‘set’) had an efficiency of 1, then the alleged best heat pump would have a Coefficient of Performance of only 1 – and this is actually the only heat pump possible, as ε would have to be both lower than or equal to and greater than or equal to 1. It cannot get better without creating paradoxes!

If one real-life heat pump is found that is just slightly better than a heating rod – say
ε = 1.1 – then the performance numbers for the set of consistent, non-paradoxical machines need to fulfill:

\eta_\text{engine}\leq\eta_\text{best engine}

and

\varepsilon_\text{heatpump}\leq\varepsilon_\text{best heatpump}

… in addition to the inequality relating η and ε.

If ε = 1.1 is a candidate for the best heat pump, a set of valid machines would comprise:

  • All heat pumps with ε between 1 and 1.1 (as per the limits on ε)
  • All engines with η between 0 and 1/1.1 ≈ 0.9 (as per the inequality following from the Second Law, plus the limit on η).

Consistent sets of machines are thus given by a stronger condition – by adding a limit for both efficiency and COP ‘in between’:

\frac{1}{\eta_\text{engine}}\geq\text{Some Number}\geq\varepsilon_\text{heatpump}\geq1

Carnot designed a hypothetical ideal heat pump that could have a COP of εcarnot = 1/ηcarnot. It is a limiting case of a reversible machine, but feasible in principle. εcarnot is thus a valid upper limit for heat pumps, a candidate for Some Number. In order to make this inequality true for all sets of machines (ideal ones plus all worse ones), 1/ηcarnot = εcarnot also constitutes a limit for engines:

\frac{1}{\eta_\text{engine}}\geq\frac{1}{\eta_\text{carnot}}\geq\varepsilon_\text{heatpump}\geq1

So in order to rule out all paradoxes, Some Number in Between has to be provided for each set of machines. But what defines a set? As machines of totally different making have to work with each other without violating this inequality, this number can only be a function of the only parameters characterizing the system – the two temperatures.

Carnot’s efficiency is only a function of the temperatures. His hypothetical process is reversible, the machine can work either as a heat pump or an engine. If we could come up with a better process for a reversible heat pump (ε > εcarnot), the machine run in reverse would be an engine with η less than ηcarnot, whereas a ‘better’ engine would lower the upper bound for heat pumps.

If you have found one truly reversible process, both η and ε associated with it are necessarily the upper bounds of performance of the respective machines, so you cannot push Some Number in one direction or the other, and the efficiencies of all reversible engines have to be equal – and thus equal to ηcarnot. The ‘resistive heater’ with ε = 1 is the iconic irreversible device. It will not turn into a perfect engine with η = 1 when ‘run in reverse’.

The seemingly odd thing is that 1/ηcarnot appears like a lower bound for ε at first glance – if you just declare ηcarnot an upper bound for the corresponding engines and take the inverse – while in practice, and according to common sense, it is the maximum value for all heat pumps, including irreversible ones. (As a rule of thumb, a typical heat pump for space heating has a COP of only 50% of 1/ηcarnot.)

But this ‘contradiction’ is yet another way of stating that there is one universal performance indicator of all reversible machines making use of two heat reservoirs: The COP of a hypothetical ‘superior’ reversible heat pump would be at least 1/ηcarnot  … as good as Carnot’s reversible machine, maybe better. But the same is true for the hypothetical superior engine with an efficiency of at least ηcarnot. So the performance numbers of all reversible machines (all in one set, characterized by the two temperatures) have to be exactly the same.

Steam pump / a steam engine running in reverse

Historical piston compressor (from the time when engines with pistons looked like the ones in textbooks), installed in 1878 in the salt mine of Bex, Switzerland. In 1943 it was still in operation. Such machines used in salt processing were considered the first heat pumps.

An Efficiency Greater Than 1?

No, my next project is not building a Perpetuum Mobile.

Sometimes I mull over definitions of performance indicators. It seems straightforward that the efficiency of a wood log or oil burner is smaller than 1: you will never be able to turn the full caloric value into useful heat, due to various losses and incomplete combustion.

Our solar panels have an ‘efficiency’ or power ratio of about 16.5%. So 16.5% of the solar energy is converted to electrical energy, which does not seem a lot. However, that number is meaningless without adding economic context, as solar energy is free. Higher efficiency would allow for much smaller panels. If efficiency were only 1% and panels were incredibly cheap and I had ample roof space, I might not care though.

The coefficient of performance of a heat pump is 4–5, which sometimes leaves you with this weird feeling of using odd definitions. Electrical power is ‘multiplied’ by a factor always greater than one. Is that based on crackpottery?


Our heat pump (5 connections: 2x heat source – brine; 3x heating water – hot water / heating water supply, and joint return).

Actually, we are cheating here when considering the ‘input’ – in contrast to the way we view photovoltaic panels: If 1 kW of electrical power is magically converted to 4 kW of heating power, the remaining 3 kW are provided by a cold or lukewarm heat source. Since those are (economically) free, they don’t count. But you might still wonder why the number is so much higher than 1.

My favorite answer:

There is an absolute minimum temperature, and our typical refrigerators and heat pumps operate well above it.

The efficiency of thermodynamic machines is most often explained by starting with an ideal process using an ideal substance – a perfect gas as a refrigerant that runs in a closed circuit. (For more details see the pointers in the Further Reading section below.) The gas is expanded at a low temperature; this low temperature is constant as heat is transferred from the heat source to the gas. At a higher temperature the gas is compressed and releases heat. The heat released is the sum of the heat taken in at the lower temperature plus the electrical energy fed into the compressor – so there is no violation of energy conservation. In order to ‘jump’ from the lower to the higher temperature, the gas is compressed – by a compressor run on electrical power – without exchanging heat with the environment. This process repeats itself again and again, and with every cycle the same heat energy is released at the higher temperature.

In defining the coefficient of performance the energy from the heat source is omitted, in contrast to the electrical energy:

COP = \frac {\text{Heat released at higher temperature per cycle}}{\text{Electrical energy fed into the compressor per cycle}}

The efficiency of a heat pump is the inverse of the efficiency of an ideal engine – the same machine, running in reverse. The engine has an efficiency lower than 1 as expected. Just as the ambient energy fed into the heat pump is ‘free’, the related heat released by the engine to the environment is useless and thus not included in the engine’s ‘output’.


One of Austria’s last coal power plants – Kraftwerk Voitsberg, retired in 2006 (Florian Probst, Wikimedia). Thermodynamically, this is like a heat pump running in reverse. That’s why I don’t like it when a heat pump is said to ‘work like a refrigerator, just in reverse’ (hinting at: the useful heat provided by the heat pump is equivalent to the waste heat of the refrigerator). If you ran the heat pump’s cycle backwards, it would become sort of a steam power plant.

The calculation (see below) results in a simple expression, as the efficiency only depends on the two temperatures. Naming the higher temperature (heating water) T1 and the temperature of the heat source (‘environment’, our water tank for example) T2:

COP = \frac {T_1}{T_1-T_2}

The important thing here is that temperatures have to be inserted as absolute values: 0°C is equal to 273.15 Kelvin. So for a typical heat pump and floor loops the numerator is about 308 K (35°C), whereas the denominator is the difference between both temperature levels – 35°C and 0°C, so 35 K. Thus the theoretical COP is as high as 8.8!

Two silly examples:

  • If the heat pump operated close to absolute zero – say, pumping heat from 5 K to 40 K – the COP would only be
    40 / 35 = 1.14.
  • On the other hand, using the sun as a heat source (6000 K) and a sink hotter by the same 35 K, the COP would be
    6035 / 35 = 172.

So, as heat pump owners we are lucky to live in an environment that is rather hot compared to absolute zero – on a planet where the temperature differences between places are small compared to our distance from absolute zero.
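These numbers are quick to verify. Here is a minimal Python sketch of the ideal COP formula above (the function carnot_cop is my own name for it, just for illustration):

    def carnot_cop(t_hot_k: float, t_cold_k: float) -> float:
        """Ideal (Carnot) COP of a heat pump releasing heat at t_hot_k,
        drawing heat from a source at t_cold_k; temperatures in Kelvin."""
        return t_hot_k / (t_hot_k - t_cold_k)

    # Floor heating at 35°C, heat source (e.g. a water tank) at 0°C:
    print(carnot_cop(273.15 + 35.0, 273.15))  # ~8.8
    # Close to absolute zero, pumping heat from 5 K to 40 K:
    print(carnot_cop(40.0, 5.0))              # ~1.14
    # The sun (6000 K) as heat source, sink hotter by the same 35 K:
    print(carnot_cop(6035.0, 6000.0))         # ~172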

__________________________

Further reading:

Richard Feynman often used unusual approaches and new perspectives when explaining the basics in his legendary Physics Lectures. He introduces (potential) energy at the very beginning of the course, drawing on Carnot’s argument – even before he defines force, acceleration, velocity etc. (!) In deriving the efficiency of an ideal thermodynamic engine many chapters later, he pictures a funny machine made from rubber bands, but otherwise he follows the classical arguments:

Chapter 44 of Feynman’s Physics Lectures Vol 1, The Laws of Thermodynamics.

For an ideal gas, heat energies and mechanical energies are calculated for the four steps of Carnot’s ideal process – based on the Ideal Gas Law. The result is the much more universal efficiency that depends only on the two temperatures. There can’t be any better machine, as combining a supposedly better engine with an ideal heat pump / refrigerator (the same type of machine running in reverse) would violate the second law of thermodynamics – stated as a principle: Heat cannot flow from a colder to a warmer body and be turned into mechanical energy, with the remaining system staying the same.
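In the notation used above – T1 the higher, T2 the lower temperature – this ideal engine efficiency is just the reciprocal of the heat pump’s COP:

\eta = \frac{T_1 - T_2}{T_1} = \frac{1}{COP}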


Pressure over volume for Carnot’s process, when using the machine as an engine (run counter-clockwise, it describes a heat pump). AB: expansion at constant high temperature, BC: expansion without heat exchange (cooling), CD: compression at constant low temperature, DA: compression without heat exchange (gas heats up). (Image: Kara98, Wikimedia).

Feynman stated several times in his lectures that he did not want to teach the history of physics, or downplayed the importance of learning about the history of science a bit (though it seems he was well versed in it – as e.g. his efforts to follow Newton’s geometrical proof of Kepler’s Laws showed). For the historical background of the evolution of Carnot’s ideas and his legacy see the definitive resource on classical thermodynamics and its history – Peter Mander’s blog carnotcycle.wordpress.com:

What had once puzzled me is why we accidentally latched onto such a universal law, using just the Ideal Gas Law. The reason is that the Gas Law has the absolute temperature already built in. Historically, it did take quite a while until pressure, volume and temperature had been combined in a single equation – see Peter Mander’s excellent article on the historical background of this equation.

Having explained Carnot’s cycle and efficiency, every course in thermodynamics reveals a deeper explanation: The efficiency of an ideal engine can actually be used as a starting point for defining a new scale of temperature.

Temperature scale according to Kelvin (William Thomson)

Carnot engines with different efficiencies due to different lower temperatures. If one of the temperatures is declared the reference temperature, the other can be determined – or defined – by the efficiency of the ideal machine (Image: Olivier Cleynen, Wikimedia).

However, according to the following paper, Carnot did not rigorously prove that his ideal cycle is the optimum one. But it can be done by applying variational principles – optimizing the process for maximum work done or maximum efficiency:

Carnot Theory: Derivation and Extension, paper by Liqiu Wang

A Sublime Transition

Don’t expect anything philosophical or career-change-related. I am talking about water and its phase transition to ice because …

…the fact that a process so common and important as water freezing is not fully resolved and understood, is astonishing.

(Source)

There are more spectacular ways of triggering this transition than just letting a tank of water cool down slowly: Following last winter’s viral trend, fearless mavericks turned boiling water into snowflakes. Simply sublime desublimation?

Here is an elegant demo of Boiling water freezing in midair in the cold:

The science experiment took its toll: About 50 hobbyist scientists scalded themselves, ignoring the empirical rule about wind direction when spraying any kind of liquid:

“I accidentally threw all the BOILING water against the wind and burnt myself.”

Can it really be desublimation of water vapor? The reverse of this process, sublimation, is well known to science fiction fans:

Special effects supervisor Alex Weldon was charged with devising a way to realistically recreate the look of pools of steaming milky water that had been at the location. He concocted similar liquid with evaporated milk and white poster paint, mixed with water and poured into the set’s pools. Steam bubbling to the top was created with dry ice and steam machines, passed into the water via hidden tubing.

(Source: Star Trek online encyclopedia Memory Alpha on planet Vulcan.)

Dry ice is solid carbon dioxide, and it is the combination of temperature and atmospheric pressure on planet Earth that allows for the sublimation of CO2. The phase diagram shows that at an air pressure of 1 bar and room temperature (about 293 K = 20°C) only solid and gaseous CO2 can exist:

Carbon dioxide p-T phase diagram

If a chunk of dry ice is taken out of the refrigerator and thrown onto the disco’s dance floor, it will heat up a bit and cross the line between the solid and gas areas in the diagram.

Sublimation of dry ice (Wikimedia, public domain)

On the contrary, the phase diagram of water shows that at 1 bar (= 100 kPa) the direct transition from vapor to ice is not an option. Following the red horizontal 1-bar line you need to cross the green realm of the liquid phase:

Phase diagram of water (Wikimedia, User cmglee)

You would need to do the experiment in an atmosphere less than 1/100 as dense – below the triple point of water at about 6 mbar – to sublimate ice or desublimate vapor.

But experiments show that the green area seems to be traversed in a fraction of a second – and boiling water seems to cool down much faster than colder water!

It seems paradoxical, as more heat energy needs to be removed from boiling water (or vapor!) to cool it down to 0°C: The heat of vaporization is about 2,300 kJ/kg, whereas the specific heat of water is only about 4.2 kJ/(kg·K). Cooling 1 kg of liquid water from 100°C to 0°C thus removes only about 420 kJ, while condensing 1 kg of vapor alone releases 2,300 kJ.

I believe that the sudden freezing is due to the much more efficient heat transfer between the ambient air and vapor or tiny droplets, compared to the smaller heat flow from larger droplets to the air.

Mixing water vapor with air provides the best exposure of the wildly shaking water molecules to the slower air molecules. If not-yet-vaporized water droplets are thrown into the air, I blame the faster freezing on water’s surface tension, which decreases with increasing temperature:

Temperature dependence of the surface tension of water

Surface tension indicates the work it takes to create or maintain a surface between different phases or substances. The internal pressure inside a water droplet is proportional to the surface tension and inversely proportional to its radius – this follows from the work against air pressure needed to increase the size of a droplet. Assuming that droplets of different sizes are created with similar internal pressures, the average size of droplets will be smaller for higher temperatures.
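For a spherical droplet, this relation between the excess pressure Δp inside, the surface tension γ, and the radius r is the Young–Laplace equation:

\Delta p = \frac{2 \gamma}{r}

So at a given internal excess pressure, the lower surface tension of hotter water translates into a smaller droplet radius.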

A cup of water at 90°C will be dispersed into a larger number of smaller droplets – and thus a bigger surface exposed to air – than a cup at 70°C. The liquid with the lower surface tension will evaporate more quickly.

One more twist: If droplets are created in mid-air, as precipitates from condensation or desublimation, it takes work to create their surfaces – proportional to surface tension and area. On the other hand, you gain energy from these processes – proportional to volume. If the surface tension is lower but the total area is larger, at the same total volume, the net effect in terms of energy balance might be the same. But arguments based on energy balance alone don’t take into account the dynamic nature of this process, far off thermodynamic equilibrium: The theoretical energy gain can only be cashed in (within the time frame we are interested in) if condensation or freezing or desublimation is actually initiated – which in turn depends on the shape and area of the surface and on nuclei for droplets.
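Classical nucleation theory states this competition between surface cost and volume gain for a spherical nucleus of radius r – with γ the surface tension and Δg the energy gained per unit volume of the new phase:

\Delta G(r) = 4 \pi r^2 \gamma - \frac{4}{3} \pi r^3 \Delta g

Only nuclei larger than the critical radius r* = 2γ/Δg keep growing; smaller ones shrink again – which is why nucleation sites matter so much.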

Heat transfer is of course more efficient for larger temperature differences between air and water; perhaps that’s why the trend started in Siberia:

I have for sure not discussed every phenomenon involved here. Even hot water kept in a vessel can cool down and freeze faster than initially cooler water: This is called the Mpemba effect – a phenomenon known to our ancestors and rediscovered by the scientific community in the 1960s, after a curious African student refused to believe that his teachers called his observations on making ice cream ‘impossible’. The effect is surprisingly difficult to explain!

In 2013 a Mpemba effect contest was held, and the paper quoted at the top of this post was the winner (out of 22,000 submissions!). Physical chemist Nikola Bregovic emphasizes the impact of heat transfer and convection: Hot water is cooled faster due to more efficient heat transfer to the environment. Stirring the liquid will disturb convective flows inside the vessel and can prevent the Mpemba effect.

The effect could also be due to different spontaneous freezing temperatures of supercooled water. Ice crystals can start to grow instantly at a temperature below the theoretical freezing point:

Various parameters and processes – such as living organisms in the water, or heating the water to higher temperatures before! – might destroy or create nucleation sites for ice crystals. Supercooling of vapor might also allow for a jump over the green liquid area in the phase diagram, and thus for deposition of ice from vapor even at normal pressures.

Quoting Bregovic again:

I did not expect to find that water could behave in such a different manner under so similar conditions. Once again this small, simple molecule amazes and intrigues us with its magic.
~
Ice in our underground water tank, growing at the top layer of the heat exchanger tubes. These are only covered with water when the bulk of ice underneath makes the water level rise.