Consequences of the Second Law of Thermodynamics

Why a Carnot process using a Van der Waals gas – or any other fluid with an uncommon equation of state – also runs at Carnot’s efficiency.

Textbooks often refer to an ideal gas when introducing Carnot’s cycle – it’s easy to calculate heat energies and work in this case. Perhaps this might suggest that not only must the engine be ‘ideal’ – reversible – but that the working fluid also has to be ‘ideal’ in some sense? No, it does not, as explicitly shown in this paper: The Carnot cycle with the Van der Waals equation of state.

In this post I am considering a class of substances more general than the Van der Waals gas, and I come to the same conclusion. Unsurprisingly. You only need to imagine Carnot’s cycle in a temperature-entropy (T-S) diagram: The process is represented by a rectangle for both the ideal and the Van der Waals gas. The heat energies and work needed to calculate efficiency can be read off, and the – universal – maximum efficiency can be calculated without integrating over potentially wiggly pressure-volume curves.

But the fact that we can use the T-S diagram – that the concept of entropy makes sense at all – is a consequence of the Second Law of Thermodynamics. It also states that a Perpetuum Mobile of the Second Kind is not possible: You cannot build a machine that converts 100% of the heat energy in a temperature bath to mechanical energy. This statement sounds philosophical, but it puts constraints on the way real materials can behave, and I think these constraints on the relations between physical properties are stronger than one might intuitively expect. If you pick an equation of state – the pressure as a function of volume and temperature, like the wavy Van der Waals curve – the behavior of specific heat is locked in. In a sense, the functions describing the material’s properties have to conspire in just the right way to yield the simple rectangle in the T-S plane.

The efficiency of a perfectly reversible thermodynamic engine (converting heat to mechanical energy) has a maximum well below 100%. If the machine uses two temperature baths with constant temperatures T_1 and T_2, the heat energies exchanged between machine and baths Q_1 and Q_2 for an ideal reversible process are related by:

\frac{Q_1}{T_1} + \frac{Q_2}{T_2} = 0

(I wrote about the related proof by contradiction before – avoiding the notion of entropy at all costs.) This ideal process and this ideal efficiency could also be used to actually define the thermodynamic temperature (as it emerges from statistical considerations; I have followed Landau and Lifshitz’s arguments in this post on statistical mechanics and entropy).

Any thermodynamic process using any type of substance can be imagined as a combination of lots of Carnot engines operating between lots of temperature baths at different temperatures (see e.g. Feynman’s lecture). The area traced out in the p-V diagram in a cyclic process is split into infinitely many small Carnot processes. For each of them, small heat energies \delta Q are transferred. Summing up the contributions of all processes, only the loop at the edge remains, and thus …

\oint \frac{\delta Q}{T} = 0

which means that for a reversible process \frac{\delta Q}{T} actually has to be the total differential dS of a function S … that is called entropy. This argument used in thermodynamics textbooks is kind of a ‘reverse’ argument to the statistical one – which introduces ‘entropy first’ and ‘temperature second’.

What I need in the following derivations are the relations between differentials that represent a version of the First and Second Law:

The First Law of Thermodynamics states that heat is a form of energy, so

dE = \delta Q - pdV

The minus sign is due to the fact that the system loses energy by doing work when its volume increases. (There might be other thermodynamic degrees of freedom, like the magnetization of a magnetic substance – hence other pairs of variables analogous to p and V.)

Inserting the definition of entropy S as the total differential we obtain this relation …

dS = \frac{dE + pdV}{T}

… from which follow lots of relations between thermodynamic properties!

I will derive one of them to show how strong the constraints are that the Second Law actually imposes on the physical properties of materials: When the so-called equation of state is given – the pressure as a function of volume and temperature, p(V,T) – then you also know something about the specific heat. For an ideal gas, pV is simply a constant times temperature.

S is a function of the state, so picking independent variables V and T entropy’s total differential is:

dS = (\frac{\partial S}{\partial T})_V dT + (\frac{\partial S}{\partial V})_T dV

On the other hand, from the definition of entropy / the combination of 1st and 2nd Law given above it follows that

dS = \frac{1}{T} \left \{ (\frac{\partial E }{\partial T})_V dT + \left [ (\frac{\partial E }{\partial V})_T + p \right ]dV \right \}

Comparing the coefficients of dT and dV, the partial derivatives of entropy with respect to volume and temperature can be expressed as functions of energy and pressure. The order of partial differentiation does not matter:

\left[\frac{\partial}{\partial V}\left(\frac{\partial S}{\partial T}\right)_V \right]_T = \left[\frac{\partial}{\partial T}\left(\frac{\partial S}{\partial V}\right)_T \right]_V

Thus differentiating each derivative of S once more with respect to the other variable yields:

\left[\frac{\partial}{\partial V}\left(\frac{1}{T}\left(\frac{\partial E}{\partial T}\right)_V\right)\right]_T = \left[\frac{\partial}{\partial T}\left(\frac{1}{T}\left[\left(\frac{\partial E}{\partial V}\right)_T + p\right]\right)\right]_V

What I actually want is a result for the specific heat (\frac{\partial E }{\partial T})_V – the energy you need to put in per kelvin to heat up a substance at constant volume, usually called C_V. I keep going, hoping that something like this derivative will show up. The mixed derivative \frac{1}{T} \frac{\partial^2 E}{\partial V \partial T} shows up on both sides of the equation, and these terms cancel each other. Collecting the remaining terms:

0 = -\frac{1}{T^2} (\frac{\partial E }{\partial V})_T -\frac{1}{T^2} p + \frac{1}{T}(\frac{\partial p}{\partial T})_V

Multiplying by T^2 and re-arranging …

(\frac{\partial E }{\partial V})_T = -p +T(\frac{\partial p }{\partial T})_V = T^2(\frac{\partial}{\partial T}\frac{p}{T})_V

Again, noting that the order of differentiation does not matter, we can use this result to check whether the specific heat at constant volume – C_V = (\frac{\partial E }{\partial T})_V – depends on volume:

(\frac{\partial C_V}{\partial V})_T = \frac{\partial}{\partial V}[(\frac{\partial E }{\partial T})_V]_T = \frac{\partial}{\partial T}[(\frac{\partial E }{\partial V})_T]_V

But we know the last partial derivative already and insert the expression derived before – a function that is fully determined by the equation of state p(V,T):

(\frac{\partial C_V}{\partial V})_T= \frac{\partial}{\partial T}[(-p +T(\frac{\partial p }{\partial T})_V)]_V = -(\frac{\partial p}{\partial T})_V +  (\frac{\partial p}{\partial T})_V + T(\frac{\partial^2 p}{\partial T^2})_V = T(\frac{\partial^2 p}{\partial T^2})_V

So if the pressure depends, for example, only linearly on temperature, the second derivative with respect to T is zero, and C_V does not depend on volume but only on temperature. The equation of state says something about specific heat.

The idealized Carnot process contains four distinct steps. In order to calculate efficiency for a certain machine and working fluid, you need to calculate the heat energies exchanged between machine and bath on each of these steps. Two steps are adiabatic – the machine is thermally insulated, thus no heat is exchanged. The other steps are isothermal, run at constant temperature – only these steps need to be considered to calculate the heat energies denoted Q_1 and Q_2:

[Image: Carnot cycle in the p-V diagram]

Carnot process for an ideal gas: A-B: Isothermal expansion, B-C: Adiabatic expansion, C-D: isothermal compression, D-A: adiabatic compression. (Wikimedia, public domain, see link for details).

I am using the First Law again and insert the result for (\frac{\partial E}{\partial V})_T which was obtained from the combination of both Laws – the goal is to express heat energy as a function of pressure and specific heat:

\delta Q= dE + p(T,V)dV = (\frac{\partial E}{\partial T})_V dT + (\frac{\partial E}{\partial V})_T dV + p(T,V)dV
= C_V(T,V) dT + [-p +T(\frac{\partial p(T,V)}{\partial T})_V] dV + p(T,V)dV = C_V(T,V)dT + T(\frac{\partial p(T,V)}{\partial T})_V dV

Heat Q is not a function of the state defined by V and T – that’s why the inexact (incomplete) differential δQ is denoted by the Greek δ. The change in heat energy depends on how exactly you get from one state to another. But we know what the process should be in this case: It is isothermal, therefore dT is zero and heat energy is obtained by integrating over volume only.

We need p as a function of V and T. The equation of state for an ideal gas says that pV is proportional to temperature. I am now considering a more general equation of state of the form …

p = f(V)T + g(V)

The Van der Waals equation of state takes into account that particles in the gas interact with each other and that they have a finite volume. (Switching units: from total volume V [m³] to specific volume v [m³/kg], so that the specific gas constant R [kJ/(kg·K)] can be used rather than absolute numbers of particles – the more common representation, to be compared to pv = RT):

p = \frac{RT}{v - b} - \frac{a}{v^2}

This equation also matches the general pattern.

[Image: Van der Waals isotherms]

Van der Waals isotherms (curves of constant temperature) in the p-V plane: Depending on temperature, the functions show a more or less pronounced ‘wave’ with a maximum and a minimum, in contrast to the ideal-gas-like hyperbolas (p = RT/v) at high temperatures. (By Andrea Insinga, Wikimedia, for details see link.)

In both cases pressure depends only linearly on temperature, and so (\frac{\partial C_V}{\partial V})_T is 0. Thus specific heat does not depend on volume, and I want to stress that this is a consequence of the fundamental Laws and the p(T,V) equation of state, not an arbitrary, additional assumption about this substance.
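As a quick cross-check of the result (\frac{\partial C_V}{\partial V})_T = T(\frac{\partial^2 p}{\partial T^2})_V, here is a minimal sympy sketch (my own illustration, not code from the paper) applying it to the Van der Waals equation of state:

```python
import sympy as sp

T, v, R, a, b = sp.symbols('T v R a b', positive=True)

# Van der Waals equation of state: p = R*T/(v - b) - a/v**2
p = R*T/(v - b) - a/v**2

# (dC_V/dv)_T = T * (d^2 p/dT^2)_v, as derived from the First and Second Law
dCV_dv = T * sp.diff(p, T, 2)
print(sp.simplify(dCV_dv))  # prints 0: C_V cannot depend on volume
```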

The isothermal heat energies are thus given by the following, integrating T(\frac{\partial p(T,V)}{\partial T})_V  = T f(V) over V:

Q_1 = T_1 \int_{V_A}^{V_B} f(V) dV
Q_2 = T_2 \int_{V_C}^{V_D} f(V) dV

(So if Q_1 is positive, Q_2 has to be negative.)

In the adiabatic processes δQ is zero, thus

C_V(T,V)dT = -T(\frac{\partial p(T,V)}{\partial T})_V dV = -T f(V) dV
\int \frac{C_V(T,V)}{T}dT = \int -f(V) dV

This is useful as we already know that specific heat only depends on temperature for the class of substances considered, so for each adiabatic process…

\int_{T_1}^{T_2} \frac{C_V(T)}{T}dT = \int_{V_B}^{V_C} -f(V) dV
\int_{T_2}^{T_1} \frac{C_V(T)}{T}dT = \int_{V_D}^{V_A} -f(V) dV

Adding these equations, the two integrals over temperature cancel and

\int_{V_B}^{V_C} f(V) dV = -\int_{V_D}^{V_A} f(V) dV

Carnot’s efficiency is work – the difference of the absolute values of the two heat energies – over the heat energy invested at higher temperature T_1 :

\eta = \frac {Q_1 - \left | Q_2 \right |}{Q_1} = 1 - \frac {\left | Q_2 \right |}{Q_1}
\eta = 1 - \frac {T_2}{T_1} \frac {\left | \int_{V_C}^{V_D} f(V) dV \right |}{\int_{V_A}^{V_B} f(V) dV}

The integral from A to B can be replaced by an integral over the alternative path A-D-C-B (as the integral over the closed path is zero for a reversible process) and

\int_{A}^{B} = \int_{A}^{D} + \int_{D}^{C}+ \int_{C}^{B}

But the relation between the B-C and A-D integral derived from considering the adiabatic processes is equivalent to

-\int_{C}^{B} = \int_{B}^{C} = - \int_{D}^{A} = \int_{A}^{D}

Thus two terms in the alternative integral cancel and

\int_{A}^{B} = \int_{D}^{C}

… and finally the integrals in the efficiency cancel. What remains is Carnot’s efficiency:

\eta = \frac {T_1 - T_2}{T_1}
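To make this tangible, here is a small numeric sketch of the whole cycle for a Van der Waals gas – all parameter values are made up for illustration. It uses the adiabatic condition derived above, with a constant C_V (permissible, since C_V cannot depend on volume here):

```python
import numpy as np

# Hypothetical parameters (not a real gas): p = R*T/(v-b) - a/v**2
R, b, c_v = 0.287, 0.001, 0.718
T1, T2 = 350.0, 300.0        # hot / cold bath temperatures [K]
vA, vB = 0.05, 0.10          # isothermal expansion at T1

# Isothermal heat: integrating T*f(v) = R*T/(v-b) over v
Q1 = R * T1 * np.log((vB - b) / (vA - b))

# Adiabats: c_v*ln(T2/T1) = -R*ln((v2-b)/(v1-b)) fixes vC and vD
factor = (T1 / T2) ** (c_v / R)
vC = b + (vB - b) * factor
vD = b + (vA - b) * factor
Q2 = R * T2 * np.log((vD - b) / (vC - b))   # negative: heat rejected

print(1 - abs(Q2) / Q1, 1 - T2 / T1)        # both ~0.1429
```

Note that the parameter a does not even enter the heat energies – only f(V) = R/(v – b) does.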

But what if the equation of state is more complex, and the specific heat also depends on volume?

Yet another way to state the Second Law is to say that the efficiencies of all reversible processes have to be equal – and equal to Carnot’s efficiency. Otherwise you get into a thicket of contradictions (as I highlighted here). The authors of the VdW paper say they are able to prove this for infinitesimal cycles, which of course sounds plausible: As mentioned at the beginning, splitting up any reversible process into many processes that use only a tiny part of the co-ordinate space is the ‘standard textbook procedure’ (see e.g. Feynman’s lecture, especially figure 44-10).

But you can see it immediately, without calculating anything, by having a look at the process in a T-S diagram instead of the p-V representation. A process made up of two isothermal and two adiabatic steps is by definition (of entropy, see above) a rectangle, no matter what the equation of state of the working substance is. Heat energy and work can easily be read off as the areas between or below the straight lines:

[Image: Carnot cycle in the T-S diagram]

Carnot process displayed in the entropy-temperature plane. No matter if the working fluid is an ideal gas following the pv = RT equation of state or if it is a Van der Waals gas that may show a ‘wave’ with a maximum and a minimum in a p-V diagram – in the T-S diagram all of this will look like rectangles and thus exhibit the maximum (Carnot’s) efficiency.

In the p-V diagram one might see curves of weird shape, but when calculating the relation between entropy and temperature, the weird dependencies of specific heat and pressure on V and T compensate for each other. They are related because of the differential relation implied by the Second Law.
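For the class of substances considered above this can even be made explicit – a short sketch using the relations derived earlier: Comparing the two expressions for dS, with C_V = C_V(T) and (\frac{\partial p}{\partial T})_V = f(V), gives

dS = \frac{C_V(T)}{T} dT + f(V) dV

and thus, integrating,

S(T,V) = \int \frac{C_V(T)}{T} dT + \int f(V) dV + S_0

Since the two variables separate, isotherms are horizontal lines and reversible adiabats (dS = 0) are vertical lines in the T-S plane – the wiggly p-V curves have disappeared into the two integrals.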

Computers, Science, and History Thereof

I am reading three online resources in parallel – on the history and the basics of computing, computer science, software engineering, and the related culture and ‘philosophy’. An accidental combination I find most enjoyable.

Joel on Software: Joel Spolsky’s blog – a collection of classic essays. What every developer needs to know about Unicode. New terms like Astronaut Architects and Leaky Abstractions. How to start a self-funded software company, how to figure out the price of software, how to write functional specifications. Bringing back memories of my first encounters with Microsoft VBA. He has the best examples – Martian Headsets to explain web standards.

The blog started in 1999 – rather shortly after I had entered the IT industry. So it is an interesting time capsule, capturing technologies and trends I was sort of part of – including the relationship with one large well-known software company.

Somewhere deep in Joel’s blog I found references to another classic; it was in advice on how to show passion as an applicant for a software developer job. Tell them how reading this moved you to tears:

Structure and Interpretation of Computer Programs. I think I have found the equivalent to Feynman’s Physics Lectures in computer science! I have hardly ever read a textbook or attended a class that was both so philosophically insightful and useful in a hands-on, practical way. Using Scheme (Lisp) as an example, important concepts are introduced step-by-step, via examples, viewed from different perspectives.

It was amazing how far you can get with purely Functional Programming. I did not even notice that they had not used a single assignment (Data Mutation) until far into the course.

The quality of the resources made available for free is incredible – which holds for all the content I am praising in this post: full textbook, video lectures with transcripts, slides with detailed comments. It is also good to know and reassuring that despite the allegedly fast-paced changes of technology, basic concepts have not changed that much in decades.

But if you are already indulging in nostalgic thoughts why not catch up on the full history of computing?

Creatures of Thought. A sublime book-like blog on the history of computing – starting with the history of telephone networks and telegraphs, covering computing machines – electro-mechanical or electronic – as well as related and maybe underappreciated hardware components like the relay, and including biographic vignettes of the heroes involved.

The author’s PhD thesis (available for download on the About page) covers the ‘information utility’ vision that was ultimately superseded by the personal computer. This is an interesting time capsule for me as well, as this story ends about where my personal journey started – first touching personal computers in the late 1980s, but having been taught the basics of programming by sending my batch jobs to an ancient mainframe.

From such diligently done history of engineering I can only learn not to rush to any conclusions. There are no simple causes and effects, or unambiguous stories about who invented what and who was first. It’s all subtle evolution and meandering narratives, randomness and serendipity. Quoting from the post that indicates the beginning of the journey, on the origins of the electric telegraph:

Our physics textbooks have packaged up the messy past into a tidy collection of concepts and equations, eliding centuries of development and conflict between competing schools of thought. Ohm never wrote the formula V = IR, nor did Maxwell create Maxwell’s equations.

Though I will not attempt to explore all the twists and turns of the intellectual history of electricity, I will do my best to present ideas as they existed at the time, not as we retrospectively fit them into our modern categories.

~

[Image: Phone, 1970s, Austria]

The kind of phone I used at the time when the video lectures for Structure and Interpretation of Computer Programs had been recorded and when I submitted my batch jobs of Fortran code to be compiled. I have revived the phone now and then.

 

Tinkering, Science, and (Not) Sharing It

I stumbled upon this research paper called PVC polyhedra:

We describe how to construct a dodecahedron, tetrahedron, cube, and octahedron out of pvc pipes using standard fittings.

In particular, if we take a connector that takes three pipes each at 120 degree angles from the others (this is called a “true wye”) and we take elbows of the appropriate angle, we can make the edges come together below the center at exactly the correct angles.

A pivotal moment: What you consider tinkering is actually research-paper-worthy science. Here are some images from the Chief Engineer’s workbench.

The supporting constructions of our heat exchangers are built from standard parts connected at various angles:

The final result can be a cuboid for holding meandering tubes:

… or cascaded prisms with an n-gon base – for holding spirals of flexible tubes:

The implementation of this design is documented here (a German post whose charm would be lost in translation unless I wanted to create Internet Poetry).

But I also started up my time machine – in order to find traces of my polyhedra research in the early 1980s. From photos and drawings of the three-dimensional crystals in mineralogy books I figured out how to draw two-dimensional maps of maximally connected surface areas. I cut out the map, and glued together the remaining free edges. Today I would be made redundant by Origami AI.

I filled several shelves with polyhedra of increasing number of faces, starting with a tetrahedron and culminating with this rhombicosidodecahedron. If I recall correctly, I cheated a bit with this one and created some of the pyramids as completely separate items.

I think this was a rather standard hobby for the typical nerdy child, among things like growing crystals from solutions of toxic chemicals, building a makeshift rotatable telescope tripod from scraps, or verifying the laws of optics using prisms and lenses from ancient dismantled devices.

The actually interesting thing is that this photo is the only trace of any of these hobbies. In the many years after creating this stuff – and destroying it again – I never thought about documenting it. Until today. It seems we weren’t into sharing in those days.

Spheres in a Space with Trillions of Dimensions

I don’t venture into speculative science writing – this is just about classical statistical mechanics; actually about a special mathematical aspect. It was one of the things I found particularly intriguing in my first encounters with statistical mechanics and thermodynamics a long time ago – a curious feature of volumes.

I was mulling over how to ‘briefly motivate’ the calculation below in a comprehensible way, a task I might have failed at years ago already, when I tried to use illustrations and metaphors (here and here). When introducing the ‘kinetic theory’ in thermodynamics, often the pressure of an ideal gas is calculated first, by considering averages over momenta transferred from particles hitting the wall of a container. This is rather easy to understand, but still sort of an intermediate view – between phenomenological thermodynamics, which does not explain the microscopic origin of properties like energy, and ‘true’ statistical mechanics. The latter makes use of a phase space whose number of dimensions is proportional to the number of particles. One cubic meter of gas contains ~10²⁵ molecules. Each possible state of the system is depicted as a point in this abstract phase space: For each (point-like) particle, 6 numbers are added to a gigantic vector – 3 for its position and 3 for its momentum (mass times velocity) – so the space has ~6 × 10²⁵ dimensions. Thermodynamic properties are averages taken over the state of one system watched for a long time, or over a lot of ‘comparable’ systems starting from different initial conditions. At the heart of statistical mechanics are distribution functions that describe how a set of systems, each represented by such a gigantic vector, evolves. Such a function is like the density of an incompressible fluid in hydrodynamics. I resorted to using the metaphor of a jelly in hyperspace before.

Taking averages means multiplying the ‘mechanical’ property by the density function and integrating over the space where these functions live. The volume of interest is a generalized N-ball, defined as the volume within a generalized sphere. A ‘sphere’ is the surface of all points at a certain distance (‘radius’ R) from an origin:

x_1^2 + x_2^2 + ... + x_ {N}^2 = R^2

(x_n being the co-ordinates in phase space, and assuming that all co-ordinates of the origin are zero). Why a sphere? Because states are ordered or defined by energy, and larger energy means a greater ‘radius’ in phase space. It’s all about rounded surfaces enclosing each other. The simplest example is the ellipse in the phase diagram of the harmonic oscillator – more energy means a larger amplitude and a larger maximum velocity.

And here is finally the curious fact I actually want to talk about: Nearly all the volume of an N-ball with so many dimensions is concentrated in an extremely thin shell beneath its surface. Then an integral over a thin shell can be extended over the full volume of the sphere without adding much, while making integration simpler.

This can be seen immediately from plotting the volume of a sphere as a function of the radius: The volume of an N-ball is always equal to some numerical factor times the radius to the power of the number of dimensions. In three dimensions, the volume is the traditional, honest volume proportional to r³; in two dimensions the ‘ball’ is a circle, and its ‘volume’ is its area. In a realistic thermodynamic system, the volume is proportional to r^N with a very large N.

The power function r^N turns more and more into an L-shaped function with increasing exponent N. The volume increases enormously just by adding a small additional layer to the ball. In order to compare the function for different exponents, both ‘radius’ and ‘volume’ are shown in relation to their respective maximum values, R and R^N.
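A quick numeric illustration (a sketch; the choices of N and shell thickness are arbitrary): since the volume scales as r^N, the fraction of the volume inside a shell of relative thickness d is 1 – (1 – d)^N.

```python
# Fraction of an N-ball's volume within a shell of relative thickness d:
# volume scales as r**N, so the fraction is 1 - (1 - d)**N.
d = 0.01  # shell thickness: 1% of the radius
for N in [3, 100, 10_000]:
    print(N, 1 - (1 - d) ** N)
# N=3: ~0.03, N=100: ~0.63, N=10000: ~1.0 - the shell contains nearly all
```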

The interesting layer ‘with all the volume’ is certainly much smaller than the radius R, but of course it must not be too small to contain something. How thick the substantial shell has to be can be found by investigating the volume in more detail – using a ‘trick’ that is needed often in statistical mechanics: Taylor expanding in the exponent.

A function can be replaced by its tangent if it is sufficiently ‘straight’ at this point. Mathematically it means: If dx is added to the argument x, then the function at the new point is f(x + dx), which can be approximated by f(x) + [the slope df/dx] · dx. The next, higher-order term is proportional to the curvature – the second derivative – and then the function is replaced by a 2nd-order polynomial. Joseph Nebus has recently published a more comprehensible and detailed post about how this works.

So the first terms of this so-called Taylor expansion are:

f(x + dx) = f(x) + dx{\frac{df}{dx}} + {\frac{dx^2}{2}}{\frac{d^2f}{dx^2}} + ...

If dx is small higher-order terms can be neglected.

In the curious case of the ball in hyperspace we are interested in the ‘remaining volume’ V(r – dr). This should be small compared to V(r) = ar^N (a being the uninteresting constant numerical factor) after we remove a layer of thickness dr containing the substantial ‘bulk of the volume’.

However, trying to expand the volume V(r – dr) = a(r – dr)N, we get:

V(r - dr) = V(r) - a\,dr\,Nr^{N-1} + a{\frac{dr^2}{2}}N(N-1)r^{N-2} - ...
= ar^N\left(1 - N{\frac{dr}{r}} + {\frac{N(N-1)}{2}}\left({\frac{dr}{r}}\right)^2 - ...\right)

But this is not exactly what we want: In the end it is not an expansion – a polynomial – in the (small) ratio dr/r, but in N\frac{dr}{r}, and N is enormous.

So here’s the trick: 1) Apply the definition of the natural logarithm ln:

V(r - dr) = ae^{N\ln(r - dr)} = ae^{N\ln(r(1 - {\frac{dr}{r}}))}
= ae^{N(\ln(r) + \ln(1 - {\frac{dr}{r}}))}
= ar^N e^{N\ln(1 - {\frac{dr}{r}})} = V(r)e^{N\ln(1 - {\frac{dr}{r}})}

2) Spot a function that can be safely expanded in the exponent: the natural logarithm of 1 minus something small, dr/r. So we can expand near 1: The derivative of ln(x) is 1/x (thus equal to 1 near x = 1), and ln(1) = 0. So ln(1 – x) is about –x for small x:

V(r - dr) = V(r)e^{N(0 - {\frac{dr}{r}})} \simeq V(r)e^{-N{\frac{dr}{r}}}
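As a cross-check of this expansion, sympy reproduces the leading terms of the logarithm (a minimal sketch):

```python
import sympy as sp

x = sp.symbols('x')
# ln(1 - x) expanded near x = 0: the leading term is -x, as used above
print(sp.series(sp.log(1 - x), x, 0, 3))  # -x - x**2/2 + O(x**3)
```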

3) Re-arrange fractions …

V(r - dr) = V(r)e^{-\frac{dr}{(\frac{r}{N})}}

This is now the remaining volume, after the thin layer dr has been removed. It is small in comparison with V(r) if the exponential function is small, thus if {\frac{dr}{(\frac{r}{N})}} is large or if:

dr \gg \frac{r}{N}

Summarizing: The volume of the N-dimensional hyperball is contained mainly in a shell dr below the surface if the following inequalities hold:

{\frac{r}{N}} \ll dr \ll r

The second one is needed to ensure that the shell is thin – and to allow for the expansion in the exponent; the first one is needed to make the shell thick enough so that it contains something.
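The two regimes can be checked numerically side by side (again a sketch with arbitrary numbers): the exact remaining fraction (1 – dr/r)^N versus the approximation e^{–N dr/r}, with dr measured in units of r/N:

```python
import numpy as np

N = 10**6
for k in [0.1, 1.0, 10.0, 100.0]:  # dr expressed in units of r/N
    x = k / N                       # the ratio dr/r
    exact = (1.0 - x) ** N          # remaining fraction V(r-dr)/V(r)
    approx = np.exp(-N * x)         # Taylor-in-the-exponent result
    print(k, exact, approx)
# k >> 1 (dr >> r/N): remaining fraction ~ 0, the shell holds the volume;
# k <= 1: the shell is too thin to contain much of it.
```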

This might help to ‘visualize’ a closely related non-intuitive fact about large numbers like e^N: If you multiply such a number by a factor, ‘it does not get that much bigger’ in a sense – even if the factor is itself a large number:

Assuming N is about 10²⁵, its natural logarithm is about 58, and …

Ne^N = e^{\ln(N)+N} = e^{58+10^{25}}

… 58 can be neglected compared to N itself. So a multiplicative factor becomes something to be neglected in a sum!

I used a plain number – base e – deliberately, as I am obsessed with units. ‘r’ in phase space would be associated with a unit incorporating lots of lengths and momenta. Note that I use the term ‘dimensions’ in two slightly different, but related ways here: One is the mathematical dimension of (an abstract) space; the other is about cross-checking the physical units in case a ‘number’ is something that can be measured – like meters. The co-ordinate numbers in the vector refer to measurable physical quantities. Applying the definition of the logarithm just to r^N would result in the dimensionless number N side-by-side with something that has the dimension of a logarithm of the unit.

Using r – a number with dimensions of length – as base, it has to be expressed as a plain number, a multiple of the unit length R_0 (like ‘1 meter’). So comparing the original volume of the ball a{(\frac{r}{R_0})}^N to one a factor of N bigger …

aN{(\frac{r}{R_0})}^N = ae^{\ln{(N)} + N\ln{(\frac{r}{R_0})}}

… then ln(N) can be neglected as long as \frac{r}{R_0} is not extreeeemely tiny. Using the same argument as for base e above, we are on the safe side (and can neglect factors) if r is of about the same order of magnitude as the ‘unit length’ R_0. The argument about negligible factors is an argument about plain numbers – and those ‘don’t exist’ in the real world, as one could always decide to measure the ‘radius’ in units of, say, 10⁻³⁰ ‘meters’, which would make the original absolute number small and thus the additional factor non-negligible. One might save the argument by saying that we would always use units that sort of match the typical dimensions (size) of a system.

Saying everything in another way: If the volume of a hyperball ~rN is multiplied by a factor, this corresponds to multiplying the radius r by a factor very, very close to 1 – the Nth root of the factor for the volume. Only because the number of dimensions is so large, the volume is increased so much by such a small increase in radius.

As the ‘bulk of the volume’ is contained in a thin shell, the total volume is about the product of the surface area and the thickness of the shell dr. The N-ball is bounded by a ‘sphere’ with one dimension less than the ball. Increasing the volume by a factor means that the surface area and/or the thickness have to be increased by factors so that the product of these factors yields the volume increase factor. dr scales with r, and thus does not change much – the two inequalities derived above still hold. Most of the volume factor ‘goes into’ the factor for increasing the surface. ‘The surface becomes the volume’.

This was long-winded. My excuse: Richard Feynman, too, took great pleasure in explaining the same phenomenon in different ways. In his lectures you can hear him speak to himself when he says something along the lines of: Now let’s see if we really understood this – let’s try to derive it in another way…

And above all, he says (in a lecture that is more about math than about physics)

Now you may ask, “What is mathematics doing in a physics lecture?” We have several possible excuses: first, of course, mathematics is an important tool, but that would only excuse us for giving the formula in two minutes. On the other hand, in theoretical physics we discover that all our laws can be written in mathematical form; and that this has a certain simplicity and beauty about it. So, ultimately, in order to understand nature it may be necessary to have a deeper understanding of mathematical relationships. But the real reason is that the subject is enjoyable, and although we humans cut nature up in different ways, and we have different courses in different departments, such compartmentalization is really artificial, and we should take our intellectual pleasures where we find them.

___________________________________

Further reading / sources: Any theoretical physics textbook on classical thermodynamics / statistical mechanics. I am just re-reading mine.

You Never Know

… when obscure knowledge comes in handy!

You can dismantle an old gutter without effort, and without any special tools:

Just by gently setting it into twisting motion, effectively applying ~1 Hz torsion waves that lead to fatigue failure within a few minutes.

I knew my stint in steel research in the 1990s would finally be good for something.

If you want to create a meme from this and tag it with Work Smart Not Harder, don’t forget to give me proper credits.

Ploughing Through Theoretical Physics Textbooks Is Therapeutic

And finally science confirms it, in a sense.

Again and again, I’ve harped on this pet theory of mine – on this blog and elsewhere on the web: At the peak of my immersion in the so-called corporate world, as a super-busy bonus miles-collecting consultant, I turned to the only solace: Getting up (even) earlier, and starting to re-read all my old mathematics and physics textbooks and lecture notes.

The effect was two-fold: It made me more detached, perhaps more Stoic when facing the seemingly urgent challenges of the accelerated world. Maybe it already prepared me for a long and gradual withdrawal from that biosphere. But surprisingly, I felt it also made my work results (even ;-)) better: I clearly remember compiling documentation I wrote after setting up some security infrastructure with a client. Writing precise documentation was again more like casting scientific research results into stone, carefully picking each term and trying to be as succinct as possible.

Like anybody else, I enjoy reading about psychological research that confirms my biases – my one-datapoint-based research – and here it finally is. Thanks to Professor Gary for sharing it. Science says that Corporate-Speak Makes You Stupid. Haven’t we – Dilbert fans – always felt that this has to be true?

… I’ve met otherwise intelligent people who, after working with management consultants, are convinced that infinitely-malleable concepts like “disruptive innovation,” “business ecosystem,” and “collaborative culture” have objective value.

In my post In Praise of Textbooks with Tons of Formulas I focused on possible positive explanations, like speeding up your rational System 2 ((c) Daniel Kahneman) by getting accustomed to mathematics again – by training yourself to recognize patterns and to think out of the box when trying to find the clever twist to solve a physics problem. Re-reading this, I cringe, though: Thinking out of the box has entered the corporate vocabulary already. Disclaimer: I am talking about ways to pick a mathematical approach by drawing intuitively on other, slightly related problems – in the way Kahneman explains the so-called intuition of experts as pattern recognition.

But perhaps the explanation is really as simple as this: We just need to shield ourselves from the negative effects of certain ecosystems and cultures that are particularly intrusive and mind-bending. So this is my advice to physics and math graduates: Do not rely on your infamous analytical skills forever. First, using that phrase in a job application sounds like phony, hollow BS (as unfortunately any self-advertising of social skills does). Second, these skills are real, but they will decay exponentially if you don’t hone them.

[Image: 6 volumes on all of Theoretical Physics – a 1960s self-consistent series by my late professor Wilhelm Macke]

Simulating Peak Ice

This year the ice in the tank finally melted between March 5 and March 10, as ‘visual inspection’ showed. Level sensor Mr. Bubble was confused during the melting phase; thus it was an interesting exercise to compare simulations to measurements.

Simulations use the measured ambient temperature and solar radiation as input; data points are taken every minute. Air temperature determines the heating energy needed by the house: Simulated heat load increases linearly as air temperature falls, reaching its maximum at a ‘cut-off’ temperature.

The control logic of the real controller (UVR1611 / UVR16x2) is mirrored in the simulation: The controller’s heating curve determines the set temperature for the heating water, and it switches the virtual 3-way valves: Diverting heating water either to the hygienic storage or the buffer tank for space heating, and including the collector in the brine circuit if air temperature is high enough compared to brine temperature. In the brine circuit, three heat exchangers are connected in series: Three temperatures at different points are determined self-consistently from three equations that use underground tank temperature, air temperature, and the heat pump evaporator’s power as input parameters.

The hydraulic schematic for reference, as displayed in the controller’s visualization (See this article for details on operations.)

The coefficient of performance (COP) of the heat pump, its heating power, and its electrical input power are determined by heating water temperature and brine temperature – from polynomial fit curves to the vendor’s data sheets.
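A minimal sketch of what evaluating such fit curves could look like – the data points below are invented placeholders, not vendor data, and the real fits may well use higher polynomial orders:

```python
import numpy as np

# Invented sample points: COP at (heating water temp, brine temp) [deg C]
T_hw = np.array([35.0, 35.0, 50.0, 50.0])
T_br = np.array([-5.0,  5.0, -5.0,  5.0])
cop  = np.array([ 4.2,  5.1,  3.1,  3.7])

# Simplest fit surface: COP ~ c0 + c1*T_hw + c2*T_br (least squares)
A = np.column_stack([np.ones_like(T_hw), T_hw, T_br])
coeffs, *_ = np.linalg.lstsq(A, cop, rcond=None)

def cop_model(t_hw, t_br):
    return coeffs[0] + coeffs[1] * t_hw + coeffs[2] * t_br

print(cop_model(45.0, 0.0))  # interpolated COP for one operating point
```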

So for every minute, the temperatures of the tanks – hot and cold – and the volume of ice can be calculated from energy balances: The heating circuits and tap water consume energy, the heat pump delivers energy, the heat exchanger in the tank releases or harvests energy, and the collector exchanges energy with the environment. The heat flow between tank and ground is calculated by numerically solving the heat equation, using the nearly constant temperature at about 10 meters depth as a boundary condition.
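To illustrate the structure of such a balance, here is a heavily simplified sketch of a single time step for the cold tank – names and numbers are my assumptions for illustration, not the actual simulation code:

```python
RHO_ICE, L_F = 917.0, 334e3   # ice density [kg/m3], latent heat [J/kg]
C_W, RHO_W = 4186.0, 1000.0   # water: specific heat [J/(kg K)], density
V_TANK = 25.0                 # tank volume [m3] - assumed placeholder
DT = 60.0                     # one data point per minute [s]

def tank_step(T_tank, V_ice, P_net):
    """One energy-balance step: P_net [W] is the net power into the tank
    (heat exchanger, collector, ground) minus the evaporator's power."""
    if T_tank > 0.0:
        # sensible heat: water temperature changes, clipped at freezing
        T_tank = max(T_tank + P_net * DT / (C_W * RHO_W * V_TANK), 0.0)
    elif P_net < 0.0:
        # deficit at 0 degC: latent heat is provided by freezing water
        V_ice = min(V_ice - P_net * DT / (L_F * RHO_ICE), V_TANK)
    elif V_ice > 0.0:
        # surplus at 0 degC melts ice first
        V_ice = max(V_ice - P_net * DT / (L_F * RHO_ICE), 0.0)
    else:
        # no ice left: surplus warms the water again
        T_tank = T_tank + P_net * DT / (C_W * RHO_W * V_TANK)
    return T_tank, V_ice
```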

For validating the simulation and for fine-tuning input parameters – like the thermal properties of ground or the building – I cross-check calculated versus measured daily / monthly energies and average temperatures.

Measurements for this winter show the artificial oscillations during the melting phase because Mr. Bubble faces the cliff of ice:

Simulations show the growth of ice and the evolution of the tank temperature in agreement with measurements. The melting of ice is in line with observations. The ‘plateau’ shows the oscillations that Mr. Bubble notices, but the true amplitude is smaller:

[Image: 2016-09 to 2017-03 – temperatures and ice formation, simulations]

Simulated peak ice is about 0.7 m³ greater than the measured value. This can be explained by my neglecting temperature gradients within the water or ice in the tank:

When there is only a bit of ice yet (small peak in December), tank temperature is underestimated: In reality, the density anomaly of water causes a zone of 4°C at the bottom, below the ice.

When the ice block is more massive (end of January), I overestimate brine temperature, as the ice is below 0°C, at least intermittently when the heat pump is turned on. Thus the temperature difference between ambient air and brine is underestimated, and so is the simulated energy harvested from the collector – so more energy needs to be provided by freezing water.

However, a difference in volume of less than 10% is uncritical for sizing the system, especially if you err on the side of caution. Temperature gradients in ice and convection in water should be less critical if heat exchanger tubes traverse the volume of the tank evenly – our prime design principle.

I have got questions about the efficiency of immersed heat exchangers in the tank – will heat transfer deteriorate if the layer of ice becomes too thick? No, according to this very detailed research report on simulations of ‘ice storage heat pump systems’ (p. 5). We grow so-called ‘ice on coil’, which is compared to flat-plate heat exchangers there:

… for the coil, the total heat transfer (UA), accounting for the growing ice surface, shows only a small decrease with growing ice thickness. The heat transfer resistance of the growing ice layer is partially compensated by the increased heat transfer area around the coil. In the case of the flat plate, on the contrary, also the UA-value decreases rapidly with growing ice thickness.

__________________________________

For the system’s configuration data see the last chapter of this documentation.