Consequences of the Second Law of Thermodynamics

Why a Carnot process using a Van der Waals gas – or other fluid with uncommon equation of state – also runs at Carnot’s efficiency.

Textbooks often refer to an ideal gas when introducing Carnot’s cycle – it’s easy to calculate heat energies and work in this case. This might suggest that not only the engine has to be ‘ideal’ – reversible – but that the working fluid also has to be ‘ideal’ in some sense. No, it does not, as explicitly shown in this paper: The Carnot cycle with the Van der Waals equation of state.

In this post I am considering a class of substances which is more general than the Van der Waals gas, and I come to the same conclusion. Unsurprisingly. You only need to imagine Carnot’s cycle in a temperature-entropy (T-S) diagram: The process is represented by a rectangle for both ideal and Van der Waals gas. Heat energies and work needed to calculate efficiency can be read off, and the – universal – maximum efficiency can be calculated without integrating over potentially wiggly pressure-volume curves.

But the fact that we can use the T-S diagram at all – that the concept of entropy makes sense – is a consequence of the Second Law of Thermodynamics. It also states that a Perpetuum Mobile of the Second Kind is not possible: You cannot build a machine that converts 100% of the heat energy in a temperature bath to mechanical energy. This statement sounds philosophical, but it puts constraints on the way real materials can behave, and I think these constraints on the relations between physical properties are stronger than one might intuitively expect. If you pick an equation of state – the pressure as a function of volume and temperature, like the wavy Van der Waals curve – the behavior of specific heat is locked in. In a sense, the functions describing the material’s properties have to conspire in just the right way to yield the simple rectangle in the T-S plane.

The efficiency of a perfectly reversible thermodynamic engine (converting heat to mechanical energy) has a maximum well below 100%. If the machine uses two temperature baths with constant temperatures T_1 and T_2, the heat energies exchanged between machine and baths Q_1 and Q_2 for an ideal reversible process are related by:

\frac{Q_1}{T_1} + \frac{Q_2}{T_2} = 0

(I wrote about the related proof by contradiction before – avoiding the notion of entropy at all costs.) This ideal process and this ideal efficiency could also be used to actually define the thermodynamic temperature (as it emerges from statistical considerations; I have followed Landau and Lifshitz’s arguments in this post on statistical mechanics and entropy).

Any thermodynamic process using any type of substance can be imagined as a combination of lots of Carnot engines operating between lots of temperature baths at different temperatures (see e.g. Feynman’s lecture). The area in the p-V diagram that is traced out in a cyclic process is split into infinitely many small Carnot processes. For each of these processes a small heat energy \delta Q is transferred. Summing up the contributions of all processes, only the loop at the edge remains, and thus …

\oint \frac{\delta Q}{T} = 0

which means that for a reversible process \frac{\delta Q}{T} actually has to be the total differential dS of a function of state … that is called entropy. This argument, used in thermodynamics textbooks, is kind of a ‘reverse’ argument to the statistical one – which introduces ‘entropy first’ and ‘temperature second’.

What I need in the following derivations are the relations between differentials that represent a version of the First and Second Law:

The First Law of Thermodynamics states that heat is a form of energy, so

dE = \delta Q - pdV

The minus sign is due to the fact that the system loses energy by doing work on its environment when the volume increases. (There might be other thermodynamic degrees of freedom, like the magnetization of a magnetic substance – and thus other pairs of conjugate variables analogous to p and V.)

Inserting the definition of entropy S as the total differential we obtain this relation …

dS = \frac{dE + pdV}{T}

… from which follow lots of relations between thermodynamic properties!

I will derive one of them to show how strong the constraints are that the Second Law imposes on the physical properties of materials: When the so-called equation of state is given – the pressure as a function of volume and temperature, p(V,T) – then you also know something about the specific heat. For an ideal gas pV is simply a constant times temperature.

S is a function of the state, so, picking V and T as independent variables, entropy’s total differential is:

dS = (\frac{\partial S}{\partial T})_V dT + (\frac{\partial S}{\partial V})_T dV

On the other hand, from the definition of entropy / the combination of 1st and 2nd Law given above it follows that

dS = \frac{1}{T} \left \{ (\frac{\partial E }{\partial T})_V dT + \left [ (\frac{\partial E }{\partial V})_T + p \right ]dV \right \}

Comparing the coefficients of dT and dV, the partial derivatives of entropy with respect to temperature and volume can be expressed in terms of energy and pressure:
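(\frac{\partial S}{\partial T})_V = \frac{1}{T} (\frac{\partial E }{\partial T})_V \qquad \qquad (\frac{\partial S}{\partial V})_T = \frac{1}{T} \left [ (\frac{\partial E }{\partial V})_T + p \right ]

Since S is a well-behaved function of state, the order of partial differentiation does not matter: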

\left[\frac{\partial}{\partial V}\left(\frac{\partial S}{\partial T}\right)_V \right]_T = \left[\frac{\partial}{\partial T}\left(\frac{\partial S}{\partial V}\right)_T \right]_V

Thus differentiating each derivative of S once more with respect to the other variable yields:

\left[ \frac{\partial}{\partial V} \left( \frac{1}{T} \left( \frac{\partial E }{\partial T} \right)_V \right) \right]_T = \left[ \frac{\partial}{\partial T} \left( \frac{1}{T} \left[ \left( \frac{\partial E }{\partial V} \right)_T + p \right] \right) \right]_V

What I actually want is a result for the specific heat: (\frac{\partial E }{\partial T})_V – the energy you need to put in per kelvin to heat up a substance at constant volume, usually called C_V. I keep going, hoping that something like this derivative will show up. The mixed derivative \frac{1}{T} \frac{\partial^2 E}{\partial V \partial T} shows up on both sides of the equation, and these terms cancel each other. Collecting the remaining terms:

0 = -\frac{1}{T^2} (\frac{\partial E }{\partial V})_T -\frac{1}{T^2} p + \frac{1}{T}(\frac{\partial p}{\partial T})_V

Multiplying by T^2 and re-arranging …

(\frac{\partial E }{\partial V})_T = -p +T(\frac{\partial p }{\partial T})_V = T^2(\frac{\partial}{\partial T}\frac{p}{T})_V
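A quick cross-check with the ideal gas, p = \frac{RT}{V}: the right-hand side then vanishes,

\left(\frac{\partial E }{\partial V}\right)_T = -\frac{RT}{V} + T \, \frac{R}{V} = 0

… which recovers the familiar fact that the internal energy of an ideal gas depends on temperature only.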

Again, noting that the order of differentiation does not matter, we can use this result to check whether the specific heat at constant volume – C_V = (\frac{\partial E }{\partial T})_V – depends on volume:

(\frac{\partial C_V}{\partial V})_T = \frac{\partial}{\partial V}[(\frac{\partial E }{\partial T})_V]_T = \frac{\partial}{\partial T}[(\frac{\partial E }{\partial V})_T]_V

But we know the last partial derivative already and insert the expression derived before – a function that is fully determined by the equation of state p(V,T):

(\frac{\partial C_V}{\partial V})_T= \frac{\partial}{\partial T}[(-p +T(\frac{\partial p }{\partial T})_V)]_V = -(\frac{\partial p}{\partial T})_V +  (\frac{\partial p}{\partial T})_V + T(\frac{\partial^2 p}{\partial T^2})_V = T(\frac{\partial^2 p}{\partial T^2})_V

So if the pressure depends, for example, only linearly on temperature, the second derivative with respect to T is zero and C_V does not depend on volume but only on temperature. The equation of state says something about specific heat.
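If you want to let a computer algebra system do this check, here is a minimal sketch (assuming sympy is available; the helper name is mine, and the second equation of state is just a made-up counter-example):

# Check (dC_V/dV)_T = T * d^2 p/dT^2 for a given equation of state p(T, v)
import sympy as sp

T, v, R, a, b = sp.symbols('T v R a b', positive=True)

def cv_volume_dependence(p):
    # (dC_V/dv)_T as derived above: T times the second T-derivative of p
    return sp.simplify(T * sp.diff(p, T, 2))

# Van der Waals: pressure is linear in T, so C_V cannot depend on volume
p_vdw = R*T/(v - b) - a/v**2
print(cv_volume_dependence(p_vdw))      # -> 0

# Made-up equation of state with a T^2 term: here C_V would depend on volume
p_made_up = R*T/v + a*T**2/v**2
print(cv_volume_dependence(p_made_up))  # -> 2*T*a/v**2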

The idealized Carnot process contains four distinct steps. In order to calculate efficiency for a certain machine and working fluid, you need to calculate the heat energies exchanged between machine and bath on each of these steps. Two steps are adiabatic – the machine is thermally insulated, thus no heat is exchanged. The other steps are isothermal, run at constant temperature – only these steps need to be considered to calculate the heat energies denoted Q_1 and Q_2:


Carnot process for an ideal gas: A-B: Isothermal expansion, B-C: Adiabatic expansion, C-D: isothermal compression, D-A: adiabatic compression. (Wikimedia, public domain, see link for details).

I use the First Law again and insert the result for (\frac{\partial E}{\partial V})_T which was obtained from the combination of both Laws – the goal is to express the heat energy as a function of pressure and specific heat:

\delta Q= dE + p(T,V)dV = (\frac{\partial E}{\partial T})_V dT + (\frac{\partial E}{\partial V})_T dV + p(T,V)dV
= C_V(T,V) dT + [-p +T(\frac{\partial p(T,V)}{\partial T})_V] dV + p(T,V)dV = C_V(T,V)dT + T(\frac{\partial p(T,V)}{\partial T})_V dV

Heat Q is not a function of the state defined by V and T – that’s why the inexact differential δQ is denoted by the Greek δ. The change in heat energy depends on how exactly you get from one state to another. But we know what the process should be in this case: It is isothermal, therefore dT is zero and the heat energy is obtained by integrating over volume only.

We need p as a function of V and T. The equation of state for an ideal gas says that pV is proportional to temperature. I am now considering a more general equation of state of the form …

p = f(V)T + g(V)

The Van der Waals equation of state takes into account that the particles in the gas interact with each other and that they have a finite volume. (Switching units, from total volume V [m³] to specific volume v [m³/kg], in order to use the specific gas constant R [kJ/kgK] rather than absolute numbers of particles, and to use the more common representation – so comparing to pv = RT):

p = \frac{RT}{v - b} - \frac{a}{v^2}

This equation also matches the general pattern.
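Matching terms with the general form p = f(V)T + g(V), the Van der Waals functions are:

f(v) = \frac{R}{v - b} \qquad \qquad g(v) = -\frac{a}{v^2}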

Van der Waals isotherms (curves of constant temperature) in the p-V plane: Depending on temperature, the functions show a more or less pronounced ‘wave’ with a maximum and a minimum, in contrast to the ideal-gas-like hyperbolas (p = RT/v) for high temperatures. (By Andrea insinga, Wikimedia, for details see link.)

In both cases pressure depends only linearly on temperature, and so (\frac{\partial C_V}{\partial V})_T is 0. Thus specific heat does not depend on volume, and I want to stress that this is a consequence of the fundamental Laws and the p(T,V) equation of state, not an arbitrary, additional assumption about this substance.

The isothermal heat energies are thus given by the following, integrating T(\frac{\partial p(T,V)}{\partial T})_V  = T f(V) over V:

Q_1 = T_1 \int_{V_A}^{V_B} f(V) dV
Q_2 = T_2 \int_{V_C}^{V_D} f(V) dV

(So if Q_1 is positive, Q_2 has to be negative: for the Van der Waals gas f(V) is positive, and C-D is a compression, i.e. V_D is smaller than V_C.)
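For the Van der Waals gas, with f(v) = \frac{R}{v - b}, these integrals can be done explicitly:

Q_1 = R T_1 \ln \frac{v_B - b}{v_A - b} \qquad \qquad Q_2 = R T_2 \ln \frac{v_D - b}{v_C - b}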

In the adiabatic processes δQ is zero, thus

C_V(T,V)dT = -T(\frac{\partial p(T,V)}{\partial T})_V dV = -T f(V) dV
\int \frac{C_V(T,V)}{T}dT = \int -f(V) dV

This is useful as we already know that specific heat only depends on temperature for the class of substances considered, so for each adiabatic process…

\int_{T_1}^{T_2} \frac{C_V(T)}{T}dT = \int_{V_B}^{V_C} -f(V) dV
\int_{T_2}^{T_1} \frac{C_V(T)}{T}dT = \int_{V_D}^{V_A} -f(V) dV

Adding these equations, the two integrals over temperature cancel and

\int_{V_B}^{V_C} f(V) dV = -\int_{V_D}^{V_A} f(V) dV

Carnot’s efficiency is work – the difference of the absolute values of the two heat energies – over the heat energy invested at higher temperature T_1 :

\eta = \frac {Q_1 - \left | Q_2 \right |}{Q_1} = 1 - \frac {\left | Q_2 \right |}{Q_1}
\eta = 1 - \frac {T_2}{T_1} \frac {\left | \int_{V_C}^{V_D} f(V) dV \right |}{\int_{V_A}^{V_B} f(V) dV}

The integral from A to B can be replaced by an integral over the alternative path A-D-C-B (the integral of f(V)dV over the closed path is zero, since f(V)dV is the total differential of a function of V only) and

\int_{A}^{B} = \int_{A}^{D} + \int_{D}^{C}+ \int_{C}^{B}

But the relation between the B-C and A-D integral derived from considering the adiabatic processes is equivalent to

-\int_{C}^{B} = \int_{B}^{C} = - \int_{D}^{A} = \int_{A}^{D}

Thus two terms in the alternative integral cancel and

\int_{A}^{B} = \int_{D}^{C}

… and finally the integrals in the efficiency cancel. What remains is Carnot’s efficiency:

\eta = \frac {T_1 - T_2}{T_1}
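To make this tangible, here is a small numerical sketch (parameter values are arbitrary illustration values, scipy is assumed to be available, and C_V is taken as constant – which, as shown above, is consistent with a pressure that is linear in T). It runs the cycle for a Van der Waals fluid and compares the resulting efficiency with 1 - T_2/T_1:

# Carnot cycle with a Van der Waals working fluid - numerical check of the efficiency
from scipy.integrate import quad

R, a, b = 1.0, 0.5, 0.1      # arbitrary illustration values
C_V = 1.5 * R                # assumed constant specific heat
T1, T2 = 500.0, 300.0        # hot and cold bath temperatures
vA, vB = 1.0, 2.0            # volumes at start/end of the hot isotherm

def f(v):                    # the f(V) of the general equation of state p = f(V)T + g(V)
    return R / (v - b)

# Adiabats: C_V dT/T = -f(v) dv  =>  C_V*ln(T) + R*ln(v - b) = const
def adiabatic_volume(v_start, T_start, T_end):
    return b + (v_start - b) * (T_start / T_end) ** (C_V / R)

vC = adiabatic_volume(vB, T1, T2)   # end of the adiabatic expansion B-C
vD = adiabatic_volume(vA, T1, T2)   # start of the adiabatic compression D-A

Q1 = T1 * quad(f, vA, vB)[0]        # heat taken in on the hot isotherm (positive)
Q2 = T2 * quad(f, vC, vD)[0]        # heat released on the cold isotherm (negative)

eta = (Q1 - abs(Q2)) / Q1
print(eta, 1 - T2 / T1)             # both ~0.4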

But what if the equation of state is more complex, so that the specific heat also depends on volume?

Yet another way to state the Second Law is to say that the efficiencies of all reversible processes have to be equal – and equal to Carnot’s efficiency. Otherwise you get into a thicket of contradictions (as I highlighted here). The authors of the VdW paper say they are able to prove this for infinitesimal cycles, which of course sounds plausible: As mentioned at the beginning, splitting up any reversible process into many processes that use only a tiny part of the co-ordinate space is the ‘standard textbook procedure’ (see e.g. Feynman’s lecture, especially figure 44-10).

But you can see it immediately, without calculating anything, by having a look at the process in a T-S diagram instead of the p-V representation. A process made up of two isothermal and two adiabatic processes is by definition (of entropy, see above) a rectangle, no matter what the equation of state of the working substance is. Heat energy and work can easily be read off as the rectangles between or below the straight lines:


Carnot process displayed in the entropy-temperature plane. No matter if the working fluid is an ideal gas following the pv = RT equation of state or if it is a Van der Waals gas that may show a ‘wave’ with a maximum and a minimum in a p-V diagram – in the T-S diagram all of this will look like rectangles and thus exhibit the maximum (Carnot’s) efficiency.

In the p-V diagram one might see curves of weird shape, but when calculating the relation between entropy and temperature, the weird dependencies of specific heat and pressure on V and T compensate for each other. They are related because of the differential relation implied by the Second Law.

Alien Energy

I am sure it protects us not only from lightning but also from alien attacks and EMP guns …

So I wrote about our lightning protection, installed together with our photovoltaic generator. Now our PV generator has been operational for 11 months and we have encountered one alien attack – albeit by beneficial aliens.

The Sunny Baseline

This is the electrical output power of our generator – oriented partly south-east, partly south-west – for some selected nearly perfectly cloudless days last year. Even in darkest winter you could fit the 2kW peak that a water cooker or heat pump needs under the curve at noon. We can heat hot water once a day on a really sunny day but not provide enough energy for room heating (monthly statistics here).

PV power over time: Sunny days 2015

Alien Spikes and an Extended Alien Attack

I was intrigued by very high and narrow spikes of output power immediately after clouds had passed by:

PV power over time, data points taken every few seconds.

There are two possible explanations: 1) an increase in solar cell efficiency as the panels cool off while shadowed, or 2) ‘focusing’ (refraction) of radiation by the edges of nearby clouds.

Such 4kW peaks lasting only a few seconds are not uncommon, but typically they do not show up in our standard logging, which comprises 5-minute averages.

There was one notable exception this February: Power surged to more than 4kW, which is significantly higher than the output on other sunny days in February. Actually, it was higher than the output on our best-ever sunny day, last May 11, and as high as the peaks on summer solstice (Aliens are green, of course):

PV power over time: Alien Energy on Feb 11, 2016

Temperature effect and/or ‘focusing’?

On the alien attack day it was cloudy and warmer in the night than on the sunny reference day, February 6. At about 11:30 the sun was breaking through the clouds, hitting rather cool panels:

PV power over time: February 2016 - Output Power and Ambient Temperature

On that day, the sun was lingering right at the edge of clouds for some time, and global radiation was likely higher than expected due to the focusing effect.

Global Radiation over time: February 2016

The jump in global radiation at 11:30 is clearly visible in our measurements. But in addition, the panels had been heated up before by the peak in solar radiation, and air temperature had risen, too. So the different effects cannot be disentangled easily.

Power drops by 0,44% of the rated power per °C increase in panel temperature. Our generator is rated at 4,77kW, so power decreases by about 21W per °C of panel temperature.

At 11:30 power was 1,3kW higher than on the normal reference day – the theoretical equivalent of a panel temperature decrease of 62°C. I think I can safely attribute the initial surge in output power to the unusual peak in global radiation only.
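In numbers (a throwaway back-of-the-envelope script; the figures are the ones quoted above):

rated_power_w = 4770               # 4,77 kW rated generator power
temp_coeff = 0.0044                # 0,44% output change per °C of panel temperature
watts_per_deg = rated_power_w * temp_coeff
print(watts_per_deg)               # ~21 W/°C

surplus_w = 1300                   # surplus at 11:30 compared to the reference day
print(surplus_w / watts_per_deg)   # ~62 °C equivalent panel temperature difference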

At 12:30 output power on the normal sunny day was about 300W lower than on the alien day. This can partly be attributed to the lower input radiation, and partly to a higher ambient temperature.

But panel temperature has a simple, linear relationship with ambient temperature only if input radiation changes slowly. The sun might be blocked for a very short period – shorter than our standard logging interval of 90s for radiation – and the surface of the panels cools off intermittently. It is an interesting optimization problem: With just the right combination of blocking period and sunny period, overall output could be maximized.

Re-visiting data from last hot August to add more dubious numbers

Panels’ performance was lower for higher ambient air temperatures …

PV power over time: August 2015 - Output Power and Ambient Temperature

… while global radiation over time was about the same. Actually the enveloping curve was the same, and there were even negative spikes at noon despite the better PV performance:

Global Radiation over time: August 2015

The difference in peak power was about 750W. The panel temperature difference to account for that would have to be about 36°C. This is three times the measured difference in ambient temperature of 39°C – 27°C = 12°C. Is this plausible?

PV planners use a worst-case panel temperature of 75°C – for worst-case hot days like August 12, 2015.

Normal Operating Cell Temperature of panels is about 46°C. Normal conditions are: 20°C of ambient air, 800W/m2 solar radiation, and free-standing panels. One panel has an area of about 1,61m2; our generator with 18 panels has 29m2, so 800W/m2 translates to 23kW. Since the efficiency of solar panels is about 16%, 23kW of input generates about 3,7kW output power – about the average of the peak values of the two days in August. Our panels are attached to the roof and not free-standing – which is expected to result in a temperature increase of 10°C.
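The same estimate as a few lines of arithmetic (all values as quoted above):

panel_area_m2 = 1.61
n_panels = 18
irradiance_w_m2 = 800          # 'normal conditions' irradiance
efficiency = 0.16

input_kw = panel_area_m2 * n_panels * irradiance_w_m2 / 1000.0
output_kw = input_kw * efficiency
print(round(input_kw, 1), round(output_kw, 1))   # ~23.2 kW in, ~3.7 kW out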

So we had been close to normal conditions at noon, radiation-wise, and if we had been able to crank ambient temperature down to 20°C in August, panel temperature would have been about 46°C + 10°C = 56°C.

I am boldly interpolating now, in order to estimate panel temperature on the ‘colder’ day in August:

Air Temperature | Panel Temperature | Comment
20°C            | 56°C              | Normal operating conditions, plus typical temperature increase for well-vented rooftop panels.
27°C            | 63°C              | August 1. Measured ambient temperature, solar cell temperature interpolated.
39°C            | 75°C              | August 12. Measured ambient temperature.
Panel temperature is an estimate for the worst case.

Under perfectly stable conditions panel temperature would have differed by 12°C, resulting in a difference of only ~ 250W (12°C * 21W/°C).
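A sketch of that bold interpolation (a simplistic model: panel temperature is assumed to track ambient air temperature with a constant offset of 36°C, the value implied by the table above; the 21 W/°C figure is the one derived earlier):

def panel_temp(air_temp_c, offset_c=36.0):
    # assumed constant offset between panel and ambient temperature
    return air_temp_c + offset_c

t_aug01 = panel_temp(27.0)     # ~63 °C
t_aug12 = panel_temp(39.0)     # ~75 °C
watts_per_deg = 21.0
print((t_aug12 - t_aug01) * watts_per_deg)   # ~250 W expected difference - far below the observed ~750 W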

Even considering higher panel temperatures on the hotter day, or a non-linear relationship between air temperature and panel temperature, will not easily give you the 35°C of temperature difference required to explain the observed difference of 750W.

I think we see aliens at work again:

At about 10:45 global radiation for the cooler day, August 1, starts to fluctuate – most likely even more wildly than we see with the 90s interval. Before 10:45, the difference in output power for the two days is actually more like 200-300W – so in line with my haphazard estimate for steady-state conditions.

Then at noon the ‘focusing’ effect could have kicked in, and panel surface temperature might have fluctuated between the 27°C air temperature minimum and the estimated 63°C. Both of these effects could result in the required additional increase of a few hundred watts.

Since ‘focusing’ is actually refraction by particles in the thinned-out edges of clouds, I wonder if the effect could also be caused by barely visible variations in the density of mist in the sky, as I remember the hot period in August 2015 as sweltering and a bit hazy, rather than partly cloudy.

I think it is likely that both beneficial effects – temperature and ‘focusing’ – will always be observed in unison. On February 11 I had the chance to see the effect of focusing only (or traces of an alien spaceship that just exited a worm-hole) for about half an hour.

Wormhole travel as envisioned by Les Bossinas for NASA.

Further reading:

On temperature dependence of PV output power – from an awesome resource on photovoltaics:

On the ‘focusing’ effect:

  • Can You Get More than 100% Solar Energy?
    Note especially this comment – describing refraction, and pointing out that it can ‘focus’ light that would otherwise have been scattered back into space. This commentator also proposes different mechanisms for short spikes in power and for the increase of power during extended periods (such as I observed on February 11).
  • Edge-of-Cloud Effect

Source for the 10°C higher temperature of rooftop panels versus free-standing ones: German link, p.3: Ambient air + 20°C versus air + 30°C

An Efficiency Greater Than 1?

No, my next project is not building a Perpetuum Mobile.

Sometimes I mull over definitions of performance indicators. It seems straightforward that the efficiency of a wood log or oil burner is smaller than 1: you will never be able to turn the full caloric value into heat, due to various losses and incomplete combustion.

Our solar panels have an ‘efficiency’ or power ratio of about 16,5%. So 16,5% of the incoming solar energy is converted to electrical energy, which does not seem a lot. However, that number is meaningless without adding economic context, as solar energy is free. Higher efficiency would allow for much smaller panels. If efficiency were only 1% but panels were incredibly cheap and I had ample roof space, I might not care though.

The coefficient of performance of a heat pump is 4-5, which sometimes leaves you with this weird feeling of using odd definitions: electrical power is ‘multiplied’ by a factor always greater than one. Is that based on crackpottery?

Our heat pump (5 connections: 2x heat source – brine; 3x heating water – hot water / heating water supply, joint return).

Actually, we are cheating here when considering the ‘input’ – in contrast to the way we view photovoltaic panels: If 1 kW of electrical power is magically converted to 4 kW of heating power, the remaining 3 kW are provided by a cold or lukewarm heat source. Since those are (economically) free, they don’t count. But you might still wonder why the number is so much higher than 1.

My favorite answer:

There is an absolute minimum temperature, and our typical refrigerators and heat pumps operate well above it.

The efficiency of thermodynamic machines is most often explained by starting with an ideal process using an ideal substance – a perfect gas as a refrigerant that runs in a closed circuit. (For more details see the pointers in the Further Reading section below.) The gas is expanded at a low temperature; this temperature stays constant as heat is transferred from the heat source to the gas. In order to ‘jump’ from the lower to the higher temperature, the gas is then compressed – by a compressor run on electrical power – without exchanging heat with the environment. At the higher temperature the gas is compressed further and releases heat. The heat released is the sum of the heat taken in at the lower temperature plus the electrical energy fed into the compressor – so there is no violation of energy conservation. This process repeats itself again and again, and with every cycle the same heat energy is released at the higher temperature.

In defining the coefficient of performance the energy from the heat source is omitted, in contrast to the electrical energy:

COP = \frac {\text{Heat released at higher temperature per cycle}}{\text{Electrical energy fed into the compressor per cycle}}

The coefficient of performance of the ideal heat pump is the inverse of the efficiency of the ideal engine – the same machine, running in reverse. The engine has an efficiency lower than 1, as expected. Just as the ambient energy fed into the heat pump is ‘free’, the related heat released by the engine to the environment is useless and thus not included in the engine’s ‘output’.

One of Austria’s last coal power plants – Kraftwerk Voitsberg, retired in 2006 (Florian Probst, Wikimedia). Thermodynamically, this is like a heat pump running in reverse. That’s why I don’t like it when a heat pump is said to ‘work like a refrigerator, just in reverse’ (hinting at: the useful heat provided by the heat pump is equivalent to the waste heat of the refrigerator). If you ran the cycle backwards, a heat pump would become a sort of steam power plant.

The calculation (see below) results in a simple expression, as the efficiency only depends on temperatures. Naming the higher temperature (heating water) T_1 and the temperature of the heat source (the ‘environment’, our water tank for example) T_2 …

COP = \frac {T_1}{T_1-T_2}

The important thing here is that temperatures have to be given as absolute values: 0°C is equal to 273,15 Kelvin, so for a typical heat pump and floor heating loops the numerator is about 308 K (35°C), whereas the denominator is the difference between the two temperature levels – 35°C and 0°C, so 35 K. Thus the theoretical COP is as high as 8,8!

Two silly examples:

  • If the heat pump operated close to absolute zero – say, trying to pump heat from 5 K to 40 K – the COP would only be
    40 / 35 = 1,14.
  • On the other hand, using the sun as a heat source (6000 K) the COP would be
    6035 / 35 = 172.
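The same arithmetic as a throwaway script (temperatures in kelvin; the function name is mine):

def carnot_cop(t_hot_k, t_cold_k):
    # Ideal (Carnot) COP of a heat pump: heat delivered at T_hot per unit of work
    return t_hot_k / (t_hot_k - t_cold_k)

print(carnot_cop(273.15 + 35, 273.15))   # floor heating from a 0 °C source: ~8.8
print(carnot_cop(40, 5))                 # near absolute zero: ~1.14
print(carnot_cop(6035, 6000))            # the sun as heat source: ~172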

So, as heat pump owners we are lucky to live in an environment rather hot compared to absolute zero, on a planet where temperatures don’t vary that much in different places, compared to how far away we are from absolute zero.


Further reading:

Richard Feynman often used unusual approaches and new perspectives when explaining the basics in his legendary Physics Lectures. He introduces (potential) energy at the very beginning of the course, drawing on Carnot’s argument, even before he defines force, acceleration, velocity etc. (!) In deriving the efficiency of an ideal thermodynamic engine many chapters later he pictures a funny machine made from rubber bands, but otherwise he follows the classical arguments:

Chapter 44 of Feynman’s Physics Lectures Vol 1, The Laws of Thermodynamics.

For an ideal gas the heat energies and mechanical energies are calculated for the four steps of Carnot’s ideal process – based on the Ideal Gas Law. The result is the much more universal efficiency given above. There can’t be any better machine, as combining such a hypothetical better engine with an ideal heat pump / refrigerator (the same type of machine running in reverse) would violate the Second Law of Thermodynamics – stated as a principle: Heat cannot, by itself, flow from a colder to a warmer body, nor can heat taken from a single bath be turned completely into mechanical energy with the rest of the system staying the same.


Pressure over Volume for Carnot’s process, when using the machine as an engine (running it counter-clockwise it describes a heat pump): A-B: Expansion at constant high temperature, B-C: Expansion without heat exchange (cooling), C-D: Compression at constant low temperature, D-A: Compression without heat exchange (gas heats up). (Image: Kara98, Wikimedia.)

Feynman stated several times in his lectures that he did not want to teach the history of physics, or he downplayed the importance of learning about the history of science a bit (though he seems to have been well versed in it – as e.g. his effort to follow Newton’s geometrical proof of Kepler’s Laws shows). For historical background on the evolution of Carnot’s ideas and his legacy, see the definitive resource on classical thermodynamics and its history – Peter Mander’s blog.

What had once puzzled me is why we accidentally latched onto such a universal law, using just the Ideal Gas Law. The reason is that the Gas Law already has the absolute temperature built in. Historically, it took quite a while until pressure, volume and temperature had been combined in a single equation – see Peter Mander’s excellent article on the historical background of this equation.

Having explained Carnot’s cycle and efficiency, every course in thermodynamics reveals a deeper explanation: The efficiency of an ideal engine could actually be used as a starting point for defining the new scale of temperature.

Temperature scale according to Kelvin (William Thomson)

Carnot engines with different efficiencies due to different lower temperatures. If one of the temperatures is declared the reference temperature, the other can be determined by / defined by the efficiency of the ideal machine (Image: Olivier Cleynen, Wikimedia.)

However, according to the following paper, Carnot did not rigorously prove that his ideal cycle would be the optimum one. But it can be done, applying variational principles – optimizing the process for maximum work done or maximum efficiency:

Carnot Theory: Derivation and Extension, paper by Liqiu Wang