Why a Carnot process using a Van der Waals gas – or other fluid with uncommon equation of state – also runs at Carnot’s efficiency.

Textbooks often refer to an ideal gas when introducing Carnot’s cycle – it’s easy to calculate heat energies and work in this case. This might suggest that not only must the engine be ‘ideal’ – reversible – but also the working fluid has to be ‘ideal’ in some sense? No, it does not, as explicitly shown in this paper: The Carnot cycle with the Van der Waals equation of state.

In this post I am considering a class of substances which is more general than the Van der Waals gas, and I come to the same conclusion. Unsurprisingly. You only need to imagine Carnot’s cycle in a temperature-entropy (T-S) diagram: The process is represented by a rectangle for both ideal and Van der Waals gas. Heat energies and work needed to calculate efficiency can be read off, and the – universal – maximum efficiency can be calculated without integrating over potentially wiggly pressure-volume curves.

But the fact that we can use the T-S diagram – that the concept of entropy makes sense at all – is a consequence of the Second Law of Thermodynamics. It also states that a Perpetuum Mobile of the Second Kind is not possible: You cannot build a machine that converts 100% of the heat energy in a temperature bath to mechanical energy. This statement sounds philosophical, but it puts constraints on the way real materials can behave, and I think these constraints on the relations between physical properties are stronger than one might intuitively expect. If you pick an equation of state – the pressure as a function of volume and temperature, like the wavy Van der Waals curve – the behavior of the specific heat is locked in. In a sense the functions describing the material’s properties have to conspire in just the right way to yield the simple rectangle in the T-S plane.

The efficiency of a perfectly reversible thermodynamic engine (converting heat to mechanical energy) has a maximum well below 100%. If the machine uses two temperature baths with constant temperatures $latex T_1$ and $latex T_2$, the heat energies $latex Q_1$ and $latex Q_2$ exchanged between machine and baths for an ideal *reversible* process are related by:

$latex \frac{Q_1}{T_1} + \frac{Q_2}{T_2} = 0$

(I wrote on the related proof by contradiction before – avoiding the notion of entropy at all costs.) This ideal process and this ideal efficiency could also be used to actually define the thermodynamic temperature (as it emerges from statistical considerations; I have followed Landau and Lifshitz’s arguments in this post on statistical mechanics and entropy).

Any thermodynamic process using any type of substance can be imagined as a combination of lots of Carnot engines operating between lots of temperature baths at different temperatures (see e.g. Feynman’s lecture). The area in the p-V diagram that is traced out in a cyclic process is split into infinitely many small Carnot processes. For each of them small heat energies are transferred. Summing up the contributions of all processes, only the loop at the edge remains and thus …

$latex \oint \frac{\delta Q}{T} = 0$

… which means that for a reversible process $latex \frac{\delta Q}{T}$ actually has to be the total differential of a function of the state … a function that is called entropy: $latex dS = \frac{\delta Q}{T}$. This argument used in thermodynamics textbooks is kind of a ‘reverse’ argument to the statistical one – which introduces ‘entropy first’ and ‘temperature second’.

What I need in the following derivations are the relations between differentials that represent a version of First and Second Law:

The First Law of Thermodynamics states that heat is a form of energy, so

$latex dE = \delta Q - p\,dV$

The minus sign is due to the fact that the system’s energy is decreased on increasing volume – the substance does work on its surroundings when it expands. (There might be other thermodynamic degrees of freedom, like the magnetization of a magnetic substance – so other pairs of variables than p and V.)

Inserting the definition of entropy S via its total differential $latex dS = \frac{\delta Q}{T}$ we obtain this relation …

$latex dE = T\,dS - p\,dV$

… from which lots of relations between thermodynamic properties follow!

I will derive one of them to show how strong the constraints are that the Second Law imposes on the physical properties of materials: When the so-called equation of state is given – the pressure as a function of volume and temperature, p(V,T) – then you also know something about the specific heat. For an ideal gas pV is simply a constant times temperature.

S is a function of the state, so picking independent variables V and T, entropy’s total differential is:

$latex dS = \left(\frac{\partial S}{\partial T}\right)_V dT + \left(\frac{\partial S}{\partial V}\right)_T dV$

On the other hand, from the definition of entropy / the combination of First and Second Law given above it follows that

$latex dS = \frac{1}{T}\left(\frac{\partial E}{\partial T}\right)_V dT + \frac{1}{T}\left(\left(\frac{\partial E}{\partial V}\right)_T + p\right) dV$

Comparing the coefficients of dT and dV, the partial derivatives of entropy with respect to volume and temperature can be expressed as functions of energy and pressure:

$latex \left(\frac{\partial S}{\partial T}\right)_V = \frac{1}{T}\left(\frac{\partial E}{\partial T}\right)_V \qquad \left(\frac{\partial S}{\partial V}\right)_T = \frac{1}{T}\left(\left(\frac{\partial E}{\partial V}\right)_T + p\right)$

The order of partial derivation does not matter, so differentiating each derivative of S once more with respect to the other variable yields:

$latex \frac{\partial}{\partial V}\left(\frac{1}{T}\left(\frac{\partial E}{\partial T}\right)_V\right) = \frac{\partial}{\partial T}\left(\frac{1}{T}\left(\left(\frac{\partial E}{\partial V}\right)_T + p\right)\right)$

What I actually want is a result for the specific heat – the energy you need to put in per kelvin to heat up a substance at constant volume, usually called $latex C_V = \left(\frac{\partial E}{\partial T}\right)_V$. I keep going, hoping that something like this derivative will show up. Carrying out the differentiations, the mixed second derivative $latex \frac{\partial^2 E}{\partial T \partial V}$ shows up on both sides of the equation, and these terms cancel each other. Collecting the remaining terms:

$latex 0 = -\frac{1}{T^2}\left(\left(\frac{\partial E}{\partial V}\right)_T + p\right) + \frac{1}{T}\left(\frac{\partial p}{\partial T}\right)_V$

Multiplying by $latex T^2$ and re-arranging …

$latex \left(\frac{\partial E}{\partial V}\right)_T = T\left(\frac{\partial p}{\partial T}\right)_V - p$
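This identity – $latex \left(\frac{\partial E}{\partial V}\right)_T = T\left(\frac{\partial p}{\partial T}\right)_V - p$ – can be sanity-checked symbolically. A small sketch using sympy (not part of the original derivation): for the ideal gas the right-hand side vanishes, so the energy cannot depend on volume; for the Van der Waals gas it reproduces the well-known $latex -\frac{a}{v}$ interaction term in the energy.

```python
# Sanity check of (dE/dV)_T = T (dp/dT)_V - p for two equations of state.
import sympy as sp

T, v, a, b, R = sp.symbols('T v a b R', positive=True)

# Ideal gas p = RT/v: the right-hand side vanishes,
# so the energy cannot depend on volume.
p_ideal = R * T / v
rhs_ideal = sp.simplify(T * sp.diff(p_ideal, T) - p_ideal)

# Van der Waals p = RT/(v-b) - a/v^2: the right-hand side is a/v^2,
# i.e. the volume derivative of the interaction energy term -a/v.
p_vdw = R * T / (v - b) - a / v**2
rhs_vdw = sp.simplify(T * sp.diff(p_vdw, T) - p_vdw)

print(rhs_ideal)  # 0
print(rhs_vdw)    # a/v**2
```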

Again noting that the order of derivations does not matter, we can use this result to check if the specific heat at constant volume – $latex C_V$ – depends on volume:

$latex \left(\frac{\partial C_V}{\partial V}\right)_T = \frac{\partial}{\partial V}\left(\frac{\partial E}{\partial T}\right) = \frac{\partial}{\partial T}\left(\frac{\partial E}{\partial V}\right)$

But we know the last partial derivative already and insert the expression derived before – a function that is fully determined by the equation of state p(V,T):

$latex \left(\frac{\partial C_V}{\partial V}\right)_T = \frac{\partial}{\partial T}\left(T\left(\frac{\partial p}{\partial T}\right)_V - p\right) = T\left(\frac{\partial^2 p}{\partial T^2}\right)_V$

So if the pressure depends e.g. only linearly on temperature, the second derivative with respect to T is zero and $latex C_V$ does not depend on volume but only on temperature. The equation of state says something about the specific heat.
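As a quick symbolic check (a sketch with sympy, treating the Van der Waals parameters a, b and the gas constant R as free symbols): the second temperature derivative of the Van der Waals pressure vanishes, so the volume dependence of the specific heat vanishes too.

```python
# C_V cannot depend on volume if T * d^2p/dT^2 vanishes.
import sympy as sp

T, v, a, b, R = sp.symbols('T v a b R', positive=True)

p = R * T / (v - b) - a / v**2       # Van der Waals equation of state
dCv_dv = sp.simplify(T * sp.diff(p, T, 2))

print(dCv_dv)  # 0 -> specific heat is a function of temperature only
```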

The idealized Carnot process consists of four distinct steps. In order to calculate the efficiency for a certain machine and working fluid, you need to calculate the heat energies exchanged between machine and bath in each of these steps. Two steps are adiabatic – the machine is thermally insulated, thus no heat is exchanged. The other two steps are isothermal, run at constant temperature – only these steps need to be considered to calculate the heat energies, denoted $latex Q_1$ and $latex Q_2$.

I am using the First Law again and insert the result for $latex \left(\frac{\partial E}{\partial V}\right)_T$ which was obtained from the combination of both Laws – the goal is to express heat energy as a function of pressure and specific heat:

$latex \delta Q = dE + p\,dV = C_V\,dT + \left(\left(\frac{\partial E}{\partial V}\right)_T + p\right) dV = C_V\,dT + T\left(\frac{\partial p}{\partial T}\right)_V dV$

Heat Q is not a function of the state defined by V and T – that’s why the incomplete differential δQ is denoted by the Greek δ. The change in heat energy depends on how exactly you get from one state to another. But we know what the process should be in this case: It is isothermal, therefore dT is zero and heat energy is obtained by integrating over volume only.

We need p as a function of V and T. The equation of state for an ideal gas says that pV is proportional to temperature. I am now considering a more general equation of state of the form …

$latex p(V,T) = T\,f(V) + g(V)$

… with arbitrary functions f and g of volume only.

The Van der Waals equation of state takes into account that particles in the gas interact with each other and that they have a finite volume. (Switching units, from capital volume V [m³] to small v [m³/kg], to use the gas constant R [kJ/kgK] rather than absolute numbers of particles, and to use the more common representation – so compare to $latex pv = RT$):

$latex \left(p + \frac{a}{v^2}\right)(v - b) = RT \quad\Leftrightarrow\quad p = \frac{RT}{v-b} - \frac{a}{v^2}$

This equation also matches the general pattern: the pressure is linear in temperature – one part proportional to T, one part depending on volume only.
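A short sympy sketch makes the pattern explicit by splitting the Van der Waals pressure into its temperature-proportional part and its temperature-independent part (the names f and g are just labels for those two parts):

```python
# Split the Van der Waals pressure into p = T*f(v) + g(v).
import sympy as sp

T, v, a, b, R = sp.symbols('T v a b R', positive=True)

p = R * T / (v - b) - a / v**2

f = sp.simplify(sp.diff(p, T))   # coefficient of T
g = sp.simplify(p - T * f)       # temperature-independent remainder

print(f)  # R/(v - b)
print(g)  # -a/v**2
```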

In both cases the pressure depends only linearly on temperature, and so $latex T\left(\frac{\partial^2 p}{\partial T^2}\right)_V$ is 0. Thus the specific heat does not depend on volume – and I want to stress that this is a consequence of the fundamental Laws and the p(T,V) equation of state, not an arbitrary, additional assumption about this substance. The isothermal heat energies are thus given by the following, integrating over V:

$latex Q_1 = T_1 \int_A^B \left(\frac{\partial p}{\partial T}\right)_V dV \qquad Q_2 = T_2 \int_C^D \left(\frac{\partial p}{\partial T}\right)_V dV$

(So if $latex Q_1$ is positive, $latex Q_2$ has to be negative.)
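For the Van der Waals gas the isothermal heat energy can be computed in closed form. A sketch with sympy (v_A and v_B are assumed start and end volumes of the hot isotherm):

```python
# Isothermal heat for a Van der Waals gas: Q1 = T1 * integral of (dp/dT) dv.
import sympy as sp

T1, v, a, b, R, vA, vB = sp.symbols('T1 v a b R v_A v_B', positive=True)

p = R * T1 / (v - b) - a / v**2
dpdT = sp.diff(p, T1)              # = R/(v - b); the parameter a drops out

antideriv = sp.integrate(dpdT, v)  # = R*log(v - b)
Q1 = T1 * (antideriv.subs(v, vB) - antideriv.subs(v, vA))

# Mathematically: Q1 = R*T1*log((v_B - b)/(v_A - b))
print(sp.simplify(Q1))
```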

In the adiabatic processes δQ is zero, thus

$latex 0 = C_V\,dT + T\left(\frac{\partial p}{\partial T}\right)_V dV \quad\Leftrightarrow\quad \frac{C_V}{T}\,dT = -\left(\frac{\partial p}{\partial T}\right)_V dV$

This is useful as we already know that the specific heat only depends on temperature for the class of substances considered, so for each adiabatic process …

$latex \int_{T_1}^{T_2} \frac{C_V(T)}{T}\,dT = -\int_B^C \left(\frac{\partial p}{\partial T}\right)_V dV \qquad \int_{T_2}^{T_1} \frac{C_V(T)}{T}\,dT = -\int_D^A \left(\frac{\partial p}{\partial T}\right)_V dV$

Adding these equations, the two integrals over temperature cancel and

$latex \int_B^C \left(\frac{\partial p}{\partial T}\right)_V dV = -\int_D^A \left(\frac{\partial p}{\partial T}\right)_V dV$
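For a Van der Waals gas with constant specific heat the adiabat condition integrates to the invariant $latex T\,(v-b)^{R/C_V} = const$. A quick numerical sketch (illustrative, air-like numbers for R and C_V and an assumed co-volume b, not a specific real gas) integrates the adiabat ODE and checks that the invariant stays constant:

```python
# Integrate the adiabat C_V dT = -T (dp/dT)_V dv numerically for a
# Van der Waals gas and verify the invariant T * (v - b)**(R/C_V).
# Note (dp/dT)_V = R/(v - b): the parameter a drops out of the adiabat.
from scipy.integrate import solve_ivp

R, cV, b = 0.287, 0.718, 1e-3   # illustrative air-like values, assumed b

def adiabat(v, y):
    T = y[0]
    return [-T * (R / (v - b)) / cV]   # dT/dv along the adiabat

sol = solve_ivp(adiabat, (0.1, 0.5), [300.0], rtol=1e-10, atol=1e-10)

invariant = sol.y[0] * (sol.t - b) ** (R / cV)
print(invariant[0], invariant[-1])   # identical up to integration error
```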

Carnot’s efficiency is work – the difference of the absolute values of the two heat energies – over the heat energy invested at the higher temperature $latex T_1$:

$latex \eta = \frac{|Q_1| - |Q_2|}{Q_1} = 1 - \frac{T_2 \int_D^C \left(\frac{\partial p}{\partial T}\right)_V dV}{T_1 \int_A^B \left(\frac{\partial p}{\partial T}\right)_V dV}$

The integral from A to B can be replaced by an integral over the alternative path A-D-C-B (as the integral over the closed path is zero for a reversible process) and

$latex \int_A^B \left(\frac{\partial p}{\partial T}\right)_V dV = \int_A^D \left(\frac{\partial p}{\partial T}\right)_V dV + \int_D^C \left(\frac{\partial p}{\partial T}\right)_V dV + \int_C^B \left(\frac{\partial p}{\partial T}\right)_V dV$

But the relation between the B-C and A-D integrals derived from considering the adiabatic processes is equivalent to

$latex \int_C^B \left(\frac{\partial p}{\partial T}\right)_V dV = -\int_A^D \left(\frac{\partial p}{\partial T}\right)_V dV$

Thus two terms in the alternative integral cancel and

$latex \int_A^B \left(\frac{\partial p}{\partial T}\right)_V dV = \int_D^C \left(\frac{\partial p}{\partial T}\right)_V dV$

… and finally the integrals in the efficiency cancel. What remains is Carnot’s efficiency:

$latex \eta = 1 - \frac{T_2}{T_1} = \frac{T_1 - T_2}{T_1}$
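To see the cancellation at work numerically, here is a small sketch of a complete cycle for a Van der Waals gas with constant specific heat – all numbers are arbitrary illustrative choices, not a specific real gas:

```python
# A full Carnot cycle A->B->C->D for a Van der Waals gas with constant C_V:
# the efficiency comes out as 1 - T2/T1 no matter how a, b, C_V are chosen.
import math

R, cV = 0.287, 0.718      # illustrative gas constant and specific heat
b = 1e-3                  # assumed co-volume; the parameter a drops out of Q
T1, T2 = 500.0, 300.0     # hot and cold bath temperatures
vA, vB = 0.05, 0.20       # freely chosen endpoints of the hot isotherm

k = R / cV                # exponent of the adiabat invariant T*(v-b)**k

# Follow the adiabats B->C and D->A to find the cold isotherm's endpoints:
vC = b + (vB - b) * (T1 / T2) ** (1 / k)
vD = b + (vA - b) * (T1 / T2) ** (1 / k)

Q1 = R * T1 * math.log((vB - b) / (vA - b))   # heat taken in at T1 (> 0)
Q2 = R * T2 * math.log((vD - b) / (vC - b))   # heat rejected at T2 (< 0)

eta = (Q1 + Q2) / Q1
print(eta, 1 - T2 / T1)   # both are approximately 0.4
```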

But what if the equation of state is more complex, so that the specific heat would also depend on volume?

Yet another way to state the Second Law is to say that the efficiencies of all reversible processes have to be equal – and equal to Carnot’s efficiency. Otherwise you get into a thicket of contradictions (as I highlighted here). The authors of the VdW paper say they are able to prove this for infinitesimal cycles, which of course sounds plausible: As mentioned at the beginning, splitting up any reversible process into many processes that use only a tiny part of the co-ordinate space is the ‘standard textbook procedure’ (see e.g. Feynman’s lecture, especially figure 44-10).

**But you could immediately see it without calculating anything by having a look at the process in a T-S diagram instead of the p-V representation.** A process made up of two isothermal and two adiabatic processes is **by definition** (of entropy, see above) **a rectangle, no matter what the equation of state of the working substance is.** Heat energy and work can easily be read off as the rectangles between or below the straight lines.
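Reading the quantities off the T-S rectangle amounts to trivial arithmetic – a sketch with arbitrary example numbers for the bath temperatures and the entropy difference between the two adiabats:

```python
# Carnot cycle in the T-S plane: a rectangle of height T1 - T2 and width dS.
T1, T2 = 500.0, 300.0   # bath temperatures (arbitrary example values)
dS = 2.0                # entropy difference between the two adiabats

Q1 = T1 * dS            # heat taken in: area under the top edge
Q2 = T2 * dS            # heat rejected: area under the bottom edge
W = (T1 - T2) * dS      # work: area of the rectangle itself

eta = W / Q1
print(eta, 1 - T2 / T1)   # 0.4 0.4
```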

In the p-V diagram one might see curves of weird shape, but when calculating the relation between entropy and temperature, the weird dependencies of specific heat and pressure on V and T compensate for each other. They are related because of the differential relation implied by the Second Law.

All laws of thermodynamics and fluid mechanics just keep frightening me … even after 30 years.

Some spastic reaction occurs on being confronted :-) (don’t take this too seriously)

:-D

By the way, incredibly off-topic, but I watched the documentary “Bombshell, the Hedy Lamarr Story” about the Austrian-born movie star and inventor. It was very good! You may know the story, but you should check out the film.

Thanks – I have read different accounts of her story, but it’s always good to know which documentary is worth watching! BTW, my own off-topic, in case you got notified about a spammy like on your comment: I reported that to WordPress support … unfortunately that’s the next evolution of spam likes, after the spam likes on posts have been fended off …

I don’t follow the integration and more complex maths, but I adore and cherish T-S diagrams, as well as the crazy relationships of real world refrigerants near the saturation curve of both T-S and P-h diagrams.

Thanks for the perspective. It’s nice to see someone write about this subject for the simple joy and interest of it.

Yeah, sometimes I need to ‘muse upon entropy’ ;-) Thanks a lot for your feedback!!

May I just propose an alternative point of view on the Carnot efficiency. The limit of performance it represents can also be obtained without any reference to a thermodynamic cycle, but just from the entropy balance of the system. Once you have defined what entropy is and its link with heat and temperature – the famous Clausius formula (your second equation) – it is quite easy to detail the entropy balance of a given system, e.g. a heat engine.

An amount of entropy is hence received from a hot heat source at high temperature T1 (noted Q1/T1>0 in your calculations), some positive amount of entropy is produced inside the system by various dissipation processes, and some entropy (Q2/T2<0) is rejected to a cold heat sink at low temperature T2. The sum of all these entropies has to be equal to zero (if not, entropy is "stored" inside during some transient process), and there is no way to obtain this equality without the existence of the rejected heat Q2<0 – hence the necessity to have at our disposal two different heat reservoirs at two different temperatures to make the system work.

Expressing the efficiency of this system (with the help of the First Law), you can very easily obtain the expression of the Carnot efficiency by supposing no entropy is produced inside the engine (or refrigerator or heat pump, the result is the same). Obtaining this expression of the efficiency can then be achieved with no need for any physical substance, such as a gas, but just by combining energy and entropy balances. That is why your result concerning the Van der Waals gas, although very interesting on its own, is not so surprising to me.

The need to refer to a specific equation of state of a gas, whatever its nature, was long considered a weakness of thermodynamics among physicists – see for example the statement of the laws of thermodynamics by Carathéodory (https://carnotcycle.wordpress.com/2014/01/06/caratheodory-the-forgotten-pioneer/), which doesn't require such a link with any kind of physical substance.

Thanks for your comment! As I tried to point out, the result is not surprising and would not have required this explicit calculation – when you consider entropy from the beginning. I referred to the rectangle in the T-S plane as a way of calculating the efficiency immediately, because it allows you to read off the work and the heat energies easily and without reference to a specific substance. And you can only use this rectangle if you have defined entropy before.

But yes, I did not state that Q/T is the flow of entropy. I covered it indirectly in my previous article on statistical mechanics – by considering how thermodynamic temperature can be derived from the changes of entropy and energy (from dS/dE): https://elkement.blog/2017/11/24/entropy-and-dimensions-following-landau-and-lifshitz/