# Can the Efficiency Be Greater Than One?

This is one of the perennial top search terms for this blog.

Anticlimactic answer: Yes, because input and output are determined not only by physics but also by economics.

Often readers search for the efficiency of a refrigerator. Its efficiency, the ratio of output to input energy, is greater than 1 because the ambient energy is free. The system's operators are interested in the money they pay the utility in relation to the resulting energy for cooling.

If you use the same thermodynamic machine either as a refrigerator or as a heat pump, efficiencies differ: The same input energy drives the compressor, but the relevant output energy is either the energy released to the ‘hot side’ at the condenser or the energy used for evaporating the refrigerant at the ‘cool side’:

The same machine / cycle is used as a heat pump for heating (left) or a refrigerator or AC for cooling (right). (This should just highlight the principles and does not include any hydraulic details, losses etc. related to detailed differences between refrigerators / ACs and heat pumps.)
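The energy balance of the cycle also explains why the two figures of merit always differ by exactly one: the condenser heat is the sum of the evaporator heat and the compressor work. A minimal sketch with made-up round numbers (not measured values):

```python
# Same cycle, two coefficients of performance (illustrative numbers only).
# Energy balance of the cycle: Q_hot = Q_cold + W_compressor
W_compressor = 1.0  # electrical input energy, kWh (assumed)
Q_cold = 3.0        # heat absorbed from the heat source at the evaporator, kWh (assumed)
Q_hot = Q_cold + W_compressor  # heat released at the condenser, kWh

cop_heating = Q_hot / W_compressor   # heat pump: useful output is the condenser heat
cop_cooling = Q_cold / W_compressor  # refrigerator / AC: useful output is the evaporator heat

print(cop_heating, cop_cooling)  # 4.0 3.0 - they differ by exactly 1
```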

For photovoltaic panels the definition has sort of the opposite bias: The sun does not send a bill – as PV installers say in their company’s slogan – but the free solar ambient energy is considered, and thus their efficiency is ‘only’ ~20%.

Half of our generator, now operational for three years: 10 panels, oriented south-east, 265W each, efficiency 16%. (The other 8 panels are oriented south-west).

When systems are combined, you can invent all kinds of efficiencies, depending on system boundaries. If PV panels are ‘included’ in a heat pump system (calculation-wise) the nominal electrical input energy becomes lower. If solar thermal collectors are added to any heating system, the electrical or fossil fuel input decreases.

Output energy may refer to energy measured directly at the outlet of the heat pump or boiler. But it might also mean the energy delivered to the heating circuits – after the thermal losses of a buffer tank have been accounted for. But not 100% of these losses are really lost, if the buffer tank is located in the house.

I’ve seen many different definitions in regulations and related software tools, and you find articles about how to interpret – or game – these guidelines to your advantage. Tools and standards also make arbitrary assumptions about storage tank losses, hysteresis parameters and the like – factors that might be critical for efficiency.

Then there are scaling effects: When the design heat loads of two houses differ by a factor of 2, and the smaller house would use a scaled down heat pump (hypothetically providing 50% output power at the same efficiency), the smaller system’s efficiency is likely to be a bit lower. Auxiliary consumers of electricity – like heating circuit pumps or control systems – will not be perfectly scalable. But the smaller the required output energy is, the better it can be aligned with solar energy usage and storage by a ‘smart’ system – and this might outweigh the additional energy needed for ‘smartness’. Perhaps intermittent negative market prices of electricity could be leveraged.

Definitions of efficiency are also culture-specific, tailored to an academic discipline or industry sector. There are different but remotely related concepts of rating how useful a source of energy is: Gibbs Free Energy is the maximum work a system can deliver, given that pressure and temperature do not change during the process considered – for example in a chemical reaction. On the other hand, Exergy is the useful ‘available’ energy ‘contained’ in a (part of a) system: Sources of energy and heat are rated; e.g. heat energy is only mechanically useful up to the maximum efficiency of an ideal Carnot process. Thus exergy depends on the temperature of the environment where waste heat ends up. The exergy efficiency of a Carnot process is 1, as waste heat is already factored in. On the other hand, the fuel used to drive the process may or may not be included and it may or may not be considered pure exergy – if it is, energy and exergy efficiency would be the same again. If heat energy flows from the hot to the cold part of a system in a heat exchanger, no energy is lost – but exergy is.
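The exergy rating of heat can be sketched in a few lines. The ‘Carnot factor’ function below and the example temperatures are my own illustration, not taken from any standard:

```python
def carnot_factor(T_hot, T_ambient):
    """Fraction of heat at T_hot that is exergy (convertible to work),
    for waste heat dumped into an environment at T_ambient; kelvin."""
    return 1.0 - T_ambient / T_hot

T_ambient = 293.15  # assumed ambient temperature, 20 degrees C

# High-temperature heat is mostly exergy, low-temperature heat mostly is not:
print(round(carnot_factor(773.15, T_ambient), 3))  # heat at 500 degrees C: 0.621
print(round(carnot_factor(313.15, T_ambient), 3))  # heat at 40 degrees C: 0.064
```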

You could also extend the system’s boundary spatially and on the time axis: Include investment costs or the cost of harm done to the environment. Consider the primary fuel / energy / exergy needed to ‘generate’ electricity: If a thermal power plant has 40% efficiency, then the heat pump’s efficiency needs to be at least 2.5 to ‘compensate’ for that.
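Extending the boundary to the power plant can be sketched as follows; the 40% and the COP of 4 are assumed round numbers:

```python
plant_efficiency = 0.40  # thermal power plant: fuel -> electricity (assumed)
cop_heat_pump = 4.0      # heat pump: electricity -> heat (assumed)

# Heat delivered per unit of primary fuel energy:
heat_per_fuel = plant_efficiency * cop_heat_pump
print(heat_per_fuel)  # 1.6 - still better than burning the fuel directly

# Break-even COP, delivering exactly as much heat as burning the fuel would:
print(round(1.0 / plant_efficiency, 3))  # 2.5
```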

In summary, ‘efficiency’ is the ratio of an output and an input energy, and the definitions may be rather arbitrary, as these energies are determined by a ‘sampling’ time, system boundaries, and additional ‘ratings’.

# Consequences of the Second Law of Thermodynamics

Why a Carnot process using a Van der Waals gas – or another fluid with an uncommon equation of state – also runs at Carnot’s efficiency.

Textbooks often refer to an ideal gas when introducing Carnot’s cycle – it’s easy to calculate heat energies and work in this case. Perhaps this might imply that not only must the engine be ‘ideal’ – reversible – but also the working fluid has to be ‘ideal’ in some sense? No, it does not, as explicitly shown in this paper: The Carnot cycle with the Van der Waals equation of state.

In this post I am considering a class of substances which is more general than the Van der Waals gas, and I come to the same conclusion. Unsurprisingly. You only need to imagine Carnot’s cycle in a temperature-entropy (T-S) diagram: The process is represented by a rectangle for both ideal and Van der Waals gas. Heat energies and work needed to calculate efficiency can be read off, and the – universal – maximum efficiency can be calculated without integrating over potentially wiggly pressure-volume curves.

But the fact that we can use the T-S diagram at all – that the concept of entropy makes sense – is a consequence of the Second Law of Thermodynamics. It also states that a Perpetuum Mobile of the Second Kind is not possible: You cannot build a machine that converts 100% of the heat energy in a temperature bath to mechanical energy. This statement sounds philosophical, but it puts constraints on the way real materials can behave, and I think these constraints on the relations between physical properties are stronger than one might intuitively expect. If you pick an equation of state – the pressure as a function of volume and temperature, like the wavy Van der Waals curve – the behavior of the specific heat is locked in. In a sense, the functions describing the material’s properties have to conspire in just the right way to yield the simple rectangle in the T-S plane.

The efficiency of a perfectly reversible thermodynamic engine (converting heat to mechanical energy) has a maximum well below 100%. If the machine uses two temperature baths with constant temperatures $T_1$ and $T_2$, the heat energies exchanged between machine and baths $Q_1$ and $Q_2$ for an ideal reversible process are related by:

$\frac{Q_1}{T_1} + \frac{Q_2}{T_2} = 0$

(I wrote about the related proof by contradiction before – avoiding the notion of entropy at all costs.) This ideal process and this ideal efficiency could also be used to actually define the thermodynamic temperature (as it emerges from statistical considerations; I have followed Landau and Lifshitz’s arguments in this post on statistical mechanics and entropy).

Any thermodynamic process using any type of substance can be imagined as being a combination of lots of Carnot engines operating between lots of temperature baths at different temperatures (see e.g. Feynman’s lecture). The area in the p-V diagram that is traced out in a cyclic process is split into infinitely many Carnot processes. For each process, small heat energies $\delta Q$ are transferred. Summing up the contributions of all processes, only the loop at the edge remains, and thus …

$\oint \frac{\delta Q}{T} = 0$

which means that for a reversible process $\frac{\delta Q}{T}$ actually has to be the total differential $dS$ of a function S … that is called entropy. This argument, used in thermodynamics textbooks, is kind of a ‘reverse’ argument to the statistical one – which introduces ‘entropy first’ and ‘temperature second’.

What I need in the following derivations are the relations between differentials that represent a version of the First and Second Law:

The First Law of Thermodynamics states that heat is a form of energy, so

$dE = \delta Q - pdV$

The minus sign is due to the fact that the system’s energy decreases when it does work on its surroundings by expanding. (There might be other thermodynamic degrees of freedom, like the magnetization of a magnetic substance – so other pairs of variables than p and V.)

Inserting the definition of entropy S as the total differential we obtain this relation …

$dS = \frac{dE + pdV}{T}$

… from which follow lots of relations between thermodynamic properties!

I will derive one of them to show how strong the constraints are that the Second Law imposes on the physical properties of materials: When the so-called equation of state is given – the pressure as a function of volume and temperature, p(V,T) – then you also know something about the specific heat. For an ideal gas, pV is simply a constant times temperature.

S is a function of the state, so, picking independent variables V and T, entropy’s total differential is:

$dS = (\frac{\partial S}{\partial T})_V dT + (\frac{\partial S}{\partial V})_T dV$

On the other hand, from the definition of entropy / the combination of 1st and 2nd Law given above it follows that

$dS = \frac{1}{T} \left \{ (\frac{\partial E }{\partial T})_V dT + \left [ (\frac{\partial E }{\partial V})_T + p \right ]dV \right \}$

Comparing the coefficients of dT and dV, the partial derivatives of entropy with respect to volume and temperature can be expressed as functions of energy and pressure. The order of partial differentiation does not matter:

$\left[\frac{\partial}{\partial V}\left(\frac{\partial S}{\partial T}\right)_V \right]_T = \left[\frac{\partial}{\partial T}\left(\frac{\partial S}{\partial V}\right)_T \right]_V$

Thus differentiating each derivative of S once more with respect to the other variable yields:

$[ \frac{\partial}{\partial V} \frac{1}{T} (\frac{\partial E }{\partial T})_V ]_T = [ \frac{\partial}{\partial T} \frac{1}{T} \left [ (\frac{\partial E }{\partial V})_T + p \right ] ]_V$

What I actually want is a result for the specific heat: $(\frac{\partial E }{\partial T})_V$ – the energy you need to put in per kelvin to heat up a substance at constant volume, usually called $C_V$. I keep going, hoping that something like this derivative will show up. The mixed derivative $\frac{1}{T} \frac{\partial^2 E}{\partial V \partial T}$ shows up on both sides of the equation, and these terms cancel each other. Collecting the remaining terms:

$0 = -\frac{1}{T^2} (\frac{\partial E }{\partial V})_T -\frac{1}{T^2} p + \frac{1}{T}(\frac{\partial p}{\partial T})_V$

Multiplying by $T^2$ and re-arranging …

$(\frac{\partial E }{\partial V})_T = -p +T(\frac{\partial p }{\partial T})_V = T^2(\frac{\partial}{\partial T}\frac{p}{T})_V$

Again, noting that the order of differentiation does not matter, we can use this result to check if the specific heat at constant volume – $C_V = (\frac{\partial E }{\partial T})_V$ – depends on volume:

$(\frac{\partial C_V}{\partial V})_T = \frac{\partial}{\partial V}[(\frac{\partial E }{\partial T})_V]_T = \frac{\partial}{\partial T}[(\frac{\partial E }{\partial V})_T]_V$

But we know the last partial derivative already and insert the expression derived before – a function that is fully determined by the equation of state p(V,T):

$(\frac{\partial C_V}{\partial V})_T= \frac{\partial}{\partial T}[(-p +T(\frac{\partial p }{\partial T})_V)]_V = -(\frac{\partial p}{\partial T})_V + (\frac{\partial p}{\partial T})_V + T(\frac{\partial^2 p}{\partial T^2})_V = T(\frac{\partial^2 p}{\partial T^2})_V$

So if the pressure depends only linearly on temperature, the second derivative with respect to T is zero and $C_V$ does not depend on volume but only on temperature. The equation of state says something about specific heat.
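This can be checked numerically with finite differences for a Van der Waals gas. The constants below are rough, water-vapor-like values I picked purely for illustration:

```python
# p(T, v) = R*T/(v - b) - a/v^2 is linear in T, so T * d^2p/dT^2 should vanish,
# and (dE/dV)_T = -p + T*(dp/dT)_V should reduce to a/v^2.
R, a, b = 0.4615, 1.70, 0.00169  # rough Van der Waals constants (assumed)

def p(T, v):
    return R * T / (v - b) - a / v**2

def dp_dT(T, v, h=1.0):
    # central first difference in T
    return (p(T + h, v) - p(T - h, v)) / (2 * h)

def d2p_dT2(T, v, h=1.0):
    # central second difference in T
    return (p(T + h, v) - 2 * p(T, v) + p(T - h, v)) / h**2

T, v = 500.0, 0.05
print(abs(T * d2p_dT2(T, v)) < 1e-6)                      # True: C_V cannot depend on v
print(abs(-p(T, v) + T * dp_dT(T, v) - a / v**2) < 1e-6)  # True: (dE/dV)_T = a/v^2
```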

The idealized Carnot process contains four distinct steps. In order to calculate efficiency for a certain machine and working fluid, you need to calculate the heat energies exchanged between machine and bath on each of these steps. Two steps are adiabatic – the machine is thermally insulated, thus no heat is exchanged. The other steps are isothermal, run at constant temperature – only these steps need to be considered to calculate the heat energies denoted $Q_1$ and $Q_2$:

Carnot process for an ideal gas: A-B: Isothermal expansion, B-C: Adiabatic expansion, C-D: isothermal compression, D-A: adiabatic compression. (Wikimedia, public domain, see link for details).

I am using the First Law again and insert the result for $(\frac{\partial E}{\partial V})_T$ which was obtained from the combination of both Laws – the goal is to express heat energy as a function of pressure and specific heat:

$\delta Q= dE + p(T,V)dV = (\frac{\partial E}{\partial T})_V dT + (\frac{\partial E}{\partial V})_T dV + p(T,V)dV$
$= C_V(T,V) dT + [-p +T(\frac{\partial p(T,V)}{\partial T})_V] dV + p(T,V)dV = C_V(T,V)dT + T(\frac{\partial p(T,V)}{\partial T})_V dV$

Heat Q is not a function of the state defined by V and T – that’s why the incomplete differential δQ is denoted by the Greek δ. The change in heat energy depends on how exactly you get from one state to another. But we know what the process should be in this case: It is isothermal, therefore dT is zero and heat energy is obtained by integrating over volume only.

We need p as a function of V and T. The equation of state for an ideal gas says that pV is proportional to temperature. I am now considering a more general equation of state of the form …

$p = f(V)T + g(V)$

The Van der Waals equation of state takes into account that particles in the gas interact with each other and that they have a finite volume. (I am switching units here, from total volume V [m³] to specific volume v [m³/kg], in order to use the specific gas constant R [kJ/kgK] rather than absolute numbers of particles, and to match the more common representation – so compare to $pv = RT$):

$p = \frac{RT}{v - b} - \frac{a}{v^2}$

This equation also matches the general pattern, with $f(v) = \frac{R}{v - b}$ and $g(v) = -\frac{a}{v^2}$.
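With a concrete f(v) the whole cycle can be computed in a few lines. The nitrogen-like constants, bath temperatures and volumes below are arbitrary choices, and I additionally assume a constant $C_V$ just to locate the adiabats – the efficiency must not depend on any of this:

```python
import math

# Carnot cycle for a Van der Waals gas: p = R*T/(v - b) - a/v^2, i.e. f(v) = R/(v - b)
R, a, b, C_V = 0.2968, 0.137, 0.00139, 0.743  # nitrogen-like values (assumed)
T1, T2 = 350.0, 280.0    # hot / cold bath temperature, K
v_A, v_B = 0.010, 0.020  # volumes bounding the isothermal expansion at T1 (assumed)

# Isothermal heat at T1: Q1 = T1 * integral of f(v) dv, here analytic:
Q1 = T1 * R * math.log((v_B - b) / (v_A - b))

# Adiabats: C_V * ln(T2/T1) = -R * ln((v_C - b)/(v_B - b))  ->  solve for v_C, v_D
ratio = (T1 / T2) ** (C_V / R)
v_C = b + (v_B - b) * ratio
v_D = b + (v_A - b) * ratio

# Isothermal heat at T2 (negative - heat is released to the cold bath):
Q2 = T2 * R * math.log((v_D - b) / (v_C - b))

eta = 1 - abs(Q2) / Q1
print(round(eta, 9), round(1 - T2 / T1, 9))  # identical: Carnot's efficiency 0.2
```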

Van der Waals isotherms (curves of constant temperature) in the p-V plane: Depending on temperature, the functions show a more or less pronounced ‘wave’ with a maximum and a minimum, in contrast to the ideal-gas-like hyperbolas (p = RT/v) for high temperatures. (By Andrea insinga, Wikimedia, for details see link.)

In both cases pressure depends only linearly on temperature, and so $(\frac{\partial C_V}{\partial V})_T$ is 0. Thus specific heat does not depend on volume, and I want to stress that this is a consequence of the fundamental Laws and the p(T,V) equation of state, not an arbitrary, additional assumption about this substance.

The isothermal heat energies are thus given by the following, integrating $T(\frac{\partial p(T,V)}{\partial T})_V = T f(V)$ over V:

$Q_1 = T_1 \int_{V_A}^{V_B} f(V) dV$
$Q_2 = T_2 \int_{V_C}^{V_D} f(V) dV$

(So if $Q_1$ is positive, $Q_2$ has to be negative.)

In the adiabatic processes δQ is zero, thus

$C_V(T,V)dT = -T(\frac{\partial p(T,V)}{\partial T})_V dV = -T f(V) dV$
$\int \frac{C_V(T,V)}{T}dT = \int -f(V) dV$

This is useful as we already know that specific heat only depends on temperature for the class of substances considered, so for each adiabatic process…

$\int_{T_1}^{T_2} \frac{C_V(T)}{T}dT = \int_{V_B}^{V_C} -f(V) dV$
$\int_{T_2}^{T_1} \frac{C_V(T)}{T}dT = \int_{V_D}^{V_A} -f(V) dV$

Adding these equations, the two integrals over temperature cancel and

$\int_{V_B}^{V_C} f(V) dV = -\int_{V_D}^{V_A} f(V) dV$

Carnot’s efficiency is work – the difference of the absolute values of the two heat energies – over the heat energy invested at higher temperature $T_1$:

$\eta = \frac {Q_1 - \left | Q_2 \right |}{Q_1} = 1 - \frac {\left | Q_2 \right |}{Q_1}$
$\eta = 1 - \frac {T_2}{T_1} \frac {\left | \int_{V_C}^{V_D} f(V) dV \right |}{\int_{V_A}^{V_B} f(V) dV}$

The integral from A to B can be replaced by an integral over the alternative path A-D-C-B (as the integral over the closed path is zero for a reversible process) and

$\int_{A}^{B} = \int_{A}^{D} + \int_{D}^{C}+ \int_{C}^{B}$

But the relation between the B-C and A-D integral derived from considering the adiabatic processes is equivalent to

$-\int_{C}^{B} = \int_{B}^{C} = - \int_{D}^{A} = \int_{A}^{D}$

Thus two terms in the alternative integral cancel and

$\int_{A}^{B} = \int_{D}^{C}$

… and finally the integrals in the efficiency cancel. What remains is Carnot’s efficiency:

$\eta = \frac {T_1 - T_2}{T_1}$

But what if the equation of state is more complex, so that the specific heat also depends on volume?

Yet another way to state the Second Law is to say that the efficiencies of all reversible processes have to be equal – equal to Carnot’s efficiency. Otherwise you get into a thicket of contradictions (as I highlighted here). The authors of the VdW paper say they are able to prove this for infinitesimal cycles, which sounds plausible: As mentioned at the beginning, splitting up any reversible process into many processes that use only a tiny part of the co-ordinate space is the ‘standard textbook procedure’ (see e.g. Feynman’s lecture, especially figure 44-10).

But you can see it immediately, without calculating anything, by looking at the process in a T-S diagram instead of the p-V representation. A process made up of two isothermal and two adiabatic steps is by definition (of entropy, see above) a rectangle, no matter what the equation of state of the working substance is. Heat energy and work can easily be read off as the rectangles between or below the straight lines:

Carnot process displayed in the entropy-temperature plane. No matter if the working fluid is an ideal gas following the pv = RT equation of state or if it is a Van der Waals gas that may show a ‘wave’ with a maximum and a minimum in a p-V diagram – in the T-S diagram all of this will look like rectangles and thus exhibit the maximum (Carnot’s) efficiency.

In the p-V diagram one might see curves of weird shape, but when calculating the relation between entropy and temperature, the weird dependencies of specific heat and pressure on V and T compensate for each other. They are related because of the differential relation implied by the 2nd Law.
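This compensation can be made concrete for the Van der Waals gas by integrating $dS = C_V \frac{dT}{T} + f(v) dv$ (again assuming a constant $C_V$ and nitrogen-like constants of my own choosing) and checking that the adiabatic corners of the cycle indeed line up vertically in the T-S plane:

```python
import math

# Entropy of a Van der Waals gas with constant C_V (simplifying assumption):
# S(T, v) = C_V * ln(T) + R * ln(v - b) + const
R, b, C_V = 0.2968, 0.00139, 0.743  # nitrogen-like values (assumed)
T1, T2 = 350.0, 280.0

def S(T, v):
    return C_V * math.log(T) + R * math.log(v - b)

# Corners: isothermal expansion A-B at T1, adiabats B-C and D-A (cf. the cycle above)
v_A, v_B = 0.010, 0.020
ratio = (T1 / T2) ** (C_V / R)
v_C, v_D = b + (v_B - b) * ratio, b + (v_A - b) * ratio

# The adiabatic edges keep S constant - vertical lines in the T-S plane,
# while the isothermal edges are horizontal: the cycle is a rectangle.
print(abs(S(T1, v_B) - S(T2, v_C)) < 1e-9)  # True
print(abs(S(T1, v_A) - S(T2, v_D)) < 1e-9)  # True
```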

# An Efficiency Greater Than 1?

No, my next project is not building a Perpetuum Mobile.

Sometimes I mull over definitions of performance indicators. It seems straightforward that the efficiency of a wood log or oil burner is smaller than 1: You will never be able to turn the full caloric value into useful heat, due to various losses and incomplete combustion.

Our solar panels have an ‘efficiency’ or power ratio of about 16.5%. So 16.5% of the solar energy is converted to electrical energy, which does not seem a lot. However, that number is meaningless without economic context, as solar energy is free. Higher efficiency would allow for much smaller panels. If efficiency were only 1% but panels were incredibly cheap and I had ample roof space, I might not care.

The coefficient of performance of a heat pump is 4–5, which sometimes leaves you with this weird feeling of using odd definitions. Electrical power is ‘multiplied’ by a factor always greater than one. Is that based on crackpottery?

Our heat pump. (5 connections: 2x heat source – brine; 3x heating water – hot water / heating water supply, joint return).

Actually, we are cheating here when considering the ‘input’ – in contrast to the way we view photovoltaic panels: If 1 kW of electrical power is magically converted to 4 kW of heating power, the remaining 3 kW are provided by a cold or lukewarm heat source. Since those are (economically) free, they don’t count. But you might still wonder, why the number is so much higher than 1.

There is an absolute minimum temperature, and our typical refrigerators and heat pumps operate well above it.

The efficiency of thermodynamic machines is most often explained by starting with an ideal process using an ideal substance – a perfect gas as refrigerant that runs in a closed circuit. (For more details see the pointers in the Further Reading section below.) The gas is expanded at a low temperature. This low temperature is constant, as heat is transferred from the heat source to the gas. In order to ‘jump’ from the lower to the higher temperature, the gas is compressed – by a compressor run on electrical power – without exchanging heat with the environment. At the higher temperature the gas is compressed further and releases heat. The heat released is the sum of the heat taken in at the lower temperature plus the electrical energy fed into the compressor – so there is no violation of energy conservation. This process repeats itself again and again, and with every cycle the same heat energy is released at the higher temperature.

In defining the coefficient of performance the energy from the heat source is omitted, in contrast to the electrical energy:

$COP = \frac {\text{Heat released at higher temperature per cycle}}{\text{Electrical energy fed into the compressor per cycle}}$

The efficiency of a heat pump is the inverse of the efficiency of an ideal engine – the same machine, running in reverse. The engine has an efficiency lower than 1 as expected. Just as the ambient energy fed into the heat pump is ‘free’, the related heat released by the engine to the environment is useless and thus not included in the engine’s ‘output’.

One of Austria’s last coal power plants – Kraftwerk Voitsberg, retired in 2006 (Florian Probst, Wikimedia). Thermodynamically, this is like ‘a heat pump running in reverse’. That’s why I don’t like it when a heat pump is said to ‘work like a refrigerator, just in reverse’ (hinting at: the useful heat provided by the heat pump is equivalent to the waste heat of the refrigerator). If you run the cycle backwards, a heat pump would become sort of a steam power plant.

The calculation (see below) results in a simple expression, as the efficiency only depends on temperatures. Naming the higher temperature (heating water) $T_1$ and the temperature of the heat source (‘environment’, our water tank for example) $T_2$ …

$COP = \frac {T_1}{T_1-T_2}$

The important thing here is that temperatures have to be expressed as absolute values: 0°C is equal to 273.15 K, so for a typical heat pump and floor loops the numerator is about 308 K (35°C), whereas the denominator is the difference between both temperature levels – 35°C and 0°C, so 35 K. Thus the theoretical COP is as high as 8.8!

Two silly examples:

• If the heat pump operated close to absolute zero – say, pumping heat from 5 K to 40 K – the COP would only be
40 / 35 = 1.14.
• On the other hand, using the sun as a heat source (6000 K), the COP would be
6035 / 35 = 172.
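All three numbers can be reproduced with the formula above – a minimal sketch, temperatures in kelvin:

```python
def carnot_cop_heating(T_hot, T_cold):
    """Ideal (Carnot) COP of a heat pump; temperatures in kelvin."""
    return T_hot / (T_hot - T_cold)

K = 273.15  # 0 degrees C in kelvin

print(round(carnot_cop_heating(35 + K, 0 + K), 1))   # floor heating from 0 degC: 8.8
print(round(carnot_cop_heating(40.0, 5.0), 2))       # near absolute zero: 1.14
print(round(carnot_cop_heating(6035.0, 6000.0), 1))  # sun as heat source: 172.4
```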

So, as heat pump owners we are lucky to live in an environment rather hot compared to absolute zero, on a planet where temperatures don’t vary that much in different places, compared to how far away we are from absolute zero.

__________________________

Richard Feynman often used unusual approaches and new perspectives when explaining the basics in his legendary Physics Lectures. He introduces (potential) energy at the very beginning of the course, drawing on Carnot’s argument, even before he defines force, acceleration, velocity etc. (!) In deriving the efficiency of an ideal thermodynamic engine many chapters later, he pictures a funny machine made from rubber bands, but otherwise he follows the classical arguments:

Chapter 44 of Feynman’s Physics Lectures Vol 1, The Laws of Thermodynamics.

For an ideal gas heat energies and mechanical energies are calculated for the four steps of Carnot’s ideal process – based on the Ideal Gas Law. The result is the much more universal efficiency given above. There can’t be any better machine as combining an ideal engine with an ideal heat pump / refrigerator (the same type of machine running in reverse) would violate the second law of thermodynamics – stated as a principle: Heat cannot flow from a colder to a warmer body and be turned into mechanical energy, with the remaining system staying the same.

Pressure over Volume for Carnot’s process, when using the machine as an engine (running it counter-clockwise it describes a heat pump): AB: Expansion at constant high temperature, BC: Expansion without heat exchange (cooling), CD: Compression at constant low temperature, DA: Compression without heat exchange (gas heats up). (Image: Kara98, Wikimedia).

Feynman stated several times in his lectures that he does not want to teach the history of physics, or downplayed the importance of learning about the history of science a bit (though it seems he was well versed in it – as e.g. his efforts to follow Newton’s geometrical proof of Kepler’s Laws showed). For the historical background of the evolution of Carnot’s ideas and his legacy, see the definitive resource on classical thermodynamics and its history – Peter Mander’s blog carnotcycle.wordpress.com:

What had once puzzled me is why we accidentally latched onto such a universal law, using just the Ideal Gas Law. The reason is that the Gas Law has the absolute temperature already included. Historically, it took quite a while until pressure, volume and temperature had been combined into a single equation – see Peter Mander’s excellent article on the historical background of this equation.

Having explained Carnot’s Cycle and efficiency, every course in thermodynamics reveals a deeper explanation: The efficiency of an ideal engine can actually be used as a starting point for defining a new scale of temperature.

Carnot engines with different efficiencies due to different lower temperatures. If one of the temperatures is declared the reference temperature, the other can be determined by / defined by the efficiency of the ideal machine (Image: Olivier Cleynen, Wikimedia.)

However, according to the following paper, Carnot did not rigorously prove that his ideal cycle would be the optimum one. But it can be done, applying variational principles – optimizing the process for maximum work done or maximum efficiency:

Carnot Theory: Derivation and Extension, paper by Liqiu Wang