Ice Storage Hierarchy of Needs

Data Kraken – the tentacled, tangled piece of software for data analysis – has a secret theoretical sibling, an older one: Before we built our heat source from a cellar, I developed numerical simulations of the future heat pump system. Today this simulation tool comprises, among other things, a model of our control system, real-life weather data, energy balances of all storage tanks, and a solution of the heat equation for the ground surrounding the water/ice tank.

I can model the change of the tank temperature and ‘peak ice’ in a heating season. But the point of these simulations is rather to find out to which parameters the system’s performance is particularly sensitive: In a worst-case scenario, will the storage tank be large enough?

A seemingly fascinating aspect was how peak ice ‘reacts’ to input parameters: It is quite sensitive to the properties of the ground and of the solar/air collector. If you make either the ground or the collector just ‘a bit worse’, ice seems to grow out of proportion. Taking a step back, I realized that I could have come to that conclusion using simple energy accounting instead of differential equations – once I had long-term data for the average energy harvesting power of the collector and the ground. Caveat: The simple calculation only works if these estimates are reliable for a chosen system – and this depends e.g. on hydraulic design, control logic, the shape of the tank, and the heat transfer properties of ground and collector.

For the operation of the combined tank+collector source, the critical months are the ice months Dec/Jan/Feb, when air temperature does not allow harvesting all energy from air. Before and after that period, the solar/air collector is nearly the only source anyway. As I have emphasized on this blog again and again: even during the ice months, the collector is still the main source and delivers most of the ambient energy the heat pump needs (if properly sized) in a typical winter. The rest has to come from energy stored in the ground surrounding the tank or from freezing water.

I am finally succumbing to trends of edutainment and storytelling in science communications – here is an infographic:

(Add analogies to psychology here.)

Using some typical numbers, I illustrate four scenarios in the figure below, for a system with these parameters:

• A cuboid tank of about 23 m3
• Required ambient energy for the three ice months: ~7000 kWh (about 9330 kWh of heating energy at a performance factor of 4)
• ‘Standard’ scenario: The collector delivers 75% of the ambient energy, ground delivers about 18%.
• ‘Worse’ scenarios: Either collector or ground energy (or both) is reduced by 25% compared to the standard.

Contributions of the three sources add up to the total ambient energy needed – this is yet another way of combining different energies in one balance.

Ambient energy needed by the heat pump in Dec+Jan+Feb, as delivered by the three different sources. Latent ‘ice’ energy is also translated to the percentage of water in the tank that would be frozen.

Neither collector nor ground energy changes much relative to the baseline. But latent energy has to fill the gap: Since the total collector energy is much higher than the total latent energy content of the tank, even a modest widening of the gap is large in relation to the base ice energy.

If collector and ground both ‘underdelivered’ by 25%, the tank in this scenario would be frozen completely, instead of only 23%.
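The accounting above can be sketched in a few lines. The source shares and the 7000 kWh are the scenario numbers from the list above; the latent heat of fusion of water (~334 kJ/kg) and a density of 1 kg per liter are the only added constants:

```python
# Back-of-the-envelope energy balance for the ice months (Dec-Feb).
# Scenario numbers are taken from the text; latent heat of fusion
# of water (334 kJ/kg) is the only added physical constant.

LATENT_HEAT = 334e3 / 3.6e6   # latent heat of fusion, kWh per kg
TANK_KG = 23_000              # ~23 m3 of water, ~1 kg per liter
AMBIENT_KWH = 7000            # ambient energy needed in Dec+Jan+Feb

def frozen_fraction(collector_share, ground_share):
    """Ice energy is whatever collector and ground do not deliver."""
    ice_kwh = AMBIENT_KWH * (1 - collector_share - ground_share)
    return ice_kwh / (TANK_KG * LATENT_HEAT)

# 'Standard' scenario: collector delivers 75%, ground about 18%
print(f"standard: {frozen_fraction(0.75, 0.18):.0%} of tank frozen")
# Both collector and ground 'underdeliver' by 25%
print(f"worse:    {frozen_fraction(0.75 * 0.75, 0.18 * 0.75):.0%} of tank frozen")
```

The standard scenario reproduces the ~23% frozen tank quoted above, and cutting both sources by 25% indeed freezes nearly the whole tank.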

The ice energy is just the peak of the total ambient energy iceberg.

You could call this system an air-geothermal-ice heat pump then!

____________________________

Continued: Here are some details on simulations.

On Photovoltaic Generators and Scattering Cross Sections

Subtitle: Dimensional Analysis again.

Our photovoltaic generator has about 5 kW rated ‘peak’ power – 18 panels with 265W each.

South-east oriented part of our generator – 10 panels. The remaining 8 are oriented south-west.

Peak output power is obtained under so-called standard test conditions – 1 kWp (kilowatt peak) is equivalent to:

• a panel temperature of 25°C (as efficiency depends on temperature)
• an incident angle of sunlight relative to zenith of about 48° – equivalent to an air mass of 1,5. This determines the spectrum of the electromagnetic radiation.
• an irradiance of solar energy of 1kW per square meter.

Simulated spectra for different air masses (Wikimedia, User Solar Gate). For AM 1 the path of sunlight is shortest and thus absorption is lowest.

The last condition can be rephrased as: We get 1 kW output per kW/m2 of input. 1 kWp is thus defined as:

1 kWp = 1 kW / (1 kW/m2)

Canceling kW, you end up with 1 kWp being equivalent to an area of 1 m2.

Why is this a useful unit?

Solar radiation generates electron-hole pairs in solar cells, operated as photodiodes in reverse bias. Only if the incoming photon has just the right energy is solar energy used efficiently. If the photon is not energetic enough – too ‘red’ – it is lost and converted to heat. If the photon is too energetic – too ‘blue’ or ultraviolet – it generates electrical charges, but the greater part of its energy is wasted, as the probability of two photons hitting at the same time is low. Thus commercial solar panels have an efficiency of less than 20% today. (This does not yet say anything about economics, as the total incoming energy is ‘free’.)

The less efficient solar panels are, the more of them you need to obtain a certain target output power. A perfect generator would deliver 1 kW output with a size of 1 m2 at standard test conditions. The kWp rating is equivalent to the area of an ideal generator that would generate the same output power, and it helps with evaluating if your rooftop area is large enough.

Our 4,77 kW generator uses 18 panels of about 1,61 m2 each – so 29 m2 in total. The panels’ efficiency is then about 4,77 / 29 = 16,4% – a number you can also find in the datasheet.
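This relation between the kWp rating, area, and efficiency can be written down directly, using the datasheet numbers quoted above (decimal commas switched to dots for the code):

```python
# The kWp rating as an 'equivalent area': at standard test conditions
# (1 kW/m2 irradiance) an ideal 1 m2 panel would deliver 1 kW.

STC_IRRADIANCE = 1.0      # kW per m2 at standard test conditions

rated_kw = 4.77           # 18 panels x 265 W
panel_area_m2 = 1.61      # area of one panel
n_panels = 18

total_area = n_panels * panel_area_m2      # ~29 m2 of real panels
ideal_area = rated_kw / STC_IRRADIANCE     # the kWp 'is' an area: 4.77 m2
efficiency = ideal_area / total_area       # ratio of ideal to real area

print(f"total area: {total_area:.0f} m2, efficiency: {efficiency:.1%}")
```

The efficiency comes out at the ~16% quoted in the datasheet.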

There is no comparable rating for solar thermal collectors, so I wondered why the unit has been defined in this way. Speculating wildly: Physicists working on solar cells usually have a background in solid-state physics, and the design of the kWp rating resembles a familiar concept: the scattering cross-section.

An atom can be modeled as a little oscillator, driven by the incident electromagnetic energy. It re-radiates absorbed energy in all directions. Although this can be fully understood only in quantum mechanical terms, simple classical models are successful in explaining some macroscopic parameters, like the index of refraction. The scattering strength of an atom is expressed as:

[ Power scattered ] / [ Incident power of the beam / m2 ]

… the same sort of ratio as discussed above! Power cancels out and the result is an area, imagined as a ‘cross-section’. The atom acts as if it were an opaque disk of a certain area that ‘cuts out’ a respective part of the incident beam and re-radiates it.

The same concept is used for describing interactions between all kinds of particles (not only photons) – the scattering cross section determines the probability that an interaction will occur:

Particles’ scattering strengths are represented by red disks (area = cross section). The probability of a scattering event going to happen is equal to the ratio of the sum of all red disk areas and the total (blue+red) area. (Wikimedia, User FerdiBf)

First Year of Rooftop Solar Power and Heat Pump: Re-Visiting Economics

After I presented details for selected days, I am going to review overall performance in the first year. From June 2015 to May 2016 …

• … we needed 6.600 kWh of electrical energy in total.
• The heat pump consumed about 3.600 kWh of that …
• … in order to ‘pump it up to’ 16.800 kWh of heating energy (incl. hot tap water heating). This was a mild season!
• The remaining 3.000kWh were used by household and office appliances, control, and circulation pumps.

(Disclaimer: I am from Austria –> decimal commas and dots as thousands separators 🙂)

The photovoltaic generator …

• … harvested about 5.600kWh / year – not too bad for our 4,8kW system with panels oriented partly south-east and partly south-west.
• 2.000 kWh of that were used directly and the rest was fed into the grid.
• So 30% of our consumption was provided directly by the PV generator (self-sufficiency quota) and
• 35% of PV energy produced was utilized immediately (self-consumption quota).
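The two quotas follow directly from the three yearly totals above (thousands separators dropped for the code):

```python
# Self-sufficiency vs. self-consumption, from the yearly totals above.

consumed_total = 6600   # kWh used in the house per year
pv_total = 5600         # kWh harvested by the PV generator
pv_direct = 2000        # kWh of PV energy used immediately

# share of our consumption covered directly by PV:
self_sufficiency = pv_direct / consumed_total
# share of PV production used immediately:
self_consumption = pv_direct / pv_total

print(f"self-sufficiency: {self_sufficiency:.0%}")   # about 30%
print(f"self-consumption: {self_consumption:.0%}")   # about 35-36%
```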

Monthly energy balances show the striking difference between summer and winter: In summer the small amount of energy needed to heat hot water can easily be provided by solar power. But in winter only a fraction of the energy needed can be harvested, even on perfectly sunny days.

Figures below show…

• … the total energy consumed in the house as the sum of the energy for the heat pump and the rest used by appliances …
• … and as the sum of energy consumed immediately and the rest provided by the utility.
• The total energy ‘generated’ by the solar panels, as a sum of the energy consumed directly (same aqua bar as in the sum of consumption) and the rest fed into the grid.

In June we needed only 300 kWh (10 kWh per day). The PV total output was more than 700 kWh, and 200 kWh of that were directly delivered by the PV system – so the PV generator covered 65% of our consumption. It would be rather easy to become autonomous by using a small (<10 kWh) battery and ‘shifting’ the missing 3,3 kWh per day from sunny to dark hours.

But in January we needed 1100kWh and PV provided less than 200kWh in total. So a battery would not help as there is no energy left to be ‘shifted’.

Daily PV energy balances show that this is true for every single day in January:

We typically harvest less than 10 kWh per day, but we need more than 30 kWh. On the coldest days in January, the heat pump needed about 33 kWh – thus heating energy was about 130 kWh:

Our house’s heat consumption is typical for a well-renovated old building. If we re-built our house from scratch according to low-energy standards, we might need only 50-60% of that energy at best. Then the heat pump’s input energy could be cut in half (violet bar). But even then, daily total energy consumption would exceed PV production.

Economics

I have covered the economics of the system without battery here, and our system has lived up to expectations: Profits were € 575 – the sum of energy sold at market price (€ 0,06 / kWh) and the € 0,18 / kWh we did not have to pay for power consumed directly.

In Austria, turn-key PV systems (without batteries) cost about € 2.000 per kW rated power – so we earned about 6% of the costs per year. Not bad – given political discussions about negative interest rates. (All numbers are market prices; no subsidies included.)

But it is also interesting to compare profits to heating costs: In this season, the electrical energy needed for the heat pump translates to € 650. So our profits from the PV generator nearly amount to the total heating costs.

Economics of batteries

Last year’s assessment of the economics of a system with battery is still valid: We could increase self-sufficiency from 30% to 55% using a battery, ‘shifting’ an additional 2.000 kWh to the dark hours. This would result in additional profits of € 240 per year.

If a battery has a lifetime of 20 years (an optimistic estimate!), it must not cost more than about € 5.000 to ever pay itself off. The prices I have seen in quotes so far are higher.
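The break-even price can be sketched with the tariffs and the 2.000 shifted kWh from above; the 20-year lifetime is the optimistic assumption stated in the text:

```python
# Break-even price of a battery: every shifted kWh is bought at the
# feed-in price forgone and saves the grid price.

shifted_kwh = 2000       # kWh per year moved from feed-in to self-use
price_grid = 0.18        # EUR/kWh you avoid buying
price_feed_in = 0.06     # EUR/kWh you no longer earn
lifetime_years = 20      # optimistic battery lifetime (assumption)

profit_per_year = shifted_kwh * (price_grid - price_feed_in)   # ~EUR 240
break_even = profit_per_year * lifetime_years                  # ~EUR 4800

print(f"EUR {profit_per_year:.0f} per year -> "
      f"battery must cost less than EUR {break_even:.0f}")
```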

Off-grid living and autonomy

Energy autonomy might be valued more highly than economic profits. Some things to consider:

Planning a true off-grid system means planning for a few days in a row without sunshine. Increasing the size of the battery would not help: The larger the battery, the larger the losses – and in winter the battery would never be full. It is hard to store thermal energy for another season, but it is even harder to store electrical energy. Theoretically, the area of panels could be massively oversized (by a large factor – not a small investment), but then even more surplus has to be ‘wasted’ in summer.

The system has to provide enough energy per day and the required peak load at every moment (see the spikes in the previous post), but power also needs to be distributed over the 3 phases of electrical power in the right proportion: In Austria, energy meters calculate a sum over the 3 phases, so a system might seem ‘autonomous’ when connected to the grid although it would not be able to operate off-grid. Example: The PV generator produces 1kW per phase = 3kW in total, while 2kW are used by a water cooker on phase 1. The meter says you feed 1kW into the grid, but technically you draw 1kW extra from the grid for the water cooker and feed in 1kW on each of phases 2 and 3 – a net surplus of 1kW. Disconnected from the grid, the water cooker would not work, as 1kW is missing.
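A minimal sketch of that water-cooker example, contrasting what a summing meter reports with the actual per-phase balance:

```python
# Why a summing meter hides per-phase deficits. All values in kW.

production = [1.0, 1.0, 1.0]    # PV inverter feeding 1 kW on each phase
consumption = [2.0, 0.0, 0.0]   # 2 kW water cooker on phase 1

# What the (summing) meter reports:
net_total = sum(production) - sum(consumption)

# What actually happens on each phase:
per_phase = [p - c for p, c in zip(production, consumption)]
deficit_phases = [i + 1 for i, net in enumerate(per_phase) if net < 0]

print(f"meter says: {net_total:+.0f} kW surplus")       # +1 kW
print(f"phases drawing from the grid: {deficit_phases}")  # phase 1
```

Off-grid, phase 1 would be short 1 kW even though the meter shows an overall surplus.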

A battery does not provide off-grid capabilities automatically, nor do PV panels provide backup power when the sun is shining but the grid is down: During a power outage the PV system’s inverter has to turn off the whole system – otherwise people working on the power lines outside could be hurt by the power fed into the grid. True backup systems need to disconnect from the power grid safely first. Backup capabilities need to be compliant with local safety regulations and come with additional (potentially clunky / expensive) gadgets.

Photovoltaic Generator and Heat Pump: Daily Power Generation and Consumption

You can generate electrical power at home, but you cannot manufacture your own natural gas, oil, or wood. (I exempt the minority of people who own forests.) This is often an argument for the combination of heat pump and photovoltaic generator.

Last year I blogged in detail about economics of solar power and batteries and on typical power consumption and usage patterns – and my obsession with tracking down every sucker for electrical energy. Bottom line: Despite related tinkering with control and my own ‘user behaviour’ it is hard to raise self-consumption and self-sufficiency above statistical averages for homes without heat pumps.

In this post I will focus on load profiles and power generation during several selected days to illustrate these points, comparing…

• … electrical power provided by the PV generator (logged at the Fronius Symo inverter),
• … input power needed by the heat pump (logged with an energy meter connected to our control unit),
• … the power balance provided by the smart meter: power fed into the grid is counted as positive. (This meter is installed directly behind the utility’s meter.)

A typical, non-modulating brine-water heat pump always operates at full rated power when it is on: We have a 7kW heat pump – 7kW is about the design heat load of the building, a worst-case estimate for the coldest day in years. On the coldest day last winter, the heat pump was on 75% of the time.

Given a typical performance factor of 4 (kWh/kWh), the heat pump needs 1/4 of its rated power as input. Thus the PV generator needs to provide about 1-2 kW when the heat pump is on. The rated power of our 18 panels is about 5kW – this is the output under optimum conditions.
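Putting rated power, performance factor, and the coldest day's 75% duty cycle together gives a quick cross-check of the daily figures (the duty cycle is the number quoted above, not a measured profile):

```python
# A non-modulating heat pump runs at rated power whenever it is on,
# so daily input energy = rated power / performance factor x hours on.

rated_kw = 7.0               # rated power ~ design heat load
performance_factor = 4.0     # kWh heat per kWh electrical input
duty_cycle = 0.75            # on 75% of the time on the coldest day

input_kw = rated_kw / performance_factor            # ~1.75 kW while running
input_kwh_day = input_kw * 24 * duty_cycle          # ~31.5 kWh per day
heating_kwh_day = input_kwh_day * performance_factor

print(f"input while running: {input_kw:.2f} kW")
print(f"coldest day: {input_kwh_day:.0f} kWh in, {heating_kwh_day:.0f} kWh heat")
```

This is roughly consistent with the ~33 kWh input and ~130 kWh heating energy quoted for the coldest January days.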

Best result near winter solstice

If it is perfectly sunny in winter, the generator can produce enough energy to power the heat pump between 10:00 and 14:00 in the best case.

But such cloudless days are rare, and in the cold and long nights considerable electrical energy is needed, too.

Too much energy in summer

On a perfect summer day hot water could even be heated twice a day by solar power:

These peaks look more impressive than they are when compared to the base load: For hot water, the heat pump needs only 1-2kWh per day, compared to 10-11kWh total consumption.

Harvesting energy in spring

On a sunny day in spring the PV output is higher than in summer due to lower ambient temperatures. As we still need space heating energy this energy can also be utilized better:

The heat pump’s input power is similar to that of a water heater or an electric stove. At noon on a perfect day, both the heat pump and one such appliance could be run on solar power alone.

The typical day: Bad timing

On typical days clouds pass and power output changes quickly. This is an example of a day when sunshine and hot water cycle did not overlap much:

At noon the negative peak (power consumption, blue) was about 3,5kW. Obviously the craving for coffee or tea was stronger than the obsession with energy efficiency. Even the smartest control system would not be able to predict such peaks in both solar radiation and erratic user behavior. Therefore I am also a bit sceptical when it comes to triggering the heat pump’s heating cycle by a signal from the PV generator, based on current and ‘expected’ sunshine and weather data from internet services (unless you track individual clouds).

Alien Energy

I am sure it protects us not only from lightning but also from alien attacks and EMP guns …

So I wrote about our lightning protection, installed together with our photovoltaic generator. Now our PV generator has been operational for 11 months, and we have encountered one alien attack – albeit by beneficial aliens.

The Sunny Baseline

This is the electrical output power of our generator – oriented partly south-east, partly south-west – for some selected nearly perfectly cloudless days last year. Even in darkest winter you could fit the 2kW peak that a water cooker or heat pump needs under the curve at noon. We can heat hot water once a day on a really sunny day but not provide enough energy for room heating (monthly statistics here).

Alien Spikes and an Extended Alien Attack

I was intrigued by very high and narrow spikes of output power immediately after clouds had passed by:

There are two possible explanations: 1) Increase in solar cell efficiency as the panels cool off while shadowed or 2) ‘focusing’ (refraction) of radiation by the edges of nearby clouds.

Such 4kW peaks lasting only a few seconds are not uncommon, but typically they do not show up in our standard logging, which comprises 5-minute averages.

There was one notable exception this February: Power surged to more than 4kW – significantly higher than the output on other sunny days in February. Actually, it was higher than the output on the best sunny day ever, May 11 last year, and as high as the peaks at summer solstice (aliens are green, of course):

Temperature effect and/or ‘focusing’?

On the day of the alien attack it was cloudy, and the night had been warmer than the night before the sunny reference day, February 6. At about 11:30 the sun broke through the clouds, hitting rather cool panels:

On that day, the sun lingered right at the edge of clouds for some time, and global radiation was likely higher than expected due to the focusing effect.

The jump in global radiation at 11:30 is clearly visible in our measurements. But in addition, the panels had been heated up before by the peak in solar radiation, and air temperature had risen, too. So the different effects cannot be disentangled easily.

Power drops by 0,44% of rated power per °C increase in panel temperature. Our generator is rated at 4,77kW, so power decreases by about 21W per °C of panel temperature.

At 11:30, power was 1,3kW higher than on the normal reference day – the theoretical equivalent of a panel temperature lower by 62°C. I think I can safely attribute the initial surge in output power to the unusual peak in global radiation alone.
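Translating a power surplus into an 'equivalent' panel temperature change is a one-liner, using the 0,44%/°C coefficient from above:

```python
# Convert a power difference into the panel temperature change that
# would be needed to explain it via the temperature coefficient alone.

rated_w = 4770               # rated generator power in W
temp_coeff = 0.0044          # fractional power change per °C

w_per_degc = rated_w * temp_coeff        # ~21 W per °C

surplus_w = 1300             # power above the reference day at 11:30
equivalent_dt = surplus_w / w_per_degc   # ~62 °C - implausibly large

print(f"{w_per_degc:.0f} W/°C -> equivalent panel cooling: {equivalent_dt:.0f} °C")
```

A 62°C cooler panel is physically implausible, which is why the surge must come mostly from the radiation peak.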

At 12:30 output power is lower by 300W on the normal sunny day compared to the alien day. This can partly be attributed to the lower input radiation, and partly to a higher ambient temperature.

But panel temperature has a simple, linear relationship with ambient temperature only if input radiation changes slowly. The sun might be blocked for a very short period – shorter than our standard logging interval of 90s for radiation – and the surface of the panels cools off intermittently. It is an interesting optimization problem: With just the right combination of blocking periods and sunny periods, overall output could be maximized.

Re-visiting data from last hot August to add more dubious numbers

Panels’ performance was lower for higher ambient air temperatures …

… while global radiation over time was about the same. Actually the enveloping curve was the same, and there were even negative spikes at noon despite the better PV performance:

The difference in peak power was about 750W. The panel temperature difference needed to account for that would be about 36°C. This is three times the measured difference in ambient temperature of 39°C – 27°C = 12°C. Is this plausible?

PV planners use a worst-case panel temperature of 75°C – for worst-case hot days like August 12, 2015.

The Normal Operating Cell Temperature of panels is about 46°C. Normal conditions are: 20°C ambient air, 800W/m2 solar radiation, and free-standing panels. One panel has an area of about 1,61m2; our generator with 18 panels has 29m2, so 800W/m2 translates to 23kW of input. Since the efficiency of solar panels is about 16%, 23kW of input yields about 3,7kW output power – about the average of the peak values of the two days in August. Our panels are attached to the roof, not free-standing – which is expected to result in a temperature increase of 10°C.

So we had been close to normal conditions at noon radiation-wise, and if we had been able to crank ambient temperature down to 20°C in August, panel temperature would have been about 46°C + 10°C = 56°C.

I am boldly interpolating now, in order to estimate panel temperature on the ‘colder’ day in August:

Air Temperature | Panel Temperature | Comment
20°C            | 56°C              | Normal operating conditions, plus typical temperature increase for well-vented rooftop panels.
27°C            | 63°C              | August 1. Measured ambient temperature; solar cell temperature interpolated.
39°C            | 75°C              | August 12. Measured ambient temperature. Panel temperature is an estimate for the worst case.

Under perfectly stable conditions panel temperature would have differed by 12°C, resulting in a difference of only ~ 250W (12°C * 21W/°C).

Even considering higher panel temperatures on the hotter day, or a non-linear relationship between air temperature and panel temperature, will not easily give you the ~36°C of temperature difference required to explain the observed difference of 750W.

I think we see aliens at work again:

At about 10:45, global radiation on the cooler day, August 1, starts to fluctuate – most likely even more wildly than we see at the 90s logging interval. Before 10:45, the difference in output power between the two days is actually more like 200-300W – in line with my haphazard estimate for steady-state conditions.

Then at noon the ‘focusing’ effect could have kicked in, and panel surface temperature might have fluctuated between the 27°C air temperature minimum and the estimated 63°C. Both of these effects together could result in the required additional increase of a few hundred W.

Since ‘focusing’ is actually refraction by particles in the thinned-out edges of clouds, I wonder if the effect could also be caused by barely visible variations in the density of mist in the sky, as I remember the hot period in August 2015 as sweltry and a bit hazy rather than partly cloudy.

I think it is likely that both beneficial effects – temperature and ‘focusing’ – will always be observed in unison. On February 11 I had the chance to see the effect of focusing only (or traces of an alien spaceship that just exited a worm-hole) for about half an hour.

________________________________

On temperature dependence of PV output power – from an awesome resource on photovoltaics:

On the ‘focusing’ effect:

• Can You Get More than 100% Solar Energy?
Note especially this comment – describing refraction, and pointing out that refraction of light can ‘focus’ light that would otherwise have been scattered back into space. This commentator also proposes different mechanism for short spikes in power and increase of power during extended periods (such as I observed on February 11).
• Edge-of-Cloud Effect

Source for the 10°C higher temperature of rooftop panels versus free-standing ones: German link, p.3: Ambient air + 20°C versus air + 30°C

Temperature Waves and Geothermal Energy

Nearly all renewable energy exploited today is, in a sense, solar energy. Photovoltaic cells convert solar radiation into electricity; solar thermal collectors heat water. Plants need solar power for photosynthesis, for ‘creating biomass’. The motion of water and air is influenced not only by the fictitious forces caused by the earth’s rotation, but also by temperature gradients imposed by the distribution of solar energy.

Geothermal heat pumps with ground loops near the surface also actually use solar energy, deposited in summer and stored for winter – that’s why I think ‘geothermal heat pump’ is a bit of a misnomer.

Collector (heat exchanger) for brine-water heat pumps.

Within the first ~10 meters below the surface, temperature fluctuates throughout the year; at 10m the temperature remains about constant and equal to 10-15°C for the whole year.

Only at greater depths can the flow of ‘real’ geothermal energy be spotted: In the top layer of the earth’s crust, temperature rises about linearly, at about 3°C (3K) per 100m. The details depend on geological peculiarities; the gradient can be higher in active regions. This is the energy utilized by geothermal power plants delivering electricity and/or heat.

Geothermal gradient adapted from Boehler, R. (1996). Melting temperature of the Earth’s mantle and core: Earth’s thermal structure. Annual Review of Earth and Planetary Sciences, 24(1), 15–40. (Wikimedia, user Bkilli1). Geothermal power plants use boreholes a few kilometers deep.

This geothermal energy originates from radioactive decays and from the violent past of the primordial earth, when the kinetic energy of celestial objects colliding with each other turned into heat.

The flow of geothermal energy per area directed to the surface, associated with this gradient, is about 65 mW/m2 on continents:

Global map of the flow of heat, in mW/m2, from Earth’s interior to the surface. Davies, J. H., & Davies, D. R. (2010). Earth’s surface heat flux. Solid Earth, 1(1), 5-24. (Wikimedia user Bkilli1)

Some comparisons:

• It is small compared to the energy from the sun: In middle Europe, the sun provides about 1.000 kWh per m2 and year, thus 1.000.000 Wh / 8.760 h = 114 W/m2 on average.
• It is also much lower than the rule-of-thumb power of ‘flat’ ground loop collectors – about 20 W/m2.
• The total ‘cooling power’ of the earth is several 10^10 kW: If the energy were not replenished by radioactive decay, the earth would lose a seemingly impressive 10^14 kWh per year, yet this would result only in a temperature drop of ~10^-7 °C per year. (This is just a back-of-the-envelope check of orders of magnitude, based on the earth’s mass and surface area; see links at the bottom for detailed values.)
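The back-of-the-envelope check can be reproduced with round values; surface area and mass are standard figures for the earth, while the average specific heat of 800 J/(kg K) is my rough assumption, not a measured bulk value:

```python
# Order-of-magnitude check of the earth's 'cooling power'.

heat_flux = 0.065            # W/m2, average continental heat flow
surface_m2 = 5.1e14          # earth's surface area
mass_kg = 6.0e24             # earth's mass (rounded)
c_avg = 800.0                # J/(kg K), assumed average specific heat

power_kw = heat_flux * surface_m2 / 1000                 # several 1e10 kW
energy_kwh_year = power_kw * 8760                        # ~1e14 kWh per year
delta_t = energy_kwh_year * 3.6e6 / (mass_kg * c_avg)    # ~1e-7 °C per year

print(f"{power_kw:.1e} kW, {energy_kwh_year:.1e} kWh/year, dT ~ {delta_t:.0e} °C")
```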

The constant temperature at 10m depth – in the ‘neutral zone’ – is about the same as the average temperature of the earth’s surface (averaged over one year): about 14°C. I will show below that this is not a coincidence: The temperature right below the fluctuating temperature wave ‘driven’ by the sun has to be equal to the average value at the surface. It is misleading to attribute the temperature at 10m depth to the ‘hot inner earth’ alone.

In this post I am toying with theoretical calculations, but in order not to scare readers off too much, I show the figures first and add the derivation as an appendix. My goal is to compare these results with our measurements, to cross-check the assumptions for the thermal properties of ground that I use in numerical simulations of our heat pump system (which I need for modeling e.g. the expected maximum volume of ice).

1. The surface temperature varies periodically over a year; I use maximum, minimum and average temperatures from our measurements (corrected a bit for the mild last seasons). These are daily averages, as I am not interested in the temperature changes between day and night.
2. A constant geothermal flow of 65 mW/m2 is superimposed on that.
3. The slow transport of solar energy into the ground is governed by a thermal property of ground called thermal diffusivity. It describes ‘how quickly’ a deposited lump of heat will spread; its unit is area per time. I use an assumption for this number based on values for soil in the literature.

I determine the temperature as a function of depth and time by solving the differential equation that governs heat conduction. This equation tells us how a spatial distribution of heat energy – a ‘temperature field’ – slowly evolves with time, given the temperature at the boundary of the part of space in question – in this case, the surface of the earth. Fortunately, the yearly oscillation of air temperature is about the simplest boundary condition one could have, so the solution can be calculated analytically.
Another nice feature of the underlying equation is that it allows for adding different solutions: I can just add the effect of the real geothermal flow of energy to the fluctuations caused by solar energy.

The result is a ‘damped temperature wave’: the temperature varies periodically with time and space, and the spatial maximum of temperature moves from the surface to a point below and back. In summer (beginning of August) the measured temperature is maximum at the surface, but in autumn the maximum is found some meters below – heat then flows back from the ground to the surface:

Calculated ground temperature, based on measurements of the yearly variation of the temperature at the surface and an assumption of the thermal properties of ground. Calculated for typical middle European maximum and minimum temperatures.

This figure is in line with the images shown in every textbook on geothermal energy. Since the wave is symmetrical about the yearly average, the temperature at about 10m depth, where the wave has ‘run out’, has to be equal to the yearly average at the surface. The wave does not have much chance to oscillate, as it is damped down within the first period: the decay length is much shorter than the wavelength.

The geothermal flow just adds a small distortion, an asymmetry of the ‘wave’. It is seen only when switching to a larger scale.

Same data as in the previous plot, just extended to greater depths. The geothermal gradient is about 3°C/100m, the detailed value being calculated from the value of thermal conductivity also used to model the fluctuations.

Now varying time instead of space: The greater the depth, the more time it takes for the ground to reach maximum temperature. The lag of the maximum temperature is proportional to depth: For 1m difference in depth, it is less than a month.

Temporal change of ground temperature at different depths. The wave is damped, but otherwise simply ‘moves into the earth’ at a constant speed.

Measuring the time difference between the maxima at different depths lets us determine the ‘speed of propagation’ of this wave – its wavelength divided by its period. The speed depends in a simple way on the thermal diffusivity and the period, as I show below.
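For the sinusoidal boundary condition, the decay depth is sqrt(2D/ω) and the propagation speed is ω times that depth. A sketch with an assumed soil diffusivity of 1e-6 m2/s – of the order of literature values, not our calibrated number:

```python
# Decay depth, speed, and phase lag of the yearly temperature wave.
import math

D = 1.0e-6                       # thermal diffusivity of ground, m2/s (assumed)
P = 365.25 * 24 * 3600           # period: one year, in seconds
omega = 2 * math.pi / P          # angular frequency of the yearly oscillation

decay_depth = math.sqrt(2 * D / omega)   # ~3 m: wave dies out within ~10 m
speed = omega * decay_depth              # propagation speed of the maxima, m/s

lag_70cm_days = 0.7 / speed / 86400      # lag of the peak at 70 cm depth

print(f"decay depth: {decay_depth:.1f} m")
print(f"lag at 70 cm: {lag_70cm_days:.0f} days")
```

With this assumed diffusivity, the computed lag at 70 cm lands in the 10-15 day range read off the measurements below.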

But this gives me an opportunity to cross-check my assumption for the diffusivity: I need to compare the calculations with the experimentally determined delay of the maximum. We measure ground temperature at different depths, below our ice/water tank but also in undisturbed ground:

Temperature measured with Pt1000 sensors – comparing ground temperature at different depths, and the related ‘lag’. Vertical dotted lines indicate the approximate positions of maxima and minima. The lag is about 10-15 days.

The lag derived from the figure is of the same order as the lag derived from the calculation, and thus in accordance with my assumed thermal diffusivity: At 70cm depth, the temperature peak is delayed by about two weeks.

___________________________________________________

Appendix: Calculations and background.

I am trying to give an outline of my solution, plus some ‘motivation’ of where the differential equation comes from.

Heat transfer is governed by the same type of equation that also describes the diffusion of gas molecules and similar phenomena. Something lumped together in space slowly peters out; spatial irregularities are flattened. Or: the temporal change – the first derivative with respect to time – is ‘driven’ by the spatial curvature – the second derivative with respect to space.

$\frac{\partial T}{\partial t} = D\frac{\partial^{2} T}{\partial x^{2}}$

This is the heat transfer equation for a region of space that does not have any sources or sinks of heat – places where heat energy would be created from ‘nothing’ or vanish – like an underground nuclear reaction (or freezing of ice). All we know about the material is covered by the constant D, called thermal diffusivity.

The equation is based on local conservation of energy: The energy stored in a small volume of space can only change if something is created or removed within that volume (‘sources’) or if it flows out of the volume through its surface. This is a very general principle, applicable to almost anything in physics. Without sources or sinks, this translates to:

$\frac{\partial [energy\,density]}{\partial t} = -\frac{\partial \overrightarrow{[energy\,flow]}}{\partial x}$

The energy density [J/m3] stored in a volume of material by heating it up from some start temperature is proportional to temperature, the proportionality factors being the mass density ρ [kg/m3] and the specific heat cp [J/kgK] of this material. The energy flow per area [W/m2] is typically nearly proportional to the temperature gradient, the constant being the heat conductivity κ [W/mK]. The gradient is the first-order derivative in space, so inserting all this we end up with the second derivative in space.
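The conservation argument also shows how the equation could be solved numerically. Here is a minimal explicit finite-difference sketch – a toy illustration of my own, not the simulation tool mentioned in the post:

```python
# Toy explicit (forward-time, centered-space) discretization of
# dT/dt = D * d2T/dx2 -- an illustration only, not the actual simulation tool.

def heat_step(temps, D, dx, dt):
    """Advance a 1D temperature profile by one time step (fixed boundaries)."""
    r = D * dt / dx**2
    # The explicit scheme is only stable for r <= 0.5:
    assert r <= 0.5, "time step too large for this grid spacing"
    new = temps[:]
    for i in range(1, len(temps) - 1):
        # Temporal change driven by the discrete spatial curvature:
        new[i] = temps[i] + r * (temps[i - 1] - 2 * temps[i] + temps[i + 1])
    return new

# A 'lump' of heat slowly peters out and flattens:
profile = [0.0, 0.0, 10.0, 0.0, 0.0]
for _ in range(10):
    profile = heat_step(profile, D=0.0026, dx=0.1, dt=1.0)
```

The grid spacing, time step, and diffusivity value in the example are arbitrary; the point is only that the update rule mirrors the equation term by term.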

All three characteristic constants of the heat conducting material can be combined into one – the diffusivity mentioned before:

$D = \frac{\kappa }{\varrho \, c_{p} }$

So changes in more than one of these parameters can compensate for each other; for example low density can compensate for low conductivity. I hinted at this when writing about heat conduction in our gigantic ice cube: Ice has a higher conductivity and a lower specific heat than water, thus a much higher diffusivity.
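For illustration, that comparison in numbers – using approximate textbook property values for ice and water near 0°C, which are my assumptions and not taken from the post:

```python
# D = kappa / (rho * c_p); approximate textbook values near 0 deg C (my numbers).
def diffusivity(kappa, rho, cp):
    """Thermal diffusivity in m2/s from W/mK, kg/m3, J/kgK."""
    return kappa / (rho * cp)

D_ice = diffusivity(kappa=2.2, rho=917.0, cp=2100.0)     # roughly 1.1e-6 m2/s
D_water = diffusivity(kappa=0.6, rho=1000.0, cp=4180.0)  # roughly 1.4e-7 m2/s

# Higher conductivity plus lower specific heat: ice 'diffuses' heat
# roughly an order of magnitude faster than liquid water.
ratio = D_ice / D_water
```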

I am considering a vast area of ground irradiated by the sun, so heat conduction will be one-dimensional, the temperature changing only along the axis perpendicular to the surface. At the surface the temperature varies periodically throughout the year. t=0 is to be associated with the beginning of August – our experimentally determined maximum – and the minimum is observed at the beginning of February.

This assumption is just the boundary condition needed to solve the partial differential equation. The real ‘wavy’ variation of temperature is close to a sine wave, which also makes the calculation very easy. As a physicist I have been trained to use a complex exponential function rather than sine or cosine, keeping in mind that only the real part describes the real world. This is a legitimate choice, thanks to the linearity of the differential equation:

$T(t,x=0) = T_{0} e^{i\omega t}$

with ω being the angular frequency corresponding to one year (2π/ω = 1 year).

It oscillates about 0, with an amplitude of T0. But after all, the definition of 0°C is arbitrary, and – again thanks to linearity – we can use this solution and just add a constant to shift it to the desired value. A constant changes neither with space nor with time and thus solves the equation trivially.

If you had more complicated sources or sinks, you would represent them mathematically as a superposition of simpler ‘sources’, find a quick solution for each, and then add up the solutions – again thanks to linearity. We are lucky that our boundary condition consists of just one such simple harmonic wave, so we guess at the solution for all of space by adding a spatial wave to the temporal one.

So this is the ansatz – an educated guess for the function that we hope solves the differential equation:

$T(t,x) = T_{0} e^{i\omega t + \beta x}$

It’s the temperature at the surface multiplied by an exponential function, with x positive and increasing with depth. β is some number we don’t know yet. For x=0 the ansatz equals the boundary temperature. If β were a real, negative number, the temperature would decrease exponentially with depth.

The ansatz is inserted into the heat equation, and every differentiation with respect to either space or time just yields a factor; then the exponential function can be cancelled from the heat transfer equation. We end up with a constraint for the factor β:

$i\omega = D\beta^{2}$

Taking the square root of this complex number, there are two solutions:

$\beta=\pm \sqrt{\frac{\omega}{2D}}(1+i)$
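A quick numerical cross-check – using ω for the yearly cycle and the diffusivity value from later in the post, in per-hour units – confirms that both roots satisfy the constraint:

```python
import cmath

# Check that beta = +/- sqrt(omega/(2D)) * (1 + i) solves i*omega = D*beta^2.
omega = 2 * cmath.pi / 8760.0   # angular frequency of the yearly cycle [1/h]
D = 0.002631                    # thermal diffusivity [m2/h]

beta = cmath.sqrt(omega / (2 * D)) * (1 + 1j)
for root in (beta, -beta):
    assert abs(D * root**2 - 1j * omega) < 1e-12
```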

β has a real and an imaginary part: Using it in T(t,x), the real part corresponds to exponential ‘decay’ while the imaginary part is an oscillation (similar to the temporal one).

Both the real and the imaginary part of this function solve the equation (as does any linear combination of them). So we take the real part and insert β – only the solution with the negative sign makes sense, as the other one would describe a temperature increasing to infinity with depth:

$T(t,x) = Re \left(T_{0}e^{i\omega t} e^{-\sqrt{\frac{\omega}{2D}}(1+i)x}\right)$

The exponent has to be dimensionless, so we can express the combinations of constants as characteristic lengths, and insert the definition ω = 2π/τ:

$T(t,x) = T_{0} e^{-\frac{x}{l}}cos\left(2\pi\left(\frac {t} {\tau} -\frac{x}{\lambda }\right)\right)$

The two lengths are:

• the wavelength of the oscillation $\lambda = \sqrt{4\pi D\tau }$
• and the attenuation length  $l = \frac{\lambda}{2\pi} = \sqrt{\frac{D\tau}{\pi}}$

So the ratio between those lengths does not depend on the properties of the material, and the attenuation length is always much shorter than the wavelength. That’s why hardly one full period is visible in the plots.

The plots have been created with these parameters:

• Heat conductivity κ = 0.0019 kW/mK
• Density ρ = 2000 kg/m3
• Specific heat cp = 1.3 kJ/kgK
• Period τ = 1 year = 8760 hours

Thus:

• Diffusivity D = 0.002631 m2/h
• Wavelength λ = 17 m
• Attenuation length l = 2.7 m
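These values can be reproduced in a few lines – a sketch using the parameters just quoted, where the only subtlety is converting κ from per-second to per-hour units:

```python
import math

kappa = 0.0019   # heat conductivity [kW/mK] = [kJ/(s m K)]
rho = 2000.0     # density [kg/m3]
cp = 1.3         # specific heat [kJ/kgK]
tau = 8760.0     # period: one year [h]

# kJ per second -> kJ per hour, hence the factor 3600:
D = kappa * 3600.0 / (rho * cp)          # diffusivity [m2/h], ~0.00263
lam = math.sqrt(4 * math.pi * D * tau)   # wavelength [m], ~17
l = lam / (2 * math.pi)                  # attenuation length [m], ~2.7

def temperature(t, x, T0=1.0):
    """Damped temperature wave T(t, x), oscillating about 0 degC."""
    return T0 * math.exp(-x / l) * math.cos(2 * math.pi * (t / tau - x / lam))
```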

The wave (like any wave) propagates with a speed v equal to wavelength over period: v = λ/τ.

$v = \frac{\lambda}{\tau} = \frac{\sqrt{4\pi D\tau}}{\tau} = \sqrt{\frac{4\pi D}{\tau}}$

The speed depends only on the period and the diffusivity.

The maximum of the temperature as observed at a certain depth x is delayed by a time equal to x over v. Cross-checking our measurements of T(30cm) and T(100cm), I would thus expect a delay of 0.7m / (17m/8760h) = 360 h = 15 days, which agrees with the measurements to the order of magnitude. Note one thing though: Only the square root of D enters the calculation, so any error I make in my assumptions for D will be generously reduced.
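The same cross-check in code, with the numbers from above:

```python
# Propagation speed of the temperature wave and the expected lag
# between the two sensor depths (values as quoted in the post).
lam = 17.0      # wavelength [m]
tau = 8760.0    # period [h]
v = lam / tau   # speed [m/h]

# The sensors at 30 cm and 100 cm depth are 0.7 m apart:
delay_hours = 0.7 / v             # ~360 h
delay_days = delay_hours / 24.0   # ~15 days
```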

I have not yet included the geothermal linear temperature gradient in the calculation. Again we are grateful for linearity: A linear – zero-curvature – temperature profile that does not change with time is also a trivial solution of the equation that can be added to our special exponential solution.

So the full solution shown in the plot is the sum of:

• The damped oscillation (oscillating about 0°C)
• Plus a constant denoting the true yearly average temperature
• Plus a linear decrease with depth, the linear correction being 0 at the surface to meet the boundary condition.

If there were no geothermal gradient (thus no flow from beneath), the temperature at infinite depth (practically, at 20 m) would be the same as the average temperature at the surface.

Daily changes could be taken into account by adding yet another solution that satisfies an amended boundary condition: daily fluctuations of temperature superimposed on the yearly oscillation. The derivation would be exactly the same; only the period differs, by a factor of 365. Since the characteristic lengths go with the square root of the period, the yearly and daily lengths differ only by a factor of about 19.
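In numbers, under the same assumptions:

```python
import math

# Characteristic lengths scale with the square root of the period,
# so the daily and yearly waves differ by a factor of sqrt(365):
factor = math.sqrt(365.0)    # ~19

l_yearly = 2.7               # yearly attenuation length from above [m]
l_daily = l_yearly / factor  # ~0.14 m: daily changes barely reach below the surface
```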

________________________________________

Intro to geothermal energy:

Geothermal gradient and energy of the earth:

These borehole data, plotted on a single scale, show the gradient plus the disturbed surface region, with not much of a neutral zone in between.

Theory of Heat Conduction

Heat Transfer Equation on Wikipedia
Textbook on Heat Conduction, available on archive.org in different formats.

I have followed the derivation of temperature waves given in my favorite German physics book on thermodynamics and statistics, by my late theoretical physics professor Wilhelm Macke. This page quotes the classic on heat conduction by Carslaw and Jaeger, plus the main results for the characteristic lengths.

Half a Year of Solar Power and Smart Metering

Our PV generator and new metering setup have now been operational for half a year; this is my next wall of figures. For the first time I am combining data from all our loggers (PV inverter, smart meter for consumption, and the heat pump system’s monitoring), and I give a summary of our scrutiny of the building’s electrical base load.

For comparison: These are data for Eastern Austria (in sunny Burgenland). Our PV generator has 4.77 kWp, 10 panels oriented south-east and 8 south-west. Typical yearly energy production at our place, at about 48° latitude: ~5,300 kWh. In the first 6 months – May to November 2015 – we harvested about 4,000 kWh.
Our house (private home and office) matches the statistical average of an Austrian private home, that is, about 3,500 kWh/year for appliances (excluding heating; cooling is negligible here). We heat with a heat pump and need about 7,200 kWh of electrical energy per year in total.

In the following plots daily and monthly energy balances are presented in three ways:

1. Total consumption of the building as the sum of the PV energy used immediately, and the energy from the utility.
2. The same total consumption as the sum of the heat pump compressor’s input energy and the remaining energy for appliances, computers, control etc.
3. Total energy generated by PV panels as the sum of energy used (same amount as contributing to 1) and the energy sold to the utility.
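The three views boil down to two simple balance identities. A sketch with hypothetical daily figures in kWh – not our measured values:

```python
# Hypothetical daily energy figures [kWh] -- the four quantities the loggers deliver:
pv_self_consumed = 6.0   # PV energy used immediately in the building
from_grid = 4.0          # energy bought from the utility
to_grid = 9.0            # surplus PV energy sold to the utility
heat_pump_input = 3.0    # compressor input energy (sub-metered)

consumption = pv_self_consumed + from_grid   # view 1: total consumption
appliances = consumption - heat_pump_input   # view 2: remainder for appliances etc.
pv_total = pv_self_consumed + to_grid        # view 3: total PV generation
```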

In summer there is more PV energy available than needed, and – even with a battery – the rest would need to be fed into the grid. In October the heating season starts, and the heat pump needs more energy than solar energy can provide.

This is maybe demonstrated best by comparing the self-sufficiency quota (the ratio of PV energy consumed to total energy consumed) and the self-consumption quota (the ratio of PV energy consumed to PV production). The numbers ‘flip’ in October.
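A sketch of the two quotas, with hypothetical monthly figures in kWh – not the measured values behind the plots:

```python
def quotas(pv_self_consumed, consumption, pv_production):
    """Return (self-sufficiency, self-consumption) as fractions."""
    return (pv_self_consumed / consumption,     # share of demand covered by PV
            pv_self_consumed / pv_production)   # share of PV used on site

# Hypothetical monthly figures [kWh]:
summer = quotas(pv_self_consumed=200.0, consumption=400.0, pv_production=800.0)
winter = quotas(pv_self_consumed=100.0, consumption=900.0, pv_production=150.0)
# In summer, self-sufficiency is high and self-consumption low;
# in winter the numbers 'flip'.
```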

In November we had some unusually warm, record-breaking days, while the weather became more typical at the end of the month:

This is reflected in energy consumption: November 10 was nearly like a summer day, when the heat pump only had to heat hot water; on the colder days it needed about 20 kWh (resulting in 80-100 kWh of heating energy).

In July we had the chance to measure what the building needs per day without any life-forms in it – the absolute minimum baseline. On July 10, 11, and 12 we were away, and about 4 kWh were consumed per day – 160 W on average.

Note that the 4kWh baseline is 2-3 times the energy the heat pump’s compressor needs for hot water heating every day:

We catalogued all devices, googled for data sheets and measured power consumption, flipped a lot of switches, and watched the smart meter tracking the current consumption of each device.

Consumption minus production: current values when I started to write this post – the sun was about to set. In order to measure the consumption of individual devices, they were switched on and off one after the other, after sunset.

We abandoned some gadgets and re-considered usage. But in this post I want to focus on the base load only – on all devices that contribute to the 160W baseline.

As we know from quantum physics, the act of observing changes the result of the measurement. So it was not a surprise that the devices used for measuring, monitoring, and metering, plus the required IT infrastructure, make up the main part of the base load.

Control & IT base load – 79W

• Network infrastructure, telephone, and data loggers – 35W: Internet provider’s DSL modem / router, our router + WLAN access point, switch, ISDN phone network termination, data loggers / ethernet gateways for our control unit, Uninterruptible Power Supply (UPS).
• Control and monitoring unit for the heat pump system, controlling various valves and pumps: 12W.
• The heat pump’s internal control: 10W
• Three different power meters – 22W: 1) the utility’s Siemens smart meter, 2) our own smart meter with data logger and WLAN, 3) a dumb meter for the overall electrical input energy of the heat pump (compressor plus auxiliary energy). The latter needs 8W despite its dumbness.

Other household base load – 39W

• Unobtrusive small gadgets – 12W: Electrical toothbrush, motion detectors, door bell, water softener, that obnoxious clock at the stove which is always wrong and can’t be turned off either, standby energy of microwave oven and of the PV generator’s inverter.
• Refrigerator – 27W: 0.65 kWh per day.

Non-essential IT networking infrastructure – 10W

• WLAN access point and router for the base floor – for connecting the PV inverter and the smart meter and providing WLAN to all rooms.

These are not required 24/7; you don’t lose data by turning them off. Remembering to turn them off daily might be a challenge:

Non-24/7 office devices – 21W. Now turned off with a flip switch every evening, and only turned on when needed.

• Phones and headsets: 9W.
• Scanner/Printer/Fax: 8W. Surprisingly, there was no difference between ‘standby’ and ‘turned off’ using the soft button – it always needs 8W unless you really disconnect it.
• Server in hibernated state – 4W. Note that it already took a small hack of the operating system to hibernate the server at all. Years ago the server was on 24/7, and its energy consumption amounted to 500 kWh a year.

Stuff retired after this ‘project’ – 16W.

• Radio alarm clock – 5W. Most useless consumption of energy ever. This post is not meant to brag about the smartest use of energy ever, but to provide a realistic account backed up by data.
• Test and backup devices – 7W. Backup notebooks, charging all the time, backup router for playground subnet not really required 24/7, timer switch most likely needing more energy than it saved by switching something else.
• Second old Uninterruptible Power Supply – 4W. It was used for one connected device only, in addition to the main one, and had been purchased in the last century, when peculiarities of the local power grid rebooted computers every day at 4:00 PM.
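Adding up the groups listed above – a bookkeeping sketch using the round figures from the post:

```python
# Base-load groups as catalogued above, in watts:
base_load = {
    "control & IT": 79,
    "other household": 39,
    "non-essential IT networking": 10,
    "non-24/7 office devices": 21,
    "retired stuff": 16,
}

total = sum(base_load.values())   # 165 W, close to the ~160 W baseline

# Retiring devices plus switching off the non-24/7 group accounts for
# roughly the 40 W reduction:
saved = base_load["retired stuff"] + base_load["non-24/7 office devices"]
```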

In total, we were able to reduce the base load by about 40W, 25% of the original value. This does not sound like much – it is equivalent to a small light bulb. But it amounts to 350 kWh per year, that is, 10% of the yearly energy consumption!

___________________________

Logging setup:

• Temperature / compressor’s electrical power: Universal control UVR1611 and C.M.I. as data logger, logging interval 90 seconds. Temperature sensors: Pt1000. Power meter: CAN Energy Meter. Log files are exported daily to CSV files using Winsol.
• PV output power: Datamanager 2.0 included with PV inverter Fronius Symo 4.5-3-M, logging interval 5 minutes.
• Consumed energy: Smart meter EM-210, logging interval 15 minutes.
• CSV log files are imported into Microsoft SQL Server 2014 for analysis and consolidation. Plots are created with Microsoft Excel as front end to SQL Server, from daily and monthly views on joined UVR1611 / Fronius Symo / EM-210 tables.