The Orphaned Internet Domain Risk

I have clicked on company websites of social media acquaintances, and something was not right: slight errors in formatting, encoding errors for special German characters.

Then I notice that some of the pages contain links to other websites that advertise products in a spammy way. However, the links to the spammy sites are embedded in these alleged company websites in a subtle way: using the (nearly) correct layout, or embedding the link in a ‘news article’ that also contains legit product information – content really related to the internet domain I am visiting.

Looking up whois information tells me that these internet domains are not owned by my friends anymore – consistent with what they actually say on their social media profiles. So how come they ‘gave’ their former domains to spammers? They did not, and they didn’t need to: Spammers simply need to watch out for expired domains, seize them when they become available – and then reconstruct the former legit content from public archives and interleave it with their spammy messages.

The former content of legitimate sites is often available on the web archive. Here is the timeline of one of the sites I checked:

Clicking on the details shows:

  • Last display of legit content in 2008.
  • In 2012 and 2013 a generic message from the hosting provider was displayed: This site has been registered by one of our clients.
  • After that we see mainly 403 Forbidden errors – so the spammers don’t want their site to be archived – but at one point a screen capture of the spammy site was taken.

The new site shows the name of the former owner at the bottom, but an unobtrusive link has been added, indicating the new owner – a US-based marketing and SEO consultancy.

So my takeaway is: If you ever feel like decluttering your websites and freeing yourself of your useless digital possessions – and possibly also social media accounts – think twice: As soon as your domain or name is available, somebody might take it, and re-use and exploit your former content and possibly your former reputation for promoting their spammy stuff in a shady way.

This happened a while ago, but I know now it can get much worse: Why only distribute marketing spam if you can distribute malware through channels still considered trusted? In this blog post Malwarebytes raises the question whether such practices are illegal or not – it seems that question is not straightforward to answer.

Visitors do not even have to visit the abandoned domain explicitly to be hit by the malware served. I have seen some reports of abandoned embedded plug-ins turned into malicious zombies. Silly example: If you embed your latest tweets, Twitter goes out of business, and its domains are seized by spammers – your Follow Me icon might help to spread malware.

If a legit site runs third-party code, its operators need to trust the authors of this code. For example, Equifax’ website recently served spyware:

… the problem stemmed from a “third-party vendor that Equifax uses to collect website performance data,” and that “the vendor’s code running on an Equifax Web site was serving malicious content.”

So if you run any plug-ins, embedded widgets or the like – better check regularly if the originating domain is still run by the expected owner – monitor your vendors often; and don’t run code you do not absolutely need in the first place. Don’t use embedded active badges if a simple link to your profile would do.
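As a starting point for such a check, here is a minimal sketch of my own – not a vetted tool, and the URL is just a placeholder: it lists the external domains a page pulls scripts, frames, or images from, and runs each through the standard whois command-line tool; the registrar and expiry lines are what you would eyeball for a change of ownership.

import subprocess
import urllib.request
from html.parser import HTMLParser
from urllib.parse import urlparse

class EmbeddedSourceParser(HTMLParser):
    """Collects the host names of script/iframe/img sources on a page."""
    def __init__(self):
        super().__init__()
        self.domains = set()
    def handle_starttag(self, tag, attrs):
        if tag in ("script", "iframe", "img"):
            host = urlparse(dict(attrs).get("src") or "").hostname
            if host:
                self.domains.add(host)

def third_party_domains(url):
    html = urllib.request.urlopen(url).read().decode("utf-8", "replace")
    parser = EmbeddedSourceParser()
    parser.feed(html)
    own = urlparse(url).hostname
    return sorted(d for d in parser.domains if d != own)

if __name__ == "__main__":
    for domain in third_party_domains("https://example.com"):  # your site here
        whois_out = subprocess.run(["whois", domain],
                                   capture_output=True, text=True).stdout
        flags = [line.strip() for line in whois_out.splitlines()
                 if "Registrar:" in line or "Expir" in line]
        print(domain, flags)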

Do a painful, boring inventory and assessment often – then you will notice how much work it is to manage these ‘partners’, and you will rather stay away from signing up and registering for too many services.

Data for the Heat Pump System: Heating Season 2016-2017

I update the documentation of measurement data [PDF] about twice a year. This post is to provide a quick overview for the past season.

The PDF also contains the technical configuration and sizing data. Based on typical questions from an ‘international audience’ I add a summary here plus some ‘cultural’ context:

Building: The house is a renovated, nearly 100-year-old building in Eastern Austria: a typical so-called ‘Streckhof’ – an elongated, former small farmhouse. Some details are mentioned here. Heating energy for space heating of two storeys (185m²) and hot water is about 17.000-20.000kWh per year. The roof / attic had been rebuilt in 2008, and the facade was thermally insulated. However, the major part of the house has no underground level, so most energy is lost via the ground. Heating only the ground floor (75m²) with the heat pump reduces heating energy only by 1/3.

Climate: This is the sunniest region of Austria – the lowlands of the Pannonian Plain bordering Hungary. We have Pannonian ‘continental’ climate with low precipitation. Normally, monthly average temperatures in winter are only slightly below 0°C in January, and weeks of ‘ice days’ in a row are very rare.

Heat energy distribution and storage (in the house): The renovated first floor has floor loops, while on the ground floor mainly radiators are used. Wall heating has been installed in one room so far. A buffer tank is used for the heating water, as this is a simple ‘on-off’ heat pump always operating at about its rated power. Domestic hot water is heated indirectly using a hygienic storage tank.

Heating system: An off-the-shelf, simple brine-water heat pump uses a combination of an unglazed solar-air collector and an underground water tank as a heat source. Energy is mainly harvested from rather cold air via convection.

Addressing often asked questions: Off-the-shelf = same type of heat pump as used with geothermal systems. Simple: Not smart, not trying to be the universal energy management system, as the smartness is in our own control unit and logic for managing the heat source(s). Brine: A mixture of glycol and water (similar to the fluid used with flat solar thermal collectors) = antifreeze, as the temperature of the brine is below 0°C in winter. The tank is not a seasonal energy storage but a buffer for days or weeks. In this post the hydraulics is described in detail, as are typical operating conditions throughout a year. Both tank and collector are needed: The tank provides a buffer of latent energy during ‘ice periods’ and it allows us to harvest more energy from air, but the collector actually provides about 75% of the total ambient energy the heat pump needs in a season.

Tank and collector are rather generously sized in relation to the heating demands: about 25m³ volume of water (total volume +10% freezing reserve) and 24m² collector area.

The overall history of data documented in the PDF also reflects ongoing changes and some experiments, like heating the first floor with a wood stove, toggling the effective area of the collector used between 50% and 100%, or switching off the collector to simulate a harsher winter.

Data for the past season

Finally we could create a giant ice cube naturally. 14m³ of ice were created in the coldest January in 30 years. The monthly average temperature was -3,6°C, 3 degrees below the long-term average.

(Regarding the oscillations of the ice volume, see here and here.)

We heated only the ground floor in this season and needed 16.600 kWh (incl. hot water) – about the same heating energy as in the previous season. On the other hand, we also used only half of the collector – 12m². The heating water inlet temperature for the radiators was just 37°C even in January.

For the first time the monthly performance factor was well below 4. The performance factor is the ratio of output heating energy and input electrical energy for heat pump and brine pump. In Central Europe we measure both energies in kWh 😉 The overall seasonal performance factor was 4,3.
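For readers who like it spelled out, that is all the ‘formula’ there is – a one-liner; the sample numbers below are made up for illustration, not taken from our measurements:

# Performance factor: heat output divided by electrical input of
# heat pump plus brine pump, both metered in kWh over the same period.
def performance_factor(heat_kwh, el_heat_pump_kwh, el_brine_pump_kwh):
    return heat_kwh / (el_heat_pump_kwh + el_brine_pump_kwh)

# Hypothetical monthly readings:
print(round(performance_factor(2500.0, 600.0, 50.0), 2))  # -> 3.85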

The monthly performance factor is a bit lower again in summer, when only hot water is heated (and thus the heat pump’s COP is lower because of the higher target temperature).

Per day we needed about 100kWh of heating energy in January, while the collector could not harvest that much:

In contrast to the season of the Ice Storage Challenge, the month before the ‘challenge’ (Dec. 2016) was not too collector-friendly either. But when the ice melted again, we saw the usual large energy harvests. Overall, the collector could not contribute the full ‘typical’ 75% of ambient energy this season.

(Definitions, sign conventions explained here.)

But there was one positive record, too. In the hot summer of 2017 we consumed the highest cooling energy so far – about 600kWh. The floor loops are used for passive cooling; the heating buffer tank is used to transfer heat from the floor loops to the cold underground tank. In ‘colder’ summer nights the collector is in turn used to cool the tank, and every time hot tap water is heated up, the tank is cooled, too.

Of course the available cooling power is just a small fraction of what an AC system sized for the theoretical cooling load would provide. However, this moderate cooling is just what – for me – makes the difference between unbearable and OK on really hot days with more than 35°C peak ambient temperature.

Computers, Science, and History Thereof

I am reading three online resources in parallel – on the history and the basics of computing, computer science, software engineering, and the related culture and ‘philosophy’. An accidental combination I find most enjoyable.

Joel on Software: Joel Spolsky’s blog – a collection of classic essays. What every developer needs to know about Unicode. New terms like Astronaut Architects and Leaky Abstractions. How to start a self-funded software company, how to figure out the price of software, how to write functional specifications. Bringing back memories of my first encounters with Microsoft VBA. He has the best examples – Martian Headsets to explain web standards.

The blog started in 1999 – rather shortly after I had entered the IT industry. So it is an interesting time capsule, capturing technologies and trends I was sort of part of – including the relationship with one large well-known software company.

Somewhere deep in Joel’s blog I found references to another classic; it was in a piece of advice on how to show passion as an applicant for a software developer job. Tell them how reading this moved you to tears:

Structure and Interpretation of Computer Programs. I think I have found the equivalent to Feynman’s Physics Lectures in computer science! I have hardly ever read a textbook or attended a class that was both so philosophically insightful and useful in a hands-on, practical way. Using Scheme (Lisp) as an example, important concepts are introduced step-by-step, via examples, viewed from different perspectives.

It was amazing how far you can get with purely Functional Programming. I did not even notice that they had not used a single assignment (Data Mutation) until far into the course.
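To give a flavour of that style: the course famously develops Newton’s method for square roots early on (section 1.1.7). A rough translation of that idea to Python – my own sketch, not the book’s code – gets by without a single assignment:

# Newton's method for square roots in the spirit of SICP 1.1.7:
# improve a guess until it is good enough - no variable is ever mutated.
def sqrt_iter(guess, x):
    def good_enough(g):
        return abs(g * g - x) < 1e-9
    def improve(g):
        return (g + x / g) / 2        # average the guess with x/guess
    return guess if good_enough(guess) else sqrt_iter(improve(guess), x)

print(sqrt_iter(1.0, 2.0))  # -> 1.4142135623746899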

The quality of the resources made available for free is incredible – which holds for all the content I am praising in this post: full textbook, video lectures with transcripts, slides with detailed comments. It is also good to know, and reassuring, that despite the allegedly fast-paced changes of technology, basic concepts have not changed that much in decades.

But if you are already indulging in nostalgic thoughts why not catch up on the full history of computing?

Creatures of Thought. A sublime book-like blog on the history of computing – starting with the history of telephone networks and telegraphs, covering computing machines – electro-mechanical or electronic – and related, maybe unappreciated hardware components like the relay, and including biographic vignettes of the heroes involved.

The author’s PhD thesis (available for download on the About page) covers the ‘information utility’ vision that was ultimately superseded by the personal computer. This is an interesting time capsule for me as well, as this story ends about where my personal journey started – touching personal computers in the late 1980s, but having been taught the basics of programming via sending my batch jobs to an ancient mainframe.

From such diligently done history of engineering I can only learn not to rush to any conclusions. There are no simple causes and effects, or unambiguous stories about who invented what and who was first. It’s all subtle evolution and meandering narratives, randomness and serendipity. Quoting from the post that indicates the beginning of the journey, on the origins of the electric telegraph:

Our physics textbooks have packaged up the messy past into a tidy collection of concepts and equations, eliding centuries of development and conflict between competing schools of thought. Ohm never wrote the formula V = IR, nor did Maxwell create Maxwell’s equations.

Though I will not attempt to explore all the twists and turns of the intellectual history of electricity, I will do my best to present ideas as they existed at the time, not as we retrospectively fit them into our modern categories.

~

Phone, 1970s, Austria

The kind of phone I used at the time when the video lectures for Structure and Interpretation of Computer Programs were recorded and when I submitted my batch jobs of Fortran code to be compiled. I still revive the phone now and then.


Tinkering, Science, and (Not) Sharing It

I stumbled upon this research paper called PVC polyhedra:

We describe how to construct a dodecahedron, tetrahedron, cube, and octahedron out of pvc pipes using standard fittings.

In particular, if we take a connector that takes three pipes each at 120 degree angles from the others (this is called a “true wye”) and we take elbows of the appropriate angle, we can make the edges come together below the center at exactly the correct angles.
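Out of curiosity I redid that bit of vector geometry – a sketch of my own, not code from the paper: given the directions of the edges meeting at a vertex, it computes how far each edge dips below the plane of a symmetric ‘wye’ connector, i.e. the bend the elbow has to provide.

import numpy as np

def elbow_angles_deg(edges):
    """Angle between each edge and the plane of a symmetric connector."""
    edges = [np.asarray(e, float) / np.linalg.norm(e) for e in edges]
    axis = sum(edges)
    axis /= np.linalg.norm(axis)      # symmetry axis through the vertex
    return [np.degrees(np.arcsin(np.dot(e, axis))) for e in edges]

# Cube vertex: three mutually orthogonal edges -> elbows of ~35.26 degrees
print(elbow_angles_deg([(1, 0, 0), (0, 1, 0), (0, 0, 1)]))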

A pivotal moment: What you consider tinkering is actually research-paper-worthy science. Here are some images from the Chief Engineer’s workbench.

The supporting constructions of our heat exchangers are built from standard parts connected at various angles:

The final result can be a cuboid for holding meandering tubes:

… or cascaded prisms with n-gon basis – for holding spirals of flexible tubes:

The implementation of this design is documented here (a German post whose charm would be lost in translation unless I wanted to create Internet Poetry).

But I also started up my time machine – in order to find traces of my polyhedra research in the early 1980s. From photos and drawings of the three-dimensional crystals in mineralogy books I figured out how to draw two-dimensional maps of maximally connected surface areas. I cut out the map, and glued together the remaining free edges. Today I would be made redundant by Origami AI.

I filled several shelves with polyhedra of increasing number of faces, starting with a tetrahedron and culminating with this rhombicosidodecahedron. If I recall correctly, I cheated a bit with this one and created some of the pyramids as completely separate items.

I think this was a rather standard hobby for the typical nerdy child, among things like growing crystals from solutions of toxic chemicals, building a makeshift rotatable telescope tripod from scraps, or verifying the laws of optics using prisms and lenses from ancient dismantled devices.

The actually interesting thing is that this photo is the only trace of any of these hobbies. In the many years after creating this stuff – and destroying it again – I never thought about documenting it. Until today. It seems we weren’t into sharing in those days.

Simulations: Levels of Consciousness

In a recent post I showed these results of simulations for our heat pump system:

I focused on the technical details – this post will be more philosophical.

What is a ‘simulation’ – as opposed to simplified calculations of monthly or yearly average temperatures or energies? The latter are provided by tools used by governmental agencies or standardization bodies – allowing for a comparison of different systems.

In a true simulation the time intervals are so small that you catch all ‘relevant’ changes of a system. If a heating system is turned on for one hour, then turned off again, the time slot needs to be smaller than one hour. I argued before that calculating meaningful monthly numbers requires incorporating data that had been obtained before by measurements – or by true simulations.

For our system, the heat flow between ground and the water/ice tank is important. In our simplified sizing tool – which is not a simulation – I use average numbers. I validated them by comparing with measurements: The contribution of ground can be determined indirectly, by tallying all the other energies involved. In the detailed simulation I calculate the temperature in ground as a function of time and of distance from the tank, by solving the Heat Equation numerically. Energy flow is then proportional to the temperature gradient at the walls of the tank. You need to make assumptions about the thermal properties of ground, and a simplified geometry of the tank is considered.
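To illustrate the numerical part – a deliberately stripped-down sketch, not the production model, with made-up values for diffusivity, conductivity, and boundary temperatures: one-dimensional explicit finite differences for the temperature in ground near the tank wall.

import numpy as np

alpha = 1.0e-6                 # thermal diffusivity of ground, m²/s (assumed)
k = 2.0                        # thermal conductivity, W/(m K) (assumed)
dx, dt = 0.1, 3600.0           # 10 cm grid, 1 hour time steps
r = alpha * dt / dx**2         # must stay < 0.5 for a stable explicit scheme
assert r < 0.5

T = np.full(50, 10.0)          # 5 m of ground, initially at 10 °C
for step in range(24 * 90):    # ~90 days of 'ice period'
    T[0] = 0.0                 # tank wall held at 0 °C by melting ice
    T[1:-1] += r * (T[2:] - 2.0 * T[1:-1] + T[:-2])
    # far end T[-1] stays at the undisturbed ground temperature

flux_into_tank = k * (T[1] - T[0]) / dx   # W/m², gradient at the wall
print(round(flux_into_tank, 1))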

Engineering / applied physics in my opinion is about applying a good-enough-approach in order to solve one specific problem. It’s about knowing your numbers and their limits. It is tempting to get carried away by nerdy physics details, and focus on simulating what you know exactly – forgetting that there are huge error bars because of unknowns.

This is the hierarchy I keep in mind:

On the lowest level is the simulation of physics, that is: modelling how ‘nature’ and the system’s components react to changes in the previous time slot. Temperatures change because energy flows, and energy flows because of temperature differences. The heat pump’s output power depends on heating water temperature and brine temperature. Energy of the building is ‘lost’ to the environment via heat conduction; heat exchangers immersed in tanks deposit energy there or retrieve it. I found that getting the serial connection of heat exchangers right in the model was crucial, and it required a self-consistent calculation of three temperatures at the same point of time, rather than trying to ‘follow the brine round’. I used the information on average brine temperatures obtained by this method to run a simplified version of the simulation using daily averages only – for estimating the maximum volume of ice for two decades.
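What ‘self-consistent’ means here, in a toy version – my sketch with assumed parameters, not the production code: the three brine temperatures (after the heat pump, after the tank’s exchanger, after the collector) are obtained by solving one small linear system for a single instant, instead of chasing the brine around the loop.

import numpy as np

mc = 1.0                      # brine flow heat capacity rate, kW/K (assumed)
UA_tank, T_tank = 0.3, 0.5    # tank exchanger kW/K, tank temperature °C (assumed)
UA_coll, T_air = 0.4, -5.0    # collector kW/K, air temperature °C (assumed)
Q_hp = 2.0                    # heat drawn from the brine by the evaporator, kW

# Unknowns: T0 (after heat pump), T1 (after tank), T2 (after collector).
# Each exchanger: mc*(T_out - T_in) = UA*(T_source - (T_in + T_out)/2);
# heat pump:      mc*(T0 - T2) = -Q_hp.
A = np.array([
    [UA_tank / 2 - mc, mc + UA_tank / 2, 0.0],
    [0.0, UA_coll / 2 - mc, mc + UA_coll / 2],
    [mc, 0.0, -mc],
])
b = np.array([UA_tank * T_tank, UA_coll * T_air, -Q_hp])
T0, T1, T2 = np.linalg.solve(A, b)
print(f"T0 = {T0:.2f} °C, T1 = {T1:.2f} °C, T2 = {T2:.2f} °C")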

So this means you need to model your exact hydraulic setup, or at least you need to know which features of your setup are critical and worth modelling in detail. But the same also holds for the second level, the simulation of control logic. I try to mirror production control logic as far as possible: This code determines how pumps and valves will react, depending on the system’s prior status. Both in real life and in the simulation, threshold values and ‘hystereses’ are critical: You start to heat if some temperature falls below X, but you only stop heating once it has risen above X plus some Delta. Typical brine-water heat pumps always provide approximately the same output power, so you control operating time and thus buffered heating energy. If Delta for heating the hot water buffer tank is too large, the heat pump’s performance will suffer: The Coefficient of Performance of the heat pump decreases with increasing heating water temperature. Changing an innocuous parameter can change results a lot, and the ‘control model’ should be given the same vigilance as the ‘physics model’.
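The hysteresis itself fits in a few lines – a minimal sketch with illustrative threshold values, not our production parameters:

# Start heating below the set point X; only stop once temperature has
# risen above X + Delta - so the new state depends on the prior state.
def update_heating(is_on, temperature, setpoint=50.0, delta=5.0):
    if not is_on and temperature < setpoint:
        return True                   # turn on below X
    if is_on and temperature > setpoint + delta:
        return False                  # turn off only above X + Delta
    return is_on                      # otherwise keep the previous state

state = False
for t in [49.0, 51.0, 54.0, 56.0, 49.0]:
    state = update_heating(state, t)
    print(t, state)   # switches on at 49.0 and off only at 56.0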

Control units can be tweaked at different levels: ‘Experts’ can change the logic, but end users can change non-critical parameters, such as set point temperatures. We don’t restrict expert access in systems we provide the control unit for. But it makes sense to require extra input for the expert level – to prevent accidental changes.

And here we enter level 3 – users’ behavior. We humans are bad at trying to outsmart the controller.

[Life-form in my home] always sets the controller to ‘Sun’. [little sun icon indicating manually set parameters]. Can’t you program something so that nothing actually changes when you pick ‘Sun’?

With heat pumps utilizing ground or water sources – ‘built’ storage repositories with limited capacity – unexpected and irregular system changes are critical: You have to size your source in advance. You cannot simply order one more lorry load of wood pellets or oil if you ‘run out of fuel’. If the source of ambient energy is depleted, the heat pump will finally refuse to work below a certain source temperature. The heat pump’s rated power has to match the heating demands and the size of the source exactly. It also must not be oversized, in order to avoid turning the compressor on and off too often.

Thus you need good estimates for peak heat load and yearly energy needs, and models should include extreme weather (‘physics’) but also erratic users’ behaviour. The more modern the building, the more important spikes in hot tap water usage get in relation to space heating. A vendor of wood pellet stoves told me that delivering peak energy for hot water – used in private bathrooms that match spas – is a greater challenge today than delivering space heating energy. Energy certificates of modern buildings take into account huge estimated solar and internal energy gains – calculated according to standards. But the true heating power needed on a certain day will depend on the strategy or automation home owners use when managing their shades.

Typical gas boilers are oversized (in terms of kW rated power) by a factor of 2 or more in Germany, but with heat pumps you need to be more careful. However, this also means that heat pump systems cannot and should not be planned for rare peak demands, such as: 10 overnight guests want to shower in the morning one after the other, on an extremely cold day, or for heating up the building quickly after temperature had been decreased during a leave of absence.

The nerdy answer is that a smart home would know when your vacation ends and start heating up well in advance. Not sure what to do about the showering guests as in this case ‘missing’ power cannot be compensated by more time. Perhaps a gamified approach will work: An app will do something funny / provide incentives and notifications so that people wait for the water to heat up again. But what about planning for renting a part of the house out someday? Maybe a very good AI will predict what your grandchildren are likely to do, based on automated genetics monitoring.

The challenge of simulating human behaviour is ultimately governed by constraints on resources – such as the size of the heat source: Future heating demands and energy usage are unknown, but the heat source has to be sized today. If the system is ‘open’ and connected to a ‘grid’ in a convenient way, problems seem to go away: You order whatever you need, including energy, any time. The opposite is planning for true self-sufficiency: I once developed a simulation for an off-grid system using photovoltaic generators and wind power – for a mountain shelter. They had to meet tough regulations and hygienic standards like any other restaurant, e.g. using ‘industry-grade’ dishwashers needing 10kW of power. In order to provide that by solar power (plus battery) you needed to make an estimate of the number of guests likely to visit … and thus of how many people would go hiking on a specific day … and thus maybe of the weather forecast. I tried to factor in the ‘visiting probability’ based on the current weather.
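In the spirit of that old model, a toy Monte Carlo sketch – all numbers invented for illustration, nothing from the original study: visitor counts depend on weather, and dishwasher energy follows from the visitors.

import random

def guests(weather):
    """Crude 'visiting probability' model: expected visitors by weather."""
    base = {"sunny": 60, "mixed": 25, "rainy": 8}[weather]
    return max(0, int(random.gauss(base, 0.3 * base)))

def dishwasher_kwh(n_guests, kwh_per_cycle=1.5, guests_per_cycle=10):
    cycles = -(-n_guests // guests_per_cycle)    # ceiling division
    return cycles * kwh_per_cycle

random.seed(1)
for weather in ["sunny", "mixed", "rainy"]:
    days = [dishwasher_kwh(guests(weather)) for _ in range(1000)]
    print(weather, round(sum(days) / len(days), 1), "kWh per day on average")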

I think many of these problems can be ‘resolved’ by recognizing that they are first world problems. It takes tremendous efforts – in terms of energy use or systems’ complexity – to obtain 100% availability and to cover all exceptional use cases. You would need the design heat load only for a few days every decade. On most winter days a properly sized heat pump is operating for only 12 hours. The simple, low-tech solution would be to accept the very, very rare intermittent 18,5°C room temperature, mitigated by proper clothing. Accepting a 20-minute delay of your shower solves the hot water issue. An economical analysis can reveal the (most likely very small) trade-off of providing exceptional peak energy by a ‘backup’ electrical heating element – or by using that wood stove that you installed ‘as a backup’, but mostly for ornamental reasons, because it is dreadful to fetch the wood logs when it is really cold.

But our ‘modern’ expectations and convenience needs are also reflected in regulations. Contractors are afraid of being sued by malicious clients who (quote) sit next to their heat pump and count its operating cycles – to compare the numbers with the ones to be ‘guaranteed’. In a weather-challenged region at more than 2.000 meters altitude people need to steam-clean dishes and use stainless steel instead of wood – where wooden plates have been used for centuries. I believe that regulators are as prone as anybody else to fall into the nerdy trap described above: You monitor, measure, calculate, and regulate in detail the things that you can measure – and because you can measure them – not because these things were top priorities or had the most profound impact.

Still harvesting energy from air - during a record-breaking cold January 2017

Heat Transport: What I Wrote So Far.

Don’t worry, The Subversive Elkement will publish the usual silly summer posting soon! Now I am just tying up loose ends.

In the next months I will keep writing about heat transport: Detailed simulations versus maverick’s rules of thumb, numerical solutions versus insights from the few things you can solve analytically, and applications to our heat pump system.

So I checked what I have already written – and I discovered a series which does not show up as such in various lists, tags, categories:

[2014-12-14] Cistern-Based Heat Pump – Research Done in 1993 in Iowa. Pioneering work, but the authors dismissed a solar collector for economic reasons. They used a steady-state estimate of the heat flow from ground to the tank, and did not test the setup in winter.

Cistern-Based Water-Source Heat Pump System Design, 1993

[2015-01-28] More Ice? Exploring Spacetime of Climate and Weather. A simplified simulation based on historical weather data – only using daily averages. Focus: Estimate of the maximum volume of ice per season, demonstration of yearly variations. As explained later (2017) in more detail, I had to use information from detailed simulations though – to calculate the energy harvested by the collector correctly in such a simple model.

Simple simulations of volume of ice

[2015-04-01] Ice Storage Challenge: High Score! Our heat pump created an ice cube of about 15m³ because we had turned the collector off. About 10m³ of water remained unfrozen, most likely when / because the ice cube touched ground. Some qualitative discussions of heat transport phenomena involved and of relevant thermal parameters.

Ice formation during the 'ice storage challenge'

[2016-01-07] How Does It Work? (The Heat Pump System, That Is) Our system, in a slide-show of operating statuses throughout a typical year. For each period typical temperatures are given and the ‘typical’ direction of heat flow.

System in September - typical operating conditions

[2016-01-22] Temperature Waves and Geothermal Energy. ‘Geothermal’ energy used by heat pumps is mainly stored solar energy. A simple model: The temperature at the surface of the earth varies sinusoidally throughout the year – this is the boundary condition for the heat equation. This differential equation links the temporal change of temperature to its spatial variation. I solve the equation, show some figures, and check how results compare to the thermal diffusivity of ground obtained from measurements. (A small numerical sketch of this result follows after this list.)

Measured 'wave' and propagation time[2016-03-01] Rowboats, Laser Pulses, and Heat Energy (Boring Title: Dimensional Analysis). Re-visiting heat transport and heat diffusion length, this time only analyzing dimensional relationships. By looking at the heat equation (without the need to solve it) a characteristic length can be calculated: ‘How far does heat get in a certain time?’

Temperature waves in ground - attenuation length of about 10 meters

[2017-02-05] Earth, Air, Water, and Ice. Data analysis of the heating season 2014/15 (when we turned off the solar/air collector to simulate a harsher winter) – and an attempt to show energy storages, heat exchangers, and heat flows in one schematic. From the net energy ‘in the tank’ the contribution of ground can be calculated.

Energy storage, heat exchangers, heat flow

[2017-02-22] Ice Storage Hierarchy of Needs. Continued from the previous post – bird’s eye view: How much energy comes from which sources, and which input parameters are critical? I try to answer when and if simple energy accounting makes sense in comparison to detailed simulations.

Hierarchy of needs - ambient energy in ice months

[2017-05-02] Simulating Peak Ice. I compare measurements of the level in the tank with simulations of the evolution of the volume of ice. Simulations (1-minute intervals) comprise a model of the control logic, the varying performance factor of the heat pump, heat transport in ground, energy balances for the hot and cold tanks, and the heat exchangers connected in series.

Simulations of brine and tank temperature and volume of ice, based on system state in 1-minute intervals.

(Adding the following after having published this post. However, there is no guarantee I will update this post forever ;-))

[2017-08-17] Simulations: Levels of Consciousness. Bird’s Eye View: How does simulating heat transport fit into my big picture of simulating the heat pump system or buildings or heating systems in general? I consider it part of the ‘physics’ layer of a hierarchy of levels.

Simulation - levels of consciousness

Planned episodes? Later this year (2017) or next year I might write about the error made when considering simplified geometry – like modeling a linear 1D flow when the actual symmetry is e.g. spherical.
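And here is the sketch promised above for the temperature-wave result – my own toy numbers, with the diffusivity of ground merely assumed: a sinusoidal surface temperature propagates into ground as a damped wave with attenuation length D = √(2α/ω).

import math

alpha = 1.0e-6                                 # diffusivity of ground, m²/s (assumed)
omega = 2 * math.pi / (365.25 * 24 * 3600)     # angular frequency of the yearly cycle
D = math.sqrt(2 * alpha / omega)               # attenuation length
print(f"attenuation length D = {D:.1f} m")

# Relative amplitude and time lag of the 'temperature wave' at depth x:
for x in [0.0, D, 3 * D]:
    amplitude = math.exp(-x / D)               # damped by e^(-x/D)
    lag_days = (x / D) / omega / 86400         # phase lag x/D, converted to days
    print(f"x = {x:4.1f} m: amplitude {amplitude:.2f}, lag {lag_days:3.0f} days")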

Spheres in a Space with Trillions of Dimensions

I don’t venture into speculative science writing – this is just about classical statistical mechanics; actually about a special mathematical aspect. It was one of the things I found particularly intriguing in my first encounters with statistical mechanics and thermodynamics a long time ago – a curious feature of volumes.

I was mulling upon how to ‘briefly motivate’ the calculation below in a comprehensible way, a task I might have failed at years ago already, when I tried to use illustrations and metaphors (here and here). When introducing the ‘kinetic theory’ in thermodynamics, often the pressure of an ideal gas is calculated first, by considering averages over momenta transferred from particles hitting the wall of a container. This is rather easy to understand, but still sort of an intermediate view – between phenomenological thermodynamics, which does not explain the microscopic origin of properties like energy, and ‘true’ statistical mechanics. The latter makes use of a phase space whose number of dimensions scales with the number of particles. One cubic meter of gas contains ~10^25 molecules. Each possible state of the system is depicted as a point in this abstract, so-called phase space. For each (point-like) particle 6 numbers are added to a gigantic vector – 3 for its position and 3 for its momentum (mass times velocity) – so the space has ~6 x 10^25 dimensions. Thermodynamic properties are averages taken over the state of one system watched for a long time, or over a lot of ‘comparable’ systems starting from different initial conditions. At the heart of statistical mechanics are distribution functions that describe how a set of systems described by such gigantic vectors evolves. This function is like the density of an incompressible fluid in hydrodynamics. I resorted to using the metaphor of a jelly in hyperspace before.

Taking averages means to multiply the ‘mechanical’ property by the density function and to integrate it over the space where these functions live. The volume of interest is a generalized N-ball defined as the volume within a generalized sphere. A ‘sphere’ is the surface of all points at a certain distance (‘radius’ R) from an origin:

x_1^2 + x_2^2 + ... + x_N^2 = R^2

(x_n being the co-ordinates in phase space and assuming that all co-ordinates of the origin are zero). Why a sphere? Because states are ordered or defined by energy, and larger energy means a greater ‘radius’ in phase space. It’s all about rounded surfaces enclosing each other. The simplest example of this is the ellipse of the phase diagram of the harmonic oscillator – more energy means a larger amplitude and a larger maximum velocity.

And here is finally the curious fact I actually want to talk about: Nearly all the volume of an N-ball with so many dimensions is concentrated in an extremely thin shell beneath its surface. Then an integral over a thin shell can be extended over the full volume of the sphere without adding much, while making integration simpler.

This can be seen immediately from plotting the volume of a sphere as a function of radius: The volume of an N-ball is always equal to some numerical factor times the radius to the power of the number of dimensions. In three dimensions the volume is the traditional, honest volume proportional to r^3; in two dimensions the ‘ball’ is a circle, and its ‘volume’ is its area. In a realistic thermodynamic system, the volume is then proportional to r^N with a very large N.

The power function r^N turns more and more into an L-shaped function with increasing exponent N. The volume increases enormously just by adding a small additional layer to the ball. In order to compare the function for different exponents, both ‘radius’ and ‘volume’ are shown in relation to the respective maximum values, R and R^N.
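A quick numerical check makes this tangible: from V ~ r^N, the fraction of an N-ball’s volume contained in an outer shell of relative thickness eps is 1 - (1 - eps)^N, which races towards 1 as N grows.

# Fraction of the N-ball volume inside the outer shell of relative
# thickness eps, from V ~ r^N.
eps = 0.01                        # shell = outermost 1% of the radius
for N in [3, 30, 300, 3000]:
    shell_fraction = 1 - (1 - eps) ** N
    print(f"N = {N:4d}: {shell_fraction:.4f} of the volume is in the shell")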

The interesting layer ‘with all the volume’ is certainly much thinner than the radius R, but of course it must not be too thin – it still has to contain something. How thick the substantial shell has to be can be found by investigating the volume in more detail – using a ‘trick’ that is often needed in statistical mechanics: Taylor expanding in the exponent.

A function can be replaced by its tangent if it is sufficiently ‘straight’ at this point. Mathematically it means: If dx is added to the argument x, then the function at the new point is f(x + dx), which can be approximated by f(x) + [the slope df/dx] * dx. The next, higher-order term is proportional to the curvature, the second derivative – then the function is replaced by a 2nd-order polynomial. Joseph Nebus has recently published a more comprehensible and detailed post about how this works.

So the first terms of this so-called Taylor expansion are:

f(x + dx) = f(x) + dx{\frac{df}{dx}} + {\frac{dx^2}{2}}{\frac{d^2f}{dx^2}} + ...

If dx is small, higher-order terms can be neglected.

In the curious case of the ball in hyperspace we are interested in the ‘remaining volume’ V(r - dr). This should be small compared to V(r) = ar^N (a being the uninteresting constant numerical factor) after we remove a layer of thickness dr containing the substantial ‘bulk of the volume’.

However, trying to expand the volume V(r - dr) = a(r - dr)^N, we get:

V(r - dr) = V(r) - aN r^{N-1}dr + a{\frac{dr^2}{2}}N(N-1)r^{N-2} - ...
= ar^N(1 - N{\frac{dr}{r}} + {\frac{N(N-1)}{2}}({\frac{dr}{r}})^2 - ...)

But this is not exactly what we want: It is finally not an expansion, a polynomial, in the (small) ratio dr/r, but in Ndr/r, and N is enormous.

So here’s the trick: 1) Apply the definition of the natural logarithm ln:

V(r - dr) = ae^{N\ln(r - dr)} = ae^{N\ln(r(1 - {\frac{dr}{r}}))}
= ae^{N(\ln(r) + \ln(1 - {\frac{dr}{r}}))}
= ar^Ne^{N\ln(1 - {\frac{dr}{r}})} = V(r)e^{N\ln(1 - {\frac{dr}{r}})}

2) Spot a function that can be safely expanded in the exponent: the natural logarithm of 1 plus something small (here: minus dr/r). So we can expand near 1: The derivative of ln(x) is 1/x (thus equal to 1/1 near x = 1) and ln(1) = 0. So ln(1 - x) is about -x for small x:

V(r - dr) = V(r)e^{N\ln(1 - {\frac{dr}{r}})} \simeq V(r)e^{N(0 - {\frac{dr}{r}})} = V(r)e^{-N{\frac{dr}{r}}}

3) Re-arrange fractions …

V(r - dr) = V(r)e^{-\frac{dr}{r/N}}

This is now the remaining volume after the thin layer dr has been removed. It is small in comparison with V(r) if the exponential function is small, thus if {\frac{dr}{r/N}} is large, or if:

dr \gg \frac{r}{N}

Summarizing: The volume of the N-dimensional hyperball is contained mainly in a shell dr below the surface if the following inequalities hold:

{\frac{r}{N}} \ll dr \ll r

The second one is needed to state that the shell is thin – and to allow for the expansion in the exponent; the first one is needed to make the shell thick enough so that it contains something.
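A short numerical cross-check of the expansion in the exponent – plain Python, my own sanity check: for large N and small dr/r, (1 - dr/r)^N and e^(-N dr/r) agree to many digits.

import math

N, x = 10**6, 1e-8           # x plays the role of dr/r; N*x = 0.01
print((1 - x) ** N)          # exact remaining-volume factor
print(math.exp(-N * x))      # the approximation derived above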

This might help to ‘visualize’ a closely related non-intuitive fact about large numbers like e^N: If you multiply such a number by a factor, ‘it does not get that much bigger’ in a sense – even if the factor is itself a large number:

Assuming N is about 10^25, then its natural logarithm is about 58, and …

Ne^N = e^{\ln(N)+N} = e^{58+10^{25}}

… 58 can be neglected compared to N itself. So a multiplicative factor becomes something to be neglected in a sum!

I used a plain number – base e – deliberately, as I am obsessed with units. ‘r’ in phase space would be associated with a unit incorporating lots of lengths and momenta. Note that I use the term ‘dimensions’ in two slightly different, but related ways here: One is the mathematical dimension of (an abstract) space, the other is about cross-checking the physical units in case a ‘number’ is something that can be measured – like meters. The co-ordinate numbers in the vector refer to measurable physical quantities. Applying the definition of the logarithm just to r^N would result in the dimensionless number N side-by-side with something that has the dimension of a logarithm of the unit.

Using r – a number with dimensions of length – as base, it has to be expressed as a plain number, a multiple of the unit length R_0 (like ‘1 meter’). So comparing the original volume of the ball a{(\frac{r}{R_0})}^N to one a factor of N bigger …

aN{(\frac{r}{R_0})}^N = ae^{\ln{(N)} + N\ln{(\frac{r}{R_0})}}

… then ln(N) can be neglected as long as \frac{r}{R_0} is not extreeeemely tiny. Using the same argument as for base e above, we are on the safe side (and can neglect factors) if r is of about the same order of magnitude as the ‘unit length’ R_0. The argument about negligible factors is an argument about plain numbers – and those ‘don’t exist’ in the real world, as one could always decide to measure the ‘radius’ in units of, say, 10^-30 ‘meters’, which would make the original absolute number small and thus the additional factor non-negligible. One might save the argument by saying that we would always use units that sort of match the typical dimensions (size) of a system.

Saying everything in another way: If the volume of a hyperball ~r^N is multiplied by a factor, this corresponds to multiplying the radius r by a factor very, very close to 1 – the Nth root of the factor for the volume. Only because the number of dimensions is so large is the volume increased so much by such a small increase in radius.

As the ‘bulk of the volume’ is contained in a thin shell, the total volume is about the product of the surface area and the thickness of the shell dr. The N-ball is bounded by a ‘sphere’ with one dimension less than the ball. Increasing the volume by a factor means that the surface area and/or the thickness have to be increased by factors such that the product of these factors yields the volume increase factor. dr scales with r, and thus does not change much – the two inequalities derived above still hold. Most of the volume factor ‘goes into’ the factor for increasing the surface. ‘The surface becomes the volume’.

This was long-winded. My excuse: Richard Feynman, too, took great pleasure in explaining the same phenomenon in different ways. In his lectures you can hear him speak to himself when he says something along the lines of: Now let’s see if we really understood this – let’s try to derive it in another way …

And above all, he says (in a lecture that is more about math than about physics)

Now you may ask, “What is mathematics doing in a physics lecture?” We have several possible excuses: first, of course, mathematics is an important tool, but that would only excuse us for giving the formula in two minutes. On the other hand, in theoretical physics we discover that all our laws can be written in mathematical form; and that this has a certain simplicity and beauty about it. So, ultimately, in order to understand nature it may be necessary to have a deeper understanding of mathematical relationships. But the real reason is that the subject is enjoyable, and although we humans cut nature up in different ways, and we have different courses in different departments, such compartmentalization is really artificial, and we should take our intellectual pleasures where we find them.

___________________________________

Further reading / sources: Any theoretical physics textbook on classical thermodynamics / statistical mechanics. I am just re-reading mine.