# Spheres in a Space with Trillions of Dimensions

I don’t venture into speculative science writing – this is just about classical statistical mechanics; actually about a special mathematical aspect. It was one of the things I found particularly intriguing in my first encounters with statistical mechanics and thermodynamics a long time ago – a curious feature of volumes.

I was mulling over how to ‘briefly motivate’ the calculation below in a comprehensible way, a task I might have failed at years ago already, when I tried to use illustrations and metaphors (here and here). When the ‘kinetic theory’ is introduced in thermodynamics, the pressure of an ideal gas is often calculated first, by considering averages over momenta transferred from particles hitting the wall of a container. This is rather easy to understand but still sort of an intermediate view – between phenomenological thermodynamics, which does not explain the microscopic origin of properties like energy, and ‘true’ statistical mechanics. The latter makes use of a phase space whose number of dimensions is proportional to the number of particles. One cubic meter of gas contains about $10^{25}$ molecules. Each possible state of the system is depicted as a point in so-called phase space: a point in this abstract space represents one possible system state. For each (point-like) particle 6 numbers are added to a gigantic vector – 3 for its position and 3 for its momentum (mass times velocity) – so the space has about $6 \times 10^{25}$ dimensions. Thermodynamic properties are averages taken over the state of one system watched for a long time, or over a lot of ‘comparable’ systems starting from different initial conditions. At the heart of statistical mechanics are distribution functions that describe how an ensemble of systems, each represented by such a gigantic vector, evolves. Such a function is like the density of an incompressible fluid in hydrodynamics. I resorted to the metaphor of a jelly in hyperspace before.

Taking averages means multiplying the ‘mechanical’ property by the density function and integrating over the space where these functions live. The volume of interest is a generalized N-ball, defined as the volume within a generalized sphere. A ‘sphere’ is the surface formed by all points at a certain distance (‘radius’ R) from an origin

$x_1^2 + x_2^2 + \dots + x_N^2 = R^2$

($x_n$ being the co-ordinates in phase space and assuming that all co-ordinates of the origin are zero). Why a sphere? Because states are ordered or defined by energy, and larger energy means a greater ‘radius’ in phase space. It’s all about rounded surfaces enclosing each other. The simplest example of this is the ellipse in the phase diagram of the harmonic oscillator – more energy means a larger amplitude and a larger maximum velocity.

And here is finally the curious fact I actually want to talk about: Nearly all the volume of an N-ball with so many dimensions is concentrated in an extremely thin shell beneath its surface. An integral over such a thin shell can then be extended over the full volume of the ball without adding much, while making integration simpler.

This can be seen immediately from plotting the volume of a ball over radius: The volume of an N-ball is always equal to some numerical factor times the radius to the power of the number of dimensions. In three dimensions the volume is the traditional, honest volume proportional to $r^3$; in two dimensions the ‘ball’ is a circle, and its ‘volume’ is its area. In a realistic thermodynamic system, the volume is then proportional to $r^N$ with a very large N.

The power function $r^N$ turns more and more into an L-shaped function with increasing exponent N. The volume increases enormously just by adding a small additional layer to the ball. In order to compare the function for different exponents, both ‘radius’ and ‘volume’ are shown relative to their respective maximum values, R and $R^N$.
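To make this concrete, here is a minimal sketch in Python (my addition, not part of the original derivation) that computes the fraction of an N-ball’s volume contained in a thin shell beneath the surface, using only the proportionality to $r^N$:

```python
# Since V(r) is proportional to r**N, the fraction of the volume
# inside a shell of relative thickness dr/r beneath the surface is
# 1 - V(r - dr)/V(r) = 1 - (1 - dr/r)**N.
def shell_fraction(N, rel_thickness=0.01):
    return 1.0 - (1.0 - rel_thickness) ** N

# A shell of 1% of the radius contains almost nothing in 3 dimensions,
# but nearly everything when N is large:
for N in (3, 100, 10_000):
    print(N, shell_fraction(N))
```

For N = 3 the 1% shell holds about 3% of the volume; for N = 10.000 it already holds essentially all of it.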

The interesting layer ‘with all the volume’ is certainly much smaller than the radius R, but of course it must not be too small to contain something. How thick the substantial shell has to be can be found by investigating the volume in more detail – using a ‘trick’ that is often needed in statistical mechanics: Taylor expanding in the exponent.

A function can be replaced by its tangent if it is sufficiently ‘straight’ near the point in question. Mathematically this means: If dx is added to the argument x, then the function at the new point is f(x + dx), which can be approximated by f(x) + [the slope df/dx] · dx. The next, higher-order term is proportional to the curvature, the second derivative – then the function is replaced by a 2nd-order polynomial. Joseph Nebus has recently published a more comprehensible and detailed post about how this works.

So the first terms of this so-called Taylor expansion are:

$f(x + dx) = f(x) + dx{\frac{df}{dx}} + {\frac{dx^2}{2}}{\frac{d^2f}{dx^2}} + ...$

If dx is small, higher-order terms can be neglected.
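A quick numerical illustration (a Python sketch of my own, not from the original text) of why dropping the higher-order terms is safe – the error of the first-order approximation shrinks quadratically with dx, as the $dx^2$ term predicts:

```python
import math

def taylor1(f, df, x, dx):
    # first-order Taylor approximation: f(x + dx) ≈ f(x) + f'(x) * dx
    return f(x) + df(x) * dx

# Shrinking dx by a factor of 10 shrinks the error by roughly 100,
# because the leading neglected term is proportional to dx**2.
for dx in (0.1, 0.01):
    exact = math.sin(1.0 + dx)
    approx = taylor1(math.sin, math.cos, 1.0, dx)
    print(dx, abs(exact - approx))
```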

In the curious case of the ball in hyperspace we are interested in the ‘remaining volume’ V(r – dr). This should be small compared to $V(r) = ar^N$ (a being the uninteresting constant numerical factor) after we remove a layer of thickness dr containing the substantial ‘bulk of the volume’.

However, trying to expand the volume V(r – dr) = a(r – dr)N, we get:

$V(r - dr) = V(r) - a\,dr\,Nr^{N-1} + a{\frac{dr^2}{2}}N(N-1)r^{N-2} + ...$
$= ar^N(1 - N{\frac{dr}{r}} + {\frac{N(N-1)}{2}}({\frac{dr}{r}})^2 + ...)$

But this is not quite what we want: It is not an expansion – a polynomial – in the small ratio dr/r, but in Ndr/r, and N is enormous.

So here’s the trick: 1) Apply the definition of the natural logarithm ln:

$V(r - dr) = ae^{N\ln(r - dr)} = ae^{N\ln(r(1 - {\frac{dr}{r}}))}$
$= ae^{N(\ln(r) + \ln(1 - {\frac{dr}{r}}))}$
$= ar^N e^{N\ln(1 - {\frac{dr}{r}})} = V(r)e^{N\ln(1 - {\frac{dr}{r}})}$

2) Spot a function that can be safely expanded in the exponent: The natural logarithm of 1 minus something small, dr/r. So we can expand near 1: The derivative of ln(x) is 1/x (thus equal to 1 near x = 1) and ln(1) = 0. So ln(1 – x) is about –x for small x:

$V(r - dr) = V(r)e^{N(0 - {\frac{dr}{r}})} \simeq V(r)e^{-N{\frac{dr}{r}}}$

3) Re-arrange fractions …

$V(r - dr) = V(r)e^{-\frac{dr}{(\frac{r}{N})}}$

This is now the remaining volume, after the thin layer dr has been removed. It is small in comparison with V(r) if the exponential factor is small, thus if ${\frac{dr}{(\frac{r}{N})}}$ is large, or if:

$dr \gg \frac{r}{N}$

Summarizing: The volume of the N-dimensional hyperball is contained mainly in a shell dr below the surface if the following inequalities hold:

${\frac{r}{N}} \ll dr \ll r$

The second one is needed to state that the shell is thin – and to allow for the expansion in the exponent; the first one is needed to make the shell thick enough so that it contains something.
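These inequalities can be checked numerically; the sketch below (Python, my addition) compares the exact remaining-volume ratio $(1 - {\frac{dr}{r}})^N$ with the approximation $e^{-N{\frac{dr}{r}}}$ for shell thicknesses well between r/N and r:

```python
import math

N = 10**6
r = 1.0
# choose dr such that r/N << dr << r
for dr in (10 * r / N, 100 * r / N):
    exact = (1 - dr / r) ** N        # exact remaining-volume ratio
    approx = math.exp(-N * dr / r)   # result of expanding in the exponent
    print(dr, exact, approx)
```

Both numbers agree closely, and both are tiny – the volume below such a shell is negligible.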

This might help to ‘visualize’ a closely related non-intuitive fact about large numbers like $e^N$: If you multiply such a number by a factor ‘it does not get that much bigger’ in a sense – even if the factor is itself a large number:

Assuming N is about $10^{25}$, its natural logarithm is about 58 and…

$Ne^N = e^{\ln(N)+N} = e^{58+10^{25}}$

… 58 can be neglected compared to N itself. So a multiplicative factor becomes something to be neglected in a sum!
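A one-liner (Python, my addition) confirming the number used above:

```python
import math

# ln(10**25) = 25 * ln(10) ≈ 57.6 – utterly negligible next to 10**25 itself
print(math.log(10**25))
```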

I used a plain number – base e – deliberately, as I am obsessed with units. ‘r’ in phase space would be associated with a unit incorporating lots of lengths and momenta. Note that I use the term ‘dimensions’ in two slightly different, but related, ways here: One is the mathematical dimension of an (abstract) space, the other is about cross-checking the physical units in case a ‘number’ is something that can be measured – like meters. The co-ordinate numbers in the vector refer to measurable physical quantities. Applying the definition of the logarithm just to $r^N$ would result in the dimensionless number N side-by-side with something that has the dimension of a logarithm of a unit.

Using r – a number with the dimension of length – as the base, it has to be expressed as a plain number, a multiple of a unit length $R_0$ (like ‘1 meter’). So comparing the original volume of the ball $a{(\frac{r}{R_0})}^N$ to one a factor of N bigger …

$aN{(\frac{r}{R_0})}^N = ae^{\ln{(N)} + N\ln{(\frac{r}{R_0})}}$

… then ln(N) can be neglected as long as $\frac{r}{R_0}$ is not extreeeemely tiny. Using the same argument as for base e above, we are on the safe side (and can neglect factors) if r is of about the same order of magnitude as the ‘unit length’ $R_0$. The argument about negligible factors is an argument about plain numbers – and those ‘don’t exist’ in the real world, as one could always decide to measure the ‘radius’ in units of, say, $10^{-30}$ ‘meters’, which would make the original absolute number small and thus the additional factor non-negligible. One might save the argument by saying that we would always use units that sort of match the typical dimensions (size) of a system.

Saying everything in another way: If the volume of a hyperball ~$r^N$ is multiplied by a factor, this corresponds to multiplying the radius r by a factor very, very close to 1 – the Nth root of the volume factor. Only because the number of dimensions is so large is the volume increased so much by such a small increase in radius.
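This is again easy to verify numerically (Python sketch, my addition) – already for a modest N = 1000, doubling the volume takes a radius factor of only the 1000th root of 2:

```python
N = 1000
radius_factor = 2 ** (1 / N)   # N-th root of the volume factor 2
print(radius_factor)           # very close to 1
print(radius_factor ** N)      # back to the volume factor 2
```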

As the ‘bulk of the volume’ is contained in a thin shell, the total volume is about the product of the surface area and the thickness of the shell dr. The N-ball is bounded by a ‘sphere’ with one dimension less than the ball. Increasing the volume by a factor means that the surface area and/or the thickness have to be increased by factors so that the product of these factors yields the volume increase factor. dr scales with r, and thus does not change much – the two inequalities derived above still hold. Most of the volume factor ‘goes into’ the factor increasing the surface. ‘The surface becomes the volume’.

This was long-winded. My excuse: Richard Feynman, too, took great pleasure in explaining the same phenomenon in different ways. In his lectures you can hear him speak to himself when he says something along the lines of: Now let’s see if we really understood this – let’s try to derive it in another way…

And above all, he says (in a lecture that is more about math than about physics)

Now you may ask, “What is mathematics doing in a physics lecture?” We have several possible excuses: first, of course, mathematics is an important tool, but that would only excuse us for giving the formula in two minutes. On the other hand, in theoretical physics we discover that all our laws can be written in mathematical form; and that this has a certain simplicity and beauty about it. So, ultimately, in order to understand nature it may be necessary to have a deeper understanding of mathematical relationships. But the real reason is that the subject is enjoyable, and although we humans cut nature up in different ways, and we have different courses in different departments, such compartmentalization is really artificial, and we should take our intellectual pleasures where we find them.

___________________________________

Further reading / sources: Any theoretical physics textbook on classical thermodynamics / statistical mechanics. I am just re-reading mine.

# You Never Know

… when obscure knowledge comes in handy!

You can dismantle an old gutter without effort, and without any special tools:

Just by gently setting it into twisting motion, effectively applying ~1 Hz torsion waves that lead to fatigue fracture within a few minutes.

I knew my stint in steel research in the 1990s would finally be good for something.

If you want to create a meme from this and tag it with Work Smarter Not Harder, don’t forget to give me proper credits.

# Lest We Forget the Pioneer: Ottokar Tumlirz and His Early Demo of the Coriolis Effect

Two years ago I wrote an article about The Myth of the Toilet Flush, comparing the angular rotation caused by the earth’s rotation to the typical rotation in experiments with garden hoses that make it easy to observe the Coriolis effect. There are several orders of magnitude in difference, and the effect can only be observed in an experiment done extremely carefully, not in the bathtub sink or toilet flush.

Now two awesome science geeks have finally done such a careful experiment – even a time-synchronized one, observing vortices on either hemisphere!

The effect had already been demonstrated in a similarly careful experiment in 1908. It was done on the Northern hemisphere only, but if the rotation can be attributed to the Coriolis effect by ruling out other disturbances, the opposite sense of rotation on the other hemisphere is straight-forward.

Austrian physicist Ottokar Tumlirz had published a German paper called “New physical evidence on the axis of rotation of the earth”. I had created this ugly sketch of his setup:

Rough sketch based on the abstract of Tumlirz’ paper, not showing the vessel containing these components [*]

A cylindrical vessel (not shown in my drawing) is filled with water, and two glass plates are placed into it. The bottom plate has a hole, as does the vessel. Both holes are connected by a glass tube that has many small holes. The space between the two plates is filled with water, and water slowly flows out – from the bulk of the vessel through the tiny holes into the tube. The radial red lines in the sketch are bent very slightly due to the Coriolis force, and Tumlirz added a dye to make them visible. He took a photo 24 hours after starting the experiment, and the water must not flow faster than 1 mm per minute.

Ernst Mach has given an account of Tumlirz’ experiment, quoted in an article titled Inventors I Have Met – anecdotes by a physicist approached by ‘outsider scientists’, once called paradoxers, today often called crackpots. I learned about Ernst Mach’s article from the reference and re-print of the article on this history of physics website.

Mach refers to Tumlirz’ experiment as an example of an idea that seems to belong in the same category at first glance, but is actually correct:

To be sure, Professor Tumlirz has recently performed an experiment which, while externally similar to this, is correct. By this experiment the rotation of the earth can be imitated, if the utmost care is taken, by the direction of the current of water flowing axially out of a cylindrical vessel. Further details are to be found in an article by Tumlirz in the Sitzungsberichte der Wiener Akademie, Vol. 117, 1908. I happened to know the origin of the thought that gave rise to this invention. Tumlirz noticed that the water flowing somewhat unsymmetrically in a glass funnel assumed a swift rotation in the neck of the funnel so that it formed a whirl of air in the axis of the flowing jet. This put it in his mind to increase the slight angular velocity of the water at rest with reference to the earth, by contraction in the axis.

________________________________

Comment on the German abstract: It seems one line or sentence got lost or mangled when processing the original, as this does not make sense: so bendet sich das Wasser zwischen den beiden Glasscheiben [here something is missing] nach dem Rohrchen durch die kleinen Öffnungen. (Roughly: ‘thus the water between the two glass plates bends [missing text] toward the little tube through the small openings.’)

I have not managed to find the full version of the old paper and the figures and photos online. I would be grateful for pointers.

Edit 2017: The link to the abstract used in 2015 is now dead, but I found a full-text version of the paper. Formulas are scrambled though.

________________________________

Update added August 2016: C. Schiller quotes this historical experiment in vol. 1 of his free physics textbook Motion Mountain (p. 135):

Only in 1962, after several attempts by other researchers, Asher Shapiro was the first to verify that the Coriolis effect has a tiny influence on the direction of the vortex flowing out of the bathtub.

Ref: A. H. SHAPIRO, Bath-tub vortex, Nature 196, pp. 1080-1081, 1962

# All Kinds of Turbines

Dave asked an interesting question, commenting on the heat-from-the-tunnel project:

Has anyone considered the fact that the water can be used to first drive turbines and then distributed to supply the input source for the heat pumps?

I am a water turbine fan, and every time I spot a small hydro power plant on a hiking map, I have to find it.

Pelton turbine. The small regional utility has several of them; the flow rate is typically a few 100 liters per second. The NSA should find an image of myself in the reflections.

This does not mean I have developed intuition for the numbers, so I have to do some cross-checks.

You can harvest either kinetic or potential energy from a flowing river in a hydro power plant. Harvesting kinetic energy could be done by something like the ‘under-water version’ of a wind turbine:

Tidal stream generator, rotor raised (Wikimedia user Fundy)

The tunnel produces a flow of 300 liters per second, but this information alone is not yet sufficient for estimating mechanical power.

The kinetic energy of a mass $m$ moving at velocity $v$ is $\frac{mv^{2}}{2}$. From the mean velocity of a flow of water we can calculate the power carried by the flow by replacing $m$ in this expression by the mass flow rate.

If 300 liters per second flow through a pipe with a cross-section of 1 m², the flow velocity is equal to 0,3 m³/s divided by this area, thus 0,3 m/s. This translates to a kinetic power of:

$\frac{300 \cdot 0{,}3^{2}}{2}$ W = 13,5 W

… only, just enough for a small light bulb.

If the cross-section of the pipe were ten times smaller, the velocity would be ten times larger and the power – quadratic in velocity – 100 times larger: 1,35 kW.

(Edit: This is just speculating about typical sizes of the natural pipe determined by rocks or whatever. You cannot create energy out of nothing as increasing velocity by a sort of funnel would decrease pressure. I was rather thinking of a river bed open to ambient air – and ambient pressure – than a closed pipe.)

On the other hand, if that water would be allowed to ‘fall’, we could harvest potential energy:

Also this mill wheel is utilizing potential energy from the height difference of a few meters. (Critically inspected by The Chief Engineer, photo by elkement)

This is how commercial hydro power plants work, including those located at rivers in seemingly flat lowlands.

The potential energy of a point mass at height $h$ is $mgh$, $g$ being the acceleration due to gravity (~10 m/s²). Assuming a usable height of 10 m, 300 kg/s would result in about

300 · 10 · 10 W = 30 kW – quite a difference!

Of course there are huge error bars here but the modest output of kinetic energy is typical for the topography of planet earth.

Mass flow has to be conserved, and it enters both expressions as a factor. If I am interested in comparing potential and kinetic energies relative to each other, it is sufficient to compare $\frac{v^{2}}{2}$ to $gh$.
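The numbers above, collected in a short Python sketch (mine, using the same rounded values as in the text):

```python
# Tunnel drain water: 300 l/s, i.e. a mass flow rate of 300 kg/s.
mass_flow = 300.0   # kg/s
g = 10.0            # m/s^2, rounded as in the text
v = 0.3             # m/s flow velocity in a 1 m^2 cross-section
h = 10.0            # m of usable height

p_kinetic = mass_flow * v**2 / 2    # power from kinetic energy
p_potential = mass_flow * g * h     # power from potential energy

print(p_kinetic, "W")     # a small light bulb
print(p_potential, "W")   # 30 kW
```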

Cross-checking this for a flow of water we know more about:

The Danube flows at about 3-10 m/s, so

$\frac{v_{Danube}^{2}}{2}$ = 4,5 – 50 m²/s²

But we cannot extract all that energy: The flow of water would come to a halt at the turbine – where should the water go then? For the same reasons there is a theoretical maximum percentage of wind power that turbines can harvest, even if perfectly frictionless.

In addition, such a turbine would need to be much smaller than the cross-section of the river. Mass flow needs to be conserved: when part of the water slows down, it gets spread over a larger cross-section.

So the realistic $\frac{v_{Danube}^{2}}{2}$ will be smaller.

I have stumbled upon an Austrian startup offering floating turbines, designed for operation in larger rivers and delivering about 70 kW at 3,3 m/s flow velocity (images on the German site). This is small compared to the overall kinetic energy flow of the Danube of several MW, calculated from 2.000 m³/s (mass flow near Vienna) and about 3 m/s.
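Cross-checking the ‘several MW’ for the Danube (Python sketch, my addition, with the rounded numbers quoted above):

```python
rho = 1000.0   # kg/m^3, density of water
Q = 2000.0     # m^3/s, volume flow near Vienna
v = 3.0        # m/s, rough flow velocity

# kinetic power of the whole river: mass flow rate times v^2/2
p_kinetic = rho * Q * v**2 / 2
print(p_kinetic / 1e6, "MW")   # small compared to a 250 MW rated plant
```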

The first hydro power plant at the Danube in Austria, built in 1959 – an icon of post World War II reconstruction (Wikimedia). The plant is currently modernised, the rated power will be increased by 5% to 250MW. Utilized difference in height: 10m.

So the whole kinetic energy – that cannot be extracted anyway – is still small compared to the rated power of typical power plants which are several 100MW!

If the water of the Danube ‘falls’ about 10m then

$gh_{Danube}$ ~ 100 m²/s²

… which is much larger than realistic values of $\frac{v_{Danube}^{2}}{2}$! Typical usable kinetic energies are lower than typical potential energies.

So if tunnel drain water should drive a turbine, the usable height is crucial. But expected powers are rather low compared to the heat power to be gained (several MW) so this is probably not economically feasible.

I was curious about the largest power plants on earth: Currently the Chinese Three Gorges Dam delivers 22GW. I have heard about plans in Sweden to build a plant that could deliver 50GW – a pumped hydro storage plant utilizing a 50km tunnel between two large lakes, with a difference in altitude of 44m (See the mentions here or here.)

Three Gorges Dam in China (Wikimedia user Filnko)

# Grim Reaper Does a Back-of-the-Envelope Calculation

I have a secondary super-villain identity. People on Google+ called me:
Elke the Ripper or Master of the Scythe.

[FAQ] No, I didn’t lose a bet. We don’t have a lawn-mower by choice. Yes, we tried the alternatives, including a reel lawn-mower. Yes, I really enjoy doing this.

It is utterly exhausting – there is no other outdoor activity in summer that leaves me with the feeling of really having achieved something!

So I was curious if Grim Reaper the Physicist can express this level of exhaustion in numbers.

Just holding a scythe with arms stretched out would not count as ‘work’ in the physics sense. Yet I believe that in this case it is the acceleration required to bring the scythe up to speed that matters; so I will focus on work in terms of physics.

In order to keep this simple, I assume that the weight of the scythe is a few kilos (say: 5kg) concentrated at the end of a weightless pole of 1,5m length. All the kinetic energy is concentrated in this ‘point mass’.

But how fast does a blade need to move in order to cut grass? Or from experience: How fast do I move the scythe?

One sweep with the scythe takes a fraction of second – probably 0,5s. The blade traverses an arc of about 2m.

Thus the average speed is: 2m / 0,5s = 4m/s

However, using this speed in further calculations does not make much sense: The scythe has two handles that allow for exerting a torque – the energy goes into acceleration of the scythe.

If an object with mass m is accelerated from a velocity of zero to a peak velocity $v_{max}$, the kinetic energy acquired is calculated from the maximum velocity: $\frac{m v_{max}^{2}}{2}$. How exactly the velocity has changed with time does not matter – this is just conservation of energy.

But what is the peak velocity?

For comparison: How fast do lawn-mower blades spin?

This page says: at 3600 revolutions per minute when not under load, dropping to about 3000 when under load. How fast would I have to move the scythe to achieve the same?

Velocity of a rotating body is angular velocity times radius. Angular velocity is $2\pi$ – a full circle – times the frequency, that is, revolutions per time. The radius is the length of the pole that I use as a simplified model.

So the scythe on par with a lawn-mower would need to move at:
$2\pi$ · (3000 rev./minute) / (60 seconds/minute) · 1,5 m = 471 m/s

This would result in the following energy per arc swept. I use only SI units, so the resulting energy is in Joule:

Energy needed for acceleration: 5 kg · (471 m/s)² / 2 = 555.000 J = 555 kJ

I am assuming that this energy is just consumed (dissipated) to cut the grass; the grass brings the scythe to halt, and it is decelerated to 0m/s again.

1 kilocalorie is 4,18kJ, so this amounts to about 133kcal (!!)

That sounds way too much already: Googling typical energy consumption for various activities, I learn that easy work in the garden needs about 100-150 kilocalories per half an hour!

If scything were that ‘efficient’ I would put into practice what we always joke about: Offer outdoor management trainings to stressed out IT managers who want to connect with their true selves again through hard work and/or work-out most efficiently. So they would pay us for the option to scythe our grass.

But before I crank down the hypothetical velocity again, I calculate the energy demand per half an hour:

I feel exhausted after half an hour of scything. I pause a few seconds before the next sweep – say 10 s per sweep on average. In reality it is probably more like:

scythe…1s…scythe…1s…scythe…1s….scythe…1s….scythe…longer break, gasping for air, sharpen the scythe.

I assume a break of 9,5 s on average to make the calculation simpler. So this is 1 arc swept per 10 seconds, 6 arcs per minute, and 180 per half an hour. After half an hour I need to take a longer break.

So using that lawn-mower-style speed this would result in:

Energy per half an hour if I were a lawn-mower: 133 kcal · 180 = 23.940 kcal

… about five times the daily energy demands of a human being!

Velocity enters the equation quadratically. Assuming now that my peak scything speed is really only a tenth of the speed of a lawn-mower, 47 m/s – which is still about 10 times my average speed calculated at the beginning – this would result in one hundredth of the energy.

A bit more realistic energy per half an hour of scything is then: 239kcal

Just for comparison – to get a feeling for those numbers: Average acceleration is maximum velocity over time. Thus 47m/s would result in:

Average acceleration: (47 m/s) / (0,5 s) = 94 m/s²

A fast car accelerates to 100 km/h within 3 seconds, at (100/3,6) m/s / 3 s ≈ 9 m/s²

So my assumed scythe’s acceleration is about 10 times a Ferrari’s!
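The whole estimate, condensed into a few lines of Python (my sketch; all inputs are the guesses discussed above):

```python
m = 5.0          # kg, scythe mass modelled as a point at the end of the pole
v_peak = 47.1    # m/s - a tenth of the lawn-mower-like 471 m/s
sweeps = 180     # arcs per half hour (one per 10 s)
J_PER_KCAL = 4180.0

# energy per sweep, assumed to be fully dissipated into the grass
energy_per_sweep = m * v_peak**2 / 2
kcal_per_half_hour = energy_per_sweep * sweeps / J_PER_KCAL

print(round(energy_per_sweep), "J per sweep")
print(round(kcal_per_half_hour), "kcal per half hour")
```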

Now I would need a high-speed camera, determine speed exactly and find a way to calculate actual energy needed for cutting.

Is there some conclusion?

This was just playful guesswork, but the general line of reasoning and cross-checking of orders of magnitude outlined here is not much different from when I try to get my simulations of our heat pump system right – based on unknown parameters, such as the effect of radiation, the heat conduction of ground, and the impact of convection in the water tank. The art is not so much in getting numbers exactly right but in determining which parameters matter at all and how sensitive the solution is to a variation of those. In this case it would be crucial to determine peak speed more exactly.

In physics you can say the same thing in different ways – choosing one way over the other can make the problem less complex. As in this case, using total energy is often easier than trying to figure out the evolution of forces or torques with time.

The two images above were taken in early spring – when the ‘lawn’ / meadow was actually still growing significantly. Since we do not water it, the relentless Pannonian sun already started to turn it into a mixture of green and brown patches.

This is how the lawn looks now, one week after latest scything. This is not intended to be beautiful – I wanted to add a realistic picture as I had been asked about the ‘quality’ compared to a lawn-mower. Result: Good enough for me!

# Non-Linear Art. (Should Actually Be: Random Thoughts on Fluid Dynamics)

In my favorite ancient classical mechanics textbook I found an unexpected statement. I think 1960s textbooks weren’t expected to be garnished with geek humor or philosophical references as much as seems to be the default today – therefore Feynman’s books were so refreshing.

Natural phenomena featured by visual artists are typically those described by non-linear differential equations. Those equations allow for the playful interactions of clouds and water waves of ever-changing shapes.

So fluid dynamics is more appealing to the artist than boring electromagnetic waves.

Is there an easy way to explain this without too much math? Most likely not but I try anyway.

I try to zoom in on a small piece of material, an incredibly small cube of water in a flow at a certain point of time. I imagine this cube as decorated by color. This cube will change its shape quickly and turn into some irregular shape – there are forces pulling and pushing – e.g. gravity.

This transformation is governed by two principles:

• First, mass cannot vanish. This is classical physics, no need to consider the generation of new particles from the energy of collisions. Mass is conserved locally, that is if some material suddenly shows up at some point in space, it had to have been travelling to that point from adjacent places.
• Second, Newton’s law is at play: Forces are equal to the change of momentum per time. If we know the force acting at time t and point (x,y,z), we know how much momentum will change in a short period of time.

Typically any course in classical mechanics starts from point particles such as cannon balls or planets – masses that happen to be concentrated in a single point in space. Knowing the force at a point of time at the position of the ball we know the acceleration and we can calculate the velocity in the next moment of time.

This also holds for our colored little cube of fluid – but we usually don’t follow decorated lumps of mass individually. The behavior of the fluid is described perfectly if we know the mass density and the velocity at any point of time and space. Think little arrows attached to each point in space, probably changing with time, too.

Digesting that difference between a particle’s trajectory and an anonymous velocity field is a big conceptual leap in my point of view. Sometimes I wonder if it would be better not to learn about the point approach in the first place because it is so hard to unlearn later. Point particle mechanics is included as a special case in fluid mechanics – the flowing cannon ball is represented by a field that has a non-zero value only at positions equivalent to the trajectory. Using the field-style description we would say that part of the cannon ball vanishes behind it and re-appears “before” it, along the trajectory.

Pushing the cube also moves it to another place where the velocity field differs. Properties of that very decorated little cube can change at the spot where it is – this is called an explicit dependence on time. But it can also change indirectly because parts of it are moved with the flow. It changes with time due to moving in space over a certain distance. That distance is again governed by the velocity – distance is velocity times period of time.

Thus, for one spatial dimension, the change of velocity associated with an elapsed time dt is also related to a spatial shift dx = vdt. Starting from the velocity of our decorated cube v(x,t), we end up with v(x + vdt, t + dt) after dt has elapsed and the cube has been moved by vdt. For the cannon ball we could have described this simply as v(t + dt), as v was not a field.

And this is where non-linearity sneaks in: The indirect contribution via moving with the flow, also called convective acceleration, is quadratic in v – the spatial change of v is multiplied by v again. If you then allow for friction you get even more nasty non-linearities in the parts of the Navier-Stokes equations describing the forces.
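The chain of reasoning above is usually condensed into the so-called material derivative; in the one-dimensional notation of this post it reads:

```latex
\frac{Dv}{Dt} = \frac{\partial v}{\partial t} + v\,\frac{\partial v}{\partial x}
```

The first term is the explicit dependence on time at a fixed spot; the second, convective term contains v twice – this product is the non-linearity.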

My point here is that even if we neglect dissipation (describing what is called dry water tongue-in-cheek) there is already non-linearity. The canonical example for wavy motions – water waves – is actually rather difficult to describe due to that, and you need to resort to considering small fluctuations of the water surface even if you start from the simplest assumptions.

# Mastering Geometry is a Lost Art

I am trying to learn Quantum Field Theory the hard way: Alone and from textbooks. But there is something harder than the abstract math of advanced quantum physics:

You can aim at comprehending ancient texts on physics.

If you are an accomplished physicist, chemist or engineer – try to understand Sadi Carnot’s reasoning that was later called the effective discovery of the Second Law of Thermodynamics.

At Carnotcycle’s excellent blog on classical thermodynamics you can delve into thinking about well-known modern concepts in a new – or better: in an old – way. I found this article on the dawn of entropy a difficult read, even though we can recognize some familiar symbols and concepts such as circular processes, and despite – or because of – the fact that I was at the time of reading a heavy consumer of engineering thermodynamics textbooks. You have to translate now unused notions such as heat received and the expansive power into their modern counterparts. It is like reading a text in a foreign language by deciphering every single word instead of having developed a feeling for the language.

Stephen Hawking once published an anthology of the original works of the scientific giants of the past millennium: Copernicus, Galileo, Kepler, Newton and Einstein: On the Shoulders of Giants. So just in case you googled for Hawkins – don’t expect your typical Hawking pop-sci bestseller with lots of artistic illustrations. This book is humbling. I found the so-called geometrical proofs most difficult and unfamiliar to follow. Actually, it is my difficulties in (not) taming that Pesky Triangle that motivated me to reflect on geometrical proofs.

I am used to proofs stacked upon proofs until you get to the real thing. In analysis lectures you get used to starting by proving that 1+1=2 (literally) until you learn about derivatives and slopes. However, Newton and his predecessor giants talk geometry all the way! I have learned a different language. Einstein’s way of tackling problems is most familiar though his physics is the most non-intuitive.

This amazon.com review is titled Now We Know why Geometry is Called the Queen of the Sciences and the reviewer perfectly nails it:

It is simply astounding how much mileage Copernicus, Galileo, Kepler, Newton, and Einstein got out of ordinary Euclidean geometry. In fact, it could be argued that Newton (along with Leibnitz) were forced to invent the calculus, otherwise they too presumably would have remained content to stick to Euclidean geometry.

Science writer Margaret Wertheim gives an account of a 20th century giant trying to recapture Isaac Newton’s original discovery of the law of gravitation in her book Physics on the Fringe (The main topic of the book is outsider physicists’ theories; I have blogged about the book here.).

This giant was Richard Feynman.

Today the gravitational force, the gravitational potential, and the related acceleration of objects in gravitational fields are presented by means of calculus: The potential is visualized by a rubber membrane model – the steeper the membrane, the higher the force. (However, this is not a geometrical proof – this is an illustration of underlying calculus.)

Model of the gravitational potential. An object trapped in these wells moves along similar trajectories as bodies in a gravitational field. Depending on initial conditions (initial position and velocity) you end up with elliptical, parabolic or hyperbolic orbits. (Wikimedia, Invent2HelpAll)

Today you start from the equation of motion for an object under the action of a force that weakens with the inverse square of the distance between two massive objects, and out pops Kepler’s law about elliptical orbits. It takes some pages of derivation, and you need to recognize conic sections in formulas – but nothing too difficult for an undergraduate student of science.

Newton actually had to invent calculus while tinkering with the law of gravitation. In order to convince his peers he needed to use the geometrical language and the mental framework common back then. He uses all kinds of intricate theorems about triangles and intersecting lines (;-)) in order to say what we say today using the concise shortcuts of derivatives and differentials.

Wertheim states:

Feynman wasn’t doing this to advance the state of physics. He was doing it to experience the pleasure of building a law of the universe from scratch.

Feynman said to his students:

“For your entertainment and interest I want you to ride in a buggy for its elegance instead of a fancy automobile.”

But he underestimated the daunting nature of this task:

In the preparatory notes Feynman made for his lecture, he wrote: “Simple things have simple demonstrations.” Then, tellingly, he crossed out the second “simple” and replaced it with “elementary.” For it turns out there is nothing simple about Newton’s proof. Although it uses only rudimentary mathematical tools, it is a masterpiece of intricacy. So arcane is Newton’s proof that Feynman could not understand it.

Given the headache that even Copernicus’ original proofs in the Shoulders of Giants gave me I can attest to:

… in the age of calculus, physicists no longer learn much Euclidean geometry, which, like stonemasonry, has become something of a dying art.

Richard Feynman finally made up his own version of a geometrical proof to fully master Newton’s ideas, and Feynman’s version covered a hundred typewritten pages, according to Wertheim.

Everybody who indulges gleefully in wooden technical prose and takes pride in plowing through mathematical ideas can relate to this:

For a man who would soon be granted the highest honor in science, it was a DIY triumph whose only value was the pride and joy that derive from being able to say, “I did it!”

Richard Feynman gave a lecture on the motion of the planets in 1964 that was later called his Lost Lecture. In this lecture he presented his version of the geometrical proof, which was simpler than Newton’s.

The proof presented in the lecture has been turned into a series of videos by Youtube user Gary Rubinstein. Feynman’s original lecture was 40 minutes long and confusing, according to Rubinstein – who turned it into 8 video chunks, 10 minutes each.

The rest of the post is concerned with what I believe social media experts call curating. I am just trying to give an overview of the episodes of this video lecture. So my summaries most likely won’t make a lot of sense if you don’t watch the videos. But even if you don’t watch them you might get an impression of what a geometrical proof actually is.

In Part I Kepler’s laws are briefly introduced. The characteristic properties of an ellipse are shown – in the way used by gardeners to create an ellipse with a cord and a pencil. An ellipse can also be created within a circle: starting from a random interior point, connecting it to points on the circumference, and drawing the perpendicular bisectors of these connections.
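As a quick sanity check of that circle construction (my own sketch – the numbers and point names are made up): the intersection of the perpendicular bisector with the radius always has the same summed distance to the circle’s center and to the interior point, which is exactly the gardener’s cord-and-pencil definition of an ellipse.

```python
# Sketch: take a circle of radius R around focus F1 and an interior point F2.
# For a point C on the circle, intersect the perpendicular bisector of F2-C
# with the radius F1-C; the intersection P traces an ellipse: |PF1| + |PF2| = R.
import math

R, F1, F2 = 2.0, (0.0, 0.0), (0.8, 0.0)
sums = []
for k in range(360):
    a = math.radians(k)
    C = (R * math.cos(a), R * math.sin(a))
    # P lies on the radius F1-C, so P = t*C; the bisector condition
    # |PC| = |PF2| gives (1-t)^2 R^2 = |t*C - F2|^2, which is linear in t.
    d2 = F2[0] ** 2 + F2[1] ** 2
    dot = C[0] * F2[0] + C[1] * F2[1]
    t = (R * R - d2) / (2 * (R * R - dot))
    P = (t * C[0], t * C[1])
    sums.append(math.dist(P, F1) + math.dist(P, F2))

print(min(sums), max(sums))   # both equal the circle's radius R
```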

Part II starts with emphasizing that the bisector is actually a tangent to the ellipse (this will become an important ingredient in the proof later). Then Rubinstein switches to physics and shows how a planet effectively ‘falls into the sun’ according to Newton, that is: a deviation due to gravity is superimposed on its otherwise straight-lined motion.

Part III shows in detail why the triangles swept out by the radius vector need to stay the same. The way Newton defined the size of the force in terms of a parallelogram attached to the otherwise undisturbed path (no inverse square law yet mentioned!) gives rise to constant areas of the triangles – no matter what the size of the force is!

In Part IV the inverse square law is introduced – the changing force is associated with one side of the parallelogram denoting the deviation from motion without force. Feynman has now introduced the velocity as distance over time, which is equal to the size of the tangential line segments over the areas of the triangles. He created a separate ‘velocity polygon’ of segments denoting velocities. Both polygons – for distances and for velocities – look elliptical at first glance, though the velocity polygon seems more circular (we will learn later that it has to be a circle).

In Part V Rubinstein expounds the geometrical equivalent of the change in velocity being proportional to 1 over radius squared times the time elapsed, with the time elapsed being equivalent to the area of the triangles (I silently translate back to dv = acceleration times dt). Now Feynman said that he was confused by Newton’s proof of the resulting polygon being an ellipse – and he proposed a different proof:
Newton started from what Rubinstein calls the sun ‘pulsing’ at the same intervals, that is: replacing the smooth path by a polygon, resulting in triangles of equal size swept out by the radius vector but with changing velocity. Feynman divided the spatial trajectory into parts to which triangles of varying area are attached. These triangles are made up of radius vectors all at the same angles to each other. On trying to relate these triangles to each other by scaling them he needs to consider that the area of a triangle scales with the square of its height. This also holds for non-similar triangles having one angle in common.

Part VI: Since ‘Feynman’s triangles’ have one angle in common, their respective areas scale with the squares of the heights of their equivalent isosceles triangles, thus basically with the distance of the planet to the sun. The force is proportional to one over distance squared, and time is proportional to distance squared (as per the scaling law for these triangles). Thus the change in velocity – being the product of both – is constant! This is what Rubinstein calls Feynman’s big insight. But not only are the changes in velocity constant, so are the angles between adjacent line segments denoting those changes. Thus the changes in velocities make up a regular polygon (which turns into a circle in the limiting case).
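The claim that the velocity polygon closes into a circle can be checked numerically. This is my own sketch (units with GM = 1 and a leapfrog integrator – not part of the lecture): plotting the velocities of a simulated Kepler orbit, they all lie on one circle, the so-called hodograph.

```python
# Sketch: check that the velocity diagram (hodograph) of a Kepler orbit
# is a circle. Units with GM = 1; start at perihelion r0 with tangential
# speed v0 (v0 > circular speed, so r0 really is the perihelion).
import math

GM, r0, v0, dt = 1.0, 1.0, 1.2, 1e-3
x, y, vx, vy = r0, 0.0, 0.0, v0

# Angular momentum and eccentricity fix the predicted circle:
# v(theta) = (GM/L) * (-sin theta, e + cos theta), so the hodograph is a
# circle of radius GM/L centered at (0, GM*e/L).
L = r0 * v0
e = r0 * v0 ** 2 / GM - 1.0
cx, cy, R = 0.0, GM * e / L, GM / L

def accel(x, y):
    r3 = (x * x + y * y) ** 1.5
    return -GM * x / r3, -GM * y / r3

radii = []
ax, ay = accel(x, y)
for _ in range(20000):                          # a bit more than one period
    vx += 0.5 * dt * ax; vy += 0.5 * dt * ay    # leapfrog (kick-drift-kick)
    x += dt * vx; y += dt * vy
    ax, ay = accel(x, y)
    vx += 0.5 * dt * ax; vy += 0.5 * dt * ay
    radii.append(math.hypot(vx - cx, vy - cy))

print(max(radii) - min(radii), R)   # the spread is tiny compared to R
```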

Part VII: The point used to build up the velocity polygon by attaching the velocity line segments to it is not the center of the polygon. If you draw connections from the center to the endpoints, the angle between them corresponds to the angle the planet has travelled in space. The animation of the continuous motion of the planet in space – travelling along its elliptical orbit – is put side-by-side with the corresponding velocity diagram. Then Feynman relates the two diagrams, actually merges them, in order to track down the position of the planet using the clues given by the velocity diagram.

In Part VIII (embedded also below) Rubinstein finally shows why the planet traverses an elliptical orbit. The way the position of the planet was finally found in Part VII is equivalent to the insights into the properties of an ellipse found at the beginning of this tutorial. The planet needs to be on the ‘ray’, the direction determined by the velocity diagram. But it also needs to be on the perpendicular bisector of the velocity segment – as the force causes a change in velocity perpendicular to the previous velocity segment, and the velocity needs to correspond to a tangent to the path.

# Hyper-Jelly – Again. Why We Need Hyperspace – Even in Politics.

All of old.
Nothing else ever.
Ever tried. Ever failed.
No matter.
Try again.
Fail again.
Fail better.

This is a quote from Worstward Ho by Samuel Beckett – a poem as impenetrable and opaque as my post on quantization. There is a version of Beckett’s poem with explanations, so I try again, too!

I stated that the description of a bunch of particles (think: gas in a box) naturally invokes the introduction of a hyperspace having six times as many dimensions as the number of those particles.

But it was obviously not obvious why we need those many dimensions. You have asked:

Why do we need additional dimensions for each particle? Can’t they live in the same space?

Why does the collection of the states of all possible systems occupy a patch in 10^26-dimensional phase space?

These are nearly the same questions.

I start from a non-physics example this time, because I believe this might convey the motivation for introducing these dimensions better.

These dimensions are not at all related to hidden, compactified, extra large dimensions you might have read about in popular physics books on string theory and cosmology. They are not tangible dimensions in the sense we could feel them – even if we weren’t like those infamous ants living on the inflating balloon.

In Austria we recently had parliamentary elections. This is the distribution of seats in parliament now:

which is equivalent to these numbers:

SPÖ (52)
ÖVP (47)
FPÖ (40)
Grüne (24)
Team Stronach (11)
NEOS (9)

Using grand physics-inspired language I call that ordered collection of numbers: Austria’s Political State Vector.

These are six numbers, thus this is a vector in a 6-dimensional Political State Space.

Before the elections, websites consolidating and analyzing different polls were more popular than ever. (There was one run by a physicist now working in finance.)

I can only display two of the 6 dimensions in a plane, so the two axes represent any two of those 6 dimensions. The final political state is represented by a single point in this space – the tip of an arrow:

After the elections we know the political state vector with certainty – that is: a probability of 1.

Before the elections different polls constituted different possible state vectors – each associated with a probability lower than 1. I indicate probabilities by different hues of red:

Each point represents a different state the system may finally settle in. Since the polls are hopefully meaningful and voters not too irrational, the points are not scattered randomly in space but rather close to each other.

Now imagine millions of polls – such as citizens’ political opinions tracked every millisecond by directly wiretapping their brains. This would result in millions of points, all close to each other. Not looking too closely, this is a blurred patch or spot – a fairly confined region of space covered with points that seem to merge into a continuous distribution.

Watching the development of this red patch over time lets us speculate on the law underlying its dynamics – deriving a trend from the dynamics of voters’ opinions.

It is like figuring out the dynamics of a moving and transforming piece of jelly.

Back to Physics

Statistical mechanics is similar, just the number of dimensions is much bigger.

In order to describe what each molecule of gas in a room does, we need 6 numbers per molecule – 3 for its spatial coordinates, and 3 for its velocity.

Each particle lives in the same real space where particles wiggle and bump into each other. All those additional dimensions only emerge because we want to find a mathematical representation where each potential system state shows up as a single dot – tagged with a certain probability. As in politics!

We stuff all positions and velocities of particles into an enormous state vector – one ordered collection of about 10^26 different numbers, corresponding to a single dot in hyperspace.

The overall goal in statistical mechanics is to calculate something we are really interested in – such as the temperature of a gas. We aim at calculating probabilities for different states!

We don’t want to look too closely: We might want to compare what happens if we start from a configuration with all molecules concentrated in a corner of the room with another one consisting of molecules spread everywhere in the room. But we don’t need to know where each molecule is exactly. Joseph Nebus has given an interesting example related to his numerical calculation of the behavior of a car’s shock absorbers:

But what’s interesting isn’t the exact solution of the exact problem for a particular set of starting conditions. When your car goes over a bump, you’re interested in what the behavior is: is there a sudden bounce and a slide back to normal?  Does the car wobble for a short while?  Does it wobble for a long while?  What’s the behavior?

You have asked me to give you the equations. I will try my best and keep the intro paragraphs of this post in mind.

What do we know and what do we want to calculate?

We are interested in how that patch moves and is transformed – that is: probability (the hue of red) as a function of the positions and momenta of all particles. This is a function of 10^26 variables, usually called a distribution function or a density.

We know everything about the system, that is: the forces at play. Knowing forces is equivalent to knowing the total energy of a system as a function of any system configuration – if you know the gravitational force a planet exerts, then you know the gravitational energy.

You could consider the total energy of a system the infamous formula in science fiction movies that spies copy from the computer in the secret laboratories to their USB sticks: If you know how to calculate the total energy as a function of the positions and momenta of all particles – you literally rule the world for the system under consideration.

Hyper-Planes

If we know this energy ‘world function’ we can attach a number to each point in hyperspace that indicates energy, or we can draw the hyper-planes of constant energy – the equivalent of contour lines on a map.

The dimension of a hyperplane is the dimension of the hyperspace minus one – just like the familiar 2D planes floating through 3D space.

If energy changes more rapidly with varying particle positions and momenta, hyper-planes get closer to each other:

Incompressible Jelly

We are still in a classical world. The equations of motion of hyper-jelly are another way to restate Newton’s equations of motion. You start with writing down Force = mass × acceleration for each particle (10^25 times), rearrange these equations by using those huge state vectors just introduced – and you end up with an equation describing the time evolution of the red patch.

I picked the jelly metaphor deliberately as it turns out that hyper-jelly acts as an incompressible fluid. Jelly cannot be destroyed or created. If you try to squeeze it in between two planes it will just flow faster. This really follows from Newton’s law or the conservation of energy!

It might appear complicated to turn something as (seemingly) comprehensible as Newton’s law into that formalism. But that weird way of watching the time evolution of the red patch makes it actually easier to calculate what really matters!

Anything that changes in the real world – the time evolution of any quantity we can measure – is expressed via the time evolution of hyper-jelly.

The Liouville equation puts this into math.

As Richard Feynman once noted wisely (Physics Lectures, Vol. 2, Ch. 25), I could put all fundamental equations into a big matrix of equations which I then call the Unworldliness, further denoted as U. Then I can unify them again as

U = 0

What I do here is not that obscure, but I use some pseudo-code to obscure the most intimidating math. I do now appreciate science writers who state We use a mathematical crank that turns X into Y – despite, or because, they know exactly what they are talking about.

For every point in hyperspace the Liouville equation states:

(Rate of change of some interesting physical property in time) =
(Some mathematical machinery entangling spatial variations in system’s energy and spatial variations in ‘some property’)

Spatial variations in the system’s energy can be translated to the distance of those contour lines – this is exactly what Newton’s law translates into! (In addition we apply the chain rule of vector calculus.)

The mathematical crank is indicated using most innocent brackets, so the right-hand side reads:

{Energy function, interesting property function}
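For a single particle in one dimension that curly-bracket crank can be spelled out – this is my own minimal sketch using finite differences, with a harmonic-oscillator energy function as a stand-in for the 10^26-dimensional case: {f, g} = (df/dx)(dg/dp) − (df/dp)(dg/dx).

```python
# Minimal numeric sketch: the curly bracket is the Poisson bracket.
# For one particle in 1D: {f, g} = (df/dx)(dg/dp) - (df/dp)(dg/dx).
def poisson(f, g, x, p, h=1e-6):
    dfdx = (f(x + h, p) - f(x - h, p)) / (2 * h)
    dfdp = (f(x, p + h) - f(x, p - h)) / (2 * h)
    dgdx = (g(x + h, p) - g(x - h, p)) / (2 * h)
    dgdp = (g(x, p + h) - g(x, p - h)) / (2 * h)
    return dfdx * dgdp - dfdp * dgdx

m, k = 2.0, 3.0
H = lambda x, p: p**2 / (2 * m) + k * x**2 / 2   # oscillator energy function

# Hamilton's equations drop out: {x, H} = p/m and {p, H} = -k*x
x0, p0 = 0.7, 1.1
print(poisson(lambda x, p: x, H, x0, p0))  # ~ p0/m = 0.55
print(poisson(lambda x, p: p, H, x0, p0))  # ~ -k*x0 = -2.1
```

Plugging the ‘interesting property’ x or p into the bracket with the energy function just reproduces the familiar equations of motion.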

Quantization finally seems to be deceptively simple – the quantum equivalent looks very similar, with the right-hand side proportional to

[Energy function, interesting property function]

The main difference is in the brackets – square versus curly: We consider phase space, so any function and changes thereof are calculated in phase space co-ordinates – positions and momenta of particles. These cannot both be measured or calculated with certainty at the same time in quantum mechanics.

In a related way the exact order of operations does matter in quantum physics – whereas the classical counterparts are commutative operations. The square bracket versus the curly bracket is where these non-commutative operations are added – as additional constraints to classical theory.

I think I have reached my – current – personal limits in explaining this, while still not turning this blog into a vector calculus lecture. Probably this stuff is usually not popularized for a reason.

My next post will focus on quantum fields again – and I try to make each post as self-contained as possible anyway.

# On the Relation of Jurassic Park and Alien Jelly Flowing through Hyperspace

Yes, this is a serious physics post – no. 3 in my series on Quantum Field Theory.

I promised to explain what Quantization is. I will also argue – again – that classical mechanics is unjustly associated with pictures like this:

… although it is more like this:

This shows the timelines in Back to the Future – in case you haven’t recognized it immediately.

What I am trying to say here – again – is: so-called classical theory is as geeky, as weird, and as fascinating as quantum physics.

Experts: In case I get carried away by my metaphors – please see the bottom of this post for technical jargon and what I actually try to do here.

Get a New Perspective: Phase Space

I am using my favorite simple example: A point-shaped mass connected to a massless spring, or a pendulum, oscillating forever – not subject to friction.

The speed of the mass is zero when the motion changes from ‘upward’ to ‘downward’. It is maximum when the pendulum reaches the point of minimum height. Everything oscillates: Kinetic energy is transferred to potential energy and back. Position, velocity and acceleration all follow wavy sine or cosine functions.

For purely aesthetic reasons I could also plot the velocity versus position:

From a mathematical perspective this is similar to creating those beautiful Lissajous curves: Connecting a signal representing position to the x input of an oscilloscope and the velocity signal to the y input results in a circle or an ellipse:

This picture of the spring’s or pendulum’s motion is called a phase portrait in phase space. Actually we use momentum, that is: velocity times mass, but this is a technicality.
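In code this phase portrait is almost a one-liner (my own sketch, assuming unit mass and spring constant so that momentum equals velocity): all sampled (position, velocity) points sit on one circle whose radius is set by the initial push.

```python
# Quick sketch: sample position and velocity of the undamped oscillator
# and check the phase portrait is a circle, i.e. x^2 + v^2 (twice the
# energy, with m = k = 1) is constant along the motion.
import math

A = 1.5                         # amplitude set by the initial push
points = [(A * math.cos(t / 10), -A * math.sin(t / 10)) for t in range(100)]
radii = [math.hypot(x, v) for x, v in points]
print(min(radii), max(radii))   # both equal the amplitude 1.5
```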

The phase portrait is a way of depicting what a physical system does or can do – in a picture that allows for quick assessment.

Non-Dull Phase Portraits

Real-life oscillating systems do not follow simple cycles. The so-called Van der Pol oscillator is a model system subject to damping. It is also non-linear because the force of friction depends on the position squared and the velocity. Non-linearity is not uncommon; also the friction an airplane or car ‘feels’ in the air is proportional to the velocity squared.

The stronger this non-linear interaction is (the parameter mu in the figure below) the more will the phase portrait deviate from the circular shape:
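A small simulation sketch (my own, with a basic Runge-Kutta integrator – parameter values are just picked for illustration) shows this: whatever small push you start with, the Van der Pol trajectory spirals onto the same non-circular limit cycle, peaking near x = 2 for mu = 1.

```python
# Sketch: trace the Van der Pol phase portrait
# x'' - mu*(1 - x^2)*x' + x = 0 with a simple RK4 integrator.
def vdp(state, mu):
    x, v = state
    return (v, mu * (1 - x * x) * v - x)

def rk4_step(state, mu, dt):
    k1 = vdp(state, mu)
    k2 = vdp((state[0] + 0.5 * dt * k1[0], state[1] + 0.5 * dt * k1[1]), mu)
    k3 = vdp((state[0] + 0.5 * dt * k2[0], state[1] + 0.5 * dt * k2[1]), mu)
    k4 = vdp((state[0] + dt * k3[0], state[1] + dt * k3[1]), mu)
    return (state[0] + dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            state[1] + dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

state, mu, dt = (0.1, 0.0), 1.0, 0.01
portrait = []
for i in range(20000):
    state = rk4_step(state, mu, dt)
    if i > 15000:                      # keep only the settled limit cycle
        portrait.append(state)

# The limit cycle of the Van der Pol oscillator peaks near x = 2
print(max(x for x, v in portrait))
```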

Searching for this image I have learned from Wikipedia that the Van der Pol oscillator is used as a model in biology – here the physical quantity considered is not a position but the action potential of a neuron (the electrical voltage across the cell’s membrane).

Thus plotting the rate of change of a quantity we can measure versus the quantity itself makes sense for diverse kinds of systems. This is not limited to the natural sciences – you could also determine the phase portrait of an economic system!

Addicts of popular culture memes might have guessed already which phase portrait needs to be depicted in this post:

Reconnecting to Popular Science

Chaos Theory has become popular via the elaborations of Dr. Ian Malcolm (Jeff Goldblum) in the movie Jurassic Park. Chaotic systems exhibit phase portraits that are called Strange Attractors. An attractor is the set of points in phase space a system ‘gravitates’ to if you leave it to itself.

There is no attractor for the simple spring: This system will trace out a specific circle in phase space forever – the larger the circle, the bigger the initial push on the spring.

The most popular strange attractor is probably the Lorenz Attractor. It was initially associated with temperature and the flow of air in the earth’s atmosphere, but it can be re-interpreted as a system modeling chaotic phenomena in lasers.

It might be apocryphal but I have been told that it is not the infamous flap of the butterfly’s wing that gave the related effect its name, but rather the shape of the three-dimensional attractor:
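Tracing that attractor numerically takes only a few lines (my own sketch with the classic parameters and a crude Euler integrator): the trajectory never settles and never repeats, yet stays confined to the butterfly-shaped region.

```python
# Sketch: the Lorenz system with the classic parameters
# sigma = 10, rho = 28, beta = 8/3 - the trajectory stays bounded but
# never settles, tracing out the butterfly-shaped attractor.
def lorenz_step(x, y, z, dt=1e-3, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dt * dx, y + dt * dy, z + dt * dz

x, y, z = 1.0, 1.0, 1.0
points = []
for i in range(50000):
    x, y, z = lorenz_step(x, y, z)
    if i > 1000:                       # drop the initial transient
        points.append((x, y, z))

# Chaotic but confined: the attractor lives in a bounded region of space
print(max(abs(px) for px, _, _ in points))  # stays below ~25
print(max(pz for _, _, pz in points))       # z wanders up to ~50 at most
```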

We had Jurassic Park – here comes the jelly!

A single point-particle on a spring can move only along a line – it has a single degree of freedom. You need just a two-dimensional plane to plot its velocity over position.

Allowing for motion in three-dimensional space means we need to add additional dimensions: The motion is fully characterized by the (x,y,z) positions in 3D space plus the 3 components of velocity. Actually, this three-dimensional vector is called velocity – its size is called speed.

Thus we need already 6 dimensions in phase space to describe the motion of an idealized point-shaped particle. Now throw in an additional point-particle: We need 12 numbers to track both particles – hence 12 dimensions in phase space.

Why can’t the two particles simply use the same space? Both particles still live in the same 3D space, and they could also inhabit the same 6D phase space. The 12D representation has an advantage though: The whole system is represented by a single dot, which makes our lives easier if we contemplate different systems at once.

Now consider a system consisting of zillions of individual particles. Consider 1 cubic meter of air containing about 10^25 molecules. Viewing these particles in a Newtonian, classical way means to track their individual positions and velocities. In a pre-quantum-mechanical deterministic assessment of the world you know the past and the future by calculating these particles’ trajectories from their positions and velocities at a certain point of time.

Of course this is not doable and leads to practical non-determinism due to calculation errors piling up and amplifying. This is a 10^25-body problem, much much much more difficult than the three-body problem.

Fortunately we don’t really need all those numbers in detail – useful properties of a gas such as the temperature constitute gross statistical averages of the individual particles’ properties. Thus we want to get a feeling for how the phase portrait develops ‘on average’, not looking too meticulously at every dot.

The full-blown phase space of the system of all molecules in a cubic meter of air has about 10^26 dimensions – 6 for each of the 10^25 particles (physicists don’t care about a factor of 6 versus a factor of 10). Each state of the system is sort of a snapshot of what the system really does at a point of time. It is a vector in 10^26-dimensional space – a looooong ordered collection of numbers, but nonetheless conceptually not different from the familiar 3D ‘arrow-vector’.

Since we are interested in averages and probabilities we don’t watch a single point in phase space. We don’t follow a particular system.

We rather imagine an enormous number of different systems under different conditions.

Considering the gas in the cubic vessel this means: We imagine molecule 1 being at the center and very fast, whereas molecule 10 is slow and in the upper right corner, and molecule 666 is in the lower left corner and has medium speed. Now extend this description to 10^25 particles.

But we know something about all of these configurations: There is a maximum x, y and z particles can have – the phase portrait is limited by these maximum dimensions, as the circle representing the spring was. The particles have all kinds of speeds in all kinds of directions, but there is a most probable speed related to temperature.

The collection of the states of all possible systems occupies a patch in 10^26-dimensional phase space.

This patch gradually peters out at the edges along the velocity dimensions.

Now let’s allow the vessel to grow: The patch will become bigger in the spatial dimensions as particles can have any position in the larger cube. Since the temperature will decrease due to the expansion, the mean velocity will decrease – assuming the cube is insulated.

The time evolution of the system (of these systems, each representing a possible system) is represented by this hyper-dimensional patch transforming and morphing. Since we consider so many different states – otherwise probabilities don’t make sense – we don’t see the granular nature due to individual points: it’s like a piece of jelly moving and transforming.

Precisely defined initial configurations of systems have a tendency to get mangled and smeared out. Note again that each point in the jelly is not equivalent to a molecule of gas – it is a point in an abstract configuration space with a huge number of dimensions. We can only make it accessible via projections into our 3D world or a 2D plane.

The analogy to jelly or honey or any fluid is more apt than it may seem:

The temporal evolution in this hyperspace is indeed governed by equations that are amazingly similar to those governing an incompressible liquid – such as water. There is continuity and locality: Hyper-Jelly can’t get lost and be created. Any increase in hyper-jelly in a tiny volume of phase space can only be attributed to jelly flowing in to this volume from adjacent little volumes.
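For the simple spring from above this incompressibility can be verified directly (my own sketch, unit mass and spring constant): time evolution is a rotation in phase space, and a little square patch of initial conditions keeps its area exactly – the essence of Liouville’s theorem.

```python
# Sketch: hyper-jelly is incompressible. Evolve a square patch of initial
# conditions of a harmonic oscillator (m = k = 1) and check that the
# patch's area in phase space stays constant (Liouville's theorem).
import math

def evolve(x, p, t):
    # exact solution of x'' = -x: a rotation in phase space
    return x * math.cos(t) + p * math.sin(t), p * math.cos(t) - x * math.sin(t)

corners = [(1.0, 0.0), (1.2, 0.0), (1.2, 0.2), (1.0, 0.2)]   # area 0.04

def area(poly):   # shoelace formula for the quadrilateral
    return 0.5 * abs(sum(poly[i][0] * poly[(i + 1) % 4][1] -
                         poly[(i + 1) % 4][0] * poly[i][1] for i in range(4)))

moved = [evolve(x, p, 2.7) for x, p in corners]
print(area(corners), area(moved))   # both 0.04 (up to rounding)
```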

In summary: Classical mechanical systems comprising many degrees of freedom – that is: many components that have freedom to move in a different way than other parts of the system – can be best viewed in the multi-dimensional space whose dimensions are (something like) positions and (something like) the related momenta.

Can it get more geeky than that in quantum theory?

Finally: Quantization

I said in the previous post that quantization of fields or waves is like turning down intensity in order to bring out the particle-like rippled nature of that wave. In the same way you could say that you add blurry waviness to idealized point-shaped particles.

Another way is to consider the loss in information via Heisenberg’s Uncertainty Principle: You cannot know both the position and the momentum of a particle or a classical wave exactly at the same time. By the way, this is why we picked momenta and not velocities to generate phase space.

You calculate positions and momenta of the small little volumes that constitute the flowing and crawling patch of jelly at a point of time from the positions and momenta at the point of time before. That’s the essence of Newtonian mechanics (and conservation of matter) applied to fluids.

Doing numerical calculation in hydrodynamics you think of jelly as divided into small little flexible cubes – you divide it mentally using a grid, and you apply a mathematical operation that creates the new state of this digitized jelly from the old one.

Since we are still discussing a classical world we do know positions and momenta with certainty. This translates to stating (in math) that it does not matter if you do calculations involving positions first or momenta first.

There are different ways of carrying out steps in these calculations because you could do them one way or the other – they are commutative.

Calculating something in this respect is similar to asking nature for a property or measuring that quantity.

Thus when we apply a quantum viewpoint and quantize a classical system, calculating momentum first and position second – or doing it the other way around – will yield different results.

The quantum way of handling the system of those 10^25 particles looks the same as the classical equations at first glance. The difference is in the rules for carrying out calculations involving positions and momenta – so-called conjugate variables.
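To see how order starts to matter, here is a toy sketch of my own (natural units with hbar = 1, a truncated 4×4 matrix representation of a single oscillator – nothing to do with 10^25 particles): position and momentum become matrices, and X·P minus P·X is not zero but i on the diagonal.

```python
# Toy sketch: in quantum mechanics positions and momenta become operators
# (matrices), and X @ P differs from P @ X. Truncated 4x4 harmonic-oscillator
# matrices built from ladder operators: X ~ (a + a†)/sqrt(2),
# P ~ i(a† - a)/sqrt(2) in natural units (hbar = 1).
import math

N = 4
def zeros(): return [[0j] * N for _ in range(N)]

X, P = zeros(), zeros()
for n in range(N - 1):
    c = math.sqrt((n + 1) / 2)
    X[n][n + 1] = X[n + 1][n] = c       # the (a + a†) part
    P[n][n + 1] = -1j * c               # i(a† - a): annihilation part
    P[n + 1][n] = 1j * c                # creation part

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

XP, PX = matmul(X, P), matmul(P, X)
comm = [[XP[i][j] - PX[i][j] for j in range(N)] for i in range(N)]
# both ≈ 1j: [X, P] = i (hbar = 1); only the truncation spoils the last entry
print(comm[0][0], comm[1][1])
```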

Thus quantization means you take the classical equations of motion and give the mathematical symbols a new meaning and impose new, restricting rules.

I probably could just have stated that without going off on that tangent.

However, no system of interest in the real world is composed of isolated particles. We live in a world of those enormous phase spaces.

In addition, working with large abstract spaces like this is at the heart of quantum field theory: We start with something spread out in space – a field with infinite degrees of freedom. Considering different state vectors in these quantum systems is considering all possible configurations of this field at every point in space!

_______________________________________

Expert information:

I have taken a detour through statistical mechanics: introducing the Liouville equation as an equation of continuity in a multi-dimensional phase space. The operations mentioned – related to positions and momenta – are the replacement of time derivatives via Hamilton’s equations. I resisted the temptation to mention the hyper-planes of constant energy. Replacing the Poisson bracket in classical mechanics with the commutator in quantum mechanics turns the Liouville equation into its quantum counterpart, also called the Von Neumann equation.
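Spelled out in standard textbook notation (not taken from the original post), the replacement reads:

```latex
\underbrace{\frac{\partial \rho}{\partial t} = \{H, \rho\}}_{\text{Liouville}}
\qquad\longrightarrow\qquad
\underbrace{i\hbar\,\frac{\partial \hat{\rho}}{\partial t} = [\hat{H}, \hat{\rho}]}_{\text{Von Neumann}}
```

The Poisson bracket on the left goes over to the commutator divided by iħ, which is exactly the rule for conjugate variables mentioned above.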

I know that a discussion about the true nature of temperature is opening a can of worms. We should describe temperature as the width of a distribution rather than an average, as a beam of molecules all travelling in the same direction at the same speed has a temperature of zero Kelvin – not an option due to zero-point energy.

The Lorenz equations have been applied to the electrical fields in lasers by Haken – here is a related paper. I did not go into the difference between the phase portrait of a system, showing its time evolution, and the attractor, which is the system’s final state. I also didn’t stress that what is shown is a three-dimensional image of the Lorenz attractor, and in this case the ‘velocities’ are not depicted. You could say it is the 3D projection of the 6D phase portrait. I basically wanted to demonstrate – using catchy images, admittedly – that representations in phase space allow for a quick assessment of a system.

I also tried to introduce the notion of a state vector in classical terms, not jumping to bras and kets in the quantum world as if a state vector does not have a classical counterpart.

I have picked an example of a system undergoing a change in temperature (non-stationary – not the example you would start with in statistical thermodynamics) and swept all considerations on ergodicity and related meaningful time evolutions of systems in phase space under the rug.

# Space Balls, Baywatch and the Geekiness of Classical Mechanics

This is the first post in my series about Quantum Field Theory. What a let-down: I will just discuss classical mechanics.

There is a quantum mechanics, and in contrast there is good old classical, Newtonian mechanics. The latter is a limiting case of the former. So there is some correspondence between the two, and there are rules that let you formulate the quantum laws from the classical laws.

But what are those classical laws?

Chances are high that classical mechanics reminds you of pulleys and levers, calculating torques of screws and Newton’s law F = ma: Force is equal to mass times acceleration.

I argue that classical dynamics is most underrated in terms of geek-factor and philosophical appeal.

[Space Balls]

The following picture might have been ingrained in your brain: A force is tugging at a physical object, such as earth’s gravity attracting a little ball travelling in space. Now the ball moves – it falls. Actually the moon also falls in a sense when it is orbiting the earth.

Cannon ball and gravity. If the initial velocity is too small the ball traverses a parabola and eventually reaches the ground (A, B). If the ball is just given the right momentum, it will fall forever and orbit the earth (C). If the velocity is too high, the ball will escape the gravitational field (E). (Wikimedia). Now I said it – ‘field’! – although I tried hard to avoid it in this post.

When bodies move their positions change. The strength of the gravitational force depends on the distance from the mass causing it, thus the force felt by the moving ball changes. This is why the three-body problem is hard: You need a computer for calculating the forces three or more planets exert on each other at every point of time.

So this is the traditional mental picture associated with classical mechanics. It follows these incremental calculations:
Force acts – things move – configuration changes – force depends on configuration – force changes.

In order to get this going you need to know the configuration at the beginning – the positions and the velocities of all planets involved.

So in summary we need:

• the dependence of the force on the position of the masses.
• the initial conditions – positions and velocities.
• Newton’s law.
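That incremental loop – force acts, things move, force changes – can be written down in a few lines. Here is a minimal sketch of my own (a test mass around a heavy central body, in made-up units where G times the central mass equals 1; not a production integrator):

```python
import numpy as np

# Minimal sketch of "force acts -> things move -> force changes":
# a test mass orbiting a heavy central body, units chosen so G*M = 1.
pos = np.array([1.0, 0.0])   # initial position
vel = np.array([0.0, 1.0])   # initial velocity (circular orbit for these values)
dt = 1e-3                    # time step

for _ in range(10_000):
    r = np.linalg.norm(pos)
    force = -pos / r**3      # inverse-square gravity toward the origin
    vel += force * dt        # the force changes the velocity ...
    pos += vel * dt          # ... and the new velocity changes the position

print(np.linalg.norm(pos))   # stays close to 1.0: the orbit does not spiral away
```

Updating the velocity first and then using the new velocity to move the position (rather than updating both from the old state) is what keeps this toy orbit stable – a detail that matters even in such a tiny simulation.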

But there is an alternative description of classical dynamics, offering an alternative philosophy of mechanics so to speak. The description is mathematically equivalent, yet it feels unfamiliar.

In this case we trade the knowledge of positions and velocities for fixing the positions at a start time and an end time. Consider it a sort of game: You know where the planets are at time t1 and at time t2. Now figure out how they have moved / will move between t1 and t2. Instead of the force we consider another, probably more mysterious property:

It is called the action. The action has the dimension of energy × time, and – like the force – it contains all information about the system.

The action is calculated by integrating…. I am reluctant to describe how the action is calculated. Action (or its field-y counterparts) will be considered the basic description of a system – something that is given, in the way forces had been considered given in the traditional picture. The important thing is: You attach a number to each imaginable trajectory, to each possible history.

The trajectory a particle traverses in the time slot t1–t2 is determined by the Principle of Least Action (which ‘replaces’ Newton’s law): The action of the system is minimal for the actual trajectory. Any deviation – such as a planet travelling in strange loops – would increase the action.

Principle of least action. Given: the positions of the particle at start time t1 and end time t2. Calculated: the path the particle traverses – by testing all possible paths and calculating their associated actions. Near the optimum (red) path the action hardly varies (Wikimedia).
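This can be checked numerically with a toy example of my own (not from the post): a particle of mass 1 in uniform gravity g = 1, with its positions fixed at t1 = 0 and t2 = 2. The true free-fall parabola yields a smaller action than any tested deviation:

```python
import numpy as np

# Discretize time and compute the action S = integral of (kinetic - potential).
t = np.linspace(0, 2, 2001)
dt = t[1] - t[0]
g = 1.0

def action(x):
    v = np.gradient(x, dt)                    # velocity along the path
    return np.sum(0.5 * v**2 - g * x) * dt    # mass set to 1

classical = 0.5 * g * t * (2 - t)             # the true free-fall parabola
bump = np.sin(np.pi * t / 2)                  # a deviation vanishing at both ends

for eps in (0.0, 0.1, 0.3):
    print(eps, action(classical + eps * bump))
# the action is smallest for eps = 0: any detour increases it
```

The deviation has to vanish at t1 and t2 because the endpoints are part of the ‘game’ – only the path in between is up for variation.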

This sounds probably awkward – why would you describe nature like this?
(Of course one answer is: this description will turn out useful in the long run – considering fields in 4D space-time. But this answer is not very helpful right now).

That type of logic is useful in other fields of physics: A related principle lets you calculate the trajectory of a beam of light: Given the start point and the end point of a beam, light will pick the path that is traversed in minimum time (this rule is called Fermat’s principle).

This is obvious for a straight laser beam in empty space. But Fermat’s principle allows for picking the correct path in less intuitive scenarios, such as: What happens at the interface between different materials, say air and glass? Light is faster in air than in glass, thus it makes sense to add a kink to the path and utilize air as much as possible.

[Baywatch]

Richard Feynman used the following example: Imagine you walk on the beach and hear a swimmer crying for help. Since this is a 1960s textbook the swimmer is a beautiful girl. In order to reach her you have to: 1) run some meters on the sandy beach and 2) swim some meters in the sea. You do an intuitive calculation about the ideal point of where to enter the water, as you can run faster than you can swim: “By using a little more intelligence we would realize that it would be advantageous to travel a little greater distance on land in order to decrease the distance in the water, because we go so much slower in the water.” (Source: Feynman’s Lectures Vol. 1 – available online as of a few days ago!)
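The trade-off can be checked by brute force. The numbers below are my own made-up ones (lifeguard at (0, 3) on the sand, swimmer at (10, −5) in the water, the waterline along the x-axis):

```python
import numpy as np

# Made-up scenario: lifeguard at (0, 3), swimmer at (10, -5), waterline at y = 0.
v_run, v_swim = 5.0, 2.0                      # assumed speeds: run faster than swim

def total_time(x):
    run = np.hypot(x, 3.0) / v_run            # beach leg to the entry point (x, 0)
    swim = np.hypot(10.0 - x, 5.0) / v_swim   # water leg to the swimmer
    return run + swim

entries = np.linspace(0.0, 10.0, 100_001)     # candidate entry points
best = entries[np.argmin(total_time(entries))]
print(best)   # close to 8.0 – far past the straight-line crossing at x = 3.75
```

At the optimal entry point, sin(angle)/speed comes out the same on both sides of the waterline – exactly Snell’s law of refraction, recovered from minimizing time.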

Refraction at the interface between air and water (Wikimedia). The trajectory of the beam has a kink thus the pole appears kinked.

Those laws are called variational principles: You consider all possible paths, and the path taken is indicated by an extremum, in these cases: a minimum.

Near a minimum stuff does not vary much – the first-order derivative is zero at a minimum. Thus on varying paths a bit you actually feel when you are close to the minimum – in the way you, as a car driver, would feel the bottom of a valley (it can only go up from here).

Doesn’t this description add a touch of spooky multiverses to classical mechanics already? It seems as if nature has a plan or as if we view anything that has ever or will ever happen from a vantage point outside of space-time.

Things get interesting when masses or charges become smeared out in space – when there is some small ‘infinitesimal’ mass at every point in space. Or generally: When something happens at every point in space. Instead of a point particle that can move in three different directions – three degrees of freedom in physics lingo – we need to deal with an infinite number of degrees of freedom.

Then we are entering the world of fields that I will cover in the next post.