The Twisted Garden Hose and the Myth of the Toilet Flush

If you have wrapped your head around why and how the U-shaped tube in the flow meter (described in my previous post) is twisted by the Coriolis force – here is a video of a simple experiment brought to my attention by the author of the quoted article on gyroscope physics:

You could also test it yourself using a garden hose!

As it happens, you can observe this phenomenon so clearly because the typical angular frequencies of manual rotation result in a rather strong Coriolis force – in contrast to other everyday phenomena that are falsely attributed to the Coriolis force associated with the rotation of the earth.

It is often stated – I have even found this in lecture notes and textbooks – that the Coriolis force is responsible for an unambiguous sense of rotation of the vortices in water flowing down the drain of your bathtub or toilet: in the Northern hemisphere water should spin anti-clockwise, in the Southern hemisphere clockwise. Numerous articles debunk this as an urban legend – I pick a random one.

In principle the statement on the sense of rotation is correct, as the rotation of hurricanes is indeed governed by the Coriolis force. But for toilet flushes and the like the effect is negligible compared to other random factors impacting the flow of water. As pointed out in this article, the momentum of leaves thrown into a bowl of water near the equator (a demonstration often used to entertain tourists) has more impact than the Coriolis force.

Near the equator the Coriolis force is nearly zero, or more precisely: since it is perpendicular to both the velocity and the axis of rotation, the Coriolis force there is directed perpendicular to the surface of the earth – there is no component that would push water North-South or East-West. Thus very near the equator the horizontal force is vanishingly small – much smaller than the forces acting on, say, middle European or Austrian bathtubs. And even with those, the Coriolis force does not determine the sense of rotation unambiguously.

How to estimate this impact – and why can we observe the twist in the garden hose experiment?

The size of the acceleration due to the Coriolis force is

2 times (angular frequency [rad/s]) times (component of the velocity [m/s] perpendicular to the axis of rotation)

The angular frequency in radians per second is 2π times the number of rotations per second. Thus the angular frequency of the rotation of the earth is about 0,0000727 radians per second. The hand-driven rotation of the garden hose was much faster – on the order of 1 radian per second.

Imagine a slice or volume element of water flowing in a sink or a garden hose, assuming in both cases a speed on the order of 1 meter per second. The resulting Coriolis accelerations differ by several orders of magnitude:

  • Bathtub vortex: 0,00015 m/s2
  • Garden hose: 2 m/s2

On the other hand, the acceleration due to gravity is equal to 9,81 m/s2.
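The two estimates can be reproduced in a few lines; the values for ω and v are the rough assumptions from the text:

```python
import math

def coriolis_acceleration(omega, v_perp):
    """a = 2 * omega * v_perp  [m/s^2]"""
    return 2.0 * omega * v_perp

omega_earth = 2 * math.pi / 86400   # one rotation per day -> ~0,0000727 rad/s
omega_hose = 1.0                    # manual rotation, ~1 rad/s (rough assumption)
v = 1.0                             # flow velocity, ~1 m/s

a_bathtub = coriolis_acceleration(omega_earth, v)   # ~0,00015 m/s^2
a_hose = coriolis_acceleration(omega_hose, v)       # 2 m/s^2
```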

The garden hose in the video moves under the influence of gravity – like a swing or pendulum – and the Coriolis force (the additional force applied by the hands of the experimenter is only required to overcome friction). Since the Coriolis acceleration is of the same order of magnitude as that of gravity, you would expect a significant impact, as the resulting force on every slice or volume element of water is the vector sum of the two.

It is also important to keep track of the origins of the components of the velocity:

The radial flow velocity (assumed to be about 1 m/s) in the hose is constant and simply dictated by the pressure in the water line. There is no tangential flow velocity unless caused by the Coriolis force.

In case of the bathtub, the assumed 1 m/s does not refer to the tangential motion in the vortex, but to the radial velocity of the water flowing “down” the drain. The tangential velocity is what would be caused by the Coriolis force – ideally.

Any initial velocity is subject to the initial conditions applied to the experiment.

Any random tangential component of the flow velocity in the vortex increases when the water flows down:

If there is a small initial rotation – a small velocity directed perpendicular to the symmetry axis of the flush – pronounced vortices will develop due to the conservation of angular momentum: As the radius of rotation decreases – dictated by the shape of the bathtub or toilet – angular frequency needs to increase to keep the angular momentum constant. Thus in your typical flush you see how a random and small disturbance is “amplified”.
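The amplification can be estimated from conservation of angular momentum, L = m·v·r = constant: the tangential velocity grows as 1/r and the angular frequency as 1/r². A sketch with invented numbers for a bathtub drain:

```python
# Conservation of angular momentum: L = m * v_t * r = const.
# => v_t ~ 1/r and omega = v_t / r ~ 1/r^2 as the water spirals inward.
r_outer = 0.30     # m, radius where a tiny random swirl starts (assumption)
r_drain = 0.02     # m, radius of the drain (assumption)
v_t_outer = 0.001  # m/s, a barely noticeable tangential velocity

v_t_drain = v_t_outer * r_outer / r_drain   # tangential velocity at the drain
omega_outer = v_t_outer / r_outer           # rad/s, initial angular frequency
omega_drain = v_t_drain / r_drain           # rad/s, angular frequency at the drain
amplification = omega_drain / omega_outer   # = (r_outer / r_drain)^2 = 225
```

So a swirl far too slow to notice spins up by a factor of a few hundred on the way down – easily swamping the Coriolis contribution.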

However, if you conducted such an experiment very carefully and waited long enough for any disturbance to die out, you would actually see the vortices due to the Coriolis force only.[*] I have now learned from Wikipedia that it was an Austrian physicist, Ottokar Tumlirz, who published the first paper on this in 1908. Here is the full text of his German paper: Ein neuer physikalischer Beweis für die Achsendrehung der Erde. (Link edited in 2017. In 2013 I had found only the abstract.)

Tumlirz calculated the vortices’ velocity of rotation and used the following setup to confirm his findings:


My sketch of the experimental setup Ottokar Tumlirz used in 1908 as it is described in the abstract of his German paper “New physical evidence on the axis of rotation of the earth”. The space between the two plates is filled with water, and the glass tube is open at the bottom. Water (red) flows radially to the tube. The red lines are bent very slightly due to the Coriolis force.

Holes in a cylindrical tube – which is open at the bottom – allow water to enter the tube radially. This is not your standard flush, but a setup built for preventing the amplification of tangential components in the flow. Due to the Coriolis force the flow lines are not straight lines, but slightly bent.

Tumlirz noted that the water must flow at a speed no higher than about 1 mm per minute.

Edit, Oct 2, 2013: See Ernst Mach’s original account of Tumlirz’ experiment (who was Mach’s student)

Edit, June 3, 2015: Actually somebody did that careful experiment now – and observed the tiny effect just due to the Coriolis force, as Tumlirz. Follow-up post here.

Edit, August 2016: Stumbled upon another reference to an experiment done in 1962 and published in Nature (and filmed) – link added to the post from 2015.

Intuition and the Magic of the Gyroscope – Reloaded

I am baffled by the fact that my article The Spinning Gyroscope and Intuition in Physics is the top article on this blog so far.

So I believe I owe you, dear readers, an update.

In the previous article I have summarized the textbook explanation, some more intuitive comments in Feynman’s Physics Lectures, and a new paper by Eugene Butikov.

But there is an explanation of the gyroscope’s motion that might become my new favorite:

Gyroscope Physics by Cleon Teunissen

It is not an accident that Cleon is also the main author of the Wikipedia article on the Coriolis flow meter, as his ingenious take on explaining the gyroscope’s precession is closely related to his explanation of the flow meter.

The Coriolis force is a so-called pseudo-force that you “feel” in a rotating frame of reference: imagine yourself walking across a rotating disk, or roll a ball soaked in white paint across a rotating black disk and watch its trace. In the center of the rotating disk there is no centrifugal force. But you would still feel dragged to the right if the disk rotates counter-clockwise (viewed from the top).

This force dragging you to the right or making the path of the ball bend even in the center – this is the Coriolis force. It also makes tubes bend in the following way when a liquid flows through them and allows for determining the flow velocity from the extent of bending:

Vibration pattern of the tubes during mass flow. Credits: Cleon Teunissen, used with permission – Coriolis flow meter

The “loop” formed by the tubes rotates about an axis which is parallel to the direction of the flow whose speed should be measured (though not the full 360°). Now the Coriolis force always drags a moving particle “to the right” – same with the volume elements in the liquid. Note that the force is always directed perpendicular to the axis of rotation and perpendicular to the velocity of the flowing volume element (mind the example of the rolling ball).

In the flow meter, the liquid moves away from the axis in one arm, but to the axis in the other arm. Thus the forces acting on each arm are antiparallel to each other and a torque is exerted on the “loop” that consists of the two arms. The loop is flexible and thus bent by the torque in the way shown in the figure above.

Now imagine a gyroscope:

Gyroscope with Coriolis force per quadrant indicated. Credits: Cleon Teunissen – Gyroscope Physics

Gravity (acting on the weight mounted on the gyroscope’s axis) tries to make the gyroscope pitch. Cleon now shows why precession results in an “upward pitch” that compensates for that downward pitch and thus finally keeps the gyroscope stable.

The trick is to consider the 4 quadrants the gyroscope wheel consists of separately – in a way similar to evaluating the Coriolis force acting on each of the arms of the flow meter:

The tangential velocity associated with the rotation about the symmetry axis of the gyroscope (“roll”, spinning of the wheel) is equivalent to the velocity of the flowing liquid. In each quadrant, mass is moving – “flowing” – away from or to the “swivel axis” – the axis of precession (indicated in black, parallel to gravity).

The per-quadrant Coriolis force is again perpendicular to the swivel axis and perpendicular to the “flow velocity”. Imagine yourself sitting on the blue wheel and looking in the direction of the tangential velocity: again you are “dragged to the right” if the precession is counter-clockwise. As right is defined in relation to the tangential velocity, the direction of the force is reversed in the two lower quadrants.

A torque tries to pitch the wheel “upward”.
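The net result of this per-quadrant “upward pitch” is the familiar steady precession, whose rate follows from the standard result Ω = torque / (I·ω_spin). The numbers below are invented for a toy gyroscope:

```python
import math

# Steady precession: Omega = torque / (I * omega_spin).
# All numbers are made up for illustration.
m = 0.1                          # kg, weight mounted on the axis
g = 9.81                         # m/s^2, gravitational acceleration
r = 0.05                         # m, lever arm of the weight
I = 5e-5                         # kg m^2, moment of inertia of the wheel
omega_spin = 100 * 2 * math.pi   # rad/s, wheel spinning at 100 Hz

torque = m * g * r                  # N m, gravity trying to pitch the wheel
Omega = torque / (I * omega_spin)   # rad/s, resulting slow precession
```

Note how a fast spin (large ω_spin) makes the precession slow – the wheel “refuses” to pitch and swivels gently instead.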


If you want to play with gyroscopes yourself: I have stumbled upon a nice shop selling gyroscopes – incl. a steampunk version, miniature Stirling engines, combustion engines (… and strange materials such as ferrofluids).

Random Thoughts on Temperature and Intuition in Thermodynamics

Recently we felt a disturbance of the force: it has been demonstrated that the absolute temperature of a real system can be pushed to negative values.

The interesting underlying question is: What is temperature really? Temperature seems to be an intuitive everyday concept, yet the explanations of ‘negative temperatures’ prove that it is not.

Actually, atoms have not really been ‘chilled to negative temperatures’. I pick two explanations of this experiment that I found particularly helpful – and entertaining:

As Matt Springer points out: The issue is simply that formally temperature is a relationship between energy and entropy, and you can do some weird things to entropy and energy and get the formal definition of temperature to come out negative.

Aatish Bhatia manages to convey the fact that temperature is inversely proportional to the slope of the entropy vs. energy curve using compelling analogs from economics. The trick is to find meaningful economic terms that are related in a way similar to the obscure physical properties you want to explain. MinutePhysics did something similar in explaining fundamental forces.

I had once worked in laser physics, so Matt’s explanation involving two-level systems speaks to me. His explanation avoids touching on entropy – a ‘mysterious’, not self-explanatory term.

You can calculate the probabilities of population of these two states from temperature – or vice versa. If you manage to tweak the population by some science-fiction-like method (creating non equilibrium states) you can end up with a distribution that formally results in negative temperatures if you run the math backwards.
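For a two-level system, “running the math backwards” means inverting the Boltzmann factor N_upper/N_lower = exp(−ΔE/kT): an inverted population (more atoms in the upper state) formally yields a negative temperature. A sketch with an arbitrary level spacing:

```python
import math

k_B = 1.380649e-23   # J/K, Boltzmann constant

def temperature_from_populations(delta_E, n_upper, n_lower):
    """Invert the Boltzmann factor n_upper/n_lower = exp(-delta_E / (k_B T))."""
    return -delta_E / (k_B * math.log(n_upper / n_lower))

delta_E = 1e-21  # J, arbitrary level spacing (assumption)

# Normal thermal population: more atoms in the lower state -> T > 0
T_normal = temperature_from_populations(delta_E, 10, 90)
# Inverted population (e.g. a pumped laser medium): T comes out negative
T_inverted = temperature_from_populations(delta_E, 90, 10)
```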

[In order to allow for tagging this post with Physics in a Nutshell I need to state that the nutshell part ends here.]

But how come that ‘temperature’ ever became such an abstract concept?

From a very pragmatic perspective – focussing on macroscopic, everyday phenomena – temperature is what we measure with thermometers, that is: calculated from the change of the volume of gases or liquids.

You do not need any explanation of what temperature or even entropy really is if you want to design efficient machines, such as turbines.

As a physics PhD working towards an MSc in energy engineering, I have found lectures in Engineering Thermodynamics eye-opening: as a physicist I had been trained to focus on fundamental explanations: What is entropy really? How do we explain physical properties microscopically? That is: calculating statistical averages of the properties of zillions of gas molecules, or imagining an abstract ‘hyperspace’ whose number of dimensions is proportional to the number of particles. The system as such moves through this abstract space as time passes.

In engineering thermodynamics the question to What is entropy? was answered by: Consider it some calculated property that is used to judge the efficiency of machines and processes.

Rankine cycle with reheat

Temperature-entropy diagram for steam. The red line represents a Rankine cycle: a turbine delivers mechanical energy while temperature and pressure of the steam are decreased.

New terms in science have been introduced for fundamental reasons and/or because they came in handy in calculations. In my view, enthalpy belongs to the second class because it makes descriptions of gases and fluids flowing through apparatuses more straightforward. Entropy is different, although it, too, can be reduced to its practical aspects: entropy has been introduced in order to tame heat and irreversibility.

Richard Feynman stated (in Vol. I of his Physics Lectures, published 1963) that research in engineering contributed two times to the foundations of physics: The first time when Sadi Carnot formulated the Second Law of Thermodynamics  (which can be stated in terms of an ever increasing entropy) and the second time when Shannon founded information theory – using the term entropy in a new way. So musing about entropy and temperature – this is where hands-on engineering meets the secrets of the universe!

I tend to state that temperature has never been that understandable and familiar:

Investigations of the behavior of ideal gases (fortunately air, even moist air, is approximately an ideal gas) revealed that there needs to be an absolute zero temperature – the temperature at which the volume of an ideal gas would approach zero.

When Clausius coined the term entropy in 1865 (*), he was searching for a function that allows to depict any process in a diagram such as the figure above, in a sense.

Heat is a vague term – it only exists ‘in transit’: heat is exchanged, but you cannot assign a certain amount of heat to a state. Clausius searched for a function that could be used to denote one specific state in such a map of states, and he came up with a beautiful and simple relationship: the differential change in heat is equal to the change in entropy times the absolute temperature! So temperature entered the mathematical formulation of the laws of thermodynamics via something rather non-intuitive done with differentials.

Entropy really seems to be the more fundamental property. You could actually start from the Second Law and define temperature in terms of the efficiency of perfect machines that are just limited by the fact that entropy can only increase (or that heat always needs to flow from the hotter to the colder object):

Beta stirling animation

Stirling motor – converting heat drawn from a hot gas to mechanical energy. Its optimum efficiency would be similar to that of Carnot’s theoretical machine.
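The standard Carnot argument makes this definition concrete: a reversible engine between reservoirs at temperatures T_h and T_c dumps exactly as much entropy as it takes in, which fixes its efficiency in terms of temperatures alone:

```latex
\frac{Q_h}{T_h} = \frac{Q_c}{T_c}
\quad\Longrightarrow\quad
\eta_{\max} = \frac{W}{Q_h} = 1 - \frac{Q_c}{Q_h} = 1 - \frac{T_c}{T_h}
```

Turning this around, the ratio of two absolute temperatures can be defined as the ratio of the heats exchanged by a perfect machine – no thermometer fluid required.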

The more we learn about the microscopic underpinnings of laws that had been introduced phenomenologically before, the less intuitive the explanations became. It does not help to circumvent entropy by considering what each of the particles in the system does. We think of temperature as some average over velocities (squared). But a single particle travelling its path through empty space would not have a temperature. Neither would the directed motion of a beam of particles contribute to temperature. So temperature is better defined via the spread of the distribution of velocities about their mean.

Even if we consider simple gas molecules, we could define different types of temperature: there is a kinetic temperature calculated from velocities. In the long run – when equilibrium has been reached – the other degrees of freedom (such as rotations) exhibit the same temperature. But when a gas is heated up, heat is transferred via collisions: first the kinetic temperature rises, then energy is transferred to rotations. You could also calculate a temperature from the rotations, and this temperature would temporarily differ from the kinetic temperature.
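A kinetic temperature of this kind can be computed from a sample of molecular velocities as T = m·⟨|v − v̄|²⟩ / (3k_B); note how any common drift (a ‘beam’) drops out. A sketch with simulated nitrogen molecules (the gas parameters are assumptions):

```python
import math
import random

k_B = 1.380649e-23   # J/K, Boltzmann constant
m = 4.65e-26         # kg, approximate mass of an N2 molecule

def kinetic_temperature(velocities):
    """T = m <|v - v_mean|^2> / (3 k_B) for a list of 3D velocity vectors."""
    n = len(velocities)
    mean = [sum(v[i] for v in velocities) / n for i in range(3)]
    var = sum(sum((v[i] - mean[i]) ** 2 for i in range(3))
              for v in velocities) / n
    return m * var / (3 * k_B)

random.seed(1)
sigma = math.sqrt(k_B * 300 / m)   # thermal speed per component at 300 K
gas = [[random.gauss(0, sigma) for _ in range(3)] for _ in range(20000)]
T = kinetic_temperature(gas)       # ~300 K

# A common drift of 500 m/s shifts the mean but not the spread,
# so a fast-moving 'beam' has the same temperature:
beam = [[vx + 500.0, vy, vz] for vx, vy, vz in gas]
T_beam = kinetic_temperature(beam)
```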

So temperature is a property derived from what an incredible number of single particles do. It is a statistical property, and it only makes sense when a system has had enough time to reach equilibrium. As soon as we push the microscopic constituents of the system in ways that make them deviate from their equilibrium behaviour, we get strange results for temperature – such as negative values.

(*) Edit: Though Clausius is known as the creator of the term entropy, the concept as such has been developed earlier by Rankine.

Joule, Thomson, and the birth of big science

I know that I might be guilty of putting too much emphasis on the fancy / sci-fi / geeky fields in physics, as demonstrated by my recent post on quantum field theory.

In order to compensate for that I want to reblog this excellent post by carnotcycle – to demonstrate that I really like thermodynamics. And I mean good, old, phenomenological thermodynamics – pistons, steam engines, and seemingly simple machines (that look like exhibits at a steampunk convention).

Classical thermodynamics is underrated (re geekiness) compared to pondering on entropy and the arrow of time or entropy as it is used in computer science.

It is deceptively simple – you might think it is easy to understand the behavior of ideal gases and steam-powered engines. But isn’t it exactly this type of experiment that often baffles the audience in science shows on TV?

The history of the research done by Joule and Thomson gives you a taste of that. I don’t think it is intuitive why – or why not – a gas should cool when flowing to a region of lower pressure.


Historical background

In early May 1852, in the cellar of a house in Acton Square, Salford, Manchester (England), two men began working a mechanical apparatus which consisted of the above hand-operated forcing pump attached to a coiled length of lead piping equipped with a stopcock at its far end to act as a throttle.

The two men were the owner of the house, 33-year-old James Joule, a Manchester brewer who was rapidly making a name for himself as a first-rate experimental scientist, and 27-year-old William Thomson (later Lord Kelvin), a maverick theoretician who was already a professor of natural sciences at Glasgow University. Over a period of 10 days, they were to conduct a series of experiments with this highly original apparatus which would serve to crank experimental research into the modern era and herald the birth of what we would now call big science.

What Joule and Thomson were looking for…


Quantum Field Theory or: It’s More Than a Marble Turned into a Wiggly Line

I had been trained as an experimental physicist which meant I was good at locating vacuum leaks, adjusting lasers and lenses, telling reasonable data from artefacts, and being the only person that ever replenished the paper feed of the X-ray diffractometer (Yes, at that time we used paper records).

Exactly because of that I took pride in the fact that I attended some non-mandatory lectures in quantum theory – in particular in order to understand the quantum mechanical underpinnings of condensed matter physics and superconductivity.

Ironically, it was most likely my focus on superconductors that made me miss an important point.

Superconductivity is one of those effects that can be described as quantum physics emerging at a macroscopic scale – there is a ‘giant wave function’ comprising many particles, similar to the famous Bose-Einstein condensation. (I am indulging in sloppy terminology here. The giant wave function was proposed by Ginzburg and Landau as a phenomenological explanation of superconductivity; Bardeen, Cooper and Schrieffer finally formulated a full ‘microscopic’ theory.)

Bose Einstein condensate

Bose Einstein condensate (Wikimedia). Many particles acting as one, representing a giant wave-function. If there is a probability for one particle to be in a certain place, overlapping many of those probability waves results in a simple distribution – representing how many particles are in a particular place.

What’s the irony? In condensed matter physics you are investigating the interactions of many (many, many) particles. That’s why I was under the impression that advanced theories in quantum physics are basically single-particle theories plus some way of doing statistics on them. I was not familiar with the term Quantum Field Theory; back then – or at least in my personal condensed-matter corner of the world – this was called Quantum Statistics or Second Quantization.

Until very recently, when the discussions on the Higgs boson etc. rekindled my interest in fundamentals of physics, I was indeed clueless and unable to connect ‘Quantum Statistics’ with the Standard Model in particle physics and basically anything related to single particles.

So what is the connection between understanding the behavior of many particles in a piece of metal and the high-energy experiments with colliding protons?

In popular science books the transition from the classical world to the quantum world is often depicted as the replacement of solid marbles with little wiggly lines. A ‘particle’ becomes a ‘wave’. This is associated with all kinds of philosophical discussions. I tend to state that there would not be any discussion at all if we did not use these pictures. A ‘particle’ is neither a marble nor a wiggly line – it is the concept of a single particle as such that ceases to exist in advanced quantum (field) theory.

It is true that simple quantum systems can be described as particles turned waves: such as the hydrogen atom that can be described nicely using a single-particle Schrödinger wave function. A particle in a box can be represented by a quantum mechanical wave function that represents the probability to find the particle at a certain position:


Particle in an infinite square well (‘box’) – solutions of quantum mechanical wave equations (Wikimedia)

In order to see why and when the particle as such ceases to exist, insights from quantum mechanics (QM) need to be combined with special relativity (SR), the famous E=mc2 in particular.

  • Large momentum – as per QM: Based on Heisenberg’s uncertainty principle, we cannot measure position and momentum of a particle precisely. So if we try to confine the particle – e.g. lock it up in a box – the uncertainty in momentum will increase. Chances increase that the particle will exhibit large momenta.
  • Large energy – as per SR: Large (uncertainties in) momenta mean large (uncertainties in) energies. For a massless particle (such as the photon, the quantum of light) energy and momentum are proportional, connected by a simple relationship in special relativity. If velocities are low compared to the speed of light you may simply think of momentum as the product of (rest) mass and velocity, whereas kinetic energy is the product of mass and velocity squared over 2.
  • Particle creation –  as per SR: We know from the most famous formula in the world that energy is equivalent to mass. So if the uncertainty in energy increases chances increase that new particles are created.  More precisely, particles are created in pairs in order not to violate other conservation laws (e.g. electrical charge).

The uncertainty principle, however, also relates energy to time: the shorter the time scale, the higher the energy that can be used in the creation of – virtual – particles. Full-blown quantum field theory deals with all those cases – virtual and real particles, slow or near the speed of light, loners or packs.

So if we take a closer look at a particular particle, we can identify two interesting length scales:

  • The wavelength of the wave associated with a single particle, as long as this interpretation makes sense. This is called the de Broglie wavelength.
  • The length scale below which the particle as a single entity ceases to exist. This is called the Compton wavelength.
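Both length scales are easily computed; for a (non-relativistic) electron, with textbook constants:

```python
h = 6.62607015e-34    # J s, Planck constant
c = 2.99792458e8      # m/s, speed of light
m_e = 9.1093837e-31   # kg, electron mass

def de_broglie_wavelength(m, v):
    """lambda = h / p, wavelength of the matter wave (non-relativistic p = m v)."""
    return h / (m * v)

def compton_wavelength(m):
    """lambda_C = h / (m c), scale below which the single-particle picture breaks down."""
    return h / (m * c)

lam_dB = de_broglie_wavelength(m_e, 1e6)   # electron at 10^6 m/s: ~7.3e-10 m
lam_C = compton_wavelength(m_e)            # ~2.4e-12 m
```

For this slow electron the de Broglie wavelength (the size of an atom, roughly) is hundreds of times larger than the Compton wavelength – so the single-particle wave picture is perfectly adequate.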

But where do these particles ‘come from’? The answer is of course: they come from an ‘underlying field’, though this may not sound like a satisfactory explanation.

It might be unusual to think of particles as something transient, or as something subordinate to a mysterious field.

However, we have become used to the fact that electromagnetic radiation can be viewed as, or turned into, particles called photons. But if we can imagine a field becoming particles, why not consider particles a sort of manifestation of a ‘field’? David Tong calls this field a ‘sea of stuff’ in his first lecture on Quantum Field Theory (BTW I would highly recommend his lectures – notes and videos – as an introduction to QFT). So protons are just ripples in this sea of proton stuff, just as photons are ripples of the electromagnetic field.

I believe that we often take ideas in science for granted, or consider them plausible or familiar, if we have been exposed to them often enough. But after all: do we really understand or feel the electromagnetic field any better than the sea of stuff that pops out electrons, protons (or the Higgs boson maybe)? Just because we are surrounded by EM wave smog transmitting Facebook posts and TV shows?

Intuition in quantum physics – if there is any – can, in my view, only be acquired by wading through the math. There is no shortcut. Stating that a particle originates from a field, or vice versa, is just a vague stand-in for something that only equations can capture precisely.

Or might there be acceptable shortcuts?

I really do enjoy MinutePhysics – watch the explanation of what matter is. In passing, the electron field is introduced as an ‘electron-ness’ – similar to the ‘three-ness’ invoked when we use the number three.


Are We All Newtonians?

In my most recent posts I showed off: 1) Sandra Bullock killing a computer virus and ordering pizza online, 2) a cartoon making fun of all academic disciplines I refer to on this blog, 3) images of cute furry animals – dead and alive.

I will not be able to top that.

Thus I feel free to bore you readers to death again with one of my classical wall-of-text-y posts on physics, buffered by some fluffy philosophical musings. I will even do what is a no-no for a wannabe popular science blogger: add a mathematical equation.

No, seriously:

I am quite obsessed with trying to understand how and if the world around us makes sense to us. Are we natural Newtonians, provided with a feeling for system dynamics? Disclaimer: This posting is strictly limited to classical dynamics. Even simple classical systems can get complicated, messy, and very interesting, so we do not yet need to invoke spooky quantum effects.

Physics shows on TV present simple facts as amazing stuff – and the audience is baffled. So my gut feeling is that we don’t have a very natural grasp for understanding nature.

Isaac Newton provided us with a formalism that allows for calculating how objects move in space when forces act on them. This is calculus, a field often met with fear and/or fascination by non-science majors. Differential equations, such as Newton’s Second Law, put concisely into symbols what we mean by feedback loops and incremental reactions and back-reactions. This is powerful, elegant, allows for predictions – and makes a lot of metaphysical musings on ‘causality’ obsolete – in my humble opinion.

Consider this: a stretched spring exerts a force on a marble or some other object attached to it. (Well, in the animation it’s a cube, but I prefer the down-to-earth ‘marble’ to the arcane ‘cubic object’.)


Motion of a object attached to a spring. x…distance, v…velocity, a…acceleration (Wikimedia)

Due to the force the marble moves, and due to the motion the force changes – since the force depends on the position of the marble. Due to the changed force the motion of the marble in the next moment changes. This might sound even poetic, but isn’t the equation version stunning?


m · d²x(t)/dt² = −k · x(t)

x is the position of the marble, m is the mass of the marble, and k is a parameter characterizing the stiffness of the spring. The position of the marble is a function of time, x(t). You might notice that I did not include gravity here – the marble is not dangling from a spring extended in the vertical direction; the spring moves horizontally, for instance gliding on a table. The table compensates for gravity (in a sense).

The second derivative of this function – the acceleration – is proportional to the function itself, at any point of time.

Some great physicist, I believe it was Landau but I cannot say for sure (I would appreciate corrections), once said that you only understand a differential equation if you can describe the solution without actually solving it. An excellent description of the full solution can be found in the fabulous Physics Tutorials blog.

What I mean by an intuitive understanding is this: you can describe the motion in natural language, and you can translate back and forth to the mathematical description without calculating every single step. Yet there is no shortcut for, or replacement of, a deep understanding of the math. But just starting from the equation and looking up or recalling the memorized standard solution for ‘homogeneous linear ordinary differential equations of second order’ is not what I would call intuition.

Sketch of the position of a marble attached to an oscillating spring versus time

Imagine the spring… We need to start somewhere, so let’s start with the spring stretched to the maximum extent. The marble stands still, velocity is zero. Now we release the marble. According to the equation there is a rather large acceleration – proportional to the large stretch of the spring, with a sign opposite to the displacement of the marble from its equilibrium position.

The acceleration gives rise to an increase in the magnitude of the velocity – the acceleration is the rate of change of velocity, and it is the curvature of x(t). Since the acceleration is large, the absolute value of the velocity increases fast, and x(t) appears strongly curved. I draw a little curved snippet of x(t) in red.

As the velocity grows, the distance x becomes smaller. As x becomes smaller, the acceleration becomes smaller. Since the acceleration decreases, the velocity hardly changes anymore. If the velocity does not change, x(t) is a straight line. Thus near x=0 I expect the solution to be nearly straight and draw a snippet of x(t) accordingly.

When the marble has passed the zero position, x increases again, but with the opposite sign. I extend the red, nearly linear snippet. An acceleration builds up that decreases the velocity. At a certain position x the absolute value of the acceleration is the same as for the corresponding x ‘on the other side’, so the motion is mirrored: we hit a point with velocity equal to zero again. Zero velocity means the function x(t) displays an extremum (zero slope); the acceleration – the curvature – is at its maximum magnitude there, bending x(t) back towards the line x=0. I add a curved snippet of x(t) below the t axis that looks like the mirror image of the initial snippet, translated by a time difference.

So in summary x(t) is characterized by:

  • Alternating extrema: The curvature has a sign that pushes the curve back to x=0
  • At x=0 there is no curvature at all.

Now we try to connect the snippets, and I guess, the sketched curve already looks familiar. We have rediscovered the characteristic features of sine (or cosine) functions without solving the equation.
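This hand-drawn construction can also be checked numerically – here is a minimal sketch (my own illustration, all numbers chosen arbitrarily) that integrates the spring equation step by step, just as we did with the red snippets, and recovers the cosine:

```python
# Stepwise integration of the spring equation x'' = -omega^2 * x,
# released at maximum stretch with zero velocity.
# (All numbers are arbitrary choices for illustration.)
import math

omega = 2.0      # sqrt(k/m), chosen arbitrarily
dt = 0.0001      # small time step
x, v = 1.0, 0.0  # maximum stretch, marble at rest

trajectory = []
t = 0.0
while t < 10.0:
    a = -omega**2 * x  # the equation: acceleration opposite to x
    v += a * dt        # acceleration changes the velocity ...
    x += v * dt        # ... and velocity changes the position
    t += dt
    trajectory.append((t, x))

# The stepwise curve reproduces x(t) = cos(omega * t):
t_end, x_end = trajectory[-1]
print(abs(x_end - math.cos(omega * t_end)) < 0.01)  # True
```

Releasing the marble with some initial velocity instead would merely shift the curve – a combination of sine and cosine.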

The equation even gives us a first taste of an important concept in physics: symmetry. It is symmetric with respect to time, in physics lingo: If we replace t by -t nothing happens, as time enters the equation only via the second derivative. ‘Nothing happens’ translates to: If we have found a solution x(t) (sine, cosine, combinations of them), we can replace t by -t in the solution, and the result is another valid solution.

If we added friction, that would change: Typical friction forces are functions of the velocity – so the first derivative of x with respect to time enters the equation, and the infamous arrow of time cannot be reversed anymore. Replacing t by -t would now turn something like exponential decay into exponential growth.
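To see the broken symmetry at work, we can add a friction term to the same stepwise integration (again my own sketch, with arbitrary numbers): forward in time the oscillation decays, and flipping the sign of the velocity term – which is what t → -t does – turns the decay into growth:

```python
# The same stepwise integration, now with a friction term -gamma * v.
# (Numbers are arbitrary illustrative choices.)
omega, gamma = 2.0, 0.5
dt = 0.0001

def integrate(damping):
    x, v = 1.0, 0.0
    for _ in range(int(10.0 / dt)):
        a = -omega**2 * x - damping * v  # spring force plus friction
        v += a * dt
        x += v * dt
    return x

x_forward = integrate(gamma)    # friction: the oscillation decays
x_reversed = integrate(-gamma)  # t -> -t flips the sign: it grows
print(abs(x_forward) < 0.2 < abs(x_reversed))  # True
```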

Back to language and philosophy: What is ’cause’ and what is ‘effect’ here – the force exerted by the spring or the motion of the marble? I feel confirmed by Bertrand Russell, who considered many ‘classical philosophical, metaphysical questions’ just unnecessary musings that would be resolved by a more rigorous analysis of language and a solid framework of logic.

Russell even mentions differential equations in physics in History of Western Philosophy:

Since Einstein, distance is between events, not between things, and involves time as well as space. It is essentially a causal conception, and in modern physics there is no action at a distance. All this, however, is based upon empirical rather than logical grounds. Moreover the modern view cannot be stated except in terms of differential equations, and would therefore be unintelligible to the philosophers of antiquity.

This does not mean that these differential equations are intuitive. I think there are two prerequisites for developing intuition:

  • Practice, practice, practice. Make them a part of daily routine.
  • At the beginning – after having recovered from the initial shock of exposure – try to wrap your head around and understand the physics behind the mathematics. Do not simply follow the solutions manual.

Today mathematics software allows for solving algebraic and differential equations easily. I have mixed feelings about this. I have always admired the analytical and problem-solving skills of scientists from Eastern Europe who were trained in the cold war era. They had to learn to solve things with brain, pencil and paper, due to the sheer lack of expensive computing resources.

Thus it is remarkable (to us, today) that Newton proved the propositions of his mechanics geometrically, although he had invented calculus. I stand in awe of geometrical proofs like that, e.g. as published in On The Shoulders of Giants. Margaret Wertheim describes in Physics on the Fringe that even Richard Feynman once struggled hard with understanding Newton’s original proof.

I would be interested in other definitions of an intuitive understanding of nature. Do you think mathematics is required, nice-to-have, or has it just been picked by biased natural scientists who simply prefer to think about – for example – motion in the way I have described? Probably my deliberate choice of such a simple equation is a bit of cheating anyway: My so-called intuition fails epically when (mechanical!) systems become more complicated and chaotic, and I cling to the remember-the-solution approach.


Related reading: The Spinning Gyroscope and Intuition in Physics
(I have been blogging for less than a year and have already started to repeat myself.)

The Spinning Gyroscope and Intuition in Physics

Antique spinning top

If we set this spinning top into motion, it will not fall, even if its axis is not oriented perpendicular to the floor. Instead, its axis slowly changes its orientation. The spinning motion seems to stabilize the gyroscope, just as a moving bicycle is sort of stabilized by its turning wheels. This sounds simple and familiar, but can it really be grasped by intuition immediately?

I do not think so – otherwise it would not have taken us 2000 years to get over Aristotle’s assumptions on motion and rest. And simple experiments demonstrated in science shows would not baffle us – such as the motion of a helium balloon in an accelerating car.

The standard text-book explanation goes like this: There is gravity, as we assume that the spinning top is not supported in its center of gravity. Thus there is a torque. The gyroscope is whirling, thus it has angular momentum. A torque corresponds to a change in angular momentum, analogous to a force resulting in a change of (linear) momentum. The torque vector is perpendicular to gravity and to the axis of the gyroscope. Thus the change in angular momentum is always perpendicular to the current angular momentum vector, and the tip of the spinning top moves in a circle. The angular momentum vector changes all the time – not in length, but in direction – which is called precession.
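This chain of reasoning condenses into a simple formula: the precession rate equals the torque divided by the spin angular momentum, Ω = m·g·r / (I·ω). A quick numeric check (all values are my own assumptions for a small toy top, not from the post):

```python
# Precession rate = torque / spin angular momentum.
# (All values are assumptions for a small toy top, not measured data.)
m = 0.1       # mass in kg
r = 0.03      # distance support point -> center of gravity in m
g = 9.81      # gravitational acceleration in m/s^2
I = 2e-5      # moment of inertia about the symmetry axis in kg m^2
spin = 300.0  # spin angular frequency in rad/s

torque = m * g * r            # gravity acting at the lever arm r
L_spin = I * spin             # angular momentum of the fast rotation
precession = torque / L_spin  # rate at which L changes direction

print(precession)  # roughly 4.9 rad/s - slow compared to the spin
```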

As Richard Feynman pointed out in his Physics Lectures, this explanation constitutes mathematical step-by-step instructions rather than a real explanation. We do not see immediately why the spinning top precesses instead of falling to the ground.

Our skepticism is justified: The text-book explanation does not fully expound the dynamics of the system and explain what really happens – in the very moment the spinning top starts to move. It rather refers to a self-consistent solution: If the gyroscope already precessed in a circle, that circular movement would be consistent with the torque. As everybody in his right mind (R. Feynman) would assume, it actually might fall a bit when it is released.

Spinning top – precession

Generally, the tip of the gyroscope keeps tracing out a wavy or loopy path, which is called nutation.

If the spinning top nutates / starts falling, it loses potential energy. This has to be compensated by an increase in rotational energy: the velocity of the tip of the gyroscope is not constant. (Note that the total angular momentum of the gyroscope is composed of contributions from the fast spinning motion and the slow precession.) The tip of the gyroscope moves on a curved trajectory bending upwards, which finally leads to overshooting the average height.

Friction can make the wobbling decay and finally turn the trajectory into the simple-text-book-path. This simulation allows for turning on friction (which is also equivalent to Feynman’s explanation).

An excellent explanation can be found in this remarkable paper (related to the simulation): The gyroscope is set into rotational motion while still supported. When “gravity is suddenly turned on” by removing the support, the additional vertical component of the angular momentum – due to precession – is suddenly turned on as well. The point is that the initial angular momentum is parallel to the symmetry axis of the gyroscope, and the axis starts from velocity zero. The total angular momentum – still parallel to the symmetry axis – is now the sum of the contribution related to precession and the contribution related to the gyroscope’s fast spin. So the latter is not parallel to the axis any more: The tip of the axis starts tracing out the loopy path (nutation) while it precesses. Only if we tune the angular frequency carefully before releasing the spinning top can the text-book solution be obtained. In this case precession is really maintained by the torque.

So do we understand the gyroscope intuitively now? A deep understanding of angular momentum and torque is a prerequisite in my view. In principle, all of classical mechanics can be derived from Newton’s laws, so the notions of force and momentum should be sufficient. Nevertheless, without introducing angular momentum, there is no way to explain the motion of the gyroscope briefly.

Why do we need “torque” in general? Such concepts are shortcuts that allow for a concise description, but they also reveal the underlying symmetry or essential aspects of a problem. You could describe the dynamics of a rigid body by considering the motion of all the little pieces the body is composed of. But since it is rigid, three non-collinear points would actually be sufficient – or basically any set of independent coordinates: 6 independent numbers.

The preferred choice is: 3 numbers – such as Cartesian coordinates x, y, z – describing the motion of the center of gravity, and 3 numbers describing the rotation of the body. You need two numbers to denote the direction of the axis about which to rotate (similar to longitude and latitude describing a point on a sphere), and one number to denote the angle – how much you rotate. You could also describe any rotation in terms of the components discussed for the gyroscope: precession, nutation and internal rotation.
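As a sketch of how those three numbers pin down a rotation, here is my own illustration using Rodrigues’ rotation formula (one standard way to do it, not something from the original post):

```python
# Rodrigues' formula: rotate v about the unit axis k by 'angle',
# v' = v cos(a) + (k x v) sin(a) + k (k.v)(1 - cos(a)).
# The axis direction is given by two angles, like latitude/longitude.
import math

def rotate(v, theta, phi, angle):
    """Rotate vector v about the axis with polar angle theta and
    azimuthal angle phi, by 'angle' radians."""
    k = (math.sin(theta) * math.cos(phi),
         math.sin(theta) * math.sin(phi),
         math.cos(theta))
    dot = sum(ki * vi for ki, vi in zip(k, v))
    cross = (k[1] * v[2] - k[2] * v[1],
             k[2] * v[0] - k[0] * v[2],
             k[0] * v[1] - k[1] * v[0])
    c, s = math.cos(angle), math.sin(angle)
    return tuple(vi * c + cri * s + ki * dot * (1 - c)
                 for vi, cri, ki in zip(v, cross, k))

# Rotating (1, 0, 0) by 90 degrees about the z axis gives (0, 1, 0):
x, y, z = rotate((1.0, 0.0, 0.0), 0.0, 0.0, math.pi / 2)
print(abs(x) < 1e-12 and abs(y - 1.0) < 1e-12 and abs(z) < 1e-12)  # True
```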

Then Newton’s equation of motion for the rigid body can be re-written as a law of motion for the center of gravity (force equals change of momentum of the center of gravity) and a law for two new properties of the system: the torque equals the change of the angular momentum. Actually, this equation defines what these properties really are. Checking the definitions that have evolved from the law of motion, we conclude that the angular momentum is linear momentum times lever arm, and the torque is force times lever arm. But these definitions as such would not make sense if they had not been generated by the reformulation of Newton’s law.

I think we sometimes adopt or memorize definitions carelessly and consider this learning because these definitions are required by standards / semi-legal requirements and used within a specific community of experts. But there is no shortcut and no replacement of understanding by rote learning.

I believe you need to keep the whole entangled web of relations between fundamental laws in mind, but it is hard to restrict the scope. We could now advance from gyroscope and angular momentum to the deeper connections between symmetries and conservation laws. In order not to get stuck in these philosophical musings all the time – and to do something useful (e.g. as an engineer) – you need to be able to switch to shut-up-and-calculate mode. (‘Shut up and calculate’ is often attributed to Richard Feynman, but I could not find an authoritative confirmation; the phrase has also been traced to David Mermin.)

Sniffing the Path (On the Fascination of Classical Mechanics)

Newton’s laws have been superseded by relativity and quantum mechanics, and our universe is strange and compelling from a philosophical perspective – so classical mechanics is dull.

I do not believe that.

The fundamentals of Newtonian mechanics can be represented in a way that is different from the well-known Force Equals Mass Times Acceleration – mathematically equivalent, but providing a different philosophical twist. I consider this as fascinating as the so-called spooky action-at-a-distance of quantum mechanics.

The standard explanation is this:

  • There are forces described by respective laws (e.g.: The gravitational force)
  • Forces act on matter and result in the acceleration of particles.
  • At every point in time, the path of the particle can be calculated based on its acceleration if you know its location and its velocity at the point in time before.
  • Thus step by step, the particle explores its path and the final trajectory is composed of all these tiny steps.

This is why the classical world seems to be deterministic. (Yes, this explanation lacks the interdependence of space(time) and masses and the limitations imposed by quantum mechanics.)

The deterministic laws can be stated in terms of The Principle of Least Action

  • Consider the point in space where the particle starts off and the end point of the journey. Thus we look at the path in hindsight: We demand that the particle travel from A to B, and we also fix the points of time.
  • Now we evaluate all possible trajectories the particle might travel from A to B.
  • For every path we calculate a number: This is called the “Action” (In simple particle mechanics this is equivalent to integration over the difference of kinetic and potential energy – the “action” isn’t something you can easily “feel” like the force or the momentum). Note that the total energy needs to be conserved, thus e.g. there should be no friction. But at a microscopic level, all forces are conservative anyway.
  • Above all: Note that a single number is assigned to a full path which consists of all the points in space the particle traverses.
  • The path that is actually traversed / realized is the path that is assigned the least action.

Thus it seems that the particle sniffs all the paths ((c) Richard Feynman) and selects a path distinguished by a particular property. In addition, we have replaced the necessity to know the initial location and velocity by the knowledge of the location at the start point and at the end point.

It seems we are (nature is) working backwards.

Actually, the particle really is sort of sniffing the path: The action is a minimum – more precisely: an extremum. Near an extremum the slope of a function is nearly zero. Thus the particle sniffs the neighboring paths and checks for changes in the action. The apparent contradiction between working forwards and backwards is resolved if the Principle of Least Action is applied to smaller and smaller pieces of the trajectory. Since the principle holds for any path, it also needs to hold true for infinitesimal parts of a path. For these infinitesimal paths, the principle boils down (mathematically) to Newton’s law.
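This sniffing can be made concrete with numbers. A toy sketch of my own (mass, time of flight and the perturbation are arbitrary assumptions): a particle thrown upwards in constant gravity, with start and end points fixed – the discretized action of the true parabolic path is smaller than that of a wiggled path through the same endpoints:

```python
# A particle thrown upwards in gravity, start and end height fixed
# at zero. The discretized action (kinetic minus potential energy,
# summed over the steps) is smallest for the true parabolic path.
# (Mass, time of flight and perturbation are arbitrary choices.)
import math

m, g, T = 1.0, 9.81, 2.0
n = 2000
dt = T / n

def action(path):
    s = 0.0
    for i in range(n):
        v = (path[i + 1] - path[i]) / dt       # velocity on this step
        x_mid = 0.5 * (path[i] + path[i + 1])  # height on this step
        s += (0.5 * m * v**2 - m * g * x_mid) * dt
    return s

t = [i * dt for i in range(n + 1)]
true_path = [0.5 * g * ti * (T - ti) for ti in t]  # the real trajectory
wiggled = [xi + 0.3 * math.sin(math.pi * ti / T)   # same endpoints,
           for xi, ti in zip(true_path, t)]        # slightly deformed

print(action(true_path) < action(wiggled))  # True
```

Any other wiggle with the same endpoints gives a larger action as well, since the potential here is linear in x.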

A mathematical derivation might not be satisfactory from a philosophical point of view. Probably the following may serve as an explanation: Working with the Principle of Least Action we do not know, or do not need to know, the velocity at the start time. Thus we need some other information instead. By the way, data such as the total energy, momentum or angular momentum may also be used as a substitute for the full information about the initial conditions in terms of position and velocity.

Using the Principle, we only know where we are heading. Since we do not know the initial velocity – the tangent to our path at the start time – we need other guidance. The Principle provides such guidance and allows the particle to sniff out other paths in order to determine that tangent at every point of time.

Stupid Questions and So-Called Intuition

At the very beginning of my career, still an undergraduate, I organized a meeting of a research project team. The project was concerned with the development of thin films for microwave circuits, and my task in the project was the optimization of those films. But I was not an expert in waveguides and microwave circuits, and I had not developed a feeling for the propagation of waves along the walls of our tiny cavities. I knew Maxwell’s equations, but this is not the same as feeling how a specific electromagnetic system would evolve over time.

Since I was the most inexperienced member of the team I dared to utter what I considered a really stupid and basic question at the end of the meeting, after we had discussed all sorts of details and project management issues:

How exactly do these waves propagate? How are they confined to the waveguides?

There was no answer except: This is really a question which would make us all sweat.

In that moment I promised to myself that I will always dare to ask the so-called stupid questions – both to myself and to others, and I would not care about damaging my reputation as an expert (that I might become eventually, in some fields). I think that I have been able to keep the promise, though it is sometimes hard to resist the temptation to cover gaps in basic knowledge by resorting to expert lingo.

Science shows are successful partly due to the unexpected twist in the explanation of simple everyday phenomena. And I would believe that every student of physics has seen an experienced professor guess wrong when trying to explain such phenomena. Consider the famous experiment of the helium-filled balloon in a car: The balloon is attached to a stiff wire, and the other end of the wire is attached to the seat of the car. The length of the wire is shorter than the height of the car. If the car is not accelerating, the angle between the wire and the bottom plate of the car is 90 degrees. What happens if the car accelerates?

Intuition might suggest that the balloon moves in the direction opposite to the acceleration of the car, but actually it moves in the direction of the acceleration (one can find numerous videos on the internet that demonstrate how fascinating this result seems to be). The first idea is not totally wrong: If the car were evacuated, the balloon would move in the other direction. In an evacuated car you would simply see the relative movement of the accelerating car and the balloon that is still keeping its velocity (relative to the street or to the earth).

The key to this experiment is that the car is filled with air. “Normally”, that is when we observe some body moving in air, we can neglect the effect of the air. Both the body and the air have inertia, that is mass, or more exactly mass density – mass per unit volume. When the car accelerates, any stuff floating inside the car keeps its original velocity. In a frame of reference attached to the car, the movement of floating bodies inside the car can be described by assigning to them an acceleration pointing in the direction opposite to the acceleration of the car. So air and helium both try to move to the back of the car, driven by the same acceleration. Since the density of air is higher, the (pseudo)force per unit volume is higher for air than for helium. Thus air wins and pushes the helium aside. Since the helium is confined to a certain volume by the balloon, we can only judge from the movement of the balloon what is going on.
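The tug-of-war between air and helium can be put into numbers – a back-of-the-envelope sketch (densities, volume and acceleration are my own assumed values):

```python
# In the frame of the accelerating car, every floating object feels a
# pseudo-acceleration a towards the back. The force per volume scales
# with density, so the denser air out-pushes the lighter helium.
# (Densities, volume and acceleration are my own assumed values.)
rho_air = 1.2  # density of air in kg/m^3
rho_he = 0.18  # density of helium in kg/m^3
volume = 0.01  # balloon volume in m^3
a = 3.0        # acceleration of the car in m/s^2

backward_pull = rho_he * volume * a       # inertia of the helium
forward_push = rho_air * volume * a       # "buoyancy" of displaced air
net_force = forward_push - backward_pull  # net force on the balloon

print(net_force > 0)  # True: the balloon is pushed forward
```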

There is a different way to explain it which makes it intuitive again: Consider the balloon being under the influence of gravity instead of the inertial pseudo-force. Isn’t it natural and intuitive that the balloon moves upward in an oxygen/nitrogen atmosphere under the pull of a gravitational field? Again, in a vacuum – e.g. on the moon or on an asteroid that does not have an atmosphere – the helium-filled balloon would be attracted by the celestial body. You might say one needs to be careful to avoid the balloon escaping from the asteroid – just like the astronauts in science fiction movies who take a leap and are propelled into space. But this is something different – this is about the velocity that allows a body to leave the gravitational field, and it can be calculated by equating the kinetic energy and the potential energy at the surface of the asteroid (assuming the potential energy is zero at infinity). In the experiment I described, however, the balloon is placed gently in mid-air, or rather in mid-vacuum, so that it does not have any velocity before gravity starts to pull on it.
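For the asteroid aside, equating kinetic and potential energy gives the escape velocity v = √(2GM/R). With assumed values for a small asteroid (my own numbers, just to get a feel for the scale):

```python
# Escape velocity from equating (1/2) m v^2 = G M m / R.
# (Mass and radius of the asteroid are assumed toy values.)
import math

G = 6.674e-11  # gravitational constant in m^3 / (kg s^2)
M = 1e13       # mass of a small asteroid in kg (assumed)
R = 1000.0     # its radius in m (assumed)

v_escape = math.sqrt(2 * G * M / R)
print(v_escape)  # about 1.2 m/s - a strong jump would indeed do it
```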

So in summary our physics intuition is often bad. It took humankind about 2000 years to find out that it is not required to apply a force in order to make a body move in a straight line.

I believe the key is that our so-called intuition is tied to our everyday environment. We fail terribly if we change essential details, such as removing friction – which makes planets move forever, in contrast to anything moving in our everyday world. I also fail to solve such puzzles, and I fully agree with Richard Feynman, who states (in his Physics Lectures, part 1, on explaining the gyroscope) that it is very often easier to follow the math step by step than to really understand what is going on. This becomes worse the more abstract the concepts in physics become. All those so-called paradoxes related to relativity and quantum mechanics are due to the fact that we have no intuition whatsoever for velocities near the speed of light or for the microscopic world.

I am still determined to sharpen my physics intuition – I have been re-visiting foundations and explanations of allegedly simple phenomena all the time, especially after I encountered the following: Though I “left science” for a period of time, I never stopped reading text books. I was particularly interested in continuing to learn about theoretical physics, to compensate for selecting applied and experimental physics in graduate school. I was actually proud to get into quantum statistics again and follow the math. However, when I was about to go for a degree in energy engineering, I re-visited simple thermodynamics, and I had considerable difficulties developing an intuition (again – if I ever had it…) for discerning quickly how a simple apparatus or machine works. Yet statistical thermodynamics and those hyper-dimensional spaces felt more familiar to me. I am not yet sure what the reasons are; these are my working hypotheses:

  • I have just discovered the differences between physics and engineering – I was just more trained in physics thinking than in engineering reasoning.
  • Anything is a matter of training – no matter how simple it is per se. Having worked with functions over abstract spaces requires training, but reading construction plans does as well.

Real Physicists Do Not Read Popular Science Books

At least this is what I believed for quite a while. Now I think I was wrong – and not only because real scientists might also enjoy light entertainment or simply want to stay informed about the science communication activities of their colleagues.

First of all, I am not even sure if I would qualify as a real physicist anyway: I have a PhD in physics and had been employed as a scientist, but I turned to what is called “industry” in academia (as a side note, I always found that term a bit misleading, as a lot of physicists end up in the consulting business, which I would not call the consulting industry). Fortunately, in my country you can work as a self-employed physicist, much like a Professional Engineer in other countries; thus, theoretically, I am now (again) entitled to call myself a professional physicist.

Having said that, I must admit that with regard to contemporary theoretical physics I might just count as a member of the educated public. I had specialized in applied physics, in particular solid state physics, superconductors, optics and laser physics. This required me to deal with related theories, such as the BCS theory of superconductivity or the theory of many-particle interactions. But I never learned anything about general relativity, for example, or the standard model (as far as theory is concerned – lectures on experimental aspects of nuclear physics do not count).

Some time ago – out of the blue, in the middle of a career in the corporate world – I decided that I wanted to understand better what the LHC at CERN is good for in detail, and what the Higgs particle is. I was really clueless: It took me some time to find out that so-called Quantum Field Theory deals with the same stuff I had known in terms of Second Quantization or Quantum Statistics. Given the fact that I was working on different stuff in a quite exhausting day job, I think I did fairly well in working with text books, lecture notes and videos on QFT and string theory.

Even though I was able to follow the mathematical derivations, it was sometimes quite hard for me to “get the big picture”, such as the conceptual differences between QFT applied to particle physics and QFT applied to condensed matter physics. Actually this is something generic about learning (at least my learning): I can remember how hard it was to calculate the Coriolis force in the first semester of undergraduate studies (before I had learned some vector algebra) and how proud I was to be capable of reproducing the results mathematically. However, I think I did not really get the concept of rotating reference frames at that time. Richard Feynman refers to this in his famous Physics Lectures: The more complicated problems become, the easier it is to follow the math, but the more likely it is to miss the concepts. I believe that thorough training in theoretical physics allows one to grasp these deep conceptual messages immediately from just reading the equations, whereas somebody like me spends too much time digesting the math.

In addition, I believe that you learn more than you think from being embedded in a community, that is, from informal discussions (“on the floor”). This holds true for any technical subject, according to my experience. After all, scientific discovery and technical innovation are also social processes happening in a community that follows specific rules. I understand any type of discovery much better if I know more about the people involved and their motivations and opinions. Or probably this is just a shortcut which is useful specifically for me – tagging equations with stories with a human touch.

To some extent, popular science books might be a bit of a replacement for all this. If I read books like The Trouble with Physics by Lee Smolin or Warped Passages by Lisa Randall, it is easier for me to put all that technical stuff into context.

But I still believe that such books are much, much more helpful if you are also willing to learn about the details. I have read Lisa Randall’s book before and after I had learned at least some QFT, General Relativity and String Theory, and I felt that I was able to understand much more. All of those books start out with concepts that can be understood fairly well in terms of high school or freshman physics (e.g. special relativity and simple quantum mechanics, the Schrödinger wave function etc.), but then the concepts become more and more abstract (as very often these books roughly follow the historical development). I think I could track down the page or section of such books where I had to admit that I was lacking the mathematical background and could effectively just understand the topics in terms of very crude (and, I believe, often misleading) analogies. The good news is that this effect allows you to track your progress in understanding by re-reading the books after some exploration into the world of hard text books.

So I still believe that popular science books can give you the comforting – but false – feeling that you have understood the concepts of modern physics. My goal is to approach these topics from a humble perspective and rather assume I have not fully understood them unless I have also confronted myself with some of the equations behind them. And I am aware of the fact that progress is slow. But my motto (for this blog, but also in general) is to combine a lot of diverse stuff, so my benchmark is not to understand as much as a real professional theorist, but to do fairly well compared to an engineer, to a physicist working in industry, or to a science writer who is interested in theoretical physics. My motivation for writing this blog partly stems from the self-commitment involved in declaring such goals to “the internet community”.