The Twisted Garden Hose and the Myth of the Toilet Flush

If you have wrapped your head around why and how the U-shaped tube in the flow meter (described in my previous post) is twisted by the Coriolis force – here is a video of a simple experiment brought to my attention by the author of the quoted article on gyroscope physics:

You could also test it yourself using a garden hose:

Incidentally, you can observe this phenomenon so clearly because the typical angular frequencies of manual rotation result in a rather strong Coriolis force – in contrast to other everyday phenomena that are falsely attributed to the Coriolis force associated with the rotation of the earth.

It is often stated – and I have even found this in lecture notes and textbooks – that the Coriolis force is responsible for an unambiguous sense of rotation of the vortices in water flowing down the drain of your bathtub or toilet: In the Northern hemisphere water should spin anti-clockwise, in the Southern hemisphere clockwise. Numerous articles debunk this as an urban legend – I pick a random one.

In principle the statement on the sense of rotation is correct, as the rotation of hurricanes is indeed impacted by the Coriolis force. But for toilet flushes and the like the effect is negligible compared to other random factors impacting the flow of water. As pointed out in this article, the momentum of leaves thrown into a bowl of water at a location near the equator (often used in demonstrations to entertain tourists) has more impact than the Coriolis force.

Near the equator the Coriolis force is nearly zero, or more precisely: Since the force is perpendicular both to the velocity and to the axis of rotation, at the equator it would be directed perpendicular to the surface of the earth – there is no horizontal component that could deflect water flowing North-South or East-West. Thus very near the equator the effective force is vanishingly small – much smaller than the (already tiny) forces acting on, say, central European or Austrian bathtubs. And even for those the Coriolis force does not determine the sense of rotation unambiguously.

How to estimate this impact – and why can we observe the twist in the garden hose experiment?

The size of the acceleration due to the Coriolis force is

2 × (angular frequency [rad/s]) × (component of the velocity [m/s] perpendicular to the axis of rotation)

The angular frequency in radians per second is 2π times the number of rotations per second. Thus the angular frequency of the rotation of the earth is about 0,0000727 radians per second. The angular frequency of the motion of the garden hose was many orders of magnitude higher – of the order of 1 radian per second.

Imagine a slice or volume element of water flowing in a sink or a garden hose, and assume a flow speed of the order of 1 meter per second in both cases. The resulting Coriolis accelerations then differ by several orders of magnitude:

  • Bathtub vortex: 0,00015 m/s2
  • Garden hose: 2 m/s2

On the other hand, the acceleration due to gravity is equal to 9,81 m/s2.
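These estimates can be reproduced in a few lines of Python (the angular frequencies and flow speed are the rough assumptions from the text, not measured values):

```python
import math

def coriolis_acceleration(omega, v):
    """Magnitude of the Coriolis acceleration, a = 2 * omega * v_perp."""
    return 2.0 * omega * v

# Earth: one turn per sidereal day (~86164 s)
omega_earth = 2.0 * math.pi / 86164.0   # ~0,0000729 rad/s
omega_hose = 1.0                        # rough estimate from the video, rad/s
v_flow = 1.0                            # assumed flow speed, m/s

a_bathtub = coriolis_acceleration(omega_earth, v_flow)   # ~0,00015 m/s^2
a_hose = coriolis_acceleration(omega_hose, v_flow)       # 2 m/s^2
g = 9.81                                                 # gravity, m/s^2

print(f"bathtub: {a_bathtub:.5f} m/s^2, hose: {a_hose} m/s^2, gravity: {g} m/s^2")
```

The ratio of the two accelerations is more than four orders of magnitude – which is why the hose twists visibly while the bathtub does not care.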

The garden hose in the video moves under the influence of gravity – like a swing or pendulum – and the Coriolis force. (The additional force due to the motion of the experimenter's hands is only required to overcome friction.) Since the Coriolis force is of the same order of magnitude as gravity, you would expect some significant impact, as the resulting force on every slice or volume element of water is the vector sum of the two.

It is also important to keep track of the origins of the components of the velocity:

The radial flow velocity (assumed to be about 1 m/s) in the hose is constant and simply dictated by the pressure in the water line. There is no tangential flow velocity unless caused by the Coriolis force.

In case of the bathtub, the assumed 1 m/s does not refer to the velocity of the tangential motion in the vortex, but to the radial velocity of the water flowing “down” the drain. The tangential velocity is what would be caused by the Coriolis force – ideally.

Any initial velocity is subject to the initial conditions applied to the experiment.

Any random tangential component of the flow velocity in the vortex increases when the water flows down:

If there is a small initial rotation – a small velocity directed perpendicular to the symmetry axis of the flush – pronounced vortices will develop due to the conservation of angular momentum: As the radius of rotation decreases – dictated by the shape of the bathtub or toilet – the angular frequency needs to increase to keep the angular momentum constant. Thus in your typical flush you see how a small, random disturbance is “amplified”.
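The amplification is easy to put into numbers (the initial swirl and the radii below are hypothetical, chosen only to illustrate the orders of magnitude):

```python
def amplified_omega(omega0, r0, r):
    """Angular frequency after the radius of rotation shrinks,
    from conservation of angular momentum: L = m * omega * r^2 = const."""
    return omega0 * (r0 / r) ** 2

# hypothetical numbers: a barely noticeable swirl at the rim of a bathtub
omega0 = 0.01   # rad/s, tiny random initial rotation
r0 = 0.30       # m, radius where the swirl starts
r_drain = 0.02  # m, radius of the drain

print(amplified_omega(omega0, r0, r_drain))  # -> 2.25 rad/s, a visible vortex
```

A factor (r0/r)² of a few hundred turns an imperceptible initial rotation into the familiar vortex – completely swamping the Coriolis contribution.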

However, if you conducted such an experiment very carefully and waited long enough for any disturbance to die out, you would actually see the vortices due to the Coriolis force only.[*] I have now learned from Wikipedia that it was an Austrian physicist who published the first paper on this, in 1908 – Ottokar Tumlirz. Here is the full text of his German paper: Ein neuer physikalischer Beweis für die Achsendrehung der Erde. (Link edited in 2017; in 2013 I had found only the abstract, not the full text.)

Tumlirz calculated the vortices’ velocity of rotation and used the following setup to confirm his findings:


My sketch of the experimental setup Ottokar Tumlirz used in 1908 as it is described in the abstract of his German paper “New physical evidence on the axis of rotation of the earth”. The space between the two plates is filled with water, and the glass tube is open at the bottom. Water (red) flows radially to the tube. The red lines are bent very slightly due to the Coriolis force.

Holes in a cylindrical tube – which is open at the bottom – allow water to enter the tube radially. This is not your standard flush, but a setup built to prevent the amplification of tangential components in the flow. Due to the Coriolis force the flow lines are not straight lines, but are slightly bent.

Tumlirz noted that the water must not flow faster than about 1 mm per minute.

Edit, Oct 2, 2013: See Ernst Mach’s original account of the experiment by Tumlirz (who was Mach’s student).

Edit, June 3, 2015: Somebody has actually done that careful experiment now – and observed the tiny effect due to the Coriolis force alone, as Tumlirz did. Follow-up post here.

Edit, August 2016: Stumbled upon another reference to an experiment done in 1962 and published in Nature (and filmed) – link added to the post from 2015.

Intuition and the Magic of the Gyroscope – Reloaded

I am baffled by the fact that my article The Spinning Gyroscope and Intuition in Physics is the top article on this blog so far.

So I believe I owe you, dear readers, an update.

In the previous article I have summarized the textbook explanation, some more intuitive comments in Feynman’s Physics Lectures, and a new paper by Eugene Butikov.

But there is an explanation of the gyroscope’s motion that might become my new favorite:

Gyroscope Physics by Cleon Teunissen

It is not an accident that Cleon is also the main author of the Wikipedia article on the Coriolis flow meter, as his ingenious take on explaining the gyroscope’s precession is closely related to his explanation of the flow meter.

The Coriolis force is a so-called pseudo-force that you “feel” in a rotating frame of reference: Imagine yourself walking across a rotating disk, or roll a ball soaked in white paint across a rotating black disk and watch its trace. At the center of the rotating disk there is no centrifugal force. But you would still feel dragged to the right if the disk rotates counter-clockwise (viewed from the top).
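The trace of such a ball can be computed by transforming a straight inertial-frame path into the co-rotating frame; a minimal sketch (unit speed and rotation rate are my own assumed numbers):

```python
import math

def trace_on_disk(omega, v, t):
    """Position, in the co-rotating frame, of a ball that moves in a straight
    line through the disk's center in the inertial (lab) frame."""
    x_lab, y_lab = v * t, 0.0                          # straight line along lab x-axis
    c, s = math.cos(-omega * t), math.sin(-omega * t)  # rotate back by omega * t
    return (c * x_lab - s * y_lab, s * x_lab + c * y_lab)

omega = 1.0   # rad/s, counter-clockwise seen from above
v = 1.0       # m/s
# sample the trace: the y-coordinate turns negative, i.e. the path as painted
# on the disk bends to the right of the direction of motion
points = [trace_on_disk(omega, v, t) for t in (0.0, 0.2, 0.4, 0.6)]
```

No force acts on the ball in the lab frame; the curvature of the painted trace is entirely an artifact of the rotating frame – which is exactly what “pseudo-force” means.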

This force dragging you to the right, or making the path of the ball bend even at the center – this is the Coriolis force. It also makes tubes bend in the following way when a liquid flows through them, which allows for determining the flow velocity from the extent of the bending:

Vibration pattern of the tubes during mass flow. Credits: Cleon Teunissen – Coriolis flow meter

The “loop” formed by the tubes rotates about an axis parallel to the direction of the flow whose speed is to be measured (though not through the full 360° – it vibrates back and forth). Now the Coriolis force always drags a moving particle “to the right” – the same holds for the volume elements of the liquid. Note that the force is always directed perpendicular to the axis of rotation and perpendicular to the velocity of the flowing volume element (recall the example of the rolling ball).

In the flow meter, the liquid moves away from the axis in one arm, but towards the axis in the other arm. Thus the forces acting on the two arms are antiparallel, and a torque is exerted on the “loop” that consists of the two arms. The loop is flexible and is thus bent by the torque in the way shown in the figure above.

Now imagine a gyroscope:

Gyroscope with Coriolis force per quadrant indicated. Credits: Cleon Teunissen – Gyroscope Physics

Gravity (acting on the weight mounted on the gyroscope’s axis) tries to make the gyroscope pitch. Cleon now shows why precession results in an “upward pitch” that compensates for that downward pitch and thus finally keeps the gyroscope stable.

The key is to consider separately the 4 quadrants the gyroscope wheel consists of – in a way similar to evaluating the Coriolis force acting on each of the arms of the flow meter:

The tangential velocity associated with the rotation about the symmetry axis of the gyroscope (“roll”, spinning of the wheel) is equivalent to the velocity of the flowing liquid. In each quadrant, mass is moving – “flowing” – away from or to the “swivel axis” – the axis of precession (indicated in black, parallel to gravity).

The per-quadrant Coriolis force is again perpendicular to the swivel axis and perpendicular to the “flow velocity”. Imagine yourself sitting on the blue wheel, looking in the direction of the tangential velocity: Again you are “dragged to the right” if the precession is counter-clockwise. As “right” is defined in relation to the tangential velocity, the direction of the force is reversed in the two lower quadrants.

A torque tries to pitch the wheel “upward”.
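Cleon's quadrant argument can be checked with concrete vectors; a sketch (the geometry and numbers are my own assumptions: swivel axis along z, the wheel spinning about the y-axis, one representative rim point per quadrant):

```python
import numpy as np

m, R = 1.0, 1.0
omega_spin = 10.0                    # rad/s, spin ("roll") of the wheel
Omega = np.array([0.0, 0.0, 1.0])    # rad/s, precession about the vertical swivel axis

def rim_point(theta):
    """Point on the rim; the wheel lies in the x-z plane."""
    return R * np.array([np.cos(theta), 0.0, np.sin(theta)])

def coriolis_force(theta):
    r = rim_point(theta)
    v = omega_spin * np.cross([0.0, 1.0, 0.0], r)   # tangential velocity of the rim
    return -2.0 * m * np.cross(Omega, v)            # Coriolis force in the precessing frame

# one representative point per quadrant of the wheel
thetas = [np.pi / 4, 3 * np.pi / 4, 5 * np.pi / 4, 7 * np.pi / 4]
torque = sum(np.cross(rim_point(t), coriolis_force(t)) for t in thetas)
print(torque)   # net torque purely about the horizontal x-axis
```

The forces on the upper and lower quadrants point in opposite directions along y, but their torques about the horizontal axis add up – the net torque pitches the wheel “upward”, as described.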


If you want to play with gyroscopes yourself: I have stumbled upon a nice shop selling gyroscopes – incl. a steampunk version, miniature Stirling engines, combustion engines (… and strange materials such as ferrofluids).

This is a gyroscope that is subject to precession – due to the counter weight:

Gyroscope gimbals kit with counter weight. Credits:

Random Thoughts on Temperature and Intuition in Thermodynamics

Recently we felt a disturbance of the force: It has been demonstrated that the absolute temperature of a real system can be pushed to negative values.

The interesting underlying question is: What is temperature really? Temperature seems to be an intuitive everyday concept, yet the explanations of ‘negative temperatures’ prove that it is not.

Actually, atoms have not really been ‘chilled to negative temperatures’. I pick two explanations of this experiment that I found particularly helpful – and entertaining:

As Matt points out: “The issue is simply that formally temperature is a relationship between energy and entropy, and you can do some weird things to entropy and energy and get the formal definition of temperature to come out negative.”

Aatish manages to convey the fact that temperature is inversely proportional to the slope of the entropy vs. energy curve using compelling analogs from economics. The trick is to find meaningful economic terms that are related in a way similar to the obscure physical properties you want to explain. MinutePhysics did something similar in explaining fundamental forces (cannot resist this digression):

I once worked in laser physics, so Matt’s explanation involving two-level systems speaks to me. His explanation avoids touching on entropy, and thus avoids using the mysterious term entropy to explain mysterious temperature.

You can calculate the probabilities of the populations of these two states from temperature – or vice versa. If you manage to tweak the populations by some science-fiction-like method (creating non-equilibrium states) you can end up with a distribution that formally results in negative temperatures if you run the math backwards.
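A minimal sketch of running that math backwards (the energy gap is hypothetical, and the populations are simply assumed, not computed from an experiment):

```python
import math

k_B = 1.380649e-23   # J/K, Boltzmann constant
delta_E = 1.0e-20    # J, hypothetical energy gap of the two-level system

def temperature(n_lower, n_upper):
    """Invert the Boltzmann factor n_upper / n_lower = exp(-delta_E / (k_B * T))."""
    return delta_E / (k_B * math.log(n_lower / n_upper))

T_normal = temperature(0.9, 0.1)    # ordinary thermal distribution -> T > 0
T_inverted = temperature(0.1, 0.9)  # population inversion -> formally T < 0
```

As soon as the upper level is more populated than the lower one – a population inversion, the working condition of every laser – the logarithm changes sign and the formal temperature comes out negative.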

[In order to allow for tagging this post with Physics in a Nutshell I need to state that the nutshell part ends here.]

But how come that ‘temperature’ ever became such an abstract concept?

From a very pragmatic perspective focussed on macroscopic, everyday phenomena temperature is what we measure by thermometers, that is: calculated from the change of the volume of gases or liquids.

You do not need any explanation of what temperature or even entropy really is if you want to design efficient machines, such as turbines.

As a physics PhD working towards an MSc in energy engineering, I have found lectures in Engineering Thermodynamics eye-opening:

As a physicist I had been trained to focus on fundamental explanations: What is entropy really? How do we explain physical properties microscopically? That is: calculating statistical averages of the properties of zillions of gas molecules, or imagining an abstract ‘hyperspace’ whose number of dimensions is proportional to the number of particles. The system as such moves through this abstract space as time passes by.

In engineering thermodynamics the question What is entropy? was answered by: Consider it a property that can be calculated (and used to evaluate machines and processes).

Rankine cycle with reheat

Temperature-entropy diagram for steam. The red line represents a process called the Rankine cycle: A turbine delivers mechanical energy while temperature and pressure of the steam decrease.

New terms in science have been introduced for fundamental conceptual reasons and/or because they came in handy in calculations. In my view, enthalpy belongs to the second class, because it makes descriptions of gases and fluids flowing through apparatuses more straightforward.

Entropy is different, even though it, too, can be reduced to its practical aspects: Entropy has been introduced in order to tame heat and irreversibility.

Richard Feynman stated (in Vol. I of his Physics Lectures, published 1963) that research in engineering contributed two times to the foundations of physics: The first time when Sadi Carnot formulated the Second Law of Thermodynamics  (which can be stated in terms of an ever increasing entropy) and the second time when Shannon founded information theory – using the term entropy in a new way. So musing about entropy and temperature – this is where hands-on engineering meets the secrets of the universe.

I tend to think that temperature has never been that understandable and familiar:

Investigations of the behavior of ideal gases (fortunately air, even moist air, behaves nearly like an ideal gas) revealed that there needs to be an absolute zero temperature – the temperature at which the volume of an ideal gas would approach zero.
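This extrapolation is easy to reproduce (the ‘measured’ volumes below are made up to match an ideal gas at constant pressure):

```python
# Volume of a fixed amount of ideal gas at constant pressure, at two Celsius
# temperatures (Charles's law: V is proportional to the absolute temperature)
t1, V1 = 0.0, 1.000     # degrees C, litres
t2, V2 = 100.0, 1.366   # degrees C, litres (ratio ~373,15 / 273,15)

slope = (V2 - V1) / (t2 - t1)
t_zero = t1 - V1 / slope   # Celsius temperature at which V would vanish
print(t_zero)              # close to -273,15 degrees C
```

Two data points and a straight line suffice – no thermodynamics beyond the ideal gas law is needed to locate absolute zero.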

When Clausius coined the term entropy in 1850 (*), he was searching for a function that allows one to depict any process in a diagram such as the figure above, in a sense.
(*) Edit 1 – Jan. 31: Thanks to a true historian of science.

Heat is a vague term – it only exists ‘in transit’: Heat is exchanged, but you cannot assign a certain amount of heat to a state. Clausius searched for a function that could be used to denote one specific state in such a map of states, and he came up with a beautiful and simple relationship: The differential change in heat is equal to the absolute temperature times the change in entropy! So temperature entered the mathematical formulation of the laws of thermodynamics via something rather non-intuitive done with differentials.
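Clausius’ relationship, written out (for a reversible exchange of heat; δQ is the heat exchanged, S the entropy, T the absolute temperature):

```latex
\delta Q_{\mathrm{rev}} = T \,\mathrm{d}S
\qquad\Longleftrightarrow\qquad
\mathrm{d}S = \frac{\delta Q_{\mathrm{rev}}}{T}
```

The left form says what the text says in words; the right form shows why S, unlike heat, is a function of the state itself.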

Entropy really seems to be the more fundamental property. You could actually start from the Second Law and define temperature in terms of the efficiency of perfect machines that are just limited by the fact that entropy can only increase (or that heat always needs to flow from the hotter to the colder object):

Beta stirling animation

Stirling motor – converting heat drawn from a hot gas to mechanical energy. Its optimum efficiency would be similar to that of Carnot’s theoretical machine.

The more we learn about the microscopic underpinnings of laws that had first been introduced phenomenologically, the less intuitive the explanations became. It does not help to try to circumvent entropy by considering what each of the particles in the system does. We think of temperature as some average over velocities (squared). But a single particle travelling its path through empty space would not have a temperature, and neither would the directed motion of a beam of particles contribute to temperature. So temperature is better defined via the spread of the distribution of velocities.
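This distinction – spread versus directed motion – can be illustrated numerically; a sketch with assumed numbers (the molecule mass is roughly that of nitrogen, the velocity samples are synthetic):

```python
import numpy as np

k_B = 1.380649e-23   # J/K, Boltzmann constant
m = 6.6e-26          # kg, roughly the mass of a nitrogen molecule

def kinetic_temperature(velocities):
    """T from the spread of velocities: (3/2) k_B T = (m/2) <|v - <v>|^2>.
    Subtracting the mean velocity removes directed (beam-like) motion,
    which does not contribute to temperature."""
    v = np.asarray(velocities)
    fluct = v - v.mean(axis=0)
    return m * np.mean(np.sum(fluct**2, axis=1)) / (3.0 * k_B)

rng = np.random.default_rng(0)
thermal = rng.normal(0.0, 300.0, size=(100000, 3))   # random velocities, m/s
beam = thermal + np.array([1000.0, 0.0, 0.0])        # same gas, drifting as a beam
# both samples yield (nearly) the same temperature
```

The drifting sample has far more kinetic energy, yet exactly the same temperature – only the fluctuations about the mean count.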

Even if we consider simple gas molecules, we could define different types of temperature: There is a kinetic temperature calculated from velocities. In the long run – when equilibrium has been reached – the other degrees of freedom (such as rotations) would exhibit the same temperature. But when a gas is heated up, heat is transferred via collisions: So first the kinetic temperature rises, and only then is the energy transferred to rotations. You could calculate a temperature from the rotations, and this temperature would be different from the kinetic temperature.

So temperature is a property derived from what an incredible number of single particles do. It is a statistical property, and it only makes sense when the system has had enough time to reach an equilibrium. As soon as we push the microscopic constituents of the system in a way that makes them deviate from their equilibrium behaviour, we get strange results for temperature – such as negative values.

Further reading:
This post was also inspired by some interesting discussions on LinkedIn a while ago – on the second law and the nature of temperature.
(*) Edit 2 – Feb. 2: Though Clausius is known as the creator of the term entropy, the concept as such has been developed earlier by Rankine.

Joule, Thomson, and the birth of big science

I know that I might be guilty of putting too much emphasis on the fancy / sci-fi / geeky fields in physics, as demonstrated by my recent post on quantum field theory.

In order to compensate for that I want to reblog this excellent post by carnotcycle in order to demonstrate that I really like thermodynamics. And I mean good, old, phenomenological thermodynamics – pistons, steam engines, and seemingly simple machines (that look like exhibits at a steampunk convention).

Classical thermodynamics is underrated (re geekiness) compared to pondering on entropy and the arrow of time or entropy as it is used in computer science.

It is deceptively simple – you might think it is easy to understand the behavior of ideal gases and steam-powered engines. But isn’t it exactly this type of experiment that often baffles the audience of science shows on TV?
The history of the research done by Joule and Thomson gives you a taste of that. I don’t think it is intuitive why – or why not – a gas should cool down when flowing into a region of lower pressure.


Historical background

In early May 1852, in the cellar of a house in Acton Square, Salford, Manchester (England), two men began working a mechanical apparatus which consisted of the above hand-operated forcing pump attached to a coiled length of lead piping equipped with a stopcock at its far end to act as a throttle.

The two men were the owner of the house, 33-year-old James Joule, a Manchester brewer who was rapidly making a name for himself as a first-rate experimental scientist, and 27-year-old William Thomson (later Lord Kelvin), a maverick theoretician who was already a professor of natural sciences at Glasgow University. Over a period of 10 days, they were to conduct a series of experiments with this highly original apparatus which would serve to crank experimental research into the modern era and herald the birth of what we would now call big science.

What Joule and Thomson were looking for…

View original post 1,824 more words

Quantum Field Theory or: It’s More Than a Marble Turned into a Wiggly Line

I had been trained as an experimental physicist which meant I was good at locating vacuum leaks, adjusting lasers and lenses, telling reasonable data from artefacts, and being the only person that ever replenished the paper feed of the X-ray diffractometer (Yes, at that time we used paper records).

Exactly because of that I took pride in the fact that I attended some non-mandatory lectures in quantum theory – in particular in order to understand the quantum mechanical underpinnings of condensed matter physics and superconductivity.

Ironically, it was most likely my focus on superconductors that made me miss an important point.

Superconductivity is one of those effects that can be described as quantum physics emerging at a macroscopic scale – there is a ‘giant wave function’ comprising many particles, similar to the infamous Bose-Einstein condensation. (I am indulging in sloppy terminology here. The giant wave function was proposed by Ginzburg and Landau as a phenomenological explanation of superconductivity; Bardeen, Cooper and Schrieffer finally formulated a full ‘microscopic’ theory.)

Bose Einstein condensate

Bose Einstein condensate (Wikimedia). Many particles acting as one, representing a giant wave-function. If there is a probability for one particle to be in a certain place, overlapping many of those probability waves results in a simple distribution – representing how many particles are in a particular place.

What’s the irony? In condensed matter physics you are investigating the interactions of many (many, many) particles. That’s why I was under the impression that advanced theories in quantum physics are basically the theories applied to single particles plus some way of doing statistics on them. I was not familiar with the term Quantum Field Theory, but back then or in my personal condensed matter corner of the world this was called Quantum Statistics or Second Quantization.

Until very recently, when the discussions on the Higgs boson etc. rekindled my interest in fundamentals of physics, I was indeed clueless and unable to connect ‘Quantum Statistics’ with the Standard Model in particle physics and basically anything related to single particles.

So what is the connection between understanding the behavior of many particles in a piece of metal and the high-energy experiments with colliding protons?

In popular science books the transition from the classical world to the quantum world is often depicted as the replacement of solid marbles with little wiggly lines. A ‘particle’ becomes a ‘wave’. This is associated with all kinds of philosophical discussions. I tend to state that there would not be any discussion at all if we did not use these pictures. A ‘particle’ is neither a marble nor a wiggly line; it is the concept of a single particle as such that ceases to exist in advanced quantum (field) theory.

It is true that simple quantum systems can be described as particles turned waves: such as the hydrogen atom that can be described nicely using a single-particle Schrödinger wave function. A particle in a box can be represented by a quantum mechanical wave function that represents the probability to find the particle at a certain position:


Particle in an infinite square well (‘box’) – solutions of quantum mechanical wave equations (Wikimedia)
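The energy levels of such a box are easy to compute; a sketch (the 1-nm wide well and the electron are my own assumed example):

```python
import math

hbar = 1.054571817e-34   # J s, reduced Planck constant
m_e = 9.1093837e-31      # kg, electron mass
L = 1.0e-9               # m, a hypothetical 1-nm wide 'box'

def energy_level(n):
    """E_n = n^2 * pi^2 * hbar^2 / (2 m L^2) for the infinite square well."""
    return n**2 * math.pi**2 * hbar**2 / (2 * m_e * L**2)

# energies grow with n^2, and even the lowest level is nonzero:
# confining the particle forces a minimum momentum via the uncertainty principle
E1, E2, E3 = (energy_level(n) for n in (1, 2, 3))
```

The nonzero ground-state energy is the first hint of the argument that follows: squeezing the box (smaller L) pushes all energies up.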

In order to see why and when the particle as such ceases to exist, insights from quantum mechanics (QM) need to be combined with special relativity (SR) , the famous E=mc2 in particular.

  • Large momentum – as per QM: Based on Heisenberg’s uncertainty principle, we cannot measure position and momentum of a particle precisely. So if we try to confine the particle – e.g. lock it up in a box – the uncertainty in momentum will increase. Chances increase that the particle will exhibit large momenta.
  • Large energy  – as per SR: Large (uncertainties in) momenta mean large (uncertainties in) energies. For a massless particle (such as the photon, the quantum of light) energy and momentum are proportional; these properties are connected by simple relationships in special relativity. If velocities are low compared to the speed of light you might simply think of momentum as the product of (rest) mass and velocity, whereas kinetic energy is the product of mass and velocity squared, divided by 2.
  • Particle creation –  as per SR: We know from the most famous formula in the world that energy is equivalent to mass. So if the uncertainty in energy increases chances increase that new particles are created.  More precisely, particles are created in pairs in order not to violate other conservation laws (e.g. electrical charge).

The uncertainty principle, however, also relates energy to time: The shorter the time scale, the higher the energy that can be used in the creation of – virtual – particles. Full-blown quantum field theory deals with all those cases – virtual and real particles, slow ones and ones near the speed of light, lonely ones and packs.

So if we take a closer look at a particular particle, we can identify two interesting length scales:

  • The wavelength of the wave associated with a single particle, as long as this interpretation makes sense. This is called the de Broglie wavelength.
  • The length scale where the particle as a single entity ceases to exist. This is called the Compton wavelength.
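Both length scales are easy to evaluate; a sketch for an electron (the chosen speed of 10⁶ m/s is an arbitrary, non-relativistic example):

```python
import math

h = 6.62607015e-34   # J s, Planck constant
m_e = 9.1093837e-31  # kg, electron mass
c = 299792458.0      # m/s, speed of light

def de_broglie(m, v):
    """Wavelength of the matter wave, lambda = h / (m v) (non-relativistic)."""
    return h / (m * v)

def compton(m):
    """Scale below which pair creation matters, lambda_C = h / (m c)."""
    return h / (m * c)

lam_dB = de_broglie(m_e, 1.0e6)   # a slow electron, ~7e-10 m
lam_C = compton(m_e)              # ~2.4e-12 m
```

For a slow particle the de Broglie wavelength is much larger than the Compton wavelength – the single-particle wave picture is safe. The two scales meet only when the velocity approaches the speed of light.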

But where do these particles ‘come from’? The answer is of course – they come from an ‘underlying field’, though this may not sound like a satisfactory explanation.

It might be unusual to think of particles as something transient, or as something subordinate to a mysterious field.

However, we have become used to the fact that electromagnetic radiation can be viewed as – or turned into – particles called photons. But if we can imagine a field becoming particles, why not consider particles a sort of manifestation of a ‘field’? David Tong calls this field a ‘sea of stuff’ in his first lecture on Quantum Field Theory (BTW I would highly recommend his lectures – notes and videos – as an introduction to QFT). So protons are just ripples in this sea of proton stuff, just as photons are ripples of the electromagnetic field.

I believe that we often take ideas in science for granted, or consider them plausible or familiar, if we have been exposed to them often enough. But after all: Do we really understand or feel the electromagnetic field any better than the sea of stuff that pops out electrons, protons (or the Higgs boson, maybe)? Just because we are surrounded by EM-wave smog transmitting Facebook posts and TV shows?

Intuition in quantum physics – if there is any – can, in my view, only be acquired by wading through the math. There is no shortcut. Stating that a particle originates from a field, or vice versa, is just a vague replacement for something that only equations can capture precisely.

Or might there be acceptable shortcuts?

I really do enjoy MinutePhysics – watch the explanation of what matter is. In passing, the electron field is introduced as an ‘electronness’ – similar to the ‘threeness’ invoked when we use the number three:


Further reading:
I am maintaining a list of my favorite physics resources, incl. lecture notes on QFT, on this page.

Are We All Newtonians?

In my most recent posts I showed off: 1) Sandra Bullock killing a computer virus and ordering pizza online, 2) a cartoon making fun of all the academic disciplines I refer to on this blog, 3) images of cute furry animals – dead and alive.

I will not be able to top that.

Thus I feel free to bore you readers to death again with one of my classical wall-of-text-y posts on physics, buffered by some fluffy philosophical musings. I will  even do what is a no-no for a wannabe popular science blogger: Adding a mathematical equation.

No, seriously:

I am quite obsessed with trying to understand how and if the world around us makes sense to us. Are we natural Newtonians, provided with a feeling for system dynamics? Disclaimer: This posting is strictly limited to classical dynamics. Even simple classical systems can get complicated, messy, and very interesting, so we do not yet need to invoke spooky quantum effects.

Physics shows on TV present simple facts as amazing stuff – and the audience is baffled. So my gut feeling is that we don’t have a very natural grasp for understanding nature.

Isaac Newton provided us with a formalism that allows for calculating how objects move in space when forces act on them. This is calculus, a field often met with fear and/or fascination by non-science majors (I have not read this book, but based on reviews I expect it to be a good read, probably more so for the non-mathematically trained). Differential equations, such as Newton’s Second Law, put into symbols concisely what we mean by feedback loops and incremental reactions and back-reactions. This is powerful and elegant, allows for predictions – and makes a lot of metaphysical musings on ‘causality’ obsolete, in my humble opinion.

Consider this: A stretched spring exerts a force on a marble or some other object attached to it. (Well, it’s a cube, but I prefer the down-to-earth notion ‘marble’ to the arcane ‘cubic object’.)


Motion of an object attached to a spring. x…distance, v…velocity, a…acceleration (Wikimedia)

Due to the force the marble moves, and due to the motion the force changes – since the force depends on the position of the marble. Due to the changed force, the motion of the marble in the next moment changes. This might even sound poetic, but isn’t the equation version stunning?
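Written out, the equation is Newton’s Second Law with the spring force −k·x on the right-hand side:

```latex
m \, \frac{\mathrm{d}^2 x(t)}{\mathrm{d}t^2} = -k \, x(t)
```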


x is the position of the marble, m is the mass of the marble, and k is a parameter characterizing the stiffness of the spring. The position of the marble is supposed to be a function of time, x(t). You might notice that I did not include gravity here – the marble is not dangling from a spring extended in the vertical direction; rather, the spring moves horizontally, e.g. gliding on a table. The table compensates for gravity (in a sense).

The second derivative of this function – the acceleration – is proportional to the function itself, at any point of time.

Some great physicist, I believe it was Landau but I cannot say for sure (I would appreciate corrections), once said that you only understand a differential equation if you can describe the solution without actually solving it. An excellent description of the full solution can be found in the fabulous Physics Tutorials blog.

What I mean by an intuitive understanding is this: You can describe the motion in natural language, and you can translate back and forth to the mathematical description without calculating every single step. Yet there is no shortcut for or replacement of a deep understanding of the math. But just starting from the equation and looking up or recalling the memorized standard solution for ‘homogeneous ordinary differential equations of second order’ is not what I would call intuition.

Sketch of the position of a marble attached to an oscillating spring versus time

Imagine the spring…. We need to start somewhere, so let’s start with the spring stretched to the maximum extent. The marble stands still; its velocity is zero. Now we release the marble. According to the equation, there is a rather large acceleration – proportional to the large stretch of the spring, with a sign opposite to the displacement of the marble from its equilibrium position.

The acceleration gives rise to an increase in velocity – actually the acceleration is the rate of change of velocity, and thus the curvature of x(t). Since the acceleration is large, the absolute value of the velocity increases quickly, and x(t) appears strongly curved. I am drawing a little curved snippet of x(t) in red.

As the velocity gets larger, the distance x becomes smaller. As x becomes smaller, the acceleration becomes smaller, and so the velocity hardly changes any more. If the velocity does not change, x(t) is expected to be a straight line. Thus near x = 0 I expect the solution to be a straight line and draw a snippet of x(t) accordingly.

When the marble has passed the zero position, the distance x increases again, but with the opposite sign. I extend the red, nearly linear snippet. An acceleration builds up that corresponds to a decreasing velocity. At a certain position the absolute value of the acceleration is the same as at the mirror-image position on the other side – so the motion is kind of mirrored, and we have to hit a point where the velocity is zero again. Since the velocity is the slope of x(t), the function displays an extremum there; the acceleration is at its maximum absolute value at that point and bends x(t) back towards the line x = 0. I add a curved snippet of x(t) below the t axis which looks like the mirror image of the initial snippet, just translated by a time difference.

So in summary x(t) is characterized by:

  • Alternating extrema: The curvature has a sign that pushes the curve back to x=0
  • At x=0 there is no curvature at all.

Now we try to connect the snippets, and I guess, the sketched curve already looks familiar. We have rediscovered the characteristic features of sine (or cosine) functions without solving the equation.
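The snippet-by-snippet argument can also be checked numerically – a minimal sketch, not part of the original argument, with the arbitrary illustration values k = m = 1 and the starting condition described above (maximum stretch, marble at rest):

```python
import math

# A minimal numerical sketch of m*x'' = -k*x, starting as described above:
# spring stretched to the maximum (x = 1), marble at rest. The values
# k = m = 1 are arbitrary illustration choices.
k, m = 1.0, 1.0
omega = math.sqrt(k / m)      # angular frequency of the oscillation
x, v = 1.0, 0.0               # maximum stretch, velocity zero
dt, t = 0.001, 0.0

for _ in range(6283):         # integrate over roughly one period (2*pi)
    a = -(k / m) * x          # the acceleration, read off the equation
    v += a * dt               # semi-implicit Euler: update velocity first...
    x += v * dt               # ...then position
    t += dt

# After one period the marble is back near maximum stretch, and the curve
# traced out by x matches cos(omega * t) - the 'rediscovered' cosine.
```

Updating the velocity before the position (semi-implicit Euler) keeps the numerical oscillation stable over many periods, so the sketched cosine does not artificially grow or shrink.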

The equation even gives us a first taste of an important concept in physics: symmetry. It is symmetric with respect to time reversal, in physics lingo. If we replace t by -t nothing happens, as time enters only via the second derivative – the two sign changes cancel. ‘Nothing happens’ translates to: If we have found a solution x(t) (sine, cosine, combinations of them), we can replace t by -t in the solution and the result is another valid solution.

If we added friction, that would change: Typical friction forces are functions of the velocity – so the first derivative of x with respect to time enters the equation, and the infamous arrow of time cannot be reverted any more. Now replacing t by -t would turn something like an exponential decay into an exponential rise, which does not happen physically.
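To see this arrow of time appear, one can add a friction term proportional to velocity to the same numerical sketch (the damping constant c = 0.2 is again an arbitrary choice):

```python
import math

# The same oscillator with a friction force proportional to velocity:
# m*x'' = -k*x - c*x'. The damping constant c = 0.2 is an arbitrary choice.
k, m, c = 1.0, 1.0, 0.2
x, v = 1.0, 0.0
dt = 0.001
turning_points = []           # |x| whenever the velocity changes sign

for _ in range(40000):        # integrate over several periods
    a = (-k * x - c * v) / m  # friction adds the velocity-dependent term
    v_new = v + a * dt
    if v != 0.0 and v_new * v < 0.0:
        turning_points.append(abs(x))   # a turning point of the motion
    v = v_new
    x += v * dt

# Each turning point is lower than the one before: an exponential decay of
# the amplitude. Replacing t by -t would turn this into an exponential
# rise - the asymmetry the frictionless equation did not have.
```

The strictly decreasing list of turning points is the exponential envelope one would obtain analytically; reversing the direction of time would make it grow instead.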

Back to language and philosophy: What is ’cause’ and what is ‘effect’ here – the force exerted by the spring or the motion of the marble? I feel vindicated by Bertrand Russell, who considered many ‘classical philosophical, metaphysical questions’ just unnecessary musings that would be resolved by a more rigorous analysis of language and a solid framework of logic.

Russell even mentions differential equations in physics in History of Western Philosophy:

Since Einstein, distance is between events, not between things, and involves time as well as space. It is essentially a causal conception, and in modern physics there is no action at a distance. All this, however, is based upon empirical rather than logical grounds. Moreover the modern view cannot be stated except in terms of differential equations, and would therefore be unintelligible to the philosophers of antiquity.

This does not mean that these differential equations are intuitive, though. I think there are two prerequisites for developing intuition:

  • Practice, practice, practice. Make them a part of daily routine.
  • At the beginning – after having recovered from the initial shock of exposure – try to wrap your head around and understand the physics behind the mathematics. Do not simply follow the solutions manual.

Today mathematics software allows for solving algebraic equations and differential equations easily. I have mixed feelings about this. I have always admired the analytical and problem-solving skills of scientists from Eastern Europe who had been trained in the Cold War era. They had to learn how to solve things with brain, pencil and paper, due to the sheer lack of access to expensive computers.

Thus it is remarkable (to us, today) that Newton proved his laws geometrically, without resorting to his own newly invented calculus. I stand in awe of geometrical proofs of results like that, e.g. as published in On The Shoulders of Giants. Margaret Wertheim describes in Physics on the Fringe that even Richard Feynman once struggled really hard with understanding Newton’s original proof.

I would be interested in other definitions of an intuitive understanding of nature. Do you think mathematics is required, just nice-to-have – or has it been picked by biased natural scientists who simply prefer to think about, for example, motion in the way I have described? Probably my deliberate choice of such a simple equation is a bit of cheating anyway: My so-called intuition fails epically when (mechanical!) systems become more complicated and chaotic, and I cling to the remember-the-solution approach.


Related reading: The Spinning Gyroscope and Intuition in Physics
(I have been blogging for less than a year and yet have started to repeat myself.)

The Spinning Gyroscope and Intuition in Physics

Antique spinning top

If we set this spinning top into motion, it would not fall, even if its axis were not oriented perpendicular to the floor. Instead, its axis would change its orientation slowly. The spinning motion seems to stabilize the gyroscope, just as the moving bicycle is sort of stabilized by its turning wheels. This sounds simple and familiar, but can this really be grasped by intuition immediately?

I do not think so – otherwise it would not have taken us 2000 years to get over Aristotle’s assumptions on motion and rest. And simple experiments demonstrated in science shows would not baffle us – such as the motion of a helium balloon in an accelerating car.

The standard text-book explanation goes like this: There is gravity, as we assume that the spinning top is not supported in its center of gravity. Thus there is a torque. The gyroscope is whirling, thus it has angular momentum. A torque corresponds to a change in angular momentum, analogous to a force resulting in a change of (linear) momentum. The torque vector is perpendicular to gravity and to the axis of the gyroscope. Thus the change in angular momentum is always perpendicular to the current angular momentum vector, and the tip of the spinning top moves in a circle. The angular momentum vector changes all the time – not in length, but in direction – which is called precession.
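The step-by-step instructions can at least be turned into a number: since the torque only changes the direction of the angular momentum, the axis circles at the rate torque divided by angular momentum. A back-of-the-envelope sketch, with all values made up for illustration rather than measured:

```python
# Text-book precession rate: the torque tau = m*g*r (r = lever arm from
# support point to centre of gravity) changes only the direction of the
# spin angular momentum L = I*omega, so the axis circles at Omega = tau / L.
# All numbers below are made-up illustration values, not measured data.
m_top = 0.1          # mass of the top in kg (assumed)
g = 9.81             # gravitational acceleration in m/s^2
r = 0.03             # support point to centre of gravity, in m (assumed)
I = 2e-5             # moment of inertia about the symmetry axis (assumed)
omega_spin = 150.0   # fast spin, in rad/s (assumed)

tau = m_top * g * r            # torque, perpendicular to axis and gravity
L = I * omega_spin             # angular momentum of the fast spin
Omega_precession = tau / L     # the slow circling of the axis, in rad/s
```

Note that Omega_precession shrinks as omega_spin grows: the faster the top spins, the slower the precession – consistent with the text-book picture above.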

As Richard Feynman pointed out in his Physics Lectures, this explanation constitutes mathematical step-by-step instructions rather than a real explanation. We do not see immediately why the spinning top precesses instead of falling to the ground.

Our skepticism is justified: The text-book explanation does not fully expound the dynamics of the system and explain what really happens in the very moment the spinning top starts to move. It rather refers to a self-consistent solution: If the gyroscope already precessed in a circle, that circular movement would be consistent with the torque. As everybody in his right mind (R. Feynman) would assume, it actually might fall a bit when it is released.

Spinning Top – Precession

Generally, the tip of the gyroscope keeps tracing out a wavy or loopy path, which is called nutation.

If the spinning top nutates / starts falling, it loses potential energy. This has to be compensated by an increase in rotational energy – the velocity of the tip of the gyroscope is not constant. (Note that the total angular momentum of the gyroscope is composed of contributions from the fast spinning motion and the slow precession.) The tip of the gyroscope moves on a curved trajectory bending upwards, which finally leads to overshooting the average height.

Friction can make the wobbling decay and finally turn the trajectory into the simple text-book path. This simulation allows for turning on friction (the damped solution is also equivalent to Feynman’s explanation).

An excellent explanation can be found in this remarkable paper (related to the simulation): The gyroscope is set into rotational motion while still supported. When “gravity is suddenly turned on” by removing the support, the additional vertical component of the angular momentum – due to precession – is suddenly turned on as well. The point is that the initial angular momentum is parallel to the symmetry axis of the gyroscope, and the axis starts from velocity zero. The total angular momentum – still parallel to the symmetry axis – is the sum of the part related to precession and the part related to the gyroscope’s fast movement. So the latter is not parallel to the axis any more: The tip of the axis starts tracing out the loopy path (nutation) when it precesses. Only if we tune the angular frequency carefully before we release the spinning top can the text-book solution be obtained. In this case precession is really maintained by the torque.

So do we understand the gyroscope intuitively now? A deep understanding of angular momentum and torque is a prerequisite, from my point of view. In principle, all of classical mechanics can be derived from Newton’s laws, so the notions of force and momentum should be sufficient. Nevertheless, without introducing angular momentum, there is no way to explain the motion of the gyroscope briefly.

Why do we need “torque” in general? Such concepts are shortcuts that allow for a concise description, but they also reveal the underlying symmetry or essential aspects of a problem. You could describe the dynamics of a rigid body by considering the motion of all the little pieces the body is composed of. But since it is rigid, a few points are actually sufficient – three points, say, whose nine coordinates are tied together by three fixed distances. You can select basically any set of independent coordinates: 6 independent numbers.

The preferred choice is: 3 numbers – such as Cartesian coordinates x, y, z – describing the motion of the center of gravity, and 3 numbers describing the rotation of the body. You need two numbers to denote the direction of the axis about which to rotate (similar to longitude and latitude describing a point on a sphere), and one number to denote the angle – how much you rotate. You could also describe any rotation in terms of the components discussed for the gyroscope: precession, nutation and internal rotation.
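The ‘2 + 1 numbers’ description of a rotation can be sketched in code: two angles (like longitude and latitude) select the unit axis, a third angle says how far to rotate about it. This is a plain-Python version of the Rodrigues rotation formula; the function name and the spherical-coordinate convention are my own choices:

```python
import math

def rotation_matrix(lon, lat, angle):
    """Rotation by 'angle' about the unit axis given by two direction angles."""
    # unit axis from 'longitude' and 'latitude'
    nx = math.cos(lat) * math.cos(lon)
    ny = math.cos(lat) * math.sin(lon)
    nz = math.sin(lat)
    c, s = math.cos(angle), math.sin(angle)
    C = 1.0 - c
    # Rodrigues rotation formula, written out component by component
    return [
        [c + nx * nx * C,      nx * ny * C - nz * s, nx * nz * C + ny * s],
        [ny * nx * C + nz * s, c + ny * ny * C,      ny * nz * C - nx * s],
        [nz * nx * C - ny * s, nz * ny * C + nx * s, c + nz * nz * C],
    ]

def apply(R, v):
    """Matrix-vector product: rotate the vector v."""
    return tuple(sum(R[i][j] * v[j] for j in range(3)) for i in range(3))

# Example: lat = pi/2 puts the axis along z; rotating (1, 0, 0) by 90
# degrees about z should give (0, 1, 0).
R = rotation_matrix(0.0, math.pi / 2, math.pi / 2)
rotated = apply(R, (1.0, 0.0, 0.0))
```

Three numbers in, one rotation out – exactly the counting argument above.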

Then Newton’s equation of motion for the rigid body can be re-written as a law of motion for the center of gravity (force equals change of momentum of the center of gravity) and a law for two new properties of the system: the torque equals the change of the angular momentum. Actually, this equation defines what these properties really are. Checking the definitions that have evolved from the law of motion, we conclude that the angular momentum is linear momentum times the lever arm, and the torque is force times lever arm. But these definitions as such would not make sense if they had not been generated by the reformulation of Newton’s law.
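The ‘times the lever arm’ definitions are components of cross products: the torque is r × F and the angular momentum is r × p. A minimal pure-Python check of that picture, with made-up vectors:

```python
def cross(a, b):
    """Cross product of two 3-vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

r = (2.0, 0.0, 0.0)   # lever arm of 2 m along x (made-up value)
F = (0.0, 3.0, 0.0)   # force of 3 N along y, perpendicular to the lever arm

tau = cross(r, F)     # torque: points along z, magnitude 2 m * 3 N = 6 N*m
```

For this perpendicular configuration the magnitude is exactly force times lever arm, and the direction – along z – is perpendicular to both, just as in the gyroscope discussion.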

I think we sometimes adopt or memorize definitions carelessly and consider this learning, because these definitions are required by standards or semi-legal requirements and used within a specific community of experts. But learning definitions by heart is no shortcut to, and no replacement of, understanding.

I believe you need to keep the whole entangled web of relations between fundamental laws and absolutely necessary quantities in mind, but it is hard to restrict the scope. We could now advance from the gyroscope and angular momentum to the deeper connections between symmetries and conservation laws. In order not to get stuck in these philosophical musings all the time – and to do something useful (e.g. as an engineer) – you need to be able to switch to shut-up-and-calculate mode. (‘Shut up and calculate’ is often attributed to Richard Feynman, but I could not find an authoritative confirmation.)