Computers, Science, and History Thereof

I am reading three online resources in parallel – on the history and the basics of computing, computer science, software engineering, and the related culture and ‘philosophy’. An accidental combination I find most enjoyable.

Joel on Software: Joel Spolsky’s blog – a collection of classic essays. What every developer needs to know about Unicode. New terms like Astronaut Architects and Leaky Abstractions. How to start a self-funded software company, how to figure out the price of software, how to write functional specifications. Bringing back memories of my first encounters with Microsoft VBA. He has the best examples – Martian Headsets to explain web standards.

The blog started in 1999 – rather shortly after I had entered the IT industry. So it is an interesting time capsule, capturing technologies and trends I was sort of part of – including the relationship with one large well-known software company.

Somewhere deep in Joel’s blog I found references to another classic; it was in a piece of advice on how to show passion as an applicant for a software developer job. Tell them how reading this moved you to tears:

Structure and Interpretation of Computer Programs. I think I have found the equivalent to Feynman’s Physics Lectures in computer science! I have hardly ever read a textbook or attended a class that was both so philosophically insightful and useful in a hands-on, practical way. Using Scheme (Lisp) as an example, important concepts are introduced step-by-step, via examples, viewed from different perspectives.

It was amazing how far you can get with purely Functional Programming. I did not even notice that they had not used a single assignment statement (no data mutation) until far into the course.

The quality of the resources made available for free is incredible – which holds for all the content I am praising in this post: full textbook, video lectures with transcripts, slides with detailed comments. It is also good to know, and reassuring, that despite the allegedly fast-paced changes of technology, basic concepts have not changed that much in decades.

But if you are already indulging in nostalgic thoughts why not catch up on the full history of computing?

Creatures of Thought. A sublime book-like blog on the history of computing – starting with the history of telephone networks and telegraphs, covering computing machines – electro-mechanical or electronic – and related, maybe underappreciated hardware components like the relay, and including biographic vignettes of the heroes involved.

The author’s PhD thesis (available for download on the About page) covers the ‘information utility’ vision that was ultimately superseded by the personal computer. This is an interesting time capsule for me as well, as this story ends about where my personal journey started – touching PCs in the late 1980s, but having been taught the basics of programming by sending my batch jobs to an ancient mainframe.

From such diligently done history of engineering I can only learn not to rush to any conclusions. There are no simple causes and effects, or unambiguous stories about who invented what and who was first. It’s all subtle evolution and meandering narratives, randomness and serendipity. Quoting from the post that indicates the beginning of the journey, on the origins of the electric telegraph:

Our physics textbooks have packaged up the messy past into a tidy collection of concepts and equations, eliding centuries of development and conflict between competing schools of thought. Ohm never wrote the formula V = IR, nor did Maxwell create Maxwell’s equations.

Though I will not attempt to explore all the twists and turns of the intellectual history of electricity, I will do my best to present ideas as they existed at the time, not as we retrospectively fit them into our modern categories.


Phone, 1970s, Austria

The kind of phone I used at the time when the video lectures for Structure and Interpretation of Computer Programs were recorded and when I submitted my batch jobs of Fortran code to be compiled. I have revived the phone now and then.


Learning General Relativity

Math blogger Joseph Nebus does another A – Z series of posts, explaining technical terms in mathematics. He asked readers for their favorite pick of things to be covered in this series, and I came up with General Covariance, which he laid out in this post – in his signature style, using neither equations nor pop-science images like deformed rubber mattresses, but ‘just words’. As so often, he manages to explain things really well!

Actually, I asked for that term as I am in the middle of yet another physics (re-)learning project – in the spirit of my ventures into QFT a while back.

For a while now I have tried (on this blog) to cover only the physics related to something I have both education in and hands-on experience with. Regarding General Relativity I have neither: My PhD was in applied condensed-matter physics – lasers, superconductors, optics – and this article by physicist Chad Orzel about What Math Do You Need For Physics? covers well what sort of math you need in that case. Quote:

I moved into the lab, and was concerned more with technical details of vacuum pumps and lasers and electronic circuits and computer data acquisition and analysis.

So I cannot find the remotest way to justify why I would need General Relativity on a daily basis – insider jokes about very peculiarly torus-shaped underground water/ice tanks for heat pumps aside.

My motivation is what I described in this post of mine: Math-heavy physics is – for me, that means a statistical sample of 1 – the best way of bracing myself for any type of tech / IT / engineering work. This positive effect is not even directly related to math/physics aspects of that work.

But I also noticed ‘on the internet’ that there is a community of science and math enthusiasts, who indulge in self-studying theoretical physics seriously as a hobby. Often these are physics majors who ended up in very different industry sectors or in management / ‘non-tech’ jobs and who want to reconnect with what they once learned.

For those fellow learners I’d like to publish links to my favorite learning resources.

There seem to be two ways to start a course or book on GR, and sometimes authors toggle between both modes. You can start from the ‘tangible’ physics of our flat space (spacetime) plus special relativity and then gradually ‘add a bit of curvature’ and related concepts. In this way the introduction sounds familiar, and less daunting. Or you could try to introduce the mathematical concepts at a most rigorous abstract level, and return to the actual physics of our 4D spacetime and matter as late as possible.

The latter makes a lot of sense, as you had better unlearn some things you took for granted about vector and tensor calculus in flat space: a vector must no longer be visualized as an arrow that can be moved around carelessly in space, and one must be very careful in visualizing what transforming coordinates really means.
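To make the warning concrete (my addition, in standard notation, not a rendering from any of the linked resources): in curved space the naive partial derivative of a vector field is no longer a tensor; it has to be replaced by the covariant derivative, which adds a correction term built from the Christoffel symbols that encode how the basis vectors themselves change from point to point:

```latex
% Covariant derivative of a vector field V^\nu:
\nabla_\mu V^\nu = \partial_\mu V^\nu + \Gamma^{\nu}_{\;\mu\lambda} V^\lambda
% In flat space with Cartesian coordinates all \Gamma^{\nu}_{\;\mu\lambda}
% vanish and the covariant derivative reduces to the familiar partial
% derivative -- which is exactly the intuition one has to unlearn.
```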

For motivation or as an ‘upper level pop-sci intro’…

Richard Feynman’s lecture on curved space might be a very good primer. Feynman explains what curved space and curved spacetime actually mean. Yes, he is using that infamous beetle on a balloon, but he also gives some numbers obtained by back-of-the-envelope calculations that explain important concepts.

For learning about the mathematical foundations …

I cannot praise these Lectures given at the Heraeus International Winter School Gravity and Light 2015 enough. Award-winning lecturer Frederic P. Schuller goes to great lengths to introduce concepts carefully and precisely. His goal is to make all implicit assumptions explicit and avoid allusions to misguided ‘intuitions’ one might have got used to when working with vector analysis, tensors, gradients, derivatives etc. in our tangible 3D world – covered by what he calls ‘undergraduate analysis’. Only in lecture 9 is the first connection made back to Newtonian gravity. Then it is back to math only for some more lectures, until finally our 4D spacetime is discussed in lecture 13.

Schuller mentions in passing that Einstein himself struggled with the advanced math of his own theory, e.g. in the sense of not yet distinguishing clearly between the mathematical structure that represents the real world (a topological manifold) and the multi-dimensional chart we project our world onto when using an atlas. It is interesting to pair these lectures with this paper on the history and philosophy of general relativity – a link Joseph Nebus has pointed to in his post on covariance.

When learning physics or math from videos you need to be much more disciplined than when plowing through textbooks – in the sense that you absolutely have to do every single step in a derivation on your own. It is easy to delude yourself that you understood something by following a derivation passively, without calculating anything yourself. So what makes these lectures so useful is that the tutorial sessions have been recorded as well: tutorial sheets and videos can be found here.
(Edit: The YouTube channel of the event does not have all the recordings of the tutorial sessions; only the conference website has them. The former domain does not seem to work any more, but the content has been preserved.)

You also find brief notes for these lectures here.

For a ‘physics-only’ introduction …

… I picked a classical, ‘legendary’ resource: Landau and Lifshitz give an introduction to General Relativity in the last third of the second volume of their Course of Theoretical Physics, The Classical Theory of Fields. Their text is terse, perhaps similar in style to Dirac’s classic introduction to quantum mechanics. No humor, but sublime and elegant.

Landau and Lifshitz need neither manifolds nor tangent bundles, and they use the 3D curvature tensor of space a lot, in addition to the metric tensor of 4D spacetime. They introduce concepts of differences in space and time right from the start, plus the notion of simultaneity. Mathematicians might be shocked by a somewhat handwaving, ‘typical physicist’s’ way of dealing with differentials, with the way vectors at different points in space are related, etc. – neglecting (at first sight; explore every footnote in detail!) the tower of mathematical structures you actually need to do this precisely.

But I would regard Lev Landau as sort of a Richard Feynman of the East, so it takes his genius not to make any silly mistakes by taking the seemingly intuitive notions too literally. And I recommend this book only when combined with a most rigorous introduction.

For additional reading and ‘bridging the gap’…

I recommend Sean Carroll’s Lecture Notes on General Relativity from 1997 (precursor of his textbook), together with his short No-Nonsense Introduction to GR as a summary. Carroll switches between more intuitive physics and very formal math. He keeps his conversational tone – well known to readers of his popular physics books – which makes his lecture notes a pleasure to read.

Artist's concept of general relativity experiment (Public Domain, NASA, Wikimedia)


So this was a long-winded way to present just a bunch of links. This post should also serve as sort of an excuse that I haven’t been really active on social media or followed up closely on other blogs recently. It seems in winter I am secluding myself from the world in order to catch up on theoretical physics.

On Learning

Some years ago I was busy with projects that required a lot of travelling but I also needed to stay up-to-date with latest product features and technologies. When a new operating system was released a colleague asked how I could do that – without having time for attending trainings. Without giving that too much thought, and having my personal test lab in mind, I replied:

I think I always try to solve some problem!

tl;dr – you can skip the rest as this has summed it all up.

About one year ago I ‘promised’ to write about education, based on my experiences as a student and as a lecturer or trainer. I haven’t done so yet – as I am not sure if my simplistic theory can be generalized.

There are two very different modes of learning that I enjoy and consider effective:

  1. Trying to solve some arbitrary problem that matters to me (or a client) and starting to explore the space of knowledge from that angle.
  2. Indulging in so-called theory, seemingly totally unrelated to any practical problem to be solved.

Mode 2 was what I tried to convey in my post about the positive effects of reading theoretical physics textbooks in the morning. The same goes for cryptography.

I neither need advanced theoretical physics when doing calculations for heat pump systems, nor do I need the underlying math and computer science when tweaking digital certificates. When I close the theory books, I am in mode 1.

In the last weeks that mode 1 made me follow a rather steep learning curve with respect to database servers and SQL scripts. I am sure I have made every possible stupid mistake when exploring all the options. I successfully killed performance with too many nested sub-queries, and it took me some time to recognize that referring to the previous row is not as straightforward as in a spreadsheet program. One could argue that a class on database programming might have been more effective here, and I cannot prove otherwise. But most important for me was: I finally achieved what I wanted, and it was pure joy all the way. I am a happy dilettante perhaps.
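As an illustration of that ‘row before’ problem (a toy example of my own, not the actual scripts from my project): a window function such as LAG() lets you reference the previous row in a single pass, where a nested sub-query per row does the same work far less efficiently. This sketch assumes an SQLite build with window-function support (version 3.25 or later, as bundled with recent Python releases):

```python
import sqlite3

# Toy example of my own: monthly meter readings. In a spreadsheet,
# "the row before" is simply the cell above; in SQL, the window function
# LAG() plays that role and avoids a nested sub-query per row.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (month INTEGER, value REAL)")
conn.executemany(
    "INSERT INTO readings VALUES (?, ?)",
    [(1, 100.0), (2, 130.0), (3, 175.0)],
)

# Difference to the previous row, computed in a single pass over the table:
rows = conn.execute(
    """
    SELECT month,
           value,
           value - LAG(value) OVER (ORDER BY month) AS delta
    FROM readings
    ORDER BY month
    """
).fetchall()

print(rows)
conn.close()
```

The first row has no predecessor, so its delta comes back as NULL (None in Python) – one of those small differences to a spreadsheet that took me a while to appreciate.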

I might read a theoretical book on data structures and algorithms someday and let it merge with my DIY tinkering experience in my subconsciousness – as this is how I think those two modes work together.

As for class-room learning and training, or generally learning with or from others, I like those ways best that cater to my two modes:

I believe that highly theoretical subjects are suited best for traditional class-room settings. You cannot google the foundations of some discipline as such foundations are not a collection of facts (each of them to be googled) but a network of interweaving concepts – you have to work with some textbook or learn from somebody who lays out that network before you in a way that allows for grasping the structure – the big picture and the details. This type of initial training also prepares you for future theoretical self-study. I still praise lectures in theoretical physics and math I attended 25 years ago to the skies.

And then there is the lecturer speaking to mode 1: the seasoned expert who delivers ‘notes from the field’. The most enjoyable lecture in the degree I completed last year was a geothermal energy class – given by a university professor who was also the owner of an engineering consultancy doing such projects. He introduced the theory in passing, but he talked about the pitfalls that you would not expect from learning about best practices and standards.

I look back on my formal education(s) with delight, as most of the lectures, labs, or projects appealed to either mode 1 or mode 2. In contrast to most colleagues I loved the math-y theory. In projects, on the other hand, I had ample freedom to play with stuff – devices, software, technology – and to hone practical skills, fortunately without much supervision. In retrospect, the universities’ most important role with respect to the latter was to provide the infrastructure. By infrastructure I mean expensive equipment – such as the pulsed UV lasers I once played with – or contacts to external ‘clients’ that you would not have had a chance to get in touch with otherwise. Two years ago I did the simulations part of a students’ group project, which was ‘ordered’ by the operator of a wind farm. I brought the programming skills to the table – as this was not an IT degree program – but I was able to apply them to a new context and learn about the details of wind power.

In IT security I have always enjoyed the informal exchange of stories from the trenches with other experienced professionals – this includes participation in related forums. Besides, it fosters the community spirit, and there is no need to do content-less ‘networking’ of any other sort. I have just a few days of formal education in IT.

But I believe that your mileage may vary. I applied my preferences to my teaching, that is: explaining theory in – probably too much – depth and then jumping onto any odd question asked by somebody and trying something out immediately. I was literally oscillating between the flipchart and the computer with my virtual machines – I had been compared to a particle in quantum mechanics whose exact location is unknown because of that. I am hardly able to keep to my own agenda, even when given every freedom to design a lecture or training and to write every slide from scratch. And I look back in horror on delivering trainings (as an employed consultant) based on standardized slides not to be changed. I think I was not the best teacher for students and clients who expected well-organized trainings – but I know that experts enjoyed our jam sessions formerly called workshops.

When I embarked on another degree program myself three years ago, I stopped doing any formal teaching – before that I had given a lecture on Public Key Infrastructure for some years, in a master’s degree program in IT security. Having completed my degree in renewable energy last year, I figured that I was now done with any formal learning. So far I feel that I don’t miss out on anything, and I stay away from related job offerings – even ‘prestigious’ ones.

In summary, I believe in a combination of pure, hard theory, not to be watered down, and not necessarily to be made more playful – combined with learning most intuitively and in an unguided fashion from other masters of the field and from your own experiments. This is playful no matter how often you bang your head against the wall when trying to solve a puzzle.

Physics book from 1895

A physics book written in 1895, a farewell present from former colleagues in IT – one of the greatest gifts I ever got. My subconsciousness demands this is the best way to illustrate this post. I have written a German post on this book, which will most likely never be translated, as the essence of that post is quotes showing a peculiar use of the German language that strikes the modern reader as quite odd.

“Student Friendly Quantum Field Theory”

As other authors of science blogs have pointed out: Most popular search terms are submitted by students. So I guess it is not the general public who is interested in: the theory of gyroscopes, (theory of) microwaves, (theory of) heat pumps, (theory of) falling slinkies, or the Coriolis force.

I believe that these search terms are submitted by students in physics or engineering.

“Student Friendly Quantum Field Theory” has been the top search term for this blog since I had put the textbook of the same title by Robert Klauber on my physics resources list.

So I pay my dues now and dedicate a post to this textbook: I am reviewing the first edition (2013), as I have just missed the publication of the second. In short: I think the book is a pedagogical masterpiece.

This is also an auxiliary posting in my series on QFT. I want to keep this post at a reasonable non-technical level so as not to scare off my typical readers too much (but I apologize for some technical terms – having the “target audience” of physics students in mind).

Quantum field theory for the gifted amateur has been searched for as well. I believe indeed that this is a book for the gifted amateur in terms of a self-studying quantum physics enthusiast, at least more so than other books on QFT.

However, the amateur should also have had a thorough education in theoretical physics. If you have mastered your typical [*] four (?) semesters of theoretical physics – classical mechanics, electrodynamics, (non-relativistic) quantum theory, and statistical mechanics – you should be well prepared to understand the material in this book. If the following key words trigger some memories of equations, you meet the requirements: Lagrange formalism of classical mechanics, Poisson bracket, Maxwell’s equations in four-vector notation.
[*] I graduated at a time when bachelor’s degrees were unheard of here in Europe – I cannot explain the prerequisites properly in terms of modern curricula or “graduate” versus “undergraduate”.
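For readers checking themselves against those key words, these are the equations I have in mind (my rendering, in standard notation): the Poisson bracket of classical mechanics and the inhomogeneous Maxwell equations in four-vector form (Heaviside–Lorentz units):

```latex
% Poisson bracket of two phase-space functions f and g:
\{f, g\} = \sum_i \left(
    \frac{\partial f}{\partial q_i}\frac{\partial g}{\partial p_i}
  - \frac{\partial f}{\partial p_i}\frac{\partial g}{\partial q_i}
\right)

% The inhomogeneous Maxwell equations in four-vector notation,
% with field-strength tensor F^{\mu\nu} and four-current J^\nu:
\partial_\mu F^{\mu\nu} = J^\nu
```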

I had some exposure to quantum field theory that is used in solid state physics, too, but I don’t believe this is a pre-requisite.

I was most interested in a thorough understanding of the basics and less so in an elegant discussion of leading-edge theories. As discussed in detail earlier, I can track down exactly when I don’t understand popular physics books – and by “understanding” I mean being able to recognize the math behind a popular text. (However, in this sense pop-sci books can by definition not be “understood” by the lay audience they are written for.)

I didn’t have an idea how the Higgs boson gives particles their mass, and I could not imagine how the electron’s spin could be a by-product yielded by a theory – so I wanted to plow through the foundations of QFT. If you want to understand the Higgs boson and field, too, this book does not yet explain that – but I believe you need the thorough grounding given by SFQFT if you want to tackle more advanced texts.

Student Friendly Quantum Field Theory by Robert Klauber.

Student Friendly Quantum Field Theory by Robert Klauber. I have put my personal slinky on top of the book for no particular reason.

We don’t learn much about Robert Klauber himself. The blurb says:

Bob Klauber, PhD, is retired from a career of working in industry, where he led various research projects and obtained over twenty patents. At different times during and after that career, he taught a diverse number of graduate and undergraduate level physics courses.

So the author is not a tenured professor, and I believe this might be advantageous.

Written solely for students, not for peers

Klauber does not need to show off his smartness to his peers. Yes, he has some pet peeves – such as questioning the true nature of the vacuum, often painted in popular science as a violent sea consisting of pairs of particles popping out of nowhere and vanishing again. Klauber tags some opinions of his as non-mainstream [**], and he links to a few related papers of his own – but he does so in a rather humble way. Your mileage may vary, but I found it very refreshing not to find allusions to the impact and grandness of his own original work, or to his connectedness in the scientific community (in terms of …when I occasionally talked to Stephen Hawking last time at That Important Conference…).
[**] I feel the need to add a disclaimer: This is not at all “outsider physics” or unorthodox in the way the term is used by professionals bombarded with questionable manuscripts by authors set to refute Einstein or Newton.

But it is not an “elegant” book either. It is not providing professionals with “new ways to see QFT as you never saw it before”; it is an anti-Feynman-y book so to speak. It is not a book I would describe in the way the publishers of the Commemorative Issue of Feynman’s Physics Lectures (1989) did:

Rereading the books, one sometimes seems to catch Feynman looking over his shoulder, not at his audience, but directly at his colleagues, saying, “Look at that! Look how I finessed that point! Wasn’t that clever?”

Nothing is Trivial, Easy and Obvious – and brevity is avoided

Student Friendly Quantum Field Theory (SFQFT) is dedicated to tackling the subject solely from the perspective of the learning student. Klauber goes to great lengths to anticipate questions that might be on the reader’s mind and often refers to his own learning experience – and he always perfectly nails it. He explicitly utters his contempt for declaring things trivial or straightforward.

Klauber has put considerable effort into developing the perfect way(s) of presenting the material. Read a summary of his pedagogical strategy here. He avoids conciseness and brevity, and he wonders why these seem to be held in such high regard in education. This also explains why a book of more than 500 pages covers basics only. The same ideas are expounded in different forms:

  • Summary upfront, “big picture”.
  • Thorough derivations. In the case of renormalization, he also gives sort of a “detailed overview” version in a single chapter before the theory unfolds over several chapters. The structure of the book is fractal, so to speak: there are whole chapters dedicated to an overview – such as the Bird’s Eye View given in Ch. 1 or the summary chapter on renormalization – and each chapter and section contains its own summaries, too.
  • So-called Wholeness Charts, tabular representations of steps in derivations. I also found the charts in the first chapters that allow for comparing non-relativistic QM and QFT, and “particle QM” and field theory, extremely useful – I owe Klauber for finally clearing up my personal confusion, since I hadn’t noticed before that I had been trained in non-relativistic field theories. The major steps in the development of the theory for the different kinds of particles are laid out in three columns of a table covering several pages, one for each type of particle.
  • Another summary at the end.

Nothing is omitted (The ugly truth).

The downside:

Now I have understood why Dirac called this an ugly theory, one he refused to consider the final fundamental theory of the universe. Klauber gives you all the unwieldy algebra. I have not seen anything as ugly and messy as the derivations of renormalization. The book has about 520 pages: 100 of them are dedicated to renormalization, and 85 to the calculation of cross-sections in order to compare them with experiment.

The good things:

Klauber gives you really all the derivations; not a single step is omitted. Very often, equations quoted in earlier chapters are repeated for the convenience of the reader. The book contains problems, but none of the derivations essential for grasping new concepts are completely outsourced to the problems sections.

Scope of the book

You can read the first chapters of the book online, and here is the Table of Contents.

Klauber suspects the addition of modern theories and applications would be confusing and I believe he is right.

He starts with the relation of QFT to non-relativistic and/or non-field-y quantum physics. I like his penchant for the Poisson bracket in particular, and the thorough distinction between wave functions and fields, and how and if there is a correspondence. Take this with a grain of salt, as I had been confused a lot by an older book that referred to everything – the Schrödinger wave function as well as fields – as “waves”.

Klauber uses quantum electrodynamics as the example for explaining concepts. Thus he follows the historical route approximately, and he quotes Feynman who stated that he always thought about theories in terms of palpable examples.

The table of contents is rather “orthodox”.

Free fields are covered first, along with the related equations for scalar bosons (the simplest example), fermions, and vector bosons. The latter two are needed as the ingredients of QED: electrons and photons. I enjoyed the subtle remarks about over-emphasizing the comparison with harmonic oscillators.

Field equations for fermions (such as electrons) do not have classical counterparts – this is where all attempts to explain by metaphor must end. I set out to write a pop-sci series on QFT and accidentally read the chapter on fermions at the same time as David Yerle posted this challenge on his blog – how to explain the electron’s spin. Now I believe there is no shortcut to understanding the electron’s spin – and as far as I recall, Richard Feynman and Sean Carroll (my benchmarks in terms of providing correct popularizations) weren’t able to really explain the electron’s spin in popular terms either. There are different places to start from, but since these field equations have no classical counterpart, you always end up introducing or “discovering” mathematical objects that behave in a non-intuitive way: “objects” that anti-commute without being equal to zero. (There aren’t any numbers A and B that satisfy AB = -BA unless either A or B is zero.)
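A minimal numerical illustration of such objects (my own example, not from the book): the Pauli matrices σ_x and σ_y anti-commute, AB = −BA, although neither factor is zero – something no pair of ordinary nonzero numbers can do:

```python
# The Pauli matrices sigma_x and sigma_y anti-commute, AB = -BA,
# although neither of them is zero. 2x2 complex matrices, pure Python.

def matmul(a, b):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def neg(a):
    """Negate every entry of a matrix."""
    return [[-x for x in row] for row in a]

sigma_x = [[0, 1], [1, 0]]
sigma_y = [[0, -1j], [1j, 0]]

ab = matmul(sigma_x, sigma_y)  # sigma_x * sigma_y
ba = matmul(sigma_y, sigma_x)  # sigma_y * sigma_x

print(ab == neg(ba))  # the two products differ only by sign
```

For numbers, AB = −BA forces one factor to vanish; for matrices (and for the fermion field operators in QFT) it does not – which is exactly the non-intuitive behavior referred to above.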

Feynman Diagram

Picture of a Feynman diagram, inscribed by Richard P. Feynman to Wikimedia user Ancheta Wis, in Volume 3 of his Feynman Lectures on Physics (Quantum Mechanics).

Interactions are introduced via Maxwell’s equations and QED. Inspecting these equations, we finally learn how symmetry and forces are related – usually cloaked as ‘symmetry gives rise to forces’ in popular texts. Actually, this was one of the things I was most interested in, and it was a bit hard to plow through the chapter on spinors (structures representing electrons) before getting to that point.

Symmetry is covered in two chapters – first for free fields and then for interacting fields. All that popular talk about rotating crystals etc. will rather not explain what gauge symmetry really is. Again I come to the conclusion that using QED (and the Lagrangian associated with Maxwell’s equations) as an example is the right thing to do, but I will need to re-read other accounts that introduce interactions immediately after having explained scalar bosons.
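To give an idea of what is meant here (my one-line summary in one common sign convention, not a rendering of Klauber’s equations): the QED Lagrangian is built so that it stays unchanged under a local phase rotation of the electron field, and demanding this invariance is exactly what forces the photon field A_μ and its coupling to the electron into the theory:

```latex
% QED Lagrangian density (one common convention):
\mathcal{L} = \bar{\psi}\left( i \gamma^\mu D_\mu - m \right)\psi
            - \tfrac{1}{4} F_{\mu\nu} F^{\mu\nu},
\qquad D_\mu = \partial_\mu + i e A_\mu

% Local gauge transformation leaving \mathcal{L} invariant:
\psi \rightarrow e^{i\alpha(x)}\,\psi, \qquad
A_\mu \rightarrow A_\mu - \tfrac{1}{e}\,\partial_\mu \alpha(x)
```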

The way Feynman, Schwinger, and Tomonaga dealt with infinities via renormalization is introduced after the chapter on interactions. Since this is the first time I have learned about renormalization in detail, it is difficult to comment on the quality. But I tend to agree with Klauber, who states that students typically get lost in these extremely lengthy derivations that include many side-tracks. Klauber tries to keep it somewhat neat by giving an overview first – explaining the strategy of these iterations (answering: what the hell is going on here?) – and digging deeper in the next chapters.

Applications are emphasized, so we learn about the daunting way of calculating scattering cross-sections to be compared with experiments. Caveat: Applications refer to particle physics, not to solid-state physics – but this was exactly what I, as a former condensed matter physicist, was looking for.

Klauber uses the canonical quantization that I had tried to introduce in my series on QFT, too (though I tried to avoid the term). Nevertheless, at the end of the book a self-contained introduction to path integrals is given as well, and part of it is available online.

In summary, I wholeheartedly recommend this book to any QFT newbie who is struggling with more concise texts. But I am not a professional, I haven’t read all the QFT books in the world, and my requirements as a student are probably peculiar ones.

Learning Physics, Metaphors, and Quantum Fields

In my series on Quantum Field Theory I wanted to document my own learning endeavors but it has turned into a meta-contemplation on the ‘explain-ability’ of theoretical physics.

Initially I had been motivated by a comment David Tong made in his introductory lecture: comparing different QFT books, he states that Steven Weinberg‘s books are hard reads because at the time of writing Weinberg probably knew more about Quantum Field Theory than anyone else in the world. Weinberg’s book on General Relativity, on the contrary, is accessible, which Tong attributes to Weinberg’s learning GR himself while he was writing that textbook.

Probably I figured nothing could go awry if I didn’t know too much myself. Of course you should know what you are talking about – avoiding masking ignorance with vague phrases such as scientists proved, experts said, or in a very complicated process XY has been done.

Yet my lengthy posts on phase space didn’t score too high on the accessibility scale. Science writer Jennifer Ouellette blames readers’ confusion on writers not knowing their target audience:

This is quite possibly the most difficult task of all. You might be surprised at how many scientists and science writers get the level of discourse wrong when attempting to write “popular science.” Brian Greene’s The Elegant Universe was an undeniably important book, and it started off quite promising, with one of the best explications of relativity my layperson’s brain has yet encountered. But the minute he got into the specifics of string theory — his area of expertise — the level of discourse shot into the stratosphere. The prose became littered with jargon and densely packed technical details. Even highly science-literate general readers found the latter half of the book rough going.

Actually, I have experienced this effect myself as a reader of popular physics books. I haven’t read The Elegant Universe, but Lisa Randall’s Warped Passages or her Knocking on Heaven’s Door are in my opinion similar with respect to an exponential learning curve.

Authors go to great lengths in explaining the mysteries of ordinary quantum mechanics: the double-slit experiment, Schrödinger’s cat, the wave-particle dualism, probably a version of Schrödinger’s equation motivated by analogies to hydrodynamics.

Curved space

An icon of a science metaphor – curved space (Wikimedia, NASA).

Then tons of different fundamental particles get introduced – hard to keep track of if you don't have a print-out of the standard model of particle physics at hand, but still doable. But suddenly you find yourself in a universe you have lost touch with. Re-reading such books now I find full-blown lectures on QFT compressed into single sentences. The compression rate here is much higher than for the petty QM explanations.

I have a theory:

The comprehensibility of a popular physics text is inversely proportional to the compression factor of the math used (even if math is not explicitly referenced).

In PI in the Sky John Barrow mulls over succinct laws of nature in terms of the unreasonable effectiveness of mathematics. An aside: Yet Barrow is as critical as Nassim Taleb with respect to the allure of Platonicity. 'What is most remarkable about the success of mathematics in [particle physics and cosmology] is that they are most remote from human experience' (quote from PI in the Sky).

Important concepts in QM can be explained in high school math. My old high school physics textbook contained a calculation of the zero point energy of a Fermi gas of electrons in metals.
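To illustrate how far high-school-level math can carry you: the free-electron ('Fermi gas') estimate boils down to a one-line formula. The sketch below is my own illustration, not taken from that textbook; the electron density of copper is an assumed round number.

```python
import math

# Free-electron ('Fermi gas') estimate of the zero-point energy scale in a
# metal -- the kind of calculation a good high school textbook can pull off.
# SI units; the electron density of copper is an assumed round number.
hbar = 1.054571817e-34    # reduced Planck constant, J*s
m_e = 9.1093837015e-31    # electron mass, kg
eV = 1.602176634e-19      # J per electronvolt

def fermi_energy(n):
    """Fermi energy E_F = (hbar^2 / 2m_e) * (3 * pi^2 * n)^(2/3)
    of a free-electron gas with number density n (electrons per m^3)."""
    return hbar**2 / (2 * m_e) * (3 * math.pi**2 * n) ** (2 / 3)

E_F = fermi_energy(8.5e28)  # roughly one conduction electron per copper atom
print(f"Fermi energy of copper: {E_F / eV:.1f} eV")  # about 7 eV
```

Electrons in a metal sit in states up to several electronvolts even at zero temperature – purely a consequence of quantum statistics.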

Equations in advanced theoretical physics might still appear simple, still using symbols taken from the Latin or Greek alphabet. But unfortunately these letters denote mathematical objects that are not simple numbers – this is highly efficient, compressed notation. These objects are the proverbial mathematical machinery(*) that act on other objects. Sounds like the vague phrases I derided before, doesn't it? These operators are rather like software programs using the thing to the right of the machine as an input – but that's already too much of a metaphor, as the 'input' is not a number either.
(*) I used the also common term mathematical crank in earlier posts which I avoid now due to obvious reasons.

You can create rather precise metaphors for differential operators in classical physics, using references to soft rolling hills and things changing in time or (three-dimensional) space. You might be able to introduce the curly small d's in partial derivatives when applying these concepts to three-dimensional space. More than three dimensions can be explained by resorting to the beetle-on-a-balloon or ant-in-a-hose metaphors.

But if it gets more advanced than that I frankly run out of metaphors I am comfortable with. You ought to explain some purely mathematical concepts before you continue to discuss physics.

I think comprehension of those popular texts on advanced topics works this way:

  • You can understand anything perfectly if you have once developed a feeling for the underlying math. For example you can appreciate descriptions of physical macroscopic objects moving under the influence of gravity, such as in celestial mechanics. Even if you have forgotten the details of your high school calculus lectures you might remember some facts on acceleration and speed you needed to study when cramming for your driver's license test.
  • When authors start to introduce new theoretical concepts there is a grey area of understanding – allowing for stretching your current grasp of math a bit. So it might be possible to understand a gradient vector as a slope of a three-dimensional hill even if you never studied vector calculus.
  • Suddenly you are not sure if the content presented is related to anything you have a clue of or if metaphors rather lead you astray. This is where new mathematical concepts have been introduced silently.

The effect of silently introduced cloaked math may even be worse as readers believe they understand but have been led astray. Theoretical physicist (and seasoned science blogger) Sabine Hossenfelder states in her post on metaphors in science:

Love: Analogies and metaphors build on existing knowledge and thus help us to understand something quickly and intuitively.

Hate: This intuition is eventually always misleading. If a metaphor were exact, it wouldn’t be a metaphor.

And while in writing, art, and humor most of us are easily able to tell when an analogy ceases to work, in science it isn’t always so obvious.

My plan has been to balance metaphors and rigor by reading textbooks in parallel with popular science books. I am mainly using Zee’s Quantum Field Theory in a Nutshell, Klauber’s Student Friendly Quantum Field Theory, and Tong’s lecture notes and videos.

Feynman penguin diagram

Feynman diagrams are often used in pop-sci texts to explain particle decay paths and interactions. Actually they are shortcuts for calculating terms in daunting integrals. The penguin is not a metaphor but a crib – a funny name for a specific class of diagrams that sort of resemble penguins.

But I also enjoyed Sean Carroll’s The Particle at the End of the Universe – my favorite QFT- / Higgs-related pop-sci book. Reading his chapters on quantum fields I felt he has boldly gone where no other physicist writing pop-sci had gone before. In many popular accounts of the Higgs boson and Higgs field we find somewhat poetic accounts of particles that communicate forces, such as the photon being the intermediary of electromagnetic forces.

Sean Carroll goes to the mathematical essence of the relationship of (rather abstract) symmetries, connection fields and forces:

The connection fields define invisible ski slopes at every point in space, leading to forces that push particles in different directions, depending on how they interact. There’s a gravitational ski slope that affects every particle in the same way, an electromagnetic ski slope that pushes positively charged particles one way and negatively charged particles in the opposite direction, a strong-interaction ski slope that is only felt by quarks and gluons, and a weak-interaction ski slope that is felt by all the fermions of the Standard Model, as well as by the Higgs boson itself. 

Indeed, in his blog Carroll writes:

So in the end, recognizing that it’s a subtle topic and the discussion might prove unsatisfying, I bit the bullet and tried my best to explain why this kind of symmetry leads directly to what we think of as a force. Part of that involved explaining what a “connection” is in this context, which I’m not sure anyone has ever tried before in a popular book. And likely nobody ever will try again!

This is the best popular account of symmetries and forces I could find so far – yet I confess: I could not make 100% sense of it before I had plowed through the respective chapters in Zee's book. This is the right place to add a disclaimer: Of course I hold myself accountable for possibly slow absorption or a flawed approach to self-study, as well as for confusing my readers. My brain is just the only one I have access to for empirical analysis right now, and the whole QFT thing is an experiment. I should maybe just focus on writing about current research in an accessible way or keep a textbook-style learner's blog similar to this one.

Back to metaphors: Symmetries are usually explained by invoking rotating regular objects and crystals, but I am not sure if this image will inspire anything close to gauge symmetry in readers' minds. Probably worse: I had recalled gauge symmetry in electrodynamics, but it was not straightforward how to apply and generalize it to quantum fields – I needed to see some equations.

Sabine Hossenfelder says:

If you spend some time with a set of equations, pushing them back and forth, you’ll come to understand how the mathematical relationships play together. But they’re not like anything. They are what they are and have to be understood on their own terms.

Actually I had planned a post on the different routes to QFT – complementary to my post on the different ways to view classical mechanics. Unfortunately I feel the mathematically formidable path integrals would lend themselves more to metaphoric popularization – and thus to more confusion.

You could either start with fields and quantize them, which turns the classical fields (numbers attached to any point in space and time) into mathematical operators that actually create and destroy particles. Depending on the book you pick this is introduced either as something straightforward or as a big conceptual leap. My initial struggles with re-learning QFT concepts were actually due to the fact that I had been taught the 'dull' approach (many years ago):

  • Simple QM deals with single particles. Mathematically, a particle's state encodes the probabilities of finding it with certain properties. Our mathematical operators let you take the proverbial quantum leap – from one state to another. In QM lingo you destroy or create states.
  • There are many particles in condensed matter, thus we just extend our abstract space. The system is not only described by the properties of each particle, but also by the number of particles present. Special relativity might not matter.
  • Thus it is somehow natural that our machinery now creates or annihilates particles.

The applications presented in relation to this approach were all taken from solid state physics, where you deal with lots of particles anyway and creating and destroying some is not a big deal. It is more exciting if virtual particles are created from the vacuum, violating the conservation of energy for a short time, in line with the uncertainty principle.

The alternative to this route (which is technically called canonical quantization) is the so-called path integral formalism. Zee introduces it via an anecdote of a wise guy student (called Feynman) who pesters his teacher with questions on the classical double-slit experiment: A particle emitted from a source passes through one of two holes and a detector records a spatially varying intensity based on interference. Now the wise guy asks: What if we drill a third hole, a fourth hole, a fifth hole? What if we add a second screen, a third screen? The answer is that if we add more paths the particle might take, the amplitudes related to these paths will also contribute to the interference pattern.

Now the final question is: What if we remove all screens – drilling infinite holes into those screens? Then all possible paths the particle can traverse from source to detector would contribute. You sum over all (potential) histories.

I guess a reasonable pop-sci article would probably not go into further details of what it means to sum over an infinite number of paths and yet get reasonable – finite – results, or to expound why on earth this should be similar to operators destroying particles. We should add that the whole amplitude-adding business was presented as an axiom. This is weird, but this is how the world seems to work! (Paraphrasing Feynman.)

Then we would insert an opaque black box [something about the complicated machinery – see details on path integrals if you really want to] and jump directly to things that can eventually be calculated, like scattering cross-sections and predictions of how particles will interact with each other in the LHC … and gossip about Nobel Prize winners.

Yet it is so tempting to ponder on how the classical action (introduced here) is related to this path integral: Everything we 'know about the world' is stuffed into the field-theoretical counterpart of the action. The action defines the phase ('angle') attached to a path. (Also Feynman talks about rotating arrows!) Quantum phenomena emerge when the action becomes comparable to Planck's constant. If the action is much bigger, most of the paths cancel out: the phases fluctuate wildly, so the contributions of different paths cancel each other.
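This cancellation of wildly fluctuating phases can be demonstrated with a toy 'sum over histories'. The sketch below is my own illustration, not a real QFT calculation: each path of a free particle has a single intermediate waypoint, and an artificially small number stands in for Planck's constant.

```python
import numpy as np

# Toy 'sum over histories' for a free particle travelling from x=0 at t=0
# to x=2 at t=2. Each candidate path is piecewise linear with a single
# intermediate position y at t=1; its action is the usual (m/2) v^2 dt.
# The value standing in for Planck's constant is artificially small.
m, hbar = 1.0, 0.1
y = np.linspace(-3.0, 5.0, 8001)      # intermediate positions (paths) to sum over
S = 0.5 * m * (y**2 + (2.0 - y)**2)   # action of each path (both segments have dt = 1)
amp = np.exp(1j * S / hbar)           # amplitude exp(i S / hbar) per path

# Far from the classical straight line (y = 1) the phases fluctuate wildly
# and the contributions largely cancel; the neighborhood of y = 1 dominates.
near = abs(amp[np.abs(y - 1.0) < 0.5].sum())
far = abs(amp[np.abs(y - 1.0) >= 0.5].sum())

# The amplitude-weighted mean of y reproduces the classical midpoint.
y_mean = (y * amp).sum() / amp.sum()
print(near > far, y_mean)
```

The many paths far from the classical one contribute almost nothing in total, and the amplitude-weighted 'average path' passes right through the classical midpoint.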

“I am not gonna simplify it. If you don’t like it – that’s too bad!”

May the Force Field Be with You: Primer on Quantum Mechanics and Why We Need Quantum Field Theory

As Feynman explains so eloquently – and yet in a refreshingly down-to-earth way – understanding and learning physics works like this: There are no true axioms, you can start from anywhere. Your physics knowledge is like a messy landscape, built from different interconnected islands of insights. You will not memorize them all, but you need to recapture how to get from one island to another – how to connect the dots.

The beauty of theoretical physics is in jumping from dot to dot in different ways – and in pondering on the seemingly different ‘philosophical’ worldviews that different routes may provide.

This is the second post in my series about Quantum Field Theory, and I try to give a brief overview of the concept of a field in general, and of why we need QFT to complement or replace Quantum Mechanics. I cannot avoid reiterating some of that often-quoted wave-particle paraphernalia in order to set the stage.

From sharp linguistic analysis we might conclude that it is the notion of Field that distinguishes Quantum Field Theory from mere Quantum Theory.

I start with an example everybody uses: a so-called temperature field, which is simply: a temperature – a value, a number – attached to every point in space. An animation of monthly mean surface air temperature could be called the temporal evolution of the temperature field:

Monthly Mean Temperature

Solar energy is absorbed at the earth’s surface. In summer the net energy flow is directed from the air to the ground, in winter the energy stored in the soil is flowing to the surface again. Temperature waves are slowly propagating perpendicular to the surface of the earth.

The gradual evolution of temperature is dictated by the fact that heat flows from the hotter to the colder regions. When you deposit a lump of heat underground – Feynman once used an atomic bomb to illustrate this point – you start with a temperature field consisting of a sharp maximum, a peak, located in a region the size of the bomb. Wait for some minutes and this peak will peter out. Heat will flow outward, the temperature will rise in the outer regions and decrease in the center:

Diffluence of a bucket of heat, governed by the Heat Transfer Equation.

Modelling the temperature field (as I did – in relation to a specific source of heat placed underground) requires solving the Heat Transfer Equation, which is the mathy equivalent of the previous paragraph. The temperature is calculated step by step numerically: The temperature at a certain point in space determines the flow of heat nearby – the heat transferred changes the temperature – the temperature in the next minute determines the flow – and on and on.

This mundane example should tell us something about a fundamental principle – an idea that explains why fields of a more abstract variety are so important in physics: Locality.

It would not violate the principle of the conservation of energy if a bucket of heat suddenly disappeared in one place and appeared in another, separated from the first one by a light year. Intuitively we know that this is not going to happen: Any disturbance or ripple is transported by impacting something nearby.

All sorts of field equations do reflect locality, and ‘unfortunately’ this is the reason why all fundamental equations in physics require calculus. Those equations describe in a formal way how small changes in time and small variations in space do affect each other. Consider the way a sudden displacement traverses a rope:

Propagation of a wave.

Sound waves travelling through air are governed by local field equations. So are light rays or X-rays – electromagnetic waves – travelling through empty space. The term wave is really a specific instance of the more generic field.

An electromagnetic wave can be generated by shaking an electrical charge. The disturbance is a local variation in the electrical field which gives rise to a changing magnetic field, which in turn gives rise to a disturbance in the electrical field …


Electromagnetic fields are more interesting than temperature fields: Temperature, after all, is not fundamental – it can be traced back to wiggling of atoms. Sound waves are equivalent to periodic changes of pressure and velocity in a gas.

Quantum Field Theory, however, should finally cover fundamental phenomena. QFT tries to explain tangible matter only in terms of ethereal fields, no less. It does not make sense to ask what these fields actually are.

I have picked light waves deliberately because those are fundamental. Due to historical reasons we are rather familiar with the wavy nature of light – such as the colorful patterns we see on our CDs, whose grooves act as a diffraction grating:

Michael Faraday had introduced the concept of fields in electromagnetism, mathematically fleshed out by James C. Maxwell. Depending on the experiment (that is: on the way you prod nature to give an answer to a specifically framed question) light may behave more like a particle, a little bullet, the photon – as stipulated by Einstein.

In Compton scattering a photon partially transfers energy when colliding with an electron: The change in the photon's frequency corresponds to its loss in energy. Based on the angle between the trajectories of the electron and the photon, energy and momentum transfer can be calculated – using the same reasoning that can be applied to colliding billiard balls.

Compton Effect
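The billiard-ball reasoning boils down to Compton's formula for the wavelength shift, delta_lambda = (h / m_e c) * (1 - cos theta). A quick sanity check in code (my own illustration):

```python
import math

# Compton wavelength shift: delta_lambda = (h / (m_e * c)) * (1 - cos(theta)).
h = 6.62607015e-34      # Planck constant, J*s
m_e = 9.1093837015e-31  # electron mass, kg
c = 2.99792458e8        # speed of light, m/s

def compton_shift(theta):
    """Wavelength gained by a photon scattered by angle theta (radians)."""
    return h / (m_e * c) * (1.0 - math.cos(theta))

# At 90 degrees the shift equals the Compton wavelength, about 2.43 picometres
print(compton_shift(math.pi / 2) * 1e12)
```

The shift is a few picometres at most – negligible for visible light, but a sizeable fraction of an X-ray wavelength, which is why the effect was first seen with X-rays.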

We tend to consider electrons fundamental particles. But they give proof of their wave-like properties when beams of accelerated electrons are utilized in analyzing the microstructure of materials. In transmission electron microscopy diffraction patterns are generated that allow for identification of the underlying crystal lattice:

A complete quantum description of an electron or a photon does contain both the wave and particle aspects. Diffraction patterns like this can be interpreted as highlighting the regions where the probabilities to encounter a particle are maximum.

Schrödinger has given the world that famous equation named after him that does allow for calculating those probabilities. It is his equation that let us imagine point-shaped particles as blurred wave packets:

Schrödinger’s equation explains all of chemistry: It allows for calculating the shape of electrons’ orbitals. It explains the size of the hydrogen atom and it explains why electrons can inhabit stable ‘orbits’ at all – in contrast to the older picture of the orbiting point charge that would lose energy all  the time and finally fall into the nucleus.

But this so-called quantum mechanical picture does not explain essential phenomena though:

  • Pauli's exclusion principle explains why matter is extended in space – particles need to be put into different orbitals, different little volumes in space. But it is a rule you put in by hand, phenomenologically!
  • Schrödinger's equation describes single particles as blurry probability waves, but it still makes sense to call these the equivalents of well-defined single particles. It does not make sense anymore once we take special relativity into account.

Heisenberg's uncertainty principle – a consequence of Schrödinger's equation – dictates that we cannot know both the position and the momentum, or both the energy and the lifetime, of a particle with arbitrary precision. For a very short period of time the conservation of energy can be violated, which means the energy associated with 'a particle' is allowed to fluctuate.

As per the most famous formula in the world energy is equivalent to mass. When the energy of ‘a particle’ fluctuates wildly virtual particles – whose energy is roughly equal to the allowed fluctuations – can pop into existence intermittently.
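One classic back-of-the-envelope consequence of these allowed fluctuations (not spelled out above, but in the same spirit): the heavier the virtual particle, the shorter its allowed lifetime – and hence the shorter the range of the force it mediates. A sketch:

```python
# Rough estimate from Delta_E * Delta_t ~ hbar: a virtual particle 'borrowing'
# the energy Delta_E may exist for about hbar / Delta_E and travel at most
# c * Delta_t -- which sets the range of the force it mediates.
hbar_c = 197.3269804  # hbar * c in MeV * fm, a handy combination of constants

def force_range_fm(rest_energy_mev):
    """Approximate range (femtometres) of a force mediated by a virtual
    particle with the given rest energy (MeV)."""
    return hbar_c / rest_energy_mev

print(force_range_fm(139.6))    # pion: ~1.4 fm, the range of the nuclear force
print(force_range_fm(80379.0))  # W boson: ~0.0025 fm, why the weak force is so short-ranged
```

This is Yukawa's original style of estimate; it correctly anticipated the pion's mass from the known range of the nuclear force.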

However, in order to make quantum mechanics compatible with special relativity it was not sufficient to tweak Schrödinger's equation just a bit.

Relativistically correct Quantum Field Theory is rather based on the concept of an underlying field pervading space. Particles are just ripples in this ur-stuff – I owe that metaphor to Frank Wilczek. A different field is attributed to each variety of fundamental particles.

You need to take a quantum leap… It takes some mathematical rules to move from the classical description of the world to the quantum one, sometimes called quantization. Using a very crude analogy quantization is like making a beam of light dimmer and dimmer until it reveals its granular nature – turning the wavy ray of light into a cascade of photonic bullets.

In QFT you start from a classical field that should represent particles and then apply the machinery of quantization to that field (this is called second quantization, although you do not quantize twice). Amazingly, the electron's spin and Pauli's principle are a natural consequence if you do it right. Paul Dirac's achievement in crafting the first relativistically correct equation for the electron cannot be overstated.

I found these fields the most difficult concepts to digest, but probably for technical reasons:

Historically  – and this includes some of those old text books I am so fond of – candidate versions of alleged quantum mechanical wave equations have been tested to no avail, such as the Klein-Gordon equation. However this equation turned out to make sense later – when re-interpreted as a classical field equation that still needs to be quantized.

It is hard to make sense of those fields intuitively. However, there is one field we are already familiar with: Photons are ripples arising from the electromagnetic field. Maxwell's equations describing these fields were compatible with special relativity all along – they predate the theory of relativity, and the speed of light shows up as a natural constant. No tweaks required!

I will work hard to turn the math of quantization into comprehensible explanations, risking epic failure. For now I hand over to MinutePhysics for an illustration of the correspondence of particles and fields:

Disclaimer – Bonus Track:

In this series I do not attempt to cover the latest research on unified field theories, quantum gravity and the like. But just as I started crafting this article and writing about locality, that article on an allegedly simple way to replace field-theoretical calculations went viral. The principle of locality may not hold anymore when things get really interesting – in the regime of tiny local dimensions and high energy.

Space Balls, Baywatch and the Geekiness of Classical Mechanics

This is the first post in my series about Quantum Field Theory. What a let-down: I will just discuss classical mechanics.

There is a quantum mechanics, and in contrast there is good old classical, Newtonian mechanics. The latter is a limiting case of the former. So there is some correspondence between the two, and there are rules that let you formulate the quantum laws from the classical laws.

But what are those classical laws?

Chances are high that classical mechanics reminds you of pulleys and levers, calculating torques of screws and Newton’s law F = ma: Force is equal to mass times acceleration.

I argue that classical dynamics is most underrated in terms of geek-factor and philosophical appeal.

[Space Balls]

The following picture might have been ingrained in your brain: A force is tugging at a physical object, such as the earth's gravity attracting a little ball travelling in space. Now the ball moves – it falls. Actually the moon also falls, in a sense, when it is orbiting the earth.

Newton's cannon ball.

Cannon ball and gravity. If the initial velocity is too small the ball traverses a parabola and eventually reaches the ground (A, B). If the ball is just given the right momentum, it will fall forever and orbit the earth (C). If the velocity is too high, the ball will escape the gravitational field (E). (Wikimedia). Now I said it – ‘field’! – although I tried hard to avoid it in this post.

When bodies move their positions change. The strength of the gravitational force depends on the distance from the mass causing it, thus the force felt by the moving ball changes. This is why the three-body problem is hard: You need a computer for calculating the forces three or more planets exert on each other at every point of time.

So this is the traditional mental picture associated with classical mechanics. It follows these incremental calculations:
Force acts – things move – configuration changes – force depends on configuration – force changes.

In order to get this going you need to know the configuration at the beginning – the positions and the velocities of all planets involved.

So in summary we need:

  • the dependence of the force on the position of the masses.
  • the initial conditions – positions and velocities.
  • Newton’s law.
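The incremental 'force acts – things move – configuration changes' loop translates almost literally into code. A minimal sketch (my own, in scaled units, for a single ball rather than three planets):

```python
import math

# The incremental loop in code: Newton's cannonball in a circular orbit,
# in scaled units with GM = 1 and orbital radius 1 (period = 2 * pi).
GM = 1.0
x, y = 1.0, 0.0        # initial position
vx, vy = 0.0, 1.0      # circular-orbit velocity: v = sqrt(GM / r)
dt = 1e-4

for _ in range(int(2 * math.pi / dt)):  # integrate one full orbit
    r = math.hypot(x, y)
    ax, ay = -GM * x / r**3, -GM * y / r**3  # force depends on the configuration
    vx, vy = vx + ax * dt, vy + ay * dt      # the force changes the velocity ...
    x, y = x + vx * dt, y + vy * dt          # ... the velocity changes the position

print(x, y)  # after one period the ball is back near (1, 0)
```

Updating the velocity before the position (so-called semi-implicit Euler) keeps the orbit from slowly spiralling outward; for three or more planets you would simply sum the accelerations each body feels from all the others.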

But there is an alternative description of classical dynamics, offering an alternative philosophy of mechanics so to speak. The description is mathematically equivalent, yet it feels unfamiliar.

In this case we trade the knowledge of positions and velocities for fixing the positions at a start time and an end time. Consider it a sort of game: You know where the planets are at time t1 and at time t2. Now figure out how they have moved / will move between t1 and t2. Instead of the force we consider another, probably more mysterious property:

It is called the action. The action has a dimension of [energy × time], and – like the force – it has all the information about the system.

The action is calculated by integrating…. I am reluctant to describe how the action is calculated. The action (or its field-y counterparts) will be considered the basic description of a system – something that is given, in the way forces had been considered given in the traditional picture. The important thing is: You attach a number to each imaginable trajectory, to each possible history.

The trajectory a particle traverses in the time slot t1–t2 is determined by the Principle of Least Action (which 'replaces' Newton's law): The action of the system is minimal for the actual trajectory. Any deviation – such as a planet travelling in strange loops – would increase the action.

Principle of Least Action.

Principle of least action. Given: The positions of the particle at start time t1 and end time t2. Calculated: The path the particle traverses – by testing all possible paths and calculating their associated actions. Near the optimum (red) path the action hardly varies (Wikimedia).
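The 'testing all possible paths' idea can be checked numerically, at least for a toy case. The following sketch is my own illustration (units and the wiggle are arbitrary): it computes the discretized action for a ball thrown straight up, and for a deviating path with the same endpoints.

```python
import numpy as np

# Discretized principle of least action for a ball thrown straight up:
# the endpoints x(0) = x(T) = 0 are fixed, gravity g pulls it back down.
m, g, T, N = 1.0, 9.81, 2.0, 2000
t = np.linspace(0.0, T, N + 1)
dt = T / N

def action(x):
    """Sum of (kinetic - potential) energy over the whole trajectory."""
    v = np.diff(x) / dt
    kinetic = 0.5 * m * v**2
    potential = m * g * 0.5 * (x[:-1] + x[1:])  # midpoint rule
    return float(np.sum((kinetic - potential) * dt))

classical = 0.5 * g * t * (T - t)  # the true trajectory, a parabola
wiggled = classical + 0.3 * np.sin(5 * np.pi * t / T)  # a 'strange loop', same endpoints

print(action(classical), action(wiggled))  # the classical path has the smaller action
```

Any wiggle you superimpose on the parabola – as long as it vanishes at the endpoints – increases the action, exactly as the principle demands.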

This sounds probably awkward – why would you describe nature like this?
(Of course one answer is: this description will turn out useful in the long run – considering fields in 4D space-time. But this answer is not very helpful right now).

That type of logic is useful in other fields of physics: A related principle lets you calculate the trajectory of a beam of light: Given the start point and the end point of a beam, light will pick the path that is traversed in minimum time (this rule is called Fermat's principle).

This is obvious for a straight laser beam in empty space. But Fermat's principle allows for picking the correct path in less intuitive scenarios, such as: What happens at the interface between different materials, say air and glass? Light is faster in air than in glass, thus it makes sense to add a kink to the path and utilize air as much as possible.


Richard Feynman used the following example: Imagine you are walking on the beach and hear a swimmer crying for help. Since this is a 1960s textbook the swimmer is a beautiful girl. In order to reach her you have to: 1) run some meters on the sandy beach and 2) swim some meters in the sea. You do an intuitive calculation about the ideal point at which to enter the water: You can run faster than you can swim. By using a little more intelligence we would realize that it would be advantageous to travel a little greater distance on land in order to decrease the distance in the water, because we go so much slower in the water (source: Feynman's Lectures Vol. 1 – available online as of a few days ago!)
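The lifeguard's calculation can be done by brute force: minimize the total travel time over all entry points, then check that the optimum obeys Snell's law. All numbers below are illustrative, not Feynman's.

```python
import math

# The lifeguard problem as a minimization: start on the sand at (0, 10),
# swimmer in the water at (30, -5), waterline along y = 0; you run at v1
# and swim at v2. All numbers are illustrative.
v1, v2 = 5.0, 1.0
a, b, d = 10.0, 5.0, 30.0   # distances to the waterline and along it

def travel_time(x):
    """Total time if you enter the water at (x, 0)."""
    return math.hypot(x, a) / v1 + math.hypot(d - x, b) / v2

# Crude but sufficient: scan candidate entry points along the waterline
x_best = min((i / 1000.0 * d for i in range(1001)), key=travel_time)

# Fermat's principle in disguise: the optimum satisfies Snell's law,
# sin(angle_1) / v1 = sin(angle_2) / v2.
sin1 = x_best / math.hypot(x_best, a)
sin2 = (d - x_best) / math.hypot(d - x_best, b)
print(x_best, sin1 / v1, sin2 / v2)
```

The best entry point is far down the beach, nearly level with the swimmer – you run a long way and swim almost straight out, and the two Snell ratios agree at the optimum.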

Refraction at the interface between air and water.

Refraction at the interface between air and water (Wikimedia). The trajectory of the beam has a kink thus the pole appears kinked.

Those laws are called variational principles: You consider all possible paths, and the path taken is indicated by an extremum, in these cases: a minimum.

Near a minimum stuff does not vary much – the first order derivative is zero at a minimum. Thus on varying paths a bit you can actually feel when you are close to the minimum – in the way you, as a car driver, would feel the bottom of a valley (it can only go up from here).

Doesn’t this description add a touch of spooky multiverses to classical mechanics already? It seems as if nature has a plan or as if we view anything that has ever or will ever happen from a vantage point outside of space-time.

Things get interesting when masses or charges become smeared out in space – when there is some small ‘infinitesimal’ mass at every point in space. Or generally: When something happens at every point in space. Instead of a point particle that can move in three different directions – three degrees of freedom in physics lingo – we need to deal with an infinite number of degrees of freedom.

Then we are entering the world of fields that I will cover in the next post.

Related posts: Are We All Newtonians? | Sniffing the Path (On the Fascination of Classical Mechanics)