Spheres in a Space with Trillions of Dimensions

I don’t venture into speculative science writing – this is just about classical statistical mechanics; actually about a special mathematical aspect. It was one of the things I found particularly intriguing in my first encounters with statistical mechanics and thermodynamics a long time ago – a curious feature of volumes.

I was mulling over how to ‘briefly motivate’ the calculation below in a comprehensible way, a task I may already have failed at years ago when I tried to use illustrations and metaphors (here and here). When the ‘kinetic theory’ is introduced in thermodynamics, the pressure of an ideal gas is often calculated first, by considering averages over the momenta transferred by particles hitting the wall of a container. This is rather easy to understand, but it is still an intermediate view – between phenomenological thermodynamics, which does not explain the microscopic origin of properties like energy, and ‘true’ statistical mechanics. The latter makes use of a phase space whose number of dimensions is proportional to the number of particles. One cubic meter of gas contains ~10^25 molecules. Each possible state of the system is depicted as a point in this so-called phase space: a point in this abstract space represents one possible system state. For each (point-like) particle, 6 numbers are added to a gigantic vector – 3 for its position and 3 for its momentum (mass times velocity) – so the space has ~6 × 10^25 dimensions. Thermodynamic properties are averages, taken over the state of one system watched for a long time, or over a lot of ‘comparable’ systems starting from different initial conditions. At the heart of statistical mechanics are distribution functions that describe how a set of systems, each described by such a gigantic vector, evolves. Such a function is like the density of an incompressible fluid in hydrodynamics. I resorted to the metaphor of a jelly in hyperspace before.

Taking averages means multiplying the ‘mechanical’ property by the density function and integrating over the space where these functions live. The volume of interest is a generalized N-ball, defined as the volume enclosed by a generalized sphere. A ‘sphere’ is the surface formed by all points at a certain distance (‘radius’ R) from an origin:

x_1^2 + x_2^2 + ... + x_N^2 = R^2

(x_n being the co-ordinates in phase space, and assuming that all co-ordinates of the origin are zero). Why a sphere? Because states are ordered by energy, and larger energy means a greater ‘radius’ in phase space. It’s all about rounded surfaces enclosing each other. The simplest example is the ellipse in the phase diagram of the harmonic oscillator – more energy means a larger amplitude and a larger maximum velocity.
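As a concrete reminder (standard textbook material, not specific to this post’s system): for a one-dimensional harmonic oscillator with mass m and spring constant k, a state of energy E satisfies

E = {\frac{p^2}{2m}} + {\frac{kx^2}{2}} \quad \Leftrightarrow \quad {\frac{x^2}{2E/k}} + {\frac{p^2}{2mE}} = 1

– an ellipse in the (x, p) phase plane with semi-axes \sqrt{2E/k} and \sqrt{2mE}. Both semi-axes grow with E, so curves of higher energy enclose curves of lower energy.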

And here is finally the curious fact I actually want to talk about: nearly all the volume of an N-ball with that many dimensions is concentrated in an extremely thin shell beneath its surface. An integral over this thin shell can therefore be extended over the full volume of the ball without adding much, while making the integration simpler.

This can be seen immediately by plotting the volume of a sphere over its radius: the volume of an N-ball is always some numerical factor times the radius to the power of the number of dimensions. In three dimensions the volume is the traditional, honest volume, proportional to r^3; in two dimensions the ‘ball’ is a circle, and its ‘volume’ is its area. In a realistic thermodynamic system, the volume is then proportional to r^N with a very large N.

The power function r^N turns more and more into an L-shaped function with increasing exponent N: the volume increases enormously just by adding a thin additional layer to the ball. In order to compare the function for different exponents, both ‘radius’ and ‘volume’ are shown relative to their respective maximum values, R and R^N.
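To see how extreme this concentration becomes, here is a minimal Python sketch (the function name is mine, nothing else is assumed) computing which fraction of an N-ball’s volume sits in the outer one-percent shell:

```python
# Fraction of an N-ball's volume inside the outer shell of thickness eps*R:
# since V scales as r^N, the inner ball of radius (1 - eps)*R holds a
# fraction (1 - eps)^N of the volume, and the shell holds the rest.
def shell_fraction(N, eps):
    return 1.0 - (1.0 - eps) ** N

for N in (3, 100, 10_000):
    print(N, shell_fraction(N, 0.01))
```

For N = 3 the outer 1% shell holds about 3% of the volume; for N = 10,000 it holds practically everything.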

The interesting layer ‘with all the volume’ is certainly much smaller than the radius R, but of course it must not be too small, or it would contain nothing. How thick the substantial shell has to be can be found by investigating the volume in more detail – using a ‘trick’ that is often needed in statistical mechanics: Taylor expanding in the exponent.

A function can be replaced by its tangent at a point if it is sufficiently ‘straight’ there. Mathematically: if dx is added to the argument x, the value at the new point is f(x + dx), which can be approximated by f(x) + [the slope df/dx] · dx. The next, higher-order term is proportional to the curvature – the second derivative – and then the function is replaced by a 2nd-order polynomial. Joseph Nebus has recently published a more comprehensible and detailed post about how this works.

So the first terms of this so-called Taylor expansion are:

f(x + dx) = f(x) + dx{\frac{df}{dx}} + {\frac{dx^2}{2}}{\frac{d^2f}{dx^2}} + ...

If dx is small higher-order terms can be neglected.
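A quick numerical illustration of the first-order approximation (plain Python; the choice of f = exp at x = 1 is just an arbitrary example):

```python
import math

# First-order Taylor: f(x + dx) ~ f(x) + f'(x)*dx, here for f = exp at x = 1,
# where f'(x) = exp(x) as well.
x, dx = 1.0, 1e-3
exact = math.exp(x + dx)
approx = math.exp(x) + math.exp(x) * dx
print(exact - approx)   # leftover error is of order dx^2
```

The leftover error is roughly f''(x)·dx²/2, a million times smaller than dx here.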

In the curious case of the ball in hyperspace, we are interested in the ‘remaining volume’ V(r – dr). This should be small compared to V(r) = ar^N (a being the uninteresting constant numerical factor) after we remove a layer of thickness dr containing the substantial ‘bulk of the volume’.

However, trying to expand the volume V(r – dr) = a(r – dr)N, we get:

V(r - dr) = V(r) - a\,dr\,Nr^{N-1} + a{\frac{dr^2}{2}}N(N-1)r^{N-2} + ... = ar^N(1 - N{\frac{dr}{r}} + {\frac{N(N-1)}{2}}({\frac{dr}{r}})^2 + ...)

But this is not exactly what we want: it is ultimately not an expansion – a polynomial – in the small ratio dr/r, but in N·dr/r, and N is enormous.

So here’s the trick: 1) Rewrite the power as an exponential, using the definition of the natural logarithm (x = e^{ln(x)}):

V(r - dr) = ae^{N\ln(r - dr)} = ae^{N\ln(r(1 - {\frac{dr}{r}}))} = ae^{N(\ln(r) + \ln(1 - {\frac{dr}{r}}))} = ar^Ne^{N\ln(1 - {\frac{dr}{r}})} = V(r)e^{N\ln(1 - {\frac{dr}{r}})}

2) Spot a function that can be safely expanded in the exponent: the natural logarithm of 1 minus something small, dr/r. So we can expand near 1: the derivative of ln(x) is 1/x (thus equal to 1 near x = 1) and ln(1) = 0. So ln(1 – x) is about –x for small x:

V(r - dr) \simeq V(r)e^{N(0 - {\frac{dr}{r}})} = V(r)e^{-N{\frac{dr}{r}}}
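A quick numerical sanity check of this exponential approximation (a minimal Python sketch; the specific values of N and dr/r are my arbitrary choices):

```python
import math

# Compare the exact remaining volume ratio V(r - dr)/V(r) = (1 - dr/r)^N
# with the approximation exp(-N*dr/r), for a thin shell with dr/r = 5/N:
N = 10 ** 6
ratio = 5.0 / N
exact = (1.0 - ratio) ** N
approx = math.exp(-N * ratio)
print(exact, approx)   # both about 0.0067, i.e. the shell holds ~99.3%
```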

3) Re-arrange fractions …

V(r - dr) = V(r)e^{-\frac{dr}{(\frac{r}{N})}}

This is now the remaining volume, after the thin layer dr has been removed. It is small in comparison with V(r) if the exponential function is small, thus if {\frac{dr}{(\frac{r}{N})}} is large or if:

dr \gg \frac{r}{N}

Summarizing: The volume of the N-dimensional hyperball is contained mainly in a shell of thickness dr below the surface if the following inequalities hold:

{\frac{r}{N}} \ll dr \ll r

The second one is needed to state that the shell is thin – and to allow for the expansion in the exponent; the first one is needed to make the shell thick enough so that it contains nearly everything.
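Both inequalities can be made tangible by measuring the shell thickness in units of r/N, say dr = k·(r/N) (k is my own bookkeeping variable here):

```python
import math

# Shell thickness dr = k * (r/N): the inner ball keeps about exp(-k) of the
# volume, so the shell dominates only for k >> 1 (first inequality), while
# dr/r = k/N stays tiny for any moderate k (second inequality).
N = 10 ** 6
for k in (0.1, 1, 5, 20):
    shell = 1.0 - math.exp(-k)
    print(f"k = {k:>4}: shell holds {shell:.4f} of the volume, dr/r = {k / N:.0e}")
```

Already at k = 20 the shell holds essentially all the volume, while its relative thickness is a mere 20/N.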

This might help to ‘visualize’ a closely related non-intuitive fact about huge numbers like e^N: if you multiply such a number by a factor, ‘it does not get that much bigger’ in a sense – even if the factor is itself a large number:

Assuming N is about 10^25, then its natural logarithm is about 58, and…

Ne^N = e^{\ln(N)+N} = e^{58+10^{25}}

… 58 can be neglected compared to N itself. So a multiplicative factor becomes something negligible in a sum!
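The same can be checked in two lines of Python (N here is just the plain number 10^25):

```python
import math

# Multiplying the huge number e^N by N only shifts the exponent by ln(N):
# N * e^N = e^(ln(N) + N).  For N ~ 10^25 the shift ln(N) ~ 58 is negligible.
N = 1e25
shift = math.log(N)
print(shift, shift / N)   # ~57.6, a relative change in the exponent of ~6e-24
```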

I used a plain number – base e – deliberately, as I am obsessed with units. ‘r’ in phase space would be associated with a unit incorporating lots of lengths and momenta. Note that I use the term ‘dimensions’ in two slightly different but related ways here: one is the mathematical dimension of (an abstract) space, the other is about cross-checking physical units in case a ‘number’ is something that can be measured – like meters. The co-ordinate numbers in the vector refer to measurable physical quantities. Applying the definition of the logarithm directly to r^N would put the dimensionless number N side by side with something that has the dimension of a logarithm of the unit.

Using r – a number with the dimension of length – as base, it has to be expressed as a plain number, a multiple of a unit length R_0 (like ‘1 meter’). So comparing the original volume of the ball a{(\frac{r}{R_0})}^N to one a factor of N bigger …

aN{(\frac{r}{R_0})}^N = ae^{\ln{(N)} + N\ln{(\frac{r}{R_0})}}

… then ln(N) can be neglected as long as \frac{r}{R_0} is not extreeeemely tiny. Using the same argument as for base e above, we are on the safe side (and can neglect factors) if r is of about the same order of magnitude as the ‘unit length’ R_0. The argument about negligible factors is an argument about plain numbers – and those ‘don’t exist’ in the real world, as one could always decide to measure the ‘radius’ in units of, say, 10^{-30} ‘meters’, which would make the original absolute number small and thus the additional factor non-negligible. One might save the argument by saying that we would always use units that roughly match the typical dimensions (size) of a system.

Saying everything in another way: if the volume of a hyperball ~r^N is multiplied by a factor, this corresponds to multiplying the radius r by a factor very, very close to 1 – the Nth root of the volume factor. Only because the number of dimensions is so large is the volume increased so much by such a small increase in radius.

As the ‘bulk of the volume’ is contained in a thin shell, the total volume is about the product of the surface area and the thickness dr of the shell. The N-ball is bounded by a ‘sphere’ with one dimension less than the ball. Increasing the volume by a factor means that the surface area and/or the thickness have to be increased by factors whose product yields the volume increase factor. dr scales with r, and thus does not change much – the two inequalities derived above still hold. Most of the volume factor ‘goes into’ the factor increasing the surface. ‘The surface becomes the volume.’

This was long-winded. My excuse: Richard Feynman, too, took great pleasure in explaining the same phenomenon in different ways. In his lectures you can hear him speak to himself when he says something along the lines of: Now let’s see if we really understood this – let’s try to derive it in another way…

And above all, he says (in a lecture that is more about math than about physics)

Now you may ask, “What is mathematics doing in a physics lecture?” We have several possible excuses: first, of course, mathematics is an important tool, but that would only excuse us for giving the formula in two minutes. On the other hand, in theoretical physics we discover that all our laws can be written in mathematical form; and that this has a certain simplicity and beauty about it. So, ultimately, in order to understand nature it may be necessary to have a deeper understanding of mathematical relationships. But the real reason is that the subject is enjoyable, and although we humans cut nature up in different ways, and we have different courses in different departments, such compartmentalization is really artificial, and we should take our intellectual pleasures where we find them.

___________________________________

Further reading / sources: Any theoretical physics textbook on classical thermodynamics / statistical mechanics. I am just re-reading mine.

11 Comments

  1. Michelle says:

    I can’t believe that it took me so long to come back to this post. I got to read almost all of it when you first put it up, but couldn’t finish, and it has been something I’ve wanted to come back to all these months. In my course work, I always feel that the best math that I have encountered so far has been claimed by physics, and you remind me of this again. This year, I have an introductory mechanics class that is very nice (very enjoyable to be in the lab!), although I sometimes feel a bit confused by the way vectors are used here, compared to how I encountered motion through parameterized vector functions in calculus mathematics. I enjoyed this post very much; I am also intrigued and excited by the many different approaches one can take to solve a problem that is expressed by mathematics, and am often surprised by how one approach can be overly complicated but another approach gets to a solution in a way that seems almost too easy and obvious. The way you have explained the derivatives here is one of those “I can’t believe it is that straight-forward” approaches that is so delightful to encounter.

    1. elkement says:

      Thanks a lot for your comment, Michelle! Nice to hear from you! You are now in the middle of your degree program, right?

      I have been intrigued by such different approaches ever since. My hero is Feynman who manages to explain in a single ‘simple / undergrad’ lecture: Newton’s Law, gravitation, differential equations, and how to solve them numerically: http://feynmanlectures.caltech.edu/I_09.html

      I also revere the Russian physicist Lev Landau (also a Nobel Prize winner). He for sure knew the rigorous math, as the ‘Eastern’ model of education demanded that you learn the math in depth first, and only then the physics (or so I read). I am in favor of that model and I also learned it nearly that way – I am sad that rigorous mandatory Real Analysis and Linear Algebra classes for both physics and math majors seem to have been replaced by ‘Math for physicists’ or the like today.
      But in his Theoretical Physics course he uses the most ‘casual’ and ‘heuristic’ / ‘deceptively simple’ explanations that seem to sweep the details under the rug … but actually don’t if you read very carefully. But it’s a different language from theorem / proof though :-)

      It seems – from my reading of popular or very basic accounts of modern theories in fundamental physics – that the roles of math and physics have been ‘interchanged’ in recent years … or at least some people think so … now not yet (or never) testable mathematical models ‘are physics’. I’ve read things like ‘Physics has become too complicated for physicists!’. One of my favorite theoretical physics bloggers, Sabine Hossenfelder, has written a somewhat related book to be released next year, titled: Lost in Math – How Beauty Leads Physics Astray… I am very much looking forward to reading it!

      Pardon my curiosity – but which field of math do you like best so far? What are you going to specialize in?

      1. Michelle says:

        Thank you for such a nice note and update. I miss our conversations, and can only say that I have little social interaction with anyone these days. I am about two-thirds done with the degree I started, but have decided to plan for a graduate school (Masters) program, so after I complete the first program, I’ll need a year’s worth of preparation courses that we call an Honours program (done as part of the Bachelor’s degree requirements). I am actually in the process right now of looking for a larger university for this component of my studies, and to that end, am expecting to be a visiting student at another school next term so that I may see what a larger school can offer. We have very few math majors where I am now; the math program is mainly supplemental to other faculties (a similar situation exists for physics as well).

        I am going to look into the lectures you’ve suggested, and the book by Sabine Hossenfelder when it is released (I remember being pointed to one of her posts once by your blog).

        I have noticed something which one graduate student exemplified very well in a presentation on solving equations when he said, “If I were in math, I’d just plug the numbers in here, but because we’re physics, we’ll refine the equation and do something more elegant.” I am often surprised at how this mathematical identity is becoming part of how physics sees itself. And yet, there are some younger students who become very frustrated with this attitude, feeling that they came to physics because they wanted to work more in the experimental science and not the abstraction of pure mathematics. (They probably will end up in applied math, then!) How strange are the boundaries of academic departments?

        As for your last question, I enjoy whenever physics intersects with the math that I’m doing. I seem to be developing a strong preference for analysis (and linear algebra), which I might take in the direction of analytic number theory, or functional analysis. And to be honest, the rigor of these courses is becoming less of a priority even for math majors, and especially so here, as more and more universities in Western Canada focus on applied math, computing and engineering. As such, equations and methods take over (most students worry only about memorizing the computations and getting the right answers on tests), and because of the pressure they exert, it sometimes seems that theory becomes less of a priority in the lectures (which is difficult to get right in tests, as it requires a deep understanding preceded by some hard thinking that takes a while to do). I’ve been lucky in that my instructors have given me additional reading and work (and time for discussing) theory and proofs and the ‘why’ questions versus simply learning the ‘how.’ After having done a bit of research now, I can’t see how new math will ever evolve if new students don’t embrace some theory, or try to step away from the known to find or create something new.

        1. elkement says:

          Impressive news – I wish you all the best for your advanced degree program! :-) You seem to have found your true calling – I am very happy for you :-)

          The Feynman lecture will be very basic stuff for you – it’s targeted at first-year physics students … I just mentioned it because of the unusual pedagogical approach. Feynman quotes Dirac in one of his lectures, about what it means to understand a (differential) equation in physics: Dirac had said that you know what the solution looks like without actually solving the equation. Landau (and Russian physicists in general) is also in that camp; that quote has also been attributed to him. So you need to do those elegant tweaks and think ‘philosophically’ about an equation and/or apply some hand-waving heuristics to understand it. When I was introduced to the simplest of all examples in physics by my late theoretical physics professor – the harmonic oscillator – I was hooked: Before introducing all the options for solving a 2nd-order ODE, he just explained how the curvature is proportional to the function itself, and how that necessarily leads to a wavy oscillating function if you start drawing some snippets of the curve on the blackboard. This was perhaps the old-school approach (my professor was Heisenberg’s last graduate student), as in the old times you had no computers, or resources were expensive. Landau was said to be so terse also because he tried to save paper.

          I had never expected that math majors would just be plugging in numbers – interesting how things seem to change – in my Linear Algebra and Analysis classes the emphasis was on theorems and proofs; you had to do (only) derivations of theorems in your oral exams and explain the ‘why’. It seems most technical sciences are getting closer to each other, and all become sort of applied computer / data science. BTW my own ‘academic’ interests at the moment are related to computer science – after decades of self-taught programming on the job I thought it would be a good idea to learn some theory from scratch, and I worked through the classic (book / lectures) Structure and Interpretation of Computer Programs … perhaps the best lecture / textbook I ever used, as it is at the same time philosophically deep and practically useful, although it uses an unusual programming language and its goal is to teach fundamentals and ‘how to think’. So I cannot agree more – you need to know your theory. That allegedly ‘theoretical’ stuff immediately boosted my programming skills, and I managed to e.g. speed up my numerical simulation of our heat pump system by a large factor.

          1. Michelle says:

            I am not surprised that you would take up an academic study of programming. I think I may have expected it, even. It is very good to keep feeding the mind with new concepts and structures, and it always has seemed to me that you need a lot more to keep you sustained than most people do. As for basic theory, I am finding that like my studies in philosophy and literature, using sources that are the origins, or at least very close to the origins of an idea, opens up concepts in ways that one cannot find in later source material. We read Jacob Bernoulli’s “The Art of Conjecturing” this summer, and you can feel (even in translation) the energy and excitement as he recognizes that he’s found a unique insight about finite series; we then read Euler’s paper that uses this insight to crack the infinite series problem. He is also very excited… and then this paper by Euler, given to Riemann by Gauss, goes on to help inspire analytic extension. So you never know what will happen!

            Feynman’s lectures are basic, but they have within them a sense that he’s just left a conversation with Newton or Leibniz, and is here now to show us all the subtle and beautiful things about calculus that struggles to exist in the machinery of institutional instruction. They remind me of Tom Apostol’s books on calculus, but which are a bit more advanced than introductory level reading. I had one professor last year who had studied with one of Apostol’s students (and it was from him that I was pointed toward analytic number theory); listening to him work through an idea was unlike anything I’ve ever experienced before. That class made me feel like I was standing in a room that I’ve passed through a thousand times before, but always in the dark… except now I was seeing it for the first time with the lights turned on; here I felt both delighted and surprised to see that everything was very close to how I imagined it, while also discovering that there was so much more!

            I think you have discovered exactly what is so exciting here: the compact economy of an idea, and the process of learning to open it up and discover everything about it. There is a deep delight in finding the right path into it.

            Thanks for the pleasure of this ‘chat’. Thank you for telling me about your latest intellectual pursuits, interesting as always.

            1. elkement says:

              Thanks also, Michelle!! I have always found our discussions very inspiring! They have really shaped the early stages of this blog … Nowadays most discussions triggered by my posts seem to happen ‘elsewhere’ / ‘offline’ / 1:1 for whatever reason – which had always been like that for our German, mainly ‘heat pump related’ blog … I realize now how helpful your many comments and questions have been when I discuss related things with people all over the world … I hope I will see Canada turn red on the WordPress Stats map now and then in the future ;-) – and I also will have an option to leave one more wall of text on your blog ;-)

            2. Michelle says:

              I may be gone for periods of time, but I don’t want to lose touch completely. I’m glad there have been positive outcomes for both of us from the time we spent here chatting with each other. I’ll keep you updated when there’s something new, and I do keep track of what you are doing, too, even if I can’t always get over here to say so. Take care for now.

  2. bert0001 says:

    reminds me of steam tables,
    didn’t I force myself to forget those 30 years ago?
    :-)

    1. elkement says:

      I remember how a professor told a story of a guy in a power plant he once admired – because that guy knew so many values of enthalpies by heart!

  3. I took great intellectual pleasure in this :-) wakes me up better than any substance I could imbibe

    1. elkement says:

      Thanks a lot :-)
