Infinite Loop: Theory and Practice Revisited.

I’ve unlocked a new achievement as a blogger, or reached a new milestone as a life-form: I have become a dinosaur telling the same old stories over and over again.

I started drafting a blog post, as I have done for a while now: I compose it in my mind only, twist and turn it for days or weeks – until I am ready to write it down in one go. Today I wanted to release a post called On Learning (2) or the like. I knew I had written an early post with a similar title, so I expected this to be a loosely related update. But then I checked the old On Learning post: I found not only the same general ideas but the same autobiographical anecdotes I wanted to use now – even in the same order.

In 2014 I had looked back on being both a teacher and a student for the greater part of my professional life, and the patterns were always the same – be the field physics, engineering, or IT security. I had written that post after a major update of our software for analyzing measurement data. This update had required me to acquire new skills, which was a delightful learning experience. I tried to reconcile very different learning modes: ‘Book learning’ about so-called theory, including learning for the joy of learning, and solving problems hands-on based on the minimum knowledge absolutely required.

It seems I like to talk about The Joys of Theory a lot – I have meta-posted about theoretical physics in general (more than once), about general relativity as an example, and about computer science. I searched for posts about hands-on learning now – there aren’t any. But every post about my own research and work chronicles this hands-on learning in a non-meta, explicit way. These are the posts listed on the heat pump / engineering page, the IT security / control page, and some of the physics posts about the calculations I used in my own simulations.

Now that I am wallowing in nostalgia and scrolling through my old posts I feel there is one possibly new insight: Whenever I used knowledge to achieve a result that I really needed to get some job done, I think of this knowledge as having emerged from hands-on tinkering and from self-study. I once read that many seasoned software developers said the same in a survey about their background: they checked ‘self-taught’ despite having university degrees or professional training.

This holds for the things I had learned theoretically – be it in a classroom or via my morning routine of reading textbooks. I learned about differential equations, thermodynamics, numerical methods, heat pumps, and about object-oriented software development. Yet when I actually have to do all that, it is always like re-learning it in a more pragmatic way – even if the ‘class’ was very ‘applied’, even if not much time had passed since then, and even if I had taken exams. This is even true for the archetype of all self-studied disciplines – hacking. Doing it – like here – white-hat-style 😉 – is always a self-learning exercise, and reading about pentesting and security happens in an alternate universe.

The difference between these learning modes is maybe not only in ‘the applied’ versus ‘the theoretical’ – it is your personal stake in the outcome that matters: Skin In The Game. A project done by a group of students for the final purpose of passing a grade is not equivalent to running this project for your client or for yourself. The point is not whether the student project is done for a real-life client, or whether the task as such makes sense in the real world. The difference is whether it feels like an exercise in a gamified system, or whether the result will matter financially / ‘existentially’ – as when you try to impress your future client or employer, or use the project results to build your own business. The major difference is in weighing risks and rewards, efforts and long-term consequences. Even ‘applied hacking’ in Capture-the-Flag-like contests is different from real-life pentesting. It makes all the difference if you just lose ‘points’ and miss the ‘flag’, or if you inadvertently take down a production system and violate your contract.

So I wonder if the Joy of Theoretical Learning is to some extent due to its risk-free nature. As long as you just learn about all those super interesting things because you want to know – it is innocent play. Only if you finally touch something in the real world, and touching things has hard consequences – only then do you know if you are truly ‘interested enough’.

Sorry, but I told you I will post stream-of-consciousness-style now and then 🙂

I think it is OK to re-use the image of my beloved pre-1900 physics book I used in the 2014 post:

Consequences of the Second Law of Thermodynamics

Why a Carnot process using a Van der Waals gas – or another fluid with an uncommon equation of state – also runs at Carnot’s efficiency.

Textbooks often refer to an ideal gas when introducing Carnot’s cycle – it’s easy to calculate heat energies and work in this case. This might seem to imply that not only must the engine be ‘ideal’ – reversible – but also the working fluid has to be ‘ideal’ in some sense. No, it does not, as explicitly shown in this paper: The Carnot cycle with the Van der Waals equation of state.

In this post I am considering a class of substances which is more general than the Van der Waals gas, and I come to the same conclusion. Unsurprisingly. You only need to imagine Carnot’s cycle in a temperature-entropy (T-S) diagram: The process is represented by a rectangle for both ideal and Van der Waals gas. Heat energies and work needed to calculate efficiency can be read off, and the – universal – maximum efficiency can be calculated without integrating over potentially wiggly pressure-volume curves.

But the fact that we can use the T-S diagram at all – the fact that the concept of entropy makes sense – is a consequence of the Second Law of Thermodynamics. It also states that a Perpetuum Mobile of the Second Kind is not possible: You cannot build a machine that converts 100% of the heat energy in a temperature bath to mechanical energy. This statement sounds philosophical, but it puts constraints on the way real materials can behave, and I think these constraints on the relations between physical properties are stronger than one might intuitively expect. If you pick an equation of state – the pressure as a function of volume and temperature, like the wavy Van der Waals curve – the behavior of the specific heat is locked in. In a sense, the functions describing the material’s properties have to conspire in just the right way to yield the simple rectangle in the T-S plane.

The efficiency of a perfectly reversible thermodynamic engine (converting heat to mechanical energy) has a maximum well below 100%. If the machine uses two temperature baths with constant temperatures T_1 and T_2, the heat energies exchanged between machine and baths Q_1 and Q_2 for an ideal reversible process are related by:

\frac{Q_1}{T_1} + \frac{Q_2}{T_2} = 0

(I wrote about the related proof by contradiction before – avoiding the notion of entropy at all costs.) This ideal process and this ideal efficiency could also be used to actually define the thermodynamic temperature (as it emerges from statistical considerations; I have followed Landau and Lifshitz’s arguments in this post on statistical mechanics and entropy).
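Spelling out the consequence for the efficiency – a small step not written down explicitly above: energy conservation gives the work per cycle as W = Q_1 + Q_2 (with Q_2 negative, the heat dumped into the colder bath), so

\eta = \frac{W}{Q_1} = 1 + \frac{Q_2}{Q_1} = 1 - \frac{T_2}{T_1}

which is strictly less than 1 as long as the colder bath is not at absolute zero.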

Any thermodynamic process using any type of substance can be imagined as a combination of lots of Carnot engines operating between lots of temperature baths at different temperatures (see e.g. Feynman’s lecture). The area in the p-V diagram that is traced out in a cyclic process is split into infinitely many Carnot processes. For each process small heat energies \delta Q are transferred. Summing up the contributions of all processes, only the loop at the edge remains, and thus …

\oint \frac{\delta Q}{T} = 0

which means that for a reversible process \frac{\delta Q}{T} actually has to be the total differential dS of a state function S … which is called entropy. This argument used in thermodynamics textbooks is kind of a ‘reverse’ argument to the statistical one – which introduces ‘entropy first’ and ‘temperature second’.

What I need in the following derivations are the relations between differentials that represent a version of the First and Second Law:

The First Law of Thermodynamics states that heat is a form of energy, so

dE = \delta Q - pdV

The minus sign is due to the fact that the system’s energy decreases when its volume increases – the expanding substance does work on its surroundings. (There might be other thermodynamic degrees of freedom, like the magnetization of a magnetic substance – and thus other pairs of variables than p and V.)

Inserting the definition of entropy S via its total differential dS = \frac{\delta Q}{T}, we obtain this relation …

dS = \frac{dE + pdV}{T}

… from which follow lots of relations between thermodynamic properties!

I will derive one of them to show how strong the constraints are that the Second Law actually imposes on the physical properties of materials: When the so-called equation of state is given – the pressure as a function of volume and temperature p(V,T) – then you also know something about the specific heat. For an ideal gas pV is simply a constant times temperature.

S is a function of the state, so picking V and T as independent variables, entropy’s total differential is:

dS = (\frac{\partial S}{\partial T})_V dT + (\frac{\partial S}{\partial V})_T dV

On the other hand, from the definition of entropy / the combination of 1st and 2nd Law given above it follows that

dS = \frac{1}{T} \left \{ (\frac{\partial E }{\partial T})_V dT + \left [ (\frac{\partial E }{\partial V})_T + p \right ]dV \right \}

Comparing the coefficients of dT and dV, the partial derivatives of entropy with respect to volume and temperature can be expressed as functions of energy and pressure. The order of partial differentiation does not matter:

\left[\frac{\partial}{\partial V}\left(\frac{\partial S}{\partial T}\right)_V \right]_T = \left[\frac{\partial}{\partial T}\left(\frac{\partial S}{\partial V}\right)_T \right]_V

Thus differentiating each derivative of S once more with respect to the other variable yields:

[ \frac{\partial}{\partial V} \frac{1}{T} (\frac{\partial E }{\partial T})_V ]_T = [ \frac{\partial}{\partial T} \frac{1}{T} \left [ (\frac{\partial E }{\partial V})_T + p \right ] ]_V

What I actually want is a result for the specific heat: (\frac{\partial E }{\partial T})_V – the energy you need to put in per kelvin to heat up a substance at constant volume, usually called C_V. I keep going, hoping that something like this derivative will show up. The mixed derivative \frac{1}{T} \frac{\partial^2 E}{\partial V \partial T} shows up on both sides of the equation, and these terms cancel each other. Collecting the remaining terms:

0 = -\frac{1}{T^2} (\frac{\partial E }{\partial V})_T -\frac{1}{T^2} p + \frac{1}{T}(\frac{\partial p}{\partial T})_V

Multiplying by T^2 and re-arranging …

(\frac{\partial E }{\partial V})_T = -p +T(\frac{\partial p }{\partial T})_V = T^2(\frac{\partial}{\partial T}\frac{p}{T})_V
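As a quick sanity check – a standard result, spelled out here just as an illustration: for an ideal gas p/T does not depend on temperature at all (pV = RT in suitable units, so p/T = R/V), and therefore

(\frac{\partial E }{\partial V})_T = T^2(\frac{\partial}{\partial T}\frac{R}{V})_V = 0

– the well-known statement that the energy of an ideal gas depends only on temperature, not on volume.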

Again, noting that the order of differentiation does not matter, we can use this result to check if the specific heat at constant volume – C_V = (\frac{\partial E }{\partial T})_V – depends on volume:

(\frac{\partial C_V}{\partial V})_T = \frac{\partial}{\partial V}[(\frac{\partial E }{\partial T})_V]_T = \frac{\partial}{\partial T}[(\frac{\partial E }{\partial V})_T]_V

But we know the last partial derivative already and insert the expression derived before – a function that is fully determined by the equation of state p(V,T):

(\frac{\partial C_V}{\partial V})_T= \frac{\partial}{\partial T}[(-p +T(\frac{\partial p }{\partial T})_V)]_V = -(\frac{\partial p}{\partial T})_V +  (\frac{\partial p}{\partial T})_V + T(\frac{\partial^2 p}{\partial T^2})_V = T(\frac{\partial^2 p}{\partial T^2})_V

So if the pressure depends e.g. only linearly on temperature, the second derivative with respect to T is zero and C_V does not depend on volume but only on temperature. The equation of state says something about the specific heat.

The idealized Carnot process contains four distinct steps. In order to calculate efficiency for a certain machine and working fluid, you need to calculate the heat energies exchanged between machine and bath on each of these steps. Two steps are adiabatic – the machine is thermally insulated, thus no heat is exchanged. The other steps are isothermal, run at constant temperature – only these steps need to be considered to calculate the heat energies denoted Q_1 and Q_2:

Carnot-cycle-p-V-diagram

Carnot process for an ideal gas: A-B: isothermal expansion, B-C: adiabatic expansion, C-D: isothermal compression, D-A: adiabatic compression. (Wikimedia, public domain, see link for details.)

I am using the First Law again and inserting the result for (\frac{\partial E}{\partial V})_T which was obtained from the combination of both Laws – the goal is to express heat energy as a function of pressure and specific heat:

\delta Q= dE + p(T,V)dV = (\frac{\partial E}{\partial T})_V dT + (\frac{\partial E}{\partial V})_T dV + p(T,V)dV
= C_V(T,V) dT + [-p +T(\frac{\partial p(T,V)}{\partial T})_V] dV + p(T,V)dV = C_V(T,V)dT + T(\frac{\partial p(T,V)}{\partial T})_V dV

Heat Q is not a function of the state defined by V and T – that’s why the inexact differential δQ is denoted by the Greek δ. The change in heat energy depends on how exactly you get from one state to another. But we know what the process should be in this case: It is isothermal, therefore dT is zero and the heat energy is obtained by integrating over volume only.

We need p as a function of V and T. The equation of state for an ideal gas says that pV is proportional to temperature. I am now considering a more general equation of state of the form …

p = f(V)T + g(V)

The Van der Waals equation of state takes into account that particles in the gas interact with each other and that they have a finite volume. (Switching units from total volume V [m³] to specific volume v [m³/kg], so that the gas constant R [kJ/kgK] can be used rather than absolute numbers of particles, and to use the more common representation – comparing to pv = RT):

p = \frac{RT}{v - b} - \frac{a}{v^2}

This equation also matches the general pattern.
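Explicitly, comparing the Van der Waals equation to the general form p = f(v)T + g(v):

f(v) = \frac{R}{v - b} , \qquad g(v) = -\frac{a}{v^2}

Plugging this f into the earlier result for the volume dependence of the energy gives (\frac{\partial E }{\partial v})_T = -p + T f(v) = \frac{a}{v^2}: unlike the ideal gas, the Van der Waals gas has an energy that does depend on volume – via the attraction parameter a.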

Van der Waals isotherms

Van der Waals isotherms (curves of constant temperature) in the p-V plane: Depending on temperature, the functions show a more or less pronounced ‘wave’ with a maximum and a minimum, in contrast to the ideal-gas-like hyperbolas (p = RT/v) for high temperatures. (By Andrea Insinga, Wikimedia, for details see link.)

In both cases pressure depends only linearly on temperature, and so (\frac{\partial C_V}{\partial V})_T is 0. Thus specific heat does not depend on volume, and I want to stress that this is a consequence of the fundamental Laws and the p(T,V) equation of state, not an arbitrary, additional assumption about this substance.

The isothermal heat energies are thus given by the following, integrating T(\frac{\partial p(T,V)}{\partial T})_V  = T f(V) over V:

Q_1 = T_1 \int_{V_A}^{V_B} f(V) dV
Q_2 = T_2 \int_{V_C}^{V_D} f(V) dV

(So if Q_1 is positive, Q_2 has to be negative.)
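Just to put concrete expressions behind the abstract f(V): for the Van der Waals gas, with f(v) = \frac{R}{v - b}, these integrals can be evaluated explicitly –

Q_1 = R T_1 \ln\frac{v_B - b}{v_A - b} , \qquad Q_2 = R T_2 \ln\frac{v_D - b}{v_C - b}

For b = 0 this reduces to the familiar ideal-gas result R T \ln\frac{v_B}{v_A}.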

In the adiabatic processes δQ is zero, thus

C_V(T,V)dT = -T(\frac{\partial p(T,V)}{\partial T})_V dV = -T f(V) dV
\int \frac{C_V(T,V)}{T}dT = \int -f(V) dV

This is useful as we already know that specific heat only depends on temperature for the class of substances considered, so for each adiabatic process…

\int_{T_1}^{T_2} \frac{C_V(T)}{T}dT = \int_{V_B}^{V_C} -f(V) dV
\int_{T_2}^{T_1} \frac{C_V(T)}{T}dT = \int_{V_D}^{V_A} -f(V) dV

Adding these equations, the two integrals over temperature cancel and

\int_{V_B}^{V_C} f(V) dV = -\int_{V_D}^{V_A} f(V) dV

Carnot’s efficiency is work – the difference of the absolute values of the two heat energies – over the heat energy invested at the higher temperature T_1:

\eta = \frac {Q_1 - \left | Q_2 \right |}{Q_1} = 1 - \frac {\left | Q_2 \right |}{Q_1}
\eta = 1 - \frac {T_2}{T_1} \frac {\left | \int_{V_C}^{V_D} f(V) dV \right |}{\int_{V_A}^{V_B} f(V) dV}

The integral from A to B can be replaced by an integral over the alternative path A-D-C-B (the integral of f(V)dV over the closed path is zero) and

\int_{A}^{B} = \int_{A}^{D} + \int_{D}^{C}+ \int_{C}^{B}

But the relation between the B-C and A-D integral derived from considering the adiabatic processes is equivalent to

-\int_{C}^{B} = \int_{B}^{C} = - \int_{D}^{A} = \int_{A}^{D}

Thus two terms in the alternative integral cancel and

\int_{A}^{B} = \int_{D}^{C}

… and finally the integrals in the efficiency cancel. What remains is Carnot’s efficiency:

\eta = \frac {T_1 - T_2}{T_1}

But what if the equation of state is more complex and the specific heat also depends on volume?

Yet another way to state the Second Law is to say that the efficiencies of all reversible processes have to be equal – and equal to Carnot’s efficiency. Otherwise you get into a thicket of contradictions (as I highlighted here). The authors of the VdW paper say they are able to prove this for infinitesimal cycles, which sounds plausible of course: As mentioned at the beginning, splitting up any reversible process into many processes that use only a tiny part of the co-ordinate space is the ‘standard textbook procedure’ (see e.g. Feynman’s lecture, especially figure 44-10).

But you could see it immediately, without calculating anything, by having a look at the process in a T-S diagram instead of the p-V representation. A process made up of two isothermal and two adiabatic processes is by definition (of entropy, see above) a rectangle, no matter what the equation of state of the working substance is. Heat energy and work can easily be read off as the rectangles between or below the straight lines:

Carnot-cycle-T-S-diagram

Carnot process displayed in the entropy-temperature plane. No matter if the working fluid is an ideal gas following the pv = RT equation of state or if it is a Van der Waals gas that may show a ‘wave’ with a maximum and a minimum in a p-V diagram – in the T-S diagram all of this will look like rectangles and thus exhibit the maximum (Carnot’s) efficiency.

In the p-V diagram one might see curves of weird shape, but when calculating the relation between entropy and temperature, the weird dependencies of specific heat and pressure on V and T compensate for each other. They are related because of the differential relation implied by the Second Law.

Computers, Science, and History Thereof

I am reading three online resources in parallel – on the history and the basics of computing, computer science, software engineering, and the related culture and ‘philosophy’. An accidental combination I find most enjoyable.

Joel on Software: Joel Spolsky’s blog – a collection of classic essays. What every developer needs to know about Unicode. New terms like Astronaut Architects and Leaky Abstractions. How to start a self-funded software company, how to figure out the price of software, how to write functional specifications. Bringing back memories of my first encounters with Microsoft VBA. He has the best examples – Martian Headsets to explain web standards.

The blog started in 1999 – rather shortly after I had entered the IT industry. So it is an interesting time capsule, capturing technologies and trends I was sort of part of – including the relationship with one large well-known software company.

Somewhere deep in Joel’s blog I found references to another classic; it was in a piece of advice on how to show passion as an applicant for a software developer job. Tell them how reading this moved you to tears:

Structure and Interpretation of Computer Programs. I think I have found the equivalent to Feynman’s Physics Lectures in computer science! I have hardly ever read a textbook or attended a class that was both so philosophically insightful and useful in a hands-on, practical way. Using Scheme (Lisp) as an example, important concepts are introduced step-by-step, via examples, viewed from different perspectives.

It was amazing how far you can get with purely Functional Programming. I did not even notice that they had not used a single assignment (Data Mutation) until far into the course.
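Just to illustrate what ‘not a single assignment’ means in practice – a toy sketch in Python (the course itself uses Scheme, so this is merely my own illustration, not an excerpt from it): computing a sum of squares without ever mutating a variable, the ‘loop’ being recursion and the ‘accumulator’ being a function argument.

# Purely functional style: no variable is re-assigned anywhere.
def sum_of_squares(numbers, total=0):
    if not numbers:
        return total
    head, *tail = numbers
    return sum_of_squares(tail, total + head * head)

print(sum_of_squares([1, 2, 3, 4]))  # prints 30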

The quality of the resources made available for free is incredible – which holds for all the content I am praising in this post: full textbook, video lectures with transcripts, slides with detailed comments. It is also good to know – and reassuring – that despite the allegedly fast-paced changes of technology, basic concepts have not changed that much in decades.

But if you are already indulging in nostalgic thoughts why not catch up on the full history of computing?

Creatures of Thought. A sublime, book-like blog on the history of computing – starting with the history of telephone networks and telegraphs, covering computing machines – electro-mechanical or electronic – and related, maybe unappreciated, hardware components like the relay, and including biographic vignettes of the heroes involved.

The author’s PhD thesis (available for download on the About page) covers the ‘information utility’ vision that was ultimately superseded by the personal computer. This is an interesting time capsule for me as well, as this story ends about where my personal journey started – touching personal computers in the late 1980s, but having been taught the basics of programming by sending my batch jobs to an ancient mainframe.

From such diligently done history of engineering I can only learn not to rush to any conclusions. There are no simple causes and effects, or unambiguous stories about who invented what and who was first. It’s all subtle evolution and meandering narratives, randomness and serendipity. Quoting from the post that indicates the beginning of the journey, on the origins of the electric telegraph:

Our physics textbooks have packaged up the messy past into a tidy collection of concepts and equations, eliding centuries of development and conflict between competing schools of thought. Ohm never wrote the formula V = IR, nor did Maxwell create Maxwell’s equations.

Though I will not attempt to explore all the twists and turns of the intellectual history of electricity, I will do my best to present ideas as they existed at the time, not as we retrospectively fit them into our modern categories.

~

Phone, 1970s, Austria

The kind of phone I used at the time when the video lectures for Structure and Interpretation of Computer Programs had been recorded and when I submitted my batch jobs of Fortran code to be compiled. I have revived the phone now and then.

 

Ploughing Through Theoretical Physics Textbooks Is Therapeutic

And finally science confirms it, in a sense.

Again and again, I’ve harped on this pet theory of mine – on this blog and elsewhere on the web: At the peak of my immersion in the so-called corporate world, as a super-busy bonus miles-collecting consultant, I turned to the only solace: Getting up (even) earlier, and starting to re-read all my old mathematics and physics textbooks and lecture notes.

The effect was two-fold: It made me more detached, perhaps more Stoic when facing the seemingly urgent challenges of the accelerated world. Maybe it already prepared me for a long and gradual withdrawal from that biosphere. But surprisingly, I felt it also made my work results (even ;-)) better: I clearly remember compiling documentation I wrote after setting up some security infrastructure with a client. Writing precise documentation was again more like casting scientific research results into stone, carefully picking each term and trying to be as succinct as possible.

Like anybody else, I enjoy reading about psychological research that confirms my biases and my one-datapoint-based research – and here it finally is. Thanks to Professor Gary for sharing it. Science says that Corporate-Speak Makes You Stupid. Haven’t we – Dilbert fans – always felt that this has to be true?

… I’ve met otherwise intelligent people who, after working with management consultants, are convinced that infinitely-malleable concepts like “disruptive innovation,” “business ecosystem,” and “collaborative culture” have objective value.

In my post In Praise of Textbooks with Tons of Formulas I focused on possible positive explanations, like speeding up your rational System 2 ((c) Daniel Kahneman) – by getting accustomed to mathematics again. By training yourself to recognize patterns and to think out of the box when trying to find the clever twist to solve a physics problem. Re-reading this, I cringe though: Thinking out of the box has entered the corporate vocabulary already. Disclaimer: I am talking about ways to pick a mathematical approach, by drawing on other, slightly related problems intuitively – in the way Kahneman explains the so-called intuition of experts as pattern recognition.

But perhaps the explanation is really as simple as that we just need to shield ourselves from negative effects of certain ecosystems and cultures that are particularly intrusive and mind-bending. So this is my advice to physics and math graduates: Do not rely on your infamous analytical skills forever. First, using that phrase in a job application sounds like phony hollow BS (as unfortunately any self-advertising of social skills does). Second, these skills are real, but they will decay exponentially if you don’t hone them.

6 volumes on all of Theoretical Physics - 1960s self-consistent series by my late professor Wilhelm Macke

Learning General Relativity

Math blogger Joseph Nebus does another A – Z series of posts, explaining technical terms in mathematics. He asked readers for their favorite pick of things to be covered in this series, and I came up with General Covariance. Which he laid out in this post – in his signature style, using neither equations nor pop-science images like deformed rubber mattresses – but ‘just words’. As so often, he manages to explain things really well!

Actually, I asked for that term as I am in the middle of yet another physics (re-)learning project – in the spirit of my ventures into QFT a while back.

For a while now I have tried (on this blog) to cover only the physics related to something I have both education in and hands-on experience with. Re General Relativity I have neither: My PhD was in applied condensed-matter physics – lasers, superconductors, optics – and this article by physicist Chad Orzel about What Math Do You Need For Physics? covers well what sort of math you need in that case. Quote:

I moved into the lab, and was concerned more with technical details of vacuum pumps and lasers and electronic circuits and computer data acquisition and analysis.

So I cannot find the remotest way to justify why I would need General Relativity on a daily basis – insider jokes about very peculiarly torus-shaped underground water/ice tanks for heat pumps aside.

My motivation is what I described in this post of mine: Math-heavy physics is – for me, that means a statistical sample of 1 – the best way of bracing myself for any type of tech / IT / engineering work. This positive effect is not even directly related to the math/physics aspects of that work.

But I also noticed ‘on the internet’ that there is a community of science and math enthusiasts, who indulge in self-studying theoretical physics seriously as a hobby. Often these are physics majors who ended up in very different industry sectors or in management / ‘non-tech’ jobs and who want to reconnect with what they once learned.

For those fellow learners I’d like to publish links to my favorite learning resources.

There seem to be two ways to start a course or book on GR, and sometimes authors toggle between both modes. You can start from the ‘tangible’ physics of our flat space (spacetime) plus special relativity and then gradually ‘add a bit of curvature’ and related concepts. In this way the introduction sounds familiar, and less daunting. Or you could try to introduce the mathematical concepts at a most rigorous abstract level, and return to the actual physics of our 4D spacetime and matter as late as possible.

The latter makes a lot of sense, as you had better unlearn some things you took for granted about vector and tensor calculus in flat space. A vector must no longer be visualized as an arrow that can be moved around carelessly in space, and one must be very careful in visualizing what transforming coordinates really means.

For motivation or as an ‘upper level pop-sci intro’…

Richard Feynman’s lecture on curved space might be a very good primer. Feynman explains what curved space and curved spacetime actually mean. Yes, he is using that infamous beetle on a balloon, but he also gives some numbers obtained by back-of-the-envelope calculations that explain important concepts.

For learning about the mathematical foundations …

I cannot praise these Lectures given at the Heraeus International Winter School Gravity and Light 2015 enough. Award-winning lecturer Frederic P. Schuller goes to great lengths to introduce concepts carefully and precisely. His goal is to make all implicit assumptions explicit and to avoid allusions to misguided ‘intuitions’ one might have got used to when working with vector analysis, tensors, gradients, derivatives etc. in our tangible 3D world – covered by what he calls ‘undergraduate analysis’. Only in lecture 9 is the first connection made back to Newtonian gravity. Then it is back to math only for some more lectures, until finally our 4D spacetime is discussed in lecture 13.

Schuller mentions in passing that Einstein himself struggled with the advanced math of his own theory, e.g. in the sense of not yet distinguishing clearly between the mathematical structure that represents the real world (a topological manifold) and the multi-dimensional chart we project our world onto when using an atlas. It is interesting to pair these lectures with this paper on the history and philosophy of general relativity – a link Joseph Nebus has pointed to in his post on covariance.

When learning physics or math from videos you need to be much more disciplined than when plowing through textbooks – in the sense that you absolutely have to do every single step in a derivation on your own. It is easy to delude yourself that you understood something by following a derivation passively, without calculating anything yourself. So what makes these lectures so useful is that tutorial sessions have been recorded as well: Tutorial sheets and videos can be found here.
(Edit: The Youtube channel of the event does not have all the recordings of the tutorial sessions; only the conference website has them. It seems the former domain does not work any more, but the content is preserved at gravity-and-light.herokuapp.com.)

You also find brief notes for these lectures here.

For a ‘physics-only’ introduction …

… I picked a classical, ‘legendary’ resource: Landau and Lifshitz give an introduction to General Relativity in the last third of the second volume in their Course of Theoretical Physics, The Classical Theory of Fields. Landau and Lifshitz’s text is terse, perhaps similar in style to Dirac’s classical introduction to quantum mechanics. No humor, but sublime and elegant.

Landau and Lifshitz need neither manifolds nor tangent bundles, and they use the 3D curvature tensor of space a lot, in addition to the metric tensor of 4D spacetime. They introduce concepts of differences in space and time right from the start, plus the notion of simultaneity. Mathematicians might be shocked by a somewhat handwaving, ‘typical physicist’s’ way of dealing with differentials, the way vectors at different points in space are related, etc. – neglecting (at first sight; explore every footnote in detail!) the tower of mathematical structures you actually need to do this precisely.

But I would regard Lev Landau as sort of a Richard Feynman of The East, and it takes his genius not to make any silly mistakes when taking the seemingly intuitive notions too literally. I recommend this book only in combination with a most rigorous introduction.

For additional reading and ‘bridging the gap’…

I recommend Sean Carroll’s Lecture Notes on General Relativity from 1997 (a precursor of his textbook), together with his short No-Nonsense Introduction to GR as a summary. Carroll switches between more intuitive physics and very formal math. He keeps his conversational tone – well known to readers of his popular physics books – which makes his lecture notes a pleasure to read.

Artist's concept of general relativity experiment (Public Domain, NASA, Wikimedia)

__________________________________

So this was a long-winded way to present just a bunch of links. This post should also serve as sort of an excuse that I haven’t been really active on social media or followed up closely on other blogs recently. It seems in winter I am secluding myself from the world in order to catch up on theoretical physics.

Random Things I Have Learned from My Web Development Project

It’s nearly done (previous episode here).

I have copied all the content from my personal websites, painstakingly disentangling snippets of different ‘posts’ that were physically contained in the same ‘web page’, re-assigning existing images to them, adding tags, consolidating information that was stored in different places. Raking the Virtual Zen Garden – again.

New website: A 'post.'

Draft of the layout, showing a ‘post’. Left and right pane vanish in responsive fashion if the screen gets too small.

… Nothing you have not seen in more elaborate fashion elsewhere. For me the pleasure is in creating the whole thing bottom up not using existing frameworks, content management systems or templates – requiring an FTP client and a text editor only.

I spent a lot of time on designing my redirect strategy. For historical reasons, all my sites use the same virtual web server. Different sites have been separated just by different virtual directories. So in order to display the e-stangl.at content as one stand-alone website, a viewer accessing e-stangl.at is redirected to e-stangl.at/e/. This means that entering [personal.at]/[business] would result in showing the business content at the personal URL. In order to prevent this, the main page generation script checks for the virtual directory and redirects ‘bottom-up’ to [business.at]/[business].

In the future, I am going to use a new hostname for my website. In addition, I want to have the option to migrate only some applications while keeping the others tied to the old ASP scripts temporarily. This means more redirect logic, especially as I want to test all the redirects. I have a non-public test site on the same server, but I had never tested redirects, as that means creating loads of test host names. Due to the complexity of the redirects to come, I added names like wwwdummy for every domain, redirecting to my new main test host name – in the same way as the www URLs would redirect to my new public host name.

And, lest we forget, I am obsessed with keeping old URLs working. I don’t like it if websites are migrated to a new content management system, changing all the URLs. As I mentioned before, I already use ASP.NET Routing for having nice URLs with the new site: A request for /en/2014/10/29/some-post-title does not access a physical folder; instead the ‘flat-file database engine’ I wrote from scratch searches for the proper content text file based on a SQL string handed to it, retrieves attributes from both file name and file content, and displays HTML content and attributes like title and thumbnail image properly.

New website: Flat-file database.

Flat-file database: Two folders, ‘pages’ and ‘posts’. Post file names include creation date, short relative URL and category. Using the ascx extension (actually meant for .NET ‘user controls’), as the web server will not return these files directly but respond with 404 – no need to tweak permissions.

The top menu, the tag cloud, the yearly/monthly/daily archives, the list of posts on the Home page, the XML RSS feed and the XML sitemap are also created by querying these sets of files.

New web site: File / database entry

File representing a post: Upper half – meta tags and attributes, lower half – after attribute ‘content’: Actual content in plain HTML.

Now I want to redirect from the old .asp files (to be deleted from the server at some point in the future) to these nice URLs. My preferred solution for this class of redirects is using a rewrite map hard-coded in the web server’s config file. From my spreadsheet documentation of the 1:n relation of old ASP pages to new ‘posts’ I have automatically created the XML tags to be inserted in the ‘rewrite map’.
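Not my actual tooling – just a sketch in Python of the idea, with a hypothetical file name and column layout: read the old-URL/new-URL pairs exported from the spreadsheet and emit the XML entries for the rewrite map.

import csv

# redirects.csv (hypothetical export from the spreadsheet):
# one line per mapping, e.g.  /e/oldpage.asp;/en/2014/10/29/some-post-title
with open("redirects.csv", newline="", encoding="utf-8") as f:
    for old_url, new_url in csv.reader(f, delimiter=";"):
        # One <add> element per old page, to be pasted into the rewrite map
        # section of the web server's configuration file.
        print('<add key="{}" value="{}" />'.format(old_url.strip(), new_url.strip()))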

Now that the boring part is over and I have scared everybody off (but just in case, you can find more technical information on the last update on the English versions of all websites, e.g. here) …

… I come up with my grand insights, click-bait X-Things-You-Need-To-Know-About-Something-You-Should-Not-Do-and-Could-Not-Care-Less style:

It is sometimes painful to read really old content, like articles, manifestos and speeches from the last century. Yet I don’t hide or change anything.

After all, this is perhaps the point of such a website. I did not go online for the interaction (of social networks, clicks, likes, comments). Putting your thoughts out there, on the internet that never forgets, is like publishing a book you cannot un-publish. It is about holding yourself accountable and aiming at self-consistency.

I am not a visual person. If I had been more courageous I’d use plain Courier New without formatting and images. Just for the fun of it, I tested adding dedicated images to each post and creating thumbnails from them – and I admit it adds to the content. Disturbing, that is!

I truly love software development. After a day of ‘professional’ software development (simulations re physics and engineering) I am still happy to plunge into this personal web development project. I realized programming is one of the few occupations that was part of every job I ever had. Years ago, soul-searching and preparing for the next career change, I rather figured the main common feature was teaching and know-how transfer – workshops and academic lectures etc. But I am relieved I gave that up; perhaps I just tried to live up to the expected ideal of the techie who will finally turn to a more managerial or at least ‘social’ role.

You can always find perfect rationales for irrational projects: Our web server had been hacked last year (ASP pages with spammy links put into some folders), and from backlinks in the network of spammy links I conclude that classic ASP pages had been targeted. My web server was then hosted on Windows 2003, at that time still fully supported. I made use of Parent Paths (../ relative URLs), which might have eased the hack. Now I am migrating to ASP.NET with the goal of turning off Classic ASP completely, and I already got rid of the Parent Paths requirement by editing the existing pages.

This website and my obsession with keeping the old stuff intact reflect my appreciation of The Existing – of Being Creative With What You Have. Re-using my old images and articles feels like re-using our cellar as a water tank. Both of these are passions I might not share with too many people.

My websites had been an experiment in compartmentalizing my thinking and writing – ‘Personal’, ‘Science’, ‘Weird’; at the very beginning the latter two were authored pseudonymously – briefly. My wordpress.com blog has been one quick shot at a Grand Unified Theory of my Blogging, and I could not prevent my personal websites from becoming more and more intertwined, too, in the past years. So finally both reflect my reluctance to separate my personal and professional self.

My website is self-indulgent – in content and in meta-content. I realize that the technical features I have added are exactly what I need to browse my own stuff for myself, not necessarily what readers might expect or what is considered standard practice. One example is my preference for a three-pane design, and for that infinite (no dropdown-menu) archive.

New website: Category page.

Nothing slows a website down like social media integration. My text file management is for sure not the epitome of efficient programming, but I was flabbergasted by how fast it was to display nearly 150 posts at once – compared to the endless sending back and forth questionable stuff between social networks, tracking, and ad sites (watch the status bar!).

However, this gives me some ideas about the purpose of this blog versus the purpose of my website. Here, on the WordPress.com blog, I feel more challenged to write self-contained, complete, edited, shareable (?) articles – often based on extensive research and consolidation of our original(*) data (OK, there are exceptions, such as this post), whereas the personal website is more of a container of drafts and personal announcements. This also explains why the technical sections of my personal websites contain rather collections of links than full articles.

(*) Which is why I totally lose my subversive sense of humour and turn into a nitpicking, furious submitter of copyright complaints if somebody steals my articles published here, on the blog. However, I wonder how I’d react if somebody infringed my rights as the ‘web artist’ featured on subversiv.at.

For 15 years I have spent a lot of time on (re-)organizing and categorizing my content. This blog has also been part of this initiative. That re-organization is what I like websites and blogs for – a place to play with structure and content, and their relationship. Again, doing this in public makes me hold myself accountable. Categories are weird – I believe they can only be done right with hindsight. Now all my websites, blogs, and social media profiles eventually use the same categories, which have evolved naturally and are very unlike what I might have planned ‘theoretically’.

Structure should be light-weight. I started my websites with the idea of first- and second-level ‘menus’ and hardly any emphasis on time stamps. But your own persona and your ideas seem to be moving targets. I started commenting on my old articles, correcting or amending what I said (as I don’t delete, see above). subversiv.at has been my Art-from-the-Scrapyard-Weird-Experiments playground, before and in addition to the Art category here, and over there I enjoyed commenting in English on German articles and vice versa. But the Temporal Structure, the Arrow of Time, was stronger; so I finally made the structure more blog-like.

Curated lists … were most often just ‘posts’. I started collecting links, like resources for specific topics or my own posts written elsewhere, but after some time I did not consider them so useful any more. Perhaps somebody noticed that I have mothballed and hidden my Reading list and Physics Resources here (the latter moved to my ‘science site’ radices.net – the URLs do still work, of course). Again: The arrow of time wins!

I loved and I cursed the bilingual nature of all my sites. Cursed, because the old structure made it too obvious when the counterpart in the other language was ‘missing’; so it felt like a translation assignment. However, I don’t like translations. I am actually not even capable of really translating the spirit of my own posts. Sometimes I feel like writing in English, sometimes I feel like writing in German. Some days or weeks or months later I feel like reflecting on the same ideas, using the other language. Now I came up with a loose connection between an English and a German article, referencing each other via a meta attribute, which results in an unobtrusive URL pointing to the other version.

Quantitative analysis helps to correct distorted views. I thought I wrote ‘so much’. But the tangle of posts and pages in the old sites obscured the fact that the content translates to only 138 posts in German and 78 in English. Actually, I wrote in bursts, typically immediately before and after an important change, and the first main burst in 2004/2005 was German-only. I think the numbers would have been higher had I given up on the menu-based approach earlier, and rather written a new, updated ‘post’ instead of adding infinitesimal amendments to the existing pseudo-static pages.

Analysing my own process of analysing puts me into this detached mode of thinking. I have shielded myself from social media timelines in the past weeks and tinkered with articles, content written long before somebody could have ‘shared’ it. I feel that it motivates me again to not care about things like word count (too long), target groups (weird mixture of armchair web psychology and technical content), and shareability.

My Flat-File Database

A brief update on my web programming project.

I have preferred to create online text by editing simple text files; so I only need a text editor and an FTP client as management tool. My ‘old’ personal and business web pages are currently created dynamically in the following way:
[Code for including a script (including other scripts)]
[Content of the article in plain HTML = inner HTML of content div]
[Code for writing footer]

The main script(s) create layout containers, meta tags, navigation menus etc.

Meta information about pages or about the whole site is kept in CSV text files. There are e.g. files with tables…

  • … listing all pages in each site and their attributes – like title, key words, hover texts for navigation links – or
  • … tabulating all main properties of all web sites – such as ‘tag lines’ or the name of the CSS file.

A bunch of CSV files / tables can be accessed like a database by defining the columns in a schema.ini file, and using a text driver (on my Windows web server). I am running SQL queries against these text files, and it would be simple to migrate my CSV files to a grown-up database. But I tacked on RSS feeds later; these XML files are hand-crafted and basically a parallel ‘database’.

This CSV file database is not yet what I mean by flat-file database: In my new site the content of a typical ‘article file’ should be plain text, free from code. All meta information will be included in each file, instead of putting it into the separate CSV files. A typical file would look like this:

title: Some really catchy title
headline: Some equally catchy, but a bit longer headline
date_created: 2015-09-15 11:42
date_changed: 2015-09-15 11:45
author: elkement
[more properties and meta tags]
content:
Text in plain HTML.

The logic for creating formatted pages with header, footer, menus etc. has to be contained in code separate from these files, and the text files need to be parsed for meta data and content. The set of files has effectively become ‘the database’, the plain text content being just one of many attributes of a page. Folder structure and file naming conventions are part of the ‘database logic’.
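The parsing itself is simple. Here is a sketch of the idea in Python – my actual implementation is in VB.NET (see below) – treating everything before the ‘content:’ line as [name]: [value] attributes and the rest as the HTML content:

def parse_article(path):
    """Split an article text file into a dict of meta attributes and the HTML content."""
    meta, content_lines, in_content = {}, [], False
    with open(path, encoding="utf-8") as f:
        for line in f:
            if in_content:
                content_lines.append(line)
            elif line.strip().lower() == "content:":
                in_content = True          # everything below is plain HTML
            elif ":" in line:
                name, value = line.split(":", 1)
                meta[name.strip()] = value.strip()
    return meta, "".join(content_lines)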

I figured this was all an unprofessional hack until I found many so-called flat-file / database-less content management systems on the internet, intended to be used with smaller sites. They comprise some folders with text files, to be named according to a pre-defined schema plus parsing code that will extract meta data from files’ contents.

Motivated by that find, I created the following structure in VB.NET from scratch:

  • Retrieving a set of text files from the file system based on search criteria – e.g. for creating the menu from all pages, or for finding the one specific file that should represent the current page – current as per the URL the user entered.
  • Code for parsing a text file for lines having a [name]: [value] structure.
  • Processing the nice URL entered by the user to make the web server pick the correct text file.

Speaking about URLs, so-called ASP.NET Routing came in handy: Before, I had used a few folders whose default page redirects to an existing page (such as /heatpump/ redirecting to /somefolder/heatpump.asp). Otherwise my URLs all corresponded to existing single files.

I use a typical blogging platform’s schema with the new site: If a user enters

/en/2015/09/15/some-cool-article/

the server accesses a text file whose name contains language, date, and title, such as:

2015-09-15_en_some-cool-article.txt

… and displays the content at the nice URL.
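The mapping between the nice URL and the file name is purely mechanical – a sketch of the idea in Python (the real logic lives in my VB.NET routing code):

def url_to_filename(url):
    """Map /en/2015/09/15/some-cool-article/ to 2015-09-15_en_some-cool-article.txt"""
    lang, year, month, day, slug = url.strip("/").split("/")
    return "{}-{}-{}_{}_{}.txt".format(year, month, day, lang, slug)

print(url_to_filename("/en/2015/09/15/some-cool-article/"))
# prints: 2015-09-15_en_some-cool-article.txt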

‘Language’ is part of the URL: If a user with a German browser explicitly accesses a URL starting with /en/, the language is effectively set to English. However, if the main page is hit, I detect the language from the header sent by the client.

I am not overly original: I use two categories of content – posts and pages – corresponding to text files organized in two different folders in the file system, and following different conventions for file names. Learning from my experience with hand-crafted menu pages in this blog here, I added:

  • A summary text included in the file, to be displayed in a list of posts per category.
  • A list of posts in a single category, displayed on the category / menu page.

The category is assigned to the post simply as part of the file name; moving a post to another category is done by renaming it.

Since I found that having to add my Google+ posts to just a single Collection was a nice exercise, I deliberately limit myself to one category per post.

Having built all the required search patterns and functions for creating lists of posts or menus or recent posts, or for extracting information from specific pages – such as the current page or the corresponding page in the other language – I realized that I needed a better, clear-cut separation between a high-level query for a bunch of attributes of any set of files meeting some criteria, and the lower level doing the search, file retrieval, and parsing.

So why not use genuine SQL commands at the top level – to be translated into file searches and file content parsing on the lower level?

I envisaged building the menu of all pages e.g. by executing something like

SELECT title, url, headline from pages WHERE isMenu=TRUE

and creating the list of recent posts on the home page by running

SELECT * FROM posts WHERE date_created < [some date]

This would also allow for a smooth migration to an actual relational database system if the performance of the file-based database turned out not to be that great after all.

I underestimated the effort of ‘building your own database engine’, but finally the main logic is done. My file system recordset class has this functionality (and I think I finally got the hang of classes and objects); a toy sketch of the idea follows after the list:

  • Parse a SQL string to check if it is well-formed.
  • Split it into pieces and translate pieces to names of tables (from FROM) and list of fields (from SELECT and WHERE).
  • For each field, check (against my schema) if the field should be encoded in the file’s name or if it is part of the name / value attributes in the file contents.
  • Build a file search pattern string with * at the right places from the file name attributes.
  • Get the list of files meeting this part of the WHERE criteria.
  • Parse the contents of each file and exclude those not meeting the ‘content fields’ criteria specified in the WHERE clause.
  • Stuff all attributes specified in the SELECT statement into a table-like structure (a dataTable in .NET) and return a recordset object – one that can be queried and handled like recordsets returned by standard database queries – that is: check for End Of File, MoveNext, or return the value of a specific cell in a column with a specific name.
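To make this more tangible, here is a heavily simplified toy sketch in Python – the real class is written in VB.NET and handles much more, e.g. attributes encoded in the file name, other WHERE operators, and a proper recordset object:

import os, re

def read_attrs(path):
    # Minimal attribute parser: [name]: [value] lines up to the 'content:' marker.
    attrs = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            if line.strip().lower() == "content:":
                break
            if ":" in line:
                name, value = line.split(":", 1)
                attrs[name.strip()] = value.strip()
    return attrs

def query(root, sql):
    """Toy 'file system recordset': SELECT <fields> FROM <folder> [WHERE <field>='<value>']."""
    m = re.match(r"SELECT\s+(.+?)\s+FROM\s+(\w+)(?:\s+WHERE\s+(\w+)\s*=\s*'?([^']*)'?)?\s*$",
                 sql, re.IGNORECASE)
    if not m:
        raise ValueError("Not a well-formed query: " + sql)
    fields, table, where_field, where_value = m.groups()
    fields = [f.strip() for f in fields.split(",")]
    rows = []
    for name in os.listdir(os.path.join(root, table)):
        attrs = read_attrs(os.path.join(root, table, name))
        if where_field and attrs.get(where_field) != where_value:
            continue
        rows.append(attrs if fields == ["*"] else {f: attrs.get(f, "") for f in fields})
    return rows

# Hypothetical usage: all menu pages with title and url
# for row in query("sitecontent", "SELECT title, url FROM pages WHERE isMenu='TRUE'"):
#     print(row["title"], row["url"])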

Now I am (re-)creating all collections of pages and posts using my personal SQL engine. In parallel I am manually sifting through old content and turning my web pages into articles. To do: The tag cloud and handling tags in general, and the generation of the RSS XML file from the database.

The new site is not publicly available yet. At the time of writing of this post, all my sites still use the old schema.

Disclaimers:

  • I don’t claim this is the best way to build a web site / blog. It’s also a project done for the fun of developing it, exploring the limits of flat-file databases, and forcing myself to deal with potential performance issues.
  • It is a deliberate choice: My hosting space allows for picking from different well-known relational databases and I have done a lot of SQL Server programming in the past months in other projects.
  • I have a licence for Visual Studio. Using only a text editor instead is a deliberate choice, too.