Gödel, Escher, Bach, and Strange Loops: Nostalgia and Random Thoughts

I am curious – who read the book, too? Did you like it?

I read it nearly 30 years ago, and I would also count it among the most influential books I read as a teenager.

[This might grow into a meandering and lengthy post with different (meta-)levels – given the subject of the post I think this is OK.]

In 1995, Douglas Hofstadter said the following in an interview with Wired – statements he echoed in his book I am a Strange Loop, published in 2007. He expresses frustration with the effect of GEB on readers and on his reputation – although he won a Pulitzer Prize for his unusual debut book (published in 1979).

From the Wired interview:

What Gödel, Escher, Bach was really about – and I thought I said it over and over again – was the word I. Consciousness. It was about how thinking emerges from well-hidden mechanisms, way down, that we hardly understand. How not just thinking, but our sense of self and our awareness of consciousness, sets us apart from other complicated things. How understanding self-reference could help explain consciousness so that someday we might recognize it inside very complicated structures such as computing machinery. I was trying to understand what makes for a self, and what makes for a soul. What makes consciousness come out of mere electrons coursing through wires.

There is nothing metaphysical in the way the term soul is used here. Having re-read GEB now, I marvel at how Hofstadter was able to provide an interpretation devoid of metaphysics – yet elegant and even poetic. Hofstadter quotes Zen koans, but he does not force “spirituality” upon the subject – he calls Zen intellectual quicksand.

GEB is about the machinery of mind without catering to the AI enthusiasm shared by transhumanists. It has been called a bible of AI, but maybe today it would not be considered optimistic enough in the nerdy sense. It is not about how new technology might exploit our (alleged) understanding of the mind – it is only about said understanding.

When I read the book nearly 30 years ago I enjoyed it for two main reasons: the allusions and references to language, metaphors and translation – especially as implemented in the whimsical Lewis-Carroll-style dialogues of Achilles, Mr. Tortoise and friends…

And yet many people treated the book as just some sort of big interdisciplinary romp whose point was simply to have fun. In fact, the fun was merely icing on the cake.

… and, above all, that popular yet mathy introduction to Gödel’s Incompleteness Theorem(s). Gödel’s theorem is presented as the analogue of paradoxical statements such as I am a liar or This statement is false – translated into math. More precisely: in sufficiently powerful formal systems there are true statements about integers that nevertheless cannot be proven within those systems.
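Schematically, the construction can be written like this (my own condensed notation, not Hofstadter’s – he builds it up gradually via his toy system TNT):

```latex
% The Goedel sentence G for a sufficiently powerful formal system F
% asserts its own unprovability. Prov_F is F's provability predicate,
% and the corner quotes denote the Goedel number encoding G:
G \;\leftrightarrow\; \neg\,\mathrm{Prov}_F\!\left(\ulcorner G \urcorner\right)
% If F is consistent, F cannot prove G -- but then what G asserts
% is true: G is a true statement about integers that F cannot prove.
```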

Originally, the book was purely about the way the proof of Gödel’s theorem kept cropping up in the middle of a fortress – Principia Mathematica by Bertrand Russell and Alfred North Whitehead – that was designed to keep it out. I thought, Here’s a structure that attempts to keep out self-knowledge, but when things get sufficiently complex and sufficiently tangled, all of a sudden – whammo! – it’s got self-representation in it. That to me was the trick that underlies consciousness.

I had considered Gödel the main part of the trio, and I think I was sort of “right” because of this:

So, at first, there were no dialogs, no jokes, no wordplay, and no references to Escher or Bach. But as I typed the manuscript up in ’74, I decided it was written in an immature style. I decided to insert the dialogs and the Escher so that the playfulness became a kind of a secondary – but extremely important – part of the book. Many people focused on those things and treated the book as a big game-playing thing.

I am afraid I did. I read the chapters dealing with the gradual introduction of the theorem more often than the parts about consciousness. Blending something abstract – that only hardcore nerds might appreciate – with wordplay, Escher drawings and musings on musical theory (pun not intended, but obviously this is contagious) was a masterpiece of science writing. It seems this widened the audience, but not in the intended way.

But isn’t that the fate of nearly every really well-written science book that transcends the boundaries of disciplines? Is there any philosopher-physicist writing about quantum mechanics who has not been quoted out of context by those who prefer to cook up metaphysical / emotionally appealing statements using scientific-sounding phrases as ingredients?

Anyway, focusing on the theorem: The gist of Hofstadter’s argument is that self-reference crept right into the very epitome of pristine rationality that Russell and Whitehead attempted to create. So we should not be surprised to find self-reference and emergent symbols in other systems built from boring little machine-like components. In a dialogue central to the idea of GEB his main protagonists discuss holism and reductionism with a conscious ant hill – made up of dumb ants.

The meticulously expounded version of Gödel’s theorem is, from my point of view, the heart and the pinnacle of the storyline of GEB, and it is interesting to compare Hofstadter’s approach to the crisp explanation Scott Aaronson gives in Quantum Computing since Democritus. Aaronson calls Gödel’s way of having formal statements talk about themselves an elaborate hack to program without programming. He makes the very convincing case that you could avoid all that talk about grand difficult math and numbering statements by starting from the notion of a computer, a universal Turing machine.

Gödel’s proof then turns into a near-triviality: a formal system of the kind Russell envisaged would be equivalent to a solution of the halting problem. The philosophical implications are preserved, but it sounds more down-to-earth and takes about two orders of magnitude fewer pages.
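The connection can be made concrete with a toy sketch of Turing’s diagonal argument (my own illustration, not taken from either book; `halts_guess` is a hypothetical stand-in for a halting oracle): any total candidate oracle is refuted by a program that is fed its own code and does the opposite of the prediction.

```python
def halts_guess(prog, arg):
    """A hypothetical candidate 'halting oracle'. This toy version claims
    nothing ever halts; any other total candidate fails the same way."""
    return False

def diagonal(prog):
    """Do the opposite of whatever the oracle predicts for prog(prog)."""
    if halts_guess(prog, prog):
        while True:   # oracle said "halts" -- so loop forever
            pass
    # oracle said "loops" -- so halt immediately

# Feed diagonal its own code: the oracle predicts it loops forever...
prediction = halts_guess(diagonal, diagonal)
print("oracle says diagonal(diagonal) halts:", prediction)
diagonal(diagonal)   # ...but it returns, refuting the prediction
print("diagonal(diagonal) halted after all")
```

A formal system that could prove or refute every statement of the form “machine M halts on input x” would let us build a correct oracle (enumerate all proofs until one settles the question) – so, since no such oracle can exist, no consistent system of that kind can be complete.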

As Hofstadter says implicitly and explicitly: Metaphors and context are essential. Starting from a proof involving a program that is fed its own code probably avoids unwanted metaphysical-mystical connotations – compared to cooking up a scheme for turning statements of formal logic into numbers, framed with Zen Buddhism, molecular biology, and art. But no matter which way I might prefer to think about Gödel’s proof, I guess I missed the mark:

(From the Wired interview – continued)

I had been aiming to have the book reach philosophers, people who thought about the mind and consciousness, and a small number actually saw what I was getting at, but most people just saw the glitter. At the time, I felt I’d lost a great deal by writing a book like that so early in my career, because I was no longer taken seriously by anybody.

If you did not get the message either, you are in good company. David Deutsch says in his review of I am a Strange Loop:

Hofstadter … expresses disappointment that his 1979 masterpiece Gödel, Escher, Bach (one of my favourite books) was not recognized as explaining the true nature of consciousness, or “I”-ness. I have to confess that it never occurred to me that it was intended to do so. I thought it merely explained the problem, highlighting stark flaws in common-sense ideas about minds. It also surveyed the infinite depth and meaning that can exist in “mere” computer programs. One could only emerge from the book (or so I thought) concluding that brains must in essence be computers, and consciousness an attribute of certain programs – and that discovering exactly what attribute is an urgent problem for philosophy and computer science. Hofstadter agrees with the first two conclusions but not the third; he considers that problem solved.

I can’t comment on the problem of consciousness being a yet-to-be-clarified attribute / by-product of computing, but I find the loopy part about brains that must in essence be computers convincing.

Incidentally, I have now read three different refutations of the so-called Chinese Room argument against strong AI – by Hofstadter, Aaronson and Ray Kurzweil. A human being in a hypothetical room pretends to exchange messages (on paper) in Chinese with interrogators. They might believe the guy speaks Chinese, though he only looks up rules in a book and mindlessly shuffles papers.

But how could you not associate the whole room, the rule book, and the (high-speed!) paper-shuffling process with what goes on in the system of the brain’s neurons? The person does not speak Chinese, but “speaking Chinese” is an emergent phenomenon of the whole setup. Mental images invoked by “rule book” and “paper” are what Hofstadter calls intuition pumps (a term coined by his friend Daniel Dennett) – examples picked deliberately to invoke that sudden “self-evident” insight along the lines of: Of course the human mind does not follow a mere rulebook!

[Pushing to the level of self-referential navel-gazing now]

Re-reading the blurb of my old version of the book I am able to connect some dots: I had forgotten that Hofstadter actually has a PhD in physics – theoretical condensed matter physics – and not in computer science or cognitive science. So the fact that a PhD in physics could prepare you for a career / life of making connections between all kinds of hard sciences, arts and literature was certainly something that might have shaped my worldview. All the author heroes who had influenced me the most as a teenager were scientist-philosophers, such as Albert Einstein and Viktor Frankl.

If I go on like this, talking about the science books and classics I read as a child, I might get the same feedback as Hofstadter (see amazon.com reviews, for example): this is elitist and only about showing off one’s education, etc.

I am not sure what Hofstadter should have done to avoid this. Not writing the books at all? Focusing on a narrower niche in order to comply with the common belief that talents in seemingly diverse fields have to be mutually exclusive?

Usually a healthy dose of self-irony mitigates the smarty effect. Throw in jokes about how your stereotypical absent-mindedness prevents you from changing that clichéd light bulb. But Hofstadter’s audience is rather diverse – so zooming in on the right kind of humor could be tricky.


And now I do what is explained so virtuosically in GEB – having pushed and popped through various meta-levels, I will now resolve the tension and return to the tonic of the story … a music pun, pathetically used out of context.


You might wonder why I did not include any Escher drawings. They are all still copyrighted, since fewer than 70 years have passed since Escher’s death. But there are some interesting DIY projects on YouTube bringing Escher’s structures to life – such as this one.


Further reading: The Man Who Would Teach Machines to Think (The Atlantic)

Douglas Hofstadter, the Pulitzer Prize–winning author of Gödel, Escher, Bach, thinks we’ve lost sight of what artificial intelligence really means. His stubborn quest to replicate the human mind.

Scott Aaronson’s website, blog and papers – a treasure trove! His book is not an easy read and probably unlike any so-called science book you have ever read. It has been created from lecture notes. His tone is conversational and the book is incredibly witty – but nonetheless it is quite compressed, containing more than one course’s worth of math, quantum physics and computer science. And yet – this is exactly the kind of science book I want to read when trying to make myself familiar with a new field. One “warning”: it is about theory, not about how to build a quantum computer. Thanks to wavewatching.net for the pointer.

34 Comments Add yours

  1. bert0001 says:

    … when I bought GEB … 25 years ago, I was not equipped with the “tools” to understand it and I don’t think I am now. Taking it off the shelf this time for an unfragmented read? … probably not. I reached my own conclusions about the mind machine and feel it beyond my time scape to delve into this book and its successor “Fluid Concepts and Creative Analogies”.
    But it must have influenced me in more ways than one can imagine. Yes.
    The essence about consciousness never reached me. I had no idea this was to be found inside.
    But a very interesting article, you wrote!!!

    1. elkement says:

      Thanks a lot for reading that lengthy post – I know you have been very busy!
      I missed that great message about consciousness, too. I was fascinated by Gödel’s proof and I felt the book had reached a climax when that proof was done. The rest – about AI, Turing etc. – was interesting, but I considered it more of a historical overview. Or was it just because “AI” back then was not really very impressive?
      Anyway – I agree with you: It is a book that sinks in and might impact you in unexpected ways.

  2. Quax says:

    Happy to see that I got you interested in reading some of Scott Aaronson’s material. I’ll bookmark this post, so next time he gets angry with me I can point out that I help him sell his book :-)

    I often disagree with him, but his writing is always thought provoking, well executed and a pleasure to read.

    Have you checked out his paper on Knightian Freedom?

    And if you haven’t seen this video yet, then you are in for a treat.

    1. elkement says:

      Hi Henning, as always thanks for the pointers – it seems I have missed this post of yours and thus the link to this paper!
      I seriously enjoy Scott Aaronson’s writings for exactly the reasons you mention.

  3. Hi Elke, I also read GEB about 30 years ago, first soft cover edition. When I pulled it off the shelf today, I was disappointed to see the paper had turned yellow and brittle. If I knew where I bought it, I’d consider asking for a refund or replacement.

    I couldn’t put it down at the time. It was one of the most exciting books I’d ever read and some of its metaphors have remained with me. In my post, Jung Nafs, I alluded to GEB when I described myself as a termite colony (I’d forgotten that it was actually an ant hill, but in retrospect, I think a termite colony was a better metaphor for my purposes). And invariably, whenever I rinse off the garden hose with the self same garden hose, I declare it as temporarily conscious. I tell myself jokes.

    The big news though, is that when I finished the book, I wrote Hofstadter a letter and he replied. I have it somewhere, fortunately not in my copy of the book, otherwise it would be yellow and brittle as well.

    I wrote him to carp about how I didn’t see Escher in the same league as G and B. If you want self reference or a loop in visual art, you’d be further ahead with Marcel Duchamp or Ad Reinhardt. His reply was very gracious and alas, we have agreed to disagree.

    1. elkement says:

      Thanks, Steve! As usual, I learn something about art from your comment! I think I can relate to your assessment of E versus G and B. Not sure if this is a valid argument, but Escher seems much easier to “get” than Bach and Gödel.
      Optical illusions are intriguing, but of course you “just see what happens”. I guess you cannot really appreciate Bach’s fugues without knowing something about music, and this is of course true for Gödel, too.

  4. Wow … the vast majority of all this is so far above me Elke that I hesitate to comment. I have read none of the texts you allude to. If, however, one small part of this expansive dissertation questions the value of computing in explorations of the nature of consciousness, then I think perhaps I can add at least my opinion. I think I have (as have others) reflected, elsewhere, on consciousness as being an emergent property of neural systems. If you look all along the evolutionary family tree you find that neural complexity, only as expressed among primates, is capable of what we refer to as consciousness. Sure, there are certain molluscs that are pretty darn good at playing games, but I don’t think there is any argument that they come anywhere near the human awareness of self. In any case, perhaps what we are seeing is the emergent property of a highly complex neural system. Once we get machines capable of receiving some nth level of (sensor) input, perhaps they too will be able (given enough processing power) to reflect upon themselves. Actually, to me, this doesn’t seem far off. Can’t machines now carry out internal diagnostics … which are self-healing such that some sort of state of disequilibrium is set right? If so, how much further can it be before the computer realizes that it exists? For what it’s worth. D

    1. elkement says:

      Thanks, Dave! I think you are quite qualified to comment on all that! What I did not mention in the post was that Hofstadter also touches on molecular biology in GEB – so all the disciplines are connected somehow. (His point was the “strange self-referential loop” formed from entities working on copying DNA while themselves being built from the information encoded in DNA…)

      I can fully relate to your approach – I would also just wait and see what happens when machines get more and more sophisticated. What AI enthusiasts seem to bemoan is that machines have already become quite intelligent (including self-learning and self-healing)… but as soon as machines reach the next level, be it Google search or whatever, critics say something like “Well, that’s just search” (or just winning at chess…). So when machines can do something, it is demoted to “just something mundane”.

  5. Joseph Nebus says:

    I did read GEB, and in circumstances that rather inconvenienced some friends, it happens.

    Back in college I was working on the weekly alternate paper, and one of the humor editors had brought in the book because he wanted to scan a picture from it for some reason or other. (Probably it was an Escher painting they wanted to use, because that’s the sort of thing you’d get out of GEB and couldn’t have on hand in those pre-Internet days.) I didn’t realize that, and just saw the fat, interesting-looking book sitting on one of the desks, and since I was near the end of whatever I was reading anyway, I took it and read it over the next couple days. They had no idea where it went, of course, and scrambled to replace whatever the point of the GEB photo would have been before the issue had to be sent to the printers. I think it was the next day they realized I’d had it in my book bag all the while, but to be fair to myself, nobody had mentioned in my presence that they were looking for it.

    I had less trouble swiping the hanging-around-the-office-for-some-reason copy of Gordon Dickson’s utterly insignificant novel The Forever Man.

    1. elkement says:

      Haha – thanks, Joseph, great story about serendipity! So you have always been a blogger in some sense :-) (‘weekly alternate paper’)

      1. Joseph Nebus says:

        Oh, maybe I have in some very loose sense. I did a lot of writing for that paper, but that’s because I had whatever defect of judgement made me think it was endlessly fascinating to write about the student government. I think at one point I worked out I’d written a quarter of a million words for the paper, which, over four years is less than it seems.

  6. In the late eighties I eagerly read Hans Moravec’s “Mind Children,” thinking it would clarify much of the confusion I had by then built up. Alas, it was not to be. If anything, the book left me even more confused, as it opened still more areas I had, as of then, not considered. But what an awesome ride through the thinking of the time!
    And thus it has remained. With each new decently-written book I have read on the whole topic of mind, learning or AI, the ideas have only served to better flesh out the problem rather than to offer a coherent theory of explanation. As I see it, that’s the best I can look forward to in my lifetime. Perhaps at some time in the future; but for now, stuck mostly in deterministic, binary-based systems, I see little hope for the fresh, simple and – above all – antifragile solution that will be the structure on which a good theory will finally get built.

    1. elkement says:

      The question is whether we need a very deep theoretical explanation upfront at all, or if something like consciousness will “suddenly” arise once the underlying technology is complex enough … unless we propose some soul-like substance that makes a bunch of neurons – rather simple units – conscious.

      Probably the fresh approach – based on tinkering – is to build a supercomputer from artificial neurons and wait to see what happens? And then work on the theory afterwards? There are lots of engineering challenges to be solved – above all, to run that system efficiently, as the power consumption of the human brain is so amazingly low.

      1. That does sound reasonable. You are right, too, about the power consumption. An 8-core processor can burn off 200 W and not come close to pulling off what just a few cc’s of brain matter can while hardly consuming a couple of W. But then again I imagine the neurons do their thing rather differently.

  7. danielmullin81 says:

    Excellent piece. I haven’t read GEB, but I’ve read bits and pieces of I Am A Strange Loop because it’s aimed at the so-called ‘hard problem’ of consciousness in philosophy of mind. I’m of two minds (no pun intended) on Hofstadter’s position.

    On the one hand, I like the nerdy AI implications of seeing consciousness as an emergent property, a feedback loop of a sufficiently complex physical system. On the other hand, I think Hofstadter and his allies (for example Dennett) move too quickly in declaring the ‘hard problem’ solved or not a problem at all.

    Although I’m attracted to the simplicity of Hofstadter’s view — if true, it would neatly tie up consciousness within a naturalistic paradigm — I have some sympathy for the other side of the debate (i.e. David Chalmers and Thomas Nagel) who argue that not only is consciousness a hard problem, but it’s one that the sciences are ill-equipped to solve in principle. It would take a very long comment to explain why I think there’s something to that claim, so I’ll leave it at that for now.

    You can always press me for answers in follow-up comments. ;)

    1. elkement says:

      Thanks a lot, Dan! I appreciate the philosopher’s feedback in particular!

      It is an interesting discussion, but I admit (as in my reply below to ‘Elagnel Exterminador’) that I probably don’t fully understand the zombie discussion yet. I thought I understood it (from I am a Strange Loop), but based on a remark in Aaronson’s book – he agrees with Chalmers – I am not so sure anymore.

      But given your most recent Easter post on some sort of zombies – this is probably a topic to be covered in a future post of yours ;-)

      In addition, I am also of two minds in general: I am interested in all this from the perspective of the scientist – just curious how to explain a phenomenon and if / how physics or math are useful. On the other hand, I ask myself if it would finally really matter to me – ‘really’ in the sense of: thinking about my own life as a human being that cannot step out of the system and whose mind / soul / whatever will not work differently even if scientists came up with a compelling explanation that appeals to the scientist in me.

    2. Elke, sometimes your posts intrigue me but leave me in a position where I do not quite know what to say by way of comment. Like Dan, I, too, am only slightly familiar with this work. I do know that any book I have read in an effort to further my understanding of either AI or theory of mind has left me a bit cold. More importantly, they have also left me (1) with the notion that the author is mostly, arrogantly, putting forth a somewhat one-sided viewpoint that simply does not consider any point but their own. Some of the names I have seen in this piece and in the comments are certainly guilty of that. The gurus, it seems, are bullies who are best at picking holes in others’ arguments and not at carefully examining their own rather flawed views. And (2) more and more convinced that much of our AI efforts seem to reduce to deterministic systems, when it’s still not clear whether a mind is deterministic or not. Witness the fervor with which the adherents to either side of that debate go after one another to see how much the arguments rest on emotion and not research!
      Simply put I am still firmly of the belief that we do not have a sufficiently robust theory of mind to provide good assurance that any efforts at AI do anything more than cleverly lie to us! That is AI is best at convincing us that it’s “like” intelligence without actually being intelligent.
      One more thing: once again you have presented me with a written work that I have to add to my list :-)

      1. M. Hatzel says:

        What came to mind early in your post was a comparison with early critiques of the Anglican church by writers like Laurence Sterne, Jonathan Swift, John Donne, and John Dryden. Sometimes the depth of their critique is difficult to measure, as open discussion of religion was not politically savvy at the time (i.e. might result in execution). Yet they posed questions about consciousness and religion, and challenged the “wit and reason” debate which tried to cast playful, poetic and metaphoric language as dangerous and less desirable than “reasonable” thinking. Hofstadter seems to have offered an updated version of the discussion. Now the strong divisions between disciplines seem to be the dangerous line to cross, suggesting the thinking bodies have yet to fully understand and collapse the binaries that persist in the wit/reason opposition.

        When I posted the link on philosophers in the news, Dave had proposed a discussion on the value of breaking down these divisions… and I have been wondering if this may be the place academics need to go (or return to) in future.

        Certainly, areas of the humanities have been invaded by business schools in the last decade or two, so as to offer marketing/communication degrees which are more highly valued in the work force than English degrees, for example. HR departments are now generally drawn from the same school of thought, which at one time were more open to humanities students. Of course, this is largely about manipulating funding dollars within the academic institutions, but what valuable insights are lost when such fields close down?

        To continue the conversation over from Dave’s blog about mirroring, humanity knew for a long time that imagination–the capacity to make connections between one thing and another (i.e. a metaphor)–was key to cultivating empathy and compassion, or “Christian charity.” It’s only a recent thing to know that indeed there are neurons in our bodies that function to form a connection between what we glean with our sensory observations and our own consciousness. Language is probably one of the best examples of the result of such a neural system. I am very curious about this book, especially in light of this statement: “…understanding self-reference could help explain consciousness so that someday we might recognize it inside very complicated structures such as computing machinery.”

        1. elkement says:

          Thanks, Michelle! I agree that the boundaries between different disciplines are a main issue here. Philosophy of mind / AI is probably the worst battlefield in that respect.
          What intrigued me about Hofstadter was that he is not a nerdy technology advocate (at all – in some interview he said something like: I am not really interested in computers), but his worldview is physicalist nonetheless. Analogy, metaphor, and language seem most important to him in his research today – but this is probably an unusual “AI” perspective. Though he used the term “AI” in GEB, he dropped it later.

          GEB is definitely playful and witty – as you said. Strange Loop is not so much, and there are lengthy sections that deal with refutations of critique. But as I said to Maurice in my “devil’s advocate” comment – I understand that people start attacking their opponents by dissecting counter-arguments after they have been attacked in this way again and again.

          1. M. Hatzel says:

            I suspect there is a relevant connection between metaphor and intelligence. Linguists have pinpointed the explosion of ideas and the beginning of skills specialization and trade economies to the development of mirroring (a biological development in our neural networks). Mirroring is a process of imitative learning… and metaphors are imitative, when we are literally saying “this is LIKE that” to share our understanding.

      2. danielmullin81 says:

        I agree with you, Maurice. I don’t recall Hofstadter’s tone being hostile, but other eliminative materialists, like Daniel Dennett, Paul and Pat Churchland, and Alex Rosenberg can be downright arrogant and dismissive of other views. They tend to regard any non-eliminativist theory of consciousness as tantamount to magic.

        However, like Chalmers, I’m skeptical that neuroscience will be able to locate the qualia. This is not a failure of the scientific method, but a misapplication. The application of the scientific method — which is really good at explaining mechanistic phenomena — to mental phenomena led to the mind/body problem in philosophy in the 17th c. It’s a modern problem that philosophers didn’t worry about before Descartes. So applying the assumptions that created the problem is unlikely to solve the problem. You can’t get rid of the dust from under the rug by sweeping more dust under the rug. Most eliminativists, like the Churchlands and Dennett, know that science isn’t going to find ‘the mind’ or locate ‘qualia’, so they are forced to simply deny their existence. The mind, and mental phenomena, is illusion because the eliminativist’s ontology simply doesn’t have room for it. But this isn’t ‘consciousness explained’, as the title of Dennett’s book suggests, but consciousness explained away. The hard problem is still there.

        I also agree that computer science, which is often invoked as an analogy, actually demonstrates just how hard the problem is. Again, I have quite a bit of sympathy for Thomas Nagel’s view, and I think he has been unfairly maligned recently by the usual suspects who don’t like the fact that he’s pointing out some inconvenient truths.

        1. elkement says:

          Dan, I am curious whether philosophers would accept or advocate an “experimental” approach. What if we halted theorizing and fierce debates until we had built a machine from units that resemble neurons in a reasonable way? Then the thing becomes conscious – or not. (Probably it does, but we still can’t explain it.)

          1. danielmullin81 says:

            Elke, I think in general, yes, most philosophers would advocate an empirical approach. Bear in mind that hardly any philosopher — apart from those motivated by religious views — thinks that consciousness does not require some physical substrate, whether neurons or silicon. To say that each conscious state has a corresponding brain state doesn’t get you eliminative materialism. Emergent property dualists, for example, think that the mind is dependent on the brain, but they are not eliminativists. Again, an example would be someone like Chalmers. So if a computer became conscious, I’m not sure what that would prove. It wouldn’t necessarily entail eliminativism; it’s also compatible with property dualism. So we could still debate and construct theories about whether and why the computer is/appears conscious.

          2. These guys claim it won’t be that easy.


        2. Dan, I’m right there in line with you on this. It’s my belief that some of the names you mentioned have been causing many brilliant minds to waste time in search of the wrong problem, in much the same way as B. F. Skinner did in the last century. It will only be after their shrill voices are a little more silent that a new crop of researchers will dare to try a different path.
          Speaking of Chalmers, I drew on his work decades ago while writing my Masters thesis on the Nature of Science, an area that has not received sufficient attention of late.

      3. elkement says:

        Thanks, Maurice! I don’t have a very strong opinion on AI or its evangelists – but I can’t resist playing devil’s advocate here.

        As I understand articles on the contemporary culture of discussion in philosophy (Dan, please correct me), you get drawn into rhetorical battlefields and “arguments for the sake of arguments” quite easily. So the question of whether somebody comes across as “arrogant” probably boils down to a game-theoretical issue: if you present your theory in a modest way and are then attacked by sophistry, you will probably retort with “arrogant rhetoric” yourself.
        I also enjoy a good argument for the sake of it, and I like to “fight” over technical issues with other nerds … which is most likely perceived by a less nerdy audience in the way you describe.

        I think I am rather tolerant with respect to “arrogant” experts as long as they argue on the basis of reason. I can’t say, for example, that I “like” Kurzweil, but, to be fair, he does deal with the arguments of his critics extensively. Even if I don’t agree with any of his arguments, I am happy if there are at least arguments – in contrast to that abundance of appeals to emotions and images, which I consider a much more worrying trend in this day and age (like that ethical blackmailing done by those who, say, share shocking images on social media).
        In the AI debate I feel (sic!) that many counter-arguments are based on intuitive assumptions about the value and superiority of (human) life, and the rejection boils down to something like “We can’t be mere simple machines!”

        1. I certainly agree with your assertion that rational argument is significantly better than the alternatives you mentioned. :-)

  8. cavegirlmba says:

    Very good article and an excellent reminder about a book on my “to be read again” list. Rereading some books I first tackled at a young age, I sometimes wonder how I could have had the notion of understanding everything (which I clearly remember having), when now I realize that I am missing large parts, still only beginning to comprehend.
    What I remember most clearly about GEB are not single concepts, but getting more absorbed and lost in it than in any other non-fiction book up to that point. Looking up from time to time and being surprised that the world around me was still the same, even though I had just seen it change miraculously.
    Thanks for covering this book – it will accompany me onto a beach bed this summer (also great training for the arm muscles).

    1. elkement says:

      Thanks, CaveGirl! I wonder if the sets of “Taleb followers” and “Hofstadter followers” have a large intersection :-) My non-scientific guess is confirmed by your and Dan Mullin’s replies: they do.

      It is rather hard to remember what I thought back then. I definitely neglected the AI part and I didn’t like the physicalist interpretation. I guess my younger self would give me a hard time in philosophical discussions.

      My copy of GEB has literally fallen apart by now. I would buy a Kindle version if there were one.

      1. cavegirlmba says:

        For sure I would not want to enter any discussion with my younger self. Too tough.
        And I love it when books more or less fall apart from being read so often.

  9. Interesting piece. Meanwhile, Max Tegmark caught us off guard by letting his multiverse ontology slip through arXiv’s corridors, exposing us to the most materialistic statement about minds ever. Below lies his assertion of mind as the 8th state of matter. Behold “Perceptronium”!


    (As a matter of fact, it would be interesting to check whether such a bold rephrasing suffices to resolve the other famous “Golem”/“Zombie” problem: http://en.wikipedia.org/wiki/Philosophical_zombie)

    1. elkement says:

      Thanks a lot for this pointer – Max Tegmark has definitely been on my reading list, so I will probably start with this paper of his.

      As for the Zombie: Hofstadter has stated in I Am a Strange Loop that he does not agree with the view of his friend and former graduate student David Chalmers.

      Honestly, I am not sure yet if I have fully understood the argument, as Scott Aaronson also quotes (paraphrases) Chalmers, saying “if computers someday become able to emulate humans in every observable respect, then we’ll be compelled to regard them as conscious, for exactly the same reasons we regard other people as conscious.” I think this should be in line with Hofstadter’s argument, too, but probably I missed something.

  10. I’ve never even heard of this book, but now you’ve got me curious. Hope you have a good weekend, Elke.

    1. elkement says:

      Thanks, Andra! I think the book speaks to artists and writers (and you are geeky enough – you will get it anyway :-)).
      Though Hofstadter would probably not be happy with this advice – you can skip some of the more math-heavy parts and still learn a lot, or simply enjoy yourself by finding those hidden clues and metaphors in the dialogues.
      I wonder what kind of story you would write about thinking machines and conscious ant hills? I guess you would create a terrific historical novel about human-like robots – steampunk-style.
