Showing posts with label Philosophy. Show all posts

Monday, May 18, 2015

Book Review: “String Theory and the Scientific Method” by Richard Dawid

String Theory and the Scientific Method
By Richard Dawid
Cambridge University Press (2013)

“String Theory and the Scientific Method” is a very interesting and timely book by a philosopher trying to make sense out of trends in contemporary theoretical physics. Dawid has collected arguments that physicists have raised to demonstrate the promise of their theories, arguments that however are not supported by the scientific method as it is currently understood. He focuses on string theory, but some of his observations are more general than this.


Consider, for example, that physicists rely on mathematical consistency as a guide, even though this is clearly not an experimental assessment. A theory that is mathematically inconsistent in some regime, even a regime where we do not yet have observations, is not considered fundamentally valid. I have to admit it wouldn’t even have occurred to me to call this a “non-empirical assessment,” because our use of mathematics is clearly based on the observation that it works very well to describe nature.

The three arguments that Dawid has collected which are commonly raised by string theorists to support their belief that string theory is a promising theory of everything are:
  1. Meta-inductive inference: The trust in a theory is higher if its development is based on extending existing successful research programs.
  2. No-alternatives argument: The more time passes in which we fail to find a theory as successful as string theory in combining quantum field theory with general relativity the more likely it is that the one theory we have found is unique and correct.
  3. Argument of unexpected explanatory coherence: A finding is perceived as more important if it wasn’t expected.
Dawid then argues, basically, that since many physicists de facto no longer rely on the scientific method, philosophers should face reality and come up with a better explanation, one that would alter the scientific method so that, according to the new method, the above arguments count as scientific.

In the introduction Dawid writes explicitly that he only studies the philosophical aspects of the development and not the sociological ones. My main problem with the book is that I don’t think one can separate these two aspects clearly. Look at the arguments that he raises: The No Alternatives Argument and the Unexpected Explanatory Coherence are explicitly sociological. They are 1.) based on the observation that there exists a large research area which attracts much funding and many young people and 2.) based on the observation that physicists trust their colleagues’ conclusions more if it wasn’t the conclusion they were looking for. How can you analyze the relevance of these arguments without taking into account sociological (and economic) considerations?

The other problem with Dawid’s argument is that he confuses the Scientific Method with the rest of the scientific process that happens in the communities. Science basically operates as a self-organized adaptive system, in the same class of systems as natural selection. For such a system to be able to self-optimize something – in the case of science, the usefulness of theories for describing nature – it must have a mechanism of variation and a mechanism for assessment of the variation, followed by feedback. In the case of natural selection the variation is genetic mixing and mutation, the assessment is whether the result survives, and the feedback is another reproduction. In science the variation is a new theory and the assessment is whether it agrees with experimental test. The feedback is the revision or trashcanning of the theory. This assessment of whether a theory describes observation is the defining part of science – you can’t change this assessment without changing what science does, because it determines what we optimize for.
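To make the variation–assessment–feedback loop concrete, here is a toy selection process. Everything in it (the “observation,” the fitness function, the mutation scheme) is a made-up illustration of how such an adaptive system optimizes, not a model of actual scientific practice:

```python
import random

random.seed(0)

OBSERVATION = 3.14159  # the "data" that candidate theories are assessed against

def assess(theory):
    """Assessment step: how well does the theory describe the observation?"""
    return -abs(theory - OBSERVATION)  # higher is better

def evolve(population, generations=200, mutation=0.1):
    """Variation -> assessment -> feedback, iterated."""
    for _ in range(generations):
        # variation: each candidate theory spawns a mutated variant
        variants = [t + random.uniform(-mutation, mutation) for t in population]
        # assessment + feedback: keep only the best-performing candidates
        pool = sorted(population + variants, key=assess, reverse=True)
        population = pool[:len(population)]
    return population

best = evolve([random.uniform(0, 10) for _ in range(20)])[0]
print(round(best, 2))  # the surviving "theories" converge on the observed value
```

The point of the sketch is only that the assessment function defines what the system optimizes for; swap it out, and the loop converges on something else entirely.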

The assessments that Dawid, correctly, observes are a pre-selection that is meant to assure we spend time only on those theories (gene combinations) that are promising. To make a crude analogy, we clearly do some pre-selection in our choice of partners that determines which genetic combinations are ever put to test. These might be good choices or they might be bad choices and as long as their success hasn’t also been put to test, we have to be very careful whether we rely on them. It’s the same with the assessments that Dawid observes. Absent experimental test, we don’t know if using these arguments does us any good. In fact I would argue that if one takes into account sociological dynamics one presently has a lot of reasons to not trust researchers to be objective and unbiased which sheds much doubt on the use of these arguments.

Be that as it may, Dawid’s book has been very useful for me to clarify my thoughts about exactly what is going on in the community. I think his observations are largely correct, just that he draws the wrong conclusion. We clearly don’t need to update the scientific method, we need to apply it better, and we need to apply it in particular to better understand the process of knowledge discovery.

I might never again agree with David Gross on anything, but I do agree with his “pre-publication praise” on the cover. The book is highly recommendable reading for both physicists and philosophers.

I wasn’t able to summarize the arguments in the book without drawing a lot of sketches, so I made a 15-minute slideshow with my summary and comments on the book. If you have the patience, enjoy :)

Thursday, January 08, 2015

Do we live in a computer simulation?

Some days I can almost get myself to believe that we live in a computer simulation, that all we see around us is a façade designed to mislead us. There would finally be a reason for all this, for the meaningless struggles, the injustice, for life, and death, and for Justin Bieber. There would even be a reason for dark matter and dark energy, though that reason might just be some alien’s bizarre sense of humor.

It seems perfectly possible to me to trick a conscious mind, at the level of that of humans, into believing a made-up reality. Ask the guy sitting on the sidewalk talking to the trash bin. Sure, we are presently far from creating artificial intelligence, but I do not see anything fundamental that stands in the way of such a creation. Let it be a thousand years or ten thousand years, eventually we’ll get there. And once you believe that it will one day be possible for us to build a supercomputer that hosts intelligent minds in a world whose laws of nature are our invention, you also have to ask yourself whether the laws of nature that we ourselves have found are somebody else’s invention.

If you just assume the simulation that we might live in has us perfectly fooled and we can never find out if there is any deeper level of reality, it becomes rather pointless to even think about it. In this case the belief in “somebody else” who has created our world and has the power to manipulate it at his or her will differs from belief in an omniscient god only by terminology. The relevant question though is whether it is possible to fool us entirely.

Nick Bostrom has a simulation argument that is neatly minimalistic, though he is guilty of using words that end in -ism. He is basically saying that if there are many civilizations running simulations with many artificial intelligences, then you are more likely to be simulated than not. So either you live in a simulation, or our universe (multiverse, if you must) never goes on to produce many civilizations capable of running these simulations, for one reason or another. Pick your poison. I think I prefer the simulation.
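The core of the argument fits in one line: if civilizations that reach the simulating stage run many simulated histories for each real one, then the fraction of all observer-histories that are simulated is close to one. A toy version, with invented numbers:

```python
def simulated_fraction(real_civilizations, sims_per_civilization):
    """Fraction of all observer-histories that are simulated ones."""
    simulated = real_civilizations * sims_per_civilization
    return simulated / (simulated + real_civilizations)

# invented numbers: even a modest number of simulations per real
# civilization makes a randomly chosen observer almost surely simulated
print(simulated_fraction(real_civilizations=1000, sims_per_civilization=999))  # prints 0.999
```

Note that the fraction depends only on the number of simulations per civilization, not on how many real civilizations there are, which is what gives the argument its minimalistic bite.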

Math-me has a general issue with these kinds of probability arguments (same as with the Doomsday argument) because they implicitly assume that the probability distribution of lives lived over time is uncorrelated, which is clearly not the case since our time-evolution is causal. But this is not what I want to get into today, because there is something else about Bostrom’s argument that has been bugging Physics-me.

For his argument, Bostrom needs a way to estimate how much computing power is necessary to simulate something like the human mind perceiving something like the human environment. And in his estimate he assumes, crucially, that it is possible to significantly compress the information of our environment. Physics-me has been chewing on this point for a while. The relevant paragraphs are:

“If the environment is included in the simulation, this will require additional computing power – how much depends on the scope and granularity of the simulation. Simulating the entire universe down to the quantum level is obviously infeasible, unless radically new physics is discovered. But in order to get a realistic simulation of human experience, much less is needed – only whatever is required to ensure that the simulated humans, interacting in normal human ways with their simulated environment, don’t notice any irregularities.

The microscopic structure of the inside of the Earth can be safely omitted. Distant astronomical objects can have highly compressed representations: verisimilitude need extend to the narrow band of properties that we can observe from our planet or solar system spacecraft. On the surface of Earth, macroscopic objects in inhabited areas may need to be continuously simulated, but microscopic phenomena could likely be filled in ad hoc. What you see through an electron microscope needs to look unsuspicious, but you usually have no way of confirming its coherence with unobserved parts of the microscopic world.

Exceptions arise when we deliberately design systems to harness unobserved microscopic phenomena that operate in accordance with known principles to get results that we are able to independently verify. The paradigmatic case of this is a computer. The simulation may therefore need to include a continuous representation of computers down to the level of individual logic elements. This presents no problem, since our current computing power is negligible by posthuman standards.”
This assumption is immediately problematic because it isn’t as easy as saying that whenever a human wants to drill a hole into the Earth you quickly go and compute what he has to find there. You would have to track what all these simulated humans are doing to know when that becomes necessary. And then you’d have to make sure that this never leads to any inconsistencies. Or else, if it does, you’d have to remove the inconsistency, which will require even more computing power. To avoid the inconsistencies, you’d have to carry along the results for all future measurements that humans could possibly make, the problem being that you don’t know which measurements they will make because you haven’t yet done the simulation. Dizzy? Don’t leave, I’m not going to dwell on this.

The key observation that I want to pick on here is that there will be instances in which The Programmer really has to crank up the resolution to prevent us from finding out we’re in a simulation. Let me refer to what we perceive as reality as level zero, and a possible reality of somebody running our simulation as level 1. There could be infinitely many levels in each direction, depending on how many simulators simulate simulations.

This idea that structures depend on the scale at which they are tested and that at low energies you’re not testing all that much detail is basically what effective field theories are all about. Indeed, as Bostrom asserts, for much of our daily life the single motion of each and every quark is unnecessary information, atoms or molecules are enough. This is all fine by Physics-me.

Then these humans go and build the LHC, and whenever the beams collide the simulation suddenly needs a considerably finer mesh, or else the humans will notice there is something funny about their laws of nature.

Now you might think of blasting the simulation by just demanding so much fine-structure information all at once that the computer running our simulation cannot deliver. In this case the LHC would serve to test the simulation hypothesis. But there is really no good reason why the LHC should just be the thing to reach whatever computation limit exists at level 1.

But there is a better way to test whether we live in a simulation: Build simulations ourselves, the more the better. The reason is that you can’t compress what is already maximally compressed. So if the level 1 computation wants to keep us from finding out that we live in a simulation by way of our creating simulations ourselves, it will have to crank up the computational effort for that part of our level 0 simulation that hosts our own simulation at level -1.

Now we try to create simulations that will create a simulation that will create a simulation, and so on. Eventually, the level 1 simulation will not be able to deliver any more, regardless of how good their computer is, and the then-lowest level will find some strange artifacts: something that is clearly not compatible with the laws of nature its inhabitants have found so far and believed to be correct. This breakdown gets read out by the computer one level above, and so on, until it reaches us and then whatever is the uppermost level (if there is one).
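The resource argument can be made concrete with a toy model: suppose each level of simulation can hand down only some fraction of its own computing power, because simulating the hardware itself costs overhead. Then the tower of nested simulations has a finite depth no matter how powerful the level 1 computer is. A sketch, with invented numbers:

```python
def max_depth(top_level_power, overhead_fraction=0.5, min_power=1.0):
    """How many nested simulations fit before the resources run out?

    Each level can hand down only a fraction of its own computing power;
    once a level falls below the minimum needed to run a convincing
    world, its inhabitants start seeing artifacts.
    """
    depth, power = 0, top_level_power
    while power >= min_power:
        depth += 1
        power *= overhead_fraction
    return depth

# even an astronomically powerful level 1 computer supports only
# logarithmically many nested levels
print(max_depth(1e30))  # prints 100
```

The depth grows only with the logarithm of the top-level power, which is why building simulations that build simulations is, in this toy picture, such an efficient way to stress the level above us.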

Unless you want to believe that I’m an exceptional anomaly in the multiverse, every reasonably intelligent species should have somebody who will come up with this sooner or later. Then they’ll set out to create simulations that will create simulations. If one of their simulations doesn’t develop in the direction of creating more simulations, they’ll scrap it and try a different one, because otherwise it’s not helpful to their end.

This leads to a situation much like Lee Smolin’s Cosmological Natural Selection, in which black holes create new universes that create black holes that create new universes, and so on. The whole population of universes is then dominated by those universes that lead to the largest number of black holes - that have the most “offspring.” In Cosmological Natural Selection we are most likely to find ourselves in a universe that optimizes the number of black holes.

In the scenario I discussed above the reproduction doesn’t happen by black holes but by building computer simulations. In this case, then, anybody living in a simulation is most likely to be living in a simulation that will go on to create another simulation. Or, to look at this from a slightly different perspective, if you want our species to continue thriving and keep The Programmer from pulling the plug, you’d better work on creating artificial intelligence, because this is why we’re here. You asked what’s the purpose of life? There it is. You’re welcome.

This also means you could try to test the probability of the simulation hypothesis being correct by seeing whether our universe does indeed have the optimal conditions for the creation of computer simulations.

Brain hurting? Don’t worry, it’s probably not real.

Saturday, September 13, 2014

Is there a smallest length?

Good ideas start with a question. Great ideas start with a question that comes back to you. One such question that has haunted scientists and philosophers for thousands of years is whether there is a smallest unit of length, a shortest distance below which we cannot resolve structures. Can we look closer and always closer into space, time, and matter? Or is there a limit, and if so, what is the limit?

I picture our distant ancestors sitting in their cave watching the world in amazement, wondering what the stones, the trees, and they themselves are made of – and starving to death. Luckily, those smart enough to hunt down the occasional bear eventually gave rise to a human civilization sheltered enough from the harshness of life to let the survivors get back to watching and wondering what we are made of. Science and philosophy in earnest are only a few thousand years old, but the question of whether there is a smallest unit has always been a driving force in our studies of the natural world.

The ancient Greeks invented atomism, the idea that there is an ultimate and smallest element of matter that everything is made of. Zeno’s famous paradoxes sought to shed light on the possibility of infinite divisibility. The question came back with the advent of quantum mechanics, with Heisenberg’s uncertainty principle that fundamentally limits the precision with which we can measure. It became only more pressing with the divergences in quantum field theory that are due to the inclusion of infinitely short distances.

It was in fact Heisenberg who first suggested that the divergences in quantum field theory might be cured by the existence of a fundamentally minimal length, and he introduced it by making position operators non-commuting among themselves. Just as the non-commutativity of momentum and position operators leads to an uncertainty principle, so the non-commutativity of position operators among themselves limits how well distances can be measured.

Heisenberg’s main worry, which the minimal length was supposed to deal with, was the non-renormalizability of Fermi’s theory of beta-decay. This theory however turned out to be only an approximation to the renormalizable electro-weak interaction, so he had to worry no more. Heisenberg’s idea was forgotten for some decades, then picked up again and eventually grew into the area of non-commutative geometries. Meanwhile, the problem of quantizing gravity appeared on stage and with it, again, non-renormalizability.

In the mid 1960s, Mead reinvestigated Heisenberg’s microscope, the argument that led to the uncertainty principle, with (unquantized) gravity taken into account. He showed that gravity amplifies the uncertainty so that it becomes impossible to measure distances below the Planck length, about 10⁻³³ cm. Mead’s argument was forgotten, then rediscovered in the 1990s by string theorists, who had noticed that using strings to prevent divergences by avoiding point-interactions also implies a finite resolution, if in a technically somewhat different way than Mead’s.

Since then the idea that the Planck length may be a fundamental length beyond which there is nothing new to find, ever, appeared in other approaches towards quantum gravity, such as Loop Quantum Gravity or Asymptotically Safe Gravity. It has also been studied as an effective theory by modifying quantum field theory to include a minimal length from scratch, and often runs under the name “generalized uncertainty”.
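The generalized uncertainty relation mentioned above is commonly written as a momentum-dependent correction to Heisenberg’s relation; in one standard form (conventions and the dimensionless parameter β differ between papers):

```latex
\Delta x \,\Delta p \;\geq\; \frac{\hbar}{2}\left(1 + \beta\,\ell_{\mathrm{Pl}}^{2}\,\frac{\Delta p^{2}}{\hbar^{2}}\right)
\quad\Longrightarrow\quad
\Delta x \;\geq\; \frac{\hbar}{2\,\Delta p} + \frac{\beta\,\ell_{\mathrm{Pl}}^{2}}{2\hbar}\,\Delta p
```

Unlike in the ordinary uncertainty relation, increasing Δp no longer shrinks Δx indefinitely: minimizing the right-hand side over Δp gives Δx ≥ √β ℓ_Pl, so the achievable resolution is bounded below near the Planck length no matter how large the momentum transfer.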

One of the main difficulties with these theories is that a minimal length, if interpreted as the length of a ruler, is not invariant under Lorentz-transformations due to length contraction. This problem is easy to overcome in momentum space, where it is a maximal energy that has to be made Lorentz-invariant, because momentum space is not translationally invariant. In position space one either has to break Lorentz-invariance or deform it and give up locality, which has observable consequences, and not always desired ones. Personally, I think it is a mistake to interpret the minimal length as the length of a ruler (a component of a Lorentz-vector), and it should instead be interpreted as a Lorentz-invariant scalar to begin with, but opinions on that matter differ.

The science and history of the minimal length has now been covered in a recent book by Amit Hagar: “Discrete or Continuous? The Quest for Fundamental Length in Modern Physics,” Cambridge University Press (2014).
Amit is a philosopher but he certainly knows his math and physics. Indeed, I suspect the book would be quite hard to understand for a reader without at least some background knowledge in math and physics. Amit has made a considerable effort to address the topic of a fundamental length from as many perspectives as possible, and he covers a lot of scientific history and philosophical considerations that I had not previously been aware of. The book is also noteworthy for including a chapter on quantum gravity phenomenology.

My only complaint about the book is its title because the question of discrete vs continuous is not the same as the question of finite vs infinite resolution. One can have a continuous structure and yet be unable to resolve it beyond some limit (this is the case when the limit makes itself noticeable as a blur rather than a discretization). On the other hand, one can have a discrete structure that does not prevent arbitrarily sharp resolution (which can happen when localization on a single base-point of the discrete structure is possible).

(Amit’s book is admittedly quite pricey, so let me add that he said, should sales reach 500 copies, Cambridge University Press will put a considerably less expensive paperback version on offer. So tell your library to get a copy and let’s hope we make it to 500, so the book becomes affordable for more of the interested readers.)

Every once in a while I think that there maybe is no fundamentally smallest unit of length, that all these arguments for its existence are wrong. I like to think that we can look infinitely close into structures and will never find a final theory, turtles upon turtles, or that structures are ultimately self-similar and repeat. Alas, it is hard to make sense of the romantic idea of universes in universes in universes mathematically, not that I didn’t try, and so the minimal length keeps coming back to me.

Many if not most endeavors to find observational evidence for quantum gravity today look for manifestations of a minimal length in one way or the other, such as modifications of the dispersion relation, modifications of the commutation-relations, or Bekenstein’s tabletop search for quantum gravity. The properties of these theories are today a very active research area. We’ve come a long way, but we’re still out to answer the same questions that people asked themselves thousands of years ago.


This post first appeared on Starts With a Bang with the title "The Smallest Possible Scale in the Universe" on August 12, 2014.

Saturday, July 12, 2014

Post-empirical science is an oxymoron.

Image illustrating a phenomenologist after reading a philosopher go on about empiricism.

3:AM has an interview with philosopher Richard Dawid who argues that physics, or at least parts of it, are about to enter an era of post-empirical science. By this he means that “theory confirmation” in physics will increasingly be sought by means other than observational evidence because it has become very hard to experimentally test new theories. He argues that the scientific method must be updated to adapt to this development.

The interview is a mixture of statements that everybody must agree on, followed by subtle linguistic shifts that turn these statements into much stronger claims. The most obvious of these shifts is that Dawid flips repeatedly between “theory confirmation” and “theory assessment”.

Theoretical physicists do of course assess their theories by means other than fitting data. Mathematical consistency clearly leads the list, followed by semi-objective criteria like simplicity or naturalness, and other mostly subjective criteria like elegance, beauty, and the popularity of people working on the topic. Some of these criteria are used for assessment because they have proven useful to arrive at theories that are empirically successful. Other criteria are used because they have proven useful to arrive at a tenured position.

Theory confirmation on the other hand doesn’t exist. The expression is sometimes used in a sloppy way to mean that a theory has been useful to explain many observations. But you never confirm a theory. You just have theories that are more, and others that are less useful. The whole purpose of the natural sciences is to find those theories that are maximally useful to describe the world around us.

This brings me to the other shift that Dawid makes in his string (ha-ha-ha) of words, which is that he alters the meaning of “science” as he goes. To see what I mean we have to make a short linguistic excursion.

The German word for science (“Wissenschaft”) is much closer to the original Latin meaning, “scientia” as “knowledge”. Science, in German, includes the social and the natural sciences, computer science, mathematics, and even the arts and humanities. There is for example the science of religion (Religionswissenschaft), the science of art (Kunstwissenschaft), the science of literature, and so on. Science in German is basically everything you can study at a university, and as far as I am concerned mathematics is of course a science. However, in stark contrast to this, the common English use of the word “science” refers exclusively to the natural sciences and typically does not even include mathematics. To avoid conflating these two different meanings, I will explicitly refer to the natural sciences as such.

Dawid sets out talking about the natural sciences, but then strings (ha-ha-ha) his argument along on the “insights” that string theory has led to and the internal consistency that gives string theorists confidence that their theory is a correct description of nature. This “non-empirical theory assessment,” while important, can however only be a means to the end of an eventual empirical assessment. Without making contact with observation, a theory isn’t useful for describing the natural world, not part of the natural sciences, and not physics. These “insights” that Dawid speaks of are thus not assessments that can ever validate an idea as being good for describing nature, and a theory based only on non-empirical assessment does not belong in the natural sciences.

Did that hurt? I hope it did. Because I am pretty sick and tired of people selling semi-mathematical speculations as theoretical physics and blocking jobs with their so-called theories of nothing specifically that lead nowhere in particular. And that while looking down on those who work on phenomenological models because those phenomenologists, they’re not speaking Real Truth, they’re not among the believers, and their models are, as one string theorist once so charmingly explained to me “way out there”.

Yeah, phenomenology is out there where science is done. Too many of those who call themselves theoretical physicists today seem to have forgotten that physics is all about building models. It’s not about proving convergence criteria in some Hilbert space or classifying the topology of solutions of some equation in an arbitrary number of dimensions. Physics is not about finding Real Truth. Physics is about describing the world. That’s why I became a physicist – because I want to understand the world that we live in. And Dawid is certainly not helping to prevent more theoretical physicists from getting lost in math and philosophy when he attempts to validate their behavior by claiming the scientific method has to be updated.

The scientific method is a misnomer. There really isn’t such a thing as a scientific method. Science operates as an adaptive system, much like natural selection. Ideas are produced, their usefulness is assessed, and the result of this assessment is fed back into the system, leading to selection and gradual improvement of these ideas.

What is normally referred to as the “scientific method” is a set of institutionalized procedures that scientists use because they have been shown to be efficient at finding the most promising ideas quickly. That includes peer review, double-blind studies, criteria for statistical significance, mathematical rigor, etc. The procedures, and how stringent (ha-ha-ha) they are, are somewhat field-dependent. Non-empirical theory assessment has been used in theoretical physics for a long time. But these procedures are not set in stone; they’re there as long as they seem to work, and the scientific method certainly does not have to be changed. (I would even argue it can’t be changed.)

The question that we should ask instead, the question I think Dawid should have asked, is whether more non-empirical assessment is useful at the present moment. This is a relevant question because it requires one to ask “useful for what”? As I clarified above, I myself mean “useful to describe the real world”. I don’t know what “use” Dawid is after. Maybe he just wants to sell his book, that’s some use indeed.

It is not a simple question to answer how much theory assessment is good and how much is too much, or for how long one should pursue a theory trying to make contact to observation before giving up. I don’t have answers to this, and I don’t see that Dawid has.

Some argue that string theory has been assessed too much already, and that more than enough money has been invested into it. Maybe that is so, but I think the problem is not that too much effort has been put into non-empirical assessment, but that too little effort has been put into pursuing the possibility of empirical test. It’s not a question of absolute weight on any side, it’s a question of balance.

And yes, of course this is related to it becoming increasingly difficult to experimentally test new theories. That, together with the self-supporting community dynamics that Lee so nicely called out as group-think. Not that loop quantum gravity is any better than string theory.

In summary, there’s no such thing as post-empirical physics. If it doesn’t describe nature, if it has nothing to say about any observation, if it doesn’t even aspire to this, it’s not physics. This leaves us with a nomenclature problem. What do you call a theory that has only non-empirical facts speaking for it, and one that the mathematical physicists apparently don’t want either? How about mathematical philosophy, or philosophical mathematics? Or maybe we should call it Post-empirical Dawidism.

[Peter Woit also had a comment on the 3:AM interview with Richard Dawid.]

Thursday, April 17, 2014

The Problem of Now


Einstein’s greatest blunder wasn’t the cosmological constant, and neither was it his conviction that god doesn’t throw dice. No, his greatest blunder was to speak to a philosopher named Carnap about the Now, with a capital.

“The problem of Now”, Carnap wrote in 1963, “worried Einstein seriously. He explained that the experience of the Now means something special for men, something different from the past and the future, but that this important difference does not and cannot occur within physics.”

I call it Einstein’s greatest blunder because, unlike the cosmological constant and indeterminism, philosophers, and some physicists too, are still confused about this alleged “Problem of Now”.

The problem is often presented like this. Most of us experience a present moment, which is a special moment in time, unlike the past and unlike the future. If you write down the equations governing the motion of some particle through space, then this particle is described, mathematically, by a function. In the simplest case this is a curve in space-time, meaning the function is a map from the real numbers to a four-dimensional manifold. The particle changes its location with time. But regardless of whether you use an external definition of time (some coordinate system) or an internal definition (such as the length of the curve), every single instant on that curve is just some point in space-time. Which one, then, is “now”?

You could argue rightfully that as long as there’s just one particle moving on a straight line, nothing is happening, and so it’s not very surprising that no notion of change appears in the mathematical description. If the particle were to scatter off some other particle, or take a sudden turn, then these instances could be identified as events in space-time. Alas, that still doesn’t tell you whether they happen to the particle “now” or at some other time.

Now what?

The cause of this problem is often assigned to the timelessness of mathematics itself. Mathematics deals at its core with truth values, and the very point of using math to describe nature is that these truths do not change. Lee Smolin has written a whole book about the problem with timeless math; you can read my review here.

It may or may not be that mathematics is able to describe all of our reality, but to solve the problem of now, excuse the heresy, you do not need to abandon a mathematical description of physical law. All you have to do is realize that the human experience of now is subjective. It can perfectly well be described by math, it’s just that humans are not elementary particles.

The decisive ability that allows us to experience the present moment as being unlike other moments is that we have a memory. We have a memory of events in the past, an imperfect one, and we do not have memory of events in the future. Memory is not in and by itself tied to consciousness; it is tied to the increase of entropy, or the arrow of time if you wish. Many materials show memory; every system with a path dependence, such as hysteresis, does. If you get a perm, it is the molecule chains in your hair that remember the bonds, not your brain.

Memory has nothing to do with consciousness in particular which is good because it makes it much easier to find the flaw in the argument leading to the problem of now.

If we want to describe systems with memory we need at the very least two time parameters: t to parameterize the location of the particle, and τ to parameterize the strength of memory of other times depending on its present location. This means there is a function f(t,τ) that encodes how strong the memory of time τ is at moment t. You need, in other words, at the very least a two-point function; a plain particle trajectory will not do.

That we experience a “now” means that the strength of memory peaks when both time parameters are identical, i.e. when t−τ = 0. That we do not have any memory of the future means that the function vanishes when τ > t. For the past it must decay somehow, but the details don’t matter. This construction is already sufficient to explain why we have the subjective experience of the present moment being special. And it wasn’t that difficult, was it?
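A minimal sketch of such a memory kernel — my own toy choice of function, with an exponential decay picked purely for illustration, not derived from any physics:

```python
import math

def memory_strength(t, tau, decay=1.0):
    """Toy memory kernel f(t, tau): how strongly the moment t
    'remembers' the moment tau. Illustrative assumption only."""
    if tau > t:
        return 0.0  # no memory of the future
    # Peaks at tau == t and decays into the past:
    return math.exp(-decay * (t - tau))

# At any moment t, the memory of that very moment is maximal,
# and memory of earlier moments fades monotonically:
assert memory_strength(5.0, 5.0) == 1.0
assert memory_strength(5.0, 3.0) < memory_strength(5.0, 4.0)
assert memory_strength(5.0, 6.0) == 0.0
```

Every moment t is treated identically by this function, yet at every moment the memory of that very moment dominates — which is all the "specialness" of the now amounts to.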

The origin of the problem is not in the mathematics, but in the failure to distinguish subjective experience of physical existence from objective truth. Einstein spoke about “the experience of the Now [that] means something special for men”. Yes, it means something special for men. This does not mean however, and does not necessitate, that there is a present moment which is objectively special in the mathematical description. In the above construction all moments are special in the same way, but in every moment that very moment is perceived as special. This is perfectly compatible with both our experience and the block universe of general relativity. So Einstein should not have worried.

I have a more detailed explanation of this argument – including a cartoon! – in a post from 2008. I was reminded of this now because Mermin had a comment in the recent issue of Nature magazine about the problem of now.

In his piece, Mermin elaborates on QBism, a subjective interpretation of quantum mechanics. I was predisposed to dislike this, just because it’s a waste of time and paper to write about non-existent problems. Amazingly however, Mermin uses the subjectiveness of QBism to arrive at the right conclusion, namely that the problem of the now does not exist because our experiences are by their very nature subjective. However, he fails to point out that you don’t need to buy into fancy interpretations of quantum mechanics for this. All you have to do is watch your hair recall sulphur bonds.

The summary, please forgive me, is that Einstein was wrong and Mermin is right, but for the wrong reasons. It is possible to describe the human experience of the present moment with the “timeless” mathematics that we presently use for physical laws, it isn’t even difficult, and you don’t have to give up the standard interpretation of quantum mechanics for this. There is no problem of Now, and there is no problem with Tegmark’s mathematical universe either.

And Lee Smolin, well, he is neither wrong nor right, he just has a shaky motivation for his cosmological philosophy. It is correct, as he argues, that mathematics doesn’t objectively describe a present moment. However, it’s a non sequitur that the current approach to physics has reached its limits, because this timeless math doesn’t constitute a conflict with our experience.

Most people get a general feeling of uneasiness when they first realize that the block universe implies all the past and all the future is equally real as the present moment, that even though we experience the present moment as special, it is only subjectively so. But if you can combat your uneasiness for long enough, you might come to see the beauty in eternal mathematical truths that transcend the passage of time. We always have been, and always will be, children of the universe.

Thursday, January 02, 2014

10 Misconceptions about Free Will

If somebody talks about a “question that science cannot answer,” what they really mean is a question they don’t want an answer to. Science can indeed be very disrespectful to people’s beliefs. I accept the wish to believe rather than know, but I get pissed off if somebody dresses up their wishful thinking as an actual argument.

“Do humans have free will?” is a question I care deeply about. It lies at the heart of how we understand ourselves and arrange our living together. It also plays a central role for the foundations of quantum mechanics. In my darker moods I am convinced we’re not making any progress in quantum gravity because physicists aren’t able to abandon their belief in free will. And from the foundations of quantum mechanics the roadblock goes all the way up to neuroscience and politics.

Yes, I just blamed the missing rational discussion about free will for most of mankind’s problems, including quantum gravity.

Suggesting the absence of free will apparently still upsets people in the 21st century. You’re not supposed to say it, because allegedly just saying it makes other people immoral. Do you feel it already? How the immorality creeps from my blogpost into your veins? Aren’t you afraid to read on?

There’s no need to worry. This angst stems from a misunderstanding of what it means not to have free will. In this blogpost I address the most common misunderstandings, but before that let me explain why, to our best present knowledge of the laws of nature, you do not have free will. So, first the facts.

    Fact 1: Everything in the universe, including you and your brain, is composed of elementary particles. What these particles do is described by the fundamental laws of physics. Everything else follows from that, in principle.

    It follows in principle, but it is arguably not very practical to describe, say, human anatomy in terms of quarks and electrons. Instead, scientists of other disciplines use larger constituents and try to describe their behavior. This practical usefulness of increasingly larger scales, variables, and constituents, and the approximate accuracy of that procedure, is called “emergence”. All of these properties however derive from the fundamental description – in principle. That’s what is called reductionism.

    The idea that the emergent properties of large systems do not derive from the fundamental description is called “strong emergence”. Some people like to claim that just because a system (e.g. your brain) consists of many constituents it is somehow exempt from reductionism and something (free will) “strongly emerges”. But the fact is that there is no known example where this happens, and no known theory – not even an untested one – for how strong emergence could work. It is entirely irrelevant whether your large system carries adjectives like open, chaotic, complex or self-aware. It’s still just a really large number of particles that obey the fundamental laws of nature. Presently, believing in strong emergence is on the same intellectual level as believing in an immortal soul or in ESP.

    Fact 2: All known fundamental laws of nature are either deterministic or random. To our best present knowledge, the universe evolves in a mixture of both, but exactly what that mixture looks like will not be relevant in the following.

Having said that, I need to explain just exactly what I mean by the absence of free will:
    a) If your future decisions are determined by the past, you do not have free will.

    b) If your future decisions are random, meaning nothing can influence them, you do not have free will.

    c) If your decisions are any mixture of a) and b) you do not have free will either.
In the above, you can read “you” as “any subsystem of the universe”, the details don’t matter. It follows straight from Fact 1 and Fact 2 that according to the definition of the absence of free will in a), b), c) free will is incompatible with what we presently know about nature.

I acknowledge that there are other ways to define free will. Some people for example want to call a choice “free” if nobody else could have predicted it, but as far as I am concerned this is just pseudo free will.

Right! I didn’t say anything about neurobiology, consciousness or the subconscious, or about people pushing buttons. I don’t have to. For free will to exist it is necessary that free will be allowed by the fundamental laws of physics. It is necessary, but not sufficient: If you could make free will compatible with the laws of physics, it might still be that neurobiology finds your brain can’t make use of that option. Physics cannot tell you that free will exists, but it can tell you that it doesn’t exist. And that’s what I just told you.

Note that I neither claim strong emergence does not exist, nor do I say that a fundamental law has to be a mixture of determinism and randomness. What I am saying is this: If you want to argue that free will exists because strong emergence works, or there is an escape from determinism or randomness, then I want to see an example for how this is supposed to work.

Then let me address the main misconceptions:

  1. If you do not have free will you cannot or do not have to make decisions.

    Regardless of whether you have free will or not, your brain performs evaluations and produces results and that’s what it means to make a decision. You cannot not make decisions. Just because your thought process is deterministic doesn’t mean the process doesn’t have to be executed in real time. The same is true if it has a random component.

    This misconception stems from a split-personality perspective: People picture themselves as trying to make a decision but being hindered by some evil free-will-defying law of nature. That is nonsense of course. You are whatever brain process works with whatever input you receive. If you don’t have free will, you’ve never had free will and so far you’ve lived just fine. You can continue to think the same way you’ve always thought. You’ll do that anyway.

  2. If you do not have free will you have no responsibility for your actions.

    This misconception also comes from the split-personality perspective. You are what makes the decisions (takes in information and processes it) and performs the actions (acts on the results). If your actions are problematic for other people, you are the source of the problem and they’ll take measures to solve that problem. It’s not like they have any choice… If the result of your brain processes makes other people’s lives difficult, it’s you who will be blamed, locked away, sent to psychotherapy or get kicked where it really hurts. It is entirely irrelevant that your faulty information processing was inscribed in the initial conditions of the universe, the relevant question is what your future will bring if others try to get rid of you. The word ‘responsibility’ is just a red herring because it’s both ill-defined and unnecessary.

  3. People should not be told they don’t have free will because that would undermine the rules of morally just societies.

    This misconception goes back to the first two and is based on the idea that if people don’t have free will they have no reason to reflect on their actions and to consider other people’s wellbeing. This is wrong of course. Evolution has endowed us with the ability to estimate the future impact of our actions, and natural selection preferred those who acted so that others were supportive of their needs, or at least not outright aggressive towards them. If people don’t have free will they still have to make decisions, and they will still be blamed for making other people’s lives miserable.

    Occasionally somebody refers me to this study which allegedly shows that “Encouraging a Belief in Determinism Increases Cheating.” This study also encourages misconception 2, so the finding is hardly surprising. I would like to see this repeated with the added explanation that the test subjects are of course still making decisions, regardless of whether the outcome was predetermined or not, and of course it matters what that outcome is.

  4. If you do not have free will your actions can be predicted.

    Even if your brain processes were predictable in principle, it is highly questionable that anybody could do this in practice. Besides, as I explained above, these processes might have a random component that is not predictable even in principle. It is presently not very well understood just how relevant such a random component might be.

  5. If you do not have free will the future is determined by the past.

    Same misconception that underlies 4. Randomness is for all we presently know a component of the fundamental laws. In this case the future is not determined by the past, but neither do you have free will because nothing can influence this randomness.

  6. If we do not have free will we can derive human morals.

    I don’t know why people get so hung up on this. Morals and values are just thought patterns that humans use to make decisions. Their relevance stems from these thought patterns being shared by many in similar versions. If the fundamental laws of the universe are deterministic and if you were really good at computation, then you could in principle compute them. In practice nobody can do it.

    It is also not actually what people mean when they talk about ‘deriving morals’. What they actually mean is whether one can derive what humans “should do”. That however one can only do once a goal is defined – “should do” to achieve what? – and that just moves the question elsewhere. Science can’t answer this question because it’s ill-defined. Science can’t tell what anybody should be doing because that’s a meaningless phrase. Science can, in the best case, just tell what they will be doing.

    More to the point is (as I explained at length in this earlier blogpost) that at any time there are questions that science cannot answer because the knowledge we have is insufficient. These are the questions we leave to political decision. All the “should do” questions are of this type.

  7. Free will is impossible.

    Not necessarily. As I explained here (paper here) it is possible to conceive of laws of nature that are neither deterministic nor random and that can plausibly be said to allow for free will. Alas, we do not presently have any evidence whatsoever that this is realized in nature and neither is it known whether this is even compatible with the laws of nature that we know. Send me a big enough paycheck and give me some years and I’ll find out.

  8. You need to be a neuroscientist to talk about free will.

    We associate free will with autonomous systems that make choices, with activation patterns in human brains, which is the realm of neurobiology. However, your brain as much as every other part of the universe obeys the fundamental laws of nature. That these fundamental laws allow for free will is a necessary condition for free will to exist, and these laws fall into the realm of physics.

  9. You need to be a philosopher to be allowed to talk about free will.

    If you want to know how everybody and their dog throughout the history of mankind defined free will, you had better read several thousand years’ worth of discussion on the issue. But I don’t like to waste time on definitions and I don’t see the merit in listing all variants of free will that somebody sometime has come up with. I told you above very clearly what I mean with ‘absence of free will’ and that is the core of the problem in two paragraphs. If you want to name this other than “free will”, I don’t care, it’s still the core of the problem.

  10. If we do not have free will we cannot do science.

    I added this misconception because this comes up every time I talk about superdeterminism in quantum mechanics. The basic reason we can do science is that our universe evolves so that we are able to extract regularities in that evolution. You need to be able to measure what happens to similar systems under similar conditions and find patterns in that. But just how these similar systems came about is entirely irrelevant. It does not matter, for example, whether the laboratory and all the detector settings were predetermined already at the beginning of the universe. All that matters is that there are similar systems, that detections can be done, and the results are processed by you (or some computer) to extract regularities.
Let me be very clear that I didn’t say free will doesn’t exist. I said it doesn’t exist according to our best present knowledge of how nature works. If you want to hang on to free will you had better come up with a really good idea for how to make it compatible with existing scientific knowledge. I want to see progress, I don’t just want to see smoke screens of “strong emergence” or “qualia” and other fantasies.

Friday, December 27, 2013

The finetuned cube

The standard model of parenthood.
The kids are almost three years old now and I spend a lot of time picking up wooden building blocks. That’s good for your health in many ways, for example by way of the following brain gymnastics.

When I scan the floor under the couch for that missing cube, I don’t expect to find it balancing on a corner - would you? And in the strange event that you found it delicately balanced on a corner, would you not expect to also find something, or somebody, that explains this?

When physicists scanned the LHC data for that particle, that particle you’re not supposed to call the god-particle, they knew it would be balancing on a corner. The Higgs is too light, much too light, that much we knew already. And so, before the LHC most physicists expected that once they’d be able to see the Higgs, they’d also catch a glimpse of whatever it was that explained this delicate balance. But they didn’t.

It goes under the name ‘naturalness,’ the belief that a finely tuned balance requires additional explanation. “Naturally” is the physicist’s way of saying “of course”. Supersymmetry, neatified to Susy, was supposed to be the explanation for finetuning, but Susy has not shown up, and neither has anything else. The cube stands balanced on the corner, seemingly all by itself.

Of course those who built their career on Susy cross-sections are not happy. They are now about to discard naturalness, for this would mean Susy could hide everywhere or nowhere, as long as it’s not within reach of the LHC. And beyond the LHC there’s 16 orders of magnitude space for more papers. Peter Woit tells this tale of changing minds on his blog. The denial of pre-LHC arguments is so bold it deserves a book (hint, hint), but that’s a people-story and not mine to tell. Let me thus leave aside the psychological morass and the mud-throwing, and just look at the issue at hand: Naturalness, or its absence respectively.

I don’t believe in naturalness, the idea that finetuned parameter values require additional explanation. I recognize that it can be a useful guiding principle, and that apparent finetuning deserves a search for its cause, but it’s a suggestion rather than a requirement.

I don’t believe in naturalness because the definition of finetuning itself is unnatural in its focus on numerical parameters. The reason physicists focus on numbers is that numbers are easy to quantify - they are already quantified. The cosmological constant is 120 orders of magnitude too large, which is bad with countably many zeros. But the theories that we use are finetuned to describe our universe in many other ways. It’s just that physicists tend to forget how weird mathematics can be.

We work with manifolds of integer dimension that allow for a metric and a causal structure, we work with smooth and differentiable functions, we work with bounded Hamiltonians and hermitian operators and our fibre bundles are principal bundles. There is absolutely no reason why this has to be, other than that evidence shows it describes nature. That’s the difference between math and physics: In physics you take that part of math that is useful to explain what you observe. Differentiable functions, to pick my favorite example because it can be quantified, have measure zero in the space of all functions. That’s infinite finetuning. It’s just that nobody ever talks about it. Be wary whenever you meet the phrase “of course” in a scientific publication – infinity might hide behind it.

This finetuning of mathematical requirements appears in form of axioms of the theory – it’s a finetuning in theory space, and a selection is made based on evidence: differentiable manifolds with Lorentzian metric and hermitian operators work. But selecting the value of numerical parameters based on observational evidence is no different from selecting any other axiom. The existence of ‘multiverses’ in various areas of physics is similarly a consequence of the need to select axioms. Mathematical consistency is simply insufficient as a requirement to describe nature. Whenever you push your theory too far and ties to observation loosen too much, you get a multiverse.

My disbelief in naturalness used to be a fringe opinion and it’s gotten me funny looks on more than one occasion. But the world refused to be as particle physicists expected, naturalness is rapidly losing popularity, and now it’s my turn to practice funny looks. The cube, it’s balancing on a tip and nobody knows why. In desperation they throw up their hands and say “anthropic principle”. Then they continue to produce scatter plots. But it’s a logical fallacy called ‘false dichotomy’, the claim that if it’s not natural it must be anthropic.

That I don’t believe in naturalness as a requirement doesn’t mean I think it a useless principle. If you have finetuned parameters, it will generally be fruitful to figure out the mechanism of finetuning. This mechanism will inevitably constitute another incidence of finetuning in one way or the other, either in parameter space or in theory space. But along the line you can learn something, while falling back on the anthropic principle doesn’t teach us anything. (In fact, we already know it doesn’t work.) So if you encounter finetuning, it’s a good idea to look for a mechanism. But don’t expect that mechanism to work without finetuning itself - because it won’t.

If that was too many words, watch this video:


It’s a cube that balances on a tip. If your resolution scale is the size of the cube, all you will find is that it’s mysteriously finetuned. The explanation for that finetuned balance you can only find if you look into the details, on scales much below the size of the cube. If you do, you’ll find an elaborate mechanism that keeps the cube balanced. So now you have an explanation for the balance. But that mechanism is finetuned itself, and you’ll wonder then just why that mechanism was there in the first place. That’s the finetuning in theory space.

Now in the example with the above video we know where the mechanism originated. Metaphors all have their shortcomings, so please don’t mistake me for advocating intelligent design. Let me just say that the origin of the mechanism was a complex multi-scale phenomenon that you’d not be able to extract in an effective field theory approach. In a similar way, it seems plausible to me that the unexplained values of parameters in the standard model can’t be derived from any UV completion by way of an effective field theory, at least not without finetuning. The often used example is that hundreds of years ago it was believed that the orbits of planets have to be explained by some fundamental principles (regular polygons stacked inside each other, etc). Today nobody would assign these numbers fundamental relevance.

Of course I didn’t find a cube balancing on a tip under the couch. I didn’t find the cube until I stepped on it the next morning. I did however quite literally find a missing puzzle piece – and that’s as much as a theoretical physicist can ask for.

Friday, July 19, 2013

You probably have no free will. But don’t worry about it.

Railroad tracks. Image source.
Anybody who believes in reductionism and that the standard model of particle physics is correct to excellent precision must come to the conclusion that free will is an illusion. Alas, denial of this conclusion is widely spread, documented in many attempts to redefine “free will” so that it can somehow be accommodated in our theories. I find it quite amusing to watch otherwise sensible physicists trying to wriggle out of the consequences of their own theories. We previously discussed Sean Carroll’s attempt, and now Carlo Rovelli has offered his thoughts on free will in the context of modern physics in a recent Edge essay.

Free will can only exist if there are different possible futures and you are able to influence which one becomes reality. This necessitates, to begin with, that there are different possible futures. In a deterministic theory, like all our classical theories, this just isn’t the case – there’s only one future, period. The realization that classically the future is fully determined by the present goes back at least to Laplace, and it’s still as true today as it was then.

Quantum mechanics in the standard interpretation has an indeterministic element that is a popular hiding place for free will. But quantum mechanical indeterminism is fundamentally random (as opposed to random by lack of knowledge). It doesn’t matter how you define “you” (in the simplest case, think of a subsystem of the universe), “you” won’t be able to influence the future because nothing can. Quantum indeterminism is not influenced by anything, and what kind of decision making is that?

Another popular hiding place for free will is chaos. Yes, many systems in nature are chaotic and possibly the human mind has chaotic elements to it. In chaotic systems, even the smallest mistakes in knowledge about the present lead to large errors in the future. These systems rapidly become unpredictable because in practice measurements always contain small mistakes and uncertainties. But in principle chaos is entirely deterministic. There’s still only one future. It’s just that chaotic behavior spoils predictability in practice.
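The point that chaos is fully deterministic yet unpredictable in practice fits in a few lines of code. As an illustration of my own (the logistic map is a standard textbook example of chaos, not anything specific to this argument):

```python
def logistic(x, r=4.0):
    """One step of the logistic map: a simple, fully deterministic rule."""
    return r * x * (1.0 - x)

def trajectory(x0, steps, r=4.0):
    """Iterate the map; the entire future follows from x0 alone."""
    xs = [x0]
    for _ in range(steps):
        xs.append(logistic(xs[-1], r))
    return xs

# Two initial conditions differing by one part in a billion:
a = trajectory(0.2, 50)
b = trajectory(0.2 + 1e-9, 50)

# Determinism: re-running from the same start reproduces the future exactly.
assert trajectory(0.2, 50) == a

# Unpredictability in practice: the tiny initial error grows until the
# two trajectories bear no resemblance to each other.
assert max(abs(x - y) for x, y in zip(a, b)) > 0.1
```

There is only ever one future per initial condition; what chaos destroys is our ability to know the initial condition precisely enough to compute it.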

That brings us to what seems to me like the most common free will mirage, the argument that it is difficult if not impossible to make predictions about human behavior. Free will, in this interpretation, is that nobody, possibly not even you yourself, can tell in advance what you will do. That sounds good but is intellectually deeply unsatisfactory.

To begin with, it isn’t at all clear that it’s impossible to predict human behavior, it’s just presently not possible. By that logic, since ten thousand years ago people couldn’t predict lunar eclipses, the moon must have had free will back then. And who or what makes the prediction anyway? If no human can predict your behavior, but a computer cluster of an advanced alien civilization could, would you have free will? Would it disappear if the computer is switched on?

And be that as it may, these distractions don’t change anything about the fact that “you” didn’t have any influence on what happens in the future, whether or not somebody else knew what you’ll do. Your brain is still but a machine acting on input to produce output. The evaluation of different courses of action can plausibly be interpreted as “making a choice,” but there’s no freedom to the result. This internal computation that evaluates the results of actions might indeed be impossible to predict, but the freedom this suggests is clearly an illusion.

To add another buzzword, it also doesn’t help to refer to free will as an “emergent property”. Free will almost certainly is an emergent property, unless you want to argue that elementary particles also have free will. But emergent properties of deterministic systems still behave deterministically. In principle, you could do without the “emergent” variables and use the fundamental ones, describing e.g. the human brain in terms of the standard model. It’s just not very practical. So appealing to emergence doesn’t help, it just adds a layer of confusion.

Rovelli in his essay now offers a new free will argument that is interesting.

First he argues that free will can be executed when external constraints on choice are absent. He doesn’t explain what he means by “external constraints” though, and I’m left somewhat confused about this. Is, for example, alcohol intoxication a constraint that’s “external” to your decision making unit? Is your DNA an external constraint? Is a past event that induced stress trauma an external constraint? Be that as it may, this part of Rovelli’s argument is a rephrasing of the idea that free will means it isn’t possible to predict what course of action a person will take from exclusively external observation. As we’ve seen above, this still doesn’t mean there are different future options for you to choose from, it just means prediction is difficult.

Then Rovelli alludes to the above mentioned idea that free will is “emergent”, but he does so with a new twist. He argues that “mental states” are macroscopic and can be realized by many different microscopic arrangements. If one just uses the information in the mental states – which is what you might experience as “yourself” – then the outcome of your decisions might not be fully determined. Indeed, that might be so. But it’s like saying that if you describe a forest as a lot of trees and disregard the details, then you’ll fail to predict an impending pest infestation. Which brings us to the question whether forests have free will. And once again, failure to predict by disregarding part of the degrees of freedom in the initial state doesn’t open any future options.

In summary, according to our best present theories of nature humans don’t have free will in the sense explained above, which in my opinion is the only sensible meaning of the phrase. Now you could dismiss this and claim there must be something about nature then that these theories don’t correctly describe and that’s where free will hides. But that’s like claiming god hides there. It might be possible to construct theories of nature that allow for free will, as I suggested here, but we presently have absolutely zero evidence that this is how nature works. For all we know, there just is no free will.

But don’t worry.

People are afraid of the absence of free will not because it’s an actual threat to well-being, but because it’s a thought alien to our self-perception. Most people experience the process of evaluating possible courses of action not as a computation, but as making a decision. This raises the fear that if they don’t have free will they can no longer make decisions. Of course that’s wrong.

To begin with, if there’s no free will, there has never been free will, and if you’ve had a pleasant and happy life so far there is really no reason why this should change. But besides this, you still make your decisions. In fact, you cannot not make decisions. Do you want to try?

And the often raised concern about moral hazard is plainly a red herring. There’s this idea that if people had no free will “they” could not be made responsible for their decisions. Scare quotes because this suggests there are two different parts of a person, one making a decision and the other one, blameless, not being able to affect that decision. People who commit crimes cause pain to other people, therefore we take actions to prevent and deter crime, for which we identify individuals who behave problematically and devise suitable reactions. But the problem is their behavior and that needs to be addressed regardless of whether “they” have a freedom in their decision.

I believe that instead of making life miserable, accepting the absence of free will improves our self-perception, and with it mutual understanding and personal well-being. This acceptance lays a basis for curiosity about how the brain operates and what part of decision making is conscious. It raises awareness of the information that we receive and its effect on our thoughts and resulting actions. Accepting the absence of free will doesn’t change how you think, it changes how you think about how you think.

I hope that this made you think and wish you a nice weekend :o)

Monday, April 29, 2013

Book review: “Time Reborn” by Lee Smolin

Time Reborn: From the Crisis in Physics to the Future of the Universe
By Lee Smolin
Houghton Mifflin Harcourt (April 23, 2013)

This is a difficult review for me to write because I disagree with pretty much everything in Lee’s new book “Time Reborn,” except possibly the page numbers. To begin with, there is no “Crisis in Physics” as the subtitle suggests. But then I’ve learned not to blame authors for titles and subtitles.

Oddly enough however, I enjoyed reading the book. Not despite, but because I had something to complain about on every page. It made me question my opinions, and though I came out holding on to them, I learned quite a bit along the way.

In “Time Reborn” Lee takes on the seemingly puzzling fact that mathematical truth is eternal and timeless, while the world that physicists are trying to describe with that mathematics isn’t. The role of time in contemporary physics is an interesting topic, and gives opportunity to explain our present understanding of space and time, from Newton through Special and General Relativity to modern Cosmology, Quantum Mechanics and all the way to existing approaches to Quantum Gravity.

Lee argues that our present procedures must fail when we attempt to apply them to describe the whole universe. They fail because we’re presently treating the passing of time as emergent, but as emergent in a fundamentally timeless universe. Only if we abandon the conviction, held by the vast majority of physicists, that this is the correct procedure can we understand the fundamental nature of reality – and with it quantum gravity of course. Lee further summarizes a few recent developments that treat time as real, though the picture he presents remains incoherent: some loosely connected, maybe promising, recent ideas that you can find on the arXiv and that I don’t want to promote here.

More interesting for me is that Lee doesn’t stop at quantum gravity, which for most people on the planet arguably does not rank very high among the pressing problems. Thinking about nature as fundamentally timeless, Lee argues, is the cause of very worldly problems that we can only overcome if we believe that we ourselves are able to create the future:
“We need to see everything in nature, including ourselves and our technologies, as time-bound and part of a larger, ever evolving system. A world without time is a world with a fixed set of possibilities that cannot be transcended. If, on the other hand, time is real and everything is subject to it, then there is no fixed set of possibilities and no obstacle to the invention of genuinely novel ideas and solutions to it.”
I’ll leave my objections to Lee’s arguments for some other time. For now, let me just say that I explained in this earlier post that a deterministic time evolution doesn’t relieve us from making decisions, and it doesn’t prevent “genuinely novel ideas” in any sensible definition of the phrase.

In summary: Lee’s book is very thought-provoking and it takes the reader on a trip through the most fundamental questions about nature. The book is well written and nicely embedded in the long history of mankind’s wonderment about the passing of time and the circle of life. You will almost certainly enjoy this book if you want to know what contemporary physics has to say, and not to say, about the nature of time. You will almost certainly hate this book if you're a string theorist, but then you already knew that.

Saturday, March 16, 2013

The Philosophy of Gaps

“And then there's the joke in which a young man told his mother he would become a Doctor of Philosophy and she said, ‘Wonderful! But what kind of disease is philosophy?’”
~Steven Pinker in “The Blank Slate”

Philosophers and physicists, especially those working on fundamental questions of nature, have a difficult relationship. I know a lot of physicists who use the word philosophy as an insult, and even those who have sympathy for the quest of the philosopher tend to give them a hard time.

And understandably so. I’ve heard talks by philosophers about the “issue” of infinities in quantum field theory who had never heard of effective field theory. I’ve heard philosophers speaking about Einstein’s “hole argument” who didn’t know what a manifold is, and I’ve heard philosophers talking about laws of nature who didn’t know what a Hamiltonian evolution is.

But on the other hand, I’ve met remarkably sharp philosophers with the ability to strip away excess baggage that physicists like to decorate their theories with, and go straight to the heart of the problem. No wonder the relation between both sides can be uncomfortable.

This has left me wondering what the role of philosophy is in physics, or in modern science more generally.

I will admit that I have a limited attention span for philosophical arguments. To begin with, philosophers (as apparently everybody in the humanities) have the annoying tendency to throw around names rather than proper definitions. The introduction of a cosmology paper in philosophy style would not contain the Friedmann equations, but instead two conflated paragraphs on the Friedmannian paradigm and its contextual appropriation of the cosmological principle, subsequently adapted as the concordance model.

Leaving aside the name-throwing and over-abundance of multi-syllable words, the issue of lacking definitions is a deep one for me. If somebody can’t write down a definition for expressions they are referring to, I lose interest. Because then their whole argument is in the end just empty words. I am interested in verbal arguments only to the point that they precede the construction of a mathematical model.

Having said that, here is where philosophy plays a role in physics: to develop those verbal arguments that it has not yet been possible to cast in a more stringent form. This means though that when science progresses, when our knowledge expands, the room where philosophy is useful inevitably shrinks. The role of the observer in quantum mechanics, horizons in general relativity, or infinities in quantum field theory might once have been philosophical questions. They no longer are. Presently popular topics for philosophers in physics seem to be the nature of time and the multiverse. Personally I think these are already topics that are close enough to existing theories that they can and should be cast into a mathematical language. Topics that are further off from presently existing theories, and still more clearly playground for philosophers, are for example free will or the role of mathematics in science in general.

This tension between philosophers and scientists doesn’t only exist in physics. Another area where you find frequent displays of this confrontation is neuroscience. Consciousness used to be the field of the philosophers, but no longer so. Yet, philosophers are slow to get off the turf.

A recent display of this can be found in a NYT opinion piece that discusses “famous thought experiments” by philosophers. One of these famous arguments that philosophers discuss to make a living seems to be based on confusing the brain perceiving the color red as a result of photons of a certain wavelength hitting the retina, with the brain knowing about the process of perceiving the color. You might be forgiven for confusing knowledge about perception with the perception itself if you didn’t know anything about the brain, but in the last decade we have learned a lot about how the brain is wired and processes input. Or at least some of us have.

It seems clear to me that consciousness and self-awareness are areas that philosophers will have to clear in the near future. That’s right: I don’t think there’s anything particularly mysterious about self-awareness, and nothing about it that we won’t be able to understand with some more research on complex systems and neural networks.

But what about science at large? Does this mean that we have a philosophy of the gaps much like we have a god of the gaps, filling in the spaces where currently knowledge is missing, but inevitably on the retreat?

For most of science this raises a thorny question (previously discussed here): whether or not there is an end to the knowledge about nature that mankind can gather. It’s a question I don’t know how to answer.

But regardless of the answer to this question, as long as there are conscious beings thinking, they will always be left with the question of whether there are limits to what they can think of. A more pragmatic, though related, question is how science works and how it progresses. These I believe are areas where philosophy will always play a role: to analyze the process of thought and inquiry, and its realization in the scientific endeavor. And as long as we have fundamental questions about nature, it is good to keep philosophers around to catalyze the process of turning soft science into hard science. Even if they are sometimes a little annoying.

Saturday, September 08, 2012

What are you, really?

Last month, I reviewed Jim Holt’s book “Why does the world exist?” This question immediately brings up another question: What exists anyway? Holt does not seem to be very sympathetic to the idea that mathematical objects exist, or at least he makes fun of the idea:
“A majority of contemporary mathematicians (a typical, though disputed, estimate is about two-thirds) believe in a kind of heaven – not a heaven of angels and saints, but one inhabited by the perfect and timeless objects they study: n-dimensional spheres, infinite numbers, the square root of -1, and the like. Moreover, they believe that they commune with this realm of timeless entities through a sort of extra-sensory perception.”
There’s no reference for the mentioned estimate, but what’s worse is that referring to mathematical objects as “timeless” implies a preconceived notion of time already. It makes perfect sense to think of time as a mathematical object itself, and to construct other mathematical objects that depend on that time. Maybe one could say that the whole of mathematics does not evolve in this time, and we have no evidence of it evolving in any other time, but just claiming that mathematics studies “timeless objects” is sloppy and misleading. Holt goes on:
“Mathematicians who buy into this fantasy are called “Platonists”… Geometers, Plato observed, talk about circles that are perfectly round and infinite lines that are perfectly straight. Yet such perfect entities are nowhere to be found in the world we perceive with our sense… Plato concluded that the objects contemplated by mathematicians must exist in another world, one that is eternal and transcendent.”
It is interesting that Holt in his book comes across as very open-minded to pretty much everything his interview partners confront him with, including parallel worlds, retrocausation and panpsychism, but discards Platonism as a “fantasy.”

I’m not a Platonist myself, but it’s worth spending a paragraph on the misunderstanding that Holt has constructed, because this isn’t the first time I’ve come across similar statements about circles and lines and so on. It is arguably true that you won’t find a perfect circle anywhere you look. Neither will you find perfectly straight lines. But the reason for this is simply that circles and perfectly straight lines are not objects that appear in the mathematical description of the world on the scales that we see. Does it follow from this that they don’t exist?

If you want to ask the question in a sensible way, you should ask instead about something that we presently believe is fundamental: What’s an elementary particle? Is it an element of a Hilbert space? Or is it described by an element of a Hilbert space? Or, to put the question differently: Is there anything about reality that cannot be described by mathematics? If you say no to this question, then mathematical objects are just as real as particles.

What Holt actually says is: “I’ve never seen any of the mathematical objects that I’ve heard about in school, thus they don’t exist and Platonism is a fantasy.” Which is very different from saying “I know that our reality is not fundamentally mathematical.” With that misunderstanding, Holt goes on to explain Platonism by psychology:
“And today’s mathematical Platonists agree. Among the most distinguished of them is Alain Connes, holder of the Chair of Analysis and Geometry at the College de France, who has averred that “there exists, independently of the human mind, a raw and immutable mathematical reality.”… Platonism is understandably seductive to mathematicians. It means that the entities they study are no mere artifacts of the human mind: these entities are discovered, not invented… Many physicists also feel the allure of Plato’s vision.”
I don’t know if that’s actually true. Most of the physicists that I asked do not believe that reality is mathematics but rather that reality is described by mathematics. But it’s very possibly the case that the physicists in my sample have a tendency towards phenomenology and model building.

Most of them see mathematics as some sort of model space that is mapped to reality. I argued in this earlier post that this is actually not the case. We never map mathematics to reality. We map a simplified system to a more complicated one, using the language of mathematics. Think of a computer simulation to predict the solar cycle. It’s a map from one system (the computer) to another system (the sun). If you do a calculation on a sheet of paper and produce some numbers that you later match with measurements, you’re likewise mapping one system (your brain) to another (your measurement), not some mathematical world to a real one. Mathematics is just a language that you use, a procedure that adds rigor and has proved useful.

I don’t believe, like Max Tegmark does, that fundamentally the world is mathematics. It seems quite implausible to me that we humans should at this point in our evolution already have come up with the best way to describe nature. I used to refer to this as the “Principle of Finite Imagination”: Just because we cannot imagine it (here: something better than mathematics) doesn’t mean it doesn’t exist. I learned from Holt’s book that my Principle of Finite Imagination is more commonly known as the Philosopher’s Fallacy.
“[T]he philosopher’s fallacy: a tendency to mistake a failure of the imagination for an insight into the way reality has to be.”
Though Googling "philosopher's fallacy" brings up some different variants, so maybe it's better to stick with my nomenclature.

Anyway, this has been discussed for some thousands of years and I have nothing really new to add. But there’s always somebody for whom these thoughts are new, as they once were for me. And so this one is for you.
xkcd: Lucky 10000.

Sunday, August 19, 2012

Book review: “Why does the world exist?” by Jim Holt

Why Does the World Exist?: An Existential Detective Story
By Jim Holt
Liveright (July 16, 2012)

Yes, I do sometimes wonder why the world exists. I believe however that it is not among the questions that I am well suited to find an answer to, and thus my enthusiasm is limited. While I am not uninterested in philosophy in principle, I get easily frustrated with people who use words as if they had any meaning that’s not a human construct, words that are simply ill-defined unless the humans themselves and their language are explained too.

I don’t seem to agree with Max Tegmark on many points, but I agree that you can’t build fundamental insights on words that are empty unless one already has these fundamental insights - or wants to take the anthropic path. In other words, if you want to understand nature, you have to do it with a self-referential language like mathematics, not with English. Thus my conviction that if anybody is to understand the nature of reality, it will be a mathematician or a theoretical physicist.

For these reasons I’d never have bought Jim Holt’s book. I was however offered a free copy by the editor. And, thinking that I should broaden my horizon when it comes to the origin of the universe and the existence or absence of final explanations, I read it.

Holt’s book is essentially a summary of thoughts on the question why there isn’t nothing, covering the history of the question as well as the opinions of currently living thinkers. The narrative of the book is Holt’s own quest for understanding that led him to visit and talk to several philosophers, physicists and other intellectuals, including Steven Weinberg, Alan Guth and David Deutsch. Many others are mentioned or cited, such as Stephen Hawking, Max Tegmark and Roger Penrose.

The book is very well written, though Holt has a tendency to list exactly what he ate and drank when and where, which takes up more space than it deserves. There are more bottles of wine and more deaths on the pages of his book than I had expected, though that is balanced by a good sense of humor. Since Holt arranges his narrative along his travel rather than by topic, the book is sometimes repetitive when he reminds the reader of something (e.g. the “landscape”) that was already introduced earlier.

I am very impressed by Holt’s interviews. He has clearly done a lot of his own thinking about the question. His explanations are open-minded and radiate well-meaning, but he is sharp and often critical. In many cases what he says is much more insightful than what his interview partners have to offer.

Holt’s book is a good summary of just how bizarre the world is. The only person quoted in this book who made perfect sense to me is Woody Allen. On the very opposite end is a philosopher named Derek Parfit, who hates the “scientizing” of philosophy, and some of his colleagues who believe in “panpsychism”, undeterred by the total lack of scientific evidence.

The reader of the book is also confronted with John Updike, who belabors the miserable state of string theory: “This whole string theory business… There’s never any evidence, right? There are men spending their whole careers working on a theory of something that might not even exist”, and Alex Vilenkin, who has his own definition of “nothing,” which, if you ask me, is a good way to answer the question.

Towards the end of the book Jim Holt also puts forward his own solution to the problem of why there is something rather than nothing. Let me give you a flavor of that proof:
“Reality cannot be perfectly full and perfectly empty at the same time. Nor can it be ethically the best and causally the most orderly at the same time (since the occasional miracle could make reality better). And it certainly can’t be the ethically best and the most evil at the same time.”
Where to even begin? Every second word in this “proof” is undefined. How can one attempt to make an argument along these lines without explaining “ethically best” in terms that are not taken out of the universe whose existence is supposed to be explained? Not to mention that all along his travel, nobody seems to have told Holt that, shockingly, there isn’t only one system of logic, but a whole selection of them.

This book has been very educational for me indeed. Now I know the names of many isms that I do not want to know more about. I hate the idea that I’d have missed this book if it hadn’t been for the free copy in my mailbox. That having been said, to get anything out of this book you need to come with an interest in the question already. Do not expect the book to create this interest. But if you come with this interest, you’ll almost surely enjoy reading it.

Wednesday, February 22, 2012

Pragmatic Paradigms

I used to consider myself a pragmatist. But some months ago I learned that pragmatism is an American school of thought, which threw me into an identity crisis. Germany is after all "das Land der Dichter und Denker," the country of poets and thinkers. I'm not living up to my ancestry. Clearly, I have to reinvent myself. The Free Will Function is testimony to my attempt. There doesn't seem to be much that is less pragmatic than debating the existence of free will. Except possibly the multiverse.

My attitude towards the landscape problem had been based on pragmatic neglect. I can't figure out what this discussion is good for, so why bother? The landscape problem, in one sentence, is that a supposedly fundamental theory does not only deliver the description of the one universe we inhabit but of many, maybe infinitely many, universes in addition. The collection of all these universes is often called the multiverse.

There are many versions of such multiverses; Max Tegmark has layered them into four levels and Brian Greene has written a book about them. String theory infamously won't let its followers ignore the inelegant universes, but everybody else can still ignore the followers. At least that was my way to deal with the issue. Until I heard a talk by Keith Dienes.

Dienes has been working on making probabilistic statements about properties of possible string theory vacua, and is one of the initiators and participants of the "string vacuum project." Basically, he and his collaborators have been random-sampling models and looking at how often they fulfilled certain properties, like how often one gets the standard model gauge groups or chiral fermions, and whether these features are statistically correlated. I can't recall the details of that talk; you can either watch it here or read the paper here. But what I recall is the sincerity with which Dienes expressed his belief that, if the landscape is real, then in the end probabilistic statements might be the only thing we can do. There won't be any other answer to our questions. Call it a paradigm change.
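The flavor of such a statistical survey can be sketched in a few lines. This is not Dienes' actual method or data; the properties, probabilities, and the correlation between them below are entirely invented for illustration of what "random sampling models and checking for correlated features" means in practice.

```python
import random

# Hypothetical sketch: randomly sample toy "vacua", record two made-up
# binary properties, and estimate how often they co-occur. All numbers
# here are invented; they are not real string-vacuum statistics.

random.seed(0)

def sample_vacuum():
    has_sm_gauge_group = random.random() < 0.3
    # Deliberately build in a correlation: chiral fermions are made
    # more likely when the gauge group comes out right.
    p_chiral = 0.6 if has_sm_gauge_group else 0.1
    has_chiral_fermions = random.random() < p_chiral
    return has_sm_gauge_group, has_chiral_fermions

N = 100_000
samples = [sample_vacuum() for _ in range(N)]

p_gauge = sum(g for g, c in samples) / N
p_chiral_given_gauge = (sum(1 for g, c in samples if g and c)
                        / sum(1 for g, c in samples if g))

print(f"P(SM gauge group)                ~ {p_gauge:.2f}")
print(f"P(chiral fermions | gauge group) ~ {p_chiral_given_gauge:.2f}")
```

The conditional frequency coming out well above the unconditional one is what a statistical correlation between features looks like in such a scan; the substantive question, of course, is what such frequencies over a landscape of vacua would even mean physically.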

Dienes might be wrong of course. String theory might be wrong and its landscape a blip in the history of physics. But that made me realize that I, as many other physicists, favor a particular mode of thinking, and the landscape problem just doesn't fit in. So what if he's right, I thought, would I just reject the idea because I've been educated under an outdated paradigm?

Now, realizing that I'm getting old didn't make me a multiverse enthusiast. As I argued in this earlier post, looking for a right measure in the landscape, one according to which we live in a likely place, isn't much different from looking for some other principle according to which the values of parameters we measure are optimal in some sense. If that works, it's fine with me, but I don't really see the intellectual advantage of believing in the reality of the whole parameter space.

So while I remain skeptical of the use of the multiverse, I had to wonder whether Dienes might be right, and I am stuck with old-fashioned, pragmatic paradigms.

I was trying to continue to ignore string theorists and their problems. Just that, after trying for some while, I had to admit that I think Tegmark and Greene are right. The landscape isn't a problem of string theory alone.

As I've argued in this post, every theory that we currently know has a landscape problem because we always have to make some assumptions about what constitutes the theory to begin with. We have to identify mathematical objects with reality. Without these assumptions, in the end the only requirement that is left is mathematical consistency, and that is not sufficient to explain why we see what we see; there is too much that is mathematically consistent which does not describe our observation. All theories have that problem, it's just more apparent with some than with others.

Normally I just wouldn't care but, if you recall, I was trying not to be so pragmatic. This then leaves me with two options. I can either believe in the landscape. Or I can believe that mathematics isn't fundamentally the right language to describe nature.

While I was mulling over German pragmatism and the mathematical effectiveness of reason, Lee Smolin wrote a paper on the landscape problem.

The paper excels in the use of lists and bullet points, and argues a lot with principles and fallacies and paradigms. So how could I not read it?

Lee writes we're stuck with the Newtonian paradigm, a theme that I've heard Paul Davies deliver too. We've found it handy to deal with a space of states and an evolution law acting on it, but that procedure won't work for the universe itself. If you believe Lee, the best way out is cosmological natural selection. He argues that his approach to explain the parameters in the standard model is preferable because it conforms to Leibniz' principle of sufficient reason:
    Principle of Sufficient Reason.
    For every property of nature which might be otherwise, there must be a rational reason which is sufficient to explain that choice.

That reason cannot be one of logical conclusion, otherwise one wouldn't need the principle. Leibniz explains that his principle of sufficient reason is necessary "in order to proceed from mathematics to physics."

Lee then argues basically that Leibniz's principle favors some theories over others. I think he's both right and wrong. He is right in that Leibniz's principle favors some theories over others. But he's wrong in thinking that there is sufficient reason to apply the principle to begin with. The principle of sufficient reason itself has a landscape problem, and it is strangely anthropocentric in addition.

As Leibniz points out the "sufficient reason" cannot be a strictly logical conclusion. For that one doesn't need his principle. The sufficient reason can eventually only be a social construct, based on past observation and experience, and it will be one that's convincing for human scientists in particular. It doesn't help to require the sufficient reason to be "rational," this is just another undefined adjective.

Take as an example the existence of singularities. We like to think that a theory that results in singularities is unphysical, and thus cannot fundamentally be a correct description of nature. For many physicists, singularities or infinite results are "sufficient reason" to discard a theory. It's unphysical, it can't be real: That is not a logical conclusion, and exactly the sort of argument that Leibniz is after. But, needless to say, scientists don't always agree on when a reason is "sufficient." Do we have sufficient reason to believe that gravity has to be quantized? Do we have sufficient reason to believe that black holes bounce and create baby universes? Do we have sufficient reason to require that the Leibniz cookie has exactly 52 teeth?

Do we have any reason to believe that a human must be able to come up with a rational reason for a correct law of nature?

The only way to remove the ambiguity in the principle of sufficient reason would be to find an objective measure for "sufficient", and then we're back to square one: We have no way to prefer one "sufficiency" over the other, except that some work better than others. As Popper taught us, one can't verify a theory. One can only fail to falsify it and gain confidence. Yet how much confidence is "sufficient" to make a reason "rational" is a sociological question.

So in the end, one could read Leibniz's principle as one of pragmatism.

Thus reassured of my German pragmatism, I thought going through this argument might not have been very useful, but at least it would make a good blogpost.