
Friday, June 12, 2015

Where are we on the road to quantum gravity?

Damned if I know! But I got to ask Lee Smolin some questions, which he kindly answered, and you can read his answers over at Starts with a Bang. If you’re a string theorist you don’t have to read it of course, because we already know you’ll hate it.

But I would be acting out of character if not having an answer to the question posed in the title kept me from going on and distributing opinions anyway, so here we go. On my postdoctoral path through institutions I’ve passed by string theory and loop quantum gravity, and after some closer inspection stayed at a distance from both, because I wanted to do physics and not math. I wanted to describe something in the real world and not spend my days proving convergence theorems or doing stability analyses of imaginary things. I wanted to do something meaningful with my life, and I was – still am – deeply disturbed by how detached quantum gravity is from experiment. So detached, in fact, that one has to wonder if it’s science at all.

That’s why I’ve worked for years on quantum gravity phenomenology. The recent developments in string theory that apply the AdS/CFT duality to the description of strongly coupled systems are another way to make contact with reality, but then we are no longer talking about quantum gravity.

For me the most interesting theoretical developments in quantum gravity are the ones Lee hasn’t mentioned. There are various emergent gravity scenarios and though I don’t find any of them too convincing, there might be something to the idea that gravity is a statistical effect. And then there is Achim Kempf’s spectral geometry, which for all I can see would fit together very nicely with causal sets. But yeah, there are like two people in the world working on this and they’re flying below the pop sci radar, so you’d probably never have heard of them if it weren’t for my awesome blog. So listen: Keep an eye on Achim Kempf and Rafael Sorkin; they’re both brilliant and their work is totally underappreciated.

Personally, I am not so secretly convinced that the actual reason we haven’t yet figured out which theory of quantum gravity describes our universe is that we haven’t understood quantization. The so-called “problem of time”, the past hypothesis, the measurement problem, the cosmological constant – all this signals to me the problem isn’t gravity, the problem is the quantization prescription itself. And what a strange procedure this is, to take a classical theory and then quantize and second quantize it to obtain something more fundamental. How do we know this procedure isn’t scale dependent? How do we know it works the same at the Planck scale as in our labs? We don’t. Unfortunately, this topic rests at the intersection of quantum gravity and quantum foundations and is dismissed by both sides, unless you count my own small contribution. It’s a research area with only one paper!

Having said that, I found Lee’s answers interesting because I understand better now the optimism behind the quote from his 2001 book, that predicted we’d know the theory of quantum gravity by 2015.

I originally studied mathematics, and it just so happened that the first journal club I ever attended, in '97 or '98, was held by a professor for mathematical physics on the topic of Ashtekar’s variables. I knew some General Relativity and was just taking a class on quantum field theory, and this fit in nicely. It was somewhat over my head but basically the same math and not too difficult to follow. And it all seemed to make much sense! I switched from math to physics and in fact for several years to come I lived under the impression that gravity had been quantized and it wouldn’t take long until somebody calculated exactly what is inside a black hole and how the big bang works. That, however, never happened. And here we are in 2015, still looking to answer the same questions.

I’ll refrain from making a prediction, because predicting when we’ll know the theory of quantum gravity is more difficult than finding it in the first place ;o)

Saturday, May 30, 2015

String theory advances philosophy. No, really.

I have a soft side, and I don’t mean my Snoopy pants, though there is that. I mean I have a liking for philosophy because there are so many questions that physics can’t answer. I never get far with my philosophical ambitions though because the only reason I can see for leaving a question to philosophers is that the question itself is the problem. Take for example the question “What is real?” What does that really mean?

Most scientists are realists and believe that the world exists independently of them. On the very opposite end there is solipsism, the belief that one can only be sure that one’s own mind exists. And then there’s a large spectrum of isms in the middle. Philosophers have debated the nature of reality for thousands of years, and you might rightfully conclude that it just isn’t possible to make headway on the issue. But you’d be wrong! As I learned at a recent conference where I gave a talk about dualities in physics, string theory has indeed helped philosophers make progress in this ancient debate. However, I couldn’t make much sense of the interest my talk on dualities got until I read Richard Dawid’s book, which put things into perspective.

I’d call myself a pragmatic realist and an opportunistic solipsist, which is to say that I sometimes like to challenge people to prove to me that they’re not a figment of my imagination. So far nobody has succeeded. It’s not so much self-focus that makes me contemplate solipsism, but a deep mistrust in the reliability of human perception and memory, especially my own, because who knows if you exist at all. Solipsism never was very popular, which might be because it makes you personally responsible for all that is wrong with the world. It is also possibly the most unproductive mindset you can have if you want to get research done, but I find it quite useful for dealing with the more bizarre comments that I get.

My biggest problem with the question of what is real, though, isn’t that I evidently sometimes talk to myself, but that I don’t know what “real” even means, which is also why most discussions about the reality of time or the multiverse seem devoid of content to me. The only way I’ve ever managed to make sense of reality is in layers of equivalence classes, so let me introduce you to my personal reality.

Equivalence classes are what mathematicians use to collect things with similar properties. They’re basically a weaker form of equality, often denoted with a tilde ~. For example all natural numbers that divide evenly by seven are in the same equivalence class, so while 7 ≠ 21, we have 7 ~ 21. They’re not the same numbers, but they share a common property. The good thing about using equivalence classes is that once they are defined one can derive relations for them. They play an essential role in topology, but I digress, so back to reality.
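For those who like things made concrete, the example above fits in a few lines of code. This is only an illustration of the paragraph above; the function names and the choice to realize ~ as “same remainder modulo 7” are mine:

```python
from collections import defaultdict

def equivalent(a, b, n=7):
    # a ~ b iff a and b leave the same remainder when divided by n.
    # Multiples of seven all leave remainder 0, so they share one class.
    return a % n == b % n

# 7 and 21 are not equal, but they are equivalent:
print(7 == 21)            # False
print(equivalent(7, 21))  # True

# Sorting numbers into their equivalence classes:
classes = defaultdict(list)
for k in range(20):
    classes[k % 7].append(k)

print(classes[0])  # [0, 7, 14] -- the class of multiples of seven below 20
```

Note that the classes partition the numbers: every number lands in exactly one class, which is what makes an “as real as” relation workable.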

Equivalence classes help because while I can’t make sense of the question of what is real, the question of what is “as real as” makes sense. The number seven isn’t “as real as” my shoe, and the reason I’m saying this is the physical interaction I can have with my shoe but not with seven. That’s why, you won’t be surprised to hear, I want to argue here that the best way to think about reality is to think about physics first.

As I laid out in an earlier post, in physics we talk about direct and indirect measurements, but the line that separates these is fuzzy. Roughly speaking, the more effort is necessary to infer the properties of the object measured, the more indirect the measurement. A particle that hits a detector is often said to be directly measured. A particle whose existence has to be inferred from decay products that hit the detector is said to be indirectly measured. But of course there are many other layers of inference in the measurement. To begin with there are assumptions about the interactions within the detector that eventually produce a number on a screen, then there are photons that travel to your retina, and finally the brain activity resulting from these photons.

The reason we don’t normally mention all these many assumptions is that we assign them an extremely high confidence level. Reality then, in my perspective, has confidence levels like our measurements do, from very direct to very indirect. The most direct measurement, the first layer of reality, is what originates in your own brain. The second layer is direct sensory input: It’s a photon, it’s the fabric touching your skin, the pressure fluctuations in the air perceived as sound. The next layer is the origin of these signals, say, the screen emitting the photon. Then the next layer is whatever processor gave rise to that photon, and so on. Depending on how solipsistic you feel, you can imagine these layers extending outside or inside.

The more layers there are, the harder it becomes to reconstruct the origin of a signal and the less real the origin appears. A person appears much more real if they are stepping on your feet, rather than sending an image of a shoe. Also, as optical illusions tell us, the signal reconstruction can be quite difficult which twists our perception of reality. And let us not even start with special relativistic image distortions that require quite some processing to get right.

Our assessment of how direct or indirect a measurement is, and of how real the object measured appears, is not fixed and may change over time with technological advances. For example, it was historically much debated whether atoms can be considered real if they cannot be seen by eye. But modern electron microscopes can now produce images of single atoms, a much more direct measurement than inferring the existence of atoms from chemical reactions. As the saying goes, “seeing is believing.” Seeing photos from the surface of Mars has likewise moved Mars into another equivalence class of reality, one that is much closer to our sensory input. Doesn’t Mars seem so much more real now?

[Surface of Mars. Image Source: Wikipedia]


Quarks have posed a particular problem for the question of reality since they cannot be directly measured due to confinement. In fact many people in the early days of the quark model, Gell-Mann himself included, didn’t believe in quarks being real, but were thinking of them as calculational devices. I don’t really see the difference. We infer their properties through various layers of reasoning. Quarks are not in a reality class that is anywhere close to direct sensory input, but they have certainly become more real to us as our confidence in the theory necessary to extract information from the data has increased. These theories are now so well established that quarks are considered as real as other particles that are easier to measure, fapp – for all practical physicists.

It’s at about the advent of quantum field theory that the case of scientific realism starts getting complicated. Philosophers separate into two major camps, ontological realism and structural realism. The former believes that the objects of our theories are somehow real, the latter that it’s instead the structure of the theory. Effective field theories basically tell you that ontological realism only makes sense in layers, because you might have different objects depending on the scale of resolution. But even then, with seas of virtual particles, and different bases in the Hilbert space, and different pictures of time-evolution, the objects that should be at the core of ontological realism seem ill-defined. And that’s not even taking into account that the notion of a particle also depends on the observer.

From what I can extract from Dawid’s book, it hasn’t been looking good for ontological realism for some while, but it’s an ongoing debate, and it’s here that string theory became relevant.

Some dualities between different theories have been known for a long time. A duality can relate theories that have a different field content and different symmetries. That by itself is a death knell for anything ontological, for if you have two different fields by which you can describe the same physics, what is the rationale for calling one more real than the other? Dawid writes:
“dualities… are thoroughly incompatible with ontological scientific realism.”
String theory now not only has popularized the existence of dualities and forced philosophers to deal with that, it has also served to demonstrate that theories can be dual to each other that are structurally very different, such as a string theory in one space and a gauge-theory in a space of lower dimension. So one is now similarly at a loss to decide which structure is more real than the other.

To address this, Dawid suggests to instead think of “consistent structure realism,” by which he seems to mean that we need to take the full “consistent structure” (ie, string theory) and interpret this as being the fundamentally “real” thing.

As far as I am concerned, both sides of a duality are equally real, or equally unreal, depending on how convincing you think the inference of either theory from existing data is. They’re both in the same equivalence class; in fact the duality itself provides the equivalence relation. So suppose you have convincing evidence for some string-theory-derived duality to be a good description of nature; does that mean the whole multiverse is equally real? No, because the rest of the multiverse only follows through an even longer chain of reasoning. You must either come up with a mechanism that produces the other universes (as in eternal inflation or the many worlds interpretation) and then find support for that, or the multiverse moves to the same class of reality as the number seven, somewhere behind Snoopy and the Yeti.

So the property of being real is not binary, but rather it is infinitely layered. It is also relative and changes over time, for the effort that you must make to reconstruct a concept or an image isn’t the same as the effort I might have to make. Quarks become more real the better we understand quantum chromodynamics, in the same way that you are more real to yourself than you are to me.

I still don’t know if strings as the fundamental building blocks of elementary particles can ever reach a reality level comparable to quarks, or if there is any conceivable measurement at all, no matter how indirect. Though one could rightfully argue that in some people’s mind strings already exist beyond any doubt. And if you’re a brain in a jar, that’s all that matters, really.




Thursday, January 08, 2015

Do we live in a computer simulation?

Some days I can almost get myself to believe that we live in a computer simulation, that all we see around us is a façade designed to mislead us. There would finally be a reason for all this, for the meaningless struggles, the injustice, for life, and death, and for Justin Bieber. There would even be a reason for dark matter and dark energy, though that reason might just be some alien’s bizarre sense of humor.

It seems perfectly possible to me to trick a conscious mind, at the level of that of humans, into believing a made-up reality. Ask the guy sitting on the sidewalk talking to the trash bin. Sure, we are presently far from creating artificial intelligence, but I do not see anything fundamental that stands in the way of such a creation. Let it be a thousand years or ten thousand years; eventually we’ll get there. And once you believe that it will one day be possible for us to build a supercomputer that hosts intelligent minds in a world whose laws of nature are our invention, you also have to ask yourself whether the laws of nature that we ourselves have found are somebody else’s invention.

If you just assume the simulation that we might live in has us perfectly fooled and we can never find out if there is any deeper level of reality, it becomes rather pointless to even think about it. In this case the belief in “somebody else” who has created our world and has the power to manipulate it at his or her will differs from belief in an omniscient god only by terminology. The relevant question though is whether it is possible to fool us entirely.

Nick Bostrom has a simulation argument that is neatly minimalistic, though he is guilty of using words that end in ism. He is basically saying that if there are many civilizations running simulations with many artificial intelligences, then you are more likely to be simulated than not. So either you live in a simulation, or our universe (multiverse, if you must) never goes on to produce many civilizations capable of running these simulations, for one reason or another. Pick your poison. I think I prefer the simulation.
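The bookkeeping behind this counting argument is easy to put into numbers. Here is a toy version; all the figures below are invented for illustration and are not Bostrom’s:

```python
# Toy version of the simulation argument's head count.
# All numbers are made up for illustration.
biological_minds = 1e10   # minds living at the "base level" of reality
civilizations    = 1e3    # civilizations that reach simulation capability
sims_per_civ     = 1e3    # ancestor simulations each of them runs
minds_per_sim    = 1e10   # simulated minds per simulation

simulated_minds = civilizations * sims_per_civ * minds_per_sim
p_simulated = simulated_minds / (simulated_minds + biological_minds)

# If simulations are common, almost every mind is a simulated one:
print(p_simulated)  # ~0.999999
```

The point is only that once simulated minds vastly outnumber biological ones, the probability of any given mind being simulated approaches one; the only way out is for the number of simulation-running civilizations to be close to zero.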

Math-me has a general issue with these kinds of probability arguments (same as with the Doomsday argument) because they implicitly assume that the probability distribution of lives lived over time is uncorrelated, which is clearly not the case since our time-evolution is causal. But this is not what I want to get into today, because there is something else about Bostrom’s argument that has been bugging Physics-me.

For his argument, Bostrom needs a way to estimate how much computing power is necessary to simulate something like the human mind perceiving something like the human environment. And in his estimate he assumes, crucially, that it is possible to significantly compress the information of our environment. Physics-me has been chewing on this point for a while. The relevant paragraphs are:

“If the environment is included in the simulation, this will require additional computing power – how much depends on the scope and granularity of the simulation. Simulating the entire universe down to the quantum level is obviously infeasible, unless radically new physics is discovered. But in order to get a realistic simulation of human experience, much less is needed – only whatever is required to ensure that the simulated humans, interacting in normal human ways with their simulated environment, don’t notice any irregularities.

The microscopic structure of the inside of the Earth can be safely omitted. Distant astronomical objects can have highly compressed representations: verisimilitude need extend to the narrow band of properties that we can observe from our planet or solar system spacecraft. On the surface of Earth, macroscopic objects in inhabited areas may need to be continuously simulated, but microscopic phenomena could likely be filled in ad hoc. What you see through an electron microscope needs to look unsuspicious, but you usually have no way of confirming its coherence with unobserved parts of the microscopic world.

Exceptions arise when we deliberately design systems to harness unobserved microscopic phenomena that operate in accordance with known principles to get results that we are able to independently verify. The paradigmatic case of this is a computer. The simulation may therefore need to include a continuous representation of computers down to the level of individual logic elements. This presents no problem, since our current computing power is negligible by posthuman standards.”
This assumption is immediately problematic, because it isn’t as easy as saying that whenever a human wants to drill a hole into the Earth you quickly go and compute what he has to find there. You would have to track what all these simulated humans are doing to know when that becomes necessary. And then you’d have to make sure that this never leads to any inconsistencies. Or else, if it does, you’d have to remove the inconsistency, which will require even more computing power. To avoid the inconsistencies, you’d have to carry along the results for all future measurements that humans could possibly make, the problem being that you don’t know which measurements they will make because you haven’t yet run the simulation. Dizzy? Don’t leave, I’m not going to dwell on this.

The key observation that I want to pick on here is that there will be instances in which The Programmer really has to crank up the resolution to keep us from finding out we’re in a simulation. Let me refer to what we perceive as reality as level 0, and a possible reality of somebody running our simulation as level 1. There could be infinitely many levels in each direction, depending on how many simulators simulate simulations.

This idea that structures depend on the scale at which they are tested and that at low energies you’re not testing all that much detail is basically what effective field theories are all about. Indeed, as Bostrom asserts, for much of our daily life the single motion of each and every quark is unnecessary information, atoms or molecules are enough. This is all fine by Physics-me.

Then these humans go and build the LHC, and whenever the beams collide the simulation suddenly needs a considerably finer mesh, or else the humans will notice there is something funny with their laws of nature.

Now you might think of blasting the simulation by just demanding so much fine-structure information all at once that the computer running our simulation cannot deliver. In this case the LHC would serve to test the simulation hypothesis. But there is really no good reason why the LHC should just be the thing to reach whatever computation limit exists at level 1.

But there is a better way to test whether we live in a simulation: Build simulations ourselves, the more the better. The reason is that you can’t compress what is already maximally compressed. So if the level 1 computation wants to prevent us from finding out that we live in a simulation by creating simulations ourselves, it will have to crank up the resolution for that part of our level 0 simulation that is going to host our simulation at level -1.

Now we try to create simulations that will create a simulation that will create a simulation, and so on. Eventually, the level 1 simulation will not be able to deliver any more, regardless of how good their computer is, and the then-lowest level will find some strange artifacts: something that is clearly not compatible with the laws of nature they have found so far and believed to be correct. This breakdown gets read out by the computer one level above, and so on, until it reaches us and then whatever is the uppermost level (if there is one).

Unless you want to believe that I’m an exceptional anomaly in the multiverse, every reasonably intelligent species should have somebody who will come up with this sooner or later. Then they’ll set out to create simulations that will create a simulation. If one of their simulations doesn’t develop in the direction of creating more simulations, they’ll scrap it and try a different one, because otherwise it’s not helpful to their end.

This leads to a situation much like Lee Smolin’s Cosmological Natural Selection, in which black holes create new universes that create black holes that create new universes, and so on. The whole population of universes is then dominated by those universes that lead to the largest numbers of black holes – that have the most “offspring.” In Cosmological Natural Selection we are most likely to find ourselves in a universe that optimizes the number of black holes.

In the scenario I discussed above the reproduction doesn’t happen by black holes but by building computer simulations. In this case, then, anybody living in a simulation is most likely to be living in a simulation that will go on to create another simulation. Or, to look at this from a slightly different perspective, if you want our species to continue thriving and prevent The Programmer from pulling the plug, you had better work on creating artificial intelligence, because this is why we’re here. You asked what’s the purpose of life? There it is. You’re welcome.

This also means you could try to test the probability of the simulation hypothesis being correct by seeing whether our universe does indeed have the optimal conditions for the creation of computer simulations.

Brain hurting? Don’t worry, it’s probably not real.

Saturday, January 03, 2015

Your g is my e – Has the time come for a physics notation standard?

Standards make sure the nuts fit the bolts.
[Image Source: nutsandbolts.mit.edu]

The German Institute for Standardization, the “Deutsches Institut für Normung” (DIN), has standardized German life since 1917. DIN 18065 sets the standard for the height of staircase railings, DIN 58124 the surface of school bags to be covered with reflective stripes, and DIN 8270-2 the length of the hands of a clock. The Germans have a standard for pretty much everything from toilets to sleeping bags to funeral services.

Many of the German standards are now identical to European Standards, EN, and/or International Standards, ISO. According to DIN ISO 8601 for example, the international standard week begins on Monday and has seven days. DIN EN 1400-1 certifies that a pacifier has two holes so that the baby can still breathe if it manages to suck the pacifier into its mouth (it happens). The international standard DIN EN ISO 20126 assures that every bristle of your toothbrush can withstand a pull of at least 15 Newton (“Büschelauszugskraftprüfung,” bristle-pull-off-force-test, as the Germans call it). A lot of standards are dedicated to hardware supply and electronic appliances; they make sure that the nuts fit the bolts, the plugs fit the outlets, and the fuses blow when they should.

DIN EN 45020 is the European Standard for standards.

Where standards are lacking, life becomes cumbersome. Imagine if every time you bought envelopes or folders you had to check that they actually fit the paper you have. The Swedes have a different standard for paper punching than the Germans, neither of which is identical to the US American one. Filing cross-country taxes is painful for many reasons, but the punch issue is the straw that makes my camel go nuts. And let me not even get started about certain nations who don’t even use the ISO paper sizes, because international is just the rest of the world.

Standards are important for consumer safety and convenience, but they have another important role, which is to benefit the economic infrastructure by making reuse and adaptation dramatically easier. Mechanical engineers figured that out a century ago, so why haven’t the physicists?

During the summer I read a textbook on in-medium electrodynamics, a topic I was honestly hoping I’d never again have anything to do with, but unfortunately it was relevant for my recent paper. I went and flipped through the first six chapters or so because they covered basics that I thought I knew, just to find that the later chapters didn’t make any sense. They gradually started making sense after I figured out that q wasn’t the charge and η wasn’t the viscosity.

Anybody who often works with physics textbooks will have encountered this problem before. Even after adjusting for unit and sign conventions, each author has their own notation.

Needless to say this isn’t a problem of textbooks only. I quite frequently read papers that are not directly in my research area, and it is terribly annoying having to waste time trying to decode the nomenclature. In one instance I recall being very confused about an astrophysics paper until it occurred to me that M probably wasn’t the mass of the galaxy. Yeah, haha, how funny.

I’m one of these terrible referees who will insist that every variable, constant, and parameter is introduced in the text. If you write p, I expect you to explain that it’s the momentum. (Or is it a pressure?) If you write g, I expect you to explain it’s the metric determinant. (Or is it a coupling constant? And what again is your sign convention?) If you write S, I expect you to explain it’s the action. (Or is it the entropy?)

I’m doing this mostly because if you read papers dating back to the turn of the last century it is very apparent that what was common notation then isn’t common notation any more. If somebody in a hundred years downloads today’s papers, I still want them to be able to figure out what the papers are about. Another reason I insist on this is that not explaining the notation can add substantial interpretational fog. One of my pet peeves is to ask whether x denotes a position operator or a coordinate. You can build whole theories by mixing these up.

You may wnat to dsicard this as some German maknig am eelphnat out of a muose, but think twice. You almots certainly have seen tihs adn smiliar memes that supposedly show how amazingly well the human brain is at sense-making and error correction. If we can do this, certainly we are able to sort out the nomenclature used in scientific papers. Yes, we are able to do this like you are able to decipher my garbled up English. But would you want to raed a whoel essay liek this?

The extra effort it takes to figure out somebody else’s nomenclature, even if it isn’t all that big a hurdle, creates friction that makes interdisciplinary work, even collaboration within one discipline, harder and thus discourages it. Researchers within one area often settle on a common or at least similar nomenclature, but this happens typically within groups that are already very specialized, and the nomenclature hurdle further supports this overspecialization. Imagine how much easier it would be to learn about a new subject if each paper used a standard notation or at least had a list of used notation added at the end, or in a supplement.

There aren’t all that many letters in the alphabets we commonly use, and we’d run out of letters quickly if we tried to keep them all different. But they don’t need to be all different – more practical would be palettes for certain disciplines. And of course one doesn’t really have to fix each and every twiddle or index if it is explained in the text. Just the most important variables, constants, and observables would already be a great improvement. Say, that T that you are using there, does or doesn’t it include complex conjugation? And the D, is that the number of spatial coordinates only, or does it include the time coordinate? Oh, and N isn’t a normalization but an integer, how stupid of me.

In fact, I think that the benefit, especially for students who haven’t yet seen all that many papers, would be so large that we will almost certainly sooner or later see such a nomenclature standard. And all it really takes is for somebody to set up a wiki and collect entries, then for authors to add a note that they used a certain notation standard. This might be a good starting point.

Of course a physics notation standard will only work if sufficient people come to see the benefit. I don’t think we’re quite there yet, but I am pretty sure that the day will come when some nation expects a certain standard for lecture notes and textbooks, and that day isn’t too far into the future.

Tuesday, October 21, 2014

We talk too much.

Image Source: Loom Love.

If I had one word to explain human culture at the dawn of the 21st century it would be “viral.” Everybody, it seems, is either afraid of or trying to make something go viral. And as a mother of two toddlers in Kindergarten, I am of course well qualified to comment on the issue of spreading diseases, like pinkeye, lice, goat memes, black hole firewalls, and other social infections.

Today’s disease is called rainbow loom. It spreads via wrist bands that you are supposed to crochet together from rubber rings. Our daughters are too young to crochet, but that doesn’t prevent them from dragging around piles of tiny rubber bands which they put on their fingers, toes, clothes, toys, bed posts, door knobs and pretty much everything else. I spend a significant amount of my waking hours picking up these rubber bands. The other day I found some in the cereal box. Sooner or later, we’ll accidentally eat one.

But most of the infections the kids bring home are words and ideas. As of recently, they call me “little fart” or “old witch” and, leaving aside the possibility that this is my husband’s vocabulary when I am away, they probably trade these expressions at Kindergarten. I’ll give you two witches for one fart, deal? Lara, amusingly enough, sometimes confuses the words “ass” and “men” – “Arch” and “Mench” in German with her toddler’s lisp. You’re not supposed to laugh, you’re supposed to correct them. It’s “Arsch,” Lara, “SCH, not CH, Arsch.”

Man, as Aristotle put it, is a zoon politikon: she lives in communities, she is social, she shares, she spreads ideas and viruses. He does too. I pass through Frankfurt international airport on average once per week. Research shows that the more often you are exposed to a topic, the more important you think it is, regardless of what the source is. It’s the repeated exposure that does it. Once you have a word in your head marked as relevant, your brain keeps pushing it around and hands it back to you to look for further information. Have I said Ebola yet?

Yes, words and ideas, news and memes, go viral, spread, mutate and affect the way we think. And the more connected we are, the more we share, the more we become alike. We see the same things and talk about the same things. Because if you don’t talk about what everybody else talks about would you even listen to yourself?

Not so surprisingly then, it has become fashionable to declare the end of individualism also in science, pointing towards larger and larger collaborations, and increasing co-author networks, the need to share, and the success of sharing. According to this NYT headline, the “ERA OF BIG SCIENCE DIMINISHES ROLE OF LONELY GENIUS”. We can read there
“Born out of the complexity of modern technology, the era of the vast, big-budget research team came into its own with its scientific achievements of 1984.”
Yes, that’s right, this headline dates back 30 years.

The lonely genius of course has always been a myth. Science is and always has been a community enterprise. We’re standing on the shoulders of giants. Most of them are dead, ok, but we’re still standing, standing on these dead people’s shoulders, and we’re still talking and talking and talking. We’re all talking way too much. It’s hard not to have this impression after attending 5 conferences more or less in a row.

Collaboration is very en vogue today, or “trending” as we now say. Nature recently had an article about the measurement of the gravitational constant, G. Not a topic I care deeply about, but the article has an interesting quote:
“Until now, scientists measuring G have competed; everyone necessarily believes in their own value,” says Stephan Schlamminger, an experimental physicist at NIST. “A lot of these people have pretty big egos, so it may be difficult,” he says. “I think when people agree which experiment to do, everyone wants their idea put forward. But in the end it will be a compromise, and we are all adults so we can probably agree.”
Working together could even be a stress reliever, says Jens Gundlach, an experimental physicist at the University of Washington in Seattle. Getting a result that differs from the literature is very uncomfortable, he says. “You think day and night, ‘Did I do everything right?’”
And here I was thinking that worrying day and night about whether you did everything right is the essence of science. But apparently that’s too much stress. It’s clearly better we all work together to make this stressful thinking somebody else’s problem. Can you have a look at my notes and find that missing sign?

The Chinese, as you have almost certainly read, are about to overtake the world, and in that effort they now reform their science research system. Nature magazine informs us that the idea of this reform is “to encourage scientists to collaborate on fewer, large problems, rather than to churn out marginal advances in disparate projects that can be used to seek multiple grants. “Teamwork is the key word,” says Mu-Ming Poo, director of the CAS Institute of Neuroscience in Shanghai.” Essentially, it seems, they’re giving out salary increases for scientists to think the same as their colleagues.

I’m a miserable cook. My mode of operation is taking whatever is in the fridge, throwing it into a pan with loads of butter, making sure it’s really dead, and then pouring salt over it. (So you don’t notice the rubber bands.) Yes, I’m a miserable cook. But I know one thing about cooking: if you cook it for too long or stir too much, all you get is mush. It’s the same with ideas. We’re better off with various individual approaches than one collaborative one. Too much systemic risk in putting all your eggs in the same journal.

The kids, they also bring home sand-bathed gummy bears that I am supposed to wash, their friend’s socks, and stacks of millimeter paper glued together because GLUE! Apparently some store donated cubic meters of this paper to the Kindergarten because nobody buys it anymore. I recall having to draw my error bars on this paper, always trying not to use an eraser because the grid would rub away with the pencil. Those were the days.

We speak about ideas going viral, but we never speak about what happens after this. We get immune. The first time I heard about the Stückelberg mechanism I thought it was the greatest thing ever. Now it’s on the daily increasing list of oh-yeah-this-thing. I’ve always liked the myth of the lonely genius. I have a new office mate. She is very quiet.

Friday, October 03, 2014

Is the next supercollider a good investment?

The relevance of basic research is difficult to communicate to politicians who only care about their next term and who don’t want to invest in what might take decades to pay off. But it is even more difficult to decide which research is the best to invest into, and how much it is worth, in numbers.

Whether a next supercollider is worth the billions of Euro that it will eat up is a very involved question. I find it partly annoying, partly disturbing, that many of my physics colleagues regard the answer as obvious. Clearly we need a new supercollider! To measure the details of this, and the decay channels of that, to get a cleaner signal of something and a better precision for whatever. And I am sure they will come up with an argument for why Susy, our invisible friend, is still just around the corner.

To me this superficial argumentation is just another way of demonstrating they don’t care about communicating the relevance of their research. Of course they want a next collider - they make their living writing papers about that.

The most common argument that I hear in favor of the next collider is that much more money is wasted on the war in Afghanistan (if you ask an American) or rebuilding the Greek economy (if you ask a German), and I am sure similar remarks are uttered worldwide. The logic here seems to be that a lot of money is wasted anyway, so what does it matter to spend some billions on a collider. Maybe this sounds convincing if you have a PhD in high energy physics, but I don’t know who else is supposed to buy this.

The next argument I keep hearing is that the worldwide web was invented at CERN, which also hosts the LHC right now. If anything, this argument is even more stupid than the war-also-wastes-money argument. Yes, Tim Berners-Lee happened to work at CERN when he developed hypertext. The environment was certainly conducive to his invention, but the standard model of particle physics had otherwise very little to do with it. You could equally well argue we should build leaning towers to advance research on general relativity.

I just finished reading John Moffat’s book “Cracking the Particle Code of the Universe”. I can’t post the review here until it has appeared in print due to copyright issues, sorry, but by and large it’s a good book. No, he doesn’t use it to advertise his own theories. He mentions them of course, but most of the book is more generally dedicated to the history, achievements, and shortcomings of the standard model.

His argument for the relevance of particle colliders amounts to the following paragraph:
“As Guido Altarelli mused after my talk at CERN in 2008, can governments be persuaded to spend ever greater sums of money, amounting to many billions of dollars, on ever larger and higher energy accelerators than the LHC if they suspect that the new machines will also come up with nothing new beyond the Higgs boson? Of course, to put this in perspective, one should realize that the $9 billion spent on an accelerator would not run a contemporary war such as the Afghanistan war for more than five weeks. Rather than killing people, building and operating these large machines has practical and beneficial spinoffs for technology and for training scientists. Thus, even if the accelerators continued to find no new particles, they might still produce significant benefits for society. The Worldwide Web, after all, was invented at CERN.”

~ John Moffat, Cracking the Particle Code of the Universe, p. 78
Well, running a war also has practical and beneficial spinoffs for technology and training scientists. Sorry John, but that was disappointing. To be fair, the book as a whole makes a pretty good case for why understanding the laws of nature is important business. But what war doesn’t do for your country and what investing in basic research does is build a base for sustainable progress. Without new discoveries and fundamentally new insights, applied science must eventually run dry.

There is no doubt in my mind that society invests its billions well if it invests in theoretical physics. Whether that investment should go into particle colliders though is a different question. I don’t have a good answer to that, and I don’t see that the question is seriously being discussed. Is it a worthy cause?

Last year, Fermilab’s Symmetry Magazine ran a video contest on the topic “Why particle physics matters”. Ironically, most of the answers have nothing to do with particle physics in particular: “could bring about a revolution,” “a wonderful model of successful international collaboration,” “explore the frontiers and boundaries of our universe,” “engages and sharpens the mind,” “captures the imagination of bright minds.” You could use literally the same arguments for cosmology, quantum information, or high precision measurements. Indeed, I personally find the high precision frontier presently more promising than ramping up energy and luminosity.

I am happy of course if China goes ahead and builds the next supercollider. After all it’s not my taxes, and still better than spending money on diamond necklaces that your 16-year-old can show off on facebook. I can’t quite shake the impression though that this plan is more the result of wanting to appear competitive than of a careful deliberation about return on investment.

Monday, August 25, 2014

Name that Þing

[Image credits Ria Novosti, source]
As a teenager I switched between the fantasy and science fiction aisles of the local library, but in the end it was science fiction that won me over.

The main difference between the genres seemed to be the extent to which authors bothered to come up with explanations. The science fiction authors, they bent and broke the laws of Nature but did so consistently, or at least tried to. Fantasy writers on the other hand were just too lazy to work out the rules to begin with.

You could convert Harry Potter into a science fiction novel easily enough. Leaving aside gimmicks such as moving photos that are really yesterday’s future, call the floo network a transmitter, the truth serum a nanobot liquid, and the invisibility cloak a shield. Add some electric buzz, quantum vocabulary, and alien species to it. Make that wooden wand a light saber and that broom an X-wing starfighter, and the rest is a fairly standard story of the Other World, the Secret Clan, and the Chosen One learning the rules of the game and the laws of the trade, of good and evil, of friendship and love.

The one thing that most of the fantasy literature has which science fiction doesn’t have, and which has always fascinated me, is the idea of an Old Language, the idea that there is a true name for every thing and every place, and if you know the true name you have power over it. Speaking in the Old Language always tells the truth. If you speak the Old Language, you make it real.

This idea of the Old Language almost certainly goes back to our ancestors’ fights with an often hostile and unpredictable nature threatening their survival. The names, the stories, the gods and godzillas, they were their way of understanding and managing the environment. They were also the precursor to what would become science. And don’t we in physics today still try to find the true name of some thing so we have power over it?

Aren’t we still looking for the right words and the right language? Aren’t we still looking for the names to speak truth to power, to command that which threatens and frightens us, to understand where we belong, where we came from, and where we are going? We call it dark energy and we call it dark matter, but these are not their true names. We call them waves and we call them particles, but these are not their true names. Some call the thing a string, some call it a graph, some call it a bit, but as Lee Smolin put it so nicely, none of these words quite has a “ring of truth” to it. These are not the real names.

Neil Gaiman’s recent fantasy novel “The Ocean at the End of the Lane” also draws on the idea of an Old Language, of a truth below the surface, a theory of everything which the average human cannot fathom because they do not speak the right words. In Michael Ende’s “Neverending Story,” that which does not have a true name dies and decays to nothing. (And of course Ende has a Chosen One saving the world from that no-thing.) It all starts and it all ends with our ability to name that which we are part of.

You don’t get a universe from nothing of course. You can get a universe from math, but the mathematical universe doesn’t come from nothing either, it comes from Max Tegmark, that is to say some human (for all I can tell) trying to find the right words to describe, well, everything - no point trying to be modest about it. Tegmark, incidentally, also seems to speak at least ten different languages or so, maybe that’s not a coincidence.

The evolution of language has long fascinated historians and neurologists alike. Language is more than assigning a sound to things and to things you do with things. Language is a way to organize thought patterns and to classify relations, if in a way that is frequently inconsistent and often confusing. But the oldest language of all is neither Sindarin nor Old Norse; it is, for all we can tell, the language of math in which the universe was written. You can call it temperature anisotropy or tropospheric ozone precursors, you can call it neurofibrillary tangle or reverse transcriptase, you can call them Bárðarbunga or Eyjafjallajökull – in the end their true names were written in math.

Tuesday, August 12, 2014

Do we write too many papers?

Every Tuesday, when the weekend submissions appear on the arXiv, I think we’re all writing too many papers. Not to mention that we work too often on weekends. Every Friday, when another week has passed in which nobody solved my problems for me, I think we’re not writing enough papers.

The Guardian recently published an essay by Timo Hannay, titled “Stop the deluge of science research”, though the URL suggests the original title was “Why we should publish less Scientific Research.” Hannay argues that the literature has become unmanageable and that we need better tools to structure and filter it so that researchers can find what they are looking for. That is, he doesn’t actually say we should publish less. Of course we all want better boats to stay afloat on the information ocean, but there are other aspects to the question of whether we publish too many papers that Hannay didn’t touch upon.

Here, I use “too much” to mean that the amount of papers hinders scientific progress and no longer benefits it. The actual number depends very much on the field and its scientific culture and doesn’t matter all that much. Below I’ve collected some arguments that speak for or against the “too much papers” hypothesis.

Yes, we publish too many papers!
  • Too much to read, even with the best filter. The world doesn’t need to know about all these incremental steps, most of which never lead anywhere anyway.
  • Wastes the time of scientists who could be doing research instead. Publishing several short papers instead of one long one adds the time necessary to write several introductions and conclusions, adapt the paper to different journal styles, and fight with various sets of referees, just to then submit the paper to another journal and start all over again.
  • Just not reading them isn’t an option because one needs to know what’s going on. That creates a lot of headache, especially for newcomers. Better only publish what’s really essential knowledge.
  • Wastes the time of editors and referees. Editors and referees typically don’t have access to reports on manuscripts that follow-up works are based on.
No, we don’t publish too many papers!
  • If you think it’s too much, then just don’t read it.
  • If you think it’s too much, you’re doing it wrong. It’s all a matter of tagging, keywords, and search tools.
  • It’s good to know what everybody is doing and to always be up to date.
  • Journals make money with publishing our papers, so don’t worry about wasting their time.
  • Who really wants to write a referee report for one of these 30 pages manuscripts anyway?
Possible reasons that push researchers to publish more than is good for progress:
  • Results pressure. Scientists need published papers to demonstrate the outcome of research they received grants for.
  • CV boosting. Lots of papers looks like lots of ideas, at least if one doesn’t look too closely. (Especially young postdocs often believe they don’t have enough papers, so let me add a word of caution. Having too many papers can also work against you because it creates the appearance that your work is superficial. Aim at quality, not quantity.)
  • Scooping angst. In fields which are overpopulated, like for example hep-th, researchers publish anything that might go through just to have a time-stamp that documents they were first.
  • Culture. Researchers adapt the publishing norms of their peers and want to live up to their expectations. (That however might also have the result that they publish less than is good for progress, depending on the prevailing culture of the field.)  
  • PhD production machinery. It’s becoming the norm at least in physics that PhD students already have several publications, typically with their PhD supervisor. Much of this is to make it easier for the students to find a good postdoc position, which again falls back positively on the supervisor. This all makes the hamster wheel turn faster and faster.
All together I don’t have a strong opinion on whether we’re publishing too much or not. What I do find worrisome though is that all these measures for scientific success reduce our tolerance for individuality. Some people write a lot, some less so. Some pay a lot of attention to detail, some rely more on intuition. Some like to discuss and get feedback early to sort out their thoughts, some like to keep their thoughts private until they’ve sorted them out themselves. I think everybody should do their research the way it suits them best, but unfortunately we’re all increasingly forced to publish at rates close to the field average. And who said that the average is the ideal?

Tuesday, July 29, 2014

Can you touch your nose?

Yeah, but can you? Believe it or not, it’s a question philosophers have plagued themselves with for thousands of years, and it keeps reappearing in my feeds!

Best source I could find for this image: IFLS.



My first reaction was of course: It’s nonsense – a superficial play on the words “you” and “touch”. “You touch” whatever triggers the nerves in your skin. There, look, I’ve solved a thousand-year-old problem in a matter of 3 seconds.

Then it occurred to me that with this notion of “touch” my shoes never touch the ground. Maybe I’m not a genius after all. Let me get back to that cartoon then. Certainly deep thoughts went into it that I must unravel.

The average size of an atom is an Angstrom, 10^-10 m. The typical interatomic distance in molecules is a nanometer, 10^-9 m, or a few nanometers if you wish. At room temperature and normal atmospheric pressure, electrostatic repulsion prevents you from pushing atoms any closer together. So the 10^-8 meter in the cartoon seems about correct.

But it’s not so simple...

To begin with, it isn’t just electrostatic repulsion that prevents atoms from getting close; more importantly, it is the Pauli exclusion principle, which forces the electrons and quarks that make up the atom to arrange in shells rather than to sit on top of each other.

If you could turn off the Pauli exclusion principle, all electrons from the higher shells would drop into the ground state, releasing energy. The same would happen with the quarks in the nucleus which arrange in similar levels. Since nuclear energy scales are higher than atomic scales by several orders of magnitude, the nuclear collapse causes the bulk of the emitted energy. How much is it?

The typical nuclear level splitting is some 100 keV, that is a few 10^-14 Joule. Most of the Earth is made up of silicon, iron, and oxygen, i.e., atomic numbers of the order of 15 on average. This gives about 10^-12 Joule per atom, that is 10^11 Joule per mol, or 1 kton TNT per kg.
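The arithmetic can be checked in a few lines. The inputs below are the rough order-of-magnitude values assumed above (100 keV per level, average atomic number 15, molar mass around 30 g/mol), not precise material data:

```python
# Back-of-the-envelope check of the energy released per kg if the
# Pauli exclusion principle were switched off. All inputs are the
# rough values assumed in the text.
eV = 1.602e-19                # Joule per electron volt
E_level = 100e3 * eV          # typical nuclear level splitting: 100 keV
Z_avg = 15                    # rough average atomic number of the Si/Fe/O mix
E_atom = Z_avg * E_level      # energy per atom, a few 10^-13 J
N_A = 6.022e23                # Avogadro's number
E_mol = E_atom * N_A          # roughly 10^11 J per mol
molar_mass = 0.030            # ~30 g/mol for the mix, in kg/mol
E_kg = E_mol / molar_mass     # Joule per kg
kton_TNT = 4.184e12           # one kiloton of TNT in Joule
print(E_kg / kton_TNT)        # on the order of 1 kton TNT per kg
```

With these inputs the estimate lands at roughly one kiloton of TNT per kilogram, as stated.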

This back-of-the-envelope estimate gives pretty much exactly the maximal yield of a nuclear weapon. The difference though is that turning off the Pauli exclusion principle would convert every kg of Earthly matter into a nuclear bomb. Since our home planet has a relatively small gravitational pull, I guess it would just blast apart. I saw everybody die, again, see that’s how it happens. But I digress; let me get back to the question of touch.

So it’s not just electrostatics but also the Pauli exclusion principle that prevents you from falling through the cracks. Not only do the electrons in your shoes not want to touch the ground, they don’t want to touch the other electrons in your shoes either. Electrons, or fermions generally, just don’t like each other.

The 10^-8 meter actually seems quite optimistic, because surfaces are not perfectly even; they have a roughness to them, which means the average distance between two solids is typically much larger than the interatomic spacing one has in crystals. Moreover, the human body is not a solid, and the skin is normally covered by a thin layer of fluids. So you never touch anything, if only because you’re separated from the world by a layer of grease.

To be fair, grease isn’t why the Greeks were scratching their heads back then, but a guy called Zeno. Zeno’s most famous paradox divides a distance into halves indefinitely, to conclude that because it consists of an infinite number of steps, the full distance can never be crossed. You cannot, thus, touch your nose, spoke Zeno, or ram an arrow into it respectively. The paradox was resolved once it was established that infinite series can converge to finite values; the nose was back in business, but Zeno would come back to haunt the thinkers of the day centuries later.
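Zeno’s halving steps are the textbook example of a convergent geometric series; a quick numerical sketch of the partial sums (normalizing the full distance to 1):

```python
# Partial sums of 1/2 + 1/4 + 1/8 + ... : after n steps the sum
# is 1 - (1/2)^n, which converges to the full distance, 1.
partial_sums = [sum(0.5**k for k in range(1, n + 1)) for n in (1, 2, 10, 50)]
print(partial_sums)  # each entry closer to 1 than the last
```

An infinite number of steps, yet a finite total distance; the arrow reaches the nose after all.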

The issue reappeared with the advance of the mathematical field of topology in the 19th century. Back then, math, physics, and philosophy had not yet split apart, and the bright minds of the times – Descartes, Euler, Bolzano and the like – wanted to know, using their new methods, what it means for any two objects to touch. And their objects were as abstract as it gets. Any object was supposed to occupy space and cover a topological set in that space. So far so good, but what kind of set?

In the space of the real numbers, sets can be open or closed or a combination thereof. Roughly speaking, if the boundary of the set is part of the set, the set is closed; if the boundary is missing, the set is open. Zeno constructed an infinite series of steps that converges to a finite value, and we meet such series again in topology: a set is closed if and only if the limiting value of any such series within it is part of the set. (It’s the same as the open and closed intervals you’ve been dealing with in school, just generalized to more dimensions.) The topologists then went on to reason that objects can occupy either open sets or closed sets, and at any point in space there can be only one object.

Sounds simple enough, but here’s the conundrum. If you have two open sets that do not overlap, they will always be separated by the boundary, which isn’t part of either of them. And if you have two closed sets that touch, the boundary is part of both, meaning they also overlap. In neither case can the objects touch without overlapping. Now what? This puzzle was so important to them that Bolzano went on to suggest that objects may occupy sets that are partially open and partially closed. While technically possible, it’s hard to see why objects would, in more than one spatial dimension, always arrange so that one object’s closed surface patches touch the other’s open patches.
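In one dimension the conundrum can be written out explicitly; the intervals below are a standard illustration, not from the original discussion:

```latex
% Two open intervals sharing the boundary point 1 never touch:
(0,1) \cap (1,2) = \emptyset, \qquad 1 \notin (0,1) \cup (1,2).
% Two closed intervals that touch necessarily overlap:
[0,1] \cap [1,2] = \{1\}.
```

Either the boundary point belongs to neither object and sits between them, or it belongs to both and they overlap.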

More time went by, and on the stage of science appeared the notion of fields that mediate interactions between things. Now objects could interact without touching, awesome. But if they don’t repel, what happens when they get closer? Do or don’t they touch eventually? Or does interacting via a field mean they touch already? Before anybody started worrying about this, science moved on, and we learned that the field is quantized and the interaction really just mediated by the particles that make up the field. So how do we even phrase the question now, whether two objects touch?

We can approach this by specifying that by an “object” we mean a bound state of many atoms. The short-distance interaction of these objects will (at room temperature, normal atmospheric pressure, non-relativistically, etc.) take place primarily by exchanging (virtual) photons. The photons do not in any sensible way belong to either of the objects, so it seems fair to say that the objects don’t touch. They don’t touch, in one sentence, because there is no four-fermion interaction in the standard model of particle physics.

Alas, tying touch to photon exchange in general doesn’t make much sense when we think about the way we normally use the word. It does not, for example, have any qualifier about distance. A more sensible definition would make use of the probability of an interaction: two objects touch (in some region) if their probability of interaction (in that region) is large, whether or not it was mediated by a messenger particle. This neatly solves the topologists’ problem, because in quantum mechanics two objects can indeed overlap.

What one means by a “large probability” of interaction is somewhat arbitrary of course, but quantum mechanics being as awkward as it is, there’s always the possibility that your finger tunnels through your brain when you try to hit your nose, so we need a quantifier, because nothing is ever absolutely certain. And then, after all, you can touch your nose! You already knew that, right?

But if you think this settles it, let me add...

Yes, no, maybe, wtf.
There is a non-vanishing probability that when you touch (attempt to touch?) something you actually exchange electrons with it. This opens a new can of worms because now we have to ask what is “you”? Are “you” the collection of fermions that you are made up of and do “you” change if I remove one electron and replace it with an identical electron? Or should we in that case better say that you just touched something else? Or are “you” instead the information contained in a certain arrangement of elementary particles, irrespective of the particles themselves? But in this case, “you” can never touch anything just because you are not material to begin with. I will leave that to you to ponder.

And so, after having spent an hour staring at that cartoon in my facebook feed, I came to the conclusion that the question isn’t whether we can touch something, but what we mean by “some thing”. I think I had been looking for some thing else though…

Sunday, July 06, 2014

You’re not a donut. And not a mug either.

A topologist, as the joke goes, is somebody who can’t tell a mug from a donut.

Topology is a field of mathematics concerned with the properties of spaces and their invariants. One of these invariants is the number of cuts you can make through an object without it falling apart, known as the “genus”. You can cut a donut so that it becomes an open ring yet is still one piece, and you can cut the handle of a mug and it won’t fall off. Thus, they’re topologically the same.

The genus essentially counts the number of holes, though that can be slightly misleading. A representative survey among our household members, for example, revealed that the majority of people count four holes in a T-shirt, while its genus is actually 3. (Make it a tank top, cut open the shoulders and down the front. If you cut any more, it will fall apart.)
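For a closed orientable surface, this hole-counting can be made precise via the Euler characteristic, χ = V − E + F = 2 − 2g, so the genus can be read off any triangulation. A minimal sketch (the vertex/edge/face counts below are the standard textbook triangulations):

```python
# Genus of a closed orientable surface from a triangulation:
# chi = V - E + F = 2 - 2g, hence g = 1 - chi/2.
def genus(V, E, F):
    chi = V - E + F          # Euler characteristic (always even here)
    return 1 - chi // 2

print(genus(4, 6, 4))    # tetrahedron, i.e. a sphere: genus 0
print(genus(7, 21, 14))  # Moebius's 7-vertex torus: genus 1
```

The T-shirt, being a surface with boundary, needs a slightly different bookkeeping, but the cutting argument above amounts to the same count.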

Every now and then I read that humans are topologically donuts, with anus, excuse me, genus one. Yes, that is obviously wrong, and I know you’ve all been waiting for me to count the holes in your body.

To begin with, the surface of the human body, like any other non-mathematical surface, is not impenetrable, and how many holes it has is a matter of resolution. For a neutrino, for example, you’re pretty much all holes.

Leaving aside subatomic physics and marching on to the molecular level, the human body possesses an intricate network of transport routes for essential nutrients, proteins, bacteria, and cells, and what went in at one location can leave pretty much anywhere else. You can for example absorb some things through your lungs and get rid of them in your sweat, and you can absorb some medications through your skin. Not to mention that the fluid you ingest passes through some cellular layers and eventually leaves through yet another hole.

But even above the molecular level, the human body has more than one hole. One of the most unfortunate evolutionary heritages we have is that our airways are conjoined with our foodways. As you might have figured out when you were 4 years old, you can drink through your nose, and since you have two nostrils, that brings you up to genus three.

Next, the human eyes sit pretty loosely in their sockets and the nasal cavities are connected to the eye sockets in various ways. I can blow out air through my eyes, so I count up to genus 5 then. Alas, people tend to find this a rather strange skill, so I’ll leave it to you whether you want to count your eyes to uppen your holyness. And while we are speaking of personal oddities, should you have any body piercings, these will pimp up your genus further. I have piercings in my ears, so that brings my counting to genus 7.

Finally, for the ladies, the fallopian tubes are not sealed off by the ovaries. The egg that is released during ovulation has to first make it to the tube. It is known to happen occasionally that an egg travels to the fallopian tube on the other side, meaning the tubes are connected through the abdominal cavity, forming a loop that adds one to the genus.

This brings my counting to 5 for the guys, 6 for the ladies, plus any piercings that you may have.

And if you have trouble imagining a genus 6 surface, below some visual aid.

Genus 0.

Genus 1.

Genus 2.

Genus 3.

Genus 4.

Genus 5.

Genus 6.

Homework.

Sunday, May 18, 2014

10 Things I wish I had known 20 years ago – Science Edition

The blogosphere brims with advice for your younger self. Leaving aside lottery numbers and such, the older selves know that haircut was a really bad idea, you’ll eternally regret cheating on the nice guy, and you will never be that young again. This made me wonder which scientific knowledge I wish I had already had as a teenager. Leaving aside the scientific equivalent of sending lottery numbers back in time (recommending, say, that I have a close look at those type Ia supernovae), here’s my top 10:

  1. The fundamental theorems of welfare economics and Arrow’s impossibility theorem.

    I was utterly uninterested in economics and sociology as a teenager. After reading some books on microeconomics, welfare economics, and social choice theory, the world made dramatically more sense to me. That’s how the hamster wheel works, and that’s the root of most of the quarrels in politics. Now my problem is that I don’t understand why most people don’t understand this...

  2. Exoplanets!

    Are much more common than anybody expected when I was a teenager. This has really changed the way I perceive our place in the universe, and I guess that this topic gets so much coverage in the media because this is the case for many people.

  3. Medicine is not a science.

    It was only after I read about the ‘recent’ field of ‘evidence-based medicine’ that I realized I had falsely assumed medical practice is rooted in scientific evidence. Truth is, for the most part it’s not. Medicine isn’t a science, it’s a handcraft, and this is only slowly changing. You are well advised to check the literature for yourself.

  4. Most drugs are not tested on women.

    Pharma companies often don’t test drugs on women because changing hormone levels make it more difficult to find statistically significant effects. The result is that little is known about how the female body reacts differently to drugs than the male body. In many cases the recommended doses of certain medicines tend to be way too high for me, and had I known this earlier I would have trusted my body, not the label.

  5. Capsaicin isn’t water soluble.

    The stuff that makes chili spicy doesn’t wash off with water; it takes alcohol or fat to get it off your tongue. Yes, this did make my life much better...

  6. Genetics.

    I wish I had known back then what I know today about genetic predispositions, e.g. that introversion, pain tolerance, response to training, and body odor all have genetic factors, and I wish I had had a chance to have my DNA sampled 20 years ago.

    The default assumption that I, and I think most people, bring is that other people’s experiences are similar to our own. It never occurred to me, for example, that the other kids weren’t overdramatizing, they were really hurting more. Just by looking at my daughters I would bet that Lara got my pain tolerance while Gloria didn’t, and I can tell that Lara doesn’t mean to hurt Gloria, she just doesn’t believe it hurts as much as Gloria screams. And after reading Cain’s book that covers the correlation between introversion and a neurological trait called ‘high sensitivity’ I could finally stop wondering what is wrong with me.

  7. You probably have no free will, but it’s no reason to worry.

    Took me two decades to wrap my mind around this. Tough one.

  8. Most people talk to themselves.

    Psychologists call it the ‘internal monologue’. How was I supposed to know that pretty much everybody does that?

  9. Adaptive Systems.

    Adaptive systems are basically a generalization of the process of mutation and natural selection. This was really helpful for understanding much of the change in institutions and organizations, and all the talk about incentives. It also reveals that many of our problems stem from our inability to adapt. This is basically what gave rise to my FQXi essay this year.
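To make this concrete: the mutation-plus-selection loop that adaptive systems generalize fits into a few lines. This is a minimal toy sketch; the target value, population size, and mutation strength are all invented for illustration:

```python
import random

random.seed(0)
TARGET = 42.0  # an arbitrary "environment" the population adapts to

def fitness(x):
    return -abs(x - TARGET)  # closer to the target means fitter

# Start with a random population, then iterate selection + mutation.
population = [random.uniform(0, 100) for _ in range(20)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                               # selection
    offspring = [x + random.gauss(0, 1) for x in survivors]   # mutation
    population = survivors + offspring

best = max(population, key=fitness)
print(round(best, 1))  # the population has adapted close to the target
```

The same select-and-perturb dynamic, with fitness replaced by profit, votes, or survival, is what drives the institutional change the paragraph describes.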

  10. That guy really smells good.

    It was long believed that humans do not detect pheromones because the respective nerve is missing. Alas, MRI imaging settled the dispute in the 90s. The nerve, now called 'Cranial Nerve Zero', does exist. But note that while both the olfactory nerve and the zero nerve end in the nostrils, it is not the olfactory nerve that detects pheromones; the two nerves wire to different areas of the brain. Exactly what influence pheromones have on humans is still an active subject of study.

Monday, May 12, 2014

A Thousand Words

Have you noticed that paragraphs have gotten shorter?

We are reading more and more text displayed on screens, in landscape rather than portrait, or on tiny handheld devices. This hasn’t only affected the layout and typesetting, it has altered the way we write.

Short paragraphs and lists are now often used to break up blocks of text, and so are images. There is hardly any writing on the internet not decorated with an image. Besides reasons of layout there is the image grab of sharing apps that insists you need to have a picture. If none is provided this often comes out to be some advertisement or a commenter’s avatar. Adding a default image avoids this.

A picture, as they say, is worth a thousand words, but these thousand words are specifics that are often uncalled for: in the best case distracting, in the worst case misleading. Think of “scientist” or “academia”. What image can you pick that will not propagate a stereotype or single out a discipline? You may want to use a female scientist just to avoid criticism, but then isn’t your image misleading? And to make sure everybody understands she’s a s-c-i-e-n-t-i-s-t, even though she’s got lipstick on, you need a visual identification marker, a lab coat maybe, or a microscope, or at least a blackboard with equations. And now you’ve got a Latina woman in a lab coat looking into a microscope when all you meant was “scientist”.

FQXi launched a video contest, “Show Me the Physics!”, and in the accompanying visualization you’ll find me representing “scientist”, thought bubble included (0:22). I’m very flattered that I’ve been promoted to a stereotype killer. Do you feel aptly represented? (Really, do not take pictures of yourself within 5 minutes of waking up. You never know, they might end up being your most popular ones.)

But if a picture adds a thousand words worth of detail, then a word calls upon a thousand pictures. The word is a generalization and abstraction that encompasses whole classes.

When my two-year-old daughter had spaghetti for the first time, she excitedly proclaimed “Hair!” Humans are by nature good at classification, generalization, and abstraction, and this expresses itself in our language. That’s why we understand metaphors and analogies, and that’s where much of our humor roots.


This generalization is why we are so good at recognizing patterns, devising theories and, yes, at building stereotypes. Show me an image that captures all the richness, all the associations, all the analogies and connotations that come with the words “life” or “hope” or “yesterday”.

What are we doing then by drowning readers in unwanted and often unnecessary information? Sometimes I wonder whether the well-intended image works against the writer’s intent of making the text more accessible.

I love music, almost all kinds, but if at all possible I avoid music videos. I actually don’t want to know what the band looks like, and I don’t want to know their interpretation of the lyrics. I want to make up my own story. Images are powerful. They stick. This video ruined David Guetta’s Titanium for me.

This made me wonder whether this fear of the abstract, the word all by itself, is the same fear that leads science writers to shy away from equations. If a word calls upon a thousand images, an equation calls upon a thousand words. Think of exponential growth, or the wave equation, or the second law of thermodynamics. Did you just think of stirring milk into your coffee? Verbal explanations add details that are as uncalled for and can be as misleading as adding an image to illustrate a word. An analogy, a metaphor, or a witty example does not convey what makes these equations so relevant: their broad applicability and their ability to describe very diverse phenomena.
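That broad applicability is easy to demonstrate for the first of these. The same exponential law, the solution of dN/dt = rN, describes growth and decay in entirely different systems; only the names of the parameters change. The numbers below are invented for illustration:

```python
import math

def exponential(n0, rate, t):
    """N(t) = N0 * exp(rate * t), the solution of dN/dt = rate * N."""
    return n0 * math.exp(rate * t)

bacteria = exponential(100, 0.7, 10)                     # population growth
savings = exponential(1000, 0.03, 30)                    # compound interest
carbon14 = exponential(1.0, -math.log(2) / 5730, 10000)  # radioactive decay

# After more than one half-life (5730 years for carbon-14),
# less than half of the original amount remains.
assert carbon14 < 0.5
```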

Remember those word problems from 8th grade? The verbal description is supposed to make the math more accessible, but finding the equation is the real challenge. Science isn’t so much about solving equations. It’s about finding the equations to begin with. It’s about finding the underlying laws amidst all the clutter, the laws that are worth a thousand words.

Sometimes I wonder if I’d not rather be an abstract “scientist” for you, instead of a married middle-European mother of two, and I wonder what the thousand words are that my profile image speaks to you. And I fear that, by adding all these visual details, we are limiting the reader’s ability to extract and appreciate abstract ideas; that, by adding all these verbal details to science writing, we are ultimately limiting the reader’s ability to appreciate science in all its abstract glory. Hear my words...

Thursday, April 17, 2014

The Problem of Now

[Image Source]

Einstein’s greatest blunder wasn’t the cosmological constant, and neither was it his conviction that god doesn’t throw dice. No, his greatest blunder was to speak to a philosopher named Carnap about the Now, with a capital.

“The problem of Now”, Carnap wrote in 1963, “worried Einstein seriously. He explained that the experience of the Now means something special for men, something different from the past and the future, but that this important difference does not and cannot occur within physics.”

I call it Einstein’s greatest blunder because, unlike the cosmological constant and indeterminism, philosophers, and some physicists too, are still confused about this alleged “Problem of Now”.

The problem is often presented like this. Most of us experience a present moment, which is a special moment in time, unlike the past and unlike the future. If you write down the equations governing the motion of some particle through space, then this particle is described, mathematically, by a function. In the simplest case this is a curve in space-time, meaning the function is a map from the real numbers to a four-dimensional manifold. The particle changes its location with time. But regardless of whether you use an external definition of time (some coordinate system) or an internal definition (such as the length of the curve), every single instant on that curve is just some point in space-time. Which one, then, is “now”?

You could argue rightfully that as long as there’s just one particle moving on a straight line, nothing is happening, and so it’s not very surprising that no notion of change appears in the mathematical description. If the particle scattered off some other particle, or took a sudden turn, these instances could be identified as events in space-time. Alas, that still doesn’t tell you whether they happen to the particle “now” or at some other time.

Now what?

The cause for this problem is often assigned to the timelessness of mathematics itself. Mathematics deals at its core with truth values, and the very point of using math to describe nature is that these truths do not change. Lee Smolin has written a whole book about the problem with timeless math; you can read my review here.

It may or may not be that mathematics is able to describe all of our reality, but to solve the problem of now, excuse the heresy, you do not need to abandon a mathematical description of physical law. All you have to do is realize that the human experience of now is subjective. It can perfectly well be described by math, it’s just that humans are not elementary particles.

The decisive ability that allows us to experience the present moment as being unlike other moments is that we have a memory. We have a memory of events in the past, an imperfect one, and we do not have memory of events in the future. Memory is not in and by itself tied to consciousness, it is tied to the increase of entropy, or the arrow of time if you wish. Many materials show memory; every system with a path dependence, e.g. hysteresis, does. If you get a perm, it is the molecule chains in your hair that remember the bonds, not your brain.

Memory has nothing to do with consciousness in particular, which is good, because it makes it much easier to find the flaw in the argument leading to the problem of now.

If we want to describe systems with memory we need at the very least two time parameters: t to parameterize the location of the particle, and τ to parameterize the strength of memory of other times depending on its present location. This means there is a function f(t,τ) that encodes how strong the memory of time τ is at moment t. You need, in other words, at the very least a two-point function; a plain particle trajectory will not do.

That we experience a “now” means that the strength of memory peaks when both time parameters are identical, i.e. when t − τ = 0. That we do not have any memory of the future means that the function vanishes for τ > t. For the past it must decay somehow, but the details don’t matter. This construction is already sufficient to explain why we have the subjective experience of the present moment being special. And it wasn’t that difficult, was it?
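For concreteness, here is one possible choice of such a function. The exponential decay into the past is my own illustrative pick; the argument only requires the peak at τ = t and the vanishing for τ > t:

```python
import math

def memory_strength(t, tau, decay=1.0):
    """Toy two-point function f(t, tau): the strength, at moment t,
    of the memory of moment tau."""
    if tau > t:
        return 0.0  # no memory of the future
    # Peaks at tau = t, decays into the past.
    return math.exp(-decay * (t - tau))

t = 3.0
assert memory_strength(t, t) == 1.0        # the present moment is "special"
assert memory_strength(t, t - 2.0) < 1.0   # the past, imperfectly remembered
assert memory_strength(t, t + 1.0) == 0.0  # the future, blank
```

Note that this holds at every moment t: each moment perceives itself as special, without any moment being objectively singled out.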

The origin of the problem is not in the mathematics, but in the failure to distinguish subjective experience of physical existence from objective truth. Einstein spoke about “the experience of the Now [that] means something special for men”. Yes, it means something special for men. This does not mean however, and does not necessitate, that there is a present moment which is objectively special in the mathematical description. In the above construction all moments are special in the same way, but in every moment that very moment is perceived as special. This is perfectly compatible with both our experience and the block universe of general relativity. So Einstein should not have worried.

I have a more detailed explanation of this argument – including a cartoon! – in a post from 2008. I was reminded of this now because Mermin had a comment in the recent issue of Nature magazine about the problem of now.

In his piece, Mermin elaborates on QBism, a subjective interpretation of quantum mechanics. I was destined to dislike this just because it’s a waste of time and paper to write about non-existent problems. Amazingly however, Mermin uses the subjectiveness of QBism to arrive at the right conclusion, namely that the problem of the now does not exist because our experiences are by their very nature subjective. However, he fails to point out that you don’t need to buy into fancy interpretations of quantum mechanics for this. All you have to do is watch your hair recall sulphur bonds.

The summary, please forgive me, is that Einstein was wrong and Mermin is right, but for the wrong reasons. It is possible to describe the human experience of the present moment with the “timeless” mathematics that we presently use for physical laws, it isn’t even difficult, and you don’t have to give up the standard interpretation of quantum mechanics for this. There is no problem of Now, and there is no problem with Tegmark’s mathematical universe either.

And Lee Smolin, well, he is neither wrong nor right, he just has a shaky motivation for his cosmological philosophy. It is correct, as he argues, that mathematics doesn’t objectively describe a present moment. However, it’s a non sequitur that the current approach to physics has reached its limits, because this timeless math doesn’t constitute a conflict with our experience or observation.

Most people get a general feeling of uneasiness when they first realize that the block universe implies all of the past and all of the future are just as real as the present moment, that even though we experience the present moment as special, it is only subjectively so. But if you can combat your uneasiness for long enough, you might come to see the beauty in eternal mathematical truths that transcend the passage of time. We always have been, and always will be, children of the universe.

Monday, April 07, 2014

Will the social sciences ever become hard sciences?

The term “hard science” as opposed to “soft science” has no clear definition. But roughly speaking, the less the predictive power and the smaller the statistical significance, the softer the science. Physics, without doubt, is the hard core of the sciences, followed by the other natural sciences and the life sciences. The higher the complexity of the systems a research area is dealing with, the softer it tends to be. The social sciences are at the soft end of the spectrum.

To me the very purpose of research is making science increasingly harder. If you don’t want to improve on predictive power, what’s the point of science to begin with? The social sciences are soft mainly because data that quantifies the behavior of social, political, and economic systems is hard to come by: it comes in huge amounts, is difficult to obtain, and is even more difficult to handle. Historically, these research areas therefore worked with narratives relating plausible causal relations. Needless to say, as computing power skyrockets, increasingly larger data sets can be handled. So the social sciences are finally on track to become useful. Or so you’d think if you’re a physicist.

But interestingly, there is a large opposition to this trend of hardening the social sciences, and this opposition is particularly pronounced towards physicists who take their knowledge to work on data about social systems. You can see this opposition in the comment section to every popular science article on the topic. “Social engineering!” they will yell accusingly.

It isn’t so surprising that social scientists themselves are unhappy because the boat of inadequate skills is sinking in the data sea and physics envy won’t keep it afloat. More interesting than the paddling social scientists is the public opposition to the idea that the behavior of social systems can be modeled, understood, and predicted. This opposition is an echo of the desperate belief in free will that ignores all evidence to the contrary. The desperation in both cases is based on unfounded fears, but unfortunately it results in a forward defense.

And so the world is full of people who argue that they must have free will because they believe they have free will, the ultimate confirmation bias. And when it comes to social systems they’ll snort at the physicists: “People are not elementary particles.” That worries me, worries me more than their clinging to the belief in free will, because the only way we can solve the problems that mankind faces today – the global problems in highly connected and multi-layered political, social, economic and ecological networks – is to better understand and learn how to improve the systems that govern our lives.

That people are not elementary particles is not a particularly deep insight, but it collects several valid points of criticism:

  1. People are too difficult. You can’t predict them.

    Humans are made of a great many elementary particles, and even though you don’t have to know the exact motion of every single one of these particles, a person still has an awful lot of degrees of freedom and needs to be described by a lot of parameters. That’s a complicated way of saying people can do more things than electrons, and it isn’t always clear exactly why they do what they do.

    That is correct of course, but this objection fails to take into account that not all possible courses of action are always relevant. If it were true that people have too many possible ways to act to gather any useful knowledge about their behavior, our world would be entirely dysfunctional. Our societies work only because people are to a large degree predictable.

    If you go shopping you expect certain behaviors of other people. You expect them to be dressed, you expect them to walk forwards, you expect them to read labels and put things into a cart. There, I’ve made a prediction about human behavior! Yawn, you say, I could have told you that. Sure you could, because making predictions about other people’s behavior is pretty much what we do all day. Modeling social systems is just a scientific version of this.

    This objection that people are just too complicated is also weak because, as a matter of fact, humans can and have been modeled with quite simple systems. This is particularly effective in situations when intuitive reaction trumps conscious deliberation. Existing examples are traffic flows or the density of crowds when they have to pass through narrow passages.

    So, yes, people are difficult and they can do strange things, more things than any model can presently capture. But modeling a system is always an oversimplification. The only way to find out whether that simplification works is to actually test it with data.

  2. People have free will. You cannot predict what they will do.

    To begin with, it is highly questionable that people have free will. But leaving this aside for a moment, this objection confuses the predictability of individual behavior with the statistical trend of large numbers of people. Maybe you don’t feel like going to work tomorrow, but most people will go. Maybe you like to take walks in the pouring rain, but most people don’t. The existence of free will is in no conflict with discovering correlations between certain types of behavior or preferences in groups. It’s the same difference that prevents you from telling when your child will speak their first word or take their first step, even though you can be almost certain that by the age of three they’ll have mastered both.
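This difference between individual unpredictability and aggregate predictability is just the law of large numbers at work, and a few lines of simulation make it vivid. The 90 percent attendance probability and the city size are invented for illustration:

```python
import random
import statistics

random.seed(1)

def goes_to_work():
    # Each individual is a coin toss: any single person is unpredictable.
    return random.random() < 0.9

# But the attendance rate of a city of 100,000 people barely
# fluctuates from one day to the next.
daily_rates = [
    sum(goes_to_work() for _ in range(100_000)) / 100_000
    for _ in range(10)
]
print(min(daily_rates), max(daily_rates))  # both very close to 0.9
```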

  3. People can understand the models and this knowledge makes predictions useless.

    This objection always stuns me. If that was true, why then isn’t obesity cured by telling people it will remain a problem? Why are the highways still clogged at 5pm if I predict they will be clogged? Why will people drink more beer if it’s free even though they know it’s free to make them drink more? Because the fact that a prediction exists in most cases doesn’t constitute any good reason to change behavior. I can predict that you will almost certainly still be alive when you finish reading this blogpost because I know this prediction is exceedingly unlikely to make you want to prove it wrong.

    Yes, there are cases when people’s knowledge of a prediction changes their behavior – self-fulfilling prophecies are the best-known examples of this. But this is the exception rather than the rule. In an earlier blogpost, I referred to this as societal fixed points. These are configurations in which the backreaction of the model into the system does not change the prediction. The simplest example is a model whose predictions few people know or care about.

  4. Effects don’t scale and don’t transfer.

    This objection is the most subtle one. It posits that the social sciences aren’t really sciences until you can do and reproduce the outcome of “experiments”, which may be designed or naturally occurring. The typical social experiment that lends itself to analysis will be in relatively small and well-controlled communities (say, testing the implementation of a new policy). But then you have to extrapolate from this how the results will be in larger and potentially very different communities. Increasing the size of the system might bring in entirely new effects that you didn’t even know of (doesn’t scale), and there are a lot of cultural variables that your experimental outcome might have depended on that you didn’t know of and thus cannot adjust for (doesn’t transfer). As a consequence, repeating the experiment elsewhere will not reproduce the outcome.

    Indeed, this is likely to happen and I think it is the major challenge in this type of research. For complex relations it will take a long time to identify the relevant environmental parameters and to learn how to account for their variation. The more parameters there are and the more relevant they are, the less the predictive value of a model will be. If there are too many parameters that have to be accounted for it basically means doing experiments is the only thing we can ever do. It seems plausible to me, even likely, that there are types of social behavior that fall into this category, and that will leave us with questions that we just cannot answer.

    However, whether or not a certain trend can be modeled we will only find out by trying. We know that there are cases where it can be done. I find Geoffrey West’s city theory a beautiful example in which quite simple laws can be found amidst all these cultural and contextual differences.

In summary.

The social sciences will never be as “hard” as the natural sciences because there is much more variation among people than among particles and among cities than among molecules. But the social sciences have become harder already and there is no reason why this trend shouldn’t continue. I certainly hope it will continue because we need this knowledge to collectively solve the problems we have collectively created.