Showing posts with label Rant. Show all posts

Sunday, December 27, 2015

Dear Dr B: Is string theory science?

This question was asked by Scientific American, hovering over an article by Davide Castelvecchi.

They should have asked Ethan Siegel. Because a few days ago he strayed from the path of awesome news about the universe to inform his readership that “String Theory is not Science.” Unlike Davide however, Ethan has not yet learned the fine art of not expressing opinions that marks the true science writer. And so Ethan dismayed Peter Woit, Lubos Motl, and me in one sweep. That’s a noteworthy achievement, Ethan!

Upon my inquiry (essentially a polite version of “wtf?”) Ethan clarified that he meant string theory has no scientific evidence speaking for it and changed the title to “Why String Theory Is Not A Scientific Theory.” (See URL for original title.)

Now, Ethan is wrong in believing that string theory doesn’t have evidence speaking for it, and I’ll come to this in a minute. But the main reason for his misleading title, even after the correction, is a self-induced problem of US science communicators. In reaction to an often raised Creationist’s claim that Darwinian natural selection is “just a theory,” they have bent over backwards trying to convince the public that scientists use the word “theory” to mean an explanation that has been confirmed by evidence to high accuracy. Unfortunately, that’s not how scientists actually use the word; they never have, and probably never will.

Scientists don’t name their research programs following certain rules. Instead, which expression sticks is mostly coincidence. Brans-Dicke theory, Scalar-Tensor theory, terror management theory, and recapitulation theory are but a few examples of “theories” that have little or no evidence speaking in their favor. Maybe that shouldn’t be so. Maybe “theory” should be a title reserved only for explanations widely accepted in the scientific community. But looking up definitions before assigning names isn’t how language works. Peanuts also aren’t nuts (they are legumes), and neither are cashews (they are seeds). But, really, who gives a damn?

Speaking of nuts, the sensible reaction to the “just a theory” claim is not to conjure up rules according to which scientists allegedly use one word or the other, but to point out that any consistent explanation is better than a collection of 2000-year-old fairy tales that are neither internally consistent nor consistent with observation, and thus an entirely useless waste of time.

And science really is all about finding useful explanations for observations, where “useful” means that they increase our understanding of the world around us and/or allow us to shape nature to our benefits. To find these useful explanations, scientists employ the often-quoted method of proposing hypotheses and subsequently testing them. The role of theory development in this is to identify the hypotheses which are most promising and thus deserve being put to test.

This pre-selection of hypotheses is a step often left out in the description of the scientific method, but it is highly relevant, and its relevance has only increased in the last decades. We cannot possibly test all randomly produced hypotheses – we neither have the time nor the resources. All fields of science therefore have tight quality controls for which hypotheses are worth paying attention to. The more costly experimental test of new hypotheses becomes, the more relevant is this hypotheses pre-selection. And it is in this step where non-empirical theory assessment enters.

Non-empirical theory assessment was the topic of the workshop that Davide Castelvecchi’s SciAm article reported on. (For more information about the workshop, see also Natalie Wolchover’s summary in Quanta, and my summary on Starts with a Bang.) Non-empirical theory assessment is the use of criteria that scientists draw upon to judge the promise of a theory before it can be put to experimental test.

This isn’t new. Theoretical physicists have always used non-empirical assessment. What is new is that in foundational physics it has remained the only assessment for decades, which hugely inflates the potential impact of even the smallest mistakes. As long as we have frequent empirical assessment, faulty non-empirical assessment cannot lead theorists far astray. But take away the empirical test, and non-empirical assessment requires utmost objectivity in judgement, or we will end up in a completely wrong place.

Richard Dawid, one of the organizers of the Munich workshop, has, in a recent book, summarized some non-empirical criteria that practitioners list in favor of string theory. It is an interesting book, but of little practical use because it doesn’t also assess other theories (so the scientist complains about the philosopher).

String theory arguably has empirical evidence speaking for it because it is compatible with the theories that we know, the standard model and general relativity. The problem though is that, as far as the evidence is concerned, string theory so far isn’t any better than the existing theories. There isn’t a single piece of data that string theory explains which the standard model or general relativity doesn’t explain.

The reasons many theoretical physicists prefer string theory over the existing theories are purely non-empirical. They consider it a better theory because it unifies all known interactions in a common framework and is believed to solve consistency problems in the existing theories, like the black hole information loss problem and the formation of singularities in general relativity. Whether it is actually correct as a unified theory of all interactions is still unknown. And short of a uniqueness proof, no non-empirical argument will change anything about this.

What is known however is that string theory is intimately related to quantum field theories and gravity, both of which are well-confirmed by evidence. This is why many physicists are convinced that string theory too has some use in the description of nature, even if this use eventually may not be to describe the quantum structure of space and time. And so, in the last decade, string theory has come to be regarded less as a “final theory” and more as a mathematical framework to address questions that are difficult or impossible to answer with quantum field theory or general relativity. It has yet to prove its use on these accounts.

Speculation in theory development is a necessary part of the scientific method. If a theory isn’t developed to explain already existing data, there is always a lag between the hypotheses and their tests. String theory is just another such speculation, and it is thereby a normal part of science. I have never met a physicist who claimed that string theory isn’t science. This is a statement I have only come across from people who are not familiar with the field – which is why Ethan’s recent blogpost puzzled me greatly.

No, the question that separates the community is not whether string theory is science. The controversial question is how long is too long to wait for data supporting a theory? Are 30 years too long? Does it make any sense to demand payoff after a certain time?

It doesn’t make any sense to me to force theorists to abandon a research project because experimental tests are slow to come by. It seems natural that in the process of knowledge discovery it becomes increasingly harder to find evidence for new theories. What one should do in this case though is not admit defeat on the experimental front and focus solely on the theory, but instead increase efforts to find new evidence that could guide the development of the theory. That, and the non-empirical criteria should be regularly scrutinized to prevent scientists from discarding hypotheses for the wrong reasons.

I am not sure who is responsible for this needlessly provocative title of the SciAm piece, just that it’s most likely not the author, because the same article previously appeared in Nature News with the somewhat more reasonable title “Feuding physicists turn to philosophy for help.” There was, however, not much feud at the workshop, because it was mainly populated by string theory proponents and multiverse opponents, who nodded to each other’s talks. The main feud, as always, will be carried out in the blogosphere...

Tl;dr: Yes, string theory is science. No, this doesn’t mean we know it’s a correct description of nature.

Wednesday, December 16, 2015

No, you don’t need general relativity to ride a hoverboard.

Image credit: Technologistlaboratory.
This morning, someone sent me a link to a piece that appeared on WIRED.

The hoverboards in question here are the currently fashionable two-wheeled motorized boards that are driven by shifting your weight. I haven’t tried one, but it sure looks like fun.

I would have ignored this article as your average internet nonsense, but it turns out the WIRED piece is written by someone by the name of Rhett Allain who, according to the website, “is an Associate Professor of Physics at Southeastern Louisiana University.” Which makes me fear that some readers might actually believe what he wrote. Because he is something with “professor” in his title, surely he must know the physics.

Now, the claim of the article is correct in the sense that if you took the laws of physics and removed general relativity then there would be no galaxy formation, no planet Earth, no people, and certainly no hoverboards. I don’t think though that Allain had such a philosophical argument in mind. Besides, on this ground you could equally well argue that you can’t throw a pebble without general relativity because there wouldn’t be any pebbles.

What Allain argues instead is that you somehow need the effects of gravity to be the same as those of acceleration, and that since this sounds a little like general relativity, you therefore need general relativity.

You should find this claim immediately suspicious because if you know one thing about general relativity it’s that it’s hard to test. If you couldn’t “ride a hoverboard without Einstein’s theory of General Relativity,” then why bother with light deflection and gravitational lensing to prove that the theory is correct? Must be a giant conspiracy of scientists wasting taxpayers’ money, I presume.

Image Credit: Jared Mecham
Another reason to be suspicious about the correctness of this argument is the author’s explanation that special relativity is special because “Well, before Einstein, everyone thought reference frames were relative.” I am hoping this was just a typographical error, but just to avoid any confusion: before Einstein time was absolute. It’s called special relativity because according to Einstein, time too is relative.

But to come back to the issue about gravity. What you need to drive a hoverboard is to balance the inertial force caused by the board’s acceleration with another force, for which you have pretty much only gravity available. If the board accelerates and pushes forward your feet (friction required), you better bend forward to shift your center of mass because otherwise you’ll fall flat on your back. Bend forward too much and you fall on your nose because gravity. Don’t bend enough, you’ll fall backwards because inertia. To keep standing, you need to balance these forces.

This is basic mechanics and has nothing to do with General Relativity. That one of the forces is gravity is irrelevant to the requirement that you have to balance them to not fall. And even if you take into account that it’s gravity, Newtonian gravity is entirely sufficient. And it doesn’t have anything to do with hoverboards either. You can also see people standing on a train bend forwards when the train accelerates because otherwise they’ll fall like dominoes. You don’t need to bend when sitting because the seat back balances the force for you.

What’s different about general relativity is that it explains that gravity is not a force but a property of space-time. That is, it deviates from Newtonian gravity. These deviations are ridiculously small corrections though and you don’t need to take them into account for your average Joe on the hoverboard, unless possibly Joe is a neutron star.

The key ingredient to general relativity is the equivalence principle, a simplified version of which states that the gravitational mass is equal to the inertial mass. This is my best guess of what Allain was alluding to. But you don’t need the equivalence principle to balance forces. The equivalence principle just tells you exactly how the forces are balanced. In this case it would tell you the angle you have to aim at to not fall.
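A toy calculation makes the point that Newtonian mechanics suffices. Treat the rider as a rigid rod pivoting at the feet: the torques cancel when the body leans from the vertical by an angle whose tangent is a/g. This is my own sketch with made-up numbers, not anything from the WIRED piece:

```python
import math

def lean_angle_deg(a, g=9.81):
    """Lean angle from the vertical (degrees) at which the combined pull
    of gravity and the inertial force points along the rider's body, so
    the torques about the feet cancel: tan(theta) = a / g.
    Toy model: rigid rider, point mass, plain Newtonian mechanics."""
    return math.degrees(math.atan2(a, g))

# A gentle acceleration of 1 m/s^2 needs only a slight forward lean:
angle = lean_angle_deg(1.0)  # about 5.8 degrees
```

No equivalence principle appears anywhere in this balance condition; it would only become relevant if gravitational and inertial mass could differ, in which case the ratio a/g would pick up a mass-dependent factor.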

In summary: The correct statement would have been “You can’t ride a hoverboard without balancing forces.” If you lean too much forward and write about General Relativity without knowing how it works, you’ll fall flat on your nose.

Saturday, December 05, 2015

What Fermilab’s Holometer Experiment teaches us about Quantum Gravity.

The Fermilab Holometer searched for correlations between two interferometers.
Tl;dr: Nothing. It teaches us nothing. It just wasted time and money.

The Holometer experiment at Fermilab just published the results of their search for holographic space-time foam. They didn’t find any evidence for noise that could be indicative of quantum gravity.

The idea of the experiment was to find correlations in quantum gravitational fluctuations of space-time by using two very sensitive interferometers and comparing their measurements. Quantum gravitational fluctuations are exceedingly tiny, and in all existing models they are far too small to be picked up by interferometers. But the head of the experiment, Craig Hogan, argued that, if the holographic principle is valid, then the fluctuations should be large enough to be detectable by the experiment.

The holographic principle is the idea that everything that happens in a volume can be encoded on the volume’s surface. Many physicists believe that the principle is realized in nature. If that was so, it would indeed imply that fluctuations have correlations. But these correlations are not of the type that the experiment could test for. They are far too subtle to be measurable in this way.

In physics, all theories have to be expressed in form of a consistent mathematical description. Mathematical consistency is an extremely strong constraint when combined with the requirement that the theory also has to agree with all observations we already have. There is very little that can be changed in the existing theories that a) leads to new effects and b) does not spoil the compatibility with existing data. It’s not an easy job.

Hogan didn’t have a theory. It’s not just me being grumpy; he said so himself: “It's a slight cheat because I don't have a theory,” as quoted by Michael Moyer in a 2012 Scientific American article.

From what I have extracted from Hogan’s papers on the arxiv, he tried twice to construct a theory that would capture his idea of holographic noise. The first violated Lorentz-invariance and was thus already ruled out by other data. The second violated basic properties of quantum mechanics and was thus already ruled out too. In the end he seems to have given up finding a theory. Indeed, it’s not an easy job.

Searching for a prediction based on a hunch rather than a theory makes it exceedingly unlikely that something will be found. That is because there is no proof that the effect would even be consistent with already existing data, which is difficult to achieve. But Hogan isn’t a no-one; he is head of Fermilab’s Center for Particle Astrophysics. I assume he got funding for his experiment by short-circuiting peer review. A proposal for such an experiment would never have passed peer review – it simply doesn’t live up to today’s quality standards in physics.

I wasn’t the only one perplexed about this experiment becoming reality. Hogan relates the following anecdote: “Lenny [Susskind] has an idea of how the holographic principle works, and this isn’t it. He’s pretty sure that we’re not going to see anything. We were at a conference last year, and he said that he would slit his throat if we saw this effect.” This is a quote from another Scientific American article. Oh, yes, Hogan definitely got plenty of press coverage for his idea.

Ok, so maybe I am grumpy. That’s because there are hundreds of people working on developing testable models for quantum gravitational effects, each of whom could tell you about more promising experiments than this. It’s a research area by the name of quantum gravity phenomenology. The whole point of quantum gravity phenomenology is to make sure that new experiments test promising ranges of parameter space, rather than just wasting money.

I might have kept my grumpiness to myself, but then the Fermilab Press release informed me that “Hogan is already putting forth a new model of holographic structure that would require similar instruments of the same sensitivity, but different configurations sensitive to the rotation of space. The Holometer, he said, will serve as a template for an entirely new field of experimental science.”

An entirely new field of experimental science, based on models that either don’t exist or are ruled out already and that, when put to test, morph into new ideas that require higher sensitivity. That scared me so much I thought somebody had to spell it out: I sincerely hope that Fermilab won’t pump any more money into this unless the idea goes through rigorous peer review. It isn’t just annoying. It’s a slap in the face of many hard-working physicists whose proposals for experiments are of much higher quality but who don’t get funding.

At the very least, if you have a model for what you test, you can rule out the model. With the Holometer you can’t even rule out anything because there is no theory and no model that would be tested with it. So what we have learned is nothing. I can only hope that at least this episode draws some attention to the necessity of having a mathematically consistent model. It’s not an easy job. But it has to be done.

The only good news here is that Lenny Susskind isn’t going to slit his throat.

Tuesday, November 17, 2015

The scientific method is not a myth

Heliocentrism, natural selection, plate tectonics – much of what is now accepted fact was once controversial. Paradigm-shifting ideas were, at their time, often considered provocative. Consequently the way to truth must be pissing off as many people as possible by making totally idiotic statements. Like declaring that the scientific method is a myth, which was most recently proclaimed by Daniel Thurs on Discover Blogs.

Even worse, his article turns out to be a book excerpt. This hits me hard after just having discovered that someone by the name of Matt Ridley also published a book full of misconceptions about how science supposedly works. Both fellows seem to have the same misunderstanding: the belief that science is a self-organized system and therefore operates without method – in Thurs’ case – and without governmental funding – in Ridley’s case. That science is self-organized is correct. But to conclude from this that progress comes from nothing is wrong.

I blame Adam Smith for all this mistaken faith in self-organization. Smith used the “invisible hand” as a metaphor for the regulation of prices in a free market economy. If the actors in the market have full information and act perfectly rational, then all goods should eventually be priced at their actual value, maximizing the benefit for everyone involved. And ever since Smith, self-organization has been successfully used out of context.

In a free market, the value of the good is whatever price this ideal market would lead to. This might seem circular but it isn’t: It’s a well-defined notion, at least in principle. The main argument of neo-conservatism is that any kind of additional regulation, like taxes, fees, or socialization of services, will only lead to inefficiencies.

There are many things wrong with this ideal of a self-regulating free market. To begin with, real actors are neither perfectly rational nor do they ever have full information. And then the optimal prices aren’t unique; instead there are infinitely many optimal pricing schemes, so one needs an additional selection mechanism. But oversimplified as it is, this model, now known as equilibrium economics, explains why free markets work well, or at least better than planned economies.
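To see what such price self-regulation looks like in the simplest possible case, here is a toy Walrasian price-adjustment loop, my own illustration rather than anything from the economics literature proper: the price rises while demand exceeds supply, falls otherwise, and settles where the two meet.

```python
def tatonnement(demand, supply, price=1.0, rate=0.1, steps=200):
    """Toy price adjustment: nudge the price in proportion to excess
    demand until the market clears. A sketch under idealized assumptions
    (one good, rational actors summarized by two known curves)."""
    for _ in range(steps):
        excess = demand(price) - supply(price)  # unmet demand at this price
        price += rate * excess                  # raise if scarce, lower if glutted
    return price

# Linear toy market: demand falls with price, supply rises with it.
# The curves cross at p = 10/3, and the loop converges there.
p_star = tatonnement(lambda p: 10 - p, lambda p: 2 * p)
```

The point of the sketch is the one made in the text: the optimization is done by the feedback loop itself, but only because the setup (a price signal, a rule reacting to it) was arranged so that something gets optimized at all.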

No, the main problem with trust in self-optimization isn’t the many shortcomings of equilibrium economics. The main problem is the failure to see that the system itself must be arranged suitably so that it can optimize something, preferably something you want to be optimized.

A free market needs, besides fiat money, rules that must be obeyed by actors. They must fulfil contracts, aren’t allowed to have secret information, and can’t form monopolies – any such behavior would prevent the market from fulfilling its function. To some extent violations of these rules can be tolerated, and the system itself would punish the dissidents. But if too many actors break the rules, self-optimization would fail and chaos would result.

Then of course you may want to question whether the free market actually optimizes what you desire. In a free market, future discounting and personal risk tend to be higher than many people prefer, which is why all democracies have put in place additional regulations that shift the optimum away from maximal profit to something we perceive as more important to our well-being. But that’s a different story that shall be told another time.

The scientific system in many regards works similar to a free market. Unfortunately the market of ideas isn’t as free as it should be to really work efficiently, but by and large it works well. As with the market economies though, it only works if the system is set up suitably. And then it optimizes only what it’s designed to optimize, so you better configure it carefully.

The development of good scientific theories and the pricing of goods are examples for adaptive systems, and so is natural selection. Such adaptive systems generally work in a circle of four steps:
  1. Modification: A set of elements that can be modified.
  2. Evaluation: A mechanism to evaluate each element according to a measure. It’s this measure that is being optimized.
  3. Feedback: A way to feed the outcome of the evaluation back into the system.
  4. Reaction: A reaction to the feedback that optimizes elements according to the measure by another modification.
With these mechanisms in place, the system will be able to self-optimize according to whatever measure you have given it, by reiterating a cycle going through steps one to four.
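The four-step cycle can be written down as a generic loop. This minimal hill-climbing example is a sketch of mine, with an arbitrary measure chosen only for illustration; it shows that nothing beyond modification, evaluation, feedback, and reaction is needed for a system to self-optimize:

```python
import random

def adapt(element, evaluate, steps=1000, seed=0):
    """Generic adaptive cycle: repeatedly modify an element, evaluate
    it against a measure, feed the comparison back, and react by
    keeping whatever scores better."""
    rng = random.Random(seed)
    best, best_score = element, evaluate(element)
    for _ in range(steps):
        candidate = best + rng.gauss(0, 0.1)   # 1. modification
        score = evaluate(candidate)            # 2. evaluation
        improved = score > best_score          # 3. feedback
        if improved:                           # 4. reaction
            best, best_score = candidate, score
    return best

# The measure being optimized here: closeness to 3.0
result = adapt(0.0, lambda x: -(x - 3.0) ** 2)
```

Swap in different elements and a different measure and the same loop describes pricing, natural selection, or hypothesis testing; remove any one of the four steps and it optimizes nothing.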

In the economy the set of elements are priced goods. The evaluation is whether the goods sell. The feedback is the vendor being able to tell how many goods sell. The reaction is to either change the prices or improve the goods. What is being optimized is the satisfaction (“utility”) of vendors and consumers.

In natural selection the set of elements are genes. The evaluation is whether the organism thrives. The feedback is the dependence of the amount of offspring on the organisms’ well-being. The reaction is survival or extinction. What is being optimized are survival chances (“fitness”).

In science the set of elements are hypotheses. The evaluation is whether they are useful. The feedback is the test of hypotheses. The reaction is that scientists modify or discard hypotheses that don’t work. What is being optimized in the scientific system depends on how you define “useful.” It once used to mean predictive, yet if you look at high energy physics today you might be tempted to think it’s instead mathematical elegance. But that’s a different story that shall be told another time.

That some systems optimize a set of elements according to certain criteria is not self-evident and doesn’t come from nothing. There are many ways systems can fail at this, for example because feedback is missing or a reaction isn’t targeted enough. A good example of lacking feedback is the administration of higher education institutions. They operate incredibly inefficiently, to the extent that the only way one can work with them is by circumvention. The reason is that, in my own experience, it’s next to impossible to fix obviously nonsensical policies or to boot incompetent administrative personnel.

Natural selection, to take another example, wouldn’t work if genetic mutations scrambled the genetic code too much, because whole generations would be entirely unviable and feedback wouldn’t be possible. Or take the free market. If we all agreed that, starting tomorrow, we no longer believe in the value of our currency, the whole system would come down.

Back to science.

Self-optimization by feedback in science, now known as the scientific method, was far from obvious to people in the Middle Ages. It seems difficult to fathom today how they could not have known. But to see how this could be you only have to look at fields where they still don’t have a scientific method, like much of the social and political sciences. They’re not testing hypotheses so much as trying to come up with narratives or interpretations because most of their models don’t make testable predictions. For a long time, this is exactly what the natural sciences also were about: They were trying to find narratives, they were trying to make sense. Quantification, prediction, and application came much later, and only then could the feedback cycle be closed.

We are so used to rapid technological progress now that we forget it wasn’t always this way. For someone living 2000 years ago, the world must have appeared comparably static and unchanging. The idea that developing theories about nature allows us to shape our environment to better suit human needs is only a few hundred years old. And now that we are able to collect and handle sufficient amounts of data to study social systems, the feedback on hypotheses in this area will probably also become more immediate. This is another opportunity to shape our environment better to our needs, by recognizing just which setup makes a system optimize what measure. That includes our political systems as well as our scientific systems.

The four steps that an adaptive system needs to cycle through don’t come from nothing. In science, the most relevant restriction is that we can’t just randomly generate hypotheses because we wouldn’t be able to test and evaluate them all. This is why science heavily relies on education standards, peer review, and requires new hypotheses to tightly fit into existing knowledge. We also need guidelines for good scientific conduct, reproducibility, and a mechanism to give credits to scientists with successful ideas. Take away any of that and the system wouldn’t work.

The often-depicted cycle of the scientific method, consisting of hypotheses-generation and subsequent testing, is incomplete and lacks details, but it’s correct in its core. The scientific method is not a myth.


Really I think today anybody can write a book about whatever idiotic idea comes to their mind. I suppose the time has come for me to join the club.

Thursday, October 29, 2015

What is basic science and what is it good for?

Basic science is, above all, a stupid word. It sounds like those onions we sliced in 8th grade. And if people don’t mistake “basic” for “everybody knows,” they might think instead it means “foundational,” that is, dedicated to questioning the present paradigms. But that’s not what the word refers to.

Basic science refers to research which is not pursued with the aim of producing new technologies; it is sometimes, more aptly, referred to as “curiosity driven” or “blue skies” research. The NSF calls it “transformative,” the ERC calls it “frontier” research. Quite possibly they don’t mean exactly the same, which is another reason why it’s a stupid word.

A few days ago, Matt Ridley wrote an article for the Wall Street Journal in which he argues that basic research, to the extent that it’s necessary at all, doesn’t need governmental funding. He believes that it is technology that drives science, not the other way round. “Deep scientific insights are the fruits that fall from the tree of technological change,” Ridley concludes. Apparently he has written a whole book with this theme, which is about to be published next week. The WSJ piece strikes me as shallow and deliberately provocative, published with the only aim of drawing attention to his book, which I hope has more substance and not just more words.

The essence of the article seems to be that it’s hard to demonstrate a correlation, not to mention causation, between tax-funded basic science and economic growth. Instead, Ridley argues, in many examples scientific innovations originated not in one single place, but more or less simultaneously in various different places. He concludes that tax-funded research is unnecessary.

Leaving aside for a moment that measures for economic growth can mislead about a country’s prosperity, it is hardly surprising that a link between tax-funded basic research and economic growth is difficult to find. It must come as a shock to nationalists, but basic research is possibly the most international profession in existence. Ideas don’t stop at country borders. Consequently, to make use of basic research, you don’t yourself need to finance it. You can just wait until a breakthrough occurs elsewhere and then pay your people to jump on it. The main reason we so frequently see examples of simultaneous breakthroughs in different groups is that they build on more or less the same knowledge. Scientists can jump very quickly.

But the conclusion that this means one does not need to support basic research is just wrong. It’s a classic demonstration of the “free rider” problem. Your country can reap the benefits of basic research elsewhere, as long as somebody else does the thinking for you. But if every country does this, innovation would run dry, eventually.

Besides this, the idea that technology drives science might have worked in the last century but it no longer works today. The times when you could find new laws of nature by dabbling with some equipment in the lab are over. To make breakthroughs today, you need to know what to build, and you need to know how to analyze your data. Where will you get that knowledge if not from basic research?

The technologies we use today, the computer that you sit in front of – semiconductors, lasers, liquid crystal displays – are based on last century’s theories. We still reap the benefits. And we all do, regardless of whether our nation paid the salary of one of quantum mechanics’ founding fathers. But if we want progress to continue in the next century, we have to go beyond that. You need basic research to find out which direction is promising, which is a good investment. Otherwise, you’ll waste lots of time and money.

There is a longer discussion that one can have about whether some types of basic research have any practical use at all. It is questionable, for example, whether knowing about the accelerated expansion of the universe will ever lead to a better phone. From my perspective the materialistic focus is as depressing as it is meaningless. Sure, it would be nice if my damned phone battery wouldn’t die in the middle of a call, and, yeah, I want to live forever watching cat videos on my hoverboard. But I fail to see what it’s ultimately good for. The only meaning I can find in being thrown into this universe is to understand how it works and how we are part of it. To me, knowledge is an end unto itself. Keep your hoverboard, just tell me how to quantize gravity.

Here is a simple thought experiment. Suppose all tax-funded basic research were to cease tomorrow. What would go missing? No more stories about black holes, exoplanets, or loophole-free tests of quantum entanglement. No more string theory, no multiverses, no theories of everything, no higgsinos, no dark matter, no cosmic neutrinos, extra-dimensions, wormholes, or holographic universes. Except for a handful of lucky survivors at partly privately funded places – like Perimeter Institute, the KAVLI institutes, and some Templeton-funded initiatives, which in no way could continue all of it – this research would die quickly. The world would be a poorer place, one with no hope of ever understanding this amazing universe that we live in.

Democracy is a funny thing, you know, it's kind of like an opinion poll. Basic research is tax-funded in all developed countries. Could there be any clearer expression of the people's opinion? They say: we want to know. We want to know where we come from, and what we are made of, and what's the fate of our universe. Yes, they say, we are willing to pay taxes for that, but please tell us. As someone who works in basic research, I see my task as delivering on this want.

Monday, September 28, 2015

No, Loop Quantum Gravity has not been shown to violate the Holographic Principle

Didn't fly.
Tl;dr: The claim in the paper is just wrong. Read on if you want to know why it matters.

Several people asked me for comments on a recent paper that appeared on the arxiv, “Violation of the Holographic Principle in the Loop Quantum Gravity” by Ozan Sargın and Mir Faizal. We have met Mir Faizal before; he is the one who explained that the LHC would make contact with parallel universes [spoiler alert: it won't]. Now, I have recently decided to adopt a strict diet of intellectual veganism: I'll refuse to read anything produced by making science suffer. So I wouldn't normally have touched the paper, not even with a fork. But since you asked, I gave it a look.

The claim in the paper is that Loop Quantum Gravity (LQG), the most popular approach to quantum gravity after string theory, must be wrong because it violates the Holographic Principle. The Holographic Principle requires that the number of different states inside a volume is bounded by the surface of the volume. That sounds like a rather innocuous and academic constraint, but once you start thinking about it it’s totally mindboggling.

All our intuition tells us that the number of different states in a volume is bounded by the volume, not the surface. Try stuffing the Legos back into your kid's toy box, and you will think it's the volume that bounds what you can cram inside. But the Holographic Principle says that this is only approximately so. If you tried to pack more and more, smaller and smaller Legos into the box, you would eventually fail to get anything more inside. And if you measured what bounds the success of your stuffing of the tiniest Legos, it would be the surface area of the box. In more detail, the entropy – the logarithm of the number of different states – has to be less than a quarter of the surface area measured in Planck units. That's a huge number, so far off our daily experience that we never notice this limit. What we notice in practice is only the bound by the volume.
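To get a feeling for just how far off daily experience this limit is, here is a back-of-the-envelope sketch (my own illustrative numbers, not taken from any paper): the holographic bound for a toy-box-sized region exceeds the ordinary thermal entropy of its contents by more than forty orders of magnitude.

```python
import math

L_PLANCK = 1.616e-35  # Planck length in meters

def holographic_bound(radius_m):
    """Maximum entropy (in natural units, k_B = 1) inside a sphere:
    a quarter of the surface area measured in Planck units."""
    area = 4 * math.pi * radius_m**2
    return area / (4 * L_PLANCK**2)

# A toy-box-sized sphere, radius ~20 cm:
s_max = holographic_bound(0.2)
print(f"Holographic bound: {s_max:.1e}")  # about 5e68

# For comparison, the thermal entropy of the air in such a box is of
# order Avogadro's number, ~1e24 -- negligible next to the bound.
```

This is why, in everyday stuffing-the-toy-box situations, it's always the volume bound you hit first.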

The Holographic Principle is a consequence of black hole physics, which does not depend on the details of quantizing gravity, and it is therefore generally expected that the entropy bound must be obeyed by all approaches to quantum gravity.

Physicists have tried, of course, to see whether they can find a way to violate this bound. You can consider various types of systems, pack them as tightly as possible, and then calculate the number of degrees of freedom. In this, it is essential that you take into account quantum behavior, because it's the uncertainty principle that ultimately prevents arbitrarily tight packing. In all known cases however, it was found that the system will collapse to a black hole before the bound is saturated. And black holes themselves saturate the bound. So whatever physicists tried, they only confirmed that the bound does indeed hold. With every such thought experiment, and with every failed attempt to violate the entropy bound, they have grown more convinced that the holographic principle captures a deep truth about nature.

The only known exceptions that violate the holographic entropy bound are the super-entropic monster-states constructed by Hsu and collaborators. These states however are pathological in that not only will they inevitably go on to collapse to a black hole, they also must have come out of a white hole in the past. They are thus mathematically possible, but not physically realistic. (Aside: That the states come out of a white hole and vanish into a black hole also means you can't create these super-entropic configurations by throwing in stuff from infinity, which should come as a relief to anybody who believes in the AdS/CFT correspondence.)

So if Loop Quantum Gravity violated the Holographic Principle, that would be a pretty big deal, making the theory inconsistent with all that's known about black hole physics!

In the paper, the authors redo the calculation for the entropy of a particular quantum system. With the usual quantization, this system obeys the holographic principle. With the quantization technique from Loop Quantum Gravity, the authors get an additional term but the system still obeys the holographic entropy bound, since the additional term is subdominant to the first. They conclude “We have demonstrated that the holographic principle is violated due to the effects coming from LQG.” It’s a plain non-sequitur.

I suspect that the authors mistook the maximum entropy of the quantum system under consideration, previously calculated by ‘t Hooft, for the holographic bound. This is strange because in the introduction they have the correct definition for the holographic bound. Besides this, the claim that in LQG it should be more difficult to obey the holographic bound is highly implausible to begin with. LQG is a discretization approach. It reduces the number of states, it doesn’t increase them. Clearly, if you go down to the discretization scale, the number of states should drop to zero. This makes me think that not only did the authors misinterpret the result, they probably also got the sign of the additional term wrong.

(To prevent confusion, please note that in the paper they calculated corrections to the entropy of the matter, not corrections to the black hole entropy, which would go onto the other side of the equation.)

You might come away with the impression that we have here two unfortunate researchers who were confused about some terminology, and that I'm being an ass for highlighting their mistakes. And you would be right, of course: they were confused, and I'm an ass. But let me add that after having read the paper I did contact the authors and explained that their statement that LQG violates the Holographic Principle is wrong and does not follow from their calculation. After some back and forth, they agreed with me, but refused to change anything about their paper, claiming that it's a matter of phrasing and that in their opinion it's all okay, even though it might confuse some people. And so I am posting this explanation here, because then it will show up as an arxiv trackback. Just to avoid it confusing some people.

In summary: Loop Quantum Gravity is alive and well. If you feed me papers in the future, could you please take into account my dietary preferences?

Saturday, August 08, 2015

To the women pregnant with my children: Here is what to expect [Totally TMI – Proceed on your own risk]

Last year I got a strange email, from a person entirely unknown to me, letting me know that one of their acquaintances seemed to be pretending an ultrasound image from my twin pregnancy was their own. They sent along the following screen capture, which shows a collection of ultrasound images. It is immediately apparent that these images were not taken with the same device, as they differ in contrast and color scheme. It seems exceedingly unlikely that you would get this selection of ultrasound images from one screening.


In comparison, here is my ultrasound image at 14 weeks pregnancy, taken in July 2010:



You can immediately see that the top right image from the stranger is my ultrasound image, easily recognizable by the structure in the middle that looks like an upside-down V. The header containing my name is cropped. I don’t know where the other images came from, but I’d put my bets on Google.

I didn’t really know what to make of this. Why would some strange woman pretend my ultrasound images are hers? Did she fake being pregnant? Was she indeed pregnant but didn’t have ultrasound images? Did she just not like her own images?

My ultrasound images were tiny, glossy printouts, and to get them online I first had to take a high resolution photo of the image, straighten it, remove reflections, turn up contrast and twiddle some other software knobs. I’m not exactly an award-winning photoshopper, but from the images that Google brings up, mine is one with the highest resolution.

So maybe somebody just wanted to save time, thinking ultrasound images all look alike anyway. Well, they don’t. Truth be told, to me reading an ultrasound is somewhat like reading tea leaves, and I’m a coffee drinker. But the days in which ultrasound images all looked alike are long gone. If you do an inverse image search, it identifies my ultrasound flawlessly. And then there’s the upside-down V that my doctor said was the cord, which might or might not be correct.

The babies are not a boy and a girl, as is claimed in the caption of the screenshot; they are two girls with separate placentas. With two placentas, the twins might be fraternal – stemming from two different eggs – or identical – stemming from the same egg that divided early on. We didn’t know they were two girls until 20 weeks though, at which point you should be able to see the dangling part of the genitals, if there is one.

If I upload an image to my blog, I do not mind it being used by other people. What irked me wasn’t that somebody used my image, but that they implicitly claimed my experience as theirs.

In any case, I forgot all about this bizarre story until last week, when I got another note from a person I don’t know, alerting me that somebody else is going about pretending to carry my children. Excuse me if I didn’t make too much of an effort blurring out the picture of the supposedly pregnant woman.


This case is even more bizarre, as I’ve been told the woman apparently had her uterus removed and is claiming the embryos have attached to other organs. Now, it is indeed possible that a fertilized egg implants outside the uterus and the embryo continues to grow, sometimes for several months. The abdomen, for example, has a good blood circulation that can support a pregnancy for quite a while. Sooner or later though the supply of nutrients and oxygen becomes insufficient, and the embryo dies, triggering a miscarriage. That’s a major problem, because if the pregnancy isn’t in the uterus the embryo has no exit through which to leave. Such out-of-place pregnancies are medical emergencies and, if not discovered early on, normally end fatally for the mother: Even if the dead embryo can be surgically removed, the placenta has grown into the abdomen and cannot detach the way it cleanly separates from the rather special lining of the uterus, resulting in heavy internal bleeding and, often, death.

Be that as it may, if you’ve had your uterus removed you can’t get pregnant, because the sperm has no way to reach and fertilize an egg.

I do not have the faintest clue why somebody would want to fake a twin pregnancy. But then the internet seems to proliferate what I want to call, in absence of a better word, “experience theft”. Some people pretend to suffer from an illness they don’t have, to have traveled to places they’ve never been, or to have grown up as members of a minority when they didn’t. Maybe pretending to be pregnant with twins is just the newest trend.

Well, ladies, so let me tell you what to expect, so you will get it right. At 20 weeks you’ll start getting preterm contractions, several hours a day, repeating stoically every 10 minutes. They’ll turn out to be what is called “not labor active”, pushing inwards but not downwards, still damned painful. Doctors warn that you’ll have a preterm delivery and issue a red flag: No sex, no travel, no exercise for the rest of the pregnancy.

At 6 months your bump will have reached the size of a full-term singleton pregnancy, but you still have 3 months to go. People start making cheerful remarks that you must be almost due! Your cervix has started to shorten, and since it is highly recommended that you stay in bed with your hips elevated, you’ll go on sick leave following the doctor’s advice. The allegedly so awesome Swedish health insurance will later refuse to cover this, and you’ll lose two months’ worth of salary.

By 7 months your cervix has shortened to 1 cm and the doctors get increasingly nervous. By 8 months it’s dilated 1 cm. You’re now supposed to visit your doctor every day. Every day they record your contractions, which still come, “not labor active”, in 10-minute intervals. They still do when you’ve reached full term, at which point you’ll start developing a nasty kidney problem accompanied by substantial water retention. And so, after warning you of a preterm delivery for 4 months, the doctors now insist that you have labor induced.

Once in the hospital they put you on Cytotec, which after 36 hours hasn’t had any effect other than making you even more miserable. But since the doctors expect that you will need a Cesarean section eventually, they don’t want you to eat. After 48 hours of mostly lying in bed, not being allowed to eat more than cookies – while being 9 months pregnant with twins! – your blood pressure will give in and one of the babies’ heartbeats will drop from a steady 140 to 90. And then it’s entirely gone. An electronic device starts beeping wildly, a nurse pushes a red button, and suddenly you find yourself with an oxygen mask on your face and an Epinephrine shot in your vein. You use the situation to yell at a doctor to stop the Cytotec nonsense and put you on Pitocin, which they promise to do the next morning.

The next morning you finally get your epidural, and the Pitocin does its work. Within an hour you’ll go from 1 cm to 8 cm dilation. Your waters will never break – a midwife will break them for you. Both. The doctor insists on shaving off your hair “down there”, because he still expects you’ll need a Cesarean. These days, you don’t deliver twins naturally any more, is the message you get. Eventually, after eternity has come and gone, somebody will ask you to push. And push you will, 5 times for two babies.

I have no scars and I have no stretch marks. The doctor never got to use his knife. I’m living proof you don’t need a Cesarean to give birth to twins. The children whose ultrasound image you’ve used are called Lara Lily and Gloria Sophie. At birth, they had a low weight, but full Apgar score. They are now 4 years old, beat me at memory, and their favorite food is meatballs.


If there are two cases that have been brought to my attention that involve my images, how many of these cases are there in total?

Update: Read comments for some more information about the first case.

Wednesday, March 25, 2015

No, the LHC will not make contact with parallel universes

Evidence for rainbow gravity by butterfly
production at the LHC.

The most recent news about quantum gravity phenomenology going through the press is that the LHC, upon restart at higher energies, will make contact with parallel universes, excuse me, with PARALLEL UNIVERSES. The Telegraph even wants you to believe that this would disprove the Big Bang, and tomorrow maybe it will cause global warming, cure Alzheimer’s, and lead to the production of butterflies at the LHC, who knows. This story is so obviously nonsense that I thought it would be unnecessary to comment on it, but I have underestimated the willingness of news outlets to promote shallow science, and also the willingness of authors to feed that fire.

This story is based on the paper:
    Absence of Black Holes at LHC due to Gravity's Rainbow
    Ahmed Farag Ali, Mir Faizal, Mohammed M. Khalil
    arXiv:1410.4765 [hep-th]
    Phys.Lett. B743 (2015) 295
which just got published in PLB. Let me tell you right away that this paper would not have passed my desk. I’d have sent it back with a verdict of “major revisions necessary.”

Here is a summary of what they have done. In models with large additional dimensions, the Planck scale, where effects of quantum gravity become important, can be lowered to energies accessible at colliders. This is an old story that was big 15 years ago or so, and I wrote my PhD thesis on this. In the new paper they use a modification of general relativity that is called "rainbow gravity" and revisit the story in this framework.
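The large-extra-dimensions story rests on the relation M_Pl² ~ M_*^(n+2) R^n, where M_* is the true higher-dimensional Planck scale and R the size of the n extra dimensions (factors of order one dropped). A rough numerical sketch, with illustrative values of my own choosing, shows why a TeV-scale M_* requires millimeter-sized dimensions for n = 2 but subnuclear ones for n = 6:

```python
# Sketch of the ADD relation M_Pl^2 ~ M_*^(n+2) * R^n, solved for R.
# Order-one factors are dropped; numbers are illustrative only.

M_PL_GEV = 1.22e19        # 4d Planck mass in GeV
GEV_INV_TO_M = 1.973e-16  # hbar*c: 1 GeV^-1 in meters

def extra_dim_size(n, m_star_gev=1000.0):
    """Size R (in meters) of n extra dimensions needed to lower the
    effective Planck scale to m_star_gev."""
    r_gev_inv = (M_PL_GEV / m_star_gev) ** (2.0 / n) / m_star_gev
    return r_gev_inv * GEV_INV_TO_M

for n in (2, 6):
    print(f"n={n}: R ~ {extra_dim_size(n):.1e} m")
# n=2 gives millimeter-sized dimensions (in tension with tabletop
# gravity tests), n=6 gives R of order 1e-14 m.
```

The point of lowering the Planck scale this way is exactly what makes quantum gravity effects, including hypothetical black hole production, potentially accessible at collider energies.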

In rainbow gravity the metric is energy-dependent, which it normally is not. This energy-dependence is a non-standard modification that is not confirmed by any evidence. It is neither a theory nor a model; it is just an idea that, despite more than a decade of work, never developed into a proper model. Rainbow gravity has not been shown to be compatible with the standard model. There is no known quantization of this approach, and one cannot describe interactions in this framework at all. Moreover, it is known to lead to non-localities which are ruled out already. As far as I am concerned, no papers should get published on the topic until these issues have been resolved.

Rainbow gravity enjoys some popularity because it leads to Planck scale effects that can affect the propagation of particles, which could potentially be observable. Alas, no such effects have been found. No such effects have been found if the Planck scale is the normal one! The absolutely last thing you want to do at this point is argue that rainbow gravity should be combined with large extra dimensions, because then its effects would get stronger and probably be ruled out already. At the very least you would have to revisit all existing constraints on modified dispersion relations and reaction thresholds and so on. This isn't even mentioned in the paper.

That isn't all there is to say though. In their paper, the authors also unashamedly claim that such a modification has been predicted by Loop Quantum Gravity, and that it is a natural incorporation of effects found in string theory. Both of these statements are manifestly wrong. Modifications like this have been motivated by, but never derived from, Loop Quantum Gravity. And string theory gives rise to some kind of minimal length, yes, but certainly not to rainbow gravity; in fact, the expression of the minimal length relation in string theory is known to be incompatible with the one the authors use. The claim that the model they use has some kind of derivation, or even a semi-plausible motivation, from other theories is just marketing. If I had been a referee of this paper, I would have requested that all these wrong claims be scrapped.

In the rest of the paper, the authors then reconsider the emission rate of black holes in extra dimensions with the energy-dependent metric.

They erroneously state that the temperature diverges when the mass goes to zero, and that this leads to a “catastrophic evaporation”. This has been known to be wrong for 20 years. The supposed catastrophic evaporation is due to an incorrect thermodynamical treatment; see for example section 3.1 of this paper. You do not need quantum gravitational effects to avoid it, you just have to get the thermodynamics right. Another reason not to publish the paper. To be fair though, this point is pretty irrelevant for the rest of the authors' calculation.

They then argue that rainbow gravity leads to black hole remnants because the temperature of the black hole decreases towards the Planck scale. This isn't so surprising and is something that happens generically in models with modifications at the Planck scale, because they can bring down the final emission rate so that it converges and eventually stops.

The authors then further claim that the modification from rainbow gravity affects the cross-section for black hole production, which is probably correct, or at least not wrong. They then take constraints on the lowered Planck scale from existing searches for gravitons (i.e., missing energy) that should also be produced in this case. They use the constraints obtained from the graviton limits to argue that black hole production should not yet have been seen, but might appear in the upcoming LHC runs. They should not, of course, have used constraints from a paper that obtained them in a scenario without the rainbow gravity modification, because the production of gravitons would likewise be modified.

Having said all that, the conclusion that they come to that rainbow gravity may lead to black hole remnants and make it more difficult to produce black holes is probably right, but it is nothing new. The reason is that these types of models lead to a generalized uncertainty principle, and all these calculations have been done before in this context. As the authors nicely point out, I wrote a paper already in 2004 saying that black hole production at the LHC should be suppressed if one takes into account that the Planck length acts as a minimal length.

Yes, in my youth I worked on black hole production at the LHC. I gracefully got out of this when it became obvious there wouldn't be black holes at the LHC, some time in 2005. And my paper, I should add, doesn't work with rainbow gravity but with a Lorentz-invariant high-energy deformation that only becomes relevant in the collision region and thus does not affect the propagation of free particles. In other words, in contrast to the model that the authors use, my model is not already ruled out by astrophysical constraints. The relevant aspects of the argument however are quite similar, thus the similar conclusions: If you take into account Planck length effects, it becomes more difficult to squeeze matter together to form a black hole because the additional space-time distortion acts against your efforts. This means you need to invest more energy than you thought to get particles close enough to collapse and form a horizon.

What does any of this have to do with parallel universes? Nothing, really, except that one of the authors, Mir Faizal, told some journalist there is a connection. In the phys.org piece one can read:
"Normally, when people think of the multiverse, they think of the many-worlds interpretation of quantum mechanics, where every possibility is actualized," Faizal told Phys.org. "This cannot be tested and so it is philosophy and not science. This is not what we mean by parallel universes. What we mean is real universes in extra dimensions. As gravity can flow out of our universe into the extra dimensions, such a model can be tested by the detection of mini black holes at the LHC. We have calculated the energy at which we expect to detect these mini black holes in gravity's rainbow [a new theory]. If we do detect mini black holes at this energy, then we will know that both gravity's rainbow and extra dimensions are correct."
To begin with, rainbow gravity is neither new nor a theory, but that addition seems to be the journalist's fault. As far as the parallel universes are concerned, to get these in extra dimensions you would need additional branes next to our own, and there is nothing like this in the paper. What this has to do with the multiverse I don't know; that's an entirely different story. Maybe this quote was taken out of context.

Why does the media hype this nonsense? Three reasons I can think of. First, the next LHC startup is near and they're looking for a hook to get the story across. Black holes and parallel universes sound good, regardless of whether this has anything to do with reality. Second, the paper shamelessly overstates the relevance of the investigation, makes claims that are manifestly wrong, and fails to point out the miserable state that the framework they use is in. Third, the authors willingly feed the hype in the press.

Did the topic of rainbow gravity and the author's name, Mir Faizal, sound familiar? That's because I wrote about both only a month ago, when the press was hyping another nonsense story about black holes in rainbow gravity with the same author. In that previous paper they claimed that black holes in rainbow gravity don't have a horizon, and nothing was mentioned about them forming remnants. I don't see how these two supposed consequences of rainbow gravity are even compatible with each other. If anything, this just reinforces my impression that this isn't physics; it's just fanciful interpretation of algebraic manipulations that have no relation to reality whatsoever.

In summary: The authors work in a framework that combines rainbow gravity with a lowered Planck scale, which is already ruled out. They derive bounds on black hole production using an existing data analysis that does not apply in the framework they use. The main conclusion, that Planck length effects should suppress black hole production at the LHC, is correct, but it has been known for at least 10 years. None of this has anything to do with parallel universes.

Wednesday, February 04, 2015

Black holes don’t exist again. Newsflash: It’s a trap!

Several people have pointed me towards an article at phys.org about this paper
    Absence of an Effective Horizon for Black Holes in Gravity's Rainbow
    Ahmed Farag Ali, Mir Faizal, Barun Majumder
    arXiv:1406.1980 [gr-qc]
    Europhys.Lett. 109 (2015) 20001
Among other things, the authors claim to have solved the black hole information loss problem, and the phys.org piece praises them as using a “new theory.” The first author is cited saying: “The absence of an effective horizon means there is nothing absolutely stopping information from going out of the black hole.”

The paper uses a modification of General Relativity known under the name “rainbow gravity”, which means that the metric, and so the space-time background, is energy-dependent. Dependent on which energy, you rightly ask. I don’t know. Everyone who writes papers on this makes their own pick. Rainbow gravity is an ill-defined framework that has more problems than I can list here. In the paper the authors motivate it, amazingly enough, by string theory.

The argument goes somewhat like this: rainbow gravity has something to do with deformed special relativity (DSR), some versions of which have something to do with a minimal length, which has something to do with non-commutative geometry, which has something to do with string theory. (Check paper if you don’t believe this is what they write.) This argument has more gaps than the sentence has words.

To begin with DSR was formulated in momentum space. Rainbow gravity is supposedly a formulation of DSR in position space, plus that it takes into account gravity. Except that it is known that the only ways to do DSR in position space in a mathematically consistent way either lead to violations of Lorentz-invariance (ruled out) or violations of locality (also ruled out).

This was once a nice idea that caused some excitement, but that was 15 years ago. As far as I am concerned, papers on the topic shouldn’t be accepted for publication any more unless these problems are solved, or at least an attempt is made to solve them. At the very least the problems should be mentioned in an article on the topic. The paper in question doesn’t list any of these issues. Rainbow gravity isn’t only not new, it is also not a theory. It once may have been an idea from which a theory might have been developed, but this never happened. Now it’s a zombie idea that doesn’t die, because journal editors think it must be okay if others have published papers on it too.

There is one way to make sense of rainbow gravity, which is in the context of running coupling constants. Coupling constants, including Newton’s constant, aren’t actually constant, but depend on the energy scale that the physics is probed with. This is a well-known effect which can be measured for the interactions in the standard model, and it is plausible that it should also exist for gravity. Since the curvature of spacetime depends on the strength of the gravitational coupling, the metric then becomes a function of the energy that it is probed with. This is to my knowledge also the only way to make sense of deformed special relativity. (I wrote a paper on this with Xavier and Roberto some years ago.) Alas, to see any effect from this you’d need to do measurements at Planckian center-of-mass energies, and the energy-dependent metric would only apply directly in the collision region.

In their paper the authors allude to some “measurement” that supposedly sets the energy in their metric. Unfortunately, there is never any observer doing any measurement, so one doesn’t know which energy it is. It’s just a word that they appeal to. What they do instead is make use of a known relation in some versions of DSR that prevents one from measuring distances below the Planck length. They then argue that if one cannot resolve structures below the Planck length, then the horizon of a black hole cannot, strictly speaking, be defined. That quantum gravity effects should blur out the horizon to finite width is correct in principle.

Generally, all surfaces of zero width, like the horizon, are mathematical constructs. This is hardly a new insight, but it’s also not very meaningful. The “surface of the Earth” for example doesn’t strictly speaking exist either. You will still smash to pieces if you jump out of a window, you just can’t tell exactly where you will die. Similarly, that the exact location of the horizon cannot be measured doesn’t mean that the space-time does no longer have a causally disconnected region. You just can’t tell exactly when you enter it. The authors’ statement that:
“The absence of an effective horizon means there is nothing absolutely stopping information from going out of the black hole.”
is therefore logically equivalent to the statement that there is nothing absolutely stopping you at the surface of the Earth when you jump out the window.

The paper also contains a calculation. The authors first point out that in the normal metric of the Schwarzschild black hole an infalling observer needs a finite time to cross the horizon, but for a faraway observer it looks like it takes an infinite time. This is correct. If one calculates the time in the faraway observer’s coordinates it diverges if the infalling observer approaches the horizon. The authors then find out that it takes only a finite time to reach a surface that is still a Planck length away from the horizon. This is also correct. It’s also a calculation that normally is assigned to undergrad students.

They try to conclude from this that the faraway observer sees a crossing of the horizon in finite time, which doesn’t make sense because they’ve previously argued that one cannot measure exactly where the horizon is, though they never say who is measuring what and how. What it really means is that the faraway observer cannot tell exactly when the horizon is crossed. This is correct too, but since it takes an infinite time anyway, the uncertainty is also infinite. The authors then argue: “Divergence in time is actually an signal of breakdown of spacetime description of quantum theory of gravity, which occurs because of specifying a point in spacetime beyond the Planck scale.” The authors, in short, conclude that if an observer cannot tell exactly when he reaches a certain distance, he can never cross it. Thus the position at which the asymptotic time diverges is never reached, and the observer never becomes causally disconnected.

In their paper, this reads as follows:
“Even though there is a Horizon, as we can never know when a string cross it, so effectively, it appears as if there is no Horizon.”
Talking about strings here is just cosmetics, the relevant point is that they believe if you cannot tell exactly when you cross the horizon, you will never become causally disconnected, which just isn’t so.

The rest of the paper is devoted to trying to explain what this means, and the authors keep talking about some measurements which are never done by anybody. If you did indeed make a measurement that reaches the Planckian center-of-mass energy at the horizon, you could locally induce a strong perturbation, thereby denting away the horizon a bit, temporarily. But this isn’t what the authors are after. They are trying to convince the reader that the impossibility of resolving distances arbitrarily well, though without actually making any measurement, bears some relevance for the causal structure of spacetime.

A polite way to summarize this finding is that the calculation doesn’t support the conclusion.

This paper is a nice example, though, of what is going wrong in theoretical physics. It isn’t that the calculation is wrong; the mathematical manipulations are most likely correct (I didn’t check in detail, but it looks good). The problem is not only that the framework they use is ill-defined (in their version it plainly lacks necessary definitions, notably the transformation behavior under a change of coordinate frame and the meaning of the energy scale that they use), but that they moreover misinterpret their results.

Not only do the authors fail to mention the shortcomings of the framework they use, they also oversell it by trying to connect it to string theory, even though they should know that the type of uncertainty that results from their framework is known to NOT be valid in string theory. And the author of the phys.org article totally bought into this. The tragedy is of course that for the authors this overselling has worked out just fine, and they’ll most likely do it again. I’m writing this in the hope of preventing that, though at the risk that they’ll now hate me and never again cite any of my papers. This is how academia works these days, or rather, doesn’t work. Now I’m depressed. And this is all your fault for pointing out this article to me.

I can only hope that Lisa Zyga, who wrote the piece at phys.org, will learn from this that relying solely on the authors’ own statements is never good journalistic practice. Anybody working on black hole physics could have told her that this isn’t a newsworthy paper.

Saturday, November 22, 2014

Gender disparity? Yes, please.

[Image Source: Papercards]

Last month, a group of Australian researchers from the life sciences published a paper that breaks down the duration of talks at a 2013 conference by gender. They found that while the overall attendance and the number of presentations were almost equally shared between men and women, the women spoke on average for shorter periods of time. The main reason for this was that the women applied for shorter talks to begin with. You find a brief summary on the Nature website.

The twitter community of women in science was all over this, encouraging women to make the same requests as men, asserting that women “underpromote” themselves by not taking up enough of their colleagues’ time.



Other studies have previously found that while women on the average speak as much as men during the day, they tend to speak less in groups, especially so if the group is predominantly male. So the findings from the conference aren’t very surprising.

Now a lot of what goes around on twitter isn’t really meant seriously, see the smiley in Katie Hinde’s tweet. I remarked one could also interpret the numbers to show that men talk too much and overpromote themselves. I was joking of course to make a point, but after dwelling on this for a while I didn’t find it that funny anymore.

Women are frequently told that to be successful they should do the same as men do. I don’t know how often I have seen advice explaining how women are allegedly belittling themselves by talking, well, like a woman. We are supposed to be assertive and take credit for our achievements. Pull your shoulders back, don’t cross your legs, don’t flip your hair. We’re not supposed to end every sentence as if it were a question. We’re not supposed to start every interjection with an apology. We’re not supposed to be emotional and personal, and so on. Yes, all of these are typically “female” habits. We are told, in essence, there’s something wrong with being what we are.

Here is for example a list with public speaking tips: Don’t speak about yourself, don’t speak in a high pitch, don’t speak too fast because “Talking fast is natural with two of your best friends and a bottle of Mumm, but audiences (especially we slower listening men) can’t take it all in”. Aha. Also, don’t flirt and don’t wear jewelry because the slow men might notice you’re a woman.

Sorry, I got sick at point five and couldn’t continue – must have been the Mumm. Too bad if your anatomy doesn’t support the low pitches. If you believe this guy that is, but listen to me for a moment, I swear I’ll try not to flirt. If your voice sounds unpleasant when you’re giving a talk, it’s not your voice, it’s the microphone and the equalizer, probably set for male voices. And do we really need a man to tell us that if we’re speaking about our research at a conference we shouldn’t talk about our recent hiking trip instead?

There are many reasons why women are underrepresented in some professions and overrepresented in others. Some of it is probably biological, some of it is cultural. If you are raising or have raised a child it is abundantly obvious that our little ones are subjected to gender stereotypes starting at a very young age. Part of it is the clothing and the toys, but more importantly it’s simply that they observe the status quo: Childcare is still a predominantly female business, and I have yet to see a woman on a garbage truck.

Humans are incredibly social animals. It would be surprising if the prevailing stereotypes did not affect us at all. That’s why I am supportive of all initiatives that encourage children to develop their talents regardless of whether these talents are deemed suitable for their gender, race, or social background. Because these stereotypes are thousands of years old and have become hurdles to our self-development. By and large, I see more encouragement for girls than I see for boys to follow their passion regardless of what society thinks, and I also see that women have more backup fighting unrealistic body images, which is what this previous post was about. Ironically, I was criticized on twitter for saying that boys don’t need to have a superhero body to be real men because that supposedly wasn’t fair to the girls.

I am not supportive of hard quotas that aim at prefixed male-female ratios. There is no scientific support for these ratios, and moreover I have repeatedly witnessed that such quotas produce a big backlash, creating a stigma of “She is just here because”, whether or not that is true.

Thus, at the present level, women are likely still underrepresented relative to where we would be if we managed to ignore the social pressure to follow ancient stereotypes. And so I think that we would benefit from more women among the scientists, especially in math-heavy disciplines. Firstly because we are unnecessarily missing out on talent. But also because diversity is beneficial for the successful generation and realization of ideas. The relevant diversity is in the way we think and argue. Again, this is probably partly biological and partly cultural, but whatever the reason, a diversity of thought should be encouraged, and this diversity is almost certainly correlated with demographic diversity.

That’s why I disapprove of so-called advice that women should talk and walk and act like men. Because that’s exactly the opposite from what we need. Science stands to benefit from women being different from men. Gender equality doesn’t mean genders should be equal, it means they should have the same opportunities. So women are more likely to volunteer organizing social events? Wtf is wrong with that?

So please go flip your hair if you feel like it, wear your favorite shirt, put on all the jewelry you like, and generally be yourself. Don’t let anybody tell you to be something you are not. If you need the long slot for your talk go ahead. If you’re confident you can get across your message in 15 minutes, even better, because we all talk too much anyway.


About the video: I mysteriously managed to produce a video in High Definition! Now you can see all my pimples. My husband made a good camera man. My anonymous friend again helped cleaning up the audio file. Enjoy :)

Monday, August 04, 2014

What is a singularity?

Not Von Neumann's urinal, but a
model of an essential singularity.
[Source: Wikipedia Commons.]
I recently read around a bit about the technological singularity, but it’s hard. It’s hard because I have to endure sentences like this:
“Singularity is a term derived from physics, where it means the point at the unknowable centre of a black hole where the laws of physics break down.”
Ouch. Or this:
“[W]e cannot see beyond the [technological] singularity, just as we cannot see beyond a black hole's event horizon.”
Aargh. Then I thought certainly they must have looked up the word in a dictionary, how difficult can it be? In the dictionary, I found this:
sin-gu-lar-i-ty
noun, plural sin-gu-lar-i-ties for 2–4.

1. the state, fact, or quality of being singular.
2. a singular, unusual, or unique quality; peculiarity.
3. Mathematics, singular point.
4. Astronomy (in general relativity) the mathematical representation of a black hole.
I don’t even know where to start complaining. Yes, I did realize that black holes and event horizons made it into pop culture, but little did I realize that something as seemingly simple as the word “singularity” is surrounded by such misunderstanding.

Von Neumann.

Let me start with some history. Contrary to what you read in many places, it was not Vernor Vinge who first used the word “singularity” to describe a possible breakdown of predictability in technological development, it was von Neumann.

Von Neumann may be known to you as the man behind the von Neumann entropy. He was a multiply talented genius, one of a now almost extinct breed, who contributed to many disciplines in math and physics, and to what are now interdisciplinary fields like game theory and quantum information.

In Chapter 16 (p 157) of Stanislav Ulam’s biography of Von Neumann, published in 1958, one reads:
“One conversation centered on the ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.”
The term “singularity” was then picked up in 1993 by Vinge who coined the expression “technological singularity”. But let us dwell for a moment on the above Von Neumann quote. Ulam speaks of an “essential singularity”. You may be forgiven mistaking the adjective “essential” as a filler, but “essential singularity” is a technical expression, typically found in the field of complex analysis.

A singularity in mathematics is basically a point in which a function is undefined. Now it might be undefined just because you didn’t define it, but it is possible to continue the function through that point. In this case the singularity is said to be removable and, in some sense, just isn’t an interesting singularity, so let us leave this aside.

What one typically means by a singularity is a point where a function behaves badly, so that one or several of its derivatives diverge, that is, they go to infinity. The ubiquitous example in school math is the poles of inverse powers of x, which diverge as x goes to zero.

However, such poles are not malign; you can remove them easily enough by multiplying the function with the respective positive power. Of course this gives you a different function, but this function still carries much of the information of the original function, notably all the coefficients in a series expansion. This procedure of removing poles (or creating poles) is very important in complex analysis, where it is used to obtain the “residues” of a function.
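As a quick illustration (my own example, not part of the text above): a function with a pole of order two becomes regular once you multiply it by the corresponding power, and the residue is the coefficient of 1/x in the Laurent expansion around the pole.

```python
import sympy as sp

x = sp.symbols('x')

# f has a pole of order 2 at x = 0:
# exp(x)/x**2 = 1/x**2 + 1/x + 1/2 + x/6 + ...
f = sp.exp(x) / x**2

# Multiplying by x**2 removes the pole; the result is regular at 0.
print(sp.limit(x**2 * f, x, 0))  # 1

# The residue is the coefficient of 1/x in the Laurent expansion.
print(sp.residue(f, x, 0))  # 1
```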

Some singularities however cannot be removed by multiplication with any positive power. These are the cases in which the function contains an infinite number of negative powers; the most commonly used example is exp(-1/x) at x=0. Such a singularity is said to be “essential”. Please appreciate the remarkable fact that the function itself does not diverge as x goes to zero, but neatly goes to zero! So do all its derivatives!!

So what did von Neumann mean by referring to an essential singularity?

From the context it seems he referred to the breakdown of predictability at this point. If all derivatives of a function are zero, you cannot make a series expansion (neither Taylor nor Laurent) around that point. If you hit that point, you don’t know what happens next, basically. This is a characteristic feature of essential singularities. (The radius of convergence cannot be pushed through the singular point.)
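You can check this explicitly for the standard example. The little sketch below (my own illustration) confirms that exp(-1/x) and its first few derivatives all vanish as x approaches zero from the right, so the Taylor series around zero is identically zero and predicts nothing about the function beyond that point.

```python
import sympy as sp

x = sp.symbols('x', positive=True)
f = sp.exp(-1 / x)  # essential singularity at x = 0

# Every derivative of f is exp(-1/x) times a polynomial in 1/x, and the
# exponential suppression wins: all limits as x -> 0+ are zero.
for n in range(5):
    print(n, sp.limit(sp.diff(f, x, n), x, 0, dir='+'))  # all print 0
```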

However, the predictability of the laws of nature that we have has (so far) never broken down in this very sense. It breaks down because the measurement in quantum theory is non-deterministic, but that has, for all we know, nothing to do with essential singularities. (Yes, I’ve tried to make this connection. I’ve always been fond of essential singularities. Alas, not even the Templeton Foundation wanted anything to do with my great idea. So much for the reality of research.)

Geodesic incompleteness.
Artist's impression.
The other breakdown of predictability that we know of comes from singularities in general relativity. These are not technically essential singularities if you ask for the behavior of certain observables – they are typically poles or conical singularities. But they bear a resemblance to essential singularities through a concept known as “geodesic incompleteness”. It basically means that there are curves in space-time which end at finite proper time and cannot be continued. It’s like hitting the wall at kilometer 32.

The reason for the continuation being impossible is that a singularity is a singularity is a singularity, no matter how you got there. You lose all information about your past when you hit it. (This is why, incidentally, the Maldacena-Horowitz proposal to resolve the black hole information loss by putting initial conditions on the singularity makes a lot of sense to me. Imho a totally under-appreciated idea.)

A common confusion about black holes concerns the nature of the event horizon. You can construct certain quantities of the black hole spacetime that diverge at the event horizon. In the mathematical sense they are singular, and that did confuse many people after the black hole space-time was first derived, in the middle of the last century. But it was quickly understood that these quantities do not correspond to physical observables. The physically relevant singularity is where geodesics end, at the center of the black hole. It corresponds to an infinitely large curvature. (This is an observer independent statement.) Nothing special happens upon horizon crossing, except that one can never get out again.

The singularity inside black holes is widely believed not to exist though, exactly because it implies a breakdown of predictability and causes the so paradoxical loss of information. The singularity is expected to be removed by quantum gravitational effects. The defining property of the black hole is the horizon, not the singularity. A black hole with the singularity removed is still a black hole. A singularity with the horizon removed is a naked singularity, no longer a black hole.

What has all of this to do with the technological singularity?

Nothing, really.

To begin with, there are like 17 different definitions for the technological singularity (no kidding). None of them has anything to do with an actual singularity, neither in the mathematical nor in the physical sense, and we have absolutely no reason to believe that the laws of physics, or predictability in general, break down within the next decades or so. In principle.

In practice, on some emergent level of an effective theory, I can see predictability becoming impossible. How do you want to predict what an artificial intelligence will do without having something more powerful than that artificial intelligence already? Not that anybody has been able to predict what averagely intelligent humans will do. Indeed one could say that predictability becomes more difficult with absence of intelligence, not the other way round, but I digress.

Having said all that, let us go back to these scary quotes from the beginning:
“Singularity is a term derived from physics, where it means the point at the unknowable centre of a black hole where the laws of physics break down.”
The term singularity comes from mathematics. It does not mean “at the center of the black hole”, but it can be “like the center of a black hole”. Provided you are talking about the classical black hole solution, which is however believed to not be realized in nature.
“[W]e cannot see beyond the [technological] singularity, just as we cannot see beyond a black hole's event horizon.”
There is no singularity at the black hole horizon, and predictability does not break down at the black hole horizon. You cannot see beyond a black hole horizon as long as you stay outside the black hole. If you jump in, you will see - and then die. But I don’t know what this has to do with technological development, or maybe I just didn’t read the facebook fineprint closely enough.

And finally there’s this amazing piece of nonsense:
“Singularity: Astronomy. (in general relativity) the mathematical representation of a black hole.”
To begin with, general relativity is not a field of astronomy. But worse, the “mathematical representation of a black hole” is certainly not a singularity. The mathematical representation of a (classical) black hole is the black hole spacetime, and it contains a singularity.

And just in case you wondered, singularities have absolutely nothing to do with singing, except that you find both on my blog.

Saturday, July 12, 2014

Post-empirical science is an oxymoron.

Image illustrating a phenomenologist after
reading a philosopher go on about
empiricism.

3:AM has an interview with philosopher Richard Dawid who argues that physics, or at least parts of it, are about to enter an era of post-empirical science. By this he means that “theory confirmation” in physics will increasingly be sought by means other than observational evidence because it has become very hard to experimentally test new theories. He argues that the scientific method must be updated to adapt to this development.

The interview is a mixture of statements that everybody must agree on, followed by subtle linguistic shifts that turn these statements into much stronger claims. The most obvious of these shifts is that Dawid flips repeatedly between “theory confirmation” and “theory assessment”.

Theoretical physicists do of course assess their theories by means other than fitting data. Mathematical consistency clearly leads the list, followed by semi-objective criteria like simplicity or naturalness, and other mostly subjective criteria like elegance, beauty, and the popularity of people working on the topic. These criteria are used for assessment because some of them have proven useful for arriving at theories that are empirically successful. Other criteria are used because they have proven useful for arriving at a tenured position.

Theory confirmation on the other hand doesn’t exist. The expression is sometimes used in a sloppy way to mean that a theory has been useful to explain many observations. But you never confirm a theory. You just have theories that are more, and others that are less useful. The whole purpose of the natural sciences is to find those theories that are maximally useful to describe the world around us.

This brings me to the other shift that Dawid makes in his string (ha-ha-ha) of words, which is that he alters the meaning of “science” as he goes. To see what I mean we have to make a short linguistic excursion.

The German word for science (“Wissenschaft”) is much closer to the original Latin meaning, “scientia” as “knowledge”. Science, in German, includes the social and the natural sciences, computer science, mathematics, and even the arts and humanities. There is for example the science of religion (Religionswissenschaft), the science of art (Kunstwissenschaft), the science of literature, and so on. Science in German is basically everything you can study at a university, and as far as I am concerned mathematics is of course a science. However, in stark contrast to this, the common English use of the word “science” refers exclusively to the natural sciences and typically does not even include mathematics. To avoid conflating these two different meanings, I will explicitly refer to the natural sciences as such.

Dawid sets out talking about the natural sciences, but then strings (ha-ha-ha) his argument along on the “insights” that string theory has led to and the internal consistency that gives string theorists confidence their theory is a correct description of nature. This “non-empirical theory assessment”, while important, can however only be a means to the end of an eventual empirical assessment. Without making contact with observation, a theory isn’t useful to describe the natural world, not part of the natural sciences, and not physics. These “insights” that Dawid speaks of are thus not assessments that can ever validate an idea as being good to describe nature, and a theory based only on non-empirical assessment does not belong in the natural sciences.

Did that hurt? I hope it did. Because I am pretty sick and tired of people selling semi-mathematical speculations as theoretical physics and blocking jobs with their so-called theories of nothing specifically that lead nowhere in particular. And that while looking down on those who work on phenomenological models because those phenomenologists, they’re not speaking Real Truth, they’re not among the believers, and their models are, as one string theorist once so charmingly explained to me “way out there”.

Yeah, phenomenology is out there where science is done. Too many of those who call themselves theoretical physicists today seem to have forgotten that physics is all about building models. It’s not about proving convergence criteria in some Hilbert space or classifying the topology of solutions of some equation in an arbitrary number of dimensions. Physics is not about finding Real Truth. Physics is about describing the world. That’s why I became a physicist – because I want to understand the world that we live in. And Dawid is certainly not helping to prevent more theoretical physicists from getting lost in math and philosophy when he attempts to validate their behavior by claiming the scientific method has to be updated.

The scientific method is a misnomer. There really isn’t such a thing as a scientific method. Science operates as an adaptive system, much like natural selection. Ideas are produced, their usefulness is assessed, and the result of this assessment is fed back into the system, leading to selection and gradual improvement of these ideas.

What is normally referred to as the “scientific method” are certain institutionalized procedures that scientists use because they have been shown to be efficient at finding the most promising ideas quickly. That includes peer review, double-blind studies, criteria for statistical significance, mathematical rigor, etc. The procedures, and how stringent (ha-ha-ha) they are, are somewhat field-dependent. Non-empirical theory assessment has been used in theoretical physics for a long time. But these procedures are not set in stone; they’re there as long as they seem to work, and the scientific method certainly does not have to be changed. (I would even argue it can’t be changed.)

The question that we should ask instead, the question I think Dawid should have asked, is whether more non-empirical assessment is useful at the present moment. This is a relevant question because it requires one to ask “useful for what”? As I clarified above, I myself mean “useful to describe the real world”. I don’t know what “use” Dawid is after. Maybe he just wants to sell his book, that’s some use indeed.

It is not a simple question to answer how much non-empirical theory assessment is good and how much is too much, or for how long one should pursue a theory trying to make contact with observation before giving up. I don’t have answers to this, and I don’t see that Dawid has either.

Some argue that string theory has been assessed too much already, and that more than enough money has been invested into it. Maybe that is so, but I think the problem is not that too much effort has been put into non-empirical assessment, but that too little effort has been put into pursuing the possibility of empirical test. It’s not a question of absolute weight on any side, it’s a question of balance.

And yes, of course this is related to it becoming increasingly more difficult to experimentally test new theories. That together with self-supporting community dynamics that Lee so nicely called out as group-think. Not that loop quantum gravity is any better than string theory.

In summary, there’s no such thing as post-empirical physics. If it doesn’t describe nature, if it has nothing to say about any observation, if it doesn’t even aspire to this, it’s not physics. This leaves us with a nomenclature problem. How do you call a theory that has only non-empirical facts speaking for it and one that the mathematical physicists apparently don’t want either? How about mathematical philosophy, or philosophical mathematics? Or maybe we should call it Post-empirical Dawidism.

[Peter Woit also had a comment on the 3:AM interview with Richard Dawid.]