Showing posts with label Quantum Gravity. Show all posts

Tuesday, July 12, 2016

Pulsars could probe black hole horizons

The first antenna of MeerKAT,
a SKA precursor in South Africa.
[Image Source.]

It’s hard to see black holes – after all, their defining feature is that they swallow light. But it’s also hard to discourage scientists from trying to shed light on mysteries. In a recent paper, a group of researchers from Long Island University and Virginia Tech have proposed a new way to probe the near-horizon region of black holes and, potentially, quantum gravitational effects.

    Shining Light on Quantum Gravity with Pulsar-Black Hole Binaries
    John Estes, Michael Kavic, Matthew Lippert, John H. Simonetti
    arXiv:1607.00018 [hep-th]

The idea is simple and yet promising: Search for a binary system in which a pulsar and a black hole orbit around each other, then analyze the pulsar signal for unusual fluctuations.

A pulsar is a rapidly rotating neutron star that emits a focused beam of electromagnetic radiation. This beam points along the poles of the magnetic field, which are normally not aligned with the neutron star’s axis of rotation. The beam therefore sweeps around with a regular period, like a lighthouse beacon. If Earth lies within the beam’s reach, our telescopes receive a pulse every time the beam points in our direction.

Pulsar timing can be extremely precise. Some pulsars have been flashing for decades, every couple of milliseconds, with a timing precision of a few microseconds. This high regularity allows astrophysicists to search for signals that affect the timing. Fluctuations of space-time itself, for example, would increase the pulsar-timing uncertainty, a method that has been used to derive constraints on the stochastic gravitational wave background. And if a pulsar is in a binary system with a black hole, the pulsar’s signal might scrape by the black hole and thus encode information about the horizon, which we can catch on Earth.
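The logic of such a search can be illustrated with a toy model (this is my own sketch, not the actual pulsar-timing pipeline; the function name and noise values are made up for illustration): predict perfectly periodic arrival times, compare with the observed ones, and watch the root-mean-square residual. Any unmodeled physics near the companion would show up as extra scatter.

```python
import random

def timing_rms(period_s, n_pulses, jitter_s, seed=0):
    """Toy model: compare observed pulse arrival times against a
    perfectly periodic prediction and return the r.m.s. residual."""
    rng = random.Random(seed)
    residuals = []
    for n in range(n_pulses):
        predicted = n * period_s
        # Unmodeled noise; near a black hole companion this could include
        # propagation effects from the near-horizon region.
        observed = predicted + rng.gauss(0.0, jitter_s)
        residuals.append(observed - predicted)
    mean_sq = sum(r * r for r in residuals) / n_pulses
    return mean_sq ** 0.5

# A millisecond pulsar with microsecond-level timing noise:
rms = timing_rms(period_s=5e-3, n_pulses=10_000, jitter_s=1e-6)
```

A systematic increase of this r.m.s. value beyond the known noise budget is the kind of signature the proposal looks for.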


No such pulsar-black hole binaries are known to date. But upcoming experiments like eLISA and the Square Kilometer Array (SKA) will almost certainly detect new pulsars. In their paper, the authors estimate that SKA might observe up to 100 new pulsar-black hole binaries, and they put the probability that a newly discovered system would have a suitable orientation at roughly one in a hundred. If they are right, the SKA would have a good chance to find a promising binary.
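The arithmetic behind that “good chance” is simple: if each of roughly 100 systems independently has a 1-in-100 probability of a suitable orientation, the chance that at least one works out is about 63 percent.

```python
p_suitable = 0.01  # chance a given binary has a usable orientation
n_systems = 100    # optimistic number of new pulsar-black-hole binaries

# Probability that at least one of the systems is suitably oriented:
p_at_least_one = 1 - (1 - p_suitable) ** n_systems
print(f"{p_at_least_one:.2f}")  # prints 0.63
```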

Much of the paper is dedicated to arguing that the timing accuracy of such a binary pulsar could carry information about quantum gravitational effects. This is not impossible, but speculative. Quantum gravitational effects are normally expected to be strong towards the black hole singularity, i.e. well inside the black hole and hidden from observation. Naïve dimensional estimates suggest that quantum gravity should be unobservably small near the horizon.

However, this argument has recently been questioned in the aftermath of the firewall controversy surrounding black holes, because one solution to the black hole firewall paradox is that quantum gravitational effects can stretch over much longer distances than the dimensional estimates lead one to expect. Steve Giddings has long been a proponent of such long-distance fluctuations, and scenarios like black hole fuzzballs, or Dvali’s Bose-Einstein Computers also lead to horizon-scale deviations from general relativity. It is hence something that one should definitely look for.

Previous proposals to test the near-horizon geometry were based on measurements of gravitational waves from merger events or the black hole shadow, each of which could reveal deviations from general relativity. However, so far these were quite general ideas lacking quantitative estimates. To my knowledge, this paper is the first to demonstrate that it’s technologically feasible.

Michael Kavic, one of the authors of this paper, will attend our September conference on “Experimental Search for Quantum Gravity.” We’re still planning to live-stream the talks, so stay tuned and you’ll get a chance to listen in.

Monday, June 06, 2016

Dear Dr B: Why not string theory?

[I got this question in reply to my last week’s book review of Why String Theory? by Joseph Conlon.]

Dear Marco:

Because we might be wasting time and money and, ultimately, risk that progress stalls entirely.

In contrast to many of my colleagues I do not think that trying to find a quantum theory of gravity is an endeavor purely for the sake of knowledge. Instead, it seems likely to me that finding out what the quantum properties of space and time are will further our understanding of quantum theory in general. And since that theory underlies all modern technology, this is research which bears relevance for applications. Not in ten years and not in 50 years, but maybe in 100 or 500 years.

So far, string theory has scored in two areas. First, it has proved interesting for mathematicians. But I’m not one to easily get floored by pretty theorems – I care about math only to the extent that it’s useful to explain the world. Second, string theory has proven useful for pushing ahead with the lesser understood aspects of quantum field theories. This seems a fruitful avenue and is certainly something to continue. However, this has nothing to do with string theory as a theory of quantum gravity and a unification of the fundamental interactions.

As far as quantum gravity is concerned, string theorists’ main argument seems to be “Well, can you come up with something better?” Then of course if someone answers this question with “Yes” they would never agree that something else might possibly be better. And why would they – there’s no evidence forcing them one way or the other.

I don’t see what one learns from discussing which theory is “better” based on philosophical or aesthetic criteria. That’s why I decided to stay out of this and instead work on quantum gravity phenomenology. As far as testability is concerned all existing approaches to quantum gravity do equally badly, and so I’m equally unconvinced by all of them. It is somewhat of a mystery to me why string theory has become so dominant.

String theorists are very proud of having a microcanonical explanation for the black hole entropy. But we don’t know whether that’s actually a correct description of nature, since nobody has ever seen a black hole evaporate. In fact one could read the firewall problem as a demonstration that indeed this cannot be a correct description of nature. Therefore, this calculation leaves me utterly unimpressed.

But let me be clear here. Nobody (at least nobody whose opinion matters) says that string theory is a research program that should just be discontinued. The question is instead one of balance – does the promise justify the amount of funding spent on it? And the answer to this question is almost certainly no.

The reason is that academia is currently organized so that it invites communal reinforcement, prevents researchers from leaving fields whose promise is dwindling, and supports a rich-get-richer trend. That institutional assessments use the quantity of papers and citation counts as a proxy for quality creates a bonus for fields in which papers can be cranked out quickly. Hence it isn’t surprising that an area whose mathematics its own practitioners frequently describe as “rich” would flourish. What does mathematical “richness” tell us about the use of a theory in the description of nature? I am not aware of any known relation.

In his book Why String Theory?, Conlon tells the history of the discipline from a string theorist’s perspective. As a counterpoint, let me tell you how a cynical outsider might tell this story:

String theory was originally conceived as a theory of the strong nuclear force, but it was soon discovered that quantum chromodynamics was more up to the task. After noting that string theory contains a particle that could be identified as the graviton, it was reconsidered as a theory of quantum gravity.

It turned out, however, that string theory only makes sense in 25 dimensions of space. To make that compatible with observations, 22 of the dimensions were moved out of sight by rolling them up (compactifying them) to a radius so small they couldn’t be observationally probed.

Next it was noted that the theory also needs supersymmetry. This brings down the number of space dimensions to 9, but also brings a new problem: The world, unfortunately, doesn’t seem to be supersymmetric. Hence, it was postulated that supersymmetry is broken at an energy scale so high we wouldn’t see the symmetry. Even with that problem fixed, however, it was quickly noticed that moving the superpartners out of direct reach would still induce flavor-changing neutral currents and allow for proton decay, both in conflict with observation. Thus, theorists invented R-parity to fix that problem.

The next problem that appeared was that the cosmological constant turned out to be positive instead of zero or negative. While a negative cosmological constant would have been easy to accommodate, string theorists didn’t know what to do with a positive one. But it only took some years to come up with an idea to make that happen too.

String theory was hoped to be a unique completion of the standard model including general relativity. Instead it slowly became clear that there is a huge number of different ways to get rid of the additional dimensions, each of which leads to a different theory at low energies. String theorists are now trying to deal with that problem by inventing some probability measure according to which the standard model is at least a probable occurrence in string theory.

So, you asked, why not string theory? Because it’s an approach that has been fixed over and over again to make it compatible with conflicting observations. Every time that’s been done, string theorists became more convinced of their ideas. And every time they did this, I became more convinced they are merely building a mathematical toy universe.

String theorists of course deny that they are influenced by anything but objective assessment. One noteworthy exception is Joe Polchinski who has considered that social effects play a role, but just came to the conclusion that they aren’t relevant. I think it speaks for his intellectual sincerity that he at least considered it.

At the Munich workshop last December, David Gross (in an exchange with Carlo Rovelli) explained that funding decisions have no influence on whether theoretical physicists chose to work in one field or the other. Well, that’s easy to say if you’re a Nobel Prize winner.

Conlon in his book provides “evidence” that social bias plays no role by explaining that there was only one string theorist on a panel that (positively) evaluated one of his grants. To begin with, anecdotes can’t replace data, and there is ample evidence that social biases are common human traits, so by default scientists should be assumed susceptible. But even taking his anecdote at face value, I’m not sure why Conlon thinks leaving decisions to non-experts limits bias. My expectation would be that it amplifies bias, because it requires drawing on simplified criteria, like the number of papers published and how often they’ve been cited. And what do those depend on? On how many people there are in the field, and how many peers favorably reviewed papers on the topic of your work.

I am listing these examples to demonstrate that it is quite common for theoretical physicists (not string theorists in particular) to dismiss the mere possibility that social dynamics influence research decisions.

How large a role do social dynamics and cognitive biases play, and how much do they slow down progress on the foundations of physics? I can’t tell you. But even though I can’t tell you how much faster progress could be, I am sure it’s slowed down. I can tell that in the same way that I can tell you diesel in Germany is sold under market value even though I don’t know the market value: I know it because it’s subsidized. And in the same way I can tell that string theory is overpopulated and its promise overestimated, because it’s an idea that benefits from biases which humans demonstrably possess. But I can’t tell you what its real value would be.

The reproduction crisis in the life sciences and psychology has spurred a debate about better measures of statistical significance. Experimentalists go to great lengths to put in place standardized procedures so as not to draw the wrong conclusions from what their apparatuses measure. In theory development we have our own crisis, but nobody talks about it. The apparatuses we use are our own brains, and the biases we should guard against are cognitive and social biases: communal reinforcement, the sunk cost fallacy, wishful thinking, and status-quo bias, to mention just the most common ones. These are presently entirely unaccounted for. Is this the reason why string theory has gathered so many followers?

Some days I side with Polchinski and Gross and don’t think it makes that much of a difference. It really is an interesting topic and it’s promising. On other days I think we’ve wasted 30 years studying bizarre aspects of a theory that doesn’t bring us any closer to understanding quantum gravity, and it’s nothing but an empty bubble of disappointed expectations. Most days I have to admit I just don’t know.

Why not string theory? Because enough is enough.

Thanks for an interesting question.

Monday, May 30, 2016

Book Review: “Why String Theory?” by Joseph Conlon

Why String Theory?
By Joseph Conlon
CRC Press (November 24, 2015)

I was sure I’d hate the book. Let me explain.

I often hear people speak about the “marketplace of ideas” as if science was a trade show where researchers sell their work. But science isn’t about manufacturing and selling products, it’s about understanding nature. And the sine qua non for evaluating the promise of an idea is objectivity.

In my mind, therefore, the absolutely last thing that scientists should engage in is marketing. Marketing, advertising, and product promotion are commercial tactics with the very purpose of affecting their targets’ objectivity. These tactics shouldn’t have any place in science.

Consequently, I have mixed feelings about scientists who attempt to convince the public that their research area is promising, with the implicit or explicit goal of securing funding and attracting students. It’s not that I have a problem with scientists who write for the public in general – I have a problem with scientists who pass off their personal opinion as fact, often supporting their conviction by quoting the number of people who share their beliefs.

In the last two decades this procedure has created an absolutely astonishing amount of so-called “science” books about string theory, supersymmetry, the multiverse, and other fantasies (note the carefully chosen placement of commas), with no other purpose than asking the reader to please continue funding fruitless avenues of research by appealing to lofty ideals like elegance and beauty.

And indeed, Conlon starts by dedicating the book to “the taxpayers of the UK without whom this book could never have been written” and then states explicitly that his goal is to win the favor of taxpayers:
“I want to explain, to my wonderful fellow citizens who support scientific research through their taxes, why string theory is so popular, and why, despite the lack of direct empirical support, it has attained the level of prominence it has.”

That’s on page six. The prospect of reading 250 pages filled with a string theorist’s attempt to lick the butts of his “wonderful fellow citizens” made me feel somewhat nauseous. I put the book aside and instead read Sean Carroll’s new book. After that I felt slightly better and made a second attempt at Why String Theory?

Once I got past the first chapter, however, the book got markedly better. Conlon keeps the introduction to basic physics (relativity and quantum theory) to an absolute minimum. After this he lays out the history of string theory, with its many twists and turns, and explains how much string theorists’ understanding of the approach has changed over the decades.

He then gets to the reasons why people work on string theory. The first reason he lists is a chapter titled “Direct Experimental Evidence for String Theory,” which consists of the single sentence “There is no direct experimental evidence for string theory.” At first I thought he wanted to point out that string theorists work on it despite the lack of evidence, and that the preceding paragraph only accidentally made it look as if he, rather cynically, meant to say that the absence of evidence is the main reason they work on it.

But actually he returns to this point later in the book (in section 10.5), where he addresses “objections made concerning connection to experiment” and points out very clearly that even though these objections are prevalent, he thinks they deserve little or no sympathy. This makes me think that maybe he indeed wanted to say that he suspects the main reason so many people work on string theory is that there’s no evidence for it. To the objection that it is “too early” to seek experimental support for string theory because the theory is not fully understood, he responds:
“The problem with this objection is that it is a time-invariant statement. It was made thirty years ago, it was made twenty years ago, it was made a decade ago, and it is made now. It is also, by observation, an objection made by those who are uninterested in observation. Muscles that are never used waste away. It is like never commencing a journey because one is always waiting for better modes of transportation, and in the end produces a community of scientists where the language of measurement and experiment is one that may be read but cannot be spoken.”
Conlon writes that he himself isn’t particularly interested in quantum gravity. His own research is on finding evidence for moduli fields in cosmology, and he has a chapter about this. He lists the usual arguments in favor of string theory: that it connects well to both general relativity and the standard model, that it’s been helpful in deriving some math theorems, and that there is now the AdS/CFT duality, with the help of which one might maybe one day be able to describe some aspect of the real world.

He somehow forgets to mention that the AdS/CFT predictions for heavy ion collisions at the LHC turned out to be dramatically wrong, and by now very few people think that the duality is of much use in this area. I actually suspect he just plainly didn’t know this. It’s not something that string theorists like to talk about. This omission is my major point of criticism. The rest of the book seems a quite balanced account, and he refrains from making cheap arguments of the type that the theory must be right because a thousand people with brains can’t be mistaken. Conlon even has a subsection addressing Witten-cult, which is rather scathing, and a hit on Arkani-Hamed gathering 5000 citations and a $3 million prize for proposing large extra dimensions (an idea that was quietly buried after the LHC ruled it out).

At the end of the book Conlon has a chapter addressing explicit criticisms – he manages to remain remarkably neutral and polite – and a “fun” chapter in which he lists different styles of doing research. Maybe there’s something wrong with my sense of humor but I didn’t find it much fun. It’s more like he is converting Kuhn’s phases of “normal science” and “revolution” into personal profiles, trying to reassure students that they don’t need to quantize gravity to get tenure.

Leaving aside Conlon’s fondness for mixing sometimes rather odd metaphors (“quantum mechanics is a jealous theory... it has spread through the population of scientific theories like a successful mutation” – “The anthropic landscape... represents incontinence of speculation joined to constipation of experiment.” – “quantum field theorists became drunk on the new wine of string theory”) and an overuse of unnecessary loanwords (in pectore, pons asinorum, affaire de cœur, lebensraum, mirabile dictu, to mention just a few), the book is reasonably well written. The reference list isn’t too extensive. That is to say, in the couple of cases in which I wanted to look up a reference it wasn’t listed, and in the one case I wanted to check a quotation it didn’t have an original source.

Altogether, Why String Theory? gives the reader a mostly fair and balanced account of string theory, and a pretty good impression for just how much the field has changed since Brian Greene’s Elegant Universe. I looked up something in Greene’s book the other day, and found him complaining that the standard model is “too flexible.” Oh, yes, things have changed a lot since. I doubt it’s a complaint any string theorist dare raise today.

In the end, I didn’t hate Conlon’s book. Maybe I’m getting older, or maybe I’m getting wiser, or maybe I’m just not capable of hating books.

[Disclaimer: Free review copy.]


Win a copy of Why String Theory by Joseph Conlon!

I had bought the book before I was sent the review copy, and so I have a second copy of the book, entirely new and untouched. You can win the book if you are the first to answer this question correctly: Who was second author on the first paper to point out that some types of neutrino detectors might also be used to directly detect certain candidate particles for dark matter? Submit answer in the comments, do not send an email. The time-stamp of the comment counts. (Please only submit an answer if you are willing to send me a postal address to which the book can be shipped.)

Update: The book is gone!

Thursday, May 26, 2016

How can we test quantum gravity?

If you have good eyes, the smallest objects you can make out are about a tenth of a millimeter, roughly the width of a human hair. Add technology, and the smallest structures we have measured so far are approximately 10⁻¹⁹ m, that’s the wavelength of the protons collided at the LHC. It has taken us about 400 years from the invention of the microscope to the construction of the LHC – 400 years to cross 15 orders of magnitude.

Quantum effects of gravity are estimated to become relevant on distance scales of approximately 10⁻³⁵ m, known as the Planck length. That’s another 16 orders of magnitude to go. It makes you wonder whether it’s possible at all, or whether all the effort to find a quantum theory of gravity is just idle speculation.
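The counting above is just base-10 logarithms, which a few lines make explicit:

```python
import math

hair_width = 1e-4          # m, smallest scale visible to the naked eye
lhc_scale = 1e-19          # m, shortest distance probed at the LHC
planck_length = 1.616e-35  # m

# Orders of magnitude crossed in ~400 years of instrument building:
crossed = math.log10(hair_width / lhc_scale)    # 15
# ... and how many remain down to the Planck length:
remaining = math.log10(lhc_scale / planck_length)  # about 16
```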

I am optimistic. The history of science is full of people who thought things impossible that have meanwhile been done: measuring the deflection of light by the sun, heavier-than-air flying machines, detecting gravitational waves. Hence, I don’t think it’s impossible to experimentally test quantum gravity. Maybe it will take some decades, or maybe it will take some centuries – but if only we keep pushing, one day we will measure quantum gravitational effects. Not by directly crossing those 16 orders of magnitude, I believe, but instead by indirect detections at lower energies.

From nothing comes nothing, though. If we don’t think about what quantum gravitational effects might look like and where they might show up, we’ll certainly never find them. But fueling my optimism is the steadily increasing interest in the phenomenology of quantum gravity, the research area dedicated to studying how best to find evidence for quantum gravitational effects.

Since there isn’t any one agreed-upon theory for quantum gravity, existing efforts to find observable phenomena focus on finding ways to test general features of the theory, properties that have been found in several different approaches to quantum gravity. Quantum fluctuations of space-time, for example, or the presence of a “minimal length” that would impose a fundamental resolution limit. Such effects can be quantified in mathematical models, which can then be used to estimate the strength of the effects and thus to find out which experiments are most promising.

Testing quantum gravity has long been thought to be out of reach of experiments, based on estimates showing that it would take a collider the size of the Milky Way to accelerate protons enough to produce a measurable amount of gravitons (the quanta of the gravitational field), or a detector the size of planet Jupiter to measure a graviton produced elsewhere. Not impossible, but clearly not something that will happen in my lifetime.

One testable consequence of quantum gravity might be, for example, the violation of the symmetry of special and general relativity, known as Lorentz-invariance. Interestingly, it turns out that violations of Lorentz-invariance are not necessarily small even if they are created at distances too short to be measurable. Instead, these symmetry violations seep into many particle reactions at accessible energies, and these have been tested to extremely high accuracy. No evidence for violations of Lorentz-invariance has been found. This might not sound like much, but knowing that this symmetry has to be respected by quantum gravity is an extremely useful guide in the development of the theory.

Other testable consequences might be in the weak-field limit of quantum gravity. In the early universe, quantum fluctuations of space-time would have led to temperature fluctuations of matter. And these temperature fluctuations are still observable today in the Cosmic Microwave Background (CMB). The imprint of such “primordial gravitational waves” on the CMB has not yet been measured (LIGO is not sensitive to them), but they are not so far off measurement precision.

A lot of experiments are currently searching for this signal, including BICEP and Planck. This raises the question whether it is possible to infer from the primordial gravitational waves that gravity must have been quantized in the early universe. Answering this question is one of the presently most active areas in quantum gravity phenomenology.

Also testing the weak-field limit of quantum gravity are attempts to bring objects into quantum superpositions that are much heavier than elementary particles. This makes the gravitational field stronger and potentially offers the chance to probe its quantum behavior. The heaviest objects that have so far been brought into superpositions weigh about a nano-gram, which is still several orders of magnitude too small to measure the gravitational field. But a group in Vienna recently proposed an experimental scheme that would allow the gravitational field to be measured more precisely than ever before. We are slowly closing in on the quantum gravitational range.

Such arguments however merely concern the direct detection of gravitons, and that isn’t the only manifestation of quantum gravitational effects. There are various other observable consequences that quantum gravity could give rise to, some of which have already been looked for, and others that we plan to look for. So far, we have only negative results. But even negative results are valuable because they tell us what properties the sought-for theory cannot have.

[From arXiv:1602.07539, for details, see here]

The weak-field limit would prove that gravity really is quantized and finally deliver the much-needed experimental evidence, confirming that we’re not just doing philosophy. However, for most of us in the field, the strong-gravity limit is more interesting. By the strong-gravity limit I mean Planckian curvature, which (not counting those galaxy-sized colliders) can only be found close to the center of black holes and towards the big bang.

(Note that in astrophysics, “strong gravity” is sometimes used to mean something different, referring to large deviations from Newtonian gravity which can be found, eg, around the horizon of black holes. In comparison to the Planckian curvature required for strong quantum gravitational effects, this is still exceedingly weak.)

Strong quantum gravitational effects could also have left an imprint in the cosmic microwave background, notably in the type of correlations that can be found in the fluctuations. There are various models of string cosmology and loop quantum cosmology that have explored the observational consequences, and proposed experiments like EUCLID and PRISM might find first hints. Also the upcoming experiments to test the 21-cm hydrogen absorption could harbor information about quantum gravity.

A somewhat more speculative idea is based on a recent finding according to which the gravitational collapse of matter might not always form a black hole, but could escape the formation of a horizon. If so, the remaining object would give us an open view on a region with quantum gravitational effects. It isn’t yet clear exactly what signals we would have to look for to find such an object, but this is a promising research direction because it could give us direct access to strong space-time curvature.

There are many other ideas out there. A large class of models for example deals with the possibility that quantum gravitational effects endow space-time with the properties of a medium. This can lead to the dispersion of light (colors running apart), birefringence (polarizations running apart), decoherence (preventing interference), or an opacity of otherwise empty space. More speculative ideas include Craig Hogan’s quest for holographic noise, Bekenstein’s table-top experiment that searches for Planck-length discreteness, or searches for evidence of a minimal length in tritium decay. Some general properties that have recently been found and that we yet have to find good experimental tests for are geometric phase transitions in the early universe, or dimensional reduction.
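The dispersion mentioned above can be put into rough numbers. In a toy model where the speed of light acquires a linear Planck-scale correction, v(E) ≈ c(1 − E/E_Planck) – an illustrative assumption, not a prediction of any specific theory – a high-energy photon trails a low-energy one emitted at the same time, and the delay accumulates over cosmological distances:

```python
c = 3.0e8         # m/s
E_planck = 1.22e19  # GeV, the Planck energy
gpc = 3.086e25    # m, one gigaparsec

def arrival_delay(energy_gev, distance_m):
    """Extra travel time of a photon of the given energy, relative to
    the low-energy limit, in this toy linear-dispersion model."""
    return (energy_gev / E_planck) * distance_m / c

# A 10 GeV gamma-ray-burst photon from a gigaparsec away lags by
# of order a tenth of a second -- small, but resolvable in principle
# against the sharp time structure of gamma-ray bursts.
delay = arrival_delay(10.0, gpc)
```

This is why distant, short, high-energy transients are the favorite hunting ground for this class of effects.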

Without doubt, there is much that remains to be done. But we’re on the way.

[This post previously appeared on Starts With A Bang.]

Tuesday, May 03, 2016

Experimental Search for Quantum Gravity 2016

I am happy to announce that this year we will run the 5th international conference on Experimental Search for Quantum Gravity here in Frankfurt, Germany. The meeting will take place Sep 19-23, 2016.

We have a (quite preliminary) website up here. Application is now open and will run through June 1st. If you're a student or young postdoc with an interest in the phenomenology of quantum gravity, this conference might be a good starting point and I encourage you to apply. We cannot afford to hand out travel grants, but we will waive the conference fee for young participants (young in terms of PhD age, not biological age).

The location of the meeting will be at my new workplace, the Frankfurt Institute for Advanced Studies, FIAS for short. When it comes to technical support, they seem considerably better organized (not to mention staffed) than my previous institution. At this stage I am thus tentatively hopeful that this year we'll both record and livestream the talks. So stay tuned, there's more to come.

Wednesday, April 27, 2016

If you fall into a black hole

If you fall into a black hole, you’ll die. That much is pretty sure. But what happens before that?

The gravitational pull of a black hole depends on its mass. At a fixed distance from the center, it isn’t any stronger or weaker than that of a star with the same mass. The difference is that, since a black hole doesn’t have a surface, the gravitational pull can continue to increase as you approach the center.

The gravitational pull itself isn’t the problem; the problem is the change in the pull, the tidal force. It will stretch any extended object, a process that goes by the technical name “spaghettification.” That’s what will eventually kill you. Whether this happens before or after you cross the horizon depends, again, on the mass of the black hole. The larger the mass, the smaller the space-time curvature at the horizon, and the smaller the tidal force.
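This scaling can be made concrete with a rough Newtonian estimate (good enough for orders of magnitude; the 1.8 m body height and the black hole masses are my illustrative choices): the horizon radius grows linearly with mass, r_s = 2GM/c², while the head-to-toe tidal acceleration goes like 2GMh/r³, so at the horizon it falls off as 1/M².

```python
G = 6.674e-11   # m^3 kg^-1 s^-2, Newton's constant
c = 2.998e8     # m/s
M_sun = 1.989e30  # kg

def tidal_at_horizon(mass_kg, height_m=1.8):
    """Newtonian estimate of the head-to-toe tidal acceleration (m/s^2)
    for a person of the given height at the Schwarzschild radius."""
    r_s = 2 * G * mass_kg / c**2
    return 2 * G * mass_kg * height_m / r_s**3

solar = tidal_at_horizon(M_sun)        # ~1e10 m/s^2: torn apart well outside
sgr_a = tidal_at_horizon(4e6 * M_sun)  # ~1e-3 m/s^2: barely noticeable
```

A billion g of differential pull at a stellar-mass horizon versus a thousandth of g at a supermassive one is the whole story behind the next paragraph.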

Leaving aside lots of hot gas and swirling particles, you have a good chance of surviving the crossing of the horizon of a supermassive black hole, like the one in the center of our galaxy. You would, however, probably be torn apart before crossing the horizon of a solar-mass black hole.
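
For a rough sense of the numbers, here is a back-of-the-envelope Newtonian estimate (rounded constants; the function name and the choice of a 1.8 m observer are my own illustrative assumptions, not from the post):

```python
# Tidal acceleration across a ~1.8 m tall observer at the horizon:
# a_tidal ≈ 2*G*M*h / r_s^3, with Schwarzschild radius r_s = 2*G*M/c^2.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
h = 1.8            # height of the observer, m
M_sun = 1.989e30   # solar mass, kg

def tidal_at_horizon(M):
    r_s = 2 * G * M / c**2
    return 2 * G * M * h / r_s**3   # note: this scales as 1/M^2

for name, M in [("solar-mass black hole", M_sun),
                ("Sgr A* (~4e6 solar masses)", 4e6 * M_sun)]:
    print(f"{name}: ~{tidal_at_horizon(M):.1e} m/s^2")
```

The solar-mass case comes out far beyond lethal, while the supermassive case is weaker than Earth’s surface gravity – which is exactly the point made above.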

It takes you a finite time to reach the horizon of a black hole. For an outside observer however, you seem to be moving slower and slower and will never quite reach the black hole, due to the (technically infinitely large) gravitational redshift. If you take into account that black holes evaporate, it doesn’t quite take forever, and your friends will eventually see you vanish. It might just take a very long time – for a solar-mass black hole, on the order of 10⁶⁷ years.
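
The evaporation timescale can be checked with the standard semi-classical Hawking lifetime formula, t = 5120·π·G²·M³/(ħc⁴) – a quick sketch with rounded constants, not something from the post itself:

```python
import math

# Hawking evaporation time t = 5120*pi*G^2*M^3 / (hbar*c^4),
# evaluated here for a solar-mass black hole.
G, c, hbar = 6.674e-11, 2.998e8, 1.055e-34
M_sun = 1.989e30
t = 5120 * math.pi * G**2 * M_sun**3 / (hbar * c**4)  # seconds
print(f"evaporation time: {t / 3.156e7:.1e} years")   # about 2e67 years
```

For a solar-mass hole this gives on the order of 10⁶⁷ years – vastly longer than the current age of the universe.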

In an article that recently appeared on “Quick And Dirty Tips” (featured by SciAm), Everyday Einstein Sabrina Stierwalt explains:
“As you approach a black hole, you do not notice a change in time as you experience it, but from an outsider’s perspective, time appears to slow down and eventually crawl to a stop for you [...] So who is right? This discrepancy, and whose reality is ultimately correct, is a highly contested area of current physics research.”
No, it isn’t. The two observers have different descriptions of the process of falling into a black hole because they both use different time coordinates. There is no contradiction between the conclusions they draw. The outside observer’s story is an infinitely stretched version of the infalling observer’s story, covering only the part before horizon crossing. Nobody contests this.

I suspect this confusion was caused by the idea of black hole complementarity. Which is indeed a highly contested area of current physics research. According to black hole complementarity the information that falls into a black hole both goes in and comes out. This is in contradiction with quantum mechanics, which forbids making exact copies of a state. The idea of black hole complementarity is that nobody can ever make a measurement to document the forbidden copying and hence, it isn’t a real inconsistency. Making such measurements is typically impossible because the infalling observer only has a limited amount of time before hitting the singularity.

Black hole complementarity is actually a pretty philosophical idea.

Now, the black hole firewall issue points out that black hole complementarity is inconsistent. Even if you can’t measure that a copy has been made, pushing the infalling information into the outgoing radiation changes the vacuum state in the horizon vicinity to a state which is no longer empty: that’s the firewall.

Be that as it may, even in black hole complementarity the infalling observer still falls in, and crosses the horizon at a finite time.

The real question that drives much current research is how the information comes out of the black hole before it has completely evaporated. It’s a topic which has been discussed for more than 40 years now, and there is little sign that theorists will agree on a solution. And why would they? Leaving aside fluid analogies, there is no experimental evidence for what happens with black hole information, and there is hence no reason for theorists to converge on any one option.

The theory assessment in this research area is purely non-empirical, to use an expression by philosopher Richard Dawid. It’s why I think if we ever want to see progress on the foundations of physics we have to think very carefully about the non-empirical criteria that we use.

Anyway, the lesson here is: Everyday Einstein’s Quick and Dirty Tips is not a recommended travel guide for black holes.

Wednesday, April 20, 2016

Dear Dr B: Why is Lorentz-invariance in conflict with discreteness?

Can we build up space-time from
discrete entities?
“Could you elaborate (even) more on […] the exact tension between Lorentz invariance and attempts for discretisation?

Best,

Noa”

Dear Noa:

Discretization is a common procedure to deal with infinities. Since quantum mechanics relates large energies to short (wave) lengths, introducing a shortest possible distance corresponds to cutting off momentum integrals. This can remove infinities that come in at large momenta (or, as the physicists say, “in the UV”).

Such hard cut-off procedures were quite common in the early days of quantum field theory. They have since been replaced with more sophisticated regularization procedures, but these don’t work for quantum gravity. Hence it is tempting to use discretization to get rid of the infinities that plague quantum gravity.

Lorentz-invariance is the symmetry of Special Relativity; it tells us how observables transform from one reference frame to another. Certain types of observables, called “scalars,” don’t change at all. In general, observables do change, but they do so under a well-defined procedure, that is, by the application of Lorentz-transformations. We call these “covariant.” Or at least we should. Most often invariance is conflated with covariance in the literature.

(To be precise, Lorentz-covariance isn’t the full symmetry of Special Relativity because there are also translations in space and time that should maintain the laws of nature. If you add these, you get Poincaré-invariance. But the translations aren’t so relevant for our purposes.)

Lorentz-transformations acting on distances and times lead to the phenomena of Lorentz-contraction and time dilation. That means observers moving at relative velocities to each other measure different lengths and time-intervals. As long as there aren’t any interactions, this has no consequences. But once you have objects that can interact, relativistic contraction has measurable consequences.

Heavy ions for example, which are collided in facilities like RHIC or the LHC, are accelerated to almost the speed of light, which results in a significant length contraction in beam direction, and a corresponding increase in the density. This relativistic squeeze has to be taken into account to correctly compute observables. It isn’t merely an apparent distortion, it’s a real effect.
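
The size of this relativistic squeeze is easy to estimate from the Lorentz factor γ = E/(mc²); the beam energies per nucleon below are approximate, illustrative figures:

```python
# Lorentz factor and longitudinal contraction for heavy-ion beams.
# Lengths along the beam direction shrink by a factor of 1/gamma.
m_nucleon = 0.938  # nucleon rest energy, GeV (approximate)
energies = {"RHIC (100 GeV/nucleon)": 100.0,
            "LHC (1380 GeV/nucleon)": 1380.0}

gammas = {name: E / m_nucleon for name, E in energies.items()}
for name, g in gammas.items():
    print(f"{name}: gamma ~ {g:.0f}, contracted to {100 / g:.2f}% of rest length")
```

At the LHC the ions are flattened to well under a thousandth of their rest-frame thickness in the beam direction – a very real pancake.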

Now suppose you have a regular cubic lattice which is at rest relative to you. Alice comes by in a space-ship at high velocity; what does she see? She doesn’t see a cubic lattice – she sees a lattice that is squeezed into one direction due to Lorentz-contraction. Which of you is right? You’re both right. It’s just that the lattice isn’t invariant under the Lorentz-transformation, and neither are any interactions with it.

The lattice can therefore be used to define a preferred frame, that is, a particular reference frame which isn’t like any other frame, violating observer independence. The easiest way to do this would be to use the frame in which the spacing is regular, i.e., your rest frame. If you compute any observables that take into account interactions with the lattice, the result will now explicitly depend on the motion relative to the lattice. Condensed matter systems are thus generally not Lorentz-invariant.

A Lorentz-contraction can convert any distance, no matter how large, into another distance, no matter how short. Similarly, it can blue-shift long wavelengths to short wavelengths, and hence can make small momenta arbitrarily large. This however runs into conflict with the idea of cutting off momentum integrals. For this reason approaches to quantum gravity that rely on discretization or analogies to condensed matter systems are difficult to reconcile with Lorentz-invariance.

So what, you may say, let’s just throw out Lorentz-invariance then. Let us just take a tiny lattice spacing so that we won’t see the effects. Unfortunately, it isn’t that easy. Violations of Lorentz-invariance, even if tiny, spill over into all kinds of observables even at low energies.

A good example is vacuum Cherenkov radiation, that is, the spontaneous emission of a photon by an electron. This effect is normally – i.e., when Lorentz-invariance is respected – forbidden due to energy-momentum conservation. It can only take place in a medium which has components that can recoil. But Lorentz-invariance violation would allow electrons to radiate off photons even in empty space. No such effect has been seen, and this leads to very strong bounds on Lorentz-invariance violation.

And this isn’t the only bound. There are literally dozens of particle interactions that have been checked for Lorentz-invariance violating contributions with absolutely no evidence showing up. Hence, we know that Lorentz-invariance, if not exact, is respected by nature to extremely high precision. And this is very hard to achieve in a model that relies on a discretization.

Having said that, I must point out that not every quantity with the dimension of length actually transforms as a distance. Thus, the existence of a fundamental length scale is not a priori in conflict with Lorentz-invariance. The best example is maybe the Planck length itself: it is defined from constants of nature that are themselves frame-independent, so while it has units of a length, it doesn’t transform as a distance. For the same reason string theory is perfectly compatible with Lorentz-invariance even though it contains a fundamental length scale.
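
That the Planck length is built only from frame-independent constants is a one-liner to check numerically (rounded textbook values):

```python
import math

# Planck length l_P = sqrt(hbar * G / c^3). Every ingredient is a
# Lorentz-invariant constant, so l_P does not contract like a distance.
G, c, hbar = 6.674e-11, 2.998e8, 1.055e-34
l_P = math.sqrt(hbar * G / c**3)
print(f"Planck length: {l_P:.2e} m")  # about 1.6e-35 m
```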

The tension between discreteness and Lorentz-invariance appears always if you have objects that transform like distances or like areas or like spatial volumes. The Causal Set approach therefore is an exception to the problems with discreteness (to my knowledge the only exception). The reason is that Causal Sets are a randomly distributed collection of (unconnected!) points with a four-density that is constant on the average. The random distribution prevents the problems with regular lattices. And since points and four-volumes are both Lorentz-invariant, no preferred frame is introduced.

It is remarkable just how difficult Lorentz-invariance makes it to reconcile general relativity with quantum field theory. The fact that no violations of Lorentz-invariance have been found and the insight that discreteness therefore seems an ill-fated approach has significantly contributed to the conviction of string theorists that they are working on the only right approach. Needless to say there are some people who would disagree, such as probably Carlo Rovelli and Garrett Lisi.

Either way, the absence of Lorentz-invariance violations is one of the prime examples that I draw upon to demonstrate that it is possible to constrain theory development in quantum gravity with existing data. Everyone who still works on discrete approaches must now make really sure to demonstrate there is no conflict with observation.

Thanks for an interesting question!

Sunday, April 03, 2016

New link between quantum computing and black hole may solve information loss problem

[image source: IBT]

If you leave the city limits of Established Knowledge and pass the Fields of Extrapolation, you enter the Forest of Speculations. As you get deeper into the forest, larger and larger trees impinge on the road, strangely deformed, knotted onto themselves, bent over backwards. They eventually grow so close that they block out the sunlight. It must be somewhere here, just before you cross over from speculation to insanity, that Gia Dvali looks for new ideas and drags them into the sunlight.

Dvali’s newest idea is that every black hole is a quantum computer. And not just any quantum computer, but a quantum computer made of a Bose-Einstein condensate that self-tunes to the quantum critical point. In one sweep, he has combined everything that is cool in physics at the moment.

This link between black holes and Bose-Einstein condensates is based on simple premises. Dvali set out to find some stuff that would share properties with black holes, notably the relation between entropy and mass (BH entropy), the decrease in entropy during evaporation (Page time), and the ability to scramble information quickly (scrambling time). What he found was that certain condensates do exactly this.

Consequently he went and conjectured that this is more than a coincidence, and that black holes themselves are condensates – condensates of gravitons, whose quantum criticality allows the fast scrambling. The gravitons equip black holes with quantum hair on horizon scale, and hence provide a solution to the black hole information loss problem by first storing information and then slowly leaking it out.

Bose-Einstein condensates on the other hand contain long-range quantum effects that make them good candidates for quantum computers. The individual q-bits that have been proposed for use in these condensates are normally correlated atoms trapped in optical lattices. Based on his analogy with black holes however, Dvali suggests to use a different type of state for information storage, which would optimize the storage capacity.

I had the opportunity to speak with Immanuel Bloch from the Max Planck Institute for Quantum Optics about Dvali’s idea, and I learned that while it seems possible to create a self-tuned condensate to mimic the black hole, addressing the states that Dvali has identified is difficult and, at least presently, not practical. You can read more about this in my recent Aeon essay.

But really, you may ask, what isn’t a quantum computer? Doesn’t anything that changes in time according to the equations of quantum mechanics process information and compute something? Doesn’t every piece of chalk execute the laws of nature and evaluate its own fate, doing a computation that somehow implies something with quantum?

That’s right. But when physicists speak of quantum computers, they mean a particularly powerful collection of entangled states, assemblies that can hold and manipulate much more information than a largely classical state. It’s this property of quantum computers specifically that Dvali claims black holes must also possess. The chalk just won’t do.

If it is correct what Dvali says, a real black hole out there in space doesn’t compute anything in particular. It merely stores the information of what fell in and spits it back out again. But a better understanding of how to initialize a state might allow us one day – give it some hundred years – to make use of nature’s ability to distribute information enormously quickly.

The relevant question is of course, can you test that it’s true?

I first heard of Dvali’s idea at a conference I attended last year in July. In his talk, Dvali spoke about possible observational evidence for the quantum hair due to modifications of orbits nearby the black hole. At least that’s my dim recollection almost a year later. He showed some preliminary results of this, but the paper hasn’t been published and the slides aren’t online. Instead, together with some collaborators, he published a paper arguing that the idea is compatible with the Hawking, Perry, Strominger proposal to solve the black hole information loss, which also relies on black hole hair.

Then in November, I heard another talk by Stefan Hofmann, who has also worked on some aspects of the idea that black holes are Bose-Einstein condensates. He told the audience that one might see a modification in the gravitational wave signal of black hole merger ringdowns – which have since indeed been detected. Again though, there is no paper.

So I am tentatively hopeful that we can look for evidence of this idea in the near future, but so far there aren’t any predictions. I have a proposal of my own to add for observational consequences of this approach, which is to look at the scattering cross-section of the graviton condensate with photons in the wave-length regime of the horizon size (i.e., radio waves). I don’t have time to really work on this, but if you’re looking for a one-year project in quantum gravity phenomenology, this one seems interesting.

Dvali’s idea has some loose ends of course. Notably it isn’t clear how the condensate escapes collapse – at least not to me, nor to anyone I have talked to. The general argument is that for the condensate the semi-classical limit is a bad approximation, and thus the singularity theorems are rather meaningless. While that might be, it’s too vague for my comfort. The idea also seems superficially similar to the fuzzball proposal, and it would be good to know the relation or differences.

After these words of caution, let me add that this link between condensed matter, quantum information, and black holes isn’t as crazy as it seems at first. In the last years, a lot of research has piled up that tightens the connections between these fields. Indeed, a recent paper by Brown et al hypothesizes that black holes are not only the most efficient storage devices but indeed the fastest computers.

It’s amazing just how much we have learned from a single solution to Einstein’s field equations, and not even a particularly difficult one. “Black hole physics” really should be a research field in its own right.

Monday, March 28, 2016

Dear Dr. B: What are the requirements for a successful theory of quantum gravity?

“I've often heard you say that we don't have a theory of quantum gravity yet. What would be the requirements, the conditions, for quantum gravity to earn the label of 'a theory' ?

I am particularly interested in the nuances on the difference between satisfying current theories (GR&QM) and satisfying existing experimental data. Because a theory often entails an interpretation whereas a piece of experimental evidence or observation can be regarded as correct 'an sich'.

That aside from satisfying the need for new predictions, etc.

Thank you,

Best Regards,

Noa Drake”

Dear Noa,

I want to answer your question in two parts. First: What does it take for a hypothesis to earn the label “theory” in physics? And second: What are the requirements for a theory of quantum gravity in particular?

What does it take for a hypothesis to earn the label “theory” in physics?

Like almost all nomenclature in physics – except the names of new heavy elements – the label “theory” is not awarded by some agreed-upon regulation, but emerges from usage in the community – or doesn’t. Contrary to what some science popularizers want the public to believe, scientists do not use the word “theory” in a very precise way. Some names stick, others don’t, and trying to change a name already in use is often futile.

The best way to capture what physicists mean by “theory” is that it describes an identification between mathematical structures and observables. The theory is the map between the math-world and the real world. A “model” on the other hand is something slightly different: it’s the stand-in for the real world that is being mapped with the help of the theory. For example the standard model is the math-thing which is mapped by quantum field theory to the real world. The cosmological concordance model is mapped by the theory of general relativity to the real world. And so on.


But of course not everybody agrees. Frank Wilczek and Sean Carroll for example want to rename the standard model to “core theory.” David Gross argues that string theory isn’t a theory, but actually a “framework.” And Paul Steinhardt insists on calling the model of inflation a “paradigm.” I have a theory that physicists like being disagreeable.

Sticking with my own nomenclature, what it takes to make a theory in physics is 1) a mathematically consistent formulation – at least in some well-controlled approximation, 2) an unambiguous identification of observables, and 3) agreement with all available data relevant in the range in which the theory applies.

These are high demands, and the difficulty of meeting them is almost always underestimated by those who don’t work in the field. Physics is a very advanced discipline and the existing theories have been confirmed to extremely high precision. It is therefore very hard to make any changes that improve the existing theories rather than screwing them up altogether.

What are the requirements for a theory of quantum gravity in particular?

The combination of the standard model and general relativity is not mathematically consistent at energies beyond the Planck scale, which is why we know that a theory of quantum gravity is necessary. The successful theory of quantum gravity must achieve mathematical consistency at all energies, or – if it is not a final theory – at least well beyond the Planck scale.
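
The Planck scale referred to here follows directly from the fundamental constants – a quick sketch with rounded values, not part of the original answer:

```python
import math

# Planck energy E_P = sqrt(hbar * c^5 / G), the scale beyond which the
# combination of the standard model and general relativity breaks down.
G, c, hbar = 6.674e-11, 2.998e8, 1.055e-34
E_P_joule = math.sqrt(hbar * c**5 / G)
E_P_GeV = E_P_joule / 1.602e-10   # convert joules to GeV
print(f"Planck energy: {E_P_GeV:.2e} GeV")  # about 1.2e19 GeV
```

Compare this with the ~10⁴ GeV reach of the LHC and you see why quantum gravity is so hard to probe directly.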

If you quantize gravity like the other interactions, the theory you end up with – perturbatively quantized gravity – breaks down at high energies; it produces nonsensical answers. In physics parlance, high energies are often referred to as “the ultra-violet” or “the UV” for short, and the missing theory is hence the “UV-completion” of perturbatively quantized gravity.

At the energies that we have tested so far, quantum gravity must reproduce general relativity with a suitable coupling to the standard model. Strictly speaking it doesn’t have to reproduce these models themselves, but only the data that we have measured. But since there is such a lot of data at low energies, and we already know this data is described by the standard model and general relativity, we don’t try to reproduce each and every observation. Instead we just try to recover the already known theories in the low-energy approximation.

That the theory of quantum gravity must remove inconsistencies in the combination of the standard model and general relativity means in particular it must solve the black hole information loss problem. It also means that it must produce meaningful answers for the interaction probabilities of particles at energies beyond the Planck scale. It is furthermore generally believed that quantum gravity will avoid the formation of space-time singularities, though this isn’t strictly speaking necessary for mathematical consistency.

These requirements are very strong and incredibly hard to meet. There are presently only a few serious candidates for quantum gravity: string theory, loop quantum gravity, asymptotically safe gravity, causal dynamical triangulation, and, somewhat down the line, causal sets and a collection of emergent gravity ideas.

Among those candidates, string theory and asymptotically safe gravity have a well-established compatibility with general relativity and the standard model. Of these two, string theory is favored by the vast majority of physicists in the field, primarily because it has given rise to more insights and contains more internal connections. Whenever I ask someone what they think about asymptotically safe gravity, they tell me it would be “depressing” or “disappointing.” I know, it sounds more like psychology than physics.

Having said that, let me mention for completeness that, based on purely logical reasoning, it isn’t necessary to find a UV-completion for perturbatively quantized gravity. Instead of quantizing gravity at high energies, you can ‘unquantize’ matter at high energies, which also solves the problem. Of all existing attempts to remove the inconsistencies that arise when combining the standard model with general relativity, this is possibly the most unpopular option.

I do not think that the data we have so far plus the requirement of mathematical consistency will allow us to derive one unique theory. This means that without additional data physicists have no reason to ever converge on any one approach to quantum gravity.

Thank you for an interesting question!

Tuesday, March 15, 2016

Researchers propose experiment to measure the gravitational force of milli-gram objects, reaching almost into the quantum realm.

Neutrinos, gravitational waves, light deflection on the sun – the history of physics is full of phenomena once believed immeasurably small but now yesterday’s news. And on the list of impossible things turned possible, quantum gravity might be next.

Quantum gravitational effects have widely been believed inaccessible to experiment because enormously high energy densities are required to make them comparable in size to other quantum effects. This argument however neglects that quantum effects of gravity can also become relevant for massive objects in quantum superpositions. Once we are able to measure the gravitational pull of an object that is in a superposition of two different places, we can determine whether the gravitational field is in a quantum superposition as well.

This neat idea has two problematic aspects. First, since gravity is very weak, measuring gravitational fields of small objects is extremely difficult. And second, bringing massive objects into quantum states is hard because the states rapidly decohere due to interaction with the environment. However, technological advances on both aspects of the problem have been stunning during the last decade.

In two previous posts we discussed some examples of massive quantum oscillators that can create location superpositions of objects as heavy as a nano-gram. The objects under consideration here are typically small disks made of silicon that are bombarded with laser light while trapped between two mirrors. A nano-gram might not sound like much, but compared to the masses of elementary particles that’s enormous.

Meanwhile, progress on the other aspect of the problem - measuring tiny gravitational fields – has also been remarkable. Currently, the smallest mass whose gravitational pull has been measured is about 90g. But a recent proposal by the group of Markus Aspelmeyer in Vienna lays out a method for measuring the gravitational force of masses as small as a few milli-gram.
    A micromechanical proof-of-principle experiment for measuring the gravitational force of milligram masses
    Jonas Schmöle, Mathias Dragosits, Hans Hepach, Markus Aspelmeyer
    arXiv:1602.07539 [physics.ins-det]

Their proposal relies on a relatively new field of technology that employs micro-mechanical devices, which basically means you make your whole measurement apparatus as small as you can, piling atom upon atom. This trend, which has itself become possible only through the nanotechnology required to design these devices, allows measurements of unprecedented precision.

The smallest force that has so far been measured with nano-devices is around a zepto-Newton (zepto is 10⁻²¹). That’s not yet the world record in tiny-force measurements, which is currently held by a group in Berkeley and lies at about a yocto-Newton (that’s 10⁻²⁴). But the huge benefit of the nano-devices is that you can get them close to the probe, whereas the record-holding experiment relies on precisely tracking the motion of a cloud of atoms in a trap. The cloud-tracking not only makes it difficult to scale up the mass without ruining precision; the necessity to trap the particles also means that it’s difficult to get the source of the force-field close to the probe. Micro-mechanical devices, in contrast, do not have the same limitations and thus lend themselves better to the task of measuring the gravitational force exerted by quantum systems.
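
To see the orders of magnitude involved, one can compare the Newtonian attraction between two milligram masses with the quoted sensitivities – the masses and separation here are my own illustrative choices, not numbers from the paper:

```python
# Newtonian attraction between two 1 mg masses at 1 mm center distance.
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
m = 1e-6        # 1 milligram in kg
r = 1e-3        # 1 millimeter in m
F = G * m * m / r**2
print(f"F ~ {F:.1e} N")  # a few 1e-17 N, several orders of magnitude above
                         # the ~1e-21 N (zepto-Newton) sensitivity quoted above
```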

The Aspelmeyer group sketches their experiment as shown in the figure below

[From arXiv:1602.07539]

The blue circles are the masses whose gravitational interaction one wants to measure, with the source mass to the right and the test-mass to the left. The test-mass is attached to the micro-mechanical oscillator, whereas the source-mass is driven by another oscillator at a frequency close to the system’s resonance. The gravitational pull between the two masses transfers the oscillation of the source-mass to the test-mass, where it can be picked up by the detector.

In their paper, the experimentalists argue that it should be possible by this method to measure the gravitational force of a source mass not heavier than a few milli-grams. And that’s the conservative estimate. With better detector efficiency even that limit could be improved on.

There are still a few orders of magnitude between a milli-gram and a nano-gram, which is the current maximum mass for which quantum superpositions have been achieved. But in typical estimates for quantum gravitational effects you end up at least 30 orders of magnitude away from measurement precision. Now we are talking about five orders of magnitude – and that in a field with rapid technological developments for which there is no fundamental limit in sight.

What is most remarkable about this development is that this proposal relies on technology that until a few years ago literally nobody in quantum gravity ever talked about. It’s not even that the technological development has been faster than anticipated, it’s a possibility that plainly wasn’t on the radar. Now there is a Nobel Prize waiting here, for the first experimental measurement of quantum gravitational effects.

And as the Prize comes within reach, competition will speed up the pace. So stay tuned, I am sure we will hear more about this soon.

Friday, February 05, 2016

Much Ado around Nothing: The Cosmological non-Constant Problem

Tl;dr: Researchers put forward a theoretical argument that new physics must appear at energies much lower than commonly thought, barely beyond the reach of the LHC.

The cosmological constant is the worst-ever prediction of quantum field theory, infamously off by 120 orders of magnitude. And as if that wasn’t embarrassing enough, this gives rise to, not one, but three problems: Why is the measured cosmological constant neither 1) huge nor 2) zero, and 3) Why didn’t this occur to us a billion years earlier? With that, you’d think that physicists have their hands full getting zeroes arranged correctly. But Niayesh Afshordi and Elliot Nelson just added to our worries.
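
The infamous mismatch comes from comparing the Planck density with the observed vacuum energy density – a naive estimate with rounded constants, where the observed density is an approximate input of mine, not a number from the paper:

```python
import math

# Naive QFT estimate: cut the vacuum energy off at the Planck scale and
# compare with the observed dark-energy density.
G, c, hbar = 6.674e-11, 2.998e8, 1.055e-34
rho_planck = c**5 / (hbar * G**2)   # Planck mass density, kg/m^3
rho_lambda = 6e-27                  # observed vacuum mass density, kg/m^3
orders = math.log10(rho_planck / rho_lambda)
print(f"mismatch: about 10^{orders:.0f}")
```

Depending on the exact conventions one lands somewhere between 10^120 and 10^123 – either way, it earns the title of worst-ever prediction.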

In a paper that made it third place of this year’s Buchalter Cosmology Prize, Afshordi and Nelson pointed out that the cosmological constant, if it arises from the vacuum energy of matter fields, should be subject to quantum fluctuations. And these fluctuations around the average are still large even if you have managed to get the constant itself to be small.

The cosmological constant, thus, is not actually constant. And since matter curves space-time, the matter fluctuations lead to space-time fluctuations – which can screw with our cosmological models. Afshordi and Nelson dubbed it the “Cosmological non-Constant Problem.”

But there is more to their argument than just adding to our problems because Afshordi and Nelson quantified what it takes to avoid a conflict with observation. They calculate the effect of stress-energy fluctuations on the space-time background, and then analyze what consequences this would have for the gravitational interaction. They introduce as a free parameter an energy scale up to which the fluctuations abound, and then contrast the corrections from this with observations, like for example the CMB power spectrum or the peculiar velocities of galaxy clusters. From these measurements they derive bounds on the scale at which the fluctuations must cease, and thus, where some new physics must come into play.

They find that the scale beyond which we should already have seen the effect of the vacuum fluctuations is about 35 TeV. If their argument is right, this means something must happen either to matter or to gravity before reaching this energy scale; the option the authors advocate in their paper is that physics becomes strongly coupled below this scale (thus invalidating the extrapolation to larger energies, removing the problem).

Unfortunately, the LHC will not be able to reach all the way up to 35 TeV. But a next larger collider – and we all hope there will be one! – almost certainly would be able to test the full range. As Niayesh put it: “It’s not a problem yet” – but it will be a problem if there is no new physics before getting all the way up to 35 TeV.

I find this an interesting new twist on the cosmological constant problem(s). Something about this argument irks me, but I can’t quite put a finger on it. If I have an insight, you’ll hear from me again. Just generally I would caution you to not take the exact numerical value too seriously because in this kind of estimate there are usually various places where factors of order one might come in.

In summary, if Afshordi and Nelson are right, we’ve been missing something really essential about gravity.

Monday, January 25, 2016

Is space-time a prism?

Tl;dr: A new paper demonstrates that quantum gravity can split light into spectral colors. Gravitational rainbows are almost certainly undetectable on cosmological scales, but the idea might become useful for Earth-based experiments.

Einstein’s theory of general relativity still stands apart from the other known forces by its refusal to be quantized. Progress in finding a theory of quantum gravity has stalled because of the complete lack of data – a challenging situation that physicists have never encountered before.

The main problem in measuring quantum gravitational effects is the weakness of gravity. Estimates show that directly testing its quantum effects would require detectors the size of the planet Jupiter or particle accelerators the size of the Milky Way. Thus, experiments to guide theory development are unfeasible. Or so we’ve been told.

But gravity is not a weak force – its strength depends on the masses between which it acts. (Indeed, that is the very reason gravity is so difficult to quantize.) Saying that gravity is weak makes sense only when referring to a specific mass, like that of the proton, for example. We can then compare the strength of gravity to the strength of the other interactions, demonstrating its relative weakness – a puzzling fact known as the “hierarchy problem.” But that the strength of gravity depends on the particles’ masses also means that quantum gravitational effects are not generally weak: their magnitude, too, depends on the gravitating masses.
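To put a number on this relative weakness (a back-of-the-envelope illustration of my own, not from the paper discussed below): the ratio of the gravitational to the electrostatic force between two protons is

```latex
\frac{F_{\rm grav}}{F_{\rm em}}
  = \frac{G\, m_p^2}{e^2/(4\pi\varepsilon_0)}
  \approx \frac{(6.7\times 10^{-11})\,(1.7\times 10^{-27})^2}
               {(9.0\times 10^{9})\,(1.6\times 10^{-19})^2}
  \approx 8\times 10^{-37},
```

independent of the distance, since both forces fall off with the inverse square of it. Replace the protons with kilogram-sized masses and gravity is no longer weak at all.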

To be more precise, one should thus say that quantum gravity is hard to detect because, if an object is massive enough to have large gravitational effects, then its quantum properties are negligible and don’t cause quantum behavior of space-time. General relativity, however, acts in two ways: matter affects space-time, and space-time affects matter. So the reverse is also true: if the dynamical background of general relativity has, for some reason, an intrinsic quantum uncertainty, then this will affect the matter moving in this space-time – in a potentially observable way.

Rainbow gravity, proposed in 2003 by Magueijo and Smolin, is based on this idea, that the quantum properties of space-time could noticeably affect particles propagating in it. In rainbow gravity, space-time itself depends on the particle’s energy. In particular, light of different energies travels at different speeds, splitting up into different colors – hence the name. It’s a nice idea, but unfortunately the theory is internally inconsistent, and so far nobody has managed to make much sense of it.
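For concreteness: in the original Magueijo-Smolin formulation, the energy dependence is encoded in a modified dispersion relation of the schematic form

```latex
E^2\, f^2\!\left(E/E_{\rm Pl}\right) \;-\; p^2 c^2\, g^2\!\left(E/E_{\rm Pl}\right) \;=\; m^2 c^4,
```

where the functions f and g approach 1 for energies far below the Planck energy E_Pl, so that special relativity is recovered at low energies, while near the Planck scale photons of different energies effectively see different metrics.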

First, let us note that already in general relativity the background of course depends on the energy of the particles in it, and this should certainly carry over into quantum gravity. More precisely, though, space-time depends not on the energy but on the energy density of the matter in it. So this cannot give rise to rainbow gravity. Worse still, because of this, general relativity is in outright conflict with rainbow gravity.

Second, an energy-dependent metric can be given meaning in the framework of asymptotically safe gravity, but this is not what rainbow gravity is about either. Asymptotically safe gravity is an approach to quantum gravity in which space-time depends on the energy by which it is probed. The energy in rainbow gravity, however, is not the energy by which space-time is probed (which is observer-independent), but supposedly the energy of a single particle (which is observer-dependent).

Third, the whole idea crumbles to dust once you start wondering how the particles in rainbow gravity are supposed to interact. You need space-time to define “where” and “when.” If each particle has its own notion of where and when, the requirement that an interaction be local, rather than “spooky” action at a distance, can no longer be fulfilled.

In a paper which recently appeared in PLB (arXiv version here), three researchers from the University of Warsaw have made a new attempt to give meaning to rainbow gravity. While it doesn’t really solve all problems, it makes considerably more sense than the previous attempts.

In their paper, the authors look at small (scalar) perturbations over a cosmological background, that is, at modes with different energies. They assume there is some theory of quantum gravity which dictates what the background does, but do not specify this theory. They then ask what happens to perturbations traveling in this background and derive equations for each mode of the perturbation. Finally, they demonstrate that these equations can be reformulated so that, effectively, each perturbation travels in a space-time which depends on the perturbation’s own energy – a variant of rainbow gravity.

The unknown theory of quantum gravity only enters into the equations by an average over the quantum states of the background’s dynamical variables. That is, if the background is classical and in one specific quantum state, gravity doesn’t cause any rainbows, which is the usual state of affairs in general relativity. It is the quantum uncertainty of the space-time background that gives rise to rainbows.

This type of effective metric makes somewhat more sense to me than the previously considered scenarios. In this new approach, it is not the perturbation itself that causes the quantum effect (which would be highly non-local and extremely suspicious). Instead the particle merely acts as a probe for the background (a quite common approximation that neglects backreaction).

Unfortunately, one must expect the quantum uncertainty of space-time to be extremely tiny and undetectable. A long time has passed since quantum gravitational effects were strong in the very early universe, and they have long since decohered. Of course we don’t know this with certainty, so looking for such effects is generally a good idea. But I don’t think it likely we’d find something here.

The situation looks somewhat better, though, for a case not discussed in the paper, which is a quantum uncertainty in space-time caused by massive particles with a large position uncertainty. I discussed this possibility in this earlier post, and it might be that the effect considered in the new paper can serve as a way to probe it. This would, however, require knowing what happens not to background perturbations but to other particles traveling in this background, which calls for a different approach than the one used in this paper.

I am not really satisfied with this version of rainbow gravity because I still don’t understand how particles would know where to interact, or which effective background to travel in if several of them are superposed, which seems somewhat of a shortcoming for a quantum theory. But this version isn’t quite as nonsensical as the previous one, so let me say I am cautiously hopeful that this idea might one day become useful.

In summary, the new paper demonstrates that gravitational rainbows might appear in quantum gravity under quite general circumstances. It might be an interesting contribution that, with further work, could become useful in the search for experimental evidence of quantum gravity.

Note added: The paper deals with a FRW background and thus trivially violates Lorentz-invariance.

Thursday, January 07, 2016

More information emerges about new proposal to solve black hole information loss problem

Soft hair. Redshifted.

Last August, Stephen Hawking announced that he had been working with Malcolm Perry and Andrew Strominger on the black hole information loss problem, and that they were closing in on a solution. But little was explained, other than that this solution rests on a symmetry group by the name of supertranslations.

Yesterday, Hawking, Perry, and Strominger posted a new paper on the arXiv that fills in a little more detail:
    Soft Hair on Black Holes
    Stephen W. Hawking, Malcolm J. Perry, Andrew Strominger
    arXiv:1601.00921
I haven’t had much time to think about this, but I didn’t want to leave you hanging, so here is a brief summary.

First of all, the paper seems only a first step in a longer argument. Several relevant questions are not addressed and I assume further work will follow. As the authors write: “Details will appear elsewhere.”

The present paper does not study information retrieval in general. It instead focuses on a particular type of information, the one contained in electrically charged particles. The benefit in doing this is that the quantum theory of electric fields is well understood.

Importantly, they are looking at black holes in asymptotically flat (Minkowski) space, not in asymptotically Anti-de-Sitter (AdS) space. This is relevant because string theorists believe that the black hole information loss problem doesn’t exist in asymptotically AdS space. They don’t know, however, how to extend this argument to asymptotically flat space or to space with a positive cosmological constant. To the best of present knowledge we don’t live in AdS space, so understanding the case with a positive cosmological constant is necessary to describe what happens in the universe we actually inhabit.

In the usual treatment, a black hole retains only the net electric charge of the particles that fall in. The total charge is one of the three classical black hole “hairs,” next to mass and angular momentum. But all other details about the charges (e.g., in which chunks they came in) are lost: there is no way to store anything in or on an object that has no features, no “hairs.”

In the new paper, the authors argue that the entire information about the infalling charges is stored on the horizon in the form of “soft photons” – photons of zero energy. These photons are the “hair” which was previously believed to be absent.

Since these photons can carry information but have zero energy, the authors conclude that the vacuum is degenerate. “Degenerate” means that several distinct quantum states share the same energy. There are thus different vacuum states which can surround the black hole, and so the vacuum can hold and release information.

It is normally assumed that the vacuum state is unique. If it is not, this allows the outgoing radiation (which is the ingoing vacuum) to carry information. A vacuum degeneracy is thus a loophole in the argument, originally made by Hawking, according to which information must get lost.

What the “soft photons” are isn’t further explained in the paper; they are simply identified with the action of certain operators and are supposedly the Goldstone bosons of a spontaneously broken symmetry. Or rather, of an infinite number of symmetries that, basically, belong to the conserved charges of something akin to multipole moments. It sounds plausible, but the interpretation eludes me. I haven’t yet read the relevant references.

I think the argument goes basically like this: we can expand the electric field in all these (infinitely many) higher moments and show that each of them is associated with a conserved charge. Since the charges are conserved, the black hole can’t destroy them; consequently, they must be maintained somehow. In the presence of a horizon, future infinity is not a Cauchy surface, so we add the horizon as a boundary. And on this additional boundary we put the information that we know can’t get lost – which is what the soft photons are good for.

The new paper adds to Hawking’s previous short note by providing an argument for why the amount of information that can be stored this way by the black hole is not infinite, but instead bounded by the Bekenstein-Hawking entropy (i.e., proportional to the surface area). This is an important step to assure this idea is compatible with everything else we know about black holes. Their argument, however, is operational, not conceptual: it is based on saying, not that the excess degrees of freedom don’t exist, but that they cannot be used by infalling matter to store information. Note that if this argument is correct, the Bekenstein-Hawking entropy does not count the microstates of the black hole; it instead sets an upper limit on the possible number of microstates.
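For reference, the Bekenstein-Hawking entropy that serves as the upper limit here is, for a horizon of area A,

```latex
S_{\rm BH} \;=\; \frac{k_B\, c^3 A}{4\, \hbar\, G} \;=\; \frac{k_B\, A}{4\, \ell_{\rm Pl}^2},
```

that is, one unit of entropy per four Planck areas of horizon surface – which is why any proposal for where a black hole’s information resides has to reproduce a scaling with area rather than volume.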

The authors don’t explain just how the information becomes physically encoded in the outgoing radiation, aside from writing down an operator. Neither, for that matter, do they demonstrate that all of the information of the initial state can actually be stored and released by this method. Focusing on photons, of course, they can’t do this anyway. But they don’t offer an argument for how it can be extended to all degrees of freedom. So, needless to say, I remain skeptical that they can live up to the promise.

In particular, I still don’t see that the conserved charges they are referring to actually encode all the information that’s in the field configuration. For all I can tell, they only encode the information in the angular directions, not the information in the radial direction. If I were to throw in two concentric shells of matter, I don’t see how the asymptotic expansion could possibly capture the difference between two shells and one shell, as long as the total charge (or mass) is identical. The only way I see around this issue is to simply postulate that the boundary at infinity does indeed contain all the information. But that, in turn, we only know how to make work in AdS space. (At least it’s believed to work in this case.)

Also, the argument for why the charges on the horizon are bounded, with the limit reproducing the Bekenstein-Hawking entropy, irks me. I would have expected the argument for the bound to take into account that not all configurations one can encode at infinite distance will actually go on to form black holes.

Having said that, I think it’s correct that a degeneracy of the vacuum state would solve the black hole information loss problem. It’s such an obvious solution that you have to wonder why nobody thought of it before – except that I thought of it before. In a note from 2012, I showed that a vacuum degeneracy is the conclusion one is forced to draw from the firewall problem. And in a follow-up paper I demonstrated explicitly how this solves the problem. I didn’t have a mechanism, though, to transfer the information into the outgoing radiation. So now I’m tempted to look at this, despite my best intentions not to touch the topic again...

In summary, I am not at all convinced that the new idea proposed by Hawking, Perry, and Strominger solves the information loss problem. But it seems an interesting avenue that is worth further exploration. And I am sure we will see further exploration...

Monday, January 04, 2016

Finding space-time quanta in the cosmic microwave background: Not so simple

“Final theory” is such a misnomer. The long sought-after unification of Einstein’s General Relativity with quantum mechanics would not be an end, it would be a beginning. A beginning to unravel the nature of space and time, and also a beginning to understand our own beginning – the origin of the universe.

The biggest problem physicists face while trying to find such a theory of quantum gravity is the lack of experimental guidance. The energy necessary to directly test quantum gravity is enormous, and far beyond what we can achieve on Earth. But for cosmologists, the universe is the laboratory. And the universe knows how to reach such high energies. It’s been there, it’s done it.

Our universe was born when quantum gravitational effects were strong. Looking back in time for traces of these effects is therefore one of the most promising, if not the most promising, place to find experimental evidence for quantum gravity. But if it was simple, it would already have been done.

The first issue is that, lacking a theory of quantum gravity, nobody knows how to describe the strong quantum gravitational effects in the early universe. This is the area where phenomenological model building becomes important. But this brings up the next difficulty, which is that the realm of strong quantum gravity lies even before inflation – the early phase in which the universe expanded exponentially fast – and neither today’s nor tomorrow’s observations will pin down any one particular model.

There is another option, though: focusing on the regime where quantum gravitational effects are weak, yet still strong enough to affect matter. In this regime, relevant during and towards the end of inflation, we know how the theory works. The mathematics used to treat the quantum properties of space-time during this period is well understood, because such small perturbations can be dealt with in almost the same way as all other quantum fields.

Indeed, the weak quantum gravity approximation is routinely used in the calculation of today’s observables, such as the spectrum of the cosmic microwave background. That is right – cosmologists do actually use quantum gravity. It becomes necessary because, according to the currently most widely accepted models, inflation is driven by a quantum field – the “inflaton” – whose fluctuations go on to seed the structures we observe today. The quantum fluctuations of the inflaton cause quantum fluctuations of space-time. And these, in turn, remain visible today in the large-scale distribution of matter and in the cosmic microwave background (CMB).
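The standard slow-roll result that enters such calculations is the nearly scale-invariant power spectrum of the curvature perturbations,

```latex
\mathcal{P}_{\zeta}(k) \;=\; \left.\frac{H^2}{8\pi^2\, \epsilon\, M_{\rm Pl}^2}\right|_{k = aH},
```

where H is the Hubble rate during inflation, ε the slow-roll parameter, and M_Pl the reduced Planck mass, with each mode evaluated at horizon crossing. Note that ℏ and G both appear here (through M_Pl), which is the sense in which the observed CMB spectrum is a weak-quantum-gravity prediction.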

This is why last year’s claim by the BICEP collaboration – that they had observed the CMB imprint left by gravitational waves from the early universe – was touted by some media outlets as evidence for quantum gravity. But the situation is not so simple. Let us assume they had indeed measured what they originally claimed. Even then, obtaining correct predictions from a theory that was quantized doesn’t demonstrate that the correct theory must have been quantized. To demonstrate that space-time must have had quantum behavior in the early universe, we must instead find an observable that could not have been produced by any unquantized theory.

In recent months, two papers appeared that studied this question and analyzed the prospects of finding evidence for quantum gravity in the CMB. The conclusions, however, are in both cases rather pessimistic.

The first paper is “A model with cosmological Bell inequalities” by Juan Maldacena. Maldacena tries to construct a Bell-type test that could be used to rule out a non-quantum origin of the signatures that are left over today from the early universe. The problem is that, once inflation ends, only the classical distribution of the originally quantum fluctuations goes on to enter the observables, like the CMB temperature fluctuations. This makes any Bell-type setup with detectors in the current era impossible, because the signal is long gone.
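For orientation, the inequality such a cosmological test would have to violate is of the usual CHSH form,

```latex
\left| \langle A_1 B_1 \rangle + \langle A_1 B_2 \rangle + \langle A_2 B_1 \rangle - \langle A_2 B_2 \rangle \right| \;\le\; 2,
```

where A_i and B_j are the outcomes of measurements with two possible settings on the two subsystems. Any classical (local hidden-variable) theory obeys the bound, while quantum mechanics allows correlations up to 2√2 – which is why a violation would rule out an unquantized origin.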

Maldacena refuses to be discouraged by this and instead tries to find a way in which another field, present during inflation, plays the role of the detector in the Bell-experiment. This additional field could then preserve the information about the quantum-ness of space-time. He explicitly constructs such a model with an additional field that serves as detector, but calls it himself “baroque” and “contrived.” It is a toy-model to demonstrate there exist cases in which a Bell-test can be performed on the CMB, but not a plausible scenario for our universe.

I find the paper nevertheless interesting as it shows what it would take to use this method and also exhibits where the problem lies. I wish there were more papers like this, where theorists come forward with ideas that didn’t work, because these failures are still a valuable basis for further studies.

The second paper is “Quantum Discord of Cosmic Inflation: Can we Show that CMB Anisotropies are of Quantum-Mechanical Origin?” by Jerome Martin and Vincent Vennin. The authors of this paper don’t rely on the Bell-type test specifically, but instead try to measure the “quantum discord” of the CMB temperature fluctuations. The quantum discord, in a nutshell, measures the quantum-ness in the correlations of a system. The observables they look at are firstly the CMB two-point correlations and later also higher correlation functions.

The authors address the question in two steps. In the first step they ask whether the CMB observations can also be reproduced in the standard treatment if the state has little or no quantum correlations, i.e., if one has a ‘classical state’ (in terms of correlations) in a quantum theory. They find that, as far as existing observables are concerned, the modifications due to the lack of quantum correlations are present but unobservable.
    “[I]n practice, the difference between the quantum and the classical results is tiny and unobservable probably forever.”
They are tentatively hopeful that the two cases might become distinguishable with higher-order correlation functions. On these correlations, experimentalists have so far only very little data, but it is a general topic of interest and future missions will undoubtedly sharpen the existing constraints. In the present work, the authors however do not quantify the predictions, but rather defer to future work: “[I]t remains to generate templates […] to determine whether such a four-point function is already excluded or not.”

The second step is to study whether the observed correlations could be created by a theory that is classical to begin with, so that the fluctuations are stochastic. They demonstrate that this can always be achieved, and thus that there is no way to distinguish the two cases. To arrive at this conclusion, they first derive the equations for the correlations in the unquantized case, then demand that they reproduce those of the quantized case, and then argue that these equations can always be fulfilled.
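Their point that two-point functions alone cannot settle the matter can be illustrated with a toy calculation (my own sketch, not the authors’ computation): any target spectrum predicted by the quantum theory can be matched by a classical Gaussian random process whose mode variances are simply set equal to that spectrum.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy "quantum" prediction: a nearly scale-invariant power spectrum
# P(k) ~ k^(n_s - 1) for a handful of Fourier modes, with n_s ~ 0.96.
n_s = 0.96
k = np.array([0.01, 0.05, 0.1, 0.5])
P_quantum = k ** (n_s - 1.0)

# Classical stochastic mimic: draw each mode amplitude from a Gaussian
# whose variance is set to the target spectrum.  The ensemble-averaged
# two-point function then matches the "quantum" one by construction.
n_realizations = 200_000
amplitudes = rng.normal(0.0, np.sqrt(P_quantum),
                        size=(n_realizations, k.size))
P_classical = amplitudes.var(axis=0)

# The measured spectra agree to within sampling noise, illustrating why
# two-point functions alone cannot distinguish the two scenarios.
rel_error = np.abs(P_classical - P_quantum) / P_quantum
print(rel_error.max())  # small, of order 1/sqrt(n_realizations)
```

The numbers here (the modes k, the spectral index) are placeholders; the structure of the argument is what matters: matching variances mode by mode is all it takes to fake the two-point statistics, so one has to go to higher correlation functions, or to genuinely quantum observables, to tell the cases apart.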

On the latter point I am, maybe uncharacteristically, less pessimistic than the authors themselves because their general case might be too general. Combining a classical theory with a quantum field gives rise to a semi-classical set of equations that lead to peculiar violations of the uncertainty principle, and an entirely classical theory would need a different mechanism to even create the fluctuations. That is to say, I believe that it might be possible to further constrain the prospects of unquantized fluctuations if one takes into account other properties that such models necessarily must have.

In summary, I have to conclude that we still have a long way to go before we can demonstrate that space-time must have been quantized in the early universe. Nevertheless, I think this is one of the most promising avenues to pin down the first experimental signature of quantum gravity.