Showing posts with label Particle Physics. Show all posts

Wednesday, January 13, 2016

Book review: “From the Great Wall to the Great Collider” by Nadis and Yau

From the Great Wall to the Great Collider: China and the Quest to Uncover the Inner Workings of the Universe
By Steve Nadis and Shing-Tung Yau
International Press of Boston (October 23, 2015)

Did you know that particle physicists like the Chinese government’s interest in building the next larger particle collider? If not, then this neat little book about the current plans for the Great Collider, aka “Nimatron,” is just for you.

Nadis and Yau begin their book laying out the need for a larger collider, followed by a brief history of accelerator physics that emphasizes the contribution of Chinese researchers. Then come two chapters about the hunt for the Higgs boson, the LHC’s success, and a brief survey of beyond the standard model physics that focuses on supersymmetry and extra dimensions. The reader then learns about other large-scale physics experiments that China has run or is running, and about the currently discussed options for the next larger particle accelerator. Nadis and Yau don’t waste time discussing details of all accelerators that are presently considered, but get quickly to the point of laying out the benefits of a circular 50 or even 100 TeV collider in China.

And the benefits are manifold. The favored location for the gigantic project is Qinhuangdao, which is “an attractive destination that might appeal to foreign scientists” because, among other things, “its many beaches [are] ranked among the country’s finest,” “the countryside is home to some of China’s leading vineyards” and even the air quality is “quite good” at least “compared to Beijing.” Book me in.

The authors make a good case that both the world and China only have to gain from the giant collider project. China because “one result would likely be an enhancement of national prestige, with the country becoming a leader in the field of high-energy physics and perhaps eventually becoming the world center for such research. Improved international relations may be the most important consequence of all.” And the rest of the world benefits because, besides preventing thousands of particle physicists from boredom, “civil engineering costs are low in the country – much cheaper than those in many Western countries.”

The book is skillfully written with scientific explanations that are detailed, yet not overly technical, and much space is given to researchers in the field. Nadis and Yau quote whoever might help get their message across: David Gross, Lisa Randall, Frank Wilczek, Don Lincoln, Don Hopper, Joseph Lykken, Nima Arkani-Hamed, Nathan Seiberg, Martinus Veltman, Steven Weinberg, Gordon Kane, John Ellis – everybody gets a say.

My favorite quote is maybe that by Henry Tye, who argues that the project is a good investment because “the worldwide impact of a collider is much bigger than if the money were put into some other area of science,” since “even if China were to spend more than the United States in some field of science and engineering other than high-energy physics, US professors would still do their research in the US.” This quote sums up the authors’ investigation of whether such a major financial commitment might have a larger payoff were it invested in any other research area.

Don’t get me wrong there, if the Chinese want to build a collider, I think that’s totally great and an awesome contribution to knowledge discovery and the good of humanity, the forgiveness of sins, the resurrection of the body, and the life everlasting, amen. But there’s a real discussion to be had here about whether building the next bigger ring-thing is where the money should flow, or whether putting a radio telescope on the Moon or a gravitational wave interferometer into space would bring more bang for the Yuan. Unfortunately, you’re not going to find that discussion in Nadis and Yau’s book.

Aside: The print has smear-stripes. Yes, that puts me in a bad mood.

In summary, this book will come in very handy next time you have to convince a Chinese government official to spend a lot of money on bringing protons up to speed.

[Disclaimer: Free review copy.]

Wednesday, December 30, 2015

How does a lightsaber work? Here is my best guess.

A lightsaber works by emitting a stream of magnetic monopoles. Magnetic monopoles are heavy particles that source magnetic fields. They are so far undiscovered, but many physicists believe they are real due to theoretical arguments. As string theorist Joe Polchinski put it, “the existence of magnetic monopoles seems like one of the safest bets that one can make about physics not yet seen.” Magnetic monopoles are so heavy, however, that they cannot be produced by any known processes in the universe – a minor technological complication that I will come back to below.




Depending on the speed at which the monopoles are emitted, they will either escape or return to the saber’s hilt, which has the opposite magnetic charge. You could of course just blast your opponent with the monopoles, but that would be rather boring. The point of a lightsaber isn’t to merely kill your enemies, but to kill them with style.



So you are emitting this stream of monopoles. Since the hilt carries the opposite magnetic charge, the monopoles drag magnetic field lines along behind them. Next you eject some electrically charged particles – electrons or ions – into this field with an initial angular velocity. These will spiral around the magnetic field lines and, due to the circular motion, emit synchrotron radiation, which is why you can see the blade.

Due to the emission of light and the occasional collision with air molecules, the electrically charged particles slow down and eventually escape the magnetic field. That doesn’t sound really healthy, so you might want to make sure that their kinetic energy isn’t too high. To still get an emission spectrum with a significant contribution in the visible range, you then need a huge magnetic field. That can’t really be healthy either, but at least it falls off inversely with the distance from the blade.
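To get a feeling for the numbers, here is a back-of-envelope sketch (the Lorentz factor and photon energy are my own assumptions, not from the post): the synchrotron spectrum of a relativistic electron peaks near the critical photon energy E_c = (3/2)γ²ħeB/m, so one can invert this for the field strength B that puts the peak into the visible range.

```python
# Physical constants in SI units
hbar = 1.0545718e-34    # reduced Planck constant, J*s
m_e  = 9.109e-31        # electron mass, kg
e    = 1.602e-19        # elementary charge, C

# Synchrotron radiation from a relativistic electron peaks near the
# critical photon energy  E_c = (3/2) * gamma^2 * hbar * (e * B / m_e).
# Invert this for the field B that puts E_c into the visible range.
E_photon = 2.0 * e      # ~2 eV photon (green light), in joules
gamma    = 10.0         # assumed Lorentz factor of the trapped electrons

B = (2.0 / 3.0) * E_photon * m_e / (hbar * gamma**2 * e)
print(f"required magnetic field: {B:.0f} T")  # roughly a hundred tesla
```

Even for fairly energetic electrons, the field comes out at around a hundred tesla, stronger than the strongest continuous fields ever produced in a laboratory. Huge indeed.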

Letting the monopoles escape has the advantage that you don’t have to devise a complicated mechanism to make sure they actually return to the hilt. It has the disadvantage, though, that one fighter’s monopoles can be sucked up by the other’s saber if the latter has the opposite charge. Can the blades pass through each other? Well, if they both have the same charges, they repel. You couldn’t easily pass them through each other, but they would probably distort each other to some extent. How much depends on the strength of the magnetic field that keeps the electrons trapped.


Finally, there is the question of how to produce the magnetic monopoles to begin with. For this, you need a pocket-sized accelerator that generates collision energies at the Planck scale. The most commonly used method is a kyber crystal. This also means that you need to know string theory to accurately calculate how a lightsaber operates. May the Force be with you.

[For more speculation, see also Is a Real Lightsaber Possible? by Don Lincoln.]

Tuesday, October 06, 2015

Repost in celebration of the 2015 Nobel Prize in Physics: Neutrino masses and angles

It was just announced that this year's Nobel Prize in physics goes to Takaaki Kajita from the Super-Kamiokande Collaboration and Arthur B. McDonald from the Sudbury Neutrino Observatory (SNO) Collaboration “for the discovery of neutrino oscillations, which shows that neutrinos have mass.” On this occasion, I am reposting a brief summary of the evidence for neutrino masses that I wrote in 2007.



Neutrinos come in three known flavors. These flavors correspond to the three charged leptons: the electron, the muon and the tau. The neutrino flavors can change during the neutrino's travel, and one flavor can be converted into another. This happens periodically. The neutrino flavor oscillations have a certain wavelength, and an amplitude which sets the probability of the change to happen. The amplitude is usually quantified by a mixing angle θ. Here, sin²(2θ) = 1, or θ = π/4, corresponds to maximal mixing, which means one flavor changes completely into another, and then back.

This neutrino mixing happens when the mass eigenstates of the Hamiltonian are not the same as the flavor eigenstates. The wavelength λ of the oscillation turns out to depend (in the relativistic limit) on the difference of the squared masses Δm² (not the square of the difference!) and the neutrino's energy E as λ = 4πE/Δm². The larger the energy of the neutrinos, the larger the wavelength. For a source with a spectrum of different energies around some mean value, one has a superposition of various wavelengths. On distances larger than the typical oscillation length corresponding to the mean energy, this will average out the oscillation.

The plot below from the KamLAND Collaboration shows an example of an experiment to test neutrino flavor conversion. The KamLAND neutrino sources are several Japanese nuclear reactors that emit electron anti-neutrinos with a very well-known power and energy spectrum, with a mean energy of a few MeV. The average distance to the reactors is ~180 km. The plot shows the ratio of the observed electron anti-neutrinos to the number expected without oscillations. The KamLAND result is the red dot. The other data points are from earlier experiments at other locations that did not find a drop. The dotted line is the best fit to this data.
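The numbers in this setup can be sketched in a few lines (a two-flavor approximation; the mixing angle and Δm² are the solar best-fit values quoted below, the reactor energies are my rough assumption):

```python
import math

# Two-flavor oscillation: P(flavor change) = sin^2(2θ) sin^2(1.27 Δm² L/E)
# with L in km, E in GeV, Δm² in eV² (1.27 absorbs the factors of ħ and c).
dm2 = 8e-5       # solar mass-squared difference, eV²
E   = 0.003      # typical reactor anti-neutrino energy, GeV (~3 MeV)

# Oscillation wavelength: the baseline over which the phase advances by π
lam = math.pi * E / (1.27 * dm2)
print(f"oscillation length: {lam:.0f} km")

# At KamLAND's ~180 km baseline, averaging over a band of reactor energies
# washes the oscillation out toward <sin²> ≈ 1/2:
s22 = math.sin(2 * math.radians(33.9)) ** 2
energies = [0.002 + 0.006 * i / 999 for i in range(1000)]    # 2 to 8 MeV
avg = sum(math.sin(1.27 * dm2 * 180 / En) ** 2 for En in energies) / 1000
print(f"energy-averaged survival probability: {1 - s22 * avg:.2f}")
```

The single-energy oscillation length of roughly 90 km is shorter than the 180 km baseline, so after averaging over the reactor spectrum a sizeable deficit survives, in line with the red dot in the plot.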



[Figure: KamLAND Collaboration]


One sees, however, that there is some degeneracy in this fit, since one can shift around the wavelength and stay within the error bars. These reactor data, however, are only one of the measurements of neutrino oscillations that have been made during the last decades. A lot of other experiments have measured deficits in the expected solar and atmospheric neutrino flux. Especially important in this regard was the SNO data, which confirmed that not only were there fewer solar electron neutrinos than expected, but that they actually showed up in the detector with a different flavor, and the KamLAND analysis of the energy spectrum, which clearly favors oscillation over decay.

The plot below depicts all the currently available data for electron neutrino oscillations, which place the mass-squared difference around 8×10⁻⁵ eV², and θ at about 33.9° (i.e. the mixing is with high confidence not maximal).




[Figure: Hitoshi Murayama, see here for references on the used data]


The lines on the top indicate excluded regions from earlier experiments; the filled regions are allowed values. You see the KamLAND 95% CL area in red, and SNO in brown. The remaining island in the overlap is pretty much constrained by now. Given that neutrinos are such elusive particles, and this mass scale is so incredibly tiny, I am always impressed by the precision of these experiments!

To fit the oscillations between all three known neutrino flavors, one needs three mixing angles and two mass differences (the overall mass scale factors out and does not enter; neutrino oscillations are thus not sensitive to the total neutrino masses). All the presently available data have allowed us to tightly constrain the mixing angles and mass-squared differences. The famous outlier (which was thus excluded from the global fits) is LSND (see also the above plot), so MiniBooNE was designed to check on their results. For more info on MiniBooNE, see Heather Ray's excellent post at CV.
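In the three-flavor case, the mixing angles combine into a single unitary 3×3 matrix, the PMNS matrix. A small sketch (the angle values are approximate present-day best fits, and the CP-violating phase is set to zero for simplicity):

```python
import math

def rot(i, j, theta):
    """3x3 rotation by angle theta (radians) in the i-j plane."""
    R = [[float(a == b) for b in range(3)] for a in range(3)]
    c, s = math.cos(theta), math.sin(theta)
    R[i][i] = R[j][j] = c
    R[i][j], R[j][i] = s, -s
    return R

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# Approximate best-fit mixing angles, in degrees (CP phase neglected)
th12, th23, th13 = map(math.radians, (33.9, 45.0, 8.6))

# PMNS matrix as a product of three rotations: U = R23 * R13 * R12
U = matmul(matmul(rot(1, 2, th23), rot(0, 2, th13)), rot(0, 1, th12))

# Unitarity check: U * U^T must be the identity (all entries real here)
UUT = matmul(U, [[U[j][i] for j in range(3)] for i in range(3)])
for i in range(3):
    for j in range(3):
        assert abs(UUT[i][j] - (1.0 if i == j else 0.0)) < 1e-12

print("PMNS matrix is unitary; |U_e1|^2 =", round(U[0][0] ** 2, 2))
```

Unitarity is what guarantees that the three flavor probabilities always add up to one, no matter how the neutrino oscillates.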



This post originally appeared in December 2007 as part of our advent calendar A Plottl A Day.

Wednesday, September 23, 2015

Can dark matter cause cancer?

Image Credits: Agnis Schmidt-May
Tl;dr: Yes. But it’s exceedingly unlikely.

Yesterday, a new paper appeared on the arXiv, provocatively titled “Dark matter as a cancer hazard.” It is a comment on an earlier paper by Freese and Savage, which I previously wrote about here.

Freese and Savage in their 2012 paper estimated the interaction rate of dark matter with the human body for weakly interacting massive particles (WIMPs). They came to the conclusion that the risk of getting cancer from damage caused by dark matter to the genetic code is much smaller than the risk posed by the cosmic radiation we are constantly exposed to.
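For scale, here is a crude estimate of the sheer number of WIMPs streaming through you (all input values are standard assumptions of mine, not numbers from the Freese and Savage paper):

```python
# Back-of-envelope: how many WIMPs stream through a human body per second?
rho  = 0.3       # local dark matter density, GeV per cm^3 (standard value)
m    = 100.0     # assumed WIMP mass, GeV
v    = 2.3e7     # typical galactic velocity, ~230 km/s, in cm per second
area = 7.5e3     # rough human cross-sectional area, cm^2 (~0.75 m^2)

n = rho / m                   # number density, WIMPs per cm^3
flux = n * v                  # WIMPs per cm^2 per second
through_body = flux * area    # nearly all pass through without interacting
print(f"{flux:.1e} WIMPs per cm^2 per second")
print(f"{through_body:.1e} WIMPs through the body per second")
```

Hundreds of millions of WIMPs per second would pass through you, yet the interaction cross-section is so tiny that actual collisions with your molecules are extremely rare, which is why the estimated cancer risk comes out so low.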

Yes, dark matter can cause cancer. That’s because literally everything can cause cancer: The probability that a particle collision breaks a molecular bond is never strictly speaking zero, and such damage can potentially turn a cell into a cancerous reproduction machine. Even doing nothing at all can cause cancer, just because a bond may break simply due to quantum fluctuations. It’s not fair, I know. It’s also so unlikely to happen that it didn’t even make it onto the Daily Mail’s List of Things That Can Give You Cancer. Should dark matter go onto the list? After all, the idea that dark matter may lead to “biological phenomena having sometimes fatal late effects” dates back at least to 1990.

In the new paper the authors estimate the interaction probability with the human body for a different type of dark matter. They looked specifically at mirror dark matter whereas Freese and Savage had looked at one of the presently most popular dark matter models, the WIMPs. I can see a whole industry growing out of this.

But what is mirror dark matter and why have you never heard of it?

Mirror dark matter is a complex type of dark matter: a complete copy of the standard model that describes our normal matter. The mirror dark matter interacts with us only gravitationally, or at least only very weakly. This sounds like a nice idea, the next simplest thing you might think of after just having a single particle. The problem is that we know dark matter does not behave just like normal matter, which renders mirror dark matter immediately implausible.

To begin with, there is more dark matter in the universe than normal matter. But more importantly, observations tell us that dark matter must be only weakly interacting with itself, otherwise the cosmic microwave background would not have the observed spectrum of temperature fluctuations. Our normal matter interacts much too strongly with itself to achieve that. Then there are case studies like the Bullet Cluster, whose gravitational lensing images reveal that dark matter does not have as much friction with itself as normal matter. Dark matter also doesn’t form galaxies in the same way that normal matter does, but rather acts as a seed for our galaxies. If it didn’t, structure formation wouldn’t come out correctly.

So clearly, dark matter that just does the same as normal matter doesn’t work. On the other hand, a copy of the standard model is a large set of particles with many emergent parameters (like particle abundances) that allow a lot of freedom to make the model fit the data.

You can try, for example, to adapt the mirror matter model by changing the initial conditions so that they differ from the initial conditions of normal matter. The mirror dark matter is then assumed to start in the early universe from a specifically chosen configuration, which in particular implies that the two types of matter do not have the same temperature later on. This can solve some problems and make mirror dark matter fit many of our observations. It brings up the question, though: why these initial conditions?

As has been argued, probably most vocally by Paul Davies, the distinction between initial conditions and evolution laws is fuzzy. If you fabricate your initial conditions smartly enough, you can make pretty much any model fit the data. (You can take the state that we observe today and evolve it backwards in time. Then pick whatever you get as the initial state. Voila.) So I don’t actually doubt that it is possible to explain the observations with mirror dark matter. But cherry-picking initial conditions doesn’t seem very convincing to me.

In any case, leaving aside that mirror dark matter is not particularly popular because dark matter just doesn’t seem to behave anything like normal matter, it’s a model, and it has equations and so on, and now you can go and calculate things.

To estimate the cancer risk from mirror dark matter, the authors assume that the mirror dark matter forms atoms, which can bind together into “mirror micrometeorites” that contain about 10¹⁵ mirror atoms. They then estimate the energy deposited by the mirror micrometeorites in the human body and find that these can leave behind more energy than weakly interacting single-particle dark matter. Such mirror objects can thus damage multiple bonds along their path. The reason is basically that they are larger.

So how likely is mirror dark matter to give you cancer? Well, unfortunately, the paper only estimates the energy deposited by the micrometeorites, but not the probability for these objects to hit you to begin with. I wrote an email to one of the authors and inquired whether there is an estimate for the flux of such objects through the Earth, but apparently there is none. One thing we do know about dark matter, however, is how much of it there has to be in total. So if dark matter is clumped into pieces larger than WIMPs, there must be fewer of these pieces. In other words, the flux of the mirror micrometeorites relative to that of WIMPs should be lower. Without a concrete model, though, one really can’t say anything more.
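The scaling argument fits in one line: at a fixed local mass density, the number flux drops in inverse proportion to the clump mass.

```python
# The total mass density of dark matter is fixed by observation, so
# clumping it into bigger pieces means proportionally fewer of them:
# the number flux scales as 1 / (clump mass).
atoms_per_clump = 1e15    # mirror micrometeorite size assumed in the paper

suppression = 1.0 / atoms_per_clump
print(f"flux relative to single mirror atoms: {suppression:.0e}")  # 1e-15
```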

In the new paper, the authors further speculate that dark matter may account for some types of cancer:
“We can thus speculate that the mirror micrometeorite, when interacting with the DNA molecules, can lead to multiple simultaneous mutations and cause disease. For instance, there is an evidence that individual malignant cancer cells in human tumors contain thousands of random mutations and that normal mutation rates are insufficient to account for these multiple mutations found in human cancers [...]”
Whatever the risk of getting cancer from dark matter, however, it probably hasn’t changed much for the last billion years or so. One could then try to turn the argument around and argue that if there were too many of these mirror micrometeorites, then the dinosaurs would have died from cancer, or something like that. I am not very excited about such biological constraints; the uncertainties in this area are much too large. You almost certainly get better accuracy looking at traces in minerals or at actual particle detectors.

In summary, the paper estimates the energy deposition, but not the actual cancer risk, from a so-far unconfirmed model. And in any case, short of moving to the center of the Earth, there isn’t anything you could do about it anyway.

Wednesday, March 11, 2015

What physics says about the vacuum: A visit to the seashore.

[Image Source: www.wall321.com]
Imagine you are at the seashore, watching the waves. Somewhere in the distance you see a sailboat — wait, don’t fall asleep yet. The waves and I want to tell you a story about nothing.

Before quantum mechanics, “vacuum” meant the absence of particles, and that was it. But with the advent of quantum mechanics, the vacuum became much more interesting. The sea we’re watching is much like this quantum vacuum. The boats on the sea’s surface are what physicists call “real” particles; they are the things you put in colliders and shoot at each other. But there are also waves on the surface of the sea. The waves are like “virtual” particles; they are fluctuations around sea level that come out of the sea and fade back into it.

Virtual particles have to obey more rules than sea waves though. Because electric charge must be conserved, virtual particles can only be created together with their anti-particles that carry the opposite charge. Energy too must be conserved, but due to Heisenberg’s uncertainty principle, we are allowed to temporarily borrow some energy from the vacuum, as long as we give it back quickly enough. This means that the virtual particle pairs can only exist for a short time, and the more energy they carry, the shorter the duration of their existence.
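The allowed lifetime of such a pair can be estimated from the uncertainty principle as Δt ≈ ħ/ΔE. For an electron-positron pair, the borrowed energy is at least twice the electron's rest energy (a rough order-of-magnitude sketch):

```python
# Energy-time uncertainty: a virtual pair borrowing energy ΔE can exist
# for roughly Δt ~ ħ / ΔE.  For an electron-positron pair the borrowed
# energy is at least twice the electron rest energy.
hbar   = 1.0545718e-34    # reduced Planck constant, J*s
m_e_c2 = 8.187e-14        # electron rest energy, J (~0.511 MeV)

dE = 2 * m_e_c2
dt = hbar / dE
print(f"lifetime ~ {dt:.1e} s")   # a few times 1e-22 seconds
```

Heavier pairs borrow more energy and live correspondingly shorter: the more energy they carry, the shorter the duration of their existence.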

You cannot directly measure virtual particles in a detector, but their presence has indirect observable consequences that have been tested to great accuracy. Atomic nuclei, for example, carry around them a cloud of virtual particles, and this cloud shifts the energy levels of electrons orbiting around the nucleus.

So we know, not just theoretically but experimentally, that the vacuum is not empty. It’s full of virtual particles that constantly bubble in and out of existence.

Visualization of a quantum field theory calculation showing virtual particles in the quantum vacuum.
Image Credits: Derek Leinweber


Let us go back to the seashore; I quite liked it there. We measure elevation relative to the average sea level, which we call elevation zero. But this number is just a convention. All we really ever measure are differences between heights, so the absolute number does not matter. For the quantum vacuum, physicists similarly normalize the total energy and momentum to zero because all we ever measure are energies relative to it. Do not attempt to think of the vacuum’s energy and momentum as if it was that of a particle; it is not. In contrast to the energy-momentum of particles, that of the vacuum is invariant under a change of reference frame, as Einstein’s theory of Special Relativity requires. The vacuum looks the same for the guy in the train and for the one on the station.

But what if we take into account gravity, you ask? Well, there is the rub. According to General Relativity, all forms of energy have a gravitational pull. More energy, more pull. With gravity, we are no longer free to just define the sea level as zero. It’s like we had suddenly discovered that the Earth is round and there is an absolute zero of elevation, which is at the center of the Earth.

In the best manner of a physicist, I have left out a small detail, which is that the calculated energy of the quantum vacuum is actually infinite. Yeah, I know, doesn’t sound good. If you don’t care what the total vacuum energy is anyway, this doesn’t matter. But once you take into account gravity, the vacuum energy becomes measurable, and therefore it does matter.

The vacuum energy one obtains from quantum field theory is of the same form as Einstein’s Cosmological Constant because this is the only form which (in an uncurved space-time) does not depend on the observer. We measured the Cosmological Constant to have a small, positive, nonzero value which is responsible for the accelerated expansion of the universe. But why it has just this value, and why not infinity (or at least something huge), nobody knows. This “Cosmological Constant Problem” is one of the big open problems in theoretical physics today and its origin lies in our lacking understanding of the quantum vacuum.
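The mismatch can be put in numbers with a short estimate: take one Planck energy per Planck volume as the naive quantum field theory guess (a Planck-scale cutoff in place of the literal infinity) and compare it with the measured dark energy density (the value below is an approximate round number):

```python
import math

# Naive vacuum energy: one Planck energy per Planck volume, i.e. the
# divergent sum cut off at the Planck scale.
hbar, c, G = 1.0545718e-34, 2.9979e8, 6.674e-11    # SI units

E_planck = (hbar * c**5 / G) ** 0.5    # Planck energy, ~2e9 J
l_planck = (hbar * G / c**3) ** 0.5    # Planck length, ~1.6e-35 m
rho_qft  = E_planck / l_planck**3      # naive estimate, J per m^3

rho_obs = 5.4e-10    # measured dark energy density, J per m^3 (approx.)
print(f"mismatch: about 10^{math.log10(rho_qft / rho_obs):.0f}")
```

A discrepancy of roughly 120 orders of magnitude, which is why the Cosmological Constant Problem is sometimes called the worst prediction in the history of physics.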

But this isn’t the only mystery surrounding the sea of virtual particles. Quantum theory tells you how particles belong together with fields. The quantum vacuum by definition doesn’t have real particles in it, and normally this means that the corresponding field also vanishes. For these fields, the average sea level is at zero, regardless of whether there are boats on the water or not. But for some fields the real particles are more like stones. They will not stay on the surface; they sink and make the sea level rise. We say the field “has a non-zero vacuum expectation value.”

On the seashore, you now have to wade through the water, which slows you down. This is what the Higgs field does: it drags on particles and thereby effectively gives them mass. If you dive and kick the stones that sank to the bottom hard enough, you can sometimes make one jump out of the surface. This is essentially what the LHC does, just call the stones “Higgs bosons.” I’m really getting into this seashore thing ;)

Next, let us imagine we could shove the Earth closer to the Sun. Oceans would evaporate and you could walk again without having to drag through the water. You’d also be dead, sorry about this, but what about the vacuum? Amazingly, you can do the same. Physicists say the “vacuum melts” rather than evaporates, but it’s very similar: If you pump enough energy into the vacuum, the level sinks to zero and all particles are massless again.

You may complain now that if you pump energy into the vacuum, it’s no longer vacuum. True. But the point is that you change the previously non-zero vacuum expectation value. To our best knowledge, it was zero in the very early universe and theoretical physicists would love to have a glimpse at this state of matter. For this however they’d have to achieve a temperature of 10¹⁵ Kelvin! Even the core of the sun “only” makes it to 10⁷ Kelvin.
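Converting that temperature into particle physics units is a one-liner (multiply by Boltzmann's constant), and it lands right at the electroweak scale:

```python
# Temperature to energy: multiply by Boltzmann's constant.
k_B = 8.617e-5      # Boltzmann constant, eV per kelvin

T = 1e15            # kelvin, the vacuum "melting" temperature
E_GeV = k_B * T / 1e9
print(f"kT is about {E_GeV:.0f} GeV")   # the electroweak scale
```

Roughly 90 GeV: comparable to the masses of the W and Z bosons (~80-90 GeV) and not far from the Higgs mass of 125 GeV, which is no coincidence, since it is the Higgs vacuum expectation value that melts away.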

One way to get to such high temperature, if only in a very small region of space, is with strong electromagnetic fields.

In a recent paper, Hegelich, Mourou, and Rafelski estimated that, with the presently most advanced technology, high-intensity lasers could get close to the necessary temperature. This is still far from reality, but it will probably become possible one day!

Back to the sea: Fluids can exist in a “superheated” state. In such a state, the medium is liquid even though its temperature is above the boiling point. Superheated liquids are “metastable,” this means if you give them any opportunity they will very suddenly evaporate into the preferred stable gaseous state. This can happen if you boil water in the microwave, so always be very careful taking it out.

The vacuum that we live in might be a metastable state: a “false vacuum.” In this case it will evaporate at some point, and in this process release an enormous amount of energy. Nobody really knows whether this will indeed happen. But even if it does happen, best present estimates date this event into the distant future, when life is no longer possible anyway because stars have run out of power. Particle physicist Joseph Lykken estimated something like a Googol years; that’s about 10⁹⁰ times the present age of the universe.
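The quoted factor is easy to check: a googol years divided by the present age of the universe (about 1.38×10¹⁰ years) is indeed about 10⁹⁰:

```python
googol_years = 1e100
universe_age = 1.38e10    # years since the big bang

ratio = googol_years / universe_age
print(f"{ratio:.1e} present ages of the universe")  # ~7e+89
```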

According to some theories, our universe came into existence from another metastable vacuum state, and the energy that was released in this process eventually gave rise to all we see around us now. Some physicists, notably Lawrence Krauss, refer to this as creating a universe from “nothing.”


If you take away all particles, you get the quantum vacuum, but you still have space-time. If we had a quantum theory for space-time as well, you could take away space-time too, at least operationally. This might be the best description of a physical “nothing” that we can ever reach, but it still would not be an absolute nothing because even this state is still a mathematical “something”.

Now what exactly it means for mathematics to “exist” I better leave to philosophers. All I have to say about this is, well, nothing.


If you want to know more about the philosophy behind nothing, you might like Jim Holt’s book “Why Does the World Exist?”, which I reviewed here.

This post previously appeared at Starts With a Bang under the title “Everything you ever wanted to know about nothing”.

Monday, January 26, 2015

Book review: "Cracking the Particle Code of the Universe" by John Moffat

Cracking the Particle Code of the Universe: the Hunt for the Higgs Boson
By John W Moffat
Oxford University Press (2014)

John Moffat’s new book covers the history of the Standard Model of particle physics from its beginnings to the recent discovery of the Higgs boson – or, as Moffat cautiously calls it, the new particle most physicists believe is the Standard Model Higgs. But Cracking the Particle Code of the Universe isn’t just any book about the Standard Model: it’s about the model as seen through the eyes of an insider, one who has witnessed many fads and statistical fluctuations come and go. As an emeritus professor at the University of Toronto, Canada and a senior researcher at the nearby Perimeter Institute, Moffat has the credentials to do more than just explain the theory and the experiments that back it up: he also offers his own opinion on the interpretation of the data, the status of the theories and the community’s reaction to the discovery of the Higgs.

The first half of the book is mainly dedicated to introducing the reader to the ingredients of the Standard Model, the particles and their properties, the relevance of gauge symmetries, symmetry breaking, and the workings of particle accelerators. Moffat also explains some proposed extensions and alternatives to the Standard Model, such as technicolor, supersymmetry, preons, additional dimensions and composite Higgs models as well as models based on his own work. In each case he lays out the experimental situation and the technical aspects that speak for and against these models.

In the second half of the book, Moffat recalls how the discovery unfolded at the LHC and comments on the data that the collisions yielded. He reports from several conferences he attended, or papers and lectures that appeared online, and summarizes how the experimental analysis proceeded and how it was interpreted. In this, he includes his own judgment and relates discussions with theorists and experimentalists. We meet many prominent people in particle physics, including Guido Altarelli, Jim Hartle and Stephen Hawking, to mention just a few. Moffat repeatedly calls for a cautious approach to claims that the Standard Model Higgs has indeed been discovered, and points out that not all necessary characteristics have been found. He finds that the experimentalists are careful with their claims, but that the theoreticians jump to conclusions.

The book covers the situation up to March 2013, so of course it is already somewhat outdated; the ATLAS collaboration’s evidence for the spin-0 nature of the Higgs boson was only published in June 2013, for example. But this does not matter all that much because the book will give the dedicated reader the necessary background to follow and understand the relevance of new data.

Moffat’s writing sometimes gets quite technical, albeit without recourse to equations, and I doubt that readers will fully understand his elaborations without at least some knowledge of quantum field theory. He introduces the main concepts he needs for his explanations, but he does so very briefly; for example, his book features the briefest explanation of gauge invariance I have ever come across, and many important concepts, such as cross-sections or the relation between the masses of force-carriers and the range of the force, are only explained in footnotes. The glossary can be used for orientation, but even so, the book will seem very demanding for readers who encounter the technical terms for the first time. However, even if they are not able to follow each argument in detail, they should still understand the main issues and the conclusions that Moffat draws.

Towards the end of the book, Moffat discusses several shortcomings of the Standard Model, including the Higgs mass hierarchy problem, the gauge hierarchy problem, and the unexplained values of particle masses. He also briefly mentions the cosmological constant problem, as it is related to questions about the nature of the vacuum in quantum field theory, but on the whole he stands clear from discussing cosmology. He does, however, comment on the anthropic principle and the multiverse and does not hesitate to express his dismay about the idea.

While Moffat gives some space to discussing his own contributions to the field, he does not promote his point of view as the only reasonable one. Rather, he makes a point of emphasizing the necessity of investigating alternative models. The measured mass of the particle-that-may-be-the-Higgs is, he notes, larger than expected, and this makes it even more pressing to find models better equipped to address the problems with “naturalness” in the Standard Model.

I have met Moffat on various occasions and I have found him to be not only a great physicist and an insightful thinker, but also one who is typically more up-to-date than many of his younger colleagues. As the book also reflects, he closely follows the online presentations and discussions of particle physics and particle physicists, and is conscious of the social problems and cognitive biases that media hype can produce. In his book, Moffat especially criticizes bloggers for spreading premature conclusions.

Moffat’s recollections also document that science is a community enterprise and that we sometimes forget to pay proper attention to the human element in our data interpretation. We all like to be confirmed in our beliefs, but as my physics teacher liked to say, “belief belongs into the church.” I find it astonishing that many theoretical physicists these days publicly express their conviction that a popular theory “must be” right even while it is still unconfirmed by data – and that this has become accepted behavior for scientists. A theoretician who works on alternative models today is too easily seen as an outsider (a non-believer), and it takes much courage, persistence, and stable funding sources to persevere outside the mainstream, as Moffat has done for decades and still does. This is an unfortunate trend that many in the community either do not seem to be aware of or do not see as a concern, and it is good that Moffat touches on this point in his book.

In summary, Moffat’s new book is a well-done and well-written survey of the history, achievements, and shortcomings of the Standard Model of particle physics. It will equip the reader with all the necessary knowledge to put into context the coming headlines about new discoveries at the LHC and future colliders.

This review first appeared in Physics World on Dec 4th under the title "A strong model, with flaws".

Friday, October 03, 2014

Is the next supercollider a good investment?

The relevance of basic research is difficult to communicate to politicians who only care about their next term and who don’t want to invest in what might take decades to pay off. But it is even more difficult to decide which research is the best to invest in, and how much it is worth, in numbers.

Whether a next supercollider is worth the billions of Euros that it will eat up is a very involved question. I find it partly annoying, partly disturbing, that many of my physics colleagues regard the answer as obvious. Clearly we need a new supercollider! To measure the details of this, and the decay channels of that, to get a cleaner signal of something and a better precision for whatever. And I am sure they will come up with an argument for why Susy, our invisible friend, is still just around the corner.

To me this superficial argumentation is just another way of demonstrating they don’t care about communicating the relevance of their research. Of course they want a next collider - they make their living writing papers about that.

The most common argument that I hear in favor of the next collider is that much more money is wasted on the war in Afghanistan (if you ask an American) or rebuilding the Greek economy (if you ask a German), and I am sure similar remarks are uttered worldwide. The logic here seems to be that a lot of money is wasted anyway, so what does it matter to spend some billions on a collider. Maybe this sounds convincing if you have a PhD in high energy physics, but I don’t know who else is supposed to buy this.

The next argument I keep hearing is that the World Wide Web was invented at CERN, which also hosts the LHC right now. If anything, this argument is even more stupid than the war-also-wastes-money argument. Yes, Tim Berners-Lee happened to work at CERN when he developed hypertext. The environment was certainly conducive to his invention, but the standard model of particle physics had otherwise very little to do with it. You could equally well argue we should build leaning towers to advance research on general relativity.

I just finished reading John Moffat’s book “Cracking the Particle Code of the Universe”. I can’t post the review here until it has appeared in print due to copyright issues, sorry, but by and large it’s a good book. No, he doesn’t use it to advertise his own theories. He mentions them of course, but most of the book is more generally dedicated to the history, achievements, and shortcomings of the standard model.

His argument for the relevance of particle colliders amounts to the following paragraph:
“As Guido Altarelli mused after my talk at CERN in 2008, can governments be persuaded to spend ever greater sums of money, amounting to many billions of dollars, on ever larger and higher energy accelerators than the LHC if they suspect that the new machines will also come up with nothing new beyond the Higgs boson? Of course, to put this in perspective, one should realize that the $9 billion spend on an accelerator would not run a contemporary war such as the Afghanistan war for more than five weeks. Rather than killing people, building and operating these large machines has practical and beneficial spinoffs for technology and for training scientists. Thus, even if the accelerators continued to find no new particles, they might still produce significant benefits for society. The Worldwide Web, after all, was invented at CERN.”

~ John Moffat, Cracking the Particle Code of the Universe, p. 78
Well, running a war also has practical and beneficial spinoffs for technology and for training scientists. Sorry John, but that was disappointing. To be fair, the book as a whole makes a pretty good case for why understanding the laws of nature is important business. But what investing in basic research does, and what war does not do for your country, is build a base for sustainable progress. Without new discoveries and fundamentally new insights, applied science must eventually run dry.

There is no doubt in my mind that society invests its billions well if it invests in theoretical physics. Whether that investment should go into particle colliders though is a different question. I don’t have a good answer to that, and I don’t see that the question is seriously being discussed. Is it a worthy cause?

Last year, Fermilab’s Symmetry Magazine ran a video contest on the topic “Why particle physics matters”. Ironically, most of the answers have nothing to do with particle physics in particular: “could bring about a revolution,” “a wonderful model of successful international collaboration,” “explore the frontiers and boundaries of our universe,” “engages and sharpens the mind,” “captures the imagination of bright minds.” You could use literally the same arguments for cosmology, quantum information, or high precision measurements. Indeed, I personally find the high precision frontier presently more promising than ramping up energy and luminosity.

I am happy of course if China goes ahead and builds the next supercollider. After all, it’s not my taxes, and it is still better than spending the money on diamond necklaces that your 16-year-old can show off on Facebook. I can’t quite shake the impression though that this plan is more the result of wanting to appear competitive than the result of a careful deliberation about return on investment.

Sunday, June 15, 2014

Evolving dimensions, now vanishing

Vanishing dimensions.
Technical sketch.
Source: arXiv:1406.2696 [gr-qc]

Some years ago, we discussed the “Evolving Dimensions”, a new concept in the area of physics beyond the standard model. The idea, put forward by Anchordoqui et al in 2010, is to make the dimensionality of space-time scale-dependent, so that at high energies (small distances) there is only one spatial dimension, while at small energies (large distances) there are four spatial dimensions, or possibly even more. In between – in the energy regime that we deal with in everyday life and in most of our experiments too – one finds the normal three spatial dimensions.

The hope is that these evolving dimensions address the problem of quantizing gravity, since gravity in lower dimensions is easier to handle, and possibly the cosmological constant problem, since it is a long-distance modification that becomes relevant at low energies.

One of the motivations for the evolving dimensions is the finding that the spectral dimension decreases at high energies in various approaches to quantum gravity. Note however that the evolving dimensions deal with the actual space-time dimension, not the spectral dimension. This immediately brings up a problem that I talked about to Dejan Stojkovic, one of the authors of the original proposal, several times, the issue of Lorentz-invariance. The transition between different numbers of dimensions is conjectured to happen at certain energies: how is that statement made Lorentz-invariant?

The first time I heard about the evolving dimensions was in a talk by Greg Landsberg at our 2010 conference on Experimental Search for Quantum Gravity. I was impressed by this talk, impressed because he was discussing predictions of a model that didn’t exist. Instead of a model for the spacetime of the evolving dimensions, he had an image of yarn. The yarn, you see, is one-dimensional, but you can knit it into two-dimensional sheets, which you can then form into a three-dimensional ball, so in some sense the dimension of the yarn can evolve depending on how closely you look. It’s a nice image. It is also obviously not Lorentz-invariant. I was impressed by this talk because I’d never have the courage to give a talk based on a yarn image.

It was the early days of this model, a nice idea indeed, and I was curious to see how they would construct their space-time and how it would fare with Lorentz-invariance.

Well, they never constructed a space-time model. Greg seems not to have continued working on this, but Dejan is still on the topic. A recent paper with Niayesh Afshordi from Perimeter Institute still has the yarn in it. The evolving dimensions are now called vanishing dimensions, not sure why. Dejan also wrote a review on the topic, which appeared on the arxiv last week. More yarn in that.

In one of my conversations with Dejan I mentioned that the Causal Set approach makes use of a discrete yet Lorentz-invariant sprinkling, and I was wondering aloud whether one could employ this sprinkling to obtain Lorentz-invariant yarn. I thought about this for a bit but came to the conclusion that it can’t be done.

The Causal Set sprinkling is a random distribution of points in Minkowski space. It can be explicitly constructed and shown to be Lorentz-invariant on the average. It looks like this:

Causal Set Sprinkling, Lorentz-invariant on the average. Top left: original sprinkling. Top right: zoom. Bottom left: Boost (note change in scale). Bottom right: zoom to same scale as top right. The points in the top right and bottom right images are randomly distributed in the same way. Image credits: David Rideout. [Source]
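To make concrete what “Lorentz-invariant on the average” means, here is a minimal sketch of such a Poisson sprinkling in 1+1 dimensional Minkowski space – the density, box size, and boost velocity are illustrative choices of mine, not taken from the Causal Set literature. The number of points depends only on the (Lorentz-invariant) volume, and a boost maps the random point set to another point set with the same statistics:

```python
import numpy as np

rng = np.random.default_rng(0)

def sprinkle(density, t_max, x_max):
    """Poisson sprinkling into a box of 1+1 dimensional Minkowski space.

    The number of points is drawn from a Poisson distribution whose mean
    depends only on the volume (here: area), which is Lorentz-invariant;
    the positions are uniform within the box.
    """
    volume = t_max * x_max
    n = rng.poisson(density * volume)
    t = rng.uniform(0.0, t_max, n)
    x = rng.uniform(0.0, x_max, n)
    return np.column_stack([t, x])

def boost(points, v):
    """Apply a Lorentz boost with velocity v (units with c = 1)."""
    gamma = 1.0 / np.sqrt(1.0 - v**2)
    t, x = points[:, 0], points[:, 1]
    return np.column_stack([gamma * (t - v * x), gamma * (x - v * t)])

pts = sprinkle(density=100.0, t_max=10.0, x_max=10.0)
boosted = boost(pts, v=0.5)
```

Counting points in any region of fixed volume before and after the boost gives statistically indistinguishable results; that is the sense in which the sprinkling, though discrete, is Lorentz-invariant on the average.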

The reason this discreteness is compatible with Lorentz-invariance is that the sprinkling makes use only of four-volumes and of points, both of which are Lorentz-invariant, as opposed to Lorentz-covariant. The former doesn’t change under boosts, the latter changes in a well-defined way. Causal Sets, as the name says, are sets. They are collections of points. They are not, I emphasize, graphs – the points are not connected. The set has an order relation (the causal order), but a priori there are no links between the points. You can construct paths on the sets, they are called “chains”, but these paths make use of an additional initial condition (eg an initial momentum) to find a nearest neighbor.

The reason that looking for the nearest neighbor doesn’t make much physical sense is that the distance to all points on the lightcone is zero. The nearest neighbor to any point is almost certainly (in the mathematical sense) infinitely far away and on the lightcone. You can use these neighbors to make the sprinkling into a graph. But now you have infinitely many links that are infinitely long and the whole thing becomes space-filling. That is Lorentz-invariant of course. It is also in no sensible meaning still one-dimensional on small scales. [Aside: I suspect that the space you get in this way is not locally identical to R^4, though I can’t quite put my finger on it, it doesn’t seem dense enough if that makes any sense? Physically this doesn’t make any difference though.]

So it pains me somewhat that the recent paper of Dejan and Niayesh tries to use the Causal Set sprinkling to save Lorentz-invariance:

“One may also interpret these instantaneous string intersections as a causal set sprinkling of space-time [...] suggesting a potential connection between causal set and string theory approaches to quantum gravity.”

This interpretation is almost certainly wrong. In fact, in the argument that their string-based picture is Lorentz-invariant they write:
“Therefore, on scales much bigger than the inverse density of the string network, but much smaller than the size of the system, we expect the Lorentz-invariant (3+1)-dimensional action to emerge.”
Just that Lorentz-invariance which emerges at a certain system size is not Lorentz-invariant.

I must appear quite grumpy, going around picking on what is admittedly an interesting and very creative idea. I am annoyed because in my recent papers on space-time defects, I spent a considerable amount of time trying to figure out how to use the Causal Set sprinkling for something (the defects) that is not a point. The only way to make this work is to use additional information for a covariant (but not invariant) reference frame, as one does with the chains.

Needless to say, in none of the papers on the topic of evolving, vanishing dimensions does one find an actual construction of the conjectured Lorentz-invariant random lattice. In the review, the explanation reads as follows: “One of the ways to evade strong Lorentz invariance violations is to have a random lattice (as in Fig 5), where Lorentz-invariance violations would be stochastic and would average to zero...” Here is Fig 5:

Fig 5 from arXiv:1406.2696 [gr-qc]


Unfortunately, the lattice in this proof by sketch is obviously not Lorentz-invariant – the lattice spacings are all about the same size, which singles out a preferred scale.

The recent paper of Dejan Stojkovic and Niayesh Afshordi attempts to construct a model for the space-time by giving the dimensions a temperature-dependent mass, so that, as temperatures drop, additional dimensions open up. This raises the question though: temperature of what? Such an approach might make sense in the early universe, maybe, or when there is some plasma around, but a mean field approximation clearly does not make sense for the scattering of two asymptotically free states, which is one of the cases that the authors quote as a prediction. A highly energetic collision is supposed to take place in only two spatial dimensions, leading to a planar alignment.

Now, don’t get me wrong, I think that it is possible to make this scenario Lorentz-invariant, but not by appealing to a non-existent Lorentz-invariant random lattice. Instead, it should be possible to embed this idea into an effective field theory approach, some extension of asymptotically safe gravity, in which the relevant scale that is being tested then depends on the type of interaction. I do not know though in which sense these dimensions then still could be interpreted as space-time dimensions.

In any case, my summary of the recent papers is that, unsurprisingly, the issue with Lorentz-invariance has not been solved. I think the literature would really benefit from a proper no-go theorem proving what I have argued above, that there exist no random lattices that are Lorentz-invariant on the average. Or otherwise, show me a concrete example.

Bottom line: A set is not a graph. I claim that random graphs that are Lorentz-invariant on the average, and are not space-filling, don’t exist in (infinitely extended) Minkowski space. I challenge you to prove me wrong.

Monday, January 13, 2014

Shooting strings

Shooting strings.
Source: Meez Forums.
String theory, once hailed as a theory of everything, now struggles to demonstrate its use for anything at all.

Most string theorists today, if not working for banks, study the gauge-gravity correspondence. This celebrated idea, arguably one of the most interesting findings in string theory, relates a strongly coupled field theory in flat space to weakly coupled gravity in a higher dimensional space. These higher-dimensional spaces do not resemble our universe, so the interesting applications of the gauge-gravity correspondence are analytical calculations in strongly coupled field theory. Notoriously difficult problems of the field theory can become manageable by reformulating them in the language of gravity.

The most widely promoted use of this gauge-gravity correspondence has been the quark gluon plasma, which is produced in highly energetic collisions of heavy ions, previously at RHIC and now at the LHC. There has been a lot of hype about the low viscosity that was analytically found using the gauge-gravity correspondence and that fit well with observations. But heavy ion physics isn’t just viscosity. There are many other observables that a good model must be able to explain.

One of these observables is the energy loss that elementary particles experience when they travel through the plasma. Just by chance it can happen that a particle pair is created but only one of the two particles travels through the plasma and loses energy. The primary particles are unstable and eventually decay to form stable hadrons. By measuring and summing up the momentum of the decay products one can infer the energy loss that happened in the plasma.

We previously saw that the gauge-gravity correspondence seems to work well for the RHIC data, but misses the mark when the more recent LHC data are also taken into account. The prediction is far outside the error margin of the data, both in terms of magnitude and in terms of slope. The gauge-gravity correspondence predicts too much energy loss. I call that a bad fit to the data. String theorists call it “qualitatively correct,” which seems to mean their prediction has an upward slope.

But heavy ion physics is a messy business where many different processes come together, and that makes it difficult to draw unambiguous conclusions. Clearly though, the situation doesn’t look good. However, as I mentioned earlier, last year I heard a talk by Steven Gubser about an upcoming paper of his addressing the energy loss in the gauge-gravity correspondence. Ficnar, Gubser and Gyulassy have now posted their new paper on the arxiv:
    Shooting String Holography of Jet Quenching at RHIC and LHC
    Andrej Ficnar, Steven S. Gubser, Miklos Gyulassy
    arXiv:1311.6160 [hep-ph]

In this paper, the authors propose a new description on the gravity side for the particle which on the field theory side loses energy while traveling through the plasma. Previously, this particle was modeled by a string parallel to the boundary that fell towards the black hole horizon. Ficnar et al instead model the particle by a string that ‘shoots up’ away from the horizon. They calculate the energy loss of the endpoint and find that the energy loss is reduced relative to the previous scenario.

They do not motivate the gravitational description, and I am left wondering whether there shouldn’t be an unambiguous procedure to find the gravitational analogue. If one can just choose a different setup and get a different energy loss, that does not exactly increase my faith in the predictive value of the model.

Be that as it may, with their new model a sufficient reduction of the energy loss can only be achieved by pushing the crucial parameter (λ, the ‘t Hooft coupling) into a limit where the approximation actually breaks down. This is no good because then the results cannot be trusted.

So then they add higher curvature terms on the gravity side. This introduces an additional parameter, and a suitable choice for this second parameter allows the coupling to remain just about in the okay range. One would expect these higher-order terms to be present, but in principle I’d think the coupling shouldn’t be an independent parameter. In any case, this still doesn’t fit both the RHIC and the LHC data.

Since the interpretation of the data depends on the reconstruction of the effective temperature at the collision, they then speculate that maybe the temperature values are off by 10% or so, in which case their calculation would fit the data just fine.

This model is clearly an improvement though I can’t say I am terribly convinced. What seems to become increasingly clear though is that any successful model for highly energetic heavy ion collisions must use a suitable combination of both weakly and strongly coupled physics. The gauge-gravity correspondence still has a good chance to prove its use for the strongly coupled physics, but that will necessitate getting into all the messy details.

Thursday, September 12, 2013

Whatever happened to AdS/CFT and the Quark Gluon Plasma?

A decade ago, the AdS/CFT correspondence was celebrated as a possible description of the quark gluon plasma. RHIC measurements of heavy ion collisions at that time showed a surprisingly small viscosity that led to a revision of the previous models. Excitingly, a small viscosity appears naturally for the strongly coupled gauge theory in the AdS/CFT correspondence, never mind that QCD is neither conformal nor supersymmetric. This development was all the more welcome as it served to demonstrate that string theory is not useless, as critics claimed, but that it can provide insights which improve our understanding of physical processes in the real world.

The gauge-gravity correspondence rapidly became a boom area in high energy physics. After the viscosity, people looked at other observables, notably the energy loss of particles going through the plasma. In highly energetic particle collisions, quarks are produced in pairs, but due to confinement individual quarks are never measured. What is measured instead are color-neutral hadrons into which the quarks fragment and which are bundled into the direction of the original quarks. These bundles of hadrons are called jets, and in the simplest case there are two of them, with total momenta that are back-to-back correlated owing to their common origin from the quark pair.

In a heavy ion collision, one of the quarks may have to pass through the quark gluon plasma and thereby loses energy. This leads to what is known as ‘jet quenching’, a pair of back-to-back correlated jets where the total energy on one side is reduced. The energy loss in the plasma can and has been calculated in different models for heavy ion collisions. There are about a handful of such models, and in the days before the LHC all tried to get in their predictions for the jet quenching at LHC energies, the central question being how the energy loss scales with the increase in collision energy.

After the LHC heavy ion runs, it turned out the data do not agree very well with the scaling expected for energy loss from the AdS/CFT correspondence – in fact from all the models it was the worst prediction. As we discussed in an earlier post, AdS/CFT predicts too much energy loss, the plasma is too strongly coupled.

AdS/CFT confronts data. Image Credits: Thorsten Renk.
For details and references, please refer to this earlier post.

That the scaling doesn’t fit well with the data need not be too much of a worry because these scaling arguments were quite general and in reality the process of propagation through the quark gluon plasma isn’t quite as simple. But clearly the new data called on theoretical physicists working on AdS/CFT to study the observables and improve their model or to call it a failure and move on. Alas, nothing like that happened.

For two years or so now, since the LHC data came in, I’ve been sitting through AdS/CFT talks that would inevitably be motivated by the low viscosity of the quark gluon plasma and the RHIC data, frog spawn picture and all. And every time I’d raise my hand at the end of the seminar and ask for the speaker’s opinion on the recent LHC data, expecting an update on the work on that matter and reassurance that there is no need to worry because the models can be improved to accommodate the data. Instead, it was like the LHC never happened. I don’t work in this field and don’t even follow the literature closely, but it seemed that I knew more about the problems with the LHC results than the people who got paid for talks motivated by yesterday’s data.

What they’d typically say is that nobody really expected AdS/CFT to make quantitative predictions. Alas, even the qualitative prediction, the mere slope of the curve, is wrong. The only prediction that is “qualitatively” correct is that there is some energy loss. Besides this, it’s all well and fine that a new model doesn’t make quantitative predictions, but that’s not a status that should become permanent.

It’s not that the data went entirely unnoticed. A few brave souls took on the issue. In this paper, Ficnar, Noronha and Gyulassy looked at the effects of higher derivative corrections to the gravity sector. It’s somewhat ad-hoc, but apparently does reduce the energy loss. There is however no fit to the data, and I’m not sure what this does to other observables. In another work, Ficnar also took into account a time-dependence of the configuration, but the conclusions with respect to the jet quenching and LHC data remain vague and amount to “a more thorough numerical analysis is needed.” In a recent paper, William Horowitz summarized the situation as follows:

“Despite significant efforts, AdS/CFT estimates for light quark and gluon energy loss are qualitative at best… it is difficult to imagine that a relatively sophisticated estimate of the suppression would be consistent with data.”

I was thus thrilled when I heard a talk by Steven Gubser (about recent work with Ficnar) at a conference in Frankfurt this July, because he spoke about a possibility to improve the AdS/CFT model to accommodate the LHC data. Unfortunately, Gubser and collaborators don’t have a paper about this on the arXiv yet, so all I can do is refer you to the slides. My vague recollection is that he said one needs to take into account the momentum on the endpoints of the strings and that this does improve the scaling of the energy loss and fits considerably better with the LHC measurements. Though, if I recall correctly, getting the slope to match the data requires pushing the parameter into a range where one actually shouldn’t trust the model anymore. So in the end this might not solve the problem either.

If that explanation sounds like I don’t really understand the details it’s because I don’t really understand the details. I didn’t take notes, and two months later that’s as much as I can recall when looking at the slides and the Princeton professor has not been very communicative upon my inquiry. I thus just want to draw your attention to this development – if you’re interested in the topic, I recommend you have an eye on Ficnar and Gubser’s next arXiv uploads. For all I can tell, these guys are the only ones who take the issue seriously and so far it doesn’t sound too promising to me. If I’m missing some references, please let me know.

I don’t know enough about the topic to tell how likely it is that the AdS/CFT model can be improved to fit the data, and personally I find the applications to condensed matter systems better motivated. What annoys me about this situation is that people working in the field continue to decorate themselves with false achievements when they use the viscosity of the quark gluon plasma to justify the relevance of their own work, and that of string theory at large.

It’s time the community comes clean and draws a conclusion. Either AdS/CFT cannot describe the quark gluon plasma, then please bury this episode in the history books and move on. Or it can, and then I expect to see a curve that fits on the LHC data. At the very least I want to hear it’s on the to-do list. Yes, the LHC really happened.


Tuesday, July 23, 2013

How stable is the photon? Yes, the photon.

Light. Source: Povray tutorial.
I never really got into the kitten-mania that has befallen the internet. But I do think there’s such a thing as a cute paper, and here is a particularly nice example:
The photon is normally assumed to be massless. While a photon mass breaks gauge invariance and seems unappealing from a theoretical perspective, in the end it’s an experimental question whether photons have a mass. And while the mass of the photon is tightly constrained by experiment, to below about 10^-18 eV, the mere possibility that it may be non-zero brings up another very basic question. If the photon has a mass it can decay into other particles, for example a pair of the lightest neutrino and its anti-partner, or other so-far undiscovered particles beyond the standard model. But if decay is possible, what are the bounds on the life-time of the photon? That’s the question Julian Heeck set out to address in his paper.

If the photon is unstable and decays into other particles, then the number density of photons in the cosmic microwave background (CMB) should decrease while the photons are propagating. But then, the energy density of the spectrum would no longer fit the almost perfectly thermal Planck curve that we observe. One can thus use the CMB measurements to constrain the photon lifetime.

If one uses the largest photon mass presently consistent with experiment, the photon lifetime comes out to 3 years in the rest frame of the photon. If one calculates the γ-factor (ie, the time dilation) to obtain the lifetime of light in the visible spectrum, it turns out to be at least 10^18 years.
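The arithmetic behind that bound is simple enough to check on a napkin, with the numbers quoted above and γ ≈ E/m in natural units (the 2 eV for a visible photon is my round figure):

```python
# Back-of-the-envelope check of the time-dilated photon lifetime,
# using the numbers from the discussion above (natural units, c = 1).

m_photon_eV = 1e-18   # experimental upper bound on the photon mass
tau_rest_yr = 3.0     # rest-frame lifetime for the largest allowed mass
E_visible_eV = 2.0    # typical energy of a visible-light photon

gamma = E_visible_eV / m_photon_eV   # time-dilation factor E/m
tau_lab_yr = gamma * tau_rest_yr

print(f"gamma ≈ {gamma:.1e}")                          # about 2e18
print(f"lab-frame lifetime ≈ {tau_lab_yr:.1e} years")  # about 6e18
```

A lab-frame lifetime of some 10^18 years for visible light comfortably exceeds the age of the universe, which is why a 3-year rest-frame lifetime is not in conflict with anything we observe.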

It’s rare to find such a straight-forward and readable paper addressing an interesting question in particle physics. “Cute” was really the only word that came to my mind.

Monday, July 15, 2013

More mysteries in cosmic rays, and a proposed solution

The most energetic particle collisions that we observe on our planet are created by particles from outer space that hit atomic nuclei in Earth’s upper atmosphere. The initial particle produces a large number of secondary particles which decay or scatter again, creating what is called a cosmic ray shower. The shower rains down on the surface, where it is measured in large arrays of detectors. The challenge for the theoretical physicist is to reconstruct the cosmic ray shower so that it is compatible with all data. In practice this is done with numerical simulations which incorporate the knowledge about particle physics that we have from collider experiments.

Cosmic ray shower, artist's impression. Source: ASPERA

One of the detectors is the Pierre Auger Observatory whose recent data has presented some mysteries.

One mystery we already discussed previously. The “penetration depth” of the shower, ie the location where the maximal number of secondary particles is generated, doesn’t match expectation. It doesn’t match when one assumes that the primary particle is a proton, and Shaham and Piran argued that it can’t be matched either by assuming that the primary is some nucleus or a composite of protons and nuclei. The problem is that using heavier nuclei as primaries would change the penetration depth to fit the data, but at the expense of the width of the distribution no longer fitting the data. Back then, I asked the authors of the paper if they could give me a confidence level so I’d know how seriously to take this discrepancy between data and simulation. They never came back to me with a number though.

Now here’s an interesting new paper on the arXiv that adds another mystery: Pierre Auger sees too many muons.


In the paper, the authors go through possible explanations for this mismatch between data and our understanding of particle physics. They discuss the influence of several parameters on the shower simulation and eventually identify one that has the potential to influence both the penetration depth and the number of muons. This parameter is the total energy in neutral pions.

Pions are the lightest mesons, that is, particles composed of a quark and an anti-quark. They are produced abundantly in highly energetic particle collisions. Neutral pions have a very short lifetime and decay almost immediately into photons. This means essentially all energy that goes into neutral pions is lost for the production of muons. Besides the neutral pion there are two charged pions, and the more energy is left for these and other hadrons, the more muons are produced in the end. Reducing the fraction of energy in neutral pions also changes the rate at which secondary particles are produced, and with it the penetration depth.
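The logic of that paragraph can be made semi-quantitative with the standard Heitler-Matthews toy model of hadronic showers. The multiplicity and critical energy below are generic assumed values, and `muon_number` is a hypothetical helper for illustration, not anything from the paper under discussion:

```python
import math

# Heitler-Matthews toy model: each interaction produces n_tot pions; the
# energy fraction f_neutral goes into neutral pions (which decay to photons
# and are lost for muon production), the rest feeds charged pions that
# re-interact until they drop below the critical energy xi_c and decay to
# muons. All numbers are assumed for illustration, not fits to Auger data.
def muon_number(E0_GeV, f_neutral, n_tot=50, xi_c_GeV=20.0):
    """Estimated number of muons reaching the ground."""
    beta = math.log((1.0 - f_neutral) * n_tot) / math.log(n_tot)
    return (E0_GeV / xi_c_GeV) ** beta

E0 = 1.0e10  # a 10^19 eV primary, in GeV
standard = muon_number(E0, f_neutral=1.0 / 3.0)  # equal sharing among pion species
suppressed = muon_number(E0, f_neutral=0.15)     # neutral-pion production suppressed
print(suppressed / standard)  # > 1: less energy in neutral pions, more muons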

This of course raises the question of why the total energy in neutral pions should be smaller than current shower simulations predict. In their paper, the authors suggest that a possible explanation might be chiral symmetry restoration.

The breaking of chiral symmetry is what accounts for the biggest part of the masses of nucleons. The pions are the (pseudo) Goldstone bosons of that broken symmetry, which is why they are so light and ultimately why they are produced so abundantly. Pions are not exactly massless, hence “pseudo”, because chiral symmetry is only approximate. The chiral phase transition is believed to lie close to the confinement transition, the transition from a medium of quarks and gluons to color-neutral hadrons. For all we know, it takes place at a temperature of approximately 150 MeV. Above that temperature chiral symmetry is “restored”.

In their paper, the authors assume that the cosmic ray shower produces a phase with chiral symmetry restoration which suppresses the production of pions relative to baryons. They demonstrate that this can be used to fit the existing data, and it fits well. They also make a prediction that could be used to test this model, which is a correlation between the number of muons and the penetration depth in individual events.

They make it very clear that they have constructed a “toy model” that is quite ad hoc and mainly meant to demonstrate that the energy fraction in neutral pions is a promising parameter to focus on. Their model raises some immediate questions. For example, it isn’t clear to me in which sense a cosmic ray shower produces a “phase” in any meaningful way, and they also don’t discuss to what extent their assumption about the chirally restored phase is compatible with data we have from heavy ion physics.

But be that as it may, it seems that they’re onto something and that cosmic rays are about to teach us new lessons about the structure of elementary matter.

Thursday, June 27, 2013

Passing through cosmic walls

Foam. Image source: DoITPoMS
Axions are hypothetical particles that are presently being searched for as possible dark matter candidates. The axion is a particle associated with the spontaneous breaking of a symmetry in the early universe. Unlike the case of the Higgs field, there can be a large number of ground states for the axion field. These states all have the same energy, but different values of the field. Since the ground states all have the same energy, they can coexist with each other, filling the universe with patches of different values of the axion field, separated by boundaries called 'domain walls'.

The best visualization that came to my mind is a foam-like structure that fills the universe, though you shouldn't take this comparison too seriously.

At the domain walls, the axion field has to change values in order to interpolate between the different domains. This position-dependence of the field, however, creates a contribution to the energy density. Since the energy density of the domain walls decays more slowly with the expansion of the universe than the energy density of ordinary matter, this can become problematic, in the sense that it comes into conflict with observation.
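For intuition about why the walls eventually dominate, a standard back-of-the-envelope scaling argument (textbook cosmology, not anything specific to this model) goes like this: a wall with fixed surface tension σ stretched across a comoving region carries an energy growing with its area, proportional to a², inside a volume growing proportional to a³, so

```latex
\rho_{\rm wall} \sim \frac{\sigma\, a^2}{a^3} = \frac{\sigma}{a},
\qquad
\rho_{\rm matter} \sim \frac{1}{a^3}
\qquad\Rightarrow\qquad
\frac{\rho_{\rm wall}}{\rho_{\rm matter}} \sim a^2 .
```

The relative weight of the walls thus grows quadratically with the scale factor, which is why a stable wall network quickly runs into conflict with observation.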

There are various ways to adjust these models or to pick the parameter ranges so that the domain walls do not appear to begin with, decay quickly, or are unlikely to be present in our observable universe. These are the most commonly used strategies for those interested in the axion as a particle. But in recent years there has also been increasing interest in using the domain walls themselves as gravitational sources, and so it has been suggested that they might play the role of dark energy or contribute to dark matter.

In an interesting paper that recently appeared in PRL, Pospelov et al lay out how we could measure whether planet Earth has passed through such a domain wall:
    How do you know if you ran through a wall?
    M. Pospelov, S. Pustelny, M. P. Ledbetter, D. F. Jackson Kimball, W. Gawlik, D. Budker
    Phys. Rev. Lett. 110, 021803 (2013)
    arXiv:1205.6260 [hep-ph]
(Apparently the arXiv-title did not survive peer review.)

The idea is to use the coupling of the gradient of the axion field, which is non-zero at the domain walls, to the spin of standard model particles. Passing through a domain wall would ever so slightly change the orientation of spins and align them into one direction.

This could be measured with devices normally used for very sensitive measurements of magnetic fields: optical magnetometers. Optical magnetometers consist essentially of a bunch of atoms in gaseous form, typically alkali metals with one electron in the outer shell. These atoms are pumped with light into a polarized state of higher angular momentum, and then their polarization is measured again with light. This measurement is very sensitive to any change of the atomic spins' orientation, which may be caused by magnetic fields - or domain walls.

In the paper, and in a more recent follow-up paper, they estimate that presently existing technology can test interesting parameter ranges of the model once other known constraints (mostly astrophysical) on the coupling of the axion have been taken into account. It should be mentioned though that they consider not a pure QCD axion, but a general axion-like field, in which case the relation between the mass of the particle and its coupling is not fixed.

The sensitivity to the passage of a domain wall can be increased by not only reading out one particular magnetometer, but by monitoring many of them at the same time and looking for correlations between them. This way one is not only better able to pick out a signal from the noise, but from the correlation timing one could also determine the velocity with which the domain wall passes.
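To illustrate how timing correlations between several stations encode the wall's velocity, here is a minimal sketch of a plane-crossing fit. The station layout and the function `wall_speed` are made up for illustration; this is not the actual analysis pipeline of the magnetometer network:

```python
import numpy as np

def wall_speed(positions, times):
    """Least-squares estimate of a planar wall's speed from crossing times.

    Model: t_i = t0 + s . x_i, where s = n/v is the "slowness" vector
    (n the unit normal of the wall, v its speed). With four or more
    stations, the offset t0 and the three components of s can be fitted,
    and the speed follows as v = 1/|s|.
    """
    A = np.hstack([np.ones((len(times), 1)), positions])
    coeffs, *_ = np.linalg.lstsq(A, times, rcond=None)
    slowness = coeffs[1:]
    return 1.0 / np.linalg.norm(slowness)

# synthetic example: a wall sweeping along the x-axis at 300 km/s
stations = np.array([[0, 0, 0], [1e6, 0, 0], [0, 1e6, 0], [5e5, 5e5, 1e5]], float)
crossing_times = stations[:, 0] / 3.0e5  # seconds
print(wall_speed(stations, crossing_times))  # recovers ~3e5 m/s, i.e. 300 km/s
```

With noisy real-world timestamps the same least-squares fit still works; the residuals then quantify how plane-like the passing disturbance actually was.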

I think this is an interesting experiment that nicely complements existing searches for dark matter. I also like it for its generality. Maybe while searching for axion domain walls, we'll find something else that we're moving through and that happens to couple very weakly to spins.

Friday, February 08, 2013

Book review "The Edge of Physics" by Anil Ananthaswamy

The Edge of Physics: A Journey to Earth's Extremes to Unlock the Secrets of the Universe
By Anil Ananthaswamy
Mariner Books (January 14, 2011)

In "The Edge of Physics", Ananthaswamy takes the reader on a trip to some of the presently most exciting experiments in physics. The Soudan Mine where physicists are looking for direct detection of dark matter, the Baikal Lake with its underwater neutrino detectors, the Square Kilometre Array in South Africa, the VLT in Chile, the IceCube Neutrino Observatory at the South Pole, and others more before he finishes his travels at CERN in Geneva.

Along this trip one learns a lot not only about the scenery, but also about physics and the history of physics. Ananthaswamy doesn't add the experiments as an afterthought to elaborations on quantum mechanics and special relativity; instead, the experiments and the people working on them take the lead. His theoretical explanations are brief but to the point. The appendix contains the shortest summaries of the Standard Model and the Concordance Model that I've ever seen. He explains enough so the reader can understand which new physics the experiments are looking for and what its relevance is, but always quickly comes back to show how this search proceeds in reality.

I found this book hugely enjoyable because it is not your typical popular science book. I didn't have to make my way through yet another chapter that promises to explain general relativity without equations, and I learned quite a few things along the way. It's amazing how many details experimentalists have to think about that would never have occurred to me. Ananthaswamy tells stories of people who found their destiny, stories of courage, stories of trial and error, and some quite dramatic accidents and near-accidents. It's a very well written narrative.

I have only one complaint about this book: it would have very much benefited from some illustrations, be it to explain the CMB power spectrum, the generations and families in the Standard Model, the thermal history of the universe, or sketches of the experiments and their parts.

In summary, I can recommend this book to everybody with an interest in contemporary physics or the history of physics. If you have no clue about particle physics or cosmology whatsoever, you might not be able to follow some of the explanations, which are really brief. But even then you'll still take something away from this book. I'd give "The Edge of Physics" 5 out of 5 stars.

Tuesday, October 30, 2012

ESQG 2012 - Conference Summary

Conference Photo: Experimental Search for Quantum Gravity 2012.

The third installment of our conference "Experimental Search for Quantum Gravity" just completed. It was good to see both familiar faces and new ones, sharing a common interest and excitement about this research direction. This time around the event was much more relaxing for me because most of the organizational work was done, masterfully, by Astrid Eichhorn, and the administrative support at Perimeter Institute worked flawlessly. In contrast to 2007 and 2010, this time I also gave a talk myself, albeit a short one, about the paper we discussed here.

All the talks were recorded and can be found on PIRSA. (The conference-collection tag isn't working properly yet, I hope this will be fixed soon. You'll have to go to "advanced search" and search for the dates Oct 22-26 to find the talks.) So if you have a week of time to spare don't hesitate to blow your monthly download limit ;o) In the unlikely event that you don't have that time, let me just tell you what I found most interesting.

For me, the most interesting aspect of this meeting was the recurring question about the universality of effective field theory. Deformed special relativity, you see, has returned in the reincarnation of "relative locality," boldly abandoning locality altogether after the problem could no longer be ignored. It still doesn't have, however, an effective field theory limit. A cynic might say "how convenient," considering that 5th order operators in Lorentz-invariance violating extensions of the standard model are so tightly constrained you might as well call them ruled out.

If you're not quite as cynical, however, you might take into account the possibility that the effective field theory limit indeed just does not exist. That, it was pointed out repeatedly -- among others by David Mattingly, Stefano Liberati and Giovanni Amelino-Camelia -- would actually be more interesting than evidence for some higher order corrections. If we find data that cannot be accommodated within the effective field theory framework, such as evidence for delayed photons without evidence for 5th order Lorentz-invariance violating operators, that would give us quite something to think about.

I agree: Clearly one shouldn't stop looking just because one believes to know nothing can be found. I have to add however that the mere absence of an effective field theory limit doesn't convince me there is none. I want to know why such a limit can't be made before I believe in this explanation. For all I know it might be absent just because nobody has made an effort to derive it. After all there isn't much of an incentive to do so. As the German saying goes: Don't saw off the branch you're sitting on. That having been said, I understand that it would be exciting, but I'm too skeptical myself to share the excitement.

A related development is the tightening of constraints on an energy-dependence of the speed of light. Robert Nemiroff gave a talk about his and his collaborators' recent analysis of the photon propagation time from distant gamma ray bursts (GRB). We discussed this paper here. (After some back and forth it finally got published in PRL.) The bound isn't the strongest in terms of significance, but makes it to 3σ. The relevance of this paper lies in the proposal of a new method to analyze the GRB data, one that, given enough statistics, will allow for tighter constraints. And, most importantly, it delivers constraints both on scenarios in which the speed of highly energetic photons might be slower and on those in which it might be faster than that of photons with lower energy. For an example of how that is supposed to happen, see Laurent Freidel's talk.
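For a sense of the numbers involved, here is a back-of-the-envelope estimate using the common linear parametrization of an energy-dependent photon speed. This is a generic order-of-magnitude sketch, not the method of the paper, and it ignores the cosmological redshift integration:

```python
# Arrival-time delay for an energy-dependent speed of light, using the
# common linear parametrization v(E) ~ c (1 - E/E_QG), where the quantum
# gravity scale E_QG is here taken to be the Planck energy.
GPC_IN_M = 3.086e25      # one gigaparsec in meters
C = 2.998e8              # speed of light in m/s
E_PLANCK_GEV = 1.22e19   # Planck energy in GeV

def delay_seconds(delta_E_GeV, distance_Gpc, E_QG_GeV=E_PLANCK_GEV):
    """Leading-order time delay between photons differing by delta_E_GeV."""
    return (delta_E_GeV / E_QG_GeV) * (distance_Gpc * GPC_IN_M) / C

print(delay_seconds(10.0, 1.0))  # ~0.08 s for a 10 GeV photon from 1 Gpc
```

Delays of a tenth of a second against GRB light curves with millisecond structure are what makes these bursts such sensitive probes in the first place.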

A particularly neat talk was delivered by Tobias Fritz who summarized a simple proof that a periodic lattice cannot reproduce isotropy for large velocities, and that without making use of an embedding space. Though his argument works so far for classical particles only, I find it interesting because with some additional work it might become useful to quantify just how well a discretized approach reproduces isotropy or, ideally, Lorentz-invariance, in the long-distance limit.

Another recurring theme at the conference was dimensional reduction at short distances, which has recently become quite popular. While there are by now several indications (most notably from Causal Dynamical Triangulation and Asymptotically Safe Gravity) that at short distances space-time might have fewer than three spatial dimensions, the ties to phenomenology are so far weak. It will be interesting to see though how this develops in the coming years, as clearly the desire to make contact with experiment is present. Dejan Stojkovic spoke on the model of "Evolving Dimensions" that he and his collaborators have worked on and that we previously discussed here. There has, however, for all I can tell, not been progress on the fundamental description of space-time necessary to realize these evolving dimensions.

Noteworthy is also that Stephon Alexander, Joao Magueijo and Lee Smolin have for a while now been poking around on the possibility that gravity might be chiral, i.e. that there is an asymmetry between left- and right-handed gravitons, which might make itself noticeable in the polarization of the cosmic microwave background. I find it difficult to tell how plausible this possibility is, though Stephon, Lee and Joao all delivered their arguments very well. The relevant papers, I think, are this and this.

I very much enjoyed James Overduin's talk on tests of the equivalence principle, as I agree that this is one of the cases in which pushing the frontiers of parameter space might harbor surprises. He has a very readable paper on the arxiv about this here. And Xavier Calmet is among the brave who haven't given up hope on seeing black holes at the LHC, arguing that the quantum properties of these objects might not be captured by thermal decay at all. I agree with him of course (I pointed this out already in this post 6 years ago), yet I can't say that this makes me expect the LHC will see anything of that sort. More details about Xavier's quantum black holes are in his talk or in this paper.

As I had mentioned previously, the format of the conference this year differed from the previous ones in that we had more discussion sessions. In practice, these discussion sessions turned into marathon sessions with many very brief talks. Part of the reason for this is that we would have preferred the meeting to last 5 days rather than 4, but that wasn't doable with the budget we had available. So, in the end, we had 5 days' worth of talks squeezed into 4 days. There's merit to short and intense meetings, but I'll admit that I prefer less busy schedules.