
Wednesday, March 17, 2010

What I learned today

We had an interesting talk today by Antti Niemi from Uppsala University modestly titled "Can Theory of Everything Explain Life?" It was about string theory of a somewhat different kind. The string in this case is a protein and what the theory should explain is its folding. The talk was basically a summary of this paper: "A phenomenological model of protein folding." In a nutshell the idea is to put a U(1) gauge theory on a discretized string (the protein), define a gauge-invariant free energy and minimize it. The claim is that this provides a good match to available data.

I know next to nothing about protein folding, so it's hard for me to tell how good the model is. From the data he showed, I wasn't too impressed that one can fit a scatter plot with two maxima by a function that has 5 free parameters, but then that fit is not in the paper and I didn't quite catch the details. One thing I learned from this talk though is that PDB sometimes doesn't stand for Particle Data Book, but for Protein Data Bank. If you know more about protein folding than I, let me know what you think. I found it quite interesting.

Something else that I learned in the talk is that the DNA of the bacterium Escherichia coli is a closed string rather than an open string (see picture). I think I had heard that before. There are enzymes that act on the DNA, so-called topoisomerases, that don't change the DNA sequence but the topology of the string. In other words, these enzymes can produce knots. Simple knots, but still. I think I had also heard that before. However, I thought the topology change of the DNA is a process that is useful for the winding/unwinding and reading/reproduction of the DNA. It seems, however, that the topology of the DNA affects the synthesis of proteins, in particular the folding and function of the proteins. This probably isn't news for anybody who works in the field, but I actually didn't know that the topology of the DNA, not only its sequence, has functional consequences. Alas, that flashed by only briefly and wasn't really the content of the talk. But I find it intriguing.

Monday, May 25, 2009

Networks

Networks are everywhere

The study of networks and the related branch of mathematics, graph theory, have received an increasing amount of attention during the past few years. It's a highly interdisciplinary area, connecting the natural sciences with the social sciences, mathematics, and computer science. Did I leave out anybody who is interested in anything? Not so surprising, since networks can be found everywhere throughout our closely connected world: from computer networks and electrical power grids to social networks, neural networks, and food webs - even the Bible has a network:
[Click to enlarge. Via.]


Just consider the number of different networks woven through your life: there are the route maps of airlines, the internet and its virtual link structure, the world wide web (for example the blogosphere), the production and distribution processes of consumer goods, electricity networks and, of course, sexual networks. And though they differ in many aspects, they share a similar underlying fundamental structure that can be mathematically captured and analyzed. Needless to say, with our rapidly improving capacities for storing and handling large amounts of data, the study of networks has flourished tremendously within the last decade.

What is a network?

Pictorially speaking a network is a collection of dots connected by lines. Physicists tend to call the dots nodes and the lines links, mathematicians call it a graph with vertices and edges, but it's the same thing really. The number of connections a dot has is also called the 'degree' of the node or, if you prefer, the 'valency' of the vertex. The great beauty of networks is their generality. From such a simple mathematical description one arrives at a great variety of structures and phenomena. 

There are different properties a network can have:
  • The links between the nodes can just be connectors that are on or off, or they can be arrows indicating a preferred direction, called a 'directed graph'. Links on websites for example have a direction, friendship networks on Facebook don't.
  • Both the nodes and the links can carry information, eg about what is produced in the nodes or transmitted in the links.

There are two important special types of networks:

  • Random: A random network is one in which pairs of nodes are repeatedly picked at random from a collection and connected. 
  • Scale free: A scale-free network is one in which the number of nodes with a given degree follows a specific relation to that degree, called a 'power law'. In these networks, there are a lot of nodes with only a few links, and a few nodes with a lot of links - the latter are also called 'hubs'. Real-world networks, eg the www or airline networks, often turn out to be scale-free (to some approximation), with consequences for their robustness and vulnerability, eg to the spread of viruses or the dissemination of information.
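To make the 'scale free' idea concrete, here is a minimal Python sketch of preferential attachment (the Barabási-Albert mechanism), one standard way such power-law degree distributions arise. The function names and parameters are of course just illustrative:

```python
import random

def preferential_attachment(n, m=2, seed=42):
    """Grow a graph: each new node links to m existing nodes, chosen
    with probability proportional to their current degree."""
    rng = random.Random(seed)
    # Start from a small complete core of m+1 nodes.
    edges = [(i, j) for i in range(m + 1) for j in range(i)]
    # 'stubs' lists every edge endpoint, so sampling uniformly from it
    # is automatically proportional to degree.
    stubs = [v for e in edges for v in e]
    for new in range(m + 1, n):
        targets = set()
        while len(targets) < m:
            targets.add(rng.choice(stubs))
        for t in targets:
            edges.append((new, t))
            stubs += [new, t]
    return edges

edges = preferential_attachment(2000)
degree = {}
for u, v in edges:
    degree[u] = degree.get(u, 0) + 1
    degree[v] = degree.get(v, 0) + 1

# Heavy tail: many low-degree nodes, a few highly connected hubs.
avg = sum(degree.values()) / len(degree)
print(max(degree.values()), round(avg, 1))
```

Running this, the maximum degree comes out many times larger than the average - the hubs that a purely random network of the same size would almost never produce.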
Other important features of a network's structure are highly connected clusters with sparse connections (gaps) between them (there are different algorithms for their identification), and flows around loops. Particularly interesting are growth mechanisms and the phase transitions that can occur with that growth.
Network Science - what is it good for

So why am I telling you this? Because the analysis of networks can help us understand many aspects of the inanimate and animate world whose interdependence obscures local analysis. In particular, the growth of specific structures and the dynamical properties of networks play an increasingly important role in managing large-scale effects. Understanding the conditions necessary for resilience - be it of a social, economic or ecological network - is essential to ensure the stability of these networks that are vital parts of our lives.

While my enthusiasm about "complex systems" is limited due to the vagueness and ambiguity of a lot of this research, network science is its backbone. Needless to say, there is some overenthusiasm in this area too. One thing I would like to understand better, for example, is what the limits are of modeling systems as networks. With enough abstraction, I can probably describe everything as some sort of graph. But under which circumstances is that insightful?

This post was inspired by last week's colloquium by Raissa D'Souza from UC Davis on "Growing, Jamming and Changing Phase" that you find on PIRSA 09050004.



Further Reading

Thursday, March 12, 2009

GLOBE at Night

I've learned a new expression: Citizen Science. It means that interested laypeople contribute to a research project, either by collecting data, or by allocating computing time on their PCs to the analysis of huge sets of raw data - SETI@Home is an example of the latter kind of Citizen Science. Sabine and I talked about this some days ago, and just then, I came across a wonderful example of the former kind: GLOBE at Night.

The idea of this project is to establish a map of "light pollution", the illumination of the night sky caused by artificial light sources on the ground. Light pollution is a nuisance to everyone who wants to marvel at the stars, and it can be harmful to the biology and ecology of animals in the wild.

To map the extent of light pollution over the planet, participants of GLOBE at Night are simply asked to look at the constellation of Orion and report what they see. Last night, what I could see from the patio was something like this:


which means visibility of stars corresponding to a Magnitude 3 Chart. But then, this result may have been skewed a bit, as there was a huge natural source of light pollution - the nearly full moon. To avoid this interference by the moonlight, the actual observation period is scheduled towards the next New Moon, between March 16 and March 28.
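For readers wondering what a "Magnitude 3 Chart" means quantitatively: the magnitude scale is logarithmic, with 5 magnitudes corresponding to a factor of 100 in brightness. A tiny sketch (the dark-sky limiting magnitude of about 6 is my assumption here, not a number from GLOBE at Night):

```python
def flux_ratio(m1, m2):
    """Brightness ratio of two stars from their magnitudes:
    5 magnitudes correspond to a factor of 100 in flux."""
    return 100 ** ((m2 - m1) / 5)

# A limiting magnitude of 3 vs. a dark-sky limit of ~6 means the faintest
# stars I could still see are ~16 times brighter than the faintest stars
# visible under a truly dark sky.
print(round(flux_ratio(3, 6)))  # ~ 16
```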

So, we all can become "Citizen Scientists" by reporting our view of Orion to GLOBE at Night! I just hope my view will soon be better again than tonight:

Cloudy Sky.





Monday, February 09, 2009

Singularities in your Kitchen

When Sabine was preparing her talk about black holes and information loss, we thought about other examples of singularities in physical theories besides the centres of black holes in General Relativity. Somehow, the topic seems to pursue me since then - the current issue of the Scientific American welcomes me with Naked Singularities on the title page, and there is even a newly created Singularity University.

Droplet Singularity: Photograph of a drop of a mixture of glycerol in water. The diameter of the drop is about 20 mm. The photo on the right shows the neck in detail. (From "A Cascade of Structure in a Drop Falling from a Faucet" by X. D. Shi, Michael P. Brenner, and Sidney R. Nagel, Science 265 (1994) 219-222, via jstor.)
But I was fascinated most by what I've learned since then about singularities in fluid dynamics - singularities that actually occur in the kitchen, every time a drop of water falls off the tap.

A singularity in the mathematical formulation of a physical theory means that a variable which represents a physical quantity becomes infinite within a finite time. This is, actually, not that rare a phenomenon in non-linear theories. For example, in General Relativity, Einstein's field equations when applied to the gravitational collapse of a very massive star develop infinities in density and curvature at the centre of the system. Another famous example of a non-linear theory is fluid dynamics as described by the Navier-Stokes equations - and this is also a habitat of nice singularities.

For example, when a thin jet of water decays into drops, the breakup is driven by surface tension, which tries to reduce the surface area. Such a reduction can be realised by diminishing the radius of the jet. Shrinking, triggered by tiny fluctuations of the surface, becomes more and more localised, and eventually, the jet breaks in finite time. The local radius goes to zero, the local flow velocity and surface curvature diverge, and the surface is not smooth anymore. Something very similar happens when a drop forms and pinches off from a tap, as can be seen nicely in the photograph taken from the paper by Shi, Brenner, and Nagel. Breakup occurs just above the spherical droplet, where the radius of the thread of fluid shrinks to zero and the surface develops a kink.

Of course, a singularity in the Navier-Stokes equations at the pinch-off of a droplet doesn't mean anything mysterious. But it is a hint that in this situation and at small enough length scales, the equations do not make sense anymore, or at least disregard essential physics. In this case, we know of course that the molecular structure of matter becomes important, replacing the continuum description of matter implied by the Navier-Stokes equations. On the scale of molecules, the concept of a sharp and smooth surface is ambiguous, but already at length scales between 10 and 100 nanometer, van der Waals forces between molecules come into play which are not considered in the continuum formulation.

It's a bit of a stretch to say that some similar effect might remove the singularity at the centre of a black hole, but on a very general level a similar breakdown of the theory that predicts a singularity might occur. In this case it would be General Relativity to be replaced by a theory of quantum gravity that accurately describes the region of strong curvature and high density.



Here are a few papers about singularities in fluid dynamics I found interesting:

If you know of other examples of singularities in fluid dynamics, or in other physical systems, I'll be glad to collect them in the comments!




Tuesday, February 03, 2009

Corot-Exo-7b: A Venus in another World

German science blogs today are abuzz with reports about the discovery of an Earth-like planet around a Sun-like star in the constellation of Monoceros, at a distance of about 450 light years.


The newly discovered planet Corot-Exo-7b transiting in front of its star (left, illustration by Klaudia Einhorn), and Venus in front of the disk of the Sun on June 8, 2004 (right, photo by Martin Sloboda). As the sizes of both the stars and the planets are similar, a transit of Corot-Exo-7b would look very similar to the Venus transit.

The planet has a radius 1.75 times that of the Earth, and six to thirteen times the Earth's mass. The star is a main sequence star with roughly the same composition as the Sun, with slightly less mass and a slightly lower temperature. However, the distance of the planet from the star is only 1.7 percent of the distance of the Earth from the Sun - hence the revolution period, or "year", of the planet is only 20 hours, and its surface temperature is estimated to be between 1,000 and 1,500 degrees Celsius.
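The 20-hour year can be checked with Kepler's third law. A back-of-the-envelope sketch (the stellar mass of 0.9 solar masses is my rough reading of "slightly less mass than the Sun"):

```python
import math

def orbital_period_hours(a_AU, m_star_solar):
    """Kepler's third law in solar units: P[yr]^2 = a[AU]^3 / M[M_sun]."""
    period_years = math.sqrt(a_AU ** 3 / m_star_solar)
    return period_years * 365.25 * 24

# 1.7 percent of the Earth-Sun distance, around a slightly lighter star.
print(round(orbital_period_hours(0.017, 0.9)))  # ~ 20 hours
```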

The planet was discovered by the European satellite mission Corot - hence its name, Corot-Exo-7b, meaning the first planet in the 7th planetary system discovered by Corot. Corot uses the transit method to search for new planets: When a planet passes in front of the disk of a star, the light of the star is slightly dimmed.

Here is the light curve of Corot-Exo-7, the star around which the planet is in orbit, showing a drop in brightness of the order of 10⁻⁴:



Mass, radius, and orbital parameters of the planet could be extracted from this measurement and from further observations and data analysis using the radial velocity method - the method which had led to the first detection of an exoplanet around a Sun-like star back in 1995.
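The depth of the dip in the light curve is roughly the ratio of the projected disk areas of planet and star. A quick estimate (assuming a roughly Sun-sized star, as the post suggests):

```python
R_EARTH_KM = 6371.0
R_SUN_KM = 695700.0

def transit_depth(r_planet_km, r_star_km):
    """Fraction of starlight blocked during transit: the ratio of the
    projected disk areas, (R_planet / R_star)**2."""
    return (r_planet_km / r_star_km) ** 2

# Corot-Exo-7b: radius 1.75 Earth radii, star roughly Sun-sized.
depth = transit_depth(1.75 * R_EARTH_KM, R_SUN_KM)
print(f"{depth:.1e}")   # ~ 2.6e-4, consistent with the 1e-4-level dip
```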

So, it's indeed the first Earth-like planet found around a Sun-like star - unfortunately, at a temperature close to the melting point of iron.









Thursday, January 29, 2009

Water is blue ... because water is blue

  Pacific Ocean near Santa Barbara, California


One of the most appealing aspects of the ocean is the colour of the water, ranging from a greyish green to deep blue.

But wait a minute: When I pour water in a glass, it is a clear, transparent liquid. So, what is the cause of the blue colour of the sea? Is it the reflection of the blue sky, perhaps?

The answer is simple, and perhaps surprising: Water is blue, because water is blue.

Blue Oceans,


Actually, water is quite a transparent liquid, but not perfectly transparent. All substances absorb light to a certain degree, and as a consequence, the intensity of a beam of light propagating through matter drops exponentially with distance, as described by the so-called Beer-Lambert law. Pure water appears transparent because it takes a distance of the order of metres to reduce the intensity of light passing through it by half. And, most important for the apparent colour of water, the absorption depends on the wavelength of light, hence on colour.

The blue curve in the following figure shows the so-called absorption spectrum of pure water (data via Optical Absorption of Water by Scott Prahl).



The absorption spectrum gives, on the vertical axis, the so-called absorption coefficient as a function of the wavelength of light (as measured outside of the medium). The area marked in yellow corresponds to the range of visible light, reaching from deep blue (at a wavelength of about 380 nanometer) to red (at a wavelength of about 760 nanometer). At the left of the visible spectrum lies the ultraviolet, and at the right, where the absorption curve is climbing and going through several bumps, the infrared.

The absorption coefficient is the inverse of the distance along which the intensity of light drops by a factor of e = 2.718..., and is measured in inverse centimetres. Hence, an absorption coefficient of a = 10⁻² cm⁻¹ means that it takes a distance of d = 1/a = 10² cm = 1 m for the intensity of light to drop by one e-folding.
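In code, the Beer-Lambert law and the numbers above look like this (a minimal sketch; the coefficient is the illustrative value from the text):

```python
import math

def transmitted_fraction(a_per_cm, distance_cm):
    """Beer-Lambert law: the surviving intensity fraction I/I0 = exp(-a*d)."""
    return math.exp(-a_per_cm * distance_cm)

# a = 1e-2 per cm near the blue absorption minimum,
# i.e. one e-folding per metre of pure water.
a_blue = 1e-2
print(transmitted_fraction(a_blue, 100.0))    # after 1 m: 1/e ≈ 0.37
print(math.log(2) / a_blue)                   # half-intensity distance ≈ 69 cm
```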

Now, as we can see, the absorption coefficient is very different at the red end of the visible spectrum than at the blue end. The absorption coefficient is plotted in the figure on a logarithmic scale, and indeed, absorption is about one hundred times stronger at the red end of the visible spectrum than at the minimum of the curve, which at a wavelength just below 500 nanometer still lies in the range of blue.

But this, of course, explains the intrinsic colour of water: when light passes through large amounts of water, its red component is absorbed the strongest, and the blue component the least - and hence, pure water appears to be blue.

... Vibrations,


Actually, the strong increase of the absorption coefficient of water towards the infrared not only causes the blue colour of the ocean. It is also intimately linked to the molecular structure of water.

Molecules of water consist of two hydrogen atoms bonded to one oxygen atom in a kinked shape. Water molecules are not completely rigid, but they can vibrate in different ways. The most important ways of shaking, or "vibrational modes", are a symmetric stretching, called ν1, a symmetric bending, called ν2, and an asymmetric stretching, called ν3:



As with any oscillatory system, vibrations are possible not just for these three modes, but also for higher harmonics - that is, overtones - and for combinations of different modes of oscillation. Indeed, bumps in the absorption curve of water can be identified with a combination of all three modes ("ν1 + ν2 + ν3"), with a combination of the first overtone of mode 1 and mode 3 ("2ν1 + ν3"), and with a combination of the second overtone of mode 1 and mode 3 ("3ν1 + ν3"). For the higher harmonics 2ν1 and 3ν1, the frequency of oscillation is higher, and hence, absorption occurs at shorter wavelengths.

... Heavy Water,


There is, interestingly, a very clever way to check experimentally this explanation of the blue colour of water by the vibration of its molecules: Just look at heavy water instead of normal water!

In heavy water, D2O, the hydrogen atoms contain deuterons instead of protons, and hence have double the mass of "normal" hydrogen atoms. The electromagnetic forces bonding the hydrogen to the oxygen, however, are the same for heavy water and normal water. But this means that the frequencies of the different vibration modes of the molecule shift to lower values. It's the same phenomenon as when different masses are fixed to a spring: the higher the mass, the lower the frequency of oscillation.

As a consequence, one would expect that the excitation of vibrations in the molecules of heavy water happens at lower frequencies than in normal water, hence at longer wavelengths. The increase of the absorption coefficient towards longer wavelengths could be expected to set in further in the infrared, barely touching the visible spectrum. And this is exactly what happens!
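The size of the shift is easy to estimate: for a harmonic oscillator, the frequency goes as the square root of stiffness over reduced mass. A rough sketch (treating a single O-H or O-D stretch in isolation, which is of course a crude approximation):

```python
import math

m_H, m_D, m_O = 1.0, 2.0, 16.0   # atomic masses in u (approximate)

def reduced_mass(m1, m2):
    return m1 * m2 / (m1 + m2)

# omega ~ sqrt(k / mu); the bond stiffness k is set by the electrons and is
# the same for H2O and D2O, so the absorption wavelength (inverse frequency)
# scales as sqrt(mu).
ratio = math.sqrt(reduced_mass(m_D, m_O) / reduced_mass(m_H, m_O))
print(round(ratio, 2))        # ≈ 1.37
print(round(1000 * ratio))    # 2nu1+nu3 band: ~1000 nm -> ~1370 nm
```

The crude estimate lands near the observed shift of the 2ν1 + ν3 bump from about 1,000 nm to about 1,300 nm, which is not bad for a two-line model.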

The following figure shows a measurement of the absorption spectra of normal and of heavy water, taken from WHY IS WATER BLUE? by Charles L. Braun and Sergei N. Smirnov, reproduced from J. Chem. Educ. 70(8) (1993) 612. The scale of the figures is linear, and the curves to the left are just scaled up for better visibility of the shape of the spectrum.



One can see that the bump corresponding to the mode 2ν1 + ν3 is shifted from a wavelength of about 1,000 nm in normal water to about 1,300 nm in heavy water. There is an analogous shift towards longer wavelengths in all other features, and as a result, the absorption spectrum of heavy water in the visible range is nearly flat.

But this means that there are no marked differences in the absorption of light of different colours by heavy water. Thus, heavy water, different from normal water, should be colourless. And indeed, as shown in this photo by Braun and Smirnov, this is really the case!

  While a long tube filled with normal water (left) looks blue due to the absorption of the red component of the visible spectrum, the tube filled with heavy water is colourless (from WHY IS WATER BLUE? by Charles L. Braun and Sergei N. Smirnov).


... and Real Oceans


Beautiful physics is hidden below the blue surface of the ocean. But when I tried to inform myself a bit about all this, I also learned that whole books have been written on the topic, and that "the complexity of sea water as a substance means that its optical properties are essentially different from those of pure water. Sea water contains numerous dissolved mineral salts and organic substances, suspensions of solid organic and inorganic particles, including various live microorganisms, and also gas bubbles and oil droplets. Many of these components [..] absorb or scatter photons." (Light Absorption in Sea Water by Bogdan Woźniak and Jerzy Dera, page 5).

Here is a comparison of the absorption spectra of samples of water taken from different places around the globe (Light Absorption in Sea Water, page 6).



Curve 5, which most resembles the absorption spectrum of pure water we have seen above, has been measured in a sample taken from the Tonga Trench in the Pacific Ocean, at a depth of 10,000 m. And curve 8, the uppermost flat one, has been measured in surface water from the Gulf of Riga in the Baltic Sea.

From the shape of this spectrum, I would guess the sea near Riga looks more grey than blue.



Edit: The first version of this post falsely claimed that a microwave oven heats up food by setting the molecules of water into vibration. That's not correct: Microwaves, with frequencies in the range between 0.3 GHz and 300 GHz, corresponding to wavelengths from 1 mm to 1 m, do not have enough energy to excite the vibrational modes of the water molecule. What the electromagnetic field at microwave frequencies does is shake the water molecules by grabbing them by their electric dipole moments, setting them into rotation. A detailed explanation can be found on Martin Chaplin's unique site, "Water Structure and Science", under Water and Microwaves.

Actually, the wavelength of microwaves is about a factor of 1000 longer than in the infrared and far infrared region where the vibrational absorption bands can be found. The vibrational bands in the infrared, though, make water vapour a strong greenhouse gas.

Thanks to all our readers who have pointed out the mistake to me, especially CIP and Jay!





Monday, July 07, 2008

Recreating the Big Bang?

With the start of the Large Hadron Collider coming closer, the topic is present in the media more than ever. A commonly used motivation is the alleged recreation of the Big Bang (see illustration to the right).
Peter Woit recently mentioned that Martinus Veltman, winner of the '99 Nobel Prize in Physics, “described claims that the LHC will 'recreate the Big Bang' as 'idiotic', and as 'crap'. He said that this is 'not science', but 'blather', and that the field would come to regret this, arguing that if you start selling the LHC with pseudo-science, you will end up paying for it.”

I am totally with Veltman. But what is behind the story? What does the LHC have to do with the Big Bang?


The Making Of
It is interesting to trace back how this inaccurate description developed. In February 2000, BBC News wrote on CERN's SPS:
    'Little Bang' creates cosmic soup
    10 February, 2000

    Scientists have created what they describe as a "Little Bang" inside which are the conditions that existed a thousandth of a second after the birth of the Universe in the so-called Big Bang.

Two years later, CNN.com writes about RHIC:
    'Little' Big Bang stumps scientists
    November 20, 2002

    Smashing together atoms to produce conditions similar to those in the first cosmic moments, scientists came up with some startling results that could force them to reexamine their understanding of the universe.

Five more years later we can read on MSNBC about the LHC:
    Teams toil underground to re-create big bang
    March 2, 2007

    It is a $4 billion instrument that scientists at the European Center of Nuclear Research, or CERN, hope to use to re-create the big bang — believed to be the event that caused the beginning of the universe — by crashing protons together at high speed.

Within 7 years we have thus moved from a 'little bang' via a 'little Big Bang' to a complete recreation of the Big Bang, a story which catches on. The TimesOnline writes “The machine, the Large Hadron Collider (LHC), aims to recreate the conditions of the Big Bang, when the universe is thought to have exploded into existence about 14 billion years ago.”, the German magazine Stern titles "Large Hadron Collider" - Urknall im Labor (Big Bang in the Lab), and for the Telegraph the LHC turned into a “Big Bang Machine” that “could destroy the planet” [1].

True, the LHC speeds up particles to higher energies than the SPS, but this is still far from anything resembling the Big Bang.



The Big Bang

The Big Bang is believed to be the first moment of the universe. Technically, it takes place at arbitrarily high energy density. It is commonly expected, however, that quantum gravity becomes important in this regime, so the density is neither infinitely high nor the volume arbitrarily small. But still, the temperatures for this to happen would be somewhere in the Planckian regime, that is at average energies of about 10¹⁶ TeV.

To our best current understanding, the universe then undergoes a rapid phase of expansion during which all energy densities drop and all matter cools. With dropping temperature, we pass the scale above which we expect Grand Unification, and the three forces of the standard model separate. This is believed to happen somewhere around 10¹³ TeV. Then, around a TeV, there is the electroweak phase transition. At some hundred MeV, that is about 10⁻⁴ TeV, quarks start to form bound states like protons and neutrons. This is commonly called hadronization. It is this transition that we can now hope to study in appropriately designed collider experiments [2].

After hadronization, at temperatures around one MeV (10⁻⁶ TeV), atomic nuclei can form - a process called 'nucleosynthesis'. Around this temperature, the only weakly interacting neutrinos also decouple [3]. At temperatures of the order of an eV (10⁻¹² TeV), atoms form and photons decouple. These photons have been traveling freely since this so-called 'freeze-out'. We can observe them today in the cosmic microwave background with an average temperature of around 3 K (a few 10⁻⁴ eV), because they have been further redshifted by a factor of 1000 since freeze-out. After freeze-out, structure formation sets in; first stars, galaxies, solar systems and planets form. Some of these planets might carry intelligent life; some might even have a blogosphere.
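Since the timeline above mixes temperatures in kelvin with energies in eV and TeV, a two-line conversion via E ≈ k_B T may help; the numbers are order-of-magnitude only:

```python
k_B = 8.617e-5   # Boltzmann constant in eV per kelvin

def kelvin(energy_eV):
    """Temperature at which the typical thermal energy k_B * T equals energy_eV."""
    return energy_eV / k_B

def eV(temperature_K):
    """Typical thermal energy k_B * T for a given temperature."""
    return k_B * temperature_K

print(f"{kelvin(100e6):.1e} K")   # hadronization, ~100 MeV -> ~1e12 K
print(f"{kelvin(1e28):.1e} K")    # Planck scale, ~1e16 TeV -> ~1e32 K
print(f"{eV(3):.1e} eV")          # today's CMB, 3 K -> a few 1e-4 eV
```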

For a useful illustration of the universe's timeline, see here.



The Little Bang

There are several important differences between the conditions created at the LHC and the Big Bang.
  1. The LHC's main program is proton-proton collisions. There is no sensible way in which one could understand the conditions created in these particle collisions as a thermal density distribution. These are scattering experiments. (Though some of the data obtained in these experiments can have thermal characteristics, this does not mean the situation was indeed similar to the early universe.) The LHC will also have a heavy-ion program in which lead nuclei are collided with each other. In these circumstances it is more appropriate to speak of actually creating an intermediate state with a high density and energy density.


  2. However, in such heavy-ion collisions, the produced state of high density expands much more rapidly than would be the case in the early universe. Everything is over within the time span needed for light to cross a few diameters of the colliding lead nuclei, a few 10⁻²² seconds. In fact, the expansion is so rapid that it is not even clear from the outset whether one can expect any thermalization. In contrast, in the early universe the hadronization transition happens after about the first microsecond, and the Hubble expansion is so slow compared to the back and forth of the quarks and gluons that the early universe is guaranteed to be thermal. (Again, though some of the data obtained in heavy-ion experiments has thermal characteristics, this does not mean the situation was indeed similar to the early universe.)


  3. Also, in the early universe the expansion of the matter is due to the expansion of space itself. In the laboratory, it is the matter that expands in a background that is, to very good approximation, flat and static. Though this might not make a difference for the cooling of the matter, it is conceptually very different.


  4. The typical temperature that is created in heavy-ion collisions is some hundred MeV. That is about 20 orders of magnitude below the temperature we expect at the Big Bang.
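The "few 10⁻²² seconds" quoted in point 2 is just the light-crossing time of a lead nucleus; a one-liner to check (the nuclear diameter is an approximate input):

```python
c = 3.0e8          # speed of light in m/s
d_pb = 14e-15      # diameter of a lead nucleus, roughly 14 femtometres

t_cross = d_pb / c
print(f"{t_cross:.0e} s")   # ~5e-23 s per diameter, so a few diameters
                            # take a few 1e-22 s
```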



Bottomline

The LHC is not a Big Bang machine. It is more accurate to say that with the heavy-ion program at the LHC, we will be able to create conditions closer to those in the early universe than ever before. This sounds more boring, but at least it isn't blatantly wrong. Aside from this, it is more useful to think of the LHC as the world's largest microscope, one that will help us peer into the structure of elementary matter at a better resolution than ever before.



[1] For an extensive explanation of why it is implausible that the LHC will cause the end of the world, see: Black Holes at the LHC - The CERN Safety report, Black Holes at the LHC - again, and Black Holes at the LHC - What can happen?
[2] Please note that we are here talking about temperatures. The energy scales usually quoted for the LHC (14 TeV for pp and about 1150 TeV for Pb-Pb) are total center-of-mass energies, not temperatures.
[3] Since neutrinos decouple considerably earlier than photons, measurement of the cosmic neutrino background could allow us to look back further than the cosmic microwave background.



Thursday, July 03, 2008

The End Of Theory?

Chris Anderson, the editor in chief of Wired Magazine, last week wrote an article, which you can find at the Edge, proclaiming "The End of Theory."


Anderson claims that our progress in storing and analyzing large amounts of data makes the old-fashioned approach to science – hypothesize, model, test – obsolete. His argument is based on the possibility to analyze data statistically with increasing efficiency, for example online behavior: “Who knows why people do what they do? The point is they do it, and we can track and measure it with unprecedented fidelity. With enough data, the numbers speak for themselves.”

This, he seems to believe, makes models entirely unnecessary. He boldly extends his technologically enthusiastic future vision to encompass all of science:

“Consider physics: Newtonian models were crude approximations of the truth (wrong at the atomic level, but still useful). A hundred years ago, statistically based quantum mechanics offered a better picture — but quantum mechanics is yet another model, and as such it, too, is flawed, no doubt a caricature of a more complex underlying reality. The reason physics has drifted into theoretical speculation about n-dimensional grand unified models over the past few decades (the "beautiful story" phase of a discipline starved of data) is that we don't know how to run the experiments that would falsify the hypotheses — the energies are too high, the accelerators too expensive, and so on.

Now biology is heading in the same direction... ”

The examples he provides rely on statistical analysis of data. It doesn't seem to occur to him that this isn't all of science. It strikes me as necessary to actually point out that the reason we develop models is to understand. Fitting a collection of data is part of it, but we construct a model to gain insight and to make progress based on what we have learned. The point is to go beyond the range in which we have data.

If you collect petabytes over petabytes about human behavior or genomes and analyze them running ever more sophisticated codes, this is certainly useful. Increasingly better tools can indeed lead to progress in various areas of science, predominantly in those areas that are struggling with huge amounts of data and that will benefit a lot from pattern recognition and efficient data classification. But will you ever be able to see farther than others standing on the shoulders of a database?

If you had collected a googol of examples for stable hydrogen atoms, would this have led you to discover quantum mechanics, and all the achievements following from it? If you had collected data describing all the motions of stars and galaxies in minuscule detail, would this have led you to conclude space-time is a four-dimensional continuum? Would you ever have understood gravitational lensing? Would you ever have been able to conclude the universe is expanding from the data you gathered? You could have assembled the whole particle data booklet as a collection of cross-sections measured in experiments, and whatever you do within that range you could predict reasonably well. But would this have let you predict the Omega minus, the tau, the Higgs?

Anderson concludes

“The new availability of huge amounts of data, along with the statistical tools to crunch these numbers, offers a whole new way of understanding the world. Correlation supersedes causation, and science can advance even without coherent models, unified theories, or really any mechanistic explanation at all.”

With data analysis only, we might be able to discover hidden knowledge. But without models science cannot advance beyond the optimal use of available data – without models the frontiers of our knowledge are set by computing power, not by ingenuity. Making the crucial step to identify a basic principle and extend it beyond the current reach is (at least so far) an entirely human enterprise. The requirement that a model be not only coherent but also consistent is a strong guiding principle that has pointed us into the direction of progress during the last centuries. If Anderson’s “kind of thinking is poised to go mainstream,” as he writes, then we might indeed be reaching the end of theory. Yet, this end will have nothing to do with the scientific method becoming obsolete, but with a lack of understanding what science is all about to begin with.

PS: I wrote this while on the train, and now that I am back connected to the weird wild web I see that Sean Carroll wrote a comment earlier with the same flavor, as did Gordon Watts. John Horgan wrote about problem solving without understanding, and plenty of other people I don't know added their opinions. This immediate resonance indeed cheers me up. Maybe science will have a chance. Leaves me wondering whether writing articles that cross the line from provocation to nonsense is becoming fashionable.


See also: Models and Theories.

Saturday, June 07, 2008

Maps of Science

Interesting website visualizing scientific research fields based on a co-citation analysis of journals.




Unfortunately, one can't zoom into the fields; that would be really cool.

I came across the site via this paper
    Mapping the backbone of science
    By Kevin Boyack, Richard Klavans and Katy Boerner
    Scientometrics, Vol. 64, No. 3 (2005) 351-374

which is quite visionary in its aims. Here is a quote from their conclusions:

"The disciplinary map presented here is designed to support decision-making, e.g., the allocation of resources among/between disciplines. However, it also promotes the understanding and teaching of the general structure of science. Although it is a static map, and thus does not reveal how disciplines are born, evolve, or die, it is the broadest static map of science published to date, and thus constitutes another step forward in the study of the structure and evolution of science by scientific means.

Ultimately, maps of science could be based on a much broader set of data (such as scholarly journals, proceedings, patents, grants, and funding opportunities). Alternative units of analysis (clusters of journals, papers, authors, funding sources and/or text) could be generated to address different user needs. Instead of being static, dynamic maps could be generated that show high activity, scientific frontiers, and merging/splitting of scientific areas.

We believe that these global maps of science will enable researchers and practitioners to search for and benefit from results and expertise across scientific boundaries, counterbalancing the increasing fragmentation of science and the resulting duplication of work. These maps of science could also serve as a common data reference system for scholars from all disciplines - analogous to how geologists use the earth itself to index and retrieve data, documents, and expertise, or to how astronomers use astronomical coordinates. If such a reference system were to exist, all researchers could have a bird's eye view of the landscape of science, and could use this landscape to navigate to areas of interest, to communicate results, and to announce discoveries.

This global view - as opposed to doing keyword based searches on the Web or in digital libraries with very little information about the coverage of the queried database or the quality of the result - would give many more people access to scientific results. This, in turn, would lead to more informed citizens and a faster spread of results and practices benefiting all of humanity."

Monday, June 02, 2008

Scientist's Playground

Hey, I uploaded a video! The memory card of my digicam is full, so I downloaded what had collected on it, and found some movie clips I took during my recent visit in Germany. Stefan and I, we more or less stumbled across an interactive exhibition called Science on Tour, which I guess was designed with the purpose to convince children to study physics (and/or to make their parents regret they didn't).

Unfortunately, the English translations are missing on their websites, so apologies. Here are the experiments that I recognized:

Especially nice were the equal time curves, but they are not on the video.

Monday, May 19, 2008

Flying Over Mars

Next Sunday, on May 25, the Phoenix Mars Mission is supposed to land on Mars.

In the meantime, here is a fancy animation of a flight over the Columbia Hills on Mars, via the Astronomy Picture of the Day:



The animation, by Doug Ellison, Randolph Kirk (USGS), MSSS / MER / NASA, combines real topographical data from the Mars Reconnaissance Orbiter with information about the Spirit Mars Rover, making its appearance at 1:45 in the movie...

More about the Mars Exploration Rovers (MER) Spirit and Opportunity and Phoenix at the Planetary Society. The Planetary Society Weblog also has links to QuickTime versions of the movie.



Saturday, April 26, 2008

Spooky Action

Thursday I came across an article by Bruno Maddox on the website of Discover magazine. Maddox, author of the column 'Blinded by Science', writes about
In this article he describes a fascination, shared by many famous physicists, with interactions mediated over distances. Especially in cases when only the effect is accessible to our senses, it seems mysterious and spooky - the needle on the compass turning North, the moon orbiting around the earth. How do they know what to do? Maddox describes how he read "Electronics for Dummies" (by Gordon McComb and Earl Boysen) to tackle the mystery, and 71 days later comes to conclude
"as far as I can tell, nobody knows how a magnet can move a piece of metal without touching it. And for another—more astonishing still, perhaps—nobody seems to care."

Bizarre, I thought. What exactly does he mean by 'knowing'? Is this a philosophical question? I looked Maddox up on Wikipedia, and learned he is 'best known for his satirical magazine essays'. So maybe it's a joke, I wondered? Maddox continues that in the further pursuit of the topic he then read the 'Mathematics of Classical and Quantum Physics', from which he likely learned the term 'action at a distance', and that "virtual particles are composed entirely of math and exist solely to fill otherwise embarrassing gaps in physics". He eventually summarizes

"What I have learned, in other words, after 71 days of strenuous research, is that I and my fellow Dummies no longer have a seat, if we ever did, at the dinner table of science. If we’re going to find any satisfaction in this gloomy vale of misery and mystery, we’re going to have to take matters into our own hands and start again, from first principles."
I've honestly tried to figure out what he meant to say, but I just can't make sense out of it. You're all welcome to start again, and from first principles. But I think this article sheds a rather odd light on the status of theoretical physics. So here are some comments:

1. Electro and Magnetic

We have experimentally extremely well confirmed theories that allow us to describe electromagnetism to high precision, in the classical as well as in the quantum regime. Maybe that's not satisfactory for everybody. But at least I think the explanation that the electromagnetic interaction is mediated by something called the electromagnetic field is very satisfactory. After all, we are surrounded by electromagnetic waves all the time, and we use them quite efficiently to carry phone calls from here to there, or to maneuver satellites in outer space. To calculate the interaction between two macroscopic objects like a fridge magnet and the fridge, at least I wouldn't use perturbation theory of quantum electrodynamics, but good old Maxwell's equations.

"Electronics for Dummies" maybe isn't exactly the right book to read if you want to understand how electromagnetism works and how to understand the field concept. Since Maddox is concerned with magnets let me point out an often-occurring linguistic barrier: Electrodynamics is the theory of the electric and the magnetic interaction; as it turns out, both are just aspects of the same field, and parts of the same theory.

To use a well-known example, consider two resting electrons. You'd describe their field by the Coulomb interaction without magnetic component. Yet when you move relative to them, you'd assign to them a magnetic field, since moving charges cause magnetic fields. This is no disagreement; it just means that under a transformation from one rest frame to another the field components transform into each other. It was indeed this feature of Maxwell's equations that led Einstein to his theory of Special Relativity ("Zur Elektrodynamik bewegter Körper", Annalen der Physik, 17 (1905), p. 891–921).
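Concretely, this mixing of components is the standard textbook transformation: for a boost with velocity v along the x-axis, with $\gamma = 1/\sqrt{1-v^2/c^2}$,

```latex
\begin{aligned}
E'_x &= E_x, & E'_y &= \gamma\,(E_y - v B_z), & E'_z &= \gamma\,(E_z + v B_y),\\
B'_x &= B_x, & B'_y &= \gamma\!\left(B_y + \tfrac{v}{c^2} E_z\right), & B'_z &= \gamma\!\left(B_z - \tfrac{v}{c^2} E_y\right).
\end{aligned}
```

For the two resting electrons one has $\vec{B} = 0$, so the moving observer finds $\vec{B}' = -\gamma\, (\vec{v} \times \vec{E})/c^2$ in the transverse components: a magnetic field arising from a purely electric one.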

I didn't read "Electronics for Dummies", but browsing the index on Amazon it seems to contain what you'd think, namely what a transistor is and how you outfit your electronic bench. To understand the basics of theoretical physics I would maybe recommend instead


2. The Standard Model

The interaction between a fridge magnet and the fridge is a macroscopic phenomenon that involves a lot of atomic and condensed matter physics. Ferromagnetism is an interesting emergent feature, and there are probably still aspects that are not fully understood. The Standard Model of particle physics describes the fundamental interactions between elementary particles. Complaining it doesn't describe your fridge magnet is completely inappropriate, as said fridge magnet is hardly an elementary particle. You might as well say neuroscience doesn't describe the results of election polls.

See also my earlier posts on Models and Theories and Emergence and Reductionism.

3. Action at a Distance

Quantum mechanics has a spooky 'action at a distance', but of a completely different nature than the force between two magnets. In quantum mechanics there is no field that mediates it (at least nobody has ever measured one). Maybe even more importantly, this is an instantaneous 'action': the wave-function collapses non-locally. Very unappealing. That's why it's spooky (still). This well known problem of quantum mechanics however does not appear in classical electrodynamics, it comes in through the quantum mechanical measurement process.

Maxwell's theory that describes the electric and magnetic interaction is local. Interactions between charges are mediated by the fields. The interaction needs to propagate, it doesn't happen instantaneously. The same is true for General Relativity. Yes, Newton called it a great absurdity that "one body may act upon another at a distance through a vacuum without the mediation of anything else, by and through which their action and force may be conveyed from one to another". But this is because in Newtonian gravity interactions were instantaneous. If you'd change the earth's mass, the moon would immediately know about it. It took Einstein to remove this great absurdity, and he taught us that gravity is mediated by spacetime itself. It propagates locally; there is no spooky action at a distance.

To get a grip on Quantum Electrodynamics I'd recommend



4. Virtual Particles

And yes, virtual particles are mathematical constructs that appear within the perturbation series and are handy devices in Feynman diagrams. The use of this mathematical tool however has proved to be correct to high precision. The effects of the presence of these virtual contributions have been measured, the best known examples are probably the Lamb-shift or the Casimir Effect.

It is a general problem, which I have encountered myself, that popular science books use pictures, metaphors or generalized concepts to describe theories, and then the reader gets stuck with possibly inappropriate impressions that wouldn't occur if one had a derivation, and thus a possibility to understand the limitations of these verbal explanations. One can e.g. derive the interaction energy between two pointlike sources as being the Fourier transform of the propagator, the propagator being what also describes the virtual particle exchange in Feynman diagrams. This interaction energy for the photon propagator is just the Coulomb potential, as you'd expect. (If the exchange particle is massive, you get a Yukawa potential.) How seriously one should take the picture with the virtual particle is a different question though. The interaction between the fridge and the magnet is hardly a scattering process with asymptotically free in- and outgoing states.
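Explicitly (a standard textbook computation, not specific to Maddox's article): for an exchange particle of mass m and coupling g, the static interaction energy is the Fourier transform of the propagator $1/(\vec{k}^2 + m^2)$,

```latex
V(r) \;=\; -\,g^2 \int \frac{\mathrm{d}^3 k}{(2\pi)^3}\,
  \frac{e^{i \vec{k}\cdot\vec{r}}}{\vec{k}^2 + m^2}
\;=\; -\,\frac{g^2}{4\pi}\,\frac{e^{-m r}}{r},
```

which is the Yukawa potential; letting $m \to 0$ for the massless photon gives back the Coulomb potential $\propto 1/r$. (The overall sign depends on the charges and the spin of the exchanged particle.)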

I too like to ponder questions like what actually 'is' a particle, much like one can wonder what actually 'is' space-time. However, I admittedly fail to see what the point is of this rambling about "embarrassing gaps in physics" besides expressing the author's confusion about the books he read.

For an introduction into quantum field theory I recommend

(You can download the first chapter, which explains very nicely the relations between particles, fields, and forces, here.)

Bottomline

If you’re going to find "any satisfaction in this gloomy vale of misery and mystery", you’re going to have to take matters into your own hands and read the right books before abandoning the Standard Model.

PS: My husband lets me know he finds my writing very polite, and wants me to refer you to the Dunning-Kruger effect.



Tuesday, April 22, 2008

On the Emergence of Lies

lie -- pronunciation [lahy] noun,
verb: lied, ly·ing.

noun:
1. a false statement made with deliberate intent to deceive; an intentional untruth; a falsehood.
2. something intended or serving to convey a false impression.

3. an inaccurate or false statement.
4. the charge or accusation of lying.


verb:
5. to speak falsely or utter untruth knowingly, as with intent to deceive.
6. to express what is false; convey a false impression.

[Source: Dictionary.com]



I've been browsing recently through the references in the previously reviewed book "Complex Adaptive Systems" by Miller and Page about the use of agent based computational models for social interactions. While doing so, I came across a paper that I found quite interesting:

In this paper the authors examine the role that communication plays in the development of strategies. They use a very specific model, but the results they find have the potential to be more general. And since this is a blog, I want to speculate somewhat about it.


The Model

In the model examined in the paper the agents play the prisoner's dilemma. This is a fairly simple game, in which the players receive payoffs depending on whether they cooperate or defect. Wikipedia summarizes the classical prisoner's dilemma as follows:
    Two suspects are arrested by the police. The police have insufficient evidence for a conviction, and, having separated both prisoners, visit each of them to offer the same deal: if one testifies for the prosecution against the other and the other remains silent, the betrayer goes free and the silent accomplice receives the full 10-year sentence. If both remain silent, both prisoners are sentenced to only six months in jail for a minor charge. If each betrays the other, each receives a five-year sentence. Each prisoner must make the choice of whether to betray the other or to remain silent. Each one is assured that the other would not know about the betrayal before the end of the investigation. How should the prisoners act?

Or, in table form:
                             Prisoner B stays silent      Prisoner B betrays
    Prisoner A stays silent  Each serves 6 months         A: 10 years, B: goes free
    Prisoner A betrays       A: goes free, B: 10 years    Each serves 5 years


In this game, regardless of what the opponent chooses, each player always receives a higher payoff (lesser sentence) by betraying, and thus betraying is the strictly dominant strategy.
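The dominance argument can be checked mechanically. A minimal sketch of the payoff table above (sentences in years, lower is better):

```python
# Payoff table from above: (A's move, B's move) -> (A's years, B's years)
SENTENCE = {
    ("silent", "silent"): (0.5, 0.5),   # six months each
    ("silent", "betray"): (10, 0),
    ("betray", "silent"): (0, 10),
    ("betray", "betray"): (5, 5),
}

def best_response(b_move):
    """A's sentence-minimizing move against a fixed move by B."""
    return min(("silent", "betray"), key=lambda a: SENTENCE[(a, b_move)][0])

# Betraying is A's best reply whatever B does: a strictly dominant strategy.
assert best_response("silent") == "betray"
assert best_response("betray") == "betray"
```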

In the model examined in the paper, the players can now in addition exchange communication tokens, where one of the tokens signals that the player selected a move. Their exchange continues until either both players indicate they have made a decision, or until the communication exceeds some chat limit. The additional payoffs from this possibility are that a player who has not chosen a move before reaching the chat limit obtains a punishment (a negative payoff), while a player who picks a move but whose opponent fails to do so receives a payoff between that of mutual defection and mutual cooperation.

As far as I understand it, in each round each player plays with every other player. Payoffs are summed up, and then the players' strategies undergo a selection and mutation process, in which the best strategies have a survival advantage, plus some amount of randomness. And then the next round starts. I think for the following results it is crucial that overly long communication without outcome has a negative payoff; whether or not that has to be included in exactly this form I don't know. I would have thought e.g. that those players who talk too much just get to play less in each round, which would also amount to a disadvantage. Either way, interpret it as you wish; the point is that blahblah without outcome sucks.



The Results

So here is in a nutshell the result of running this model a lot of times; I summarized it in the figure below. It sketches the hypothesis the authors put forward to interpret the data that they have collected.


The rounded boxes indicate the dominant strategy, and the arrows are some learning processes.

Suppose we start at the top, in a world in which there is no communication and the players in the prisoner's dilemma thus mutually defect. There might be the occasional mutant that tries to communicate, but if the other player doesn't listen or doesn't understand, this doesn't have any effect. But worse than not having an effect: if the talkative players get trapped into chatting too much, they receive a punishment. The authors further point out that communicating players are more vulnerable to mutations, which, together with chatting too much being a disadvantage, reduces their survivability; this, they suspect, is why no communication and mutual defection is a stable strategy.

The situation changes if two players meet each other who communicate and understand each other. They can then choose to cooperate, receive a higher payoff, and have a survival advantage. This leads to a rather sudden increase in communication and cooperation.

However, this emergence of communication and cooperation sows the seeds of its own destruction: It doesn't take a large mutation from the cooperative players to those that pretend to cooperate, but then defect - which results in a higher payoff for them, to the disadvantage of the cooperative ones. Now one could suspect that the cooperative players will try to use some code to identify each other as being of the same type, and the others as being mimics. But whatever the code is they come up with, again it takes only a small mutation to turn it into a mimic.

This then leads to a lot of communication with decreasing cooperation. In the course of this the players will come to notice that talking without outcome is a disadvantage, so to improve the strategy it is beneficial to not talk at all. This leads to a gradual decay of communication back to the initial state.

These outbreaks of communication and cooperation with following decay are neither periodic, nor are the outbreaks of equal size.
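The cycle can be caricatured as a rock-paper-scissors-like payoff structure. The sketch below is not the paper's model; the strategies and payoff numbers are my own assumptions, chosen only so that each invasion step of the story above holds: defectors resist lone communicators, mimics invade communicators, and defectors invade mimics.

```python
# Row player's payoff per encounter (higher is better). Assumed values:
# "signal" talks and cooperates with other signalers; "mimic" talks,
# then defects; "defect" never talks. Wasted chat is penalized.
PAYOFF = {
    ("defect", "defect"): 1,
    ("defect", "signal"): 1,    # simply ignores the chatter
    ("defect", "mimic"):  1,
    ("signal", "defect"): 0,    # punished for talking to a wall
    ("signal", "signal"): 3,    # communication enables cooperation
    ("signal", "mimic"): -1,    # the sucker's payoff
    ("mimic",  "defect"): 0,    # also wastes its chat
    ("mimic",  "signal"): 5,    # temptation: fakes, then defects
    ("mimic",  "mimic"):  0.5,  # mutual defection minus chat cost
}

def invades(mutant, resident):
    """Can a rare mutant outscore residents in a resident-dominated world?"""
    return PAYOFF[(mutant, resident)] > PAYOFF[(resident, resident)]

assert not invades("signal", "defect")  # lone communicators die out
assert invades("mimic", "signal")       # liars exploit cooperators
assert invades("defect", "mimic")       # pointless chat decays away
```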



And We

Now I find this kind of interesting, as I think the development of sophisticated communication among humans, and the possibility to exchange information efficiently, is one of the most important evolutionary advantages. Of course the investigated model is a very simplistic one, and there is no good reason to believe it tells us something about beings playing such complex games as The-Game-Of-Human-Life. Maybe most importantly, in the examined case the players are not able to consider long-term effects of their actions, neither can they learn over the course of various cycles. But it's intriguing to speculate about the analogy.

I am completely convinced the amount of advertisement and commercials is an indicator for the certain decline of civilization. Above all other things it signals a culture of betrayal that we get more or less used to. Thus, we learn to some extent to mistrust information we receive. How many of the pills that you can buy on the internet will actually hold their promise? How many of these lotions will actually make you look younger? How much of what is 'guaranteed' is actually 'guaranteed'? How much of the stuff they try to convince you you can't live without is actually a completely unnecessary waste of resources?

Can you trust your used car dealer? Will the candidate hold his promises after the election? Do you believe what they write in the newspaper, or do they just sex up stories to obtain more attention, higher payoffs? (13-year-old boy corrects NASA! - Fact or Fiction?). Are these boobs real?

What can we do to deal with this emergence of deceit, originating in the larger individual advantage? Well, we make up laws to punish lies* that can lead to damage. And make up religions to scare those who lie. In this way, we essentially incorporate the long-term effects of our actions.

However, dishonesty for one's own advantage, and the resulting mistrust, is a serious political problem on the global scale that corrupts our efforts to address the challenges we are facing in the 21st century. Jeffrey Sachs said that very aptly:
    "Despite the vast stores of energy, including nonconventional fuels, solar power, geothermal power, nuclear power, and more, there is a pervasive fear of an imminent energy crisis resulting from the depletion of oil. The scramble of powerful countries to control Middle East oil or newly discovered reserves in other parts of the world, such as West Africa and the Arctic, has surely intensified, while investments in alternative and sustainable energy sources have been woefully insufficient. This is an example of a vicious cycle of distrust. The world could adopt a cooperative approach to develop sustainable energy supplies, with sustainability in the dual sense of low greenhouse gas emissions and long-term, low-cost availability. Alternatively, we can scramble for the depleting conventional gas and oil resources. The scramble, very much under way today, reduces global cooperation, spills over into violence and risks great power confrontations, and makes even more distant the good-faith cooperation to pool R&D investments to develop alternative fuels and alternative ways to use nonconventional fossil fuels.
    The Bush administration has been more consumed by the scramble rather than by cooperative global investments in a long-term future [...]"

~Jeffrey D. Sachs, in "Common Wealth", p. 45, typos and emphasis mine.

So, where are we in the diagram?


* I here refer to a lie as one made with the intention to deceive the communication partner to one's own advantage. In other instances, lies serve various social purposes, e.g. politeness, simplification, or covering a lack of knowledge.


See also: Communication

Saturday, April 19, 2008

Ninetynine-Ninetynine

"Just ninetynine-ninetynine!" is what they tell me every time I fail to switch the radio station fast enough, is what they print in the ads, is what they shout in the commercials.

When I was about six years old or so, I recall I asked my mom why all prices end with a ninetynine. Because they want you to believe it's cheaper than it is, I was told. If they print 1.99 it's actually 2, but they hope you'll be fooled and think it's "only" one-something.

I found that a good explanation when I was six, but twentyfive years later I wonder: if even six-year-olds know that, can it be a plausible reason? Why do stores keep doing that? Do they really think customers are that stupid? Or has it just become a convention?

Now coincidentally, I recently came across this paper

via Only Human. The study presented in this paper examines the influence of a given 'anchor' price on the 'adjusted' price that people believe to be the actual worth of an object, if the only thing they know is that the adjusted price is lower than the retail price. A typical question they used in experiments with graduate students sounds like this:

"Imagine that you have just earned your first paycheck as a highly paid executive. As a result, you want to reward yourself by buying a large-screen, high-definition plasma TV [...] If you were to guess the plasma TV’s actual cost to the retailer (i.e., how much the store bought it for), what would it be? Because this is your first purchase of a plasma TV, you have very little information with which to base your estimate. All you know is that it should cost less than the retail price of $5,000/$4,988/$5,012. Guess the product’s actual cost. This electronics store is known to offer a fair price [...]"

The question used one of the three anchor prices for different sample groups: a rounded anchor (here $5,000), a precise 'under anchor' slightly below the rounded anchor, and a precise 'over anchor' slightly above the rounded anchor. Now the interesting outcome of their experiment is that consistently people's guess for the adjusted price stayed closer to the anchor the higher the perceived precision of this price, i.e. the fewer zeros at the end. Here is a typical result for a beach house, the anchors in $, followed by the participants' mean estimate:

    Rounded anchor: 800,000
    Mean estimate: 751,867

    Precise under anchor: 799,800
    Mean estimate: 784,671

    Precise over anchor: 800,200
    Mean estimate: 778,264

What you see is that the rounded anchor results in an adjustment that is larger than the average adjustment observed with the precise anchors. Now you might wonder how many graduate students have much experience with buying beach houses, or plasma TVs for $5,000. But they used a whole set of similar questions, in which the measure to be estimated wasn't always a price, but possibly some other value like the protein value of a beverage. There even was a completely context-free question: "There is a number saved in a file on this computer. It is just slightly less than 10,000/9,989/10,011. Can you guess the number?". The results remain consistent: the more significant digits the anchor has, the less the adjustment. For the context-free question the mean estimate was 9,316 (rounded), 9,967 (precise under), 9,918 (precise over).
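Plugging in the quoted beach-house numbers makes the effect concrete (the data are from the results quoted above; the dictionary layout is just for illustration):

```python
# anchor price -> participants' mean estimate, from the quoted results
results = {
    "rounded (800,000)":       (800_000, 751_867),
    "precise under (799,800)": (799_800, 784_671),
    "precise over (800,200)":  (800_200, 778_264),
}

# how far, on average, people adjusted downward from each anchor
adjustment = {name: anchor - estimate
              for name, (anchor, estimate) in results.items()}

# the rounded anchor permits by far the largest downward adjustment
assert adjustment["rounded (800,000)"] == 48_133
assert all(adjustment["rounded (800,000)"] > v
           for k, v in adjustment.items() if k != "rounded (800,000)")
```

The rounded anchor is adjusted down by over $48,000, the precise anchors by only about $15,000 and $22,000.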

The paper further contains some other slightly different experiments with students to check other aspects, and it also contains an analysis of behavior in real estate sales. The authors looked at five years of real estate sales somewhere in Florida, and compared list prices with the actual sales prices of homes. They found that sellers who listed their homes more precisely (say $494,500 as opposed to $500,000) consistently got closer to their asking price. The buyers were less likely to negotiate the price down as far when they encountered a precise asking price.

I find this study kind of interesting, as it would indicate that the use of ninetynineing is to fake a precision that isn't there.

Bottomline: The more details are provided, the less likely people are to doubt the larger context.



Sunday, April 13, 2008

Emergence and Reductionism

My last week's post on 'Theories and Models' was actually meant to be about emergence and reductionism. While writing, however, I figured it would be better to first explain what I mean by a model, since my references to sex occasionally seem to confuse the one or the other reader.

Brief summary of last week's post: we want to describe the 'real world out there' by using a model that has explanatory power. The model itself captures some features of the world, it uses the framework of a theory, but should not be confused with the theory itself. I found it useful to think of this much like a function (the theory) acting on a set (some part of the real world out there) to give us a picture (the model).


The model describes some objects and the way they interact with each other (though the interaction can be trivial, or the system just static). To complete the model one usually needs initial conditions and some data as input (to determine parameters). In the following I will refer to the part of the real world out there that the model is supposed to describe as 'the system'.

To reiterate what I said last week: I don't care whether you like that use of words or not, it's just to clarify what I mean when I use them.


I. Emergence

Today's topic is partly inspired by the book on "Complex Adaptive Systems" I just finished reading (see my review here), and partly by Lee's lecture on "The Problem of Time in Quantum Gravity and Cosmology" from April 2nd (PIRSA 08040011 and 08040013). Please don't ask me what happened in the other 13 lectures because I wasn't there.

Hmmm... I missed the first ten minutes on April 2nd. After watching the video I can now reconstruct what was written on the blackboard before I came and what the not completely wiped-off words said. I feel a bit like a time-traveler who just closed the loop. Either way, here is a brief summary of min 11:38 to 20:24. Lee explains there's three types of emergence:
  1. Emergence in scale:
    In which a system described on larger scale has a property that it wouldn't have on smaller scales. As an example he mentions viscosity of fluids that isn't a property which makes sense for an atom, and the fitness of biological species that wouldn't make sense for molecules. "Atoms don't have gender but living things have gender."

  2. Emergence in contingency:
    In which a system develops a property only under certain circumstances. As an example he mentions the temperature dependence of superfluidity.


  3. Emergence in time:
    In which a system develops a property in time. As an example he mentions biological membranes, and that more than 3.8 billion years ago it wouldn't have made sense to speak of these.

As somebody in the audience also pointed out, these correspond to different order parameters one can change in a system (e.g. scale, temperature, or time).

I was somewhat confused by the distinction between these three cases, and not only because I don't know what the plural of emergence is. (It can't be emergencies, can it?) No, because I always understood emergence vaguely as a feature the whole has but its parts don't have. Not that I ever actually thought about it very much, but that would be an order parameter like the number of constituents and their composition - which may or may not fall under point one or two.

Part of my confusion arises because in practical circumstances it isn't always clear to me which of the three cases of 'emergence' one actually has at hand. For example, take the formation of atoms in the early universe. Is this an emergence in time? Or is this an emergence in contingency? After all, I would say it's the temperature that matters. Just that the temperature is related to the scale factor, which is a function of time. Also, in most experiments we change the contingent factors in time - like e.g. the cooling of the superfluid medium. So, the second and third cases seem to be very entangled. I think then I should understand the emergence of a system's properties in time as one taking place without being caused by a time-dependent change of the environmental conditions of the system. Like e.g. the emergence of emoticons in the written language ;-) or that of the red spot on Jupiter - cases in which it 'just' takes time.

Here is a nice example for patterns that I'd say emerge in time, an oscillating chemical reaction:



    [An example of a particularly pretty oscillating chemical reaction with emerging patterns. Unfortunately, the video description doesn't contain information about the chemicals used; instead it provides a very bizarre connection to migraine and 'stimulus points'. Either way, this sort of reaction is called a Belousov-Zhabotinsky reaction.]
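For readers who want to play with this: a minimal sketch (not from the video, which doesn't name its chemistry) of the Brusselator, Prigogine's textbook model of an oscillating chemical reaction of the Belousov-Zhabotinsky type. For B > 1 + A² the steady state (x, y) = (A, B/A) becomes unstable and the concentrations settle into a limit cycle instead of a static equilibrium. Parameter values and the crude Euler integration are arbitrary illustrative choices.

```python
# Brusselator reaction kinetics, integrated with a simple Euler scheme.
# x and y stand for the concentrations of two intermediate chemical species.

def brusselator(A=1.0, B=3.0, x=1.2, y=1.0, dt=1e-3, t_end=100.0):
    """Integrate dx/dt = A + x^2 y - (B+1) x, dy/dt = B x - x^2 y.

    Returns the trajectory of x, sampled every 10 steps.
    """
    xs = []
    for i in range(int(t_end / dt)):
        dx = A + x * x * y - (B + 1.0) * x
        dy = B * x - x * x * y
        x += dt * dx
        y += dt * dy
        if i % 10 == 0:
            xs.append(x)
    return xs

xs = brusselator()
# x never settles down; it keeps cycling around the unstable steady
# state x = A. This is the temporal cousin of the spatial patterns in
# the video (the same kinetics plus diffusion produces traveling waves).
```

With A=1 and B=3 the condition B > 1 + A² holds, so plotting `xs` shows sustained oscillations rather than decay to a fixed point.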



II. Strong and Weak Emergence

Okay, so after some back and forth I figured out why I was feeling somewhat uneasy with these three cases. Besides the fact that - as said above - distinguishing one case from the other is difficult in practical circumstances, in the second and third case I'd have said a property might be emergent in the sense that it 'arises' and becomes relevant, but it was present already in the setup of the model (and if not, you should come up with a better model). E.g. Bose-Einstein condensation was predicted to arise at low temperatures. Likewise, I'd say if one knows the initial conditions of a system and its evolution then one knows what will happen in time - it might turn out only later that emergent properties become noticeable and important, but it's a predictable emergence. Like e.g. stars that have formed out of collapsing matter or something like this.

Either way, to come back to my rather naive sense of 'emergence' by increasing the number of constituents: if you look at a part of some larger system, specifying or examining its properties just might not be enough to understand how the whole system will behave - it can just be an incomplete description. It can be that one needs further information, namely the interaction with other parts of the system. As an example take one of these photographic mosaics:


[Picture Credits: Andrea Planet, Click to enlarge.]

If you'd only look at one of the smaller photos you'd have no chance of ever 'predicting' that something will 'emerge' if you zoom out.
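This mosaic effect of coarse-graining is easy to imitate numerically. Here is a purely illustrative sketch (my own toy construction, not from the post): each fine-grained sample looks like noise, and only after averaging over blocks - the numerical analogue of zooming out - does the large-scale pattern become visible. The pattern, block size, and noise level are all arbitrary choices.

```python
# 'Emergence' under coarse-graining: a hidden large-scale pattern
# re-appears once we average away the small-scale fluctuations.
import random

random.seed(42)

pattern = [0, 5, 5, 0, 5, 0, 0, 5]  # the "big picture", invisible at small scales
block = 200                          # fine-grained samples per pattern cell

# Fine-grained data: pattern value plus noise that swamps any single sample.
fine = [pattern[i // block] + random.uniform(-3, 3)
        for i in range(len(pattern) * block)]

# Coarse-grain: average each block, then snap to the nearest pattern level.
coarse = [round(sum(fine[i:i + block]) / block / 5) * 5
          for i in range(0, len(fine), block)]

print(coarse)  # the original pattern 'emerges' again
```

Since the block averages suppress the noise by a factor of roughly the square root of the block size, `coarse` reproduces `pattern` essentially with certainty - a toy case of weak emergence: fully contained in the setup, just not visible tile by tile.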

After looking at the Wikipedia entry on Emergence I learned that this essentially is the difference between 'strong' and 'weak' emergence. At the risk of exposing my total ignorance of various words and names in that Wiki entry that I've never heard before and am presently not in the mood to follow up on, let me say that weak emergence is - at least "in principle" - already contained in a model you use to describe the system, and is thus at least "in principle" predictable, whereas strong emergence isn't.



III. Reductionism

If you want to go back to Lee's lecture, fast forward to min 37:00, where the topic of emergence and reductionism comes up again. Somebody in the audience (I believe it's Jonathan Hackett) asks (min 40:00): "Is there a phenomenon which is emergent which is not derivable and is not expected to ever be derivable from something else?" This is essentially the question whether strong emergence actually exists.

Let me paraphrase reductionism as the belief that a system can "in principle" be understood entirely by understanding its parts. Then the question of whether or not reductionism can "in principle" explain everything is the same question: does strong emergence actually exist? Or are all emergent features 'weakly' emergent, in that they are "in principle" predictable?

Now you might have noticed a lot of "in principles" in the previous paragraphs. I'd think that most physicists believe there is no strong emergence. At least I don't believe in it. As such, I do think reductionism does not discard any features. However, this belief is for practical purposes often irrelevant, since the models that we use, however sophisticated, are never complete descriptions of reality anyway. Even if you had a 'theory of everything', and there was no strong emergence, it wouldn't automatically provide a useful "model for everything". If we found the one fundamental theory of elementary matter it wouldn't describe all of science, for the same reason why specifying the properties of all atoms in a car doesn't help you figure out why the damned thing doesn't want to start. And I doubt we'll be able to derive the 'emergence' of, say, blog memes from QCD any time soon.


[Even if we had a Theory of Everything, it wouldn't give us a model for everything we want to describe. Cartoon XKCD.]

But besides these practical limitations that we encounter when making models that still have to be useful, there is the question whether it is possible to ever figure out if a system has the potential for the 'weak emergence' of a new property. Since it's impossible to rule out that something unpredictable will happen, I'd say we can never know all the 'non-relevant' factors or 'unknown unknowns', as Homer-Dixon put it in his book. For example, I'd say it is possible that tomorrow the vacuum expectation value of the Higgs flips to zero and that's the end of the world as we know it. Not that I am very concerned this will actually happen, but what the bleep do we know? Does anybody want to estimate the risk this happens and sue somebody over it, because we irresponsible physicists might have completely overlooked a lot of unknown unknowns? Just asking.

I'm not actually sure what Lee is saying later about Stuart Kauffman's view, since I didn't read any of Kauffman's books (got stuck in 'At Home in the Universe' somewhere around the history of the DNA or so). But I guess this argument points in the same direction: "What [Stuart] claims is that if you know all the properties that are relevant to compute the fitness function of all the species at some time, you do not know [...] enough to predict what will be the properties that will be relevant for the fitness function 100 million years later."

Thus, no matter whether there is some fundamental theory for everything or not, or whether strong emergence exists or not, we will be faced with systems in which features will unpredictably emerge. Like probably in the evolution of species on our planet, possibly in the climate, but hopefully not in the global economy.

Besides this, since it's impossible to prove that our inability to accurately make a prediction is due to the system and not due to the limitations of the human brain, the hypothesis that strong emergence doesn't exist is unfalsifiable (in other words: if you find an emergent feature you can't explain, you can't prove that it can never be explained within any model). So I think I'll leave this domain to philosophy.


IV. Summary

Properties of systems can emerge in various ways: by changes in scale, under certain conditions, or in time. One can distinguish between strong and weak emergence, where weakly emergent features are in principle predictable and strongly emergent ones aren't. However, this difference is a rather philosophical one, as all of our models are incomplete descriptions of the real world anyway, so there can always be 'strong emergence' simply because the description is incomplete. Further, there is little practical difference between a feature that is unpredictable in principle and one that is unpredictable in practice. Weak emergence is not in conflict with reductionism.

