Showing posts with label Sociology of Science. Show all posts

Monday, March 17, 2014

Do scientists deliberately use technical expressions so they cannot be understood?

Secret handshake?
Science or gibberish?
“[E]xisting pseudorandom and introspective approaches use pervasive algorithms to create compact symmetries. The development of interrupts would greatly amplify Byzantine fault tolerance. We construct a novel method for the investigation of online algorithms.”

“[T]he effective diminution of the relevant degrees of freedom in the ultraviolet (on which morally speaking all approaches agree) is interpreted as universality in the statistical physics sense in the vicinity of an ultraviolet renormalization group fixed point. The resulting picture of microscopic geometry is fractal-like with a local dimensionality of two.”
IEEE and Springer recently withdrew 120 papers that turned out to be randomly generated nonsense, and Schadenfreude spread among the critics of commercial academic publishing. The internet offers a wide variety of random text generators, including the one used to create the now-withdrawn Springer papers, called SciGen. The difficult part of creating random academic text is the grammar, not the vocabulary. If you start with a grammatically correct sentence, it is easy enough to fill in technical language.

Take as an example the sentence above:
“The difficult part of creating random text is the grammar, not the vocabulary.”
And just replace some nouns and adverbs:
“The difficult part of creating completely antisymmetric turbulence is the higher order correction, not the parametric resonance.”
Or maybe
“The difficult part of creating parametric turbulence is the completely antisymmetric resonance, not the higher order correction.”
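The swapping trick just demonstrated can be automated; here is a toy Python sketch, assuming nothing more than the template sentence above (the vocabulary lists are made up for illustration):

```python
import random

# Toy version of the trick described above: keep a grammatically
# correct template and fill its slots with technical-sounding words.
# The vocabulary lists below are invented for illustration.
TEMPLATE = ("The difficult part of creating {adj1} {noun1} is "
            "the {adj2} {noun2}, not the {adj3} {noun3}.")

ADJECTIVES = ["completely antisymmetric", "parametric",
              "higher order", "pseudorandom"]
NOUNS = ["turbulence", "resonance", "correction", "symmetry"]

def gibberish(rng=random):
    # Sample without replacement so no word appears twice
    a1, a2, a3 = rng.sample(ADJECTIVES, 3)
    n1, n2, n3 = rng.sample(NOUNS, 3)
    return TEMPLATE.format(adj1=a1, noun1=n1, adj2=a2,
                           noun2=n2, adj3=a3, noun3=n3)

print(gibberish())
```

The grammar is fixed by the template; only a reader who knows the terms can tell that the relations the sentence asserts are nonsense.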
Sounds very educated, yes? I have some practice with that ;o) The problem is that if you don’t know the technical terms, you can’t tell whether the relations implied by the grammar make sense. There is thus, not so surprisingly, a long history of cynics mocking academic writing for its narrow target group, and this cynicism spreads rapidly now that academic writing has become more widely available. With the open access movement, a background choir swells, chanting that availability isn’t the same as accessibility. Nicholas Kristof recently complained about academic writing in an NYT op-ed:
“[A]cademics seeking tenure must encode their insights into turgid prose. As a double protection against public consumption, this gobbledygook is then sometimes hidden in obscure journals — or published by university presses whose reputations for soporifics keep readers at a distance.”
Kristof calls upon academics to better communicate with the public, which I certainly support. At the same time, however, he also claims that professional language is unnecessary and deliberately exclusive:
“Ph.D. programs have fostered a culture that glorifies arcane unintelligibility while disdaining impact and audience. This culture of exclusivity is then transmitted to the next generation through the publish-or-perish tenure process.”
Let me take these two claims apart: first, that academic language is deliberately exclusive; second, that it is unnecessary.

Steve Fuller, who is a professor of Social Epistemology at the University of Warwick, argues (for example in his book “Knowledge Management Foundations”) that the value of knowledge is related to the scarcity of access to it. For that reason, academics have an incentive to put hurdles in the way of those wanting to get into the ivory tower, and to make entry more difficult than it has to be. It is a good argument, though it is hard to tell how much of this exclusivity is deliberate. At least when it comes to my colleagues in math and physics, the exclusivity seems more a matter of neglect than of intent. Inclusivity takes effort, and most academics don’t make this effort.

This brings me to the argument that academic slang is unnecessary. Unfortunately, this is a very common belief. For example, in reaction to my recent post about the tug-of-war between accuracy and popularity in science journalism, several journalists remarked that surely I must have meant precision rather than accuracy, because good journalism can be accurate even though it avoids technical language.

But no, I did in fact mean accuracy. If you don’t use the technical language, you’re not accurate. The whole raison d’ĂȘtre [entirely unnecessary French expression meaning “reason for existence”] of professional terminology is that it is the most accurate description available. And PhD programs don’t “glorify unintelligible gibberish”, they prepare students to communicate accurately and efficiently with their colleagues.

For physicists, the technical language is equations; the most important ones carry names. If you want to avoid naming the equation, you inevitably lose accuracy.

The second Friedmann equation, for example, does not just say the universe undergoes accelerated expansion with the present values of dark matter and dark energy, which is a typical “non-technical” description of this relation. The equation also tells you that you’re dealing with a differentiable, metric manifold of dimension 4 and Lorentzian signature and are within Einstein’s theory of general relativity. It tells you that you’ve made an assumption of homogeneity and isotropy. It tells you exactly how the acceleration relates to the matter content. And constraining the coupling constants for certain Lorentz-invariance violating operators of order 5 is not the same as testing “space-time graininess” or testing whether the universe is a computer simulation, to just name some examples.
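For reference, and with the caveat that conventions vary, a standard form of the equation in question reads (scale factor a, Newton’s constant G, energy density ρ, pressure p, cosmological constant Λ, units with c = 1):

```latex
% Second Friedmann (acceleration) equation, units with c = 1:
\frac{\ddot{a}}{a} = -\frac{4\pi G}{3}\left(\rho + 3p\right) + \frac{\Lambda}{3}
```

Writing it down already commits you to the assumptions listed above: the dots are time derivatives of the scale factor of a homogeneous and isotropic spacetime, and the specific combination of ρ and p on the right-hand side is dictated by general relativity.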

These details are both irrelevant and unintelligible for the average reader of a pop sci article, I agree. But, I insist, without these details the explanation is not accurate, and not useful for the professional.

Technical terminology is an extremely compressed code that carries a large amount of information for those who have learned to decipher it. It is used in academia because without compression nobody could write, let alone read, a paper. You’d have to attach megabytes worth of textbooks, lectures and seminars.

In science, most terms are cleanly defined, others have various definitions, and some, I admit, are just not well-defined. In the soft sciences, the situation is considerably worse. In many cases, trying to pin down the exact meaning of an -ism or -ology opens a bottomless pit of interpretations and who-said-whats dating back thousands of years. This is why my pet peeve is to discard soft science arguments as useless due to undefined terminology. However, one can’t really blame academics in these disciplines – they are doing the best they can, building castles on sand. But regardless of whether their terminology is as efficient as that of the hard sciences, it too is used for the sake of compression.

So no, academic slang is not unnecessary. But yes, academic language is exclusive as a consequence. In that respect it is no different from other professions. Just listen to your dentist and her assistant discuss their tools and glues, or look at some car-fanatics forum, and you’ll find the same exclusivity there. The difference is one of degree, and it lies in the amount of time you need to invest to become one of them, to learn their language.

Academic language is not purposefully designed to exclude others, but it arguably serves this purpose once in place. Pseudoscientists tend to underestimate just how obvious their lack of knowledge is. It often takes a scientist no more than a sentence to recognize an outsider as such. Are you able to tell the opening sentences of this blogpost from gibberish? Can you tell the snarxiv from the arxiv?

Indeed, in reality it is not the PhD that marks the science-insider from the outsider. The PhD defense is much like losing your virginity: vastly overrated. It looms big in your future, but once it’s in the past you note that nobody gives a shit. You mark your place in academia not by hanging a framed title on your office door, but by using the right words in the right places. Regardless of whether you have a PhD, you’ll have to demonstrate the knowledge equivalent of a PhD to become an insider. And there are no shortcuts to this.

For scientists this demarcation is of practical use because it saves them time. On the flipside, there is the occasional scientist who goes off the deep end and who then benefits from having learned the lingo to make nonsense sound sophisticated. However, compared to the prevalence of pseudoscience this is a rare problem.

Thus, while the exclusivity of academic language has beneficial side effects, technical expressions are not deliberately created for the purpose of excluding others. They emerge and get refined in the community as efficient communication channels. And efficient communication inside a discipline is simply not the same as efficient communication with other disciplines or with the public, a point Kristof entirely ignores in his op-ed. Academics are hired and paid for communicating with their colleagues, not with the public. That is the main reason academic writing is academic. There is probably no easy answer to why academia has come to make so little effort to communicate with the public. Quite possibly Fuller has a point there, in that scarcity of access protects the interests of the communities.

But leaving aside the question of where the problem originates, prima facie [yeah, I don’t only know French, but also Latin] the reason most academics are bad at communicating with the public is simple: They don’t care. Academia presently selects very strongly for single-minded obsession with research. Communicating with the public, whether about one’s own research or to chime in with opinions on science policy, is at best useless and at worst harmful to doing the job that pays their rent. For academics, accessibility and popularity do not convert into income, and even an NYT op-ed isn’t going to change anything about this. The academics you find in the public sphere are primarily those who stand to benefit from the limelight: directors and presidents of something spreading word about their institution, authors marketing their books, and a few lucky souls who found a way to make money with their skills and gigs. You do not find the average academic making an effort to avoid academic prose, because they have nothing to gain from it.

I’ve read many flowery words about how helpful science communication – writing for the public, public lectures, outreach events, and so on – can be to make oneself and one’s research known. Yes, can be, and anecdotally this has helped some people find good jobs. But this works out so rarely that on the average it is a bad investment of time. That academics are typically overworked and underpaid anyway doesn’t help. That’s not good, but that’s reality.

I certainly wish more academics would engage with the public and make that effort of converting academic slang to comprehensible English, but knowing how hard my colleagues work already, I can’t blame them for not doing so. So please stop complaining that academics do what they were hired to do and that they don’t work for free on what doesn’t feed their kids. If you want more science communication and less academic slang, put your money where your mouth is and pay those who make that effort.

The first of the examples at the top of this post is random nonsense generated with SciGen. The second example is from the introduction of the Living Review on Asymptotic Safety. Could you tell?

Sunday, January 19, 2014

Trouble in the Ivory Tower: Not an academic problem.

The Ivory Tower.
Image from The Neverending Story.
“Science is the only news,” Stewart Brand told us. And the news is that research misconduct is on the rise while reproducible results are in decline. Peer review, the process in which scientific publications are evaluated by anonymous peers, has become a farce as scientists’ existential worries make it an exercise in forward defense with the occasional backhand offense. Scientists produce more papers now than ever, and then hide them behind journal subscriptions so costly nobody can read them – a good idea, because most published research findings are probably false, though that too is probably false. Measures for scientific success have been criticized ever since they began being used, and the academic system chokes on social effects like herding, pluralistic ignorance, and groupthink.

Yes, science works, no need to call me names. But science doesn’t work as well as it could, not as well as it should, not as well as we need it to work.

Scientific institutions and scientific management are stuck in the last century. The academic system today is in no shape to cope with the demands of a highly connected, global, and growing workforce; it is unable to deal with complex trans-national and interdisciplinary problems, and it can’t handle the amplification of social feedback that information technology has brought.

The academic system, in brief, has the same problem as our political, social and economic systems.

The biggest challenge mankind faces today is not the development of some breakthrough technology. The biggest challenge is to create a society whose institutions integrate the knowledge that must precede any such technology, including knowledge about these institutions themselves. All of our big problems today speak of our failure, not to envision solutions, but to turn our ideas and knowledge into reality.

It’s not that we lack creativity. It’s that the kind of creativity that comes to us naturally does not latch onto problems that evolution didn’t equip us to register in the first place. We do not comprehend the interplay of large crowds of people, and we are unable to individually beat our own psychology, rooted in groups of tens to hundreds, not billions. To arrange our living together in groups larger than we can intuit, we agree on rules of conduct and incentives that align our individual actions with collective trends so that both are to our benefit. This requires systems design. It requires science. And before that, it requires that we acknowledge the problem.

But we watch. We watch with bewilderment as a video of sunrise is broadcast on Tiananmen Square, where thick smog forces onlookers to wear breathing masks. We watch with horrified fascination video footage of the big garbage swirl and of birds dying from indigestible plastic pieces. We watch, hypnotized, replays of negotiation failures that make our adaptation to climate change more costly by the day. The way we have arranged, organized, policed, and institutionalized our living together leaves us to watch ourselves watching, stunned at our own inability to change anything about it.

And scientists, the ones who should be able to analyze the situation and devise a solution, aren’t any better.

Scientists, of course, know exactly what is wrong with academia. They know how to solve the problem – leaving aside that no two of them can agree on how to do it. There’s no shortage of proposals for how to fix peer review and scientific publishing, and for how to better distribute resources. Futures markets, auction markets, lottery systems, open peer review, and dozens of alternative metrics have been suggested; we’ve seen it all. They write papers about it and send them for peer review. The rest is the same old he-said-she-said.

So far, scientists have miserably failed to adapt the academic system to the changing demands of the 21st century. They belabor the problem and devise solutions, but are unable to implement them. And in the ocean of conference proceedings, they watch the giant abstract swirl.

Academia mirrors the problem of our societies in a nutshell. The members of the academe are all talk but no walk. We are told that scientists are now studying the interconnectivity of the multi-layered networks that govern our societies, and we ask for answers and advice, we ask to be informed about how to solve our problems. There’s nobody else to solve these problems.

Social systems adapt to changing demands much like organisms do, by gradual modification and selection. But this process takes time – a lot of time – and it’s time we cannot afford. The only way to accelerate this adaptation is the scientific method: a targeted, controlled, and recorded series of modifications. Many existing projects today aim to track and analyze the complex interactions of our highly interwoven networked world. But not a single one of these projects addresses the real problem, which is how to use this knowledge in the very systems that are being studied. It is this feedback of knowledge about the system back into the system that is necessary for our institutions to adapt. It requires a self-consistent scientific approach to institutional design, an approach that doesn’t exist and is nowhere near existence.

We need scientists to help us create social systems that organize our living together in groups so large that our evolved brains, trained to deal with small groups, cannot cope. Trial and error will take too long, and the errors are too costly now. But scientists are like the overweight doctor preaching the benefits of blood-pressure regulation, evidently unable to solve their own problems first. They presently can’t help us solve any problems, and we shouldn’t listen to their advice until they’ve solved their own.

Science is the only news, but it’s not only news. It’s the canary in the coal mine. Better watch it closely.

Thursday, October 31, 2013

Science Marketing needs Consumer Feedback

It’s been a while since I read Marc Kuchner’s book “Marketing for Scientists”. I hated the book as I’ve rarely hated a book. I did not write a review then, because it was a gift from an undoubtedly well-meaning friend who reads my blog. As time passed though, I changed my mind. Let me explain.

Product advertising and marketing is the oil on the gears of our economies. Its original purpose is to inform customers about products and help them decide whether the products fit their needs. But marketing today isn’t only about selling a product, it’s also about selling a self-image. What we decide to spend money on tells others what we consider important and which groups we identify with.

In the quest to attract customers, advertisements often don’t contain a lot of information, and sometimes they bluntly lie. And so we have laws protecting us from these lies, though their efficiency differs greatly from one country to the next as Microsoft learned the hard way.

Everybody knows adverts make a product appear better than it is in reality: that microwave dinners never look like they do on the images, lotions won’t remove those eye bags, and ergonomic underwear will not make you run any faster. The point is not that advertisements bend reality, but that advertisements work regardless, just by drawing attention and by leaving brand names in our heads – names that we’ll recognize later. The more money a company can invest into good advertisement, the more likely they are to sell.

It isn’t so surprising that capitalistic thought is increasingly applied not only to the economy but also to academic research. Today, tax-funded scientists, far from being able to dig into the wonders of nature unbiased and led by nothing but their interests, are required to formulate 5 year plans and demonstrate a quantifiable impact of their work. And so scientists are now also expected to market themselves, their research and their institution.

Scientific knowledge however isn’t a product like a candy bar. A candy bar isn’t right or wrong, it’s right for you or wrong for you, and whether it’s right or wrong for you depends as much on you as on the candy bar. But the whole scientific process works towards the end of objective judgment, towards finding out whether a research finding should be kept or tossed. Scientific knowledge is eventually either right or wrong and academic research should be organized to make this judgment as efficiently as possible.

Marketing science is not helpful to this end for several reasons:
  • It puts at an advantage those who are either skilled at marketing or can afford help. This doesn’t necessarily say anything about the quality of their research. It’s not a useful selection criterion if what you are looking for is good science. Those who shout the loudest don’t necessarily sell the best fish.
  • Marketing of science advertises the product (research results), while what is actually for sale is the process (the scientist’s ability to do good research). It draws attention towards the wrong criteria.
  • It has a positive feedback loop that gradually worsens the problem. The more people advertise their work, the more others will feel the need to advertise theirs as well. This leads, as with advertisement of goods, to a decrease of objectivity and honesty, until it eventually nears blunt lies.
  • It takes time away from research, thus reducing efficiency.
In quantum gravity phenomenology, you will frequently see claims that something has been derived when in fact it wasn’t, or that something is a result when in fact it is an ad-hoc assumption. I am aware, of course, that such exaggerations are advertisements, made to convince the reader of the relevance of a research study. But they’re not helpful to the process of science, and even worse for science communication.

That’s why I hated Kuchner’s book. Not because his marketing advice is bad advice, but because he didn’t consider the consequences. If all researchers had Marc Kuchner’s “sell yourself” attitude, we’d end up with a community full of good advertisers, not full of good scientists. It’s a classic collective action problem: a situation in which we would all benefit from not doing something (advertising), but each individual would put themselves at a disadvantage by behaving differently (not advertising), and so we all continue to do it.

Here’s why I changed my mind.

Researchers market and advertise because they have to, owing to the very real pressure of the collective action problem. There are too many people and not enough funding. Marketing might not be a good factor to select for, but standing out for whatever reason puts you at an advantage. The more people know your name, the more likely they’ll read your paper or your CV, and that’s not a sufficient, but certainly a necessary, condition for survival in academia. And then there are people, like Kuchner, who make money with that survival pressure. Sad but true.

Yes, this is a bad development, but collective action problems are thorny. Complaining about it, I’ve come to conclude, will not solve the problem. But what we can do is work towards balance. What we need, then, is the equivalent of customer reviews and independent product tests – what we need is a culture that encourages feedback and criticism.

Unfortunately, feedback and criticism on other people’s work is presently not appreciated by the community. Criticism is typically voiced only on very popular topics, where even criticism of others’ work is advertisement of one’s own knowledge – think climate change, arsenic life, string theory. But it’s a very small fraction of researchers who spend time on this, and only on a small fraction of topics. It’s insufficient.

A recent Nature editorial notes that “Online discussion is an essential aspect of the post-publication review of findings” but
“In recent years, authors and readers have been able to post online comments about Nature papers on our site. Few bother. At the Public Library of Science, where the commenting system is more successful, only 10% of papers have comments, and most of those have only one.”
This really isn’t surprising. Few bother because in terms of career development it’s a waste of time.

In contrast to the futile attempt of preventing researchers from advertising themselves and their work, however, the balance can be improved by appreciating the work of those who provide constructive criticism. By noting the community benefit that comes from researchers who publicly comment on others’ publications, by inviting scientists to speak not only for their own original work but also for their criticism of other people’s work, and by not thinking of somebody who points out flaws as negative. Because that consumer feedback is the oil on the gears that we need to keep science running.

Sunday, August 25, 2013

Can we measure scientific success? Should we?

My new paper.
Measures for scientific success have become a hot topic in the community. Many scientists have spoken out in view of the increasingly widespread use of these measures. They largely agree that the attempt to quantify, even predict, scientific success is undesirable if not outright flawed. In this blog’s archive, you’ll find me banging the same drum too.

Scientific quality assessment, so the argument goes, can’t be left to software crunching data. An individual’s promise can’t be summarized in a number. Success can’t be predicted on past achievements, look at all the historical counterexamples. Already Einstein said. I’m sure he said something.

I’ve had a change of mind lately. I think science needs measures. Let me explain.

The problem with measures for scientific success has two aspects. One is that measures are used by people outside the community to rank institutions or even individuals, for justification and accountability. That’s problematic because it’s questionable whether this leads to smart research investments, but I don’t think it’s the root of the problem.

The aspect that concerns me more, and that I think is the root of all evil, is that any measure for success feeds back into the system and affects the way science is conducted. The measure will be adopted by the researchers themselves. Rather than defining success individually, scientists are then encouraged to work towards an external definition of scientific achievement. They will compare themselves and others on these artificially created scales. So even if a quantifiable marker of scientific output was once an indicator of success, its predictive power will inevitably change as scientists work specifically towards it. What was meant to be a measure instead becomes a goal.

This has already happened in several cases. The most obvious examples are the number of publications or the number of research grants obtained. On the average, both are plausibly correlated with scientific success. And yet a scientist who increases her paper output doesn’t necessarily increase the quality of her research, and employing more people to work on a certain project doesn’t necessarily mean its scientific relevance increases.

A correlation is not a causation. If Einstein didn’t say that he should have. And another truth that comes courtesy of my grandma is that too much of a good thing can be a bad thing. My daughter reminds me we’re not born with that wisdom. If sunlight falls on my screen and I close the blinds, she’ll declare that mommy is tired. Yesterday she poured a whole bottle of body lotion over herself.

Another example comes from Lee Smolin’s book “The Trouble with Physics”. Smolin argued that the number of single authored papers is a good indicator of a young researcher’s promise. He’s not alone in this belief. Most young researchers are very aware that a single authored paper will put a sparkle on their publication list. But maybe a researcher with many single authored papers is just a bad collaborator.

Simple measures, too simple measures, are being used in the community. And this use affects what researchers strive for, distracting them from their actual task of doing good research.

So, yes, I too dislike attempts to measure scientific success. But if we all agree that it stinks, why are we breathing the stink? Why are these measures used not only by funding agencies and other assessment ‘exercises’, but by scientists themselves?

Ask any scientist if they think the number of papers shows a candidate’s promise and they’ll probably say no. Ask if they think publications in high impact journals are indicators for scientific quality and they’ll probably say no. Look at what they do, and the length of the publication list and occurrence of high impact journals on that list is suddenly remarkably predictive of their opinion. And then somebody will ask for the h-index. The very reason that politically savvy researchers tune their score on these scales is that, sadly, it does matter. Analogies to natural selection are not coincidental. Both are examples of complex adaptive systems.

The reason for the widespread use of oversimplified measures is that they’ve become necessary. They stink, all right, but they’re the smallest evil among the options we presently have. They’re the least stinky option.

The world has changed, and the scientific community with it. Two decades ago you’d apply for jobs by carrying letters to the post office, grateful for the sponge so you wouldn’t have to lick all those stamps. Today you apply by uploading application documents within seconds all over the globe, and I’m not sure they still sell lickable stamps. This, together with increasing mobility and connectivity, has greatly inflated the number of places researchers apply to. And with that, the number of applications every place gets has skyrocketed.

Simplified measures are being used because it has become impossible to actually do the careful, individual assessment that everybody agrees would be optimal. And that has led me to think that instead of outright rejecting the idea of scientific measures, we have to accept them, improve them, and make them useful to our needs, not to those of bean counters.

Scientists, in hiring committees or on some funding agency’s review panel, have needs that presently just aren’t addressed by existing measures. Maybe one would like to know the overlap of a candidate’s research topics with those represented at a department? How often have they been named in acknowledgements? Do you share common collaborators? What administrative skills does the candidate bring? Is there somebody in my network who knows this person and could give me a firsthand assessment? Do they have experience with conference organization? What’s their h-index relative to the typical h-index in their field? What would you like to know?
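One of the questions above, the h-index relative to a field’s typical value, is easy to compute once citation counts are at hand. A minimal Python sketch (the field’s typical h-index is simply assumed to be known as input):

```python
def h_index(citations):
    # h-index: the largest h such that at least h papers
    # have at least h citations each
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(cites, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

def relative_h_index(citations, typical_field_h):
    # Normalize by a typical h-index for the field, which
    # here is just assumed to be given
    return h_index(citations) / typical_field_h

# Example: five papers with these citation counts give h = 4
print(h_index([10, 8, 5, 4, 3]))
```

The point is not this particular score, but that such field-relative indicators could be tailored to what a committee actually wants to know.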

You might complain that these are not measures for scientific quality, and that’s correct. But science is done by humans. These aren’t measures for scientific quality; they’re indicators for how well a candidate might fit an open position and a new environment. And that, in turn, is relevant for both their success and that of the institution.

Today, personal relations are highly relevant for successful applications. That is a criterion being used in the absence of better alternatives. We can improve on that by offering possibilities to quantify, for example, the vicinity of research areas. This can provide a fast way to identify interesting candidates whom one might not have heard of before.
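One crude way to quantify the vicinity of research areas would be set overlap between keyword lists, for instance the Jaccard index; the keyword lists here are hypothetical:

```python
def topic_overlap(topics_a, topics_b):
    # Jaccard similarity between two keyword sets:
    # size of intersection divided by size of union
    a, b = set(topics_a), set(topics_b)
    if not (a or b):
        return 0.0
    return len(a & b) / len(a | b)

# Hypothetical keyword lists for two researchers
mine = ["quantum gravity", "phenomenology", "Lorentz invariance"]
theirs = ["phenomenology", "cosmology", "Lorentz invariance"]
print(topic_overlap(mine, theirs))  # 2 shared of 4 distinct topics
```

Real matching would need normalized vocabularies, but even a crude score like this sorts candidates faster than scanning CVs by hand.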

And so I think “Can we measure scientific success?” is the wrong question to ask. We should ask instead which measures serve scientists in their profession. I’m aware that several alt-metrics are meanwhile on offer, but they don’t address the issue; they merely take into account more data sources to measure essentially the same thing.

That concerns the second aspect of the problem, the use of measures within the community. As far as the first aspect is concerned, the use of measures by accountants who are not scientists themselves: the reason they use certain measures for success or impact is that they believe scientists themselves regard them as useful. Administrators use these measures simply because they exist, and because scientists, lacking better alternatives, draw upon them to justify and account for their success or that of their institution. If you have argued that the value of your institute lies in the amount of papers produced or conferences held, in the number of visitors pushed through or distinguished furniture bought, you’ve contributed to that problem. Yes, I’m talking about you. Yes, I know not using these numbers would just make matters worse. That’s my point: They’re a bad option, but still the best available one.

So what to do?

Feedback in complex systems and network dynamics has been studied extensively during the last decade. Dirk Helbing recently had a very readable brief review in Nature (pdf here) and I’ve tried to extract some lessons from it.
  1. No universal measures.
    Nobody has a recipe for scientific success. Picking a single measure bears a great risk of failure. We need a variety so that the pool remains heterogeneous. There is a trend towards standardized measures because people love ordered lists. But we should have a large number of different performance indicators.
  2. Individualizable measures.
    It must be possible to individualize measures, so that they can take into account local and cultural differences as well as individual opinions and different purposes. You might want to give importance to the number of single-authored papers. I might want to give importance to science blogging. You might think patents are of central relevance. I might think a long-term vision is. Maybe your department needs somebody who is skilled in public outreach. Somebody once told me he wouldn’t hire a postdoc who doesn’t like Jazz. One size doesn’t fit all.
  3. Self-organized and network solutions
    Measures should take into account locations and connections in the various scientific networks, be they social networks, coauthor networks, or networks based on research topics. If you’re not familiar with somebody’s research, can you find somebody you trust to give you a frank assessment? Can I find a link to this person’s research plans?
  4. No measure is ever final.
    Since the use of measures feeds back into the system, they need to be constantly adapted and updated. This should be a design feature and not an afterthought.
Some time between Pythagoras and Feynman, scientists had to realize that it had become impossible to check the accuracy of all experimental and theoretical knowledge that their own work depended upon. Instead they adopted a distributed approach in which scientists rely on the judgment of specialists for topics in which they are not specialists themselves; they rely on the integrity of their colleagues and the shared goal of understanding nature.

If humans lived forever and were infinitely patient, then every scientist could track down and fact-check every detail that their work makes use of. But that’s not our reality. The use of measures to assess scientists and institutions represents a similar change towards a networked solution. Done the right way, I think measures can make science fairer and more efficient.

Monday, September 17, 2012

Research Areas and Social Identity

Last year, when I was giving the colloquium in Jyväskylä, my host introduced me as "leading the quantum gravity group at Nordita." I didn't object, since it's correct to the extent that I'm leading myself, more or less successfully. However, the clustering of physicists into groups with multiple persons is a quite interesting emergent feature of scientific communities. Quantum gravity, for example, is usually taken to mean quantum gravity excluding string theory, a nomenclature I complained about earlier.

In the literature on the sociology of science it is broadly acknowledged that scientists, like other professionals, naturally segregate into groups to accomplish what's called a "cognitive division of labor": an assignment of specialized tasks which allows the individual to perform at a much higher level than they could achieve if they had to know all about everything. Such a division of labor is often noticeable already on the family level (I do the tax return, you deal with the health insurance). Specialization into niches for the best use of resources can also be seen in ecosystems. It's a natural trend because it's a local optimization process: everybody digs a little deeper where they are and gets a little more.

The problem is of course that a naturally occurring trend might lead to a local optimum that's not a global optimum. In the case of scientific communities the problem is that knowledge which lies at the intersection of different areas of specialization is not, or not widely, known, and there is a potential barrier preventing the community from making better use of this knowledge. This is unfortunate, because information relevant to progress goes unused. (See for example P. Wilson, "Unused relevant information in research and development," Journal of the American Society for Information Science, 45(2), 192-203 (1995).)

So this is the rationale for why it's necessary to encourage scientists to look outside their box, at least on occasion. And that takes some effort, because they're in a local optimum and thus generally unwilling to change anything.

This brings me back, then, to the grouping of researchers. It does not seem to me very helpful for reaching a better global optimum. In fact, it seems to me that it instead makes the situation worse.

Social identity theory deals with the question of what effect it has to assign people to groups; a good review is Stryker and Burke, "The Past, Present, and Future of an Identity Theory", Social Psychology Quarterly, Vol. 63, No. 4 (Dec. 2000), pp. 284-297. This review summarizes studies that have shown that the mere act of categorizing people as group members changes their behavior: when assigned a group, even one that might not be meaningful, they favor people in the group over people outside the group and try to fit in. The explanation the researchers put forward is that "after being categorized of a group membership, individuals seek to achieve positive self-esteem by positively differentiating their ingroup from a comparison outgroup."

This leads me to think it cannot be helpful to knowledge discovery to assign researchers at an institute to a handful of groups. It is also very punched-paper in the age of social tagging.

A suggestion that I had thus put forward some years ago at PI was to get rid of the research groups altogether and instead allow researchers to choose keywords that serve as tags. These tags would contain the existing research areas, but also cover other interests, be that black holes, networks, holography, the arrow of time, dark matter, phase transitions, and so on. Then one could replace the groups on the website with a tag cloud. If you clicked on a keyword, you'd get a list of all people who've chosen this tag.

Imagine how useful this would be if you were considering applying. You could basically tell at one glance what people at the place are interested in. And if you started working there, it would be one click to find out who has similar interests. No more browsing through dozens of individual websites, half of which don't exist or were last updated in 1998.
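The mechanics of such a tag cloud are simple. A minimal sketch, with entirely hypothetical names and tags standing in for self-chosen keywords or indexed abstracts:

```python
from collections import defaultdict

# Hypothetical researcher -> tags data.
researchers = {
    "A. Smith": {"black holes", "holography"},
    "B. Jones": {"networks", "dark matter"},
    "C. Lee":   {"black holes", "phase transitions"},
}

# Invert to a tag -> researchers index: one click on a tag
# lists everyone who has chosen it.
tag_index = defaultdict(set)
for name, tags in researchers.items():
    for tag in tags:
        tag_index[tag].add(name)

# Tag-cloud weights are simply how many people picked each tag.
cloud = {tag: len(names) for tag, names in tag_index.items()}
print(sorted(tag_index["black holes"]))  # → ['A. Smith', 'C. Lee']
print(cloud["black holes"])              # → 2
```

The cloud rendering would scale each tag's font size by its weight; the data structure above is all the website actually needs.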

I was thinking about this recently because Stefan said that with better indexing of abstracts, which is on the way, it might even be possible in the not-so-far future to create such a tag cloud from researchers' publication lists. Which, with an author ID that lists institutions, could be mostly automatically assembled too.

This idea comes with a compatibility problem though, because most places hire applicants by group. So if one doesn't have groups, then the assignment of faculty to committees and of applicants to committees needs to be rethought. This requires a change in procedure, but it's manageable. And this change in procedure would have the benefit of making it much easier to identify emerging areas of research that would otherwise fit awkwardly neither here nor there. Which is the case right now with emergent gravity and analogue gravity, just to name an example.

I clearly think getting rid of institutional group structures would be beneficial to research. Alas, there's a potential barrier that's preventing us from making such a change, a classic example of a collective action problem. However, I am throwing this at you because I am sure this restructuring will come to us sooner or later. You read it here first :o)

Saturday, September 01, 2012

Questioning the Foundations

The submission deadline for this year’s FQXi essay contest on the question “Which of Our Basic Physical Assumptions Are Wrong?” has just passed. They got many thought-provoking contributions, which I encourage you to browse here.

The question was really difficult for me. Not because nothing came to my mind but because too much came to my mind! Throwing out the Heisenberg uncertainty principle, Lorentz-invariance, the positivity of gravitational mass, or the speed of light limit – been there, done that. And that’s only the stuff that I did publish...

At our 2010 conference, we had a discussion on the topic “What to sacrifice?”, addressing essentially the same question as the FQXi essay, though with a focus on quantum gravity. For everything from the equivalence principle through unitarity and locality to the existence of space and time, you can find somebody willing to sacrifice it for the sake of progress.

So what to pick? I finally settled on an essay arguing that the quantization postulate should be modified, and if you want to know more about this, go check it out on the FQXi website.

But let me tell you what was my runner-up.

“Physical assumption” is a rather vague expression. In the narrower sense you can understand it to mean an axiom of the theory, but in the broader sense it encompasses everything we use to propose a theory. I believe one of the reasons progress on finding a theory of quantum gravity has been slow is that we rely too heavily on mathematical consistency and pay too little attention to phenomenology. I simply doubt that mathematical consistency, combined with the requirement to reproduce the standard model and general relativity in the suitable limits, is sufficient to arrive at the right theory.

Many intelligent people spent decades developing approaches to quantum gravity, approaches which might turn out to have absolutely nothing to do with reality, even if they would reproduce the standard model. They pursue their research with the implicit assumption that the power of the human mind is sufficient to discover the right description of nature, though this is rarely explicitly spelled out. There is the “physical assumption” that the theoretical description of nature must be appealing and make sense to the human brain. We must be able to arrive at it by deepening our understanding of mathematics. Einstein and Dirac have shown us how to do it, arriving at the most amazing breakthroughs by mathematical deduction. It is tempting to conclude that they have shown the way, and we should follow in their footsteps.

But these examples have been exceedingly rare. Most of the history of physics instead has been incremental improvements guided by observation, often accompanied by periods of confusion and heated discussion. And Einstein and Dirac are not even good examples: Einstein was heavily guided by Michelson and Morley’s failure to detect the aether, and Dirac’s theory was preceded by a phenomenological model proposed by Goudsmit and Uhlenbeck to explain the anomalous Zeeman effect. Their model didn’t make much sense. But it explained the data. And it was later derived as a limit of the Dirac equation coupled to an electromagnetic field.

I think it is perfectly possible that there are different consistent ways to quantize gravity that reproduce the standard model. It also seems perfectly possible to me for example that string theory can be used to describe strongly coupled quantum field theory, and still not have anything to say about quantum gravity in our universe.

The only way to find out which theory describes the world we live in is to make contact with observation. Yet most of the effort in quantum gravity is still devoted to the development and better understanding of mathematical techniques. That is certainly not sufficient. It is also not necessary, as the Goudsmit and Uhlenbeck example illustrates: phenomenological models might not at first glance make much sense, and their consistency may become apparent only later.

Thus, the assumption that we should throw out is that mathematical consistency, richness, or elegance are good guides to the right theory. They are desirable of course. But neither necessary nor sufficient. Instead, we should devote more effort to phenomenological models to guide the development of the theory of quantum gravity.

In a nutshell that would have been the argument of my essay had I chosen this topic. I decided against it because it is arguably a little self-serving. I will also admit that while this is the lesson I draw from the history of physics, I, as I believe most of my colleagues, am biased towards mathematical elegance, and the equations named after Einstein and Dirac are the best examples for that.

Sunday, August 12, 2012

What is transformative research and why do we need it?

Since 2007, the US-American National Science Foundation (NSF) has had an explicit call for “transformative research” in its funding criteria. Transformative research, according to the NSF, is the type of research that can “radically change our understanding of an important existing scientific or engineering concept or educational practice or leads to the creation of a new paradigm or field of science, engineering, or education.” The European Research Council (ERC) calls it “frontier research” and explains that this frontier research is “at the forefront of creating new knowledge[. It] is an intrinsically risky endeavour that involves the pursuit of questions without regard for established disciplinary boundaries or national borders.”

The best way to understand this type of research is that it’s high risk with a potentially high payoff. It’s the type of blue-sky research that is very unlikely to be pursued in for-profit organizations because it might have no tangible outcome for decades. Since one doesn’t actually know whether some research has a high payoff before it’s been done, one had better call it “Potentially Transformative Research.”

Why do we need it?

If you think of science as a slow, incremental push on the boundaries of knowledge, then transformative research is a jump across the border in the hope of landing on safe ground. Most likely, you’ll jump and drown, or be eaten by dragons. But if you’re lucky and, let’s not forget about that, smart, you might discover a whole new field of science and noticeably redefine the boundaries of knowledge.

The difficulty is of course to find out whether the potential benefit justifies the risk. So there needs to be an assessment of both, and a weighing of them against each other.

Most of science is not transformative. Science is, by function, conservative. It conserves the accumulated knowledge and defends it. We need some transformative research to overcome this conservatism, otherwise we’ll get stuck. That’s why the NSF and ERC acknowledge the necessity of high-risk, high-payoff research.

But while it is clear that we need some of it, it’s not a priori clear we need more of it than we already have. Not all research should aspire to be transformative. How do we know we’re too conservative?

The only way to reliably know is to take lots of data over a long time and try to understand where the optimal balance lies. Unfortunately, the type of payoff that we’re talking about might take decades to centuries to appear, so that is, at present, not very feasible.

Lacking this, the only thing we can do is find a good argument for how to move towards the optimal balance.

One way you can do this is with measures for scientific success. I think this is the wrong approach. It’s like setting prices in a market economy by calculating them from the product’s properties and future plans. It’s not a good way to aggregate information, and there’s no reason to trust that whoever comes up with the formula for the success measure knows what they’re doing.

The other way is to enable a natural optimization process, much like the free market prices goods. Just that in science the goal isn’t to price goods but to distribute researchers over research projects. How many people should optimally work on which research so their skills are used efficiently and progress is as fast as possible? Most scientists have the aspiration to make good use of their skills and to contribute to progress, so the only thing we need to do is to let them follow their interests.

Yes, that’s right. I’m saying the best we can do is trust the experts to find out themselves where their skills are of best use. Of course one needs to provide a useful infrastructure for this to work. Note that this does not mean everybody necessarily works on the topic they’re most interested in, because the more people work on a topic the smaller the chances become that there are significant discoveries for each of them to be made.

The tragedy is of course that this is nowhere like science is organized today. Scientists are not free to choose on which problem to use their skills. Instead, they are subject to all sorts of pressures which prevent the optimal distribution of researchers over projects.

The most obvious pressures are financial and time pressure. Short-term contracts put a large incentive on short-term thinking. Another problem is the difficulty for researchers to change topics, which has the effect that there is a large (generational) time-lag in the population of research fields. Both of these problems cause a trend towards conservative rather than transformative research. Worse: they cause a trend towards conservative rather than transformative thinking and, by selection, a too-small ratio of transformative to conservative researchers. This is why we have reason to believe the fraction of transformative research and researchers is presently smaller than optimal.

How can we support potentially transformative research?

The right way to solve this problem is to reduce external pressure on researchers and to ensure the system can self-optimize efficiently. But this is difficult to realize. If that is not possible, one can still try to promote transformative research by other means in the hope of coming closer to the optimal balance. How can one do this?

The first thing that comes to mind is to write transformative research explicitly into the goals of the funding agencies, to encourage researchers to propose such projects, and peers to review them favorably. This will most likely not work very well, because it doesn’t change anything about the too-conservative communities. If you randomly sample a peer review group for a project, you’re more likely to get conservative opinions simply because they’re more common. As a result, transformative research projects are unlikely to be reviewed favorably. It doesn’t matter if you tell people that transformative research is desirable, because they still have to evaluate whether the high risk justifies the potential high payoff. And the assessment of tolerable risk is subjective.

So what can be done?

One thing that can be done is to take a very small sample of reviewers, because the smaller the sample the larger the chance of a statistical fluctuation. Unfortunately, this also increases the risk that nonsense will go through because the reviewers just weren’t in the mood to actually read the proposal. The other thing you can do is to pre-select researchers so you have a subsample with a higher ratio of transformative to conservative researchers.
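The effect of panel size can be made concrete with a toy model: suppose a fixed fraction of reviewers, say 20%, would be favorable to a given transformative proposal, and a panel approves it when a majority of its members are favorable. The chance of a "lucky" majority shrinks quickly as the panel grows. This is a sketch under exactly those assumptions, not a model of any real agency's process:

```python
from math import comb

def approval_prob(n, p=0.2):
    """Probability that a panel of n independent reviewers, each favorable
    with probability p, yields a favorable majority (more than n/2 votes)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

for n in (1, 3, 9, 25):
    print(n, round(approval_prob(n), 4))
# A single reviewer approves 20% of the time, a 3-person panel about 10%,
# a 9-person panel only about 2% of the time.
```

So shrinking the panel really does raise the odds for minority-taste proposals, but by the same mechanism it raises the odds for proposals that nobody would approve on careful reading.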

This is essentially what FQXi is doing. And, in their research area, they’re actually doing remarkably well. That is to say, if I look at the projects that they fund, I think most of them won’t lead anywhere. And that’s how it should be. On the downside, it’s all short-term projects. The NSF is also trying to exploit preselection in a different form in their new EAGER and CREATIV funding mechanisms, which are not assessed by peers at all but exclusively by NSF staff. In this case the NSF staff is the preselected group. However, I am afraid that the group might be too small to be able to accurately assess the scientific risk. Time will tell.

Putting a focus on transformative research is very difficult for institutions with a local presence. That’s because when it comes to hiring colleagues you have to get along with, people naturally tend to select those who fit in, both in type of research and in type of personality. This isn’t necessarily a bad thing, as it benefits collaborations, but it can promote homogeneity and lead to “more of the same” research. It takes a constant effort to avoid this trend. It also takes courage and a long-term vision to go for the high-risk, high-payoff research(er), and not many institutions can afford this courage. So here again is the financial pressure that hinders leaps of progress, just for lack of institutional funding.

It doesn’t help that during the last weeks I had to read that my colleagues in basic research in Canada, the UK and also the USA are looking forward to severe budget cuts:

“Of paramount concern for basic scientists [in Canada] is the elimination of the Can$25-million (US$24.6-million) RTI, administered by the Natural Sciences and Engineering Research Council of Canada (NSERC), which funds equipment purchases of Can$7,000–150,000. An accompanying Can$36-million Major Resources Support Program, which funds operations at dozens of experimental-research facilities, will also be axed.” [Source: Nature]
“Hanging over the effective decrease in support proposed by the House of Representatives last week is the ‘sequester’, a pre-programmed budget cut that research advocates say would starve US science-funding agencies.” [Source: Nature]
“[The] Engineering and Physical Sciences Research Council (EPSRC) [is] the government body that holds the biggest public purse for physics, mathematics and engineering research in the United Kingdom. Facing a growing cash squeeze and pressure from the government to demonstrate the economic benefits of research, in 2009 the council's chief executive, David Delpy, embarked on a series of controversial reforms… The changes incensed many physical scientists, who protested that the policy to blacklist grant applicants was draconian. They complained that the EPSRC's decision to exert more control over the fields it funds risked sidelining peer review and would favour short-term, applied research over curiosity-driven, blue-skies work in a way that would be detrimental to British science.” [Source: Nature]
So now more than ever we should make sure that investments in basic research are used efficiently. And one of the most promising ways to do this is presently to enable more potentially transformative research.

Wednesday, October 13, 2010

test* the hypothes*

I recently came across a study in the sociology of science and have been wondering how to interpret the results:
    Do Pressures to Publish Increase Scientists' Bias? An Empirical Support from US States Data
    By Daniele Fanelli
    PLoS ONE 5(4): e10271.
There are many previous studies showing that papers are more likely to get published and cited if they report "positive results." Fanelli has now found a correlation between the likelihood of reporting positive results and the total number of papers published, in a sample of papers with a corresponding author in the USA, published in the years 2000-2007, across all disciplines. The papers were sampled by searching the Essential Science Indicators database with the query "test* the hypothes*", and the sample was then separated into positive and negative results by individual examination (both by the author and by an assistant). The result was as follows:
In a random sample of 1316 papers that declared to have “tested a hypothesis” in all disciplines, outcomes could be significantly predicted by knowing the addresses of the corresponding authors: those based in US states where researchers publish more papers per capita were significantly more likely to report positive results, independently of their discipline, methodology and research expenditure... [T]hese results support the hypothesis that competitive academic environments increase not only the productivity of researchers, but also their bias against “negative” results.

When I read that, I was somewhat surprised about the conclusion. Sure, such a result would "support" the named hypothesis in the sense that it doesn't contradict it. But it seems to me like jumping to conclusions. How many other hypotheses can you come up with that are also supported by the results? I'll admit that I hadn't even read the whole paper when I made up the following ones:
  • Authors who publish negative results are sad and depressed people and generally less productive.

  • A scientist who finds a negative result wants more evidence to convince himself his original hypothesis was wrong; thus the study takes longer and, in toto, fewer papers are published.

  • Stefan suggested that the folks who publish more papers are of the sort who hand out a dozen shallow hypotheses to their students to be tested, hypotheses which are likely to be confirmed. (Stefan used the, unfortunately untranslatable, German expression "Dünnbrettbohrer," which means literally "thin board driller.")

After I had read the paper, it turned out Fanelli had something to say about Stefan's alternative hypothesis. Before I come to that, however, I have to say that I have an issue with the word "positive result." Fanelli writes that he uses the term to "indicate all results that support the experimental hypothesis." That doesn't make a lot of sense to me, as one could simply negate the hypothesis and find a positive result. If it were that easy to circumvent a summary of one's research results that is more difficult to publish and less likely to be cited, nobody would ever publish a result that's "negative" in that sense. I think that in most cases a positive result should be understood as one that confirms a hypothesis that "finds something" (say, an effect or a correlation) rather than one that "finds nothing" (we've generated/analyzed loads of data and found noise). I would agree that this isn't well-defined, but I think in most cases there would be broad agreement on what "finds something" means, and a negation of the hypothesis wouldn't make the reader buy it as a "positive result." (Here is a counter-example.) The problem is then of course that studies which "find nothing" are equally important as the ones that "find something," so the question of whether there's a bias in which ones get published matters.

Sticking with his own interpretation, Fanelli considers that researchers who come to a positive result, and in that sense show themselves correct, are just the smarter ones, who are also more productive. He further assumes that the more productive ones are more likely to be found at elite institutions. With his own interpretation this alternative hypothesis doesn't make a lot of sense, because by the time the paper goes out, who knows what the original hypothesis was anyway? You don't need to be particularly smart to just reformulate it. That reformulation, however, doesn't turn a non-effect into an effect, so let us instead consider my interpretation of "positive result." Fanelli argues that the explanation that people smart enough to do an experiment where something is to be found are also the ones who publish more papers generally doesn't explain the correlation, for two reasons: First, since he assumes these people will be at elite institutions, there should be a correlation with R&D expenditure, which he didn't find. Second, because this explanation alone (without any bias) would mean that in states where 95% - 100% of published results were positive, the smart researchers hardly ever misjudged in advance the outcome of an experiment, and the experiment was always such that the result was statistically significant, even though other studies have shown that this is not generally the case.

To the alternative hypothesis that Stefan suggested, Fanelli writes:
A possibility that needs to be considered in all regression analyses is whether the cause-effect relationship could be reversed: could some states be more productive precisely because their researchers tend to do many cheap and non-explorative studies (i.e. many simple experiments that test relatively trivial hypotheses)? This appears unlikely, because it would contradict the observation that the most productive institutions are also the more prestigious, and therefore the ones where the most important research tends to be done.
Note that he is first speaking about "states" (which is what actually went into his study) and later about "institutions." Is it indeed the case that the more productive states (that would be DC, AZ, MD, CA, IL) are also the ones where the most important research is done? It's not that I entirely disagree with this argument, but I don't think it's particularly convincing without clarifying what "most important research" means. Is it maybe research that is well cited? And didn't we learn earlier that positive results tend to get cited more? Seems a little circular, doesn't it?

In the end, I wasn't really convinced by Fanelli's argument that the correlation he finds is a result of systematic bias, though it does sound plausible, and he did verify his own hypothesis.

Let me then remark on the sample he used. While Fanelli has good arguments that the sample is representative of the US states, it is not clear to me that it is in addition also representative of "all disciplines." The term "test the hypothesis" might just be more commonly used in some fields, e.g. medicine, than in others, e.g. physics. The thing is that in physics, what is actually a negative result often comes in the form of a bound on some parameter or a higher-precision confirmation of some theory. Think of experiments that are "testing the hypothesis" that Lorentz-invariance is broken. There's an abundance of papers that do nothing but report negative results and more negative results (no effect, nothing new, Lorentz-invariance still alive). Yet I doubt these papers would have shown up in the keyword search, simply because the exact phrase is rarely used. More commonly it would be formulated as "constraining parameters for deviations from Lorentz-invariance" or something similar.
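To see how sensitive such a keyword sample is to phrasing, here is a rough regex stand-in for the wildcard query (assuming the database wildcards match any word ending, which is an assumption about ESI's syntax): it catches the stock phrase but misses the way an equivalent negative result is typically worded in physics. The titles are made up for illustration:

```python
import re

# Approximation of the ESI wildcard query "test* the hypothes*".
pattern = re.compile(r"\btest\w*\s+the\s+hypothes\w*", re.IGNORECASE)

titles = [
    "We tested the hypothesis that competition increases bias.",
    "This experiment tests the hypotheses of both models.",
    "Constraining parameters for deviations from Lorentz-invariance.",
]
for t in titles:
    print(bool(pattern.search(t)), t)
# → True, True, False: the physics-style phrasing never enters the sample.
```

Any conclusion about "all disciplines" inherits whatever phrasing bias the query has.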

That is not to say, however, that I think there's no bias for positive results in physics. There almost certainly is one, though I suspect you find more of it in theoretical than in experimental physics, and the phrase "testing the hypothesis" again would probably not be used. The thing is that I suspect a great many attempts to come up with an explanation or a model that fails when confronted with the data never get published. And if they do, it's highly plausible that these papers don't get cited very much, because it's unlikely many people will invest further time into a model that was already shown not to work. However, I would argue that such papers should have their own place. As it is, it very likely happens that many people try the same ideas and all find them to fail. They could save time and effort if the failure were explained and documented once and for all. So I'd be all in favor of a journal for "models that didn't work."

Thursday, October 08, 2009

Intellectual Elitism? You get what you give.

The other day I found out Steve Fuller has joined the bloggers! Fuller is a professor of sociology at the University of Warwick, UK, and probably one of the most prominent figures in the realm of the sociology of science, knowledge discovery and management, and something called "social epistemology" (I apologize for my intellectual insufficiency, but I have no clue what that is). His blog is called "Making the university safe for intellectual life in the 21st century."

Steve Fuller was one of the participants of our last year's conference "Science in the 21st Century," though he could unfortunately only take part via a video link, due to prior commitments. (Worse than that, the recording failed. Shame on the IT staff.) I had been expecting a charming British accent, but as Wikipedia tells us Fuller is actually from the East Coast. He has also been among the advocates of an initiative called "Academics for Academic Freedom," fighting for the right of academics to offend. You see, he has some experience with offending people. Blogging is also an excellent tool to that end.

The reason I'm telling you this:

a) I think the man could use some traffic to his blog, so go and give him a welcome to the blogosphere, and keep an eye on his writing. Or go ask him what "social epistemology" is.

b) He had a post today that briefly touched upon the question of how to measure scientific success (output/impact/whatever), triggered by a comment in this week's THE by Adam Corner, claiming that the "desire to see research prised away from pragmatic objectives risks a return to intellectual elitism."

"Intellectual elitism" is one of these words that I find offensive. It is frequently used to express the conviction that academics, if there indeed were such a thing as "academic freedom," would not care whether their work was of any use to the society they live in. They would just levitate above the clouds and waste the taxpayer's money. Thus, so the argument goes, they need to be forced to produce useful outcomes, call it "pragmatic objectives." And "useful" needs to be quantified by a metric, ideally an economic one, but in any case something you can put into Excel. You see, if it doesn't end up as something you can buy at Walmart, then what's all the research good for?

Everybody who has ever had any actual contact with researchers in academia knows that this picture of the academic mindset is completely wrong. Academics do not feel more or less responsibility to contribute to the social good than any other part of our societies. In fact, the more "basic" their research is, and thus the more detached from the average person's day-to-day life, the more painfully aware they are that their work is not of immediate use, and that there is a high risk it will never be of use. That is not a pleasant position to be in. You wouldn't believe how often I have talked to friends and colleagues about this. Some leave academia because they want their work to be of more immediate use. And for those who stay, it is a question that keeps coming back.

If you need an example, read what Daniel over at Cosmic Variance wrote just yesterday, when he was comparing his work as a physicist to that of economists. He "confess[es] to a certain amount of envy" because, unlike theoretical physics, "what economists do and say really matters, in an immediate and tangible way."

That is not to say "intellectual elitism" doesn't exist. It surely does. But it is caused by, rather than being a reason for, social detachment. The "elitism" you see, hear, and frequently criticize on this blog is no more than a forward defense that is amplified by exactly that criticism. It is a difficult job to work in basic research: non-profit, a very long-term investment by your society, a job that carries a high risk that nothing of what you do will ever be good for anything.

In addition, most of them have to live in an atmosphere where academic research is over and over again discredited as a waste of money. Long-term investments are never easy to justify in politics. It is even worse if your research is hard to communicate to the general public. As a consequence, researchers start telling themselves and everybody else that they are special, and form communities that are to some extent exclusive, to enhance their group identity. They might choose to engage in public outreach to better embed their research in our societies and offer their knowledge - to make themselves more useful. And they make jokes about their own irrelevance, as Daniel does when he ends by saying "maybe I’d rather not have to worry about destroying Iceland while looking for a bug in my code."

But the fact is, those working in academic research are special. "Elitism" isn't a good word though; maybe one should call it "expertism." Academic research differs from other jobs in many ways, but it is certainly not the only job where people feel special. Politicians, I guess, suffer from a particularly difficult sort of "specialness." Policemen do too. Look at any job-related community and you'll find some in-group behavior, some commonly shared ideals that people are proud of. Serving the public. Safe neighborhoods. What do academics have to be proud of other than their intellectualism?

The bottom line is that "intellectual elitism" is nothing but a word used to justify limiting academic freedom, or to express anger about not being part of the "elite" community. But the "elitism" that you see is merely a defense by people in a socially difficult position, who have to cope not only with the knowledge that their work and life may eventually turn out to be completely useless, but also with constant public criticism. You get what you give.

    "This whole damn world can fall apart
    You'll be ok, follow your heart"
    ~ New Radicals, You get what you give

Sunday, October 04, 2009

Soundbites from the Atlanta Conference

As previously mentioned, I spent the last few days at the Atlanta Conference on Science and Innovation Policy. To apply for a talk, one had to submit a paper back in February. At that time it wasn't clear to me that I'd be living in Sweden when the conference actually took place. Flying from Europe to America for only a few days is very exhausting, plus I had to cover the expenses myself. But I am glad I went. The conference was very interesting, I learned a lot, and I got useful feedback on my talk.

It was a very interdisciplinary meeting with a diverse audience. People came from engineering, economics, and psychology, from various disciplines of the social sciences, from institutions outside academia, and from funding agencies. And then there was the theoretical physicist. Participants came from all over the world, many from developing countries, since science policy in the developing world was one of the topics on the program. I spoke to a PhD student who studies science policies in Argentina and Chile, especially their telescope programs, and talked to some people about Neil Turok's initiative AIMS. I learned about a project called ARGO, a truly international project that maintains an array of floats to measure water temperature and salinity. I've also never been at a conference where the share of women was so close to one half.

It is pretty much impossible to summarize the conference. You will get a good impression of the topics if you look at the lists of talks and abstracts, which you can find here. What I found somewhat annoying was the high number of parallel sessions, up to seven, which meant that no matter what, you'd miss several of the talks you wanted to hear.

The organizers also tried out a new kind of session called "roundtables," which, for all I can tell, worked badly. They took place in one room with, you guessed it, round tables, with about eight seats each and a different topic for each table. While this created a nice atmosphere for discussion, the catch was that the people leading the discussion (at least at the tables I was at) had applied for a talk and only learned two days earlier that they were supposed to lead a round table instead. As a result, they just printed and handed out their presentation or, worse, put their laptop on the table and pointed at it. That sort of format might have some potential though, if the topics for discussion are chosen differently. It very efficiently bridges the gap between the speaker and the audience.

I went to a couple of talks on collaboration and coauthorship networks, both in the scientific community and for patents, investigating how these networks change over time and how new fields or collaborations develop. These studies in the area of scientometrics belong to the fields at the intersection of the social, natural and computer sciences that have only become possible within the last decade, because before that the data wasn't available or couldn't be handled. I find them tremendously interesting, as they tell us a lot about the process of knowledge discovery, with the prospect of better understanding which conditions are beneficial and which are counterproductive. Needless to say, funding agencies have a certain interest in this research, and in fact some of these studies were commissioned by the NSF.
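The basic construction behind such studies is simple: every paper contributes a link between each pair of its authors, and by filtering on publication year one can watch the network grow. Here is a minimal pure-Python sketch of that idea; the publication records are made up for illustration, and real studies of course use large bibliographic databases rather than a hand-typed list:

```python
from itertools import combinations
from collections import defaultdict

# Hypothetical publication records: (year, list of authors)
papers = [
    (2001, ["Alice", "Bob"]),
    (2003, ["Alice", "Bob", "Carol"]),
    (2005, ["Carol", "Dave"]),
    (2007, ["Alice", "Dave"]),
]

def coauthorship_edges(papers, up_to_year):
    """Weighted coauthorship edges from all papers up to a given year."""
    weights = defaultdict(int)
    for year, authors in papers:
        if year > up_to_year:
            continue
        # every unordered pair of authors on a paper is one coauthorship
        for a, b in combinations(sorted(authors), 2):
            weights[(a, b)] += 1
    return dict(weights)

# Watch the network grow over time
print(coauthorship_edges(papers, 2002))  # only the Alice-Bob link exists yet
print(coauthorship_edges(papers, 2007))  # the full network, Alice-Bob with weight 2
```

The time-sliced snapshots are exactly what makes it possible to study how collaborations and new fields emerge.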

Also here was Bela Nagy from the Santa Fe Institute, who spoke about Comparing forecasts of technological progress. I missed the first half of his talk, but he has set up an open database at pcdb.santafe.edu where you can download the data he analyzed or play with it yourself. You can also upload your own data. You can find an introduction to his research on YouTube.

Some instances were very amusing. For example, in a panel discussion on the first day, one of the speakers, Caroline Wagner from SRI International, began by saying she had recently heard on the Science Channel that we live in a world with 11 dimensions. Then she compared that to the "multidimensionality" in her field of work. She asked the audience how many had heard of these 11 dimensions. Of about 200 people, maybe 5 raised their hands.

Another speaker, Diana Hicks, renamed normal and power-law distributions "hill" and "pipe" distributions, explaining that a quarter pipe was the only real-world example of a power law that she could find. Since the curve of said pipe actually falls to zero at a finite value, and certainly has no "fat tail" of any sort, this was somewhat unintentionally comic. In any case, the talk raised an interesting question. Hicks pointed out that the relation between the number of scientists and their output is not a normal distribution; instead, a few scientists (or institutions, respectively) are at the top and carry much of the output, while a lot of others don't contribute much. The question is then whether funding should be distributed proportionally (to whatever measure of scientific output), or more equally.
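The difference between "hill" and "pipe" is easy to quantify: under a heavy-tailed distribution a small fraction of people accounts for a large share of the total output, under a normal distribution it does not. A quick sketch with synthetic numbers; the particular parameters (a Gaussian with mean 10, a Pareto exponent of 1.5) are purely illustrative assumptions, not values measured for actual scientific output:

```python
import random

random.seed(0)

def top_share(outputs, fraction=0.1):
    """Fraction of total output produced by the top `fraction` of people."""
    outputs = sorted(outputs, reverse=True)
    k = max(1, int(len(outputs) * fraction))
    return sum(outputs[:k]) / sum(outputs)

n = 10000
# "hill": output normally distributed around a common mean (illustrative parameters)
hill = [max(0.0, random.gauss(10, 2)) for _ in range(n)]
# "pipe": heavy-tailed Pareto output (illustrative exponent)
pipe = [random.paretovariate(1.5) for _ in range(n)]

# The top 10% carry only slightly more than 10% in the "hill" case,
# but a large chunk of the total in the heavy-tailed "pipe" case.
print(f"top 10% share, hill: {top_share(hill):.2f}")
print(f"top 10% share, pipe: {top_share(pipe):.2f}")
```

It is exactly this concentration that makes the funding question non-trivial: distributing money proportionally to output means giving most of it to a small minority.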

Besides "power law," other buzzwords of the conference were "vertical disintegration," "transdisciplinarity" and "complex systems." It is amazing how easily one can get a speaker to stumble by simply asking what they mean by "complex." I also learned that what the NSF calls "transformative research" is called "frontier science" in Europe. Somebody pointed out in his talk that support for innovation is almost exclusively on the supply side, by funding basic science, and argued that one should also stimulate the demand side. It's an interesting thought. Presently it seems to me the supply and demand sides for basic science are pretty much identical.

Several people spoke about their initiatives, institutes, or conferences whose purpose is to get science policies across to the governments of various countries. What I find puzzling, though, is how completely disconnected these studies about science policies, collaboration, group dynamics, interdisciplinarity, and so on are from the researchers who actually work in these fields. This disconnect was one of the reasons for last year's conference on Science in the 21st Century.

My own talk on "The Marketplace of Ideas" went very well, despite a cold that I seem to have caught on the plane. I will give you a summary later. Listening to my coughing and sneezing, somebody recommended a homeopathic remedy during the coffee break. She would take it before the first symptoms set in, and it had helped her avoid getting a cold in the first place several times. Makes one wonder.

I'm flying back to Sweden tonight, just in time for the announcement of the Nobel Prize.

Wednesday, August 26, 2009

Paper Zapping

A nice quote from Strategic Reading, Ontologies, and the Future of Scientific Publishing, by Allen H. Renear and Carole L. Palmer (Science, 14 August 2009, p.829), on how scientists make use of the literature:

Now, as scientists search and browse, they are making queries and selecting information in much tighter iterations and with many different kinds of objectives in mind, almost as if they were playing a fast-paced video game. […] In a compelling analogy, Nicholas et al. describe a "slightly irritated" father watching his young daughter flick from channel to channel while watching television:

[the] father asks … why she cannot make up her mind and she answers that she is not attempting to make up her mind but is watching all the channels … gathering information horizontally, not vertically.

And they conclude

Now we see what the migration from traditional to electronic sources has meant in information seeking terms. We are all bouncers and flickers, and the success of Google is a testament to that, with its marvelous ability to enhance and amplify this flicking and bouncing (like a really good remote) … […]

Just as the aim of channel surfing is not to find a program to watch, the goal of literature surfing is not to find an article to read, but rather to find, assess, and exploit a range of information by scanning portions of many articles.

Thursday, November 13, 2008

Chaos, Solitons and Self-Promotion

Recently, our attention was drawn to the Elsevier journal Chaos, Solitons & Fractals and its Editor-in-Chief, M. S. El Naschie. Over the course of the years, El Naschie himself has published about 300 mostly single-authored papers in this journal, with abstracts of the kind:
“Von Neumann’s continuous geometry has been considerably developed by Connes and is characterized by two fundamental concepts. First it is formulated without any direct reference to points and second it possesses a dimensional function. The present work explores the relevance of these two points to string theory as well as E-infinity theory. In particular we show that point-lessness and dimensional function implies fractality. In turn fractality leads to the concept of average or fuzzy symmetry and the elimination of gauge anomalies.”

Now, neither of us is an expert on solitons or fractals. So we instead want to ask the completely unrelated question of whether being an editor at Elsevier allows one to circumvent peer review. In case you are suspicious about the scientific merit of El Naschie's work, you are not alone. John Baez gave it a closer look in his recent post The Case of M. S. El Naschie and found the result wanting.

The reason we got interested in this topic is that El Naschie lists himself on his website as a "distinguished Fellow of the Physics Institute of the Johann Wolfgang Goethe University, Frankfurt" - the institute where we both did our PhDs. However, this "Fellowship" has not been awarded by the physics department, but by a private association called the "Frankfurter Förderverein für physikalische Grundlagenforschung" (Frankfurt association for the support of basic research in physics). Gossip that we would never spread says the guy has money. Zoran Škoda wrote in an earlier comment:
“I was told that there is an investigation about using this affiliation now. I contacted some of the associate editors, most of whom did not respond to my question how such a behaviour is allowed. Two of them told me that they will quit from the editorial board, and one that his name was put on the editorial page without his consent!”

It is thus good to read that Herman van Campenhout, Elsevier CEO Science & Technology, writes in the Publishing Ethics Resource Kit: “Monitoring Publishing Ethics is a major aspect of the peer-review process, and as such lies within the area of responsibility of the Editor-in-Chief [].” And, he adds, “Fortunately, the area of science publishing is reasonably good at self-correcting, albeit sometimes later rather than earlier.”

Saturday, October 25, 2008

The Lightcone Institute

After you have stared at the link to the Lightcone Institute in my sidebar for a year or so, I think the time has come to tell you what it is about. It's what I spend the time on that is not occupied by physics - which is typically not much, and presently none at all - but it has added up over the last decade to make this a more concrete project, one that has managed to attract a moderate but existent amount of interest. It's my way of making constructive use of my desperation about the state of the world, and an antidote to the nagging feeling that what I work on isn't particularly useful for the vast majority of people on this planet.

Sure, I can give you a long speech about the purpose and importance of fundamental research, and how it is interesting for the broader public, not to mention that it is where my personal interests are. But it remains a fact that investing in fundamental research is a luxury of societies at a very advanced level. And if I open a newspaper after a day of sitting through seminars, it tells me the world really has other problems than axion-dilaton coset SL(2,R)/SO(2) 7-branes and similar fun. Sometimes more, sometimes less so. Presently more so.

But hey, as I told you previously, to me science is more than a profession; to me science is a worldview. And thus my interpretation of the problems we are currently facing on a global scale is a lack of scientific method, which has resulted in an erosion of trust in the systems that govern our lives. We are failing to update these systems and their institutions so that they are able to deal with our increasingly complex global problems.

Finishing the Scientific Revolution

Science is as old as mankind. We analyze the world we observe to better understand it, and to make our lives more pleasant. The scientific method has proven extremely useful in achieving this; this method is nothing but inventing a model of the world based on previous knowledge and testing how well it works. If it works well, or at least better than the available models, we call that progress and use it for further examinations. If not, we discard it and look for something better. At least that's the idea. A lot can be said about how this seemingly straightforward procedure has worked out in less straightforward ways over the course of our history, but to say the very least, it has worked tremendously well for the natural sciences.

Science in its organized form took off in the 16th and 17th centuries, and it has changed our world dramatically. This period of our history, during which we saw a tremendous amount of progress in the fields of astronomy, physics, biology, medicine and chemistry, is often called the “Scientific Revolution” - a revolution of thought rather than a revolution of governance, one that kick-started the development of technologies and established scientific research as one of the most important drivers of progress in our societies. From this period we know the names of great thinkers like Copernicus, Brahe, Kepler, Galileo, Bacon, Newton, Franklin and Descartes, to mention only a few.

Today we study areas as sophisticated as neuroscience, nanotechnology, immunology, microbiology or endocrinology. Don't worry if you don't know the latter; I didn't know it either. I Googled it and found it's the study of the glands and hormones of the body. And then there are of course the computer sciences, which are possibly the most impressive outcome of these technological developments altogether, and the advancement of computing power itself has had a large impact on the possibilities of scientific research.

I'm not a historian, and this isn't an essay about the history of science. I'm telling you this only because that revolution did not include the social sciences. Most notably, academic research in the fields that we would today call sociology, political science and economics is still waiting to obtain the attention it deserves. In these areas, the dark middle ages of trial and error in applications have lasted some centuries longer, with the situation only slowly changing today. The reason for this is not hard to find. Understanding political or social systems is much more complicated than understanding the motion of planets, since the latter is a system that can very easily be simplified to a model that is computable even by hand. In the political and social sciences, arguments are made mostly in narrative form and have long been detached from what was actually happening in politics. Nor have many of these studies reached the broad public, for most of them are not part of the standard school education, as physics, biology and chemistry are.

It is only now, in the 21st century, that the advances have come far enough that we begin to understand some aspects of systems as complex as, for example, our global economy. In fact, the economic system is probably the best investigated case that falls into this category, for there is money to be made there. The political system lags behind. This lag is crucial, because political institutions are needed to deal with the progress driven by the natural sciences. What we are running into is a dangerous imbalance in which new technologies change our societies faster than the governing institutions can deal with these changes.

Results from the natural sciences are today very well integrated into our daily lives. Think about architecture, engineering, drug tests, health checks, and the numerous investigations behind every single consumer item, from your car to canned food.

In the last decades one also finds increasingly more examples of a similar integration of the social sciences. Think about architecture again, but take into account the question of what group of people the building will host and what amount of interactivity you want to create. A lot of thought has been put into this, for example with PI's building. Or think about city planning in general. Economic modeling too has become quite common, though it is tainted by ideological beliefs and lacks scientific rigor. And then there are the cases where governments commission models to better understand the outcome of planned regulations, like various forms of carbon taxes. These are all cases where one sees the first glimpse of a development I am sure will speed up rapidly in the coming years: an increasing application of insights from the social sciences to our daily lives, in a more organized manner.

And after four centuries, it is really about time to finish the scientific revolution.

Reestablish Trust

And why is this necessary? It is necessary because we are simply no longer able to deal with the problems we are facing. Just look at the present economic crisis. If you stop for a moment trying to find somebody to blame, then the problem comes down to:
  1. Lack of understanding how the system works, i.e. studies that would have been necessary are missing.

  2. Paying more attention to ideology than to scientific argumentation, i.e. failure to acknowledge the importance of objectivity.

  3. Failure of our political system to incorporate knowledge in a timely manner.

You can say the same with regard to the question of why climate change is so slow to be addressed. You can say the same about lots of other outstanding problems, be it the increasing gap between the rich and the poor, water shortages, or even your country's inability to come to any conclusion about how to address coming energy scarcity. These are processes that do happen, but they happen excruciatingly slowly and are hindered by unnecessary rhetoric and psychological games.

Is it really surprising, then, that many people have lost trust in what politicians say? Is it really surprising that we are now facing several years of aftermath of an economic crisis caused by a lack of faith in this system?

The conclusion that I draw from this is that the most important thing we need is a solid basis for arguments, and a way to integrate the insights we gain. We need to improve the systems we are operating in, the systems that are meant to allow us to live together with a minimum amount of friction and a maximum amount of progress.

I neither believe that human behaviour is predictable, nor do I think the goal can be to replace human decisions with 'scientifically correct' decisions - that is plain nonsense. What I think is possible, and necessary, however, is to make sure decisions can be reached and incorporated quickly and easily. One should make a clear distinction here between opinions and the process of reaching and implementing a decision from those opinions. What I am talking about is setting up the system, based on scientific insights, to provide a better environment for those living within it to pursue individual goals without being hindered by outdated institutions.

Or, in short, make sure the system can correct its own mistakes.

The Lightcone Institute

So that's what the Institute is about. It is about bridging the gap between the natural, the social, and the computer sciences to initiate this change. And since it is a change in which the scientific community plays a pivotal role, one can't do it without addressing the problems of the academic system itself. The problems of the academic system are in many ways reflections of the larger problems we see in our societies: we have a system that is hindering progress, and knowledge about this dysfunctionality is not incorporated. The system is outdated and unable to correct itself.

You see the points discussed above reflected in the four pillars of the Institute's research. There is the interdisciplinary research to make the connections between the different areas of science, there is the basic research to provide the fundamental pieces that might be missing, and there is researching research to address the role of the scientific community. And then there is the essential public outreach to get the hopefully won insights to where they need to be. The latter point is meant to include communication to the public as well as to private, academic, and governmental institutions.

You will find that on the website the areas of research are populated with some possible research topics that fall into these categories, like Social-Ecological Systems, Network Science, or the Future of Scientific Publishing.

As to the operation of the Institute, it pursues directed research, in the sense that the Institute has a clearly defined mission to which studies should be dedicated. Here are the mission statements:
  • The Institute's research is to be beneficial and relevant for society.

  • The research is focused on interdisciplinary work between the natural and social sciences, fundamental research, and the sociology of science.

  • The Institute aims to strengthen the public outreach of the scientific enterprise and actively communicate its research endeavors.

  • The Institute will collaborate closely with political institutions, businesses and academia.

All that's missing is money and people.
“I am not an advocate for frequent changes in laws and constitutions. But laws and institutions must go hand in hand with the progress of the human mind. As that becomes more developed, more enlightened, as new discoveries are made, new truths discovered and manners and opinions change, with the change of circumstances, institutions must advance to keep pace with the times.”

~Thomas Jefferson