Neurotycho – A farewell to an old man

Tycho Brahe was a Danish nobleman. Do not think that I am much impressed by nobility as a title, but it meant a lot to Brahe. He cared nothing for nobility itself – in fact, he disliked it – but he had learned, painfully and thoroughly, that it pays the bills. Tycho had been observing the orbits of the planets for over 20 years, and he was really quite good at it. He was so good that the Danish King had funded his observatory for this entire period, including galleries and libraries and printing presses and a papermaking works and assistants. I never met anyone as conceited as him. You have probably heard that he had part of his nose cut off; it was the same attitude that cost him his observatory and estate. After the King cut his funding, he spent months thinking about leaving Denmark. He had not done it because it would have been too cruel to deprive her of himself, so it was a very healthful shock when King Christian finally made it clear that he was no longer welcome in the realm.

Tycho spent the next two years in travel, first to Wandsbeck, near Hamburg, then to Prague, having left his family in Dresden. Tycho was in Prague at the invitation of Rudolph II, Holy Roman Emperor. After an audience with the Emperor, who provided new funding to build an observatory, Tycho was made Imperial Mathematician – a misnomer if I ever saw one. You have to understand that in those days, Prague was not the city that it is today. It was only the stench of the streets, not the city walls, that kept the Turks at bay. You can imagine how a man like Tycho felt about all that. He set up his observatory in a remote castle three dozen miles northeast of the capital. That way, he also avoided the plague that was ravaging Prague at the time.

The funding source

That is where we first met. You see, my father Heinrich Kepler was a traveling mercenary with a distinct fondness for hard liquor. My mother Katherine was a great believer in the healing powers of herbs; she was prosecuted as a witch. When I was a child, I caught smallpox, which left me with permanently poor eyesight. Under these circumstances, I am not at liberty to be too particular about my employ. So I make do as a theorist and modeler, casting horoscopes on the side to supplement my income.

I first met Tycho in February of 1600; he was almost twice my age. You would think this a promising meeting. Tycho had assembled the most comprehensive and accurate body of astronomical data in history, yet he had never analyzed any of it. He needed the analysis done in order to create a new set of tables of planetary positions, the “Rudolphine Tables”, named in honor of the Emperor. I had never in my life laid hands on any actual data, but I was confident in my ability to model them. I also had a burning desire for a paying job. I tried to convince Tycho to appoint me as his equal in the quest for the tables: he could simply give me his collection of data, and I could model the planetary orbits. But things did not work out that way. All I could manage, after a year of negotiations, was a position as his assistant. Nor did he give me free access to his data. I cannot imagine why – he knows of my skill. On top of all this, he calls me ungrateful behind my back and to my face.

I first became aware of his grave condition when I was summoned to his deathbed on October 24th, 1601, less than a year into my appointment.

Tycho and I

Tycho: “I don’t trust you, and you are ungrateful, but you have the greatest mathematical ability of all my assistants. I want you to complete the Rudolphine Tables. You have full access to my entire collection of data. I compel you to put it to good use”.

I: “Thank you, Tycho”.

Tycho: “Kepler, you must ensure that I did not live in vain”.

I: “I will, Tycho”.

Immediately after this exchange, Tycho died.

Such is the relationship between modelers and experimentalists. It really is an intriguing story. Tycho collected data for 40 years without ever analyzing any of it – before the pressures of conferences, before publish or perish, before grants. He is then forced to give it to a modeler who – despite his own preconceived notions – figures out what is going on.

Edit: Apparently, “Neurotycho” is a real thing. Who knew?

Posted in Misc, Science | Leave a comment

The shining city

The city of light was attacked by the forces of darkness, but the lights prevailed.

Beaming brighter than ever.

The shining city

Encouraging. Perhaps reason and virtue will prevail after all, checking the barbarians at the gate as well as subduing the barbarians within.

Posted in Misc | Leave a comment

A statistical analysis of Olympic outcomes of the past 28 years

Now that the excitement over the Olympics has abated a bit, it is time to reflect on the outcomes to see if we can discern any long-term trends.

Most casual observers seem primarily interested in the outcomes (= medals) as a function of nationality. This is not surprising, as medal count is perhaps taken as a proxy for the relative competitiveness of each athlete's home country. We will therefore focus our analysis on these metrics.

Based on medal counts alone, it can be hard to factor out extra-athletic political developments, which is why we limit the time frame to the last 7 Olympics, going back to 1988. Before that, Cold War events such as the large-scale boycotts of Moscow in 1980 and Los Angeles in 1984 by the respective “other” political bloc led to obvious distortions of the outcomes. For instance, the US won 83 gold medals in 1984, or more than 35% of the total – a highly unusual result, likely owed to the absence of most athletes from the Eastern Bloc. This yields an analysis period of 28 years, sufficiently long to discern long-term trends.

For these last 7 Olympic Games, figure 1 shows the basic results.

Figure 1. Total medals per nation and year

It may look complicated, but there are some clear trends. Before we go further into that, let’s discuss some necessary preliminaries. Seven countries were picked, based on being relatively successful (in the top 10 of countries at least once during the time period) and on showing some development. Nations that were extremely stable, e.g. South Korea, which has consistently been at around 30 medals, have been omitted, as have others that show essentially the same general trend as those plotted, e.g. Romania and Hungary follow the same trend as Bulgaria, which is shown. Restricting the figure to seven nations allowed the use of seven distinct and easily recognizable colors. I did want to include more, but the graph was busy enough as it was. Some other remarks about the mechanics of the figure: in the spirit of keeping politics at bay as much as possible, “Soviet Russia” comprises the Soviet Union in 1988, the “Unified Team” in 1992 and the Russian team after 1992 (to indicate legal and organizational continuity). “Germany” combines East and West Germany in 1988 and unified Germany thereafter. The “UK” refers to the team that calls itself Great Britain, further muddling the Great Britain/UK distinction. That designation was adopted in 1908 to avoid an Irish boycott, but Ireland has fielded its own team for a long time now, and the label is unfair to athletes from Northern Ireland, which is why I will not perpetuate the anachronism here.
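As an aside, the mechanics of such a figure are easy to reproduce. Below is a minimal plotting sketch; the file name and column names are assumptions for illustration, not part of the original analysis.

```python
# A minimal sketch of a figure like Figure 1, assuming a hypothetical CSV
# "medals.csv" with columns: year, nation, total_medals.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("medals.csv")  # hypothetical file name

plt.figure(figsize=(8, 5))
for nation, group in df.groupby("nation"):
    # one distinct, easily recognizable color per nation (matplotlib cycles them)
    plt.plot(group["year"], group["total_medals"], marker="o", label=nation)

plt.xticks(sorted(df["year"].unique()))  # one tick per Olympic year
plt.xlabel("Olympic year")
plt.ylabel("Total medals")
plt.legend()
plt.title("Total medals per nation and year")
plt.show()
```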

Two major trends become apparent:

1. Communism/central planning boosts medal counts. It is hard to distinguish between the two, as essentially all remaining central-planning regimes are also communist in nature. The dramatic decline in medals among countries that abandoned communism (Germany, Bulgaria, Soviet Russia, as well as others not shown) contrasts with the continued success of countries that retained it (China, and others not shown, e.g. Cuba, North Korea). The effect is very real, but it is unclear how to interpret it. Do communist countries value the Olympics more (perhaps as an outward sign of pride)? Does sports performance lend itself to central planning? The German results suggest that this might be so. In 1988, East Germany won 2.5 times as many medals as West Germany, despite the West having almost 4 times the population at the time. In other words, on a per-capita basis, East Germany was outperforming West Germany by a factor of almost 10:1. Note that this happened in spite of the famed abundance of economic resources in the West compared to the East.

2. The outcomes of the UK, Australia and China show that countries that host the Olympics see their results boosted not only in the year in which they host the Games themselves (which could be due to some kind of home advantage), but also in the run-up to the Olympics (maybe reflecting an increased allocation of resources to athletics in preparation for the big event, perhaps not uncorrelated with the fact that they were awarded the Olympics in the first place). This is consistent with the steep rise in UK performance that began even before the 2012 Olympics were awarded to London in 2005, likely due to direct investment from the proceeds of a lottery program. Also, the performance of Australia suggests that hosts might be able to put some persistent sports infrastructure in place, allowing them to retain most – but not all – of their gains. In this sense, it might be interesting to watch the Chinese performance in the future. The rise of China has led to much insecurity in the US, but it remains to be seen whether this rise is sustainable, or whether the strong Chinese showing in 2008 was simply due to a confluence of several of these factors.

Could it be argued that a focus on total medals is misleading? Is a bronze medal really as reflective of top athletic performance as a gold medal? Is second best good enough? Purists would say no. The standard of the IOC itself is unequivocal: when used in rankings, a gold medal is infinitely more valuable than a silver one. Gold is always first. A nation that wins 1 gold medal but nothing else will always be ranked above all countries that didn’t win a gold medal, regardless of the number of silver or bronze medals won. However, this is likely to be mostly Spiegelfechterei (shadow fencing). There is scant evidence that the distinction between medals is even reliable. Intuitively, in a field of extremely talented and competitive athletes, sheer luck or daily form might supply the last 0.01% of performance differential that makes the difference between gold and silver these days. This intuition is substantiated by statistical analysis. As figure 2 shows, the correlation between the total medal count of a country and its total number of gold medals is extremely close to perfect: r = 0.97, p = 4.47e-63. Looks marginally significant.

Figure 2. Total medals vs. Gold medals
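For reference, this correlation check is straightforward to reproduce. The sketch below is generic, assuming the per-country counts live in a hypothetical CSV; it is not the original analysis script.

```python
# A minimal sketch of the correlation check, assuming a hypothetical CSV
# "medal_totals.csv" with one row per country and the columns shown below.
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("medal_totals.csv")  # hypothetical file name
r, p = pearsonr(df["total_medals"], df["gold_medals"])
print(f"r = {r:.2f}, p = {p:.2e}")  # the post reports r = 0.97, p = 4.47e-63
```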

With this in mind, looking at total medals is indeed justified, as there is three times as much data available, increasing statistical reliability; looking only at gold medals will increase the volatility of the results. To check these notions empirically, we examine the same trends as above, but for gold medals only.

Figure 3 shows the same data as figure 1, but for gold medals only. Indeed, the same trends we observed in figure 1 do hold, but in more pronounced, exaggerated form: the home advantage (now evident even in the US numbers, and even more so for China), the decline of (East) Germany, and the host effects across the board.

Figure 3. Gold medals per nation per year.

To allow a fair comparison over time, what remains to be done is to normalize the absolute numbers by the number of medals given out at each event. This number has gone up dramatically over the years – by over 30 percent in the last 28 years alone – so looking at absolute numbers can be somewhat misleading. If the current trend holds, more than 1000 medals may be given out in the near future, as can be seen in figure 4 below (although the trend has slowed somewhat in recent years).

Figure 4: Total medals awarded per year
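The normalization itself is a one-liner. A minimal sketch, again assuming the hypothetical per-nation CSV from above, with the additional assumption that the file covers all competing nations:

```python
# A minimal sketch of the normalization behind Figure 5, assuming a hypothetical
# CSV "medals.csv" (year, nation, total_medals) covering ALL competing nations.
import pandas as pd

df = pd.read_csv("medals.csv")  # hypothetical file name
awarded_per_year = df.groupby("year")["total_medals"].transform("sum")
df["medal_share"] = df["total_medals"] / awarded_per_year  # fraction of medals won
```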

Taking this into account yields figure 5. Broadly speaking, the same trends hold.

Figure 5. Total medals per nation and year, normalized.

This leads us to a simple model (which of course leaves a lot of variance unaccounted for). Judging from these graphs, two things matter for the Olympic success of high-performing nations:

1. The means to train top athletes, e.g. population and/or financial resources devoted to athletics. However, this is not in itself sufficient for Olympic success. This is best exemplified by India, which has a very large population and increasing economic resources, but in spite of this earned only 6 medals at the 2012 Olympics – fewer than Ethiopia.

2. The (political?) will to do so. It is not enough for resources to be available in a country; for Olympic success, these resources need to be mobilized and focused – something that happens in the run-up to hosting an Olympic event, that seems to matter greatly to communist regimes, and that lends itself to central long-term planning.

We will see if this analysis holds for the future.

So much for the – mostly descriptive – side of things. I am not unaware of the larger debate about the cult around the Olympic Games as such. Some people legitimately question whether athletes exhibit the qualities and virtues we should admire as a society – not a trivial question in the age of rampant doping scandals. A second, related position notes that the IOC essentially leads a monopolistic and parasitic existence, feeding off the naive vanities of nations, cities and individuals alike. Totalitarian states certainly do seem to be fond of the Games, as they provide an opportunity to demonstrate the manifest superiority of their system before a world audience while giving their oppressed populations some pride, solace and reassurance. This is a tried and true method. It worked for Hitler (Germany won by far the most medals in 1936), the Soviets, the East Germans and others. Even today, North Korea and Cuba punch far above what could reasonably be expected of them. To say nothing of China. Finally, the oft-touted inspirational effects of the Olympics, supposedly encouraging physical fitness in the general population, might be largely fictional. I am not aware of a correlation between the Olympic success of a country's athletes and the BMI of its general population.

These are all valid concerns, but I must respectfully insist that these issues need to be addressed at a different time, as they require an entirely different level of analysis.

Posted in Science | 1 Comment

Meditations on the proper handling of pigs

Popular wisdom is not short on advice on how to handle pigs. This goes back to at least biblical times, when it was counseled that one should

Neither cast ye your pearls before swine.

Matthew 7:6.

The point here is that the pigs – being pigs – will not be able to appreciate the pearls for what they are; the pearls mean nothing to them. But one’s own supply of pearls is rather limited, and on top of that, one probably expects something in return for parting with a pearl. In this scenario, no one wins – not the pigs, and not the pearl caster. So don’t do it.

Actual pigs

Our cultural obsession with pigs does not stop with the Bible. On the contrary, pigs are the metaphorical animal of choice for morally and cognitively corrupt characters such as Stalinists, as popularized by Orwell’s “Animal Farm”. Of course, none of this is limited to pigs. We like to use animals metaphorically, from black swans (the book is *much* better than the movie) to hedgehogs and foxes.

But back to pigs.

As an old Irish proverb has it:

Never wrestle with a pig. You’ll both get dirty, but the pig will like it.

To judge from most of what passes for public discourse these days, this seems to have been forgotten. It is worth remembering, as I do not believe that the rules of engagement for public fights with pigs have changed all that much since time immemorial. I do understand that it drives ratings, but it is not all that helpful. Or at least, one should know what to expect.

A corollary of this – as observed by Heinlein – is that one should

Never try to teach a pig to sing. You waste your time and you annoy the pig.

Robert A. Heinlein (1973)

To be clear, the idea is that one wastes one’s time because it won’t work, due to inability or unwillingness on the part of the pig. And it is not restricted to singing, either. Others have pointed out that it is equally pointless – and even harmful – to try to teach pigs how to fly. A lot of this has been summarized and recast into principles by Dale Carnegie.

So how *should* one interact with pigs? As far as I can tell, the sole sensible practical advice – bacon notwithstanding – is that when it comes to pigs, the only way to win is not to play.

How relevant this is depends on how many pigs there are and how easy it is to distinguish them from non-pigs. Sadly, the population of proverbial pigs seems to be ever-growing, likely because societal success shields them from the evolutionary pressures imposed by reality (to make matters worse, many pigs also seem to have mastered the art of camouflage). In this sense, a society that is successful in most conceivable ways is self-limiting, as it invites undesirable social evolution: the pigs can get away with polluting the commons with their obnoxious behavior (although the original commons was a sheep issue). In the long run, this will need to be countered by a second-order cultural evolution in order to stave off the inevitable societal crash that comes from the pigs taking over completely.

This is a task for an organized social movement. In the meantime, how should individuals handle pigs? Probably by recognizing them for what they are (pigs), recognizing that they probably can’t help being pigs, and not having unreasonable expectations. It could be worse. In another fable, the punchline is that one should not be surprised when a snake bites, because that is what snakes do.

The point is that we now live in a social environment that we are not well evolved for. Does our humanity scale to it? In a social environment with many individuals and diverse positions, verbal battles can be expected to be frequent, but there are no clear victory conditions. It is commonly believed that the pen is mightier than the sword, but in an adverse exchange (particularly on the internet), arguments are very rarely sufficient to change anyone’s mind, no matter how compelling. So the “swine maneuver” can be used to flag such an exchange, and perhaps allow the parties to disengage gracefully, disarm the tribalist primate self-defense systems that have kicked in, and perhaps meaningfully re-engage. Perhaps…

Will an invocation of the “swine maneuver” forestall adverse outcomes? Does the gatekeeper go away when called out on his behavior? Or does it amount to pouring oil on the fire (not unlikely, as most pigs can read these days – although it could go either way)? That is an empirical question.

On a final note, teaching pigs to fly hasn’t gotten any easier in the internet age. Preaching to people who already share your beliefs is easy. What is hard is to have a productive discussion with someone who emphatically does not share your fundamental premises, and perhaps to effect positive change. Mostly, these exchanges just devolve into name-calling. Not useful.

To be clear, we are talking about proverbial pigs here. Actual pigs are much more cognitively and socially adept (not to mention cleaner and less lazy but perhaps less happy) than most cultures – and religions – give them credit for (they are probably maligned to help rationalize eating them).

NOTE: Putin himself (however one feels about his politics) has weighed in on the issue and aptly characterized some exchanges as fruitless:

“I’d rather not deal with such questions, because anyway it’s like shearing a pig – lots of screams but little wool.”

A somewhat similar – but not quite identical – situation seems to obtain in the avian world:

“If you want to soar with the eagles, you can’t flock with the turkeys” (as they say).

Posted in Optimization, Social commentary, Strategy | Leave a comment

Low contrasts shed light on the neural code for speed

The effects of stimulus contrast on visual perception have been known for a long time. For example, there is a consensus that at lower contrasts, objects appear to move more slowly than they actually do (Thompson, 1982). Several computational models have been posited to account for this observation, although until now there has been a paucity of neural data with which to validate them. A recent article in the Journal of Neuroscience seeks to address this issue, combining single-cell recordings in a motion-sensitive region of visual cortex with psychophysics and modeling to better elucidate the neural code for speed perception (Krekelberg et al., 2006).
It has been well established that the middle temporal (MT) cortical area in the macaque monkey is of central importance for both motion processing and motion perception (for review, see Born and Bradley, 2005). Importantly, it has recently been shown that lowering stimulus contrast produces a qualitative difference in the extent to which MT cells integrate information over space (Pack et al., 2005). Hence, it is only natural to wonder how the joint manipulation of stimulus contrast and speed affects both perceptual reports and MT responses.
To answer this question, Krekelberg et al. recorded activity from single units in area MT of awake, behaving macaques in response to patches of moving dots that could vary in both speed and contrast. The purpose of the electrophysiological recordings was to elicit neural speed tuning curves at various levels of contrast. In these trials, the experimenters presented a single patch, centered on the receptive field and moving in the preferred direction of the neuron, while the monkey maintained fixation. In separate sessions, the authors had human observers and a monkey subject perform a psychophysical speed discrimination task, allowing them to compare neural and psychophysical performance. In the psychophysical task, observers judged which of two simultaneously presented patches of moving dots appeared faster – one patch was presented at a fixed speed but variable levels of contrast, while the other was presented at a fixed contrast but at variable speeds. This procedure yielded psychometric functions quantifying the shift in apparent speed at lower contrasts.
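As an illustration of this kind of analysis (not the authors' actual code), perceived speed can be read off a fitted psychometric function as the point of subjective equality (PSE). The sketch below uses made-up data:

```python
# Fit a cumulative Gaussian to hypothetical choice data and read off the PSE,
# i.e. the comparison speed judged faster than the standard 50% of the time.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(speed, pse, sigma):
    """P('comparison faster') as a cumulative Gaussian of comparison speed."""
    return norm.cdf(speed, loc=pse, scale=sigma)

speeds = np.array([2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])           # deg/s (made up)
p_faster = np.array([0.05, 0.15, 0.35, 0.55, 0.80, 0.92, 0.98])  # made-up proportions

(pse, sigma), _ = curve_fit(psychometric, speeds, p_faster, p0=[5.0, 1.0])
# A PSE below the standard's physical speed would indicate that the
# low-contrast standard patch appeared slower than it actually was.
print(f"PSE = {pse:.2f} deg/s")
```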
Figure 1

Using these methods, the authors report several major findings.
Consistent with previous reports (Thompson, 1982), the authors show that there is a substantial effect of contrast on speed perception, insofar as the perceived speed of the random-dot stimuli is drastically reduced at low contrasts – reducing contrast by one octave reduces perceived speed by about 9%. Moreover, the psychophysical data from the monkey suggest that there is – at least qualitatively – a corresponding effect in macaques (see Krekelberg et al., Figure 2).
Further, reducing contrast had several effects on the activity of speed-tuned MT neurons. Generally, speed tuning curves shift such that the peak firing rate is reached at slower speeds (Krekelberg et al., Figures 5A, 6A); this shift in preferred speed is more pronounced in neurons preferring faster speeds (Figure 5B). Also, most – but not all – cells respond less vigorously at lower contrasts (Krekelberg et al., Figure 4A). It is of particular interest that, due to the shifted peak, some cells (about 30% of neurons) respond more strongly to low-contrast than to high-contrast motion at slow speeds (see Krekelberg et al., Figure 3C).
Finally, the authors use these neural data to test models attempting to account for the observed psychophysical effects. Specifically, they test two exemplars of a family of labeled-line models, each of which had previously been shown to account for human speed perception. In the vector average model, a population of MT cells effectively computes a weighted average in which each neuron votes for its preferred speed with a weight given by its normalized firing rate. Surprisingly, when fed with their neural data, this model predicts an increase in perceived speed at lower contrast, inconsistent with the psychophysical data (Krekelberg et al., Figure 7).
Similarly, the authors point out that a “ratio model”, in which perceived speed corresponds to the ratio of activity in a fast and a slow channel, also cannot account for the psychophysical effects in terms of the neural data (Krekelberg et al., Figure 8B).
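For concreteness, here is a minimal sketch of the two read-out schemes as described above – a generic illustration with made-up numbers, not the authors' implementation:

```python
# Two labeled-line read-outs of speed from a population of MT-like neurons.
import numpy as np

def vector_average(preferred_speeds, rates):
    """Perceived speed = average of preferred speeds, weighted by normalized rates."""
    weights = rates / rates.sum()
    return np.sum(weights * preferred_speeds)

def ratio_model(rate_fast, rate_slow):
    """Perceived speed grows with the ratio of a fast and a slow channel's activity."""
    return rate_fast / rate_slow

# Hypothetical population of three neurons preferring 2, 8 and 32 deg/s:
prefs = np.array([2.0, 8.0, 32.0])
rates = np.array([5.0, 20.0, 10.0])     # made-up firing rates in spikes/s
print(vector_average(prefs, rates))     # -> 14.0 deg/s
print(ratio_model(rates[2], rates[0]))  # -> 2.0 (arbitrary units)
```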
Hence, the authors state that their data are fundamentally inconsistent with existing labeled-line models of speed coding in area MT. They conclude that the code for speed in MT is likely to differ from the established labeled-line codes that can account for direction perception.
Several potential problems will need to be addressed in further work: stimulus size was held constant (potentially covering a dynamic surround); not only contrast but also luminance varied across conditions; and there was a profound lack of psychophysical data in the macaque. Insofar as we can tell, however, none of these issues threatens the interpretations proffered by the authors.
The authors characterize their effect as the result of imperfect contrast-gain control, and to a large degree, the differences in firing rates at different contrasts can indeed be accounted for by simple contrast-gain mechanisms (Krekelberg et al., Figure 4C). However, before attributing the observed effects to such a mechanism, it is prudent to first rule out other plausible explanations.
A more exciting view, highlighting the functional role of non-veridical speed perception, has recently been proposed. In this Bayesian approach, it is adaptive, when signal strength is low relative to the noise, to rely less on current sensory input and more on prior experience – which, recent evidence suggests, corresponds to slower speeds (Stocker & Simoncelli, 2006).
Of course, this doesn’t detract from the theoretical significance or the main thrust of the paper, which is highly provocative in that it highlights deficits in current models of speed perception. To improve on these models, it might be beneficial to simultaneously gather neural and psychophysical data from animals performing a similar speed discrimination task. This would allow a more direct comparison of neurometric and psychometric performance measures, as well as the elucidation of neuro-perceptual correlates. Additionally, it would provide modelers with valuable data on the issue of speed coding in area MT.
In conclusion, we believe that this paper poses a formidable challenge to both the neurophysiological and the modeling communities.

References

Born, R. T. & Bradley, D. C. (2005) Structure and function of visual area MT. Annu. Rev. Neurosci., 157-189.

Krekelberg, B., van Wezel, R.J.A., & Albright, T.D. (2006). Interactions between Speed and Contrast Tuning in the Middle Temporal Area: Implications for the Neural Code for Speed. J. Neurosci., 8988-8998.

Pack, C. C., Hunter, J. N., & Born, R. T. (2005)  Contrast dependence of suppressive influences in cortical area MT of alert macaque. J. Neurophysiol., 1809-1815.

Stocker, A. A. & Simoncelli E. P. (2006). Noise characteristics and prior expectations in human visual speed perception. Nat. Neurosci., 578-585.

Thompson, P. (1982) Perceived rate of movement depends on contrast. Vision Res., 377–380.

Acknowledgments: This piece was written with substantial input from Andrew M. Clark

Posted in Journal club, Science | Leave a comment

How to successfully attend a major scientific conference

Most professional societies hold an annual meeting. They are an important venue for the exchange of ideas as well as for professional development.

SfN 2009

However, given their scope and scale, these meetings can be overwhelming, particularly – but not exclusively – to the novice. This leads to suboptimal yields, which can be quite disappointing given the substantial investment in time and money, to say nothing of effort.

There is a way to get a lot more out of these conferences, but it requires a lot of knowledge as well as a set of highly specific skills. As you will see, it is well worth developing this particular skill set. I have already written about this topic extensively, using the Annual Meeting of the Society for Neuroscience as a paradigmatic example of a major conference. Here is a compilation:

1. Be extremely selective about what you attend. Your time and attention are quite limited.

2. Be sure to preview the venue before getting there. It is easy to get lost.

3. Use cutting-edge media, even in your posters. Create a podster or padster.

4. Make sure to have food readily available. Fasting is a bad idea.

5. Understand the difference between poster and slide sessions as well as symposia.

6. Understand the relationship between science and society.

7. Be aware of temporal constraints. How much time is actually available at the meeting?

8. Understand the history and development of the meeting you are attending.

9. Understand that the meeting can be extremely physically taxing.

10. Can the meeting take so much out of you that you get sick?

Hope this helps. Have a good meeting.

Posted in Conference | 2 Comments

A video says much more than a thousand words – rapid communication of scientific concepts via the “Podster”, a historic waystation towards a truly dynamic presentation surface

Background
The efficient communication of complex scientific concepts at an academic conference poses a challenge that is as old as it is formidable. The use of dynamic visual stimuli and intricate experimental designs has exacerbated the problem in recent decades, particularly in the area of visual neuroscience. It is no easy task to communicate experimental procedures and results to an audience unfamiliar with the study, often under conditions of sleep deprivation and within a short period of time, as people typically visit a multitude of presentations. These issues are partially alleviated in slide sessions, during which the speaker can show the audience the actual stimuli, no matter how sophisticated. Yet posters are far more ubiquitous than talks – at SfN 2006, posters outnumbered talks 10.5:1, a ratio fairly typical of scientific conventions in general. Hence, the problem is how to incorporate the positive aspects of a talk into the poster format.
Previous solutions typically involve a laptop computer. In one version of this approach, the laptop is held by the presenter; in another, it is suspended from the poster board. Both of these solutions are suboptimal. In the former case, opportunities to gesture are reduced, possibly impairing cognitive processing in both audience and presenter. The latter approach is technically challenging to execute and typically results in a display that is not at eye level and that crowds out the actual poster. Problems common to all incarnations of the laptop approach include batteries that do not last an entire poster session, the restricted viewing angles inherent to many notebook displays (limiting the number of people who can view the screen at the same time), and a fundamental lack of interactivity, as it is awkward for both presenter and audience to use the laptop controls in this setup.
To summarize, the common laptop solutions to the presentation challenge solve some problems but introduce others.
Concept
The goal remains to combine the audio-visual advantages of a talk with the interactivity and closeness with the audience that is afforded by a poster presentation.
This goal can be achieved by attaching one or more video iPods to the poster surface. The iPods can be seamlessly integrated into the poster as dynamic figures. These currently feature a 2.5” screen with a resolution of up to 640 x 480 pixels at very broad viewing angles. Efficient power management allows a battery life that lasts the entire poster session. More importantly, the small size of the unit makes it easy to place multiple iPods at the appropriate places on the poster. The controls make for a very interactive experience, as the presenter can focus the attention of the audience on what is relevant at a given point in the poster narrative. This makes it possible to implement and augment psychologically appropriate presentation techniques. Moreover, the hands of the presenter are free for gesturing and pointing, enhancing the learning experience of the audience. As in a talk, visual displays allow the audience to utilize their powerful visuospatial systems to maximize information transmission. As far as we can tell, the concept introduces no obvious drawbacks. The Podster concept was first implemented at SfN 2006, with overwhelmingly positive audience feedback; see Figure 1.

Figure 1: The first Podster, at SfN 2006

Practical considerations
Overall, the implementation of the concept is very simple. Yet, there are some things to consider to maximize the impact.
• In principle, devices other than iPods can be used to achieve the same effect. However, such devices should be white, light and flat to be suitable as dynamic figures integrated with the rest of the poster, and battery life and resolution should be sufficient. Current video iPods fit these specifications; any other device that does is equally suitable.
• One reason why video iPods are particularly useful for the implementation of this concept is the fact that they are already available in large numbers among the general public, allowing for a dual use at no additional cost.
• Try to place the iPod figures at eye level – this will facilitate audience interaction.
• The iPods can be easily attached with tesa® poster powerstrips. Two per iPod are sufficient.
• After mounting the iPods, wipe the screens with alcohol swabs for clarity.
• If audio is desired, the earbuds can be mounted next to the iPod with tacks; see Figure 1. Make sure to wipe them with alcohol swabs between each use as well, to prevent the transmission of ear diseases.
• To ensure a smooth removal of the iPods after the presentation, the poster should be laminated. Otherwise, the probability that the poster will rip is high.
• The Podster really affords the flexibility of a talk. In other words, one can update the dynamic figures up until the point of the presentation.
• In practice, this allows the re-use of posters with continuously updated data figures. This consideration is not immaterial, with professional poster printing currently costing around $200.
Summary and Outlook
The biggest advantage of a poster presentation is the direct exchange with the audience. One of its biggest drawbacks is the lack of dynamic visual images to illustrate experimental stimuli, designs and results. Placing small portable video screens on the poster to yield a “podster” overcomes these problems. Hence, the podster combines the visual flexibility of a talk with the interactive narrative of a poster, at low cost and with little effort. The podster thus constitutes a significant advance in the rapid communication of scientific information.
The next thing to look for in the practical implementation of the podster is the full-screen video iPod, which features a 3.5” screen and is scheduled to launch within 6 months.
In the long run, technologies such as electronic ink or convention centers equipped with flat screens or touch screens instead of poster boards are likely to replace conventional poster presentations altogether.
Yet, it is unclear when this bright future will arrive. In the meantime, the podster is a viable and valuable bridge technology towards a dynamic presentation surface, augmenting the rapid communication of scientific information at poster sessions.

Note: This is a reproduction of something I wrote in late 2006, after the debut of the podster concept in a real life setting (SfN, if that counts as “real life”). How times can change – first we went from the podster to the padster, and now truly dynamic posters are not far off. Soon, hordes of scientists taking up all the overhead bins of a plane headed to or from SfN will be a thing of the past.

Posted in Conference, Optimization, Science, Technology | Leave a comment

The need for sleep

Western culture – the US in particular – is pervaded by the notion of achievement through hard work. This has many benefits, but like everything, it comes at a steep cost.

One of the things that is typically shortchanged in the relentless drive for achievement is the need for rest and recovery, sleep being a particular instance of this need. If you want to run the world and be part of the elite, you naturally don’t have time or even a need to sleep.

Thus, it should not come as a surprise that sleep is to self-proclaimed high achievers like studying is to high-school and college students – it is essential to long term success, but no one admits to doing much of it.

Just ask Tiger Woods – he admits to sleeping a mere 4-5 hours per night, even though it is common knowledge that elite athletes have increased sleep needs, on the order of 10 hours or more. Of course, we now all know what he is doing during the rest of the time.

Be that as it may, it is common knowledge that great men have better things to do than sleep – Da Vinci, Napoleon* and Edison all claimed sleep durations of well under 5 hours per night. After all, there is plenty of time to rest once one is dead, as common knowledge has it. Nor is this an exclusively American trait; many cultures that place a similarly great value on achievement have similar sayings. But why?

That is quite simple. The downside of sleeping is obvious: for practical purposes, physical time is a completely inelastic commodity. In other words, time as a resource really does live in a zero-sum universe. Everyone has the same amount of it, and as one becomes more and more accomplished, one's time becomes more and more valuable. Sleeping might be the single most expensive thing – in terms of forgone earnings – these people do on any given day. It would make good economic sense to cut back on it. While sleeping, one can't do anything and can't react to anything; indeed, the absence of motor output is one of the defining characteristics of sleep.

The upside of sleep is much harder to pin down. Ask any scientist why one needs to sleep and few will give a non-hedged answer that doesn’t amount to some form of “we don’t really know”.

The problem is that with this readily apparent downside and no clear upsides, the rational response is to limit sleep in whichever way possible.

Many people do exactly that – in the past 100 years, the average sleep duration per night in the US has dropped from around 8 hours to under 6.75 hours. This is achieved through the pervasive use of artificial lighting in combination with all available kinds of stimulants, from modafinil to caffeine. People try to cut corners further with gimmicks like polyphasic sleep. The problem with this approach is that it is neither efficient nor sustainable.

Why not? This is best illustrated by a concept called “sleep debt”. Like all debt, it tends to accumulate and even accrue interest. In other words, every time sleep is truncated or delayed with a stimulant like caffeine, a short-term loan is taken out. It will have to be repaid later.
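As a toy illustration of the bookkeeping (emphatically not a physiological model – the 8-hour figure and the nightly durations are assumptions for illustration):

```python
# Toy bookkeeping of cumulative sleep debt over a hypothetical week.
NEED_PER_NIGHT = 8.0  # assumed nightly need, in hours
nights = [6.5, 7.0, 6.0, 6.5, 7.0, 8.0, 9.5]  # made-up sleep durations

debt = 0.0
for slept in nights:
    debt = max(0.0, debt + NEED_PER_NIGHT - slept)  # repaid only by extra sleep
    print(f"slept {slept:.1f} h -> cumulative debt {debt:.1f} h")
```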

But why? Because we sleep for very good reasons. It is important to realize that most scientists hedge their answers for purely epistemological reasons – and just because they hedge doesn't mean that very good reasons don't exist. They do.

A strong hint comes from the fact that all animals studied – with no known exceptions – do in fact sleep. What's more, they typically sleep as much as they can get away with, given the constraints of their lifestyle and habitat. As a general rule, predators sleep longer than prey animals (think cats vs. horses). Animals that are strict vegetarians can't afford to sleep as long, as they have to forage longer to acquire their food, which is less energy-dense. But they all do sleep, and they all sleep as much as they reasonably can.

And this is in fact quite rational. One significant constraint is posed by the need to minimize energy expenditure. An animal in locomotion expends dramatically more energy per unit weight and time than a sedentary one. This is an odd thought from the perspective of the modern world, with its ample food supplies and readily available refrigerators, but there are very few overweight animals in the wild. In other words, every calorie that is potentially obtained by foraging or predation has to be gained by expending and investing calories in locomotion. This is a precarious balance indeed. All of life depends on it.

Now, if you were to design an organism that has to win at life (a.k.a. the struggle for survival and reproduction), would you arrange things such that the “engine” runs constantly at the same level, or would you rather build it so that it periodically overclocks itself to unsustainable levels in order to best the competition temporarily, then recovers by downgrading performance during periods when little competition is expected?

Modulating performance by time of day might be beneficial

The rational and optimal answer to that question depends on outside conditions – can this period of little competition be predicted and expected? As it turns out, it can. Day and night cycles on this planet – dominating the external environment by setting heat and light levels – have been fairly stable for hundreds of millions of years. Consequently, pretty much all animal life on the planet has adapted to this periodicity. It is only in the past 200 years that man has tried in earnest to wrest control over this state of affairs.

And we did. As warm-blooded animals, we no longer strictly depend on heat from the sun to survive, and air conditioning and heating systems are quite nice for keeping a fairly stable, “temperate” environment. The same goes for light: I can have my lights on 24/7 if I can afford the electricity and so choose. Lack of calories is not a problem whatsoever, either. On the contrary.

So what is the problem? The problem is that evolution rarely lets a good idea go to waste. Instead, full advantage is taken of it for other purposes (exaptation). The forced downtime that was built into all of our systems by a couple hundred million years of evolution was put to plenty of other good uses. Just because locomotion is prevented doesn't mean that the organism won't take the opportunity to run all kinds of “cron jobs” that prepare it for more efficient future action. To extend the metaphor, the brain is not excluded from running a bunch of these scripts. On the contrary.

If you deprive the body of sleep, you are consequently depriving it of an opportunity to run these repair and prepare tasks. You might be able to get away with it once, or for a while, but over time, the system will start to fall apart. Ongoing and thorough system maintenance is not optional if peak performance is to be sustained. Efficiency will necessarily degrade over time if one keeps cutting corners in this department.

Of course, this is a vicious cycle. No one respects sleep (sleeping is for losers, see above), so sleeping is not an option. Instead, one resorts to stimulants to prop up the system and get tasks done. That makes the need for sleep (think maintenance and restoration) even greater, but the ability to sleep even smaller, as tasks keep getting done less and less efficiently. This is truly a downward spiral. At the end of it all, there is a steady state of highly stimulated, unrested and relatively unexceptional achievement. This is the state to be avoided, but probably the one that a high percentage of people find themselves in at any given time.

There is ample evidence that the cognitive benefits of sleep are legion, alongside a wide range of metabolic and physiological benefits. Here, I would like to mention some extremely well documented cognitive ones explicitly. Briefly, sleep is essential for learning and memory consolidation, creative problem solving and insight, as well as self-control. This is not surprising. Peak mental performance is costly in terms of energy. Firing an action potential requires quite a bit of ATP (actually, most of it goes to the sodium-potassium pump that restores the status quo ante). It has been argued that these rapid energy needs cannot plausibly be met by glucose alone; instead, they must be met by glycolysis, which is also what supplies the energy to the muscles of a sprinter. It is precisely this brain glycogen that is depleted by prolonged wakefulness. Without sleep, the message of this mechanism is clear: no more mental sprints for you. Stimulants can mobilize some reserves for a while, but in the long run, the energy well will necessarily run dry. This is undone by sleep.

Moreover, it seems that amyloid beta accumulates during waking. This is important because levels of amyloid beta are also increased in Alzheimer’s disease. The jury is still out, but I would not be surprised if a causal link between chronic sleep deprivation and dementia were in fact established. In the meantime, it might be wise to play it safe.

Another important hint at the crucial importance of sleep for neural function comes from the fact that your neurons will get their sleep one way or the other, synchronized or not. Sleep is characterized by synchronized activity of large scale neural populations. Recently, it has been shown that individual neurons can “sleep” on their own under conditions of sleep deprivation. Perhaps this correlates with the subjective feeling of sand in the mental gears when there hasn’t been enough sleep. The significance of this is that sleep deprivation is futile. If the need for sleep becomes pressing, some neurons will get their rest after all, but in a rather uncoordinated fashion and maybe while operating heavy machinery. Not safe. There is a reason why sleep renders one immobile.

Finally, some practical considerations. How to get enough sleep?

Here are some pointers:

  • If you must consume caffeine, do so early in the morning. It has a rather long metabolic half-life.
  • Try to enforce “sleep hygiene”: no reading or TV in bed, and no reading upsetting emails before going to bed. Try to associate the bedroom with sleep and establish a routine. If you can’t sleep, get up.
  • If you live in a noisy environment, invest in some premium earplugs. Custom fitted ones are well worth it.
  • If at all possible, try to rise and go to bed roughly at the same time.
  • Due to the reasons outlined above, a lot of circadian rhythms are coordinated and synchronized with light. Of course, we are doing it all wrong if we ignore this. Light control is crucial, and it means several things. Due to the nature of the human phase response curve, bright light can shift circadian rhythms in predictable ways. Short-wavelength (blue) light is a particular offender, as the intrinsically photosensitive ganglion cells in the retina are sensitive in this range of the spectrum. In other words: no blue light after sunset. This is particularly true if you suffer from DSPS (delayed sleep phase syndrome – and the suffering is to be taken literally in this case). There are plenty of free and very effective apps that will strip the blue component from your computer screen (which is bright enough to have a serious effect); I recommend f.lux. It is not overdoing it to replace the light bulbs in your house with ones that lack a blue component. Conversely, you *want* to expose yourself to bright blue light in the morning. The notion that annoying sounds wake us up is ludicrous; the way to physiologically wake up the brain in the morning is to stimulate it with bright blue light. I use a battery of goLites for this purpose. Looking at them peripherally is sufficient – the photoreceptors in question are in the periphery.

    Having breakfast

Warning: If you have any bipolar tendencies whatsoever, please be very careful with bright or blue lights. Even at short exposure durations, these can trigger what is known as “dysphoric mania” – a state closely associated with aggression against yourself or others (and one of the most dangerous states there is). If you try it at all, do not do so without close supervision. Perhaps ironically, those with bipolar tendencies might be among the most tempted to fix their sleep cycle this way, as they are so sensitive to light, and “normal” artificial light at night shifts their sleep cycle backwards. For this population, using appropriate glasses (these can even be worn over existing glasses) is a much more suitable – and safer – option. I repeat: there is *nothing* light about light. Nothing.

To conclude, how do you know whether this effort is worth it? It might take a while to normalize your brain and mind after decades of chronic sleep deprivation. In the meantime, I recommend monitoring your sleep on a daily basis. There are now devices available that are sufficiently reliable to do this in a valid way. Personally, I use the ZEO device.

A typical night, as seen by ZEO.

This approach has two advantages. First, it makes you mindful of the fact that very important work is in fact done at night: your brain produces interesting patterns of activity. You are typically biased to dismiss this because you are not awake and aware of it when it happens; the device visualizes it. Second, the downside of disturbed sleep becomes much more apparent. You can readily see what works and what doesn’t (e.g. in terms of the recommendations above). In other words, you can literally learn how to sleep. You probably never did. And once you have, you can sleep like a winner – and then maybe even go and do great things.

Sleeping like a winner. #beatmyZQ

__________

I recently delivered this content as a talk in the ETS (Essential Tools for Scientists) series at NYU. The bottom line is the importance of respecting one's sleep needs, even – and in particular – if one happens to be a scientist. A summary of the talk and slides can be found here.

*There is very little question that Napoleon suffered from Bipolar disorder (making – by extension – the rest of Europe suffer from it as well). It is true that he slept 5 hours or less during his manic episodes, but he slept up to 16 hours a day during depressive ones.

Posted in Neuroscience, Science | 14 Comments

Charlemagne was a Neuroscientist

The exploits of Charlemagne are fairly well documented and widely known. He was both the King of the Franks and the founding emperor of the Holy Roman Empire (technically, the Carolingian Empire). In this capacity, he is renowned for a wide variety of things usually associated with wisdom, such as diplomatic as well as military triumphs, a cultural renaissance, successful economic and administrative reforms and so on. There is a reason why he is called great. Importantly, he died in 814 A.D., well over a thousand years before anything that we would today recognize as Neuroscience.

Charlemagne (742-814)

Thus, it might come as a surprise that he also had a keen interest in mental phenomena. Some of his observations on this topic have been preserved through the ages; for instance:

To have another language is to possess a second soul.

There are several recent papers on this very issue, for instance this one; lo and behold, it seems to be true. It looks like Charlemagne scooped these authors by about 1200 years. Or did he?

It is not the only noteworthy quote. This one is pure gold:

Right action is better than knowledge; but in order to do what is right, we must know what is right.

I could easily spin a dense yarn about how this expression anticipated the results of literally decades' worth of intensive research on perception, cognition and action. It would be a marvelous feast. I could discuss dorsal vs. ventral streams of processing, perception vs. action, perception for action vs. perception for cognition, and a great many other “hot” topics of contemporary research on sensory and motor systems. The possibilities are almost endless.

In fact, I could write a whole book about this. The title would be obvious. Recent history suggests that editors eat this kind of stuff right up and that it would also sell quite well.

So why don’t I?

Because it would be wrong. Charlemagne was not a Neuroscientist. He was not even a scientist. Not even by a stretch. Not a chance. Not even close.

Asserting this would grossly misrepresent the character of science and what scientists do.

Pretty much everyone has intuitions about the workings of the world, including the workings of the mind. Sometimes, these intuitions even turn out to be largely on target.

Charlemagne, German version

But that is not what science is about. For the most part, science is about turning these hunches into testable hypotheses, testing those hypotheses to the best of our abilities (this part is extremely hard), and then systematizing the resulting observations into a coherent framework of knowledge aimed at uncovering the principles behind the phenomena under study. In other words, we are not trying to describe the shadows per se; we are trying to triangulate and infer the forms from a multitude of (usually) multidimensional shadows, as hard as that might be in practice.

It is not surprising that smart people are curious about a great many things and that their intuition is sometimes correct. In that sense, everyone is a physicist, neuroscientist, psychologist, chemist and so on. In other words, Spartacus was a neuroscientist. But that trivializes science to a point that is entirely ridiculous. Modern science is nothing like that; it is precisely the moving beyond intuition that defines it. A very simple – and early – example is the dramatic difference between our naive intuitions about how objects fall and scientific descriptions of how they actually fall.

It works the other way around, too. Some sciences – psychology in particular – have a big PR problem in the sense that most of their findings seem perfectly obvious (or in line with our intuitions) after the fact. This is an illusion: people are in fact unable to predict the results of social psychology experiments better than chance when forced to do so from common sense, before knowing the outcome. It is important to distinguish this from bad science (or non-science), where the conclusions are not derived from empirical data but follow from the premises a priori (analytic, not synthetic, judgments). In any case, the fact that some scientific results are consistent with our intuitive a priori notions misses the point completely. That is not what science is about at all.

It may be forgivable that a non-scientist does not understand these subtle yet fundamental things and makes embarrassing claims, but such a person cannot then claim to be a scientific expert at the same time. One can't have it both ways. Really.

Doing so would just be wrong.

Knowingly doing something wrong would be disingenuous.

So I don’t and I won’t.

Posted in Neuroscience, Pet peeve, Science, Social commentary | 2 Comments

One catchy tune, three different sets of values

I’m no musicologist, but I found this striking enough to address. Most people associate the tune in question with “God Save the Queen”, the national anthem of the United Kingdom. What is much less known is that two other countries also adopted this tune as their national anthem, but with different lyrics. The reason this is less well known is that these countries have since changed their national anthems – one of them several times since. The other countries in question are Germany and the United States, and that is where the plot thickens. All three anthems were in use at the same time, mostly in the second half of the 19th and the beginning of the 20th century. I used Wordle to visualize the concepts mentioned in the respective anthems, as well as their frequency. Here is the first, and most familiar, one, “God Save the Queen”:

WYSIWYG

Pretty straightforward, as they come. What you see is what you get, more or less. The German version provides an interesting contrast. We are of course talking about “Heil Dir im Siegerkranz” (Hail to Thee in Victor’s Crown). Technically a “Kaiserhymne” (imperial anthem), but it was the de facto national anthem of the German Empire from 1871 to 1918. Obviously, we’ll have to deal with a translation here, for the sake of comparison. As in the other cases, the lyrics are taken from the Wikipedia page. The translation itself is quite faithful, as far as translations go.

That's a lot of words

To be fair, this is the only one of the hymns that explicitly mentions science. What about the American one? That would be “My Country, ’Tis of Thee”, the de facto national anthem of the US before 1931.

Even the rapture made it in

That about sums it up. I do think it cannot be denied that culture is a tremendously powerful force. It matters a great deal which values are emphasized and which ones are de-emphasized. We had better get this right.
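As a methodological aside: Wordle handles the layout and styling, but the underlying tally is plain word frequency. Here is a minimal sketch in Python; the file name is an invented placeholder for a plain-text copy of the lyrics.

```python
# A minimal sketch of the word-frequency tally underlying such clouds;
# Wordle adds the layout and styling on top. The file name below is an
# invented placeholder for a plain-text copy of the lyrics.
import re
from collections import Counter

with open("god_save_the_queen.txt", encoding="utf-8") as f:
    words = re.findall(r"[a-z']+", f.read().lower())

counts = Counter(words)
print(counts.most_common(10))  # the ten most frequent words/concepts
```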

P.S.: Adapting melodies is not as uncommon as it seems. For instance, Rhodesia used a version of the Ode to joy.

Posted in Social commentary | 3 Comments

The treasure hunters

Just outside of the big city is a giant and seemingly inexhaustible heap of trash. It might be hard to believe, but there are actual people living on this pile. Every day, year after year, they make their living sifting through the trash, hoping to find the treasures that are buried in the pile.

Objectively, this is nasty work, as it is dirty and tiresome, but someone has to do it; in fact, a surprising number of people seem to like doing it. Perhaps some of them are idealistic, at least initially? Puzzling as it may seem, their numbers increase every year. Maybe it is so attractive because it is honest work with lots of tradition?

Because real treasures are rare, all that most of these people ever find is just trash, never any treasure. Some of them are disheartened by this and eventually – often after many, many years – give up in frustration. Others persist. These are the people who manage to convince themselves that what they found is in fact treasure, while everyone else knows that it is really just trash.

How it is done

Sometimes – very rarely – someone is very lucky and discovers a genuine piece of treasure buried in all that trash. Once this happens, there is great rejoicing throughout the land, vindicating the whole endeavor. Others will now start digging in the same spot in the hope that the treasures are not distributed randomly among the trash.

This is not surprising. Those who are the most successful – either at finding genuine treasures or at convincing others that the trash they found is in fact really valuable – are rewarded richly.

Despite its allure, these successful ones eventually stop digging through the trash themselves. Instead, they hire other people to dig through the trash for them, contenting themselves with overseeing in rather general terms how the searching, digging and sifting should occur.

One cannot begrudge them this. Digging through the trash is – after all – a nasty business, and there are always plenty of people who want to get hands-on experience in the hope of eventually finding some treasures of their own.

And that is ok, as long as everybody knows that this is how it is done these days, so that there are no disappointments or regrets.

Is this the best way of doing it? Is this really doing it justice? Is this how it should be done? That is a story for another day.

Posted in Science | 2 Comments

Evidence based medicine – IVF success rates and age

In recent years, “evidence based medicine” has become a popular buzzword in the medical community. More than anything, this just highlights the fact that most of medicine is precisely not sufficiently based on evidence, in sharp contrast to – for instance – science or engineering. In short, medicine is – in large parts – still more of an art than a science. Outcome evaluations are rare. This state of affairs in the medical enterprise is owed to many factors, including a strong belief in traditions as well as a culture that places a premium on authority. There are also many good reasons for it. For the most part, it is owed to the idiographic nature of the field itself. Medicine inherently deals with non-reducible complexity in the extreme, combined with enormous biochemical diversity, in an applied and high-stakes environment. Usually, there is a dearth of relevant data for any given case (typically, there are no baselines and few if any repeated measures after interventions, to say nothing of time-series or controls). This is not an indictment of medicine as a whole. One just needs to know what one is dealing with so that one knows what one can and cannot expect from it. Romantic notions are misplaced and singularly unhelpful.

Nevertheless, there is one field of medicine – reproductive medicine – where outcomes are discrete and countable. Thus, evidence is compiled on a nationwide level and disseminated publicly. The results are sobering. In a nutshell, nature rules supreme. This is a problem, as nature follows its very own relentless evolutionary logic, which is as merciless as it is uncaring about individual desires, wishes or merits. The integrity of the Y chromosome is a case in point. Most genetic damage – occurring for whatever reason – can be easily repaired by recombination, as there is an available replacement on the other chromosome, effectively a clean backup copy. This is not the case for the Y chromosome. How does it repair itself? By natural selection. If important portions of the Y chromosome are deleted or corrupted, hard infertility ensues. This is tragic for a given individual with a desire for children, but it maintains the integrity of the Y chromosome in the population down the generations, in the face of ongoing genetic drift. So much for evolutionary logic and male fertility.

As far as women are concerned, it is common knowledge that the fertility of females is closely tied to chronological age and that it dips quite precipitously in the mid to late 30s. This surprises nobody. Likely, this is for the most part owed to the steady degradation of mitochondrial function, specifically the corruption of mitochondrial DNA. What is much less appreciated is that the outcomes of medical fertility treatments follow a similar trajectory. Put differently, medicine fails when it is needed most. This is not an unfair assessment of the profession. There is only so much medicine can do. This state of affairs is obscured by the many news stories celebrating women in their 50s, 60s or even 70s who gave birth due to the miracles of modern medicine. It is often overlooked that these are not pregnancies from their own eggs, and that there is a reason these stories are news…

While these anecdotal human interest stories are certainly inspiring, they do not tell the whole story. A closer look at the publicly available data on the success rates of In-Vitro-Fertilization (IVF) is rather sobering in contrast.

Blue: Pregnancy. Green: Live birth.

The granularity of the available data on IVF success rates is rather coarse, particularly in regard to age groups, but the trends are clear enough. The last published data, from 2009, show that a woman under 35 undergoing IVF has a chance of getting pregnant per cycle of close to 50%. Also, there is a 41.4% chance per cycle that the cycle eventually results in a live birth. Reasonable enough. In other words, within 5 cycles or less, 93% of these women will achieve a pregnancy that eventually results in the point of the exercise – a live birth. This is important, as each IVF cycle currently costs around $15,000 on average. Fast-forward less than a decade. A woman who is past the age of 42 when undergoing IVF has a chance of less than 10% of getting pregnant per cycle. The odds of having a cycle that results in a live birth are even lower – a shocking 4.2%. Doing the same math as above, it now takes well over 60 cycles to ensure that 93% of women at that age can give birth from their own eggs via IVF.
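For those who want to check the arithmetic, here is a minimal sketch of the per-cycle probability math, assuming independent cycles with a constant success rate (an optimistic assumption, as discussed below); the rates and the $15,000 average cost are the figures quoted above.

```python
# A minimal sketch of the per-cycle math quoted above, assuming
# independent cycles with a constant success rate (optimistic, as
# discussed below) and the $15,000 average cost per cycle.
import math

def cycles_needed(p_per_cycle: float, target: float = 0.93) -> int:
    """Smallest n such that 1 - (1 - p)**n >= target."""
    return math.ceil(math.log(1 - target) / math.log(1 - p_per_cycle))

for label, p in [("under 35", 0.414), ("over 42", 0.042)]:
    n = cycles_needed(p)
    print(f"{label}: {n} cycles, ~${n * 15_000:,} at $15,000 per cycle")
# under 35: 5 cycles, ~$75,000 at $15,000 per cycle
# over 42: 62 cycles, ~$930,000 at $15,000 per cycle
```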

Probability of live birth via IVF per cycle for simulated population of females over 42

Using the same numbers as above, this baby now costs close to a million dollars. Of course, these numbers are way too optimistic, as a steady fertility in the “above 42” group is assumed. In reality, fertility in that age group declines sharply and steadily; women in that age group do not have these 5 years. Also, this probability calculation neglects the time it takes to recover from IVF cycles that resulted in pregnancy but not a live birth, which delays the entire endeavor further. In other words, in that age range, live births from one’s own eggs can no longer be guaranteed, no matter the time or money invested – something that could pretty much be guaranteed (for more than 90% of women) less than a decade earlier. On a side note, one can somewhat improve these odds by picking a center or hospital that is consistently beating the odds. While comparing hospitals in terms of their IVF success rates is discouraged (as their patient populations might differ), I couldn’t fail to notice some hospitals which did hundreds of cycles, year after year, in women over the age of 42 without any live births (some pregnancies) from IVF. I cannot imagine the pain and suffering involved. To say nothing of the heartbreak.

I am also not entirely sure if technological progress offers hope to those who need it most – females in the age group over 42 years – in the near term. As the figure below shows, the total number of cycles across all age groups has increased by over 25% in the past 6 years, while the live birth rate per cycle has essentially stayed the same. Even the first successful IVF in 1977/1978 solved an infertility problem that was not related to age.

Left: Total number of IVF cycles performed, per year. Right: Success rate per cycle (live births) over the years. Gray lines: Reliability range.

Given these data on IVF success rates, and considering the mid-40s age range, it might make the most sense to use donor eggs, donor sperm and a surrogate. But at that point, you probably should just adopt.

There is a bigger point here. Regardless of political correctness, society should learn to deal with biological realities. Barring some unforeseen scientific breakthrough, female reproductive success rates will remain closely tied to chronological age. While males are not immune from age related declines in fertility, these are generally thought to be more subtle and seem to involve a more elastic age range.

This matters. In many countries, childbearing is delayed later and later. Given the discussion above, it should come as no surprise that in some societies, fertility has now dipped far below replacement levels, particularly in Germany, Japan and Italy. Some societies have managed to achieve balance – which corresponds to a fertility rate of 2.1 children per woman, on average – like France, the United Kingdom and especially the United States. In many modern westernized countries like Germany et al., the rate is closer to 1.4 children per woman. This doesn’t sound like much, but it adds up. The last time Germany had more births than deaths was in 1971. Since then, there have been over three million more deaths than births, and the effect compounds over time. If it is true that demographics is destiny, this doesn’t bode well for the socio-economic future of these nations. This is particularly true as all alarmism about impending doom from population explosions has failed – it was wrong for Malthus and it is wrong now, including for the ironically named Ehrlich. While alarmism surely sells books, predictions have to be taken to task; if they fail continuously, perhaps we should reconsider their validity. Most consensus estimates now expect world population to level off at around 10 billion by 2100. That number could easily be accommodated in the land mass of the United States alone, at population densities of less than what Bangladesh – or the Maldives, if you want a more posh example – has today.
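To make the compounding explicit, here is a toy sketch under deliberately crude assumptions: a constant rate of 1.4 versus replacement at 2.1, non-overlapping generations of roughly 30 years, and no migration.

```python
# Toy sketch: with a constant fertility rate, each (non-overlapping)
# generation is rate/replacement the size of the last. The ~30-year
# generations are an assumption; migration is ignored entirely.
RATE = 1.4          # children per woman (sub-replacement)
REPLACEMENT = 2.1   # children per woman needed for a stable population
GENERATION_YEARS = 30

population = 100.0  # index today's population at 100
for generation in range(1, 5):
    population *= RATE / REPLACEMENT
    print(f"after ~{generation * GENERATION_YEARS} years: "
          f"{population:.0f}% of today")
# after ~30 years: 67%; ~60 years: 44%; ~90 years: 30%; ~120 years: 20%
```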

However, the policy implications for societies are deeper than that. Reductions in fertility are not randomly or evenly distributed across the population. On the contrary, they seem to mostly affect the highly intelligent. At this point, correlations between IQ and fertility as strong as -.73 have been reported. It is easy to see how, as the most intelligent are likely to delay childbearing in favor of education and careers. As we saw, delays imperil fertility, sometimes irretrievably. While there is nothing intrinsically wrong with this, the implications will shape societies, now and in the future.

Throughout the 20th century, the coming Idiocracy was staved off by the Flynn effect, the continuous rise in IQ scores over time. This now seems to be petering out, as most of the gains seem to have come from an adaptation to test-taking formats by a culture that is increasingly based on symbols and formal logic, as well as from nutrition.

In the end, wise societies will have to devise cultural solutions in the face of biological realities in order to maintain reproductive success. Culture can be a powerful force. As we saw, putting this burden on medicine alone is too much to ask, at least at this point.

Posted in Science, Strategy, Technology | Leave a comment

In the footsteps of Theodore Roosevelt

Living in New York, one has the great fortune of never being very far from places where giants once walked the earth. One of these giants is Theodore Roosevelt, born and raised in the city.

Today, there is a prevalent feeling that US society is at a crossroads. Therefore, it is perhaps fitting to revisit the sites of a man who undoubtedly was one of the key players that gave birth to the self-identity of the modern US as a nation.

Theodore Roosevelt the man doesn’t need much of an introduction. In recent years, we had the great misfortune that our leaders were – at best – either smart or strong, but never both. Things were different back then; Theodore Roosevelt can be legitimately called both strong and smart. He was basically a scientist and scholar who ended up in politics – given his early publications on ornithology and the War of 1812, it was assumed he would just go for a faculty position in history or natural philosophy. Luckily, he never quite lost this streak – his writings are as eloquent as they are witty. For instance, as late as 1913, he invokes the “Brothers of the cosine” in a discussion of a modern art exhibit.

His achievements are legion, the stuff of lore – he is the only person to ever earn both a Medal of Honor and a Nobel Peace Prize. At San Juan Hill, he personally banished the Spaniard from the New World, after over 400 years of oppressive colonialism (with hindsight from the 21st century, this doesn’t seem like much of a big deal, but the small US army was in such disarray in 1898 that the smart money of the time was overwhelmingly expecting a Spanish victory). As Assistant Secretary of the Navy, he established the conditions that ensured Dewey’s victory in the Battle of Manila Bay (perhaps the most lopsided naval engagement of all time; the US suffered a single fatality, due to heatstroke, while all Spanish units were sunk). He is responsible for the creation of the Panama Canal as well as the Republic of Panama itself. He personally discovered the River Roosevelt (formerly the River of Doubt). He is responsible for establishing many of the national parks that we enjoy to this day and founded the Bull Moose Party. Along with many, many other exploits. For a comprehensive rendering of his life, I recommend the excellent three-volume biography by Edmund Morris.

While most people are focused on his achievements as president, I don’t think he himself would consider these most significant – he did great things before and after the presidency. More importantly, he is someone with an incredible personal philosophy who became president pretty much accidentally. Also, most people focus too much attention on him as a person, some making the mistake of idolatry, something that Theodore Roosevelt himself would abhor. This is understandable. There is much to admire in Theodore Roosevelt. Besides personal bravery, he championed women’s suffrage and was the first president to invite a black man for dinner at the White House – that might not seem like much today, but at the time, newspapers chided him for “committing the most damnable outrage ever perpetrated by any citizen of the United States” and he lost the South forever. But he was not without fault, and even made grave and unforced mistakes, for instance designating the disastrous Taft as his successor, or his actions in the Brownsville affair. Others make the mistake of attacking a caricature. In fact, most people terribly misunderstand the historical Theodore Roosevelt. For instance, contrary to popular belief, he was not a warmonger. The point of the big stick is precisely that having it makes it much less likely that one will have to use it. The history of the last 100 years amply confirms the wisdom of this idea. Regardless, fascinating as he might have been, neither Theodore Roosevelt himself nor his presidency is of primary interest per se.

His personal philosophy however is something to revisit in these times of need. While a comprehensive treatment of this philosophy will have to wait for another day, we do have space to address one of its cornerstones, one that stands very much in the intellectual tradition of Alexander Hamilton.

Front and center is the notion that the point of life is not to be as comfortable as possible; rather, the point is to welcome challenges, meeting and overcoming them. Thus, they can be used for progressive and positive self-transformation. In short, the goal is to live the strenuous life, not one that shirks the effort necessary to do one’s duty. There are no excuses. Roosevelt himself was troubled by asthma and bad eyesight (in an era where that was definitely not ok, at least out west). Instead of whining and complaining, the point is to focus one’s attention on things that one can control, changing them for the better by effort and action. This effort need not be selfish, exerted only for personal good. On the contrary, it is important that these efforts are dedicated to the spirit of public service, propelling society at large forward.

His tremendous achievements are merely a byproduct of this practical philosophy. As you can see, this attitude is in critically short supply in the modern US. It is obvious that we as a nation and society need a restoration of this philosophy or face the consequences.

For inspiration, we will now revisit some of the sites of Theodore Roosevelt’s life as I found them on April 16th and 17th of 2011. Some of these have been kept in their original state, others have been restored and can be visited today, and others have changed considerably in the past century. This New York Times article from 1905 gives an idea of just how much.

1. Theodore Roosevelt’s birthplace at 28 East 20th Street.

Theodore Roosevelt's birthplace at 28e 20th Street.

Theodore Roosevelt outside his birthplace. James Foote as TR

TR was born here on October 27, 1858 and lived in this house until 1873, when he was 15 years old.  Today, this is a National Historic Site, managed by the National Park Service. The inside is a museum, which can be toured. There are tours guided by actual park rangers that start once an hour. Very exciting.

Still giving a rousing speech. Inside the birthplace. James Foote as TR.

2. The house his father built at 6 West 57th Street.

6 West 57th Street today

Given what transpired here, some said that there was a curse on this house. It was in this house that his father died at a young age while TR was in college. It was also in this house that both his wife Alice and his mother passed away on the same day, February 14th, 1884. Heavily pregnant, Alice had moved in with his mother while he was away in Albany as a New York State assemblyman. The Valentine’s Day deaths might not have been altogether uncorrelated, as a terribly dense fog that lasted for over a week had descended upon the city around that time, compromising the health and well-being of everyone present. As you can see, TR was no stranger to tragedy. Today, this building seems to house a Club Monaco on the bottom floor, whereas the top floors appear somewhat residential.

3. The small house he inhabited with his young wife at 55 West 45th Street.

55 West 45th Street today. Definitely commercial.

After TR got married, he moved here with his young wife, albeit only very briefly, as his wife died shortly after childbirth, as noted above. TR mostly spent the weekends here, as he was preoccupied as minority leader in the Assembly in Albany during the week. In the meantime, Alice tried to start her own household here. Today, this building houses a pizzeria, a barber, a software development company, a realtor and some others. Still seems quaint, but probably not the same house.

4. The house he maintained in New York at 422 Madison Avenue.

422 Madison Avenue. Looks much changed today.

Most of the time while TR maintained this residence – from 1884 to 1886 – he was actually away at his ranch in what is now North Dakota, rebuilding himself after the double tragedy of wife and mother. For a while, this was the residence of his sister Bamie, who had moved in to take care of his baby Alice Lee while he was away in the Dakotas. These stays were quite extensive and lasted months on end. While managing his ranch out west, he squared off with local bullies and arrested several boat thieves while reading Tolstoy. Today, it seems to house a spa, a laser hair removal and skin care place, and a pizzeria.

5. The house he maintained at 689 Madison Avenue.

689 Madison Avenue today.

Technically, this was the house of his sister Bamie, but TR lived in this house as civil service and police commissioner of New York City, along with his second wife Edith and a growing number of children. It would later play a pivotal role in his campaign for Governor of New York, as the Governor was required to have maintained a five-year continuous residency in the state prior to election. TR, however, had lived in Washington during that time, being busy as Assistant Secretary of the Navy. Utilizing some legalistic footwork, Roosevelt was able to establish that this place had been his legal residence in the years in question and was able to secure the governorship.

6. Sagamore Hill, his house in Oyster Bay

Sagamore Hill, the summer white house

Built for TR himself, this house has aptly been named the “Summer White House” and is also the place where TR spent most of his post-presidential years (when not away on safari). This is also a National Historic Site, administered by the National Park Service. There are guided tours every hour on the hour, but photography inside the building is no longer permitted.

The grounds of the site also house a museum that does allow the taking of pictures.

The charge up Sagamore Hill, led by Park Rangers

7. Theodore Roosevelt’s final resting place

The final resting place of Theodore Roosevelt and his wife

Appropriately, the gravesite of Theodore Roosevelt is close to the Theodore Roosevelt Nature Sanctuary, which is also the first National Audubon Sanctuary. The gravesite is part of Youngs Memorial Cemetery. As you can see, he lies here with his second wife and childhood friend Edith.

Posted in Life, Misc, Philosophy, Social commentary | 4 Comments

An open letter to Zipcar: Your ad campaign lacks balls

Dear Zipcar, as you know, I have been a happy member for years. The concept simply makes a lot of sense, so please make every effort to get the word out. I know you believe your marketing to be edgy. You do get points for trying, but in these times, this is not enough. Please understand that we are at war. This has been done before. Here is how. Keep in mind who actually won that conflict. There is something to be said for boldness. Here is what a version for our times – mutatis mutandis – might look like.

Are you riding with Osama?

It happens to be true, too. The link is much clearer than the one between Hitler and driving ever was. It is not too much of a stretch to suggest that if you ride alone, you are in fact effectively riding with Osama. Recent events hint at even more drastic connections. Reducing the consumption of fossil fuels reduces the need to rely on nuclear energy. Anyway. The point is that if you want to keep winning, please grow a pair. You’re welcome. Sincerely, Pascal

Update 05/02/2011: As not everyone lacks boldness and courage, I guess you are now just riding alone when you ride alone. Never mind.

Update 05/02/2012: Apparently, I was not the only one who had this idea. I am *shocked*.

Posted in Misc | Leave a comment

What you should do

It actually doesn’t happen all that infrequently that students seek me out for advice on this very question: what should they do with their life?

Do things that you both enjoy and are good at.

I’m usually happy to oblige, and there is now sufficient data (including long-term feedback) to allow me to draw some more general conclusions about the emergent commonality structure.

The result is quite intriguing – a combination of an objective carrier structure, which seems to apply pretty much universally, and subjective instantiations of this structure that allow for considerable idiosyncrasies.

Of course it is ultimately up to you what you do with your life, but for the purpose of this post, we’ll pretend that you asked me for my advice on this question, so here we go.

The solution is actually quite simple. Do something that you enjoy, but something that you are also good at at the same time (ideally something that is valued by society).

That’s it.

Properly implemented, this places enough constraints on the space of possibilities to allow for a happy life.

Of course, it is much more complicated than that in real life. Not that I can claim any personal experience with this, but there are usually implicit or explicit, more or less subtle pressures and expectations from parents, grandparents and the ever-present aggressive and cocky friends. Just because they feel quite strongly about what you should be doing doesn’t mean that you should actually go ahead and do that. It simply means that you have to reexamine the relationship that you have with these people. If you don’t have this problem: consider yourself blessed. It is important to keep in mind that it is ultimately your life to live. You – and you alone – are responsible. In the end, no one else cares as much as you (should).

Naturally, there are also material needs and pressures. But don’t sweat it. Above a certain – surprisingly low – income threshold, the correlation between subjective well-being and income is zero, if not negative. This makes good sense. How many cars can you drive? Also, with great wealth comes great responsibility (and usually a dearth of time). Last but not least, the variance in happiness that is accounted for by income is minuscule. So unless you are destitute or on the path to being destitute: don’t sweat it. Ultimately, money is simply a token that allows you to buy the time of other people. There is no final reward for accumulating a frivolous amount of these tokens. Doing so appears to be a serious misallocation of quite finite resources, lifetime chiefly among them.

Finally – and this is perhaps the biggest obstacle – we are facing the paradox of choice. In olden times, you by and large simply ended up doing what your gender-matching parent did. That’s it. Today, with the – at least theoretical – possibility of doing absolutely anything, you have to figure this one out yourself. We know this to be the source of considerable anguish.

That’s where the need for learning comes in. At the beginning of life, both what you are good at and what you enjoy are unknown, perhaps known only insofar as you are similar to other people (whose preferences and skills are known). Frankly, you don’t know what you will enjoy (and what not), and maybe even less what you are good at, to say nothing of having the actual skills. We learn that by trying many different things and getting feedback about what we enjoy (from our reward system) and what we are good at (from others). This approach virtually guarantees that most of the things we try in this exploration stage will fail. Thus, one had better be accepting of failure. Failure needs to be ok, as it is necessary and healthy during this phase. It establishes the boundaries between the sets – information that is very hard to come by in any other way.

Of course, these parameters are not independent. You tend to get good at that which you practice. And you tend to practice that which you enjoy. In addition, the concepts are linked due to the need for effort. Without enjoying what you are doing, it is unlikely that you will have the energy to do it every day, all day, for a prolonged period of time. We all have limits and can only force ourselves to do so much. Thus, over time, there will be a strong convergence between what you are good at and what you enjoy. That is a very good thing, but it comes later.

Illustrating the need for effort. Amazingly, in this image sign and signified are truly identical on a meta-level. Note that this is NOT SHOPPED.

Thus, my advice for the truly young: have the boldness to use college to explore as broadly as you possibly can. This is not easy. In addition to audacity, one needs the awareness that this is the state of affairs, as well as the humility to accept it. College is about growing – and shaping – the set of things that you do and do not enjoy. Be not concerned about the go-getters. While their behavior can induce all kinds of fear and loathing, they are usually just on a fast track to a miserable job (aptly called the “race to nowhere”). In the end, they will get what they deserve, either stuck in a job that they do not enjoy or quickly crashing and burning in Icarus-like fashion. These things work themselves out. They always do.

The situation at the beginning of college. You don't know what you will enjoy, and not only are you not good at anything, you also don't know what you are good at.

Graduate school is about getting good at those things that you enjoy. That’s why, in reality, the things you are good at are usually a subset of the things you enjoy.

And that’s it. The things you are good at and that you enjoy at the end of graduate school – that’s what you make your career. Quite simple, huh?
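In set terms, the whole framework fits in a few lines. A toy sketch (all example entries are invented, purely for illustration):

```python
# Toy sketch of the framework: a career lives in the intersection of
# what you enjoy, what you are good at, and what society values.
# All example entries are invented.
enjoy = {"writing", "teaching", "modeling data", "music"}
good_at = {"teaching", "modeling data", "cooking"}
valued = {"teaching", "modeling data", "software"}

career_candidates = enjoy & good_at & valued
print(career_candidates)  # -> {'teaching', 'modeling data'} (set order varies)
```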

Now, it is up to you to do the actual exploring (step 1) and improving (step 2). No one can do that work for you. There is considerable diversity in both interests and abilities, and that is quite alright, as there is also a matching diversity of tasks. But no one knows a priori – including yourself – where you are located in the ability/interest space.

Of course, I can be accused of simplifying things, and I am. This analysis presumes a rational actor that is interested in optimizing their long term happiness and effectiveness. If you tend to neurotic self-sabotage, you will have to address these psychological issues as well. Moreover, it is easy to get too comfortable in a local extremum, effectively getting trapped in a particular, sub-optimal corner of this space. This happens to quite a few people and that is why it is important to continuously push yourself to optimize and also to keep exploring. There are a bunch of qualifiers like this, but in a nutshell, this is it.

A final word of caution: unfortunately, you do have to commit to something if you want to do truly good work. Unless you cheat, it takes about 10,000 hours (roughly 10 years) of deliberate practice to achieve a somewhat competitive level of skill.
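Where the “roughly 10 years” comes from, as a quick sanity check; the 20 hours per week and 50 practice weeks per year are my assumptions, not a citation.

```python
# Quick sanity check on the 10,000-hour figure. The 20 hours per week
# and 50 practice weeks per year are assumptions for illustration.
TOTAL_HOURS = 10_000
HOURS_PER_WEEK = 20
WEEKS_PER_YEAR = 50

years = TOTAL_HOURS / (HOURS_PER_WEEK * WEEKS_PER_YEAR)
print(f"~{years:.0f} years of deliberate practice")  # -> ~10 years
```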

This is what the ability/interest space of a mature individual should look like. Clearly delineated boundaries, but deciding to do only a subset of the enjoyable things. Those that one is good at/are valued.

Life is serious business and easy to screw up. Many people do just that. I think that is a terrible waste and completely needless, if understandable given the inherent complexities of this issue.

The good news is that if you follow this advice, you can do something that you are good at and that you also enjoy while being able to maintain your moral integrity because you don’t have to cheat. Everything else will fall into place. What more could you possibly want?

Posted in Life, Optimization, Psychology | 4 Comments

Coming to the US of A – 10 years on

I find it hard to believe, but ten years ago today, I arrived in the US for the very first time. This – personally quite meaningful and memorable – anniversary is worth reflecting on.

And I do remember the day quite well. February 19th, 2001 was a Monday, but a holiday – Presidents’ Day, to be precise. That’s when I made landfall at Boston’s Logan Airport. The reason for my visit was a research assistantship with Steve Kosslyn at Harvard University, which I started the very next day. I remember that I was classified as a “visiting scholar” on my J-1 visa.

The contrast to today cannot be exaggerated, both in terms of personal as well as world affairs. I might be justly accused of nostalgia, but it seemed to be a more innocent time.

The new millennium had just begun (contrary to popular belief, it started on 01/01/2001, not 01/01/2000). Notably, the twin towers in New York were still standing, peacefully. George Bush II had just taken office. Different kinds of issues commanded public attention. The Y2K scare was still fresh in memory, AOL had just merged with Time Warner, but the dot-com bubble was already deflating. Ironically, the internet as we know it today had not yet taken shape. Most people were using Yahoo instead of Google, while Wikipedia was in its very early infancy. Facebook and YouTube had not even been conceived yet. To the general public, blogging appeared as exotic as using cell phones or shopping online. “The Matrix” was an – excellent – standalone movie. DVDs were cutting edge and the “Mir” was still in orbit, albeit barely. It is hard to remember this, but Apple seemed to be in terminal decline (pre iPod, OS X, iPhone, etc.). Dialup was still standard. While it might be a personal failing, I had yet to make a single PowerPoint presentation. Up to that point, I had used actual slides and made handouts for my talks. I also went to the actual, physical library to find articles.

The World Trade Center in February 2001

The Germany I left had just introduced reality TV (anyone remember “Big Brother”?), had not yet introduced Euro cash, was still using the same spelling it had always used (the new spelling had been introduced, but de facto, no one was paying any attention – yet), and “Hartz” still evoked the Harz mountain range, not employment status.

From today’s perspective, all of this seems quite quaint, but in short: those were very different times.

This is also true on a personal level. Apart from vacations, I had spent my entire life in Germany, first in a small village, then in Berlin. The previous year, I had completed my undergraduate education at the Free University of Berlin, suffered through a bad breakup, become a German National Merit Scholar and gotten involved with a student initiative to reform higher learning. So while I couldn’t complain about a lack of excitement, the way ahead was wide open, yet entirely vague. The feeling is perhaps captured best by Rose in James Cameron’s Titanic (1997), who utters “…with nothing out ahead of us but ocean” as the ship clears the coast of Ireland.

This highlights the central difference between then and now: I was young, really young. But I was entirely unaware of it.

Thus, I was facing the central dilemma of real youth: On the one hand, possibilities are as infinite as they are open. Yet, decisions about which of these opportunities to pursue (and by implication, which of them to forgo) need to be made. However, the necessary knowledge to make these decisions in a responsible way is simply missing, more or less completely. What’s worse, even the very awareness that this knowledge is absent or deficient is itself sorely lacking. Of course, this is only apparent with the hindsight of many years or even decades. In other words, this is a perfect recipe for making copious errors of omission *and* commission, and thus a perfect recipe for infinite regret.

What might be helpful in such a situation is the infusion of valid outside knowledge. But this is hard to come by. First of all, the person who needs it most is unlikely to seek it out, for the reasons pointed out above. Second, a person who possesses such knowledge – like a seasoned mentor who navigated this minefield himself at some point – is rare. In the worst case, his advice biases the decision making in ways that are not helpful. There are no guarantees that the same rules that the mentor learned during his life still apply. The world might have changed, and as we saw above, that is almost certainly the case. There are also no guarantees that he understands the situation fully or that he doesn’t have an agenda of his own.

Certainly, most of us do incorporate outside information, yet of a quite dangerous kind. I am of course talking about entertainment, be it in the form of books, movies or TV. Like it or not, our own (extremely limited, in youth) life experience is complemented by copious amounts of what might best be called “fantasy”. These unrealistic models of reality – scripted for entertainment value and dramatic effect, derived from the imagination of an author – have a measurable impact on our conception of reality. They do bias our prior unfavorably. Instead of citing the literature, I will just give a brief illustration.

I remember 02/19/2001 to be a bitterly cold day. Those who live in the New England area won’t be surprised by this. Neither am I, now. But I was – then. Of course, I had read guides about the climate in the area. But on a quite visceral level, I sincerely expected it to be warm and sunny. I also expected the streets to be more or less lined with California girls. Why? Because that’s what I had seen on TV all my life. On the other hand, I expected everyday life to be quite similar to what I was used to from Germany. Yet, the cultural differences were shocking and plentiful. I won’t elaborate on them here, but they could fill a book or two.

In short, I was extremely naive, perhaps even clueless. At this point, I would like to express my sincere gratitude to those who put up with me those first few years while I was learning the lay of the land.

Back to the central dilemma, there seems to be no way out of it or to avoid it entirely. Worse, there are no do-overs in our shared reality. Some end up at the right place at the right time and are successful. Others are not. Should this be called luck? Does it qualify any achievement or failure?

The intervening decade was extremely eventful, to be sure. After briefly returning to Germany, I ended up staying in the US, getting my PhD, marrying, moving to New York, publishing a book and pursuing an academic career, among a plethora of other things. There were many, many alternatives to each of the particulars that were eventually instantiated in reality. On that day, I did not (could not?) anticipate at least 99.9% of the events that I was personally involved with in the ensuing decade.

Not even time will tell if my particular decision making was wise or not, as we don’t know the counter-factual, the outcome of all the roads not taken.

Thus, the most sensible philosophy to adopt from all of this is perhaps this: deliberately keep gaining knowledge about action outcomes from all the experiences that are made; limit the impact of outside knowledge in a careful and explicit way; try to retain the flexibility of youth as much as possible; and avoid beating oneself up about failings that are inherently associated with a particular situation, not a person.

Also, it might be prudent to adopt a modest, yet aware stance. I might be in the same situation again, with the benefit of another 10 years’ hindsight…

Posted in In eigener Sache, Life | Leave a comment

Gas warfare: Could it be Inulin?

Inulin has the power to turn most people into producers of copious amounts of natural gas

The fact that fiber aids in digestive health and heart health and regulates blood sugar levels, to say nothing of its potential benefits for weight control, is becoming increasingly well known among the general public. In addition, these benefits are extremely well supported by scientific research, at least by the standards of nutrition science. There are many reasons for this, none of which I will explore here.

While Americans are finally increasing their much needed and long deficient fiber intake, there is a potential downside, and it helps to be well aware of it. That is what this post is about. Kind of like a public service announcement. If you are playing with fiber without knowing what you are doing, you risk all kinds of problems you didn’t bargain for, from nutrient deficiencies to social stigma.

The reason for this is that not all “fiber” is created equal. In addition to the quantity of dietary fiber intake, its quality matters a great deal. I’m not only talking about the crucial difference between soluble and insoluble fiber, but also about the specific kind of fiber. As the thrust of this post is practical in nature, I don’t want it to devolve into a lecture on metabolic biochemistry, so we will stay clear of a discussion of saccharide polymers and the like.

The bottom line is that things are – as usual – more complicated than they seem.

Here is the take-home message: just because you cannot digest a particular kind of fiber does not necessarily mean that the bacteria in your gut can’t.

On the contrary.

There is no point in singling out any one “bad” fiber, but one of them has recently become much more common than it used to be. Since the notion that fibers are good for you is becoming common sense, more and more foods tout their high fiber content. Predictably, the food industry is meeting this demand by simply adding fiber to processed foods. These days, this typically takes the form of Inulin, which is most commonly derived from chicory root extract. The worst offenders – at this point – are some popular “diet” bars, such as those by “Fiber One”, “Kashi” and “Kellogg’s Fiber Plus”. As usual, it helps to read nutrition labels. The ingredients are listed in order of decreasing weight per serving.

There is nothing inherently wrong with chicory root or Inulin. As a prebiotic, it has quite a few documented health benefits. However, most people can tolerate it only in small quantities, even if they tolerate other kinds of fiber quite well. The reason for this is that Inulin provides a veritable feast for certain types of gut flora. Many people don’t have a well balanced gut flora to begin with. In the age of antibiotics, dysbiosis is common.

Feeding the wrong kind of bacteria can cause serious (no really, serious) amounts of gastro-intestinal upset, gas, explosive diarrhea and discomfort. This is obviously not good for nutrient absorption, either.

The actions of gut bacteria can have a profound effect on all bodily systems. What kind of effect that is depends on the kinds of bacteria you have in your gut as well as what you feed them.

In the spirit of keeping this practical, I will spare you links to the scientific literature on the topic and rather provide a connection to the best writeup of vox populi on this subject.

I strongly encourage you to read this. The comments are of particular interest.

As a psychologist, I cannot fail to notice that this topic seems to provoke a great deal of levity. This is quite curious, as I am not aware of any research on the issue.

Of course, few things are intrinsically good or bad. Valence usually depends on what one is trying to achieve. For instance, these bars can be quite effective for weight loss (given the double-whammy of low caloric content and likelihood of diarrhea), relieve constipation, make great gifts for bad neighbors and safely enrich the intestinal biome when introduced gradually and in small quantities. Perhaps they could even be used to strengthen primal social bonds – one could imagine some kind of social gathering where all participants are required to consume large amounts of Inulin-containing food.

In conclusion, there is such a thing as ingesting too much fiber. And – as always – it matters to know what one is doing. This is a responsibility that cannot be outsourced to agencies with a divergent incentive structure.

Update: When I wrote this, I would never have anticipated that it would turn out to be one of my most popular posts. I’ve also learned a lot from the comments. So if you have had a lot of gas lately, particularly after starting to ingest Inulin, please do share your story below. It might be helpful to someone else.

PS: I understand that this may sound pretty outlandish to those outside the trade. I suggest you buy a box of the diet bars with chicory root extract as the primary ingredient (Fiber One will do), eat it, then relate the experience in the comments below.

Note: As pointed out above, there is plenty of evidence that the ingestion of fiber has positive health effects, including a reduction of all-cause mortality. The message of this post is not that fiber is bad for you. Far from it. The message is that the source matters, as also pointed out in the research paper in the previous link. I personally make sure to eat at least 40 grams of fiber per day. However, I would like to emphasize that nutrition science is full of irreducible correlations. In other words, high fiber intake might just be a proxy for a certain kind of dietary style. If you eat something, you are less likely to eat something else. It is hard to control for that, particularly in a questionnaire study. If you *do* want to increase fiber intake, most people have great results with Psyllium husks. 10 grams a day of those take care of most exhaust that is of a swampy/muddy consistency and generally allow one to achieve – given adequate water supply – a consistent 4 on the Bristol stool scale. They do make a quite noticeable difference in achieving smooth outcomes (literally), to be sure. Don’t overdo it either, though. It tends to turn the whole thing into sort of a paste and can probably clog things up quite a bit. Everything in moderation, and caveat lector.

Posted in Nutrition, Optimization | 150 Comments

Science and the Zodiac – a brief introduction to an epistemological placebo

It is somewhat of a cheap shot for a scientist to come down harshly on astrology. As a matter of fact, it is probably the lowest hanging fruit there is.

An easy target

Nevertheless, the undying popularity of astrology in general and horoscopes in particular cannot be denied. Most newspapers in most countries won’t do without a horoscope page; most can do without a science and technology page. This is not a conspiracy. Generally speaking, the media aims to please – it will give people whatever they want to read or hear (in order to survive against its competition, until a stable state is reached). This should tell us something about the relative popularity of these topics in the general population. Of course, most hardcore scientists take this only as further evidence that the mass of men are – at best – stupid sheep and that their opinions can be dismissed out of hand. There might be something to that, but it does not take away from the fact that the continuing popularity of astrology – not to be confused with astronomy – is a fascinating phenomenon in its own right. As a matter of fact, it has fascinated me at least since high school. Incredulous at the fact that one of my friends was looking up her horoscope, I asked her why she believed this stuff in this day and age. The response was as pragmatic as it was telling: “Oh, I only believe it if it is good!”

Recently, it was pointed out that due to the precession of the axis of the Earth, the ancient star signs are no longer in alignment with their current positions. In other words, the dates of the star signs of the Zodiac, familiar to all, have to be adjusted. In addition, a new sign – Ophiuchus – was introduced. Outrage predictably followed. Of course, this whole affair shows just how preposterous the whole notion of zodiac signs and zodiac-based horoscopes really is. Axial precession is a phenomenon that did not just happen yesterday. It has been going on continuously for millions of years, including the couple of thousand years since the introduction of the Zodiac system. The value of these personality characterizations and predictions of the future can adequately be assessed by the fact that the fundamental misalignment was never noticed. Of course, this makes a perfect foil for people who always noticed some slight inaccuracies, but let’s be serious. Obviously, there is nothing to horoscopes. More on this later.

But how could the people who came up with this stuff be so wrong? What were they thinking? Contrary to popular belief, people are not stupid. At least those who advance civilization by creating culture typically aren’t (as controversial as the zodiac might be, it is a cultural achievement; let’s be fair). How is this possible? Historically speaking, this is actually a quite common or even typical phenomenon: something that once made sense in the original context of discovery/creation no longer makes sense in the context of justification. Between initial discovery and the current need for justification, millennia have passed. During this time, civilization has accumulated a lot more knowledge. In other words, the caravan of culture has moved on. This happens all the time. It is called progress. Note that if astrology and horoscopes didn’t already exist, it is very unlikely that – given what we know about the nature of stars, incomplete as our knowledge might be – we would introduce such a system today.

So how and why did it come about way back when? Because it made sense. Carefully watching the sky – as the ancients are known to have done – can reveal a tremendous amount of real information about issues of tremendous importance to a civilization. For instance, observing an equinox allows a civilization to decide when to harvest and when to plant. This is absolutely crucial information for an agrarian society. Similarly, as the motion of heavenly bodies is closely tied to the progression of time, watching lights in the sky can provide quite a bit of information about the general kind of weather (e.g. storms, snow) one is likely to encounter. Moreover, the annual recurrence of the seasons, and the close watch thereof, gave early societies a measure of planning certainty. The progression of lights in the sky even seemed to directly influence events on earth, such as the tides and their relation to lunar cycles. In other words, the sky abounds with information, and ancient cultures spent tremendous effort extracting it systematically, which led to the creation of calendars, etc.

From this, it is a very short leap to astrology. In a world largely untouched by scientific knowledge, a world shrouded in mysticism, a world in which the night sky loomed much brighter than today (due to the lack of electric illumination), it might not be a leap at all. On the contrary. Given all the demonstrably useful information about the future that the sky does hold, the burden of proof that the sky does not hold more personally relevant information about the future might have been on the skeptic (in the ancient world). As a matter of fact, it might have been tempting to link the constellations of the smaller night lights to personal matters, in contrast to the motions of the brighter and larger lights, which determine the future of the entire society. Also remember that there was plenty of time for the ancients to ponder these matters, in the absence of any electronic night entertainment whatsoever. Given what we know about human perception, it is no surprise that they were able to recognize patterns in the starry arrangements of the night sky.

The classical zodiac. Can you spot your sign?

Humans typically have a desperate desire to reduce uncertainty. Of course, the future introduces a tremendous amount of uncertainty. Hindsight is famously 20/20, whereas foresight is far from it. The insidious thing about the nature of reality is that decisions have to be made in the present – under uncertainty – whereas the consequences of these decisions occur in the future. Once the future reveals the outcome of the decision, there is no way to go back and change it, however desirable. Obviously, this is unacceptable and terribly unfair. Contemplating these matters, the desire to predict the future from things we can observe right now becomes overpowering. And so, astrology is born. Of course, nothing has changed about the nature of reality, nor about the human desire to know the future, since those days. And so, it lives on. If it can reduce the uncertainty even a little bit, it seems worth doing.

Now, make no mistake. There is absolutely zero empirical evidence in favor of astrology or horoscopes. Nothing, nada, null. Conversely, there is plenty of positive evidence showing that it actually does not work. In scientific terms, this is a closed case. Open and shut. No ambiguity about it. If anything, the situation is eerily similar to homeopathy, which works by the power of the placebo effect. Thus, from now on, I will refer to astrology and horoscopes as an “epistemological placebo”. Reviewing the empirical evidence that astrology and horoscopes are really just instantiations of an epistemological analogue to a placebo effect would require a book-length contribution. That won’t and can’t happen today. Luckily, that is not necessary. A sampler will do.

a) Horoscopes “work” (insofar as a sizable proportion of people feel themselves adequately described) because of Barnum statements. These are so general in their characteristics and so vague in their predictions that they could essentially fit anyone and anything. The remarkable thing is that people still believe that they apply specifically to them, particularly if the purported characteristics or events are positive. The psychological literature clearly shows that horoscopes work via Barnum statements, even across cultures. On another note, there is also no internal consistency. This is something you can try at home: get 10 different horoscopes from 10 different newspapers for a given day and see if there is any overlap that goes beyond what is expected by chance (a toy sketch of such a test appears at the end of this post). Actually, this has now been done. Given the empirical evidence, there is no merit in belaboring this point. To be perfectly clear, the situation truly is remarkable. To use an analogy: if people came to me throughout the day and asked me for the time, and I always told them the exact same time, and they always believed me – that is what it means to believe in astrology. This suggests that the feedback mechanism is broken in those who do believe. Presumably, the want to believe overrides the feedback from reality, which is often more ambiguous than simply knowing what time it actually is.

b) There is no plausible mechanism. I will make this one short. Even if one is generous, no known or conceivable mechanism exists that could explain why a particular constellation of heavenly bodies at the time of birth matters. And none has been put forward. That is only consistent, as any that is put forward will be shot down quickly. One example: gravity. Even if one is generous, the gravitational force of the people in the room at birth is substantially larger than that of most distant heavenly bodies. Unless we live in a completely mystical universe, this one is a tough nut to crack for astrologers. The fact that science works, that nature is lawful, is evidence enough (for me) that we do not live in such a place.

c) Personality psychologists have worked hard, with sophisticated statistical methods, for a long period of time to quantitatively characterize the similarities and dissimilarities of people. What we have come up with so far is different “inventories” with reliabilities and validities of varying respectability. It is a tough problem. This is not surprising. The brain is complex and is known to generate behavior that is not necessarily consistent across temporal and situational contexts. What we did not find is that people can be classified into 12 (or 13) distinct categories such that the characteristics are distinct between categories but exhibit low variance within a category. Such a simple scheme is alluring, but none ever worked. People are just not that simple. It is preposterous to suggest – despite every evidence to the contrary – that they are.

To be sure, the timing of birth within the year does have an impact on personality and behavior (e.g. likelihood and method of suicide). There is no question about that. But there are many good reasons for these effects, none of which have anything to do with astrology: the amount of sunlight in the first months of life, levels and types of airborne pollen, temperature, general mood, and so on. There is also a tremendous amount of artificial categorization imposed by society itself, which can have a considerable influence on future behavior. School enrollment cutoffs are typically between June and September (depending on the locality). Grouping kids into cohorts that span one year gives those among the oldest a considerable developmental advantage. This has been named the relative age effect or birth date effect. The final outcomes are dramatic: elite athletes are much more likely to have birthdays in the first half of the year. Similar effects have been reported for intellectual outcomes. In short, there is absolutely no need to invoke mystical forces, however appealing, to account for such effects.

So why do some people still believe this, even if there is not a shred of evidence in favor of it? I don’t think ignorance can account for everything. Motivational components have to be considered. To be sure, motives might differ. Some people might personally and financially profit from astrology. That is a pretty mighty motive. Others might be comforted by sharing beliefs in something that has been believed by many people for very long, a kind of cultural bedrock. The rest? Probably because they really, really want to. And it doesn’t seem to cost much. On the surface of it.

But there is hope. I think that if anything, the “new” zodiac system will weaken some dearly held beliefs about astrology. After all, I’m no Pisces, come on now…
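For the do-it-yourself test mentioned in (a), here is a toy sketch of how one might quantify the overlap. The theme coding and the random stand-in “data” are invented; with real horoscopes, each paper’s daily forecast for one sign would be hand-coded into one of a few broad themes.

```python
# Toy sketch of the 10-newspaper consistency test from point (a).
# The theme coding and the random stand-in "data" are invented; with
# real horoscopes, each paper's daily forecast for one sign would be
# hand-coded into one of these broad themes.
import itertools
import random

THEMES = ["love", "money", "health", "conflict", "travel"]
papers = [random.choice(THEMES) for _ in range(10)]  # stand-in codings

pairs = list(itertools.combinations(range(10), 2))
observed = sum(papers[i] == papers[j] for i, j in pairs) / len(pairs)
expected = 1 / len(THEMES)  # chance agreement for uniform, independent picks
print(f"observed pairwise agreement: {observed:.2f} (chance: {expected:.2f})")
```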

Posted in Psychology, Science | 8 Comments

Eponyms are stifling scientific progress.

An eponym is something that is named after a particular person. I would like to put forward a radical assertion: The habit of naming an idea or principle in honor of its purported discoverer or developer is holding Science back. Therefore, eponyms have no legitimate place in Science (with a capital S).

Eponyms are ubiquitous – particularly in Science – and seem innocent enough. They really are everywhere, from the classical Archimedean principle, the endearing Triangle and Wager of Pascal, to the more preposterous “paradox of Moravec” (which reeks of equally odious and insufferable titles such as “The Bourne Supremacy”, “The Prometheus Deception”, “The Bernstein Presumption” or “The Sternberg Allegation”). So what’s the fuss?

It is actually prudent to take a solid stance on this issue, as infecting the scientific vernacular with eponyms introduces at least three serious problems:

  1. The issue of credit. It has been noted – and justly so – that it is the rare principle that is actually named after the guy (sorry gals, it is usually a guy, at least historically) who came up with it. This has been made into a sarcastic meta-eponym itself: Stigler’s law of eponymy, which is as self-consistent as it is self-defeating, as this principle was originally proposed by Robert Merton (and probably by others before him). These things can get rather thorny. Scientists are people, too. Misattribution of credit is one of the few things that can rile up the typical scientist so much that he forgets his good upbringing. A prominent instantiation of this spectacle can be observed at the Nobel prizes, pretty much annually. A scientist scorned truly is not a pretty sight. This issue also tends to confuse the serious student of the history of science, for no good reason. Moreover, we haven’t even touched the problems associated with using common names such as Smith or Miller for eponyms (to say nothing of the potential confusion that looms once Chinese science makes a genuine comeback).
  2. It perpetuates outdated views of the scientific enterprise. As of this writing, the 21st century is well underway. Romantic notions of the lone hero, the renaissance man of yore doing it all by himself (in his basement lab, no less), are extremely outdated. This is simply not how science is done these days, no matter how accomplished the individual in question, even if it makes for excellent copy. True advances in a particular field are usually associated with large teams or even groups of teams. Even if there is a primus inter pares (as in the last sentence), it seems disproportionate to attribute all of the success to one or two individuals. This is well reflected in the increasing number of authors on a given paper – to the point that a lone author is starting to raise eyebrows. There is no question that the public still has the “mad scientist in his basement” view of science, which is also being kept alive by movies (the only remaining question being whether the scientist in question is sinister, crazy, or both). No matter how much this view flatters the vanities of the denizens of the ivory tower, who are otherwise terribly starved of recognition, it is time to do away with it. It is simply no longer accurate, if it ever was.
  3. It actually slows down progress. This is the most serious of the charges, and we will spend the remainder of this essay elaborating on it.

This problem is not intuitive, so we will have to develop it. I will call this the “parable of the Berlin taxi driver”. As in other places, prospective taxi drivers in Berlin (and other major German cities) have to obtain a government license before they can go about their business. Part of this licensing procedure is the so-called “Ortskundeprüfung” (local area knowledge exam). Innocently named, it is a serious obstacle to becoming a taxi driver. In this exam, one has to demonstrate appropriate knowledge of the local roads and how to get to a particular destination. Typical prospective cabbies in Berlin are known to study intensely for several months for this exam, and even attend schools that specialize in preparing for it (at considerable fees). Yet, the failure rate at the actual exam is still substantial. The reason it is so hard to learn how to get around in Berlin is twofold:

a) The physical layout of the city is pretty much random (constrained by some geographic features, but otherwise without organizing principle).

b) Superimposed on this randomness is a random naming “convention” for streets. Any given street (and there are plenty of them in Berlin) can have pretty much any name. Due to the fact that Berlin was once divided, one can even have the same street name in different districts, but that is a negligible idiosyncrasy.

So the long preparation time and high failure rate of the exam are not surprising. Both spatial and linguistic long-term memory are taxed to the hilt. The prospective cabbie has to learn – by heart – thousands upon thousands of otherwise meaningless associations.

Contrast this with the situation in Manhattan. The actual physical layout of the city is systematic – save for the extreme lower end, Manhattan famously is composed of blocks which are arranged in a grid. On top of the physical grid, a meaningful naming convention is superimposed. This is incredibly efficient, as “how to navigate Manhattan” (as a cab driver or otherwise) can be communicated in a New York minute, not months, with very few demands on precious memory resources. Ready? Here goes:

  1. Avenues run parallel to the Hudson river (roughly north/south) while Streets run perpendicular (roughly East/West).
  2. Streets and Avenues are consecutively numbered with integers, increasing from east to west and from south to north.
  3. Each house number in a Street exists twice, once east and once west. The dividing line between east and west is 5th Avenue.
  4. Almost all Avenues and Streets are one-way, with even-numbered ones going north and east, while odd-numbered ones go south and west. The opposite is true for Avenues east of 5th.
  5. There are a few exceptions to the rules above – some additional north-south connections exist, such as Broadway, Madison and Lexington Avenue. Also, Manhattan below Houston Street does not entirely follow the conventions above.

That’s it. Really. You are now good to move about Manhattan. I think you will be able to quickly reach 90+% of your potential destinations without any problems.
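
In fact, the five rules are so compact that they fit in a few lines of code. A toy sketch – the function names and the ordinal helper are my own inventions; only the parity rule and the 5th-Avenue dividing line come from the list above:

```python
def ordinal(n: int) -> str:
    """42 -> '42nd', 57 -> '57th', etc."""
    suffix = "th" if 10 <= n % 100 <= 20 else {1: "st", 2: "nd", 3: "rd"}.get(n % 10, "th")
    return f"{n}{suffix}"

def street_direction(street: int) -> str:
    """Rule 4 as stated above: even-numbered streets run east, odd-numbered west."""
    return "east" if street % 2 == 0 else "west"

def describe_route(from_street: int, to_street: int, to_side: str) -> str:
    """Compose a coarse route from the rules alone (to_side: 'E' or 'W' of 5th Ave, rule 3)."""
    ns = "north" if to_street > from_street else "south"  # rule 2: numbers grow northward
    return (f"Take an avenue {ns} from {ordinal(from_street)} St to {ordinal(to_street)} St, "
            f"then head {street_direction(to_street)} toward the {to_side} side of 5th Ave.")

print(describe_route(14, 57, "E"))
```

The point is the size of the program, not its polish: the entire knowledge base fits in a dozen lines, whereas the Berlin equivalent would have to embed the full street database.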

Cabs dominate traffic in Manhattan, for good reason.

Note that this remarkable efficiency comes about because we are taking advantage of the structure inherent in both the physical layout as well as the naming of the streets of Manhattan. In Berlin, we cannot do that. Randomness does not lend itself to compression (this is one of the bedrock principles of information theory).
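
This is easy to verify directly: hand a general-purpose compressor a “Berlin” (random bytes) and a “Manhattan” (structured bytes) of the same size. A minimal sketch (exact byte counts will vary slightly between runs):

```python
import os
import zlib

N = 100_000
random_data = os.urandom(N)                  # "Berlin": no structure to exploit
structured_data = b"0123456789" * (N // 10)  # "Manhattan": pure repeating structure

print("random:    ", len(zlib.compress(random_data, 9)), "bytes")     # ~N, essentially no gain
print("structured:", len(zlib.compress(structured_data, 9)), "bytes") # a few hundred bytes
```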

Now, the Manhattan case is somewhat disingenuous (for our purposes). Applying this principle to science, we are not at liberty to lay out the “city” however we please. That – of course – is given to us by nature. However, it is important to note that the geometric structure of reality (in however many conceptual dimensions) is somewhere in between the complete randomness of Berlin and the simple 2-dimensional grid of Manhattan. If it were otherwise, science would either not be possible (in the case of complete randomness) or trivial (in the case of the grid).

At this point, I would like to provide a reminder of what science is trying to achieve. It is trying to elicit the principles that govern reality in a systematic – and hopefully efficient – way. The goal of science is to compress the apparent complexity of the external world into as few underlying principles as possible, an enterprise that is both Platonic and modern. In this sense, Science is not unlike the JPEG format.

But while we are not at liberty to design the structure of reality itself (at this point in time, at least), we are at liberty to name the uncovered principles in whichever way we please.

This explains why eponyms in science are as common as they are damaging. Giving these principles the name of the purported pioneer is the simplest, easiest, but also laziest thing to do. Names – by their very nature – live on a nominal scale. Any potential structural relationships between principles and ideas (relationships that map onto the structure of the real world) are irretrievably lost. This is a big deal, as the scope of science continues to expand. This forces professional scientists into ever smaller specialties, and even threatens genuine progress, as old knowledge is forgotten and rediscovered under different names. See, for instance, the “flash-lag effect”, which was “discovered” in 1994 by Nijhawan. Before that, it was “discovered” by MacKay in 1958 (the MacKay effect), and before that, it was “discovered” by Fröhlich as the “Fröhlich effect” in 1923. It is now known as the “flash-lag illusion”. For the scientist, this is equally frustrating and confusing.

In addition to fostering forgetting in science (the ultimate bane of the scientist), progress is also stifled in other ways. Learning a new field by relating new ideas to old ones, and by molding cognitive relationships in the shape of the conceptual relationships in the real world, makes for rapid learning. If the concepts in the field simply have to be learned by heart and are meaningless otherwise, there is no advantage in already knowing something. The new concept still has to be learned afresh. This is intimidating. Even fundamental and otherwise trivial concepts, such as the Hertzsprung-Russell diagram or the Mahalanobis distance, start to appear alien and forbidding to those who don’t already know them.

Of course, this adds to the appeal of introducing them. Those who are already in the know, those who have already taken these cognitive hurdles, have an advantage over the beginner – nothing to be scoffed at in an increasingly competitive scientific landscape. Also, it allows one to flatter more eminent colleagues. But it does sacrifice the cognitive efficiency that a true structural geometry of ideas in a given field would allow. Generating this geometry is neither easy nor free. But it can – and should – be done. Everyone will be better off in the long run.

One successful illustration comes in the form of simple concepts: units. “Newtons” are a classical case. A unit of force, to be sure. But what is the meaning of a Newton? It seems quite obscure, whereas the same unit expressed in SI base units – m x kg / s^2 – is immediately meaningful. A prominent case from Neuroscience is the unit of firing rates. Events (or spikes) per second is immediately meaningful, even to the outsider. Calling it “Hertz” or “Adrians” is not (more on this later). The rest of the conceptual landscape of science necessarily needs to follow. Mathematicians and Physicists – for all their professional snobbery towards the rest of the scientific and academic enterprise (in an ironic reversal of the medieval order, which prioritized theology, law, and so on above all the others) – tend to be the worst offenders. Thus, I call for an “SI base” of concepts for all the sciences. In analogy to the unit case, the mission of science is to identify a few underlying principles and characterize the relations between them. A formidable challenge to theorists, to be sure, but one that needs to be met.

We do need a meaningful language of ideas if we are to succeed in the long run. This issue goes beyond eponyms. For instance, psychological concepts are often tainted by their identically named everyday meanings, giving novices in the art the dangerous notion that they know what the experts are talking about when they discuss things like personality or intelligence. In short, we need a serious language of science. Using eponyms hides our ignorance about the underlying conceptual structure of reality, particularly the relationships between the concepts themselves. But eponyms are only one apparent manifestation of the problem we are facing and the shortcomings that we are dealing with. In the long run, this will not do. It is not sufficient to assert that “Mathematics is the language of science”, as some do. Mathematics is no such thing. While logic is certainly necessary, science requires more than that. Mere internal consistency is sufficient for purely logical disciplines such as Mathematics or Philosophy, but not for Science, which is concerned with the external world. The logic of mathematics needs to be related to the structure of the external world. Making these connections is precisely what a language does, and that is what we are sorely lacking. We are in desperate need of a precise scientific language.
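
To make the unit example from above concrete: in the trivial case of units, an “SI base of concepts” already exists – every unit is just a vector of exponents over the base dimensions, and the eponyms dissolve into self-describing expressions. The tiny helper below is my own illustration:

```python
# Each unit is a tuple of exponents over the SI base dimensions (m, kg, s).
BASE = ("m", "kg", "s")

def show(name, exponents):
    """Render a unit as a product of base units, e.g. (1, 1, -2) -> m^1 kg^1 s^-2."""
    parts = [f"{b}^{e}" for b, e in zip(BASE, exponents) if e != 0]
    print(f"{name:>15} = {' '.join(parts)}")

show("newton", (1, 1, -2))          # force: m kg / s^2 -- meaningful at a glance
show("hertz", (0, 0, -1))           # just "per second" -- the eponym hides this
show("spikes per sec", (0, 0, -1))  # same dimensions, but self-describing
show("joule", (2, 1, -2))           # energy: m^2 kg / s^2
```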

An apt eponym. This illustrates the dangers of mismatching ones, which is the basis of the Stroop effect. Named after Stroop…

To conclude, it is ok to use eponyms for things that don’t matter or that have no inherent structure. Thus, there is a place for eponyms, but science is not that place. The costs of using them in science are too high.

There are also several revealing examples.

What is more informative to the outsider – “the Carrington event” (which sounds like the title of a cheap thriller) or “the solar storm of 1859”? They both refer to the same thing, but the latter is descriptive enough for even those unfamiliar with the actual event to have a rough idea what the term refers to, whereas the former is not.

What’s worse is that language evokes associations. For instance, the concept of “Granger causality” sounds ominous, complicated and obscure. Maybe it is related to Lagrange points? Not at all. It is a very straightforward (and intuitively plausible, if not uncontested) concept relating two time series, going beyond mere correlation and taking the sequence of events into account.
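
To underline how plainly the idea can be stated: x “Granger-causes” y if past values of x improve the prediction of y beyond what y’s own past already provides. A minimal sketch using statsmodels – the synthetic data, with x leading y by two steps, are invented for illustration:

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 500

# Construct two series in which x leads y by two time steps.
x = rng.standard_normal(n)
y = np.zeros(n)
for t in range(2, n):
    y[t] = 0.6 * x[t - 2] + 0.3 * y[t - 1] + 0.1 * rng.standard_normal()

# Test whether the second column (x) helps predict the first (y).
data = np.column_stack([y, x])
results = grangercausalitytests(data, maxlag=3)  # maps each lag to its test statistics
# The F-tests at lag 2 should come out highly significant: past x improves
# the prediction of y beyond y's own past. That is all the term means.
```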

This seems to be a particular problem in medicine. Of course, it *is* understandable why people might want to use them. “Capgras syndrome” is named after the person who first described it and it sounds much more authoritative to say: “You have Capgras syndrome” (implying that we understand what is going on), rather than just saying “You have an illusion of doubles” (which is how Capgras himself described it). So there is an incentive misalignment. The practitioner has an incentive to sound more authoritative than is warranted by our understanding of the syndrome (which might be reassuring to the patient), while at the same time widening the gulf between those who know the lingo and those who don’t. Moreover, it elegantly solves (however imperfectly) the credit problem. Without eponyms, the original discoverer/describer of a condition is soon forgotten. Source memory problems and cognitive economy will see to that. Who first described the “circle of Willis”? Who first described any other vascular structure? Perhaps there is a reason why Vesalius is all but forgotten.

But if people want to go with eponyms, they should at least be consistent. For instance, handwashing by doctors (still not happening nearly enough, despite a record of extreme effectiveness) should be referred to as “the Semmelweis maneuver”. Daltonism for color blindness? Or for atomic theory? It can be confusing if a great man did more than one great thing…

As Kant observed: “Ärzte glauben, ihrem Patienten sehr viel genützt zu haben, wenn sie seiner Krankheit einen Namen geben.” (“Physicians believe they have been of great use to their patient when they give his disease a name.”) – well then, let’s make sure it is a good one.

There are many things one could call a “Schmitt trigger”, but calling it a Schmitt trigger doesn’t help anyone who doesn’t already know what that is. Barlow’s Syndrome or Mitral valve prolapse? If you could pick any term, which one would you choose? The same goes for Martin-Bell syndrome vs. the much more descriptive “fragile X syndrome”.

I rest my case.

Actually, I don’t. Concepts can have strong connotations. Can anyone picture what Hermite functions look like (without knowing anything else about them)? Or – in the age of Mad Men – what a Draper point is? Which is why I do all my signal processing with Kaiser windows (no affiliation with Microsoft or Wilhelm II). Speaking of signal processing – do you prefer kernel density estimation or Parzen-Rosenblatt window methods?
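
That last question is easy to answer in code, because the two names refer to literally the same computation. A minimal sketch (the sample data are invented):

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
samples = np.concatenate([rng.normal(-2, 0.5, 300), rng.normal(1, 1.0, 700)])

# "Kernel density estimation", a.k.a. the "Parzen-Rosenblatt window method":
# place a smooth kernel on every sample and sum them into a density estimate.
kde = gaussian_kde(samples)
grid = np.linspace(-4, 4, 9)
print(np.round(kde(grid), 3))

# The descriptive name tells you what the method does; the eponym does not.
```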

PS: It is of course understandable why this is – and has been – happening. It is devilishly hard, even for experts, to keep track of who did what. In a system where merit corresponds to the magnitude of one’s contribution, overlooking someone amounts to stealing from them, or at least hurting their feelings. Unless we come up with a better way to track contributions, it is understandable that people want to play it safe – enshrining their own name with the contribution itself. Otherwise, it is easy to be forgotten, even if one made seminal contributions. For instance, who discovered rods and cones in the retina?

See here for a spirited discussion of these issues.

Posted in Pet peeve, Science | 3 Comments

In eigener Sache: The attractor structure of logarithmic iterations in the complex plane

I decided to post my explorations in empirical mathematics on arXiv. This was done in the same spirit as the work that led to the Eagleman prize back in 2006. At some point, I will prettify the plots, but for now the work stands on its own. It is what it is, and as far as I can tell, it is novel. Comments welcome, as always.
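
For readers wondering what “logarithmic iterations in the complex plane” even means, here is a minimal sketch of the basic phenomenon – my own illustration of the general idea, not code from the manuscript: iterating z → log(z) from generic starting points converges to an attracting fixed point.

```python
import cmath

# Iterate z -> log(z) (principal branch) from several starting points.
starts = [2 + 0j, -3 + 1j, 0.5 + 0.5j, 100 + 100j]

for z0 in starts:
    z = z0
    for _ in range(200):
        z = cmath.log(z)
    print(f"start {z0!s:>12} -> {z:.6f}")

# All of these land on the attracting fixed point z* = log(z*), approximately
# 0.318132 + 1.337236j (starting points in the lower half-plane converge to
# its complex conjugate instead).
```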

Posted in In eigener Sache, Matlab, Science | 1 Comment