A tale of two wars

We are upon the 100-year anniversary of the start of the First World War. Most people alive today don't fully appreciate the cataclysmic forces that were unleashed in this conflict, several of which still shape world events. Of course, most people are aware of the sequel, World War 2 – a very different, yet closely related conflict. More subtly, WW1 brought on the rise of communism (seeding the cold war) as well as the demise of the Ottoman empire. The way in which weak and inherently unstable nation states like Syria or Iraq were carved out of the corpse of the Ottoman empire troubles the world to this day. As the removal of the strongmen in the early 2000s revealed, most of these states could only be stabilized and pacified by dictatorships. That is not a problem per se, unless a psychopathic and expansionist dictator ends up in charge of a major regional power (such as Iraq), in a region that decides the energy fate of the world economy, as the experience of the 1970s illustrated. Put differently, Iraq really was about resources, but in a more subtle way than most people believe.

But why have such an apocalyptic conflict in the first place? Briefly, because everyone wanted to fight, no matter how senseless it was in economic terms. The sequel – WW2 – could happen only because no one except Hitler wanted to fight, or could afford to, given the debt accumulated in WW1.

A great deal can be – and has been – said about what amounts to the suicide of the West, as it was classically conceived. Briefly, I want to emphasize a few points.

*Really everyone in Europe wanted to fight: Germany, which felt itself surrounded by enemies; Russia, which wanted to come to the aid of its Serbian friends; Serbia, which felt its honor wounded by the Austro-Hungarian empire; Austria-Hungary, which wanted to teach the Serbs a lesson and get revenge; France, which had been in a revanchist mood since 1871 and saw the population development east of the Rhine with great concern; England, which felt its imperial primacy threatened by an upstart Germany; and even Belgium, which was asked by Germany to stand down so that Germany could implement the Schlieffen plan but instead blocked every road and blew up every bridge it could.

*The irony of the Schlieffen plan: Germany saw itself surrounded by powerful enemies (France and Russia). The way to beat a spatial encirclement is to introduce the concept of time – the Russians were expected to mobilize their armies slowly. This gave the Germans a narrow time window for a decisive blow against Paris, knocking out France before turning around to deal with Russia. This time pressure was so severe that while Germany had the necessary artillery to knock out the French border forts, it did not have the time to do so. This necessitated going through neutral Belgium, which brought the British and – eventually – the US into the war against Germany. The irony is that this worry about the Russians was misplaced. As a matter of fact, the Russians showed up far earlier than anyone expected and started to invade East Prussia, in an attempt to march on Berlin. However, while they were early, they were also a disaster. A single German army defending East Prussia managed to utterly destroy both invading Russian armies – and then some. Given this outcome, there had been no need for the highly risky Schlieffen plan.

*The irony of having a war plan in the first place. But this, too, is only obviously a problem in hindsight. In WW1, both sides made disastrous mistakes on a regular basis. As a matter of fact, the rate of learning was in itself appallingly slow – infantry operated with outdated tactics and without helmets well into the war. Ultimately, the side that was faster at improvising and made fewer disastrous mistakes won.

*The irony of constructing a high seas fleet for Germany. This got the English into the conflict, which turned it from a small regional engagement into a world war. Immensely costly to build, this fleet did the Germans a world of good: it sat in port for the entire duration of the war (with the exception of a brief and inconclusive engagement in 1916) and provided the seed for the revolution in 1918 that brought the entire government down.

*The difference in how the two conflicts unfolded. As mentioned above, everyone wanted to fight in WW1. Consequently, well over a million people were dead within a few months, whereas it took almost two years for WW2 to get “hot”.

*It is a legitimate question to wonder what would have happened if the US hadn't intervened in 1917. Without US intervention, there probably wouldn't have been enough strength remaining for either side to conclusively claim victory (Operation Michael in 1918 would probably not have been successful regardless, and without US encouragement, the Allied offensive would likely have suffered the same fate as those of previous years). But the war couldn't conceivably have gone on much longer regardless, due to a global flu epidemic and war weariness on all sides. What would the world look like today if the war had ended in an acknowledged stalemate, a total draw?

*Eternal glory might be worth fighting for, but the time constant of glory in real life is much shorter. Most people have absolutely no idea what the different sides were fighting for specifically, and would be hard pressed to name even a single particular engagement.

*Much has been made of the remarkable coincidence involving the assassination of Archduke Franz Ferdinand. It is true that the assassin hit his mark only after a series of unlikely events, e.g. Ferdinand's driver getting lost, and the car trying to reverse – and stalling – at the precise moment that the assassin Princip was exiting a deli (Schiller's Delicatessen) where he had gotten a sandwich. Given the significance of subsequent outcomes, one is hard pressed not to see the hand of fate in all of this. However, there might be a massive multiple comparisons problem here. First of all, Princip was not the only assassin. To play it safe, six assassins were sent, and indeed the first attempt did fail. More importantly, this event was the trigger – the spark that set the world ablaze – but not the cause. Franz Ferdinand makes an unlikely casus belli. Not only was he suspected of harboring tendencies toward tolerance and imperial reform; he had also married someone who was ineligible to enter such a marriage. This was a constant source of scandal in the Austro-Hungarian empire. The marriage was morganatic in nature, his wife was not generally allowed to appear in public with him, and even the funeral was used to snub her. The emperor therefore considered the assassination “a relief from great worry”. More importantly, it can be argued that this event was just one in a long series, any of which could have led to war. Bismarck correctly remarked in the late 19th century that “some damn thing in the Balkans” would bring about the next European war. Indeed, the Balkans were the scene of constant crises going back to the 1870s and including 1912 and 1913, any of which could have led to a general war. If anything, it can be argued that fate striking in 1888 was more material to the ultimate outcome. In that year, Frederick III, a wise and progressive emperor, died from cancer of the larynx after having reigned for only 99 days. This made way for the much more insecure and belligerent Wilhelm II.

*What remains is the scariness of people ready to go to war even though it made absolutely no economic sense, in a hyperconnected world of globalized trade that closely resembles our own (mutatis mutandis, e.g. the US stands in for the British Empire as the global hegemon). In addition, there was a full – ultimately wasted – month for negotiations between the assassination of Archduke Ferdinand and the beginning of hostilities. This raises the prospect of a repeat. At least, it doesn't rule one out. In 1913, there had also been an almost 100-year “refractory period” (respite from truly serious, all-out war) since the Napoleonic wars. However, odds are that if it should happen again – repeating the 20th century in the 21st – it will happen in Asia. Asia has the necessary population density, and a lot of its key countries – China, Japan, Russia, South Korea, India and Pakistan – are toying with extreme nationalism. As the history of the 20th century illustrates, that is a dangerous game to play.

Posted in History, Strategy | 1 Comment

The relative scale of early visual areas

The visual system of primates comprises a large number of distinct cortical areas containing neurons that modulate their activity in response to a visual stimulus and are believed to represent different aspects of the visual scene. It has been recognized since the 1980s that these areas are roughly organized as “early” visual areas (primary visual cortex or V1, and V2) followed by two parallel (dorsal and ventral) but hierarchical visual streams. What is usually underappreciated is how much cortical real estate is taken up by the early visual system, i.e. V1 and V2. This matters, as the biggest bottleneck in the entire system is at the level of V2 outputs (at least within cortex). The scene is likely to be represented at a fine grain in the early visual system, but not afterwards. Put differently, what happens before V2 (mostly) stays in V2. To illustrate this point, we took a page from a popular infographic meme and superimposed the rest of the visual system onto V1 and V2. To do so, we modified a figure from Wallisch et al. (2008), see figure 1. In this figure, the size of each area is scaled in proportion to its cortical surface area.

Figure 1. The higher visual system, superimposed on V1 and V2. As you can see, individual areas of the higher visual system are on the scale of small European principalities while V1 and V2 most resemble land empires in Asia or America. Modified with permission from Wallisch et al., 2008.

As you can see, V1 easily accommodates the entire dorsal stream (and then some), and the ventral stream almost fits into V2 (although not quite, because V4 is so big). Neatly, this is consistent with earlier reports regarding the design limitations of the visual system.
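
For the technically inclined, the scaling underlying such a figure is straightforward. Here is a minimal sketch of the computation; the surface-area values are placeholders for illustration only, not the published estimates from Wallisch et al. (2008). The key point is that if the *area* of each icon tracks cortical surface area, its linear dimensions scale with the square root of the area ratio.

```python
import math

# Placeholder cortical surface areas in mm^2 -- illustrative values only,
# to be replaced with published estimates (e.g., from Wallisch et al., 2008).
surface_area_mm2 = {
    "V1": 1200.0,
    "V2": 1000.0,
    "V4": 300.0,
    "MT": 60.0,
    "MST": 70.0,
}

# If the *area* of each icon is proportional to cortical surface area,
# its linear dimensions scale with the square root of the area ratio.
reference = surface_area_mm2["V1"]
for name, area in surface_area_mm2.items():
    area_scale = area / reference          # relative area of the icon
    linear_scale = math.sqrt(area_scale)   # relative width/height of the icon
    print(f"{name}: area scale {area_scale:.2f}, linear scale {linear_scale:.2f}")
```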

Posted in Neuroscience, Science | Leave a comment

Ideology poisons everything, as it rotates perceptions of reality

It is obvious where ideology comes from. It solves a lot of problems. A small tribe needs to agree on a distinct course of coherent action. Otherwise, its strength is frittered away, defeating the very point of finding strength in numbers, i.e. of being a tribe in the first place. Ideology also solves a lot of freerider and principal-agent problems in general. It makes individuals do things for the common good that objectively impede their subjective welfare and that they wouldn't otherwise do. This also makes good sense along other lines. Small tribes perish or flourish as a whole (a genetically highly interrelated group), not as individuals. Ideology promotes fitness at the level that matters, the group-selection level.

However, we no longer live in a world of small, competing, ever-warring and highly xenophobic tribes. On the contrary, we live in an extremely large society – in the case of the US, about 2-3 million times as large as the typical tribe, the form of social organization that was the norm throughout almost all of human evolution (if one considers the whole world as one big globalized and highly interconnected society, it is about 70 million times as large). So the archaic model of social organization clearly doesn't scale. Yet its roots are still with us. For a simple test, try watching a soccer game (or indeed any team sport) without rooting for a particular team (or watch a game where you don't care about any of the teams). The athletic display won't be any different, but it will likely be rather un-riveting.
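
A quick back-of-the-envelope check of these scaling numbers, assuming a typical ancestral tribe of roughly 100-150 people and present-day populations of about 320 million (US) and 7 billion (world):

```python
# Back-of-the-envelope check of the scaling claim.
# Assumed values: typical ancestral tribe of ~100-150 people,
# US population ~320 million, world population ~7 billion.
TRIBE_SMALL, TRIBE_LARGE = 100, 150
US_POPULATION = 320_000_000
WORLD_POPULATION = 7_000_000_000

us_low = US_POPULATION / TRIBE_LARGE           # ~2.1 million
us_high = US_POPULATION / TRIBE_SMALL          # ~3.2 million
world_factor = WORLD_POPULATION / TRIBE_SMALL  # ~70 million

print(f"US society: {us_low/1e6:.1f} to {us_high/1e6:.1f} million times the size of a tribe")
print(f"Globalized world society: about {world_factor/1e6:.0f} million times the size of a tribe")
```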

So in modern times, tribalism is a cancer that is threatening to tear society apart. Why? Because most of the remaining societal problems are extremely thorny and complicated (that's why they remain in the first place – we already addressed the easy ones). They usually don't lend themselves to resolution by experimental approaches. But a lot typically rides on the outcome, the answer to these questions. This gives ideology a perfect opening to take root. For instance, modeling the climate is extremely complicated. All models rely on plenty of assumptions and relatively sparse data, and are so complex that they are even hard to debug (or to know when debugging was successful). It is very hard for anyone to ascertain what is going on, let alone what will be happening in the future, yet ideologists are very keen to either dismiss any probability of warming or assume dramatic human-caused warming as a certainty. The confidence of both camps far exceeds what the data can support. Where does it come from? Potential holes in the story are simply filled in by ideology. Similar questions arise – for instance – in history. A key question in history is: What makes a society successful? The most realistic answer is that it likely involves a complex interplay of geography, genetics and culture. It is extremely hard to assign relative weights to these factors, as it is impossible to do experiments on this issue. Yet one can make a good living writing books asserting that it is all geography (implicitly or explicitly assigning a weight of zero to the other factors), all culture or – recently – by pointing out that the weight of the genetic factor is unlikely to be zero, unfashionable as that might be, given the political climate.

Which position is most compelling to you says much less about which position is true – at this point, the evidence is far from conclusive – and much more about you: Which position do you want to be true? Why would you want a particular position to be true? Because it neatly fits in with your worldview or Weltanschauung.

What is the problem with that? The problem is that people at the two ideological poles simply look at the same data from two different vantage points (e.g. left vs. right, see figure 1).

Figure 1: This represents reality. Two ideological camps have positions on issues that vary along the left/right dimension. Some of them are more valid than others, but no camp has a monopoly on validity, given these issues.

But in the mind of the ideologue, an issue doesn't come down to a horizontal difference, but rather a vertical one – the ideologue assumes that the positions of one's own camp are valid and the others invalid. And when one sincerely perceives a difference in appraisal as a difference in fact (where one is either right or wrong), resolving these issues is basically impossible.

Figure 2: Liberal ideology. From the liberal perspective – which corresponds to a clockwise rotation of reality by 90 degrees – their positions are now perfectly centered in ideological terms. They just happen to be right, whereas the other camp is just wrong about everything.

Figure 3: Conservative ideology. From the conservative perspective – which corresponds to a counterclockwise rotation of reality by 90 degrees – their positions are now perfectly centered in ideological terms. They just happen to be right, whereas the other camp is just wrong about everything.
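
For concreteness, here is a toy sketch of the rotation metaphor. The coordinate convention (x = left/right, y = invalid/valid) is just an assumption for illustration; the point is that a 90-degree rotation turns a horizontal difference in position into a vertical difference in presumed validity, with one's own camp conveniently ending up on top.

```python
def rotate(point, direction):
    """Rotate a 2D point by 90 degrees, clockwise or counterclockwise."""
    x, y = point
    if direction == "clockwise":           # (x, y) -> (y, -x)
        return (y, -x)
    if direction == "counterclockwise":    # (x, y) -> (-y, x)
        return (-y, x)
    raise ValueError("direction must be 'clockwise' or 'counterclockwise'")

# Reality: positions differ only along the horizontal left(-1)/right(+1) axis.
positions = {"left-wing position": (-1, 0), "right-wing position": (+1, 0)}

# Liberal lens = clockwise rotation; conservative lens = counterclockwise rotation.
for lens, direction in [("liberal", "clockwise"), ("conservative", "counterclockwise")]:
    print(f"Through the {lens} lens:")
    for label, point in positions.items():
        _, validity = rotate(point, direction)
        verdict = "valid" if validity > 0 else "invalid"
        print(f"  {label}: {verdict}")
```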

The insidious thing is that this happens inadvertently and automatically. People who take a particular position will naturally see the other one as invalid and just plain wrong. Not as a difference in position, but as a matter of (moral) right and wrong. Righteousness vs. wickedness. Feeling this with every fiber of their being leads to an immediate dismissal of the other position. If they can see the truth so clearly, why can't the other side? Surely, they must be willfully ignorant, malicious or both. Ascribing turpitude usually follows. Worse, this destroys any reasoned discourse. A nuanced argument will go unappreciated, as the ideologue will not register its nuance. Instead, it is automatically projected into a low-dimensional ideological space, perpetuating the framework of divisive tribalism, with all of its odious consequences. As we've known for a long time, it is basically impossible to convince someone who is completely certain that they are wrong, regardless of how ludicrous their position is or how much it flies in the face of new incoming information.

It is awfully convenient that the same people who disagree with us are also those who are dead wrong about everything. When the issue was important enough, long and brutal (religious) wars have been fought over it.

The problem is not being wrong about something. That happens all the time. The problem is how right it can feel to be so wrong. If you want to experience this for yourself, there is a simulacrum with a somewhat juvenile name that deals with issues like information accumulation, cue validity, confidence and uncertainty. It allows an apt simulation of what social primates can be absolutely convinced of, even in the near absence of valid information. This can be rather scary. An important difference is that in reality, there is rarely a reality check (feedback from reality or god) as to whether one's beliefs actually correspond to the truth.

What is the way out of this? Acknowledging that this is going on. Metacognition allows for a possible avenue of transcending these biases. Naturally, this will take a lot of training, particularly as most actors (the media, for instance) have every incentive to be as divisive as possible in their rendering of events. But by appreciating the complexity of problems and by embracing the fundamental uncertainty inherent to life, one opens up the possibility that this can be done. It won't be easy, but there is no real alternative. A de facto perpetual cold civil war certainly isn't a good one.

The problem with the ideologue is that they are constitutionally unable to learn from feedback. As they already see themselves as right to begin with, the error is irreducible. As such, they are the natural enemy of the scientist. In particular, it is important not to give people a pass for being uncivil (and not helpful) just because they agree with us ideologically. 

Tribalism had its day, for most of human history. And it was very adaptive. But today, it no longer is. Instead, it is needlessly divisive. Is it really useful to judge people based on what browser or operating system they use, what car they drive, what phone they have, which language they program in, etc.? People are obviously eager to self-righteously do so at the drop of a hat, but is that really helpful in the modern world (the traditional solution being to wipe out the other small tribe)?

Posted in Social commentary | 1 Comment

The social mission of perceptual research

Our perception corresponds to an idiosyncratic model of reality, not reality itself.

This is easy to forget, as we all share a common outside environment in the form of external reality and process it with a cognitive apparatus that has been honed over billions of years to work properly. Yet, this is a profound truth.

Illusory motion

Do you see motion? If so, it was created by your brain. Nothing in the image is actually moving in the outside world. But psychologists now understand the contrast gradients that can be used to make the brain assume the presence of motion.

It is important to recognize that the perceptual model does not necessarily correspond to objective reality. This is not a failing of the system. On the contrary: as it almost always has to work with incomplete information at the front end, gaps in evolutionarily relevant information are filled in from other sources, be they other modalities, correlations with other cues, or regularities that have been learned during ontogeny and phylogeny. Put differently, it is more adaptive for organisms to make educated guesses about what is out there than to take a strictly agnostic position if information is missing.

The perception of depth information is a good example of this. The spatial outside world is (at least) three-dimensional, yet the receptors that transduce the physical energy from photons into electrical energy that the brain can process are arranged in a two-dimensional sheet at the back of the eye, as part of the retina. In other words, an entire dimension of information – how far away things are – is lost up front. Strictly speaking, the brain has no genuine distance information available whatsoever. Yet we see distance just fine. Why? Because this is a dimension that the brain can ill afford to lose in terms of survival and reproduction. It is critical to know how far away predators and prey are, to say nothing of all other kinds of objects, if only not to bump into them. So what is the brain to do? In short, it uses a great many tricks to recover depth information from two-dimensional images. Most of these “depth cues” are now known and are used by artists to make perfectly flat images look like a scene with great depth, or even to make movies look three-dimensional. Strictly speaking, this constitutes an error of the system, as the images really are two-dimensional, but it is an adaptive kind of error, and given how complicated the problem is – operating with such little available information – even the best guess can still be a wrong one.

A bigger object that is farther away takes up the same space on the retina as a smaller one that is closer. How – then – is the brain supposed to know which one is closer and which is farther? This information is not contained in the size of the retinal image traced by the object itself – as it is a projection – and has to be reconstructed by the brain.
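
To make the ambiguity concrete, here is a minimal sketch of the visual-angle arithmetic: an object of size s at distance d subtends an angle of 2·arctan(s/2d), so doubling both size and distance leaves the retinal image essentially unchanged.

```python
import math

def visual_angle_deg(size, distance):
    """Visual angle (degrees) subtended by an object of a given size at a given distance."""
    return math.degrees(2 * math.atan(size / (2 * distance)))

# A 1 m object at 10 m and a 2 m object at 20 m produce the same retinal image size.
print(visual_angle_deg(1.0, 10.0))   # ~5.72 degrees
print(visual_angle_deg(2.0, 20.0))   # ~5.72 degrees -- identical, hence the ambiguity
```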

The recovery of distance information illustrates the principle nicely, but it is far from the only case. Other perceptual aspects like color, shape or motion are derived in a similar fashion. The brain constructs its model almost always on the basis of incomplete information. There are many processing steps that go into the construction of the model, and most of them are unconscious; only the final percept is conscious. Many brain areas are involved in the construction of this model (in primates, between 30 and 50% of the brain).

One thing that is remarkable about all of this is that the brain fully commits to a particular interpretation of the available information at any given time. Even if the perceptual input is severely degraded, the brain rarely hedges, if the result is compatible with a coherent model of the world. The end-user is not informed which parts of the percept correspond to relatively “hard” information and which were “filled in”, but really correspond to little more than educated guesses about what is out there. This seems to be a general processing principle of the brain. Moreover, if the available information is inherently ambiguous and compatible with several different interpretations, it does not flag this fact. Instead, second thoughts manifest as a (sometimes rapid) switching between different interpretations, yet a meta-perspective is not taken by the perceptual system. A particular interpretation of the available information usually comes at the exclusion of all others, if only for a time. Interpretations switch, they don’t blend.

Inherently ambiguous images. Interpretations switch, they don't blend.

Arguably, this necessarily has to be so, as this system provides an interpretation of the outside world not for our viewing pleasure, but to be actionable, i.e. to guide and improve motor action. In the natural world, both indecision and dithering incur serious survival disadvantages. To be sure, flip-flopping also is not without its perils, but fully bistable and inherently ambiguous stimulus configurations are probably rare outside of the laboratory of the experimenter (these displays are designed specifically to probe the perceptual apparatus. Doing this in a fashion that doesn’t bias the stimuli one way or the other is not easy), so evolution probably didn’t have to make allowances for that.

To summarize, the brain overcommits to a particular interpretation of the available evidence in a way that is often not entirely warranted by the strength of the evidence itself. It does so for a good reason – survival – but it is nevertheless doing it. This has social implications.

Given slightly disparate – and incomplete – information, individual differences in brain structure and function, as well as different histories of experience accumulation (which in themselves color future perception), it is to be *expected* that different people reconstruct the world differently, sincerely but quite literally seeing things differently.

Different perspectives. Which one is correct about the state of the external world?

The construction of a perceptual model of the world involves so many steps that it is not surprising that the end result can be idiosyncratic. What is surprising is that we have not yet fully wised up to this fact. The fact that it is happening is not a secret; modern psychology and neuroscience are pretty unequivocal that this is basically how perception works. Most of us also know about this from personal experience. Visit any discussion board on any issue and you will see this effect in action. Every individual feels strongly (and often sincerely) that they are right – and, by implication, that everyone who is at variance with their stance is wrong. This is very dangerous, as we are closing ourselves off to other viable positions. The other side is – at best – misinformed, if not outright malicious or disingenuous. In addition to the illusory certainty of perception, there is often an immediate emotional reaction to things we disagree with. A righteous anger that has cost many a Facebook friendship and triggered quite a few outrage storms on Twitter, with no end in sight.

The traditional way to resolve this kind of dispute is by resorting to violence. In the modern world, this kind of conflict resolution is frowned upon, partly due to the domesticating effects of civilization and partly due to the effectiveness of our weapons, which makes this path a little too scary these days. So what most people now do is engage in a kind of verbal sparring as a substitute for frank violence. But this is often rather ineffective, as few people can be convinced in that way. There is pretty good evidence that by talking to the other person, we are just talking to their PR department, which will spin the story in any way possible. So most of these exchanges devolve into attempts at silencing the other side by shaming. All of these tactics are highly divisive. Popular goodwill is a commons, and it is easily polluted. Now, it might be possible that social interactions simply don't scale. Small tribes need to find a consensual solution in order to foster coherent action, but large and diverse societies like ours don't allow this easily.

A different approach opens up if we take the insights from perceptual psychology and neuroscience seriously. If we acknowledge that our brains have a tendency to construct highly plausible but ultimately overconfident models based on scant and – in a large society – necessarily disparate information, dissent is to be expected. It should not (necessarily) be interpreted as a personal attack or slight. Put differently, benign dissent is the default mode that should be expected from the structure of our cognitive apparatus. It is not necessary to invoke malice. Nor is it necessary to invoke ignorance. The other side might well be less informed. But it is also plausible that they simply had different experiences (and a different cognitive apparatus to process them).

Once this is acknowledged, there are two ways to go about building common ground. The first one would be to – instead of arguing – figure out what kind of evidence the other side is missing and to provide it, if possible. If that is not possible, it might still be worthwhile to point out a different possible interpretation of the evidence available to both.

Of course, this is a particular challenge online, as many statements are not made to have a genuine discourse, but rather for signaling purposes, showing one’s tribe how good of a person one is by toeing the tribal line. Showing tribal allegiance is easy. It also typically doesn’t cost an individual anything, nor does it usually achieve anything. But the cost to the commons is big, namely tribalism. This is indeed a tragedy as we all do have to share this planet together, like it or not. As online discourse matures, it is my sincere hope that this kind of fruitless and toxic pursuit will be flagged as an empty attempt at signaling, discouraging individuals who engage in it from doing so.

There is a hopeful note on which to end: Taking the lessons of perceptual research seriously, we can transcend the tribalism our ancestors used to get us to this level, but which now prevents society from advancing further. Once we recognize that it is tribalism that is holding us back, we can use the different perspectives afforded by different people using different brains to get a higher-order imaging of reality than would be possible with any individual brain (as any individual brain necessarily has to take a perspective, see above). Being able to routinely do this kind of simultaneous multi-perspective imaging of reality would be the mark of a truly advanced society. Perceptual research can pave the way. If its lessons are heeded.

 

Posted in Neuroscience, Philosophy, Psychology, Science, Social commentary | 3 Comments

A primer on the neuroscience of happiness

The age-old question of what makes for a happy life is of great interest to almost anyone who is in fact alive. A classic answer, building on Aristotelian notions of happiness, is provided by Charles Murray, who points out that lasting life satisfaction is likely to derive from vocation, family, community and faith. We can now add a fifth element to these considerations, namely the neuroscience of happiness.

That is not to say that studying neuroscience will necessarily provide happiness. But there is mounting evidence as to what will *not* lead to increased human happiness.

Happiness is often symbolized by depicting people with arms spread out, people jumping and people looking into the sun.

As a matter of fact, perhaps the most important insight from neuroscientific considerations is a conceptual one. The brain uses happiness as a means to an end: getting the organism to do the right thing. It is by its very nature transient and elusive. Neither brain nor reality is designed to make happiness easy to come by, nor to make it last. That would almost certainly interfere with survival and reproduction goals. Ironically, this makes happiness *more* valuable and desirable, as evinced by the number of talks and books on the subject. If it were easy, there would be no need for anything more to say or do, as everyone would just be happy.

Of course, if one is really skilled in the ways of the sage, one can override and sidestep this programming. However, *simple* solutions like acquiring more stuff will not work.

Why? Because if they did, you would already be happy. One simply cannot expect improvements in material wealth to improve one's subjective happiness in the long term. On the contrary.

Consider this: If you are reading this, it is likely that you are living in a magical palace that would have been unimaginable to most of your ancestors, even if your living quarters are modest by contemporary standards. Look around you. You can effortlessly turn on lights with the flick of a switch, at any time of the day or night. You have fairly well insulated windows that maintain a considerable temperature differential to the environment. You have a non-leaky roof over your head. Your walls are made of stone or a similar solid material. You command virtually endless quantities of potable water, both hot and cold, at a trivial cost. You also have a sewage system in place that makes waste disposal largely a non-issue. There are people collecting your trash, making this chore an afterthought. Your kitchen and household appliances replace a veritable horde of servants. Your refrigerator allows you to store large quantities of exotic foods from all over the world for long periods of time without spoilage. Your stove allows you to prepare warm meals without much smoke or serious danger of fire, to say nothing of the wonders of the microwave, which lets you do all that at the push of a button and in no time. Radiators and air conditioners allow you to keep this place at the desired comfortable temperature at all times, regardless of outside conditions. The most humble TV and radio seamlessly connect you to cultural content and information from around the world, again – remotely. The list is endless.

And this is just one aspect of your life, your living quarters. From the perspective of your ancestors, you live in unimaginable luxury. Yet, you take all of this completely for granted and most likely never give it a second thought.

But there is more. How did you afford this place? Almost certainly not by doing backbreaking work. Not in this day and age. Nor do you likely fear losing it all to war or natural disaster. While these things still do happen, it is major news when they do, and there are local, regional, national and international relief efforts waiting in the wings should disaster strike.

As a matter of fact, in this age of globalization, you are almost certainly doing business with almost everyone on the entire planet on a daily basis, making seminal contributions to the promotion of peaceful commerce, cooperation and trade without even being aware of it. Due to the way the system is set up, this all magically happens just by going about the things you do. Going to work, shopping for groceries, etc.

On that note, if you are reading this, you are also likely to own a smartphone. This “phone” connects you – at a modest fee – to virtually every other human on the planet as well as to the combined knowledge of civilization since time immemorial. Wirelessly, without cords of any kind, for long periods of time. On top of that, it can carry electronic versions of thousands of books and songs, it can play videos and – by the miracle of apps – double as a flashlight, mirror, wallet, notebook, voice recorder, photo album, map, calculator, camera and video camera, among many other things. Yet it fits into your hand.

How much do you think such a device would have been valued at a few short decades ago, for instance in the 1970s, if it had even been conceivable?

Now, shortly after their arrival on the world scene, smartphones mostly impact human happiness *negatively* – when something doesn't work quite as expected. Which is the first hint that expectations are the real driver of human happiness. If you expect a mapping app to get you to your appointment and it doesn't, you will feel let down. Naturally.

You should be in awe. You really ought to be amazed. Every day, all day. But you are not. Far from it. That would get in the way of achieving your (evolutionary) goals.

Essentially, you fell victim to a confusion of timescales. This is not uncommon. Most people assume that what will make them feel good in the short term (rewards) will make them feel good in the long term. But that is not the case. It’s a trap. If you go down that route, you will just need ever larger and ever more varied rewards just to feel the same level of satisfaction. Which might be good for the continued growth of GDP, but not necessarily for you.
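
A toy model of this trap – my own illustration, not an established result – treats felt satisfaction as the gap between what you get and what you have come to expect, with expectations adapting toward recent rewards. Under those assumptions, a constant stream of rewards quickly feels like nothing, and only ever-growing rewards keep the feeling alive:

```python
def simulate(rewards, adaptation_rate=0.3):
    """Felt satisfaction = reward minus adapted expectation (a simple prediction-error model)."""
    expectation, felt = 0.0, []
    for reward in rewards:
        felt.append(reward - expectation)                        # what it feels like
        expectation += adaptation_rate * (reward - expectation)  # expectations catch up
    return felt

constant = [10.0] * 12                           # the same reward every period
growing = [10.0 * 1.2 ** t for t in range(12)]   # rewards growing ~20% per period

print([round(f, 1) for f in simulate(constant)])  # shrinks toward zero
print([round(f, 1) for f in simulate(growing)])   # stays positive, but needs ever more input
```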

The point is that unimaginable increases in the standard of living over rather short periods of time did not make you any happier than people living a couple of generations ago. It is naive to assume that future increases in the standard of living will raise human happiness as a whole – at least not in absolute terms. There is evidence that material possessions can impact relative happiness for social reasons, for instance making you happier if you have a bigger house or TV than your neighbor, or making you feel bad if your phone is not as advanced as that of your colleague. But that won't change the level of happiness of society as a whole.

The neuroscience of happiness

Of course, this topic is vast and cannot be adequately covered by less than a treatise. So a short primer will have to suffice for now. Here it is.

Posted in In eigener Sache, Neuroscience, Philosophy, Psychology, Social commentary, Technology | 1 Comment

The consolation of temporal perspective

Few things are more discouraging and galling to the righteous than the raging success of the obviously undeserving and unworthy. This can be particularly dispiriting early in life. The wise will recognize that virtue and non-virtue have fundamentally different time constants. Lack of virtue is eventually its own undoing. The catch lies in the “eventually”. It might take a while. But it will happen, due to the inherent nature of virtue and the lack thereof. Assuming history is ergodic.

And therein lies the consolation.

Virtue and non-virtue have different time constants.

That this is necessarily so stems from it being a corollary of the notion of a great filter. If great power does not go hand in hand with an equally great sense of ethics and self-restraint, it will ultimately prove self-destructive. This can be observed in many systems, be they civilizations, technologies, lottery winners or celebrities. The key is to minimize the damage – perhaps by non-association – when the inevitable collapse happens. Also, to stick around. Given the above situation, this is not easy, but it is unavoidable if one doesn't want to end on a down note.

Put differently, the path of virtue is long and arduous, but sustainable. Taking shortcuts helps those who are lucky in the short term, but devastates them in the long run all the same. Solace is provided by Warren Buffett's life story, which essentially shows that exponential growth can add up to one giant and irresistible snowball.
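
The snowball arithmetic is easy to check; the annual return below is an assumption for illustration, roughly in the ballpark often cited for long-run compounding records:

```python
# Compound growth: modest, sustained returns dwarf lucky short-term wins.
annual_return = 0.20   # assumed ~20% per year, for illustration only
for years in (10, 25, 50):
    multiple = (1 + annual_return) ** years
    print(f"After {years} years: {multiple:,.0f}x the starting capital")
```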

I would like to end this piece by pointing out that I do not believe the great filter to be a plausible explanation of the Fermi paradox. Mitochondria are a much more parsimonious explanation, lending credence to an “early filter”. Then again, parsimony ought not necessarily be invoked when explaining complex phenomena.

Posted in Philosophy | Leave a comment

SfN 2013 in San Diego

This post will document my annual pilgrimage to SfN. This year (as in 2004, 2007 and 2010), it will take place in San Diego.

The San Diego Convention Center. Hallowed halls.

See here for how I prepare for the event and how I recommend going about it.

Dealing effectively with SfN is a daunting challenge, equivalent to running two marathons and likely to get you physically sick if you don’t do it right.

This year, I’m presenting on neural correlates of the line motion illusion.

The line motion illusion

In other news, the dynamic posters have finally arrived. I am glad that I was on the leading edge of this development. Of course, future generations won't be able to appreciate this, as – for them – it will have always been this way.

The dynamic posters have arrived.

Also, the second – “all black” – edition of “Matlab for Neuroscientists” has arrived, now including LFP analysis, GUIs, parallel computing, etc.

The second edition


Posted in Neuroscience, Science | Leave a comment

You really do need to sleep right

Two years ago, I wrote extensively about why getting sufficient sleep is crucial to a good life and how to go about establishing sufficient levels of sleep quality and quantity.

Since then, the situation has – if anything – gotten even more dire. Despite an overhyped “quantified self” movement, few people seem to actually be serious about self-monitoring, so ZEO went out of business, leaving the market with only rather undesirable options for sleep tracking that deserves the name.

Culturally, there has also been no change in terms of sleep appreciation. High-powered movers and shakers celebrate people who claim that they adopted the habit of only sleeping every other night or that they get a headache when they sleep for more than four hours. Of course, these were confabulations to intimidate the competition – sleep is not just a fungible habit, but a fundamental aspect of physiology. Some aspiring masters of the universe have already paid the ultimate price for ignoring basic physiology. Widespread elite failure has been one of the more disappointing aspects of living in these times, and an inadequate sleep culture might well have contributed to it.

I do understand the desire to get more done in a given day, but it is important to recognize that the ability to stretch the day and compress the night by technology (mostly light and stimulants) is not a winning strategy in the long term. As laid out before, lack of sleep doesn’t just make one more tired. It also affects – negatively – every single aspect of cognition (including decision making and creativity) and emotion (motivation, emotional stability, etc.) as well as somatic integrity (aging, immune function, etc.) that has been studied. This includes virtually all features of proper bodily function including counter-intuitive aspects such as bone mineral density, which seems to be rather negatively affected by bad sleep habits.

Sleep disorders are so strongly associated with mental health problems such as depression and neurological disorders like dementia that many now suspect lack of adequate sleep to play a causal role in the genesis of these problems.

Recent research (that has come out since my 2011 piece) strongly corroborates this view.

First, there seems to be nothing “light” about light whatsoever: It has now been established that aberrant light cycles can *directly* (even if the overall amount of sleep is preserved) increase depression-like behaviors and the release of stress hormones (with an attendant decrease in cognitive function).

Second, while it has long been suspected that sleep is somehow involved in “restoring” brain health per se, it has now been shown that this is actually the case. Specifically, brain cells seem to be much smaller during sleep, increasing the interstitial space and leading to a dramatic increase in fluid exchange. This – in turn – seems to remove potential toxins like beta-amyloid from neurons. Put differently, lack of sleep might quite literally be neurotoxic, with a potentially increased long-term dementia risk.

Sleep, from the perspective of the brain?

In this view, sleep has mostly a dishwasher/brainwasher/sewage plant function for the brain. This makes sense, given how conserved and widespread sleep is throughout evolution, even in animals not renowned for their cognitive prowess.

No one likes to be told what to do, particularly when the advice is perceived as limiting. However, in light of this evidence, it might be advisable to – finally – get some. Neurotoxicity and dementia are not something to play around with. We have previously discussed how barbaric the past appears to us. How could people possibly live like that? Our inadequate sleep culture is high on my list of things that are likely to evoke the same reaction a thousand years hence, once the critical value of adequate sleep for wellbeing is more widely recognized.

Skimping on sleep is not glamorous. It is a barbaric practice that likely imposes unaffordable long term costs on all involved.

Posted in Neuroscience, Optimization, Science, Social commentary | 2 Comments

The paradox of progress

I often wonder how people managed to get by a thousand years ago, without effective anesthetics or antibiotics or even a fundamental understanding of the underlying causes of illness and disease.

However, I realize that people a thousand years from now will wonder the exact same thing about us. For instance, we still don’t have effective antivirals, cancer treatment outcomes have largely stalled in past decades and our available “antidepressants” (as well as psychotropic agents in general) are woefully inadequate.

Put differently, not only do we have a long – and increasingly hard – way to go (the low-hanging simple fruit having been plucked a long time ago), our advances might even leave us worse off in the meantime.

For instance, the Mongols were entirely ignorant of even the most fundamental tenets of medical science – attributing human ailments to animal spirits instead – so this entire aspect of human existence didn't garner much attention. One goes about one's daily life; if one gets sick, one can pray and hope for the best; and should one succumb to one's illness, well, perhaps that is just fate.

In contrast, we are in a tantalizingly different position. We know that genes play a big role in health and disease, we can now even read individual genomes. However, we are far from being able to interpret the role of individual genes, let alone understand their expression patterns (via epigenetics) or manipulate them if they are broken. Note the contrast: We know that being able to manipulate individual genes would be crucial, but we can’t do it.

Neuropsychology is in a similar position. We know that the state of the brain matters. We can even correlate individual lesions with striking mental deficits, such as the attention deficits seen in conditions like hemineglect or Balint's syndrome. And that's where the state of the art ends. We are much better at diagnosing these conditions than at being able to do anything about them.

The promised land, as seen from the desert. It is a far way off.

This state of affairs isn’t unique to neuropsychology either – it basically characterizes the situation in almost all of the neural sciences. The mongols didn’t appreciate the importance of brain chemistry. We do, but there is preciously little we can do about it. For instance, ADHD and ADD are fairly common and it is a good bet that dopaminergic neurotransmission is impaired in these conditions. However, we can’t be entirely sure what is wrong in an individual case and even drugs effective at modulating dopaminergic neurotransmission, e.g. Ritalin *downregulate* the expression and sensitivity of dopamine receptors in the long term. It also tends to interfere with other cognitive systems, such as memory in susceptible individuals. I have no doubt that in the long term, we will develop much more selective agents that work with the individual biochemistry and *up*regulate dopamine receptors in these people.

So we are basically in the position of Moses, wandering around in the desert for decades on end. We know the promised land exists and that we are on our way, but the path there will be hard and we will probably not live to see it (while enduring all the suffering along the way). That is a truly harsh fate – having come so far only to realize that true progress eludes us further yet. Future generations – those living when the progress has been realized – will shake their heads and wonder how anyone could live under such clearly inadequate conditions.

Again, the Mongols didn’t have this particular problem. I think we might be well advised to develop philosophical coping mechanisms that let us deal with our peculiar and – I would argue historically unique – position: Due to the very progress we made, we know that our available treatment options are far from ideal for many (if not most) conditions. We have come a long way, but at this point, it mostly just highlights how far we have yet to go.

Posted in Philosophy, Science, Social commentary, Strategy | 1 Comment

On the importance of consistent mapping

The problem I’m about to write about has been persisting for quite a while and I thought Google would have fixed it by now. Alas, no such luck, thus far.

In a nutshell, we have been aware of the extreme importance of consistent mapping to learn automatic, efficient and error-free behavior ever since Schneider & Shiffrin (1977).

Briefly put – although this differs from the particular experimental design of Shiffrin & Schneider – it is crucially important that the same symbol consistently has the same meaning and is not sometimes associated with other meanings.

And this is the problem with the current way Gmail uses the trashcan symbol. It can at the same time be used to delete an entire thread (possibly consisting of hundreds of individual emails) as well as to discard a recent – and unsent – individual draft of a message.

"Delete" refers to moving the entire thread - which could contain hundreds of messages - into the trash. "Discard draft" refers to one - as of yet unsent - message.

This is a concern because, while the mouseover disambiguates the symbol, users learn an automatic mapping (e.g. assuming that the trashcan discards a draft), so the mouseover never comes up in efficient mass-email handling. This leaves open the possibility that users accidentally delete entire email threads when they simply want to discard a draft. I know it has happened to me, repeatedly.
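
A hypothetical sketch of the design principle – none of these names come from Gmail's actual code – contrasts consistent mapping (each symbol triggers exactly one action, so an automatized habit cannot misfire) with the ambiguous mapping described above (the same trashcan icon dispatches to very different actions depending on context):

```python
# Hypothetical UI sketch illustrating consistent vs. ambiguous symbol-action mapping.

def delete_thread(thread):
    print(f"Deleted entire thread ({len(thread)} messages) -- destructive!")

def discard_draft(draft):
    print(f"Discarded one unsent draft: {draft!r}")

# Ambiguous mapping: one icon, two very different actions, chosen by context.
# A user who has automatized "trashcan = discard draft" can silently delete a thread.
def on_trashcan_click(context, thread, draft):
    if context == "compose":
        discard_draft(draft)
    else:
        delete_thread(thread)

# Consistent mapping: every symbol means exactly one thing, everywhere.
consistent_icons = {
    "trashcan": delete_thread,            # always (and only) deletes the thread
    "crossed-out-pencil": discard_draft,  # always (and only) discards the draft
}

thread = ["msg"] * 250
on_trashcan_click("thread-view", thread, "half-written reply")  # the habit misfires here
consistent_icons["crossed-out-pencil"]("half-written reply")    # unambiguous by design
```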

This is a particular concern for Gmail because its whole concept rests upon the notion that one should never have to delete an email. However, because the system allows for accidental deletion, the integrity of the entire corpus is at risk.

Google is usually good about user interface design, but not in this case. Sadly, the ambiguity affects a critical function that can't be undone after 30 days (and probably won't be undone in time if the deletion happened accidentally).

The good news is that this ought to be an easy fix. And should only be remembered as a cautionary tale.

Posted in Misc, Optimization, Pet peeve | Leave a comment

Data were analyzed using Matlab…

It is important to use the right tools for a given job. Science is no exception. In particular, given the vast amounts of data that are now routinely encountered in the field, one will want to use the best available data analysis tools (by whatever metric one prefers – ease of use, speed, efficiency, versatility, etc.)

In neuroscience, there is a prevailing sense that MATLAB currently dominates the market for analysis tools, but that Python has a lot of momentum.

Is Python in the future of brain research?

To get an empirical handle on this, I decided to search Google for a stock phrase employed in the vast majority of methods sections of papers (“Data were analyzed using x”), replacing x with a variety of modern – and presumably commonly used – analysis tools. For the sake of completeness, I also searched for “Data were analyzed in x” and “Data were analyzed with x”, then added the counts up (although the vast majority of phrases included “using”, not “with” or “in”). And yes, this is the passive voice. Most scientists are about as well trained in writing as they are in programming…
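
The tally behind the figure amounts to a few lines of bookkeeping. A sketch (in Python, fittingly enough): the per-phrase hit counts are placeholders to be filled in by hand from the search results, and the tool list can be extended at will.

```python
# Build the stock phrases and sum the hit counts per analysis tool.
tools = ["Matlab", "Python", "Octave", "Julia"]  # extend as desired
templates = [
    "Data were analyzed using {}",
    "Data were analyzed in {}",
    "Data were analyzed with {}",
]

# Hit counts per exact phrase, entered by hand from the search results (placeholders here).
hits = {template.format(tool): 0 for tool in tools for template in templates}

totals = {tool: sum(hits[template.format(tool)] for template in templates) for tool in tools}
grand_total = sum(totals.values()) or 1  # avoid division by zero while counts are placeholders

for tool, count in sorted(totals.items(), key=lambda item: -item[1]):
    print(f"{tool}: {count} hits ({100 * count / grand_total:.1f}% of all hits)")
```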

The results are below and they strike me as surprising, to say the least. A whopping 8 (in words, eight) hits for Python, 5 for Octave and none for Julia.

Results of the Google search (as of 09/26/2013). The slice representing Python, Octave and Julia together is too small to be visible. Aptly, the data underlying this figure were analyzed using Matlab.

So what is going on? Are scientists – despite all the enthusiasm for Python, Octave and Julia – not actually using these tools in published papers? Is there some systematically nuanced grammar usage that I am missing?

Regardless of the validity of these particular results, there can be no question that Matlab has cornered the analysis market these days (at least in neuroscience – I presume the heavy use of SAS and Stata takes place in other fields).

Ironically, this is cause for concern. Success leads to dominance. Dominance leads to a sense of arrogant complacency that is not warranted in the field of technology. Just ask Nokia or the ironically named “Research in Motion”, ill-fated maker of the Blackberry. Once a competitor has gained momentum because the monopolist missed “niche” developments, it is almost impossible to halt it.
To date, MathWorks has completely missed out on capabilities for online deployment of code. It is quite disgraceful actually, as this is now routinely done in Python and R. Does MathWorks have to be shamed into doing the right thing on this?

Finally, I hope we can move beyond primitive tribalism on this. I do understand that it comes naturally to people and that it is ubiquitous – be it with regard to computers (Mac vs. PC), cell phones (Android vs. iPhone), sports, etc.; however, this kind of brutish behavior has no place in science. All that matters is that one uses a suitable tool for the job at hand so that one can do the science in question and hopefully move the species forward a bit. Moreover, it is understandable that any self-respecting programmer can't have things be too easy or straightforward. Otherwise, anyone could do them. That might indeed be the chief problem of Matlab.

Seriously – it doesn’t matter as much which language you use to program as long as you are in fact programming. There is a simple reason for that: The success of western civilization allows for a second – heavily incentivized – route to rewards, namely social engineering (by hacking some fairly primitive tribalist circuitry). So the waves of BS can rise ever higher. But programming has to work. So the BS can only go so far. And we need more of that. More reality checks (in the literal sense), not more BS. We have too much of that as it is.

Posted in Matlab, Neuroscience, Psychology, Science | 8 Comments

A more general relationship between relevance and rigor

Recently, SMBC (one of the few webcomics still worth reading, as he somehow manages to be uncorrupted by his own success) posted another inimitable offering.

Except that in this case, it is actually perfectly imitable. This kind of thing can be done for any number of fields, including psychology and neuroscience. The advantage of doing something like this in a systematic fashion is that one would be able to gauge – by the very reaction to it – how defensive or self-confident a given field is. For didactic purposes, I'll start with low-hanging fruit.

An obvious retort to this post is that it is extremely derivative. This is true. That doesn’t change the fact that Economics is in good company. Put differently, almost every field faces a tradeoff between the sexiness of the question under study and the availability of rigorous methods to study it, as the two seem to be inversely correlated. Like so:

Come to think of it, this relationship might be generally true across fields and empirically testable. Perhaps this is the only question that is – at this point – both sexy and tractable.

But I might be biased.

Posted in Philosophy, Science | Leave a comment

Superior motion perception in individuals with autism?

The empirical evidence seems to contradict Betteridge’s law.

For the past 10 years, research on the “spatial suppression effect” has shown that large moving stimuli are *less* readily perceived than smaller ones.

Most people would suggest that the larger stimulus should be easier to see, but the data show that the opposite is the case empirically: large stimuli need to be presented for a longer time to be perceived as accurately as small ones.

However, this relationship doesn’t seem to hold in certain populations, such as those with a history of depression or lower IQ.

These results have been explained by a lack of inhibitory tone in these populations. It has also been suggested that GABAergic tone is reduced in autism. The prediction from this line of research would therefore be that spatial suppression is reduced in autism as well.

Yet, this is not the case empirically. Larger stimuli *were* more readily perceived by autistic observers, but so were small ones. It is uncanny how good their motion perception was. Just a few frames of motion were sufficient for reliable identification of motion direction – blink and you’ll miss it.

Individuals with autism see both small and large moving stimuli faster

See here for a more comprehensive writeup: http://www.jneurosci.org/content/33/37/14631.full?etoc

On a final note, it is time to transcend singular lenses on autism. In the spirit of this excellent piece:

http://www.nimh.nih.gov/about/director/2013/the-four-kingdoms-of-autism.shtml

Its only drawback is the title. These are not kingdoms. They are positions or perspectives. And it is crucial to transcend a singular one. The issue is just more complex than that.

Posted in Psychology, Science | Leave a comment

Local and global connectivity – a tale of two datasets

The original images were generated from facebook friendship data as well as data on scientific collaborations from Elsevier’s Scopus. The map of scientific collaborations was itself inspired by the facebook map, so a direct comparison seemed interesting. Note that – as far as we know – most brain regions (particularly in early sensory areas) exhibit a connectivity pattern that is quite similar: mostly local connectivity with some long-range connections. In this sense, the external social network seems to replicate the internal one.
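
As an aside, that “mostly local with a few long-range shortcuts” motif can be illustrated with a toy small-world graph; the sketch below uses a Watts-Strogatz graph as a stand-in, with parameters chosen purely for illustration rather than fitted to either dataset.

```python
# Toy illustration of "mostly local connectivity with some long-range connections".
import networkx as nx

n_nodes, k_local, p_rewire = 100, 6, 0.05   # illustrative values, not fitted to anything
g = nx.connected_watts_strogatz_graph(n_nodes, k_local, p_rewire, seed=1)

# High clustering (tight local neighborhoods) combined with short average path
# lengths (thanks to a few long-range shortcuts) is the small-world signature.
print("average clustering coefficient:", round(nx.average_clustering(g), 2))
print("average shortest path length:", round(nx.average_shortest_path_length(g), 2))
```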

Comparing scientific collaborations and facebook connections

Above: Scientific collaborations. Below: Facebook friendships

The image speaks for itself. Neither network is a strict subset of the other, and there are interesting systematic differences. The scientific map basically seems to be a mapping between “global cities”.

And they say that there is no such thing as a social scientist…

Update: Flight paths seem to exhibit similar patterns as well.

Posted in Science | 1 Comment

The current mental health crisis and the coming Ketamine revolution

Few FDA-approved drugs have a reputation as controversial as Ketamine. This reputation is well earned. Originally developed in the 1960s as a short-acting anesthetic for battlefield use, in recent decades it has become notorious as a date-rape drug (‘Special K’)*, a club drug (‘Vitamin K’) and for its use in veterinary medicine (‘horse tranquilizer’).

However, I venture to bet that Ketamine is about to be rehabilitated for legitimate human uses. Here is why.

The reason for this belief is a most severe crisis in mental health and our approach to treating mental illness. It is not polite to say this, but given the gravity of the situation it needs to be said: It’s not working out. Not really.

Certainly a strong statement, but this sobering reality is becoming increasingly apparent. Every year, upwards of 10 million people experience a major depressive episode in the US alone. The number of people who never see a professional – and are thus never diagnosed – is likely much higher. Each of these episodes can be expected to last for months to years and is characterized by utter misery; a substantial number are terminal (suicides in the US outnumber violent non-self gun deaths by about 3:1). Currently available “antidepressants” (a term I would use only in quotation marks, as most struggle to beat placebo except in the most extreme cases and might appear statistically more effective than they actually are) – these days usually a selective serotonin reuptake inhibitor (SSRI) – can be expected to help one in ten people, and only after a trial period of 4 weeks to several months. Of course, if the depression happens to coincide with an underlying bipolar tendency, treatment with an SSRI will likely trigger a dysphoric mania. In other words, one will never reach the end of the interval that the SSRI needs to work (if it ever does). Instead, one will – at best – need to be hospitalized. So-called “treatment-resistant depression” (in reality, depression that didn’t respond to treatment with a few SSRIs or SNRIs, as MAO inhibitors and tricyclics have largely fallen out of favor due to their side effect profiles) is either allowed to take its course (with devastating consequences for the individual) or treated with electroconvulsive therapy (ECT). ECT is remarkably effective in addressing this kind of depression, but the “side effects” (more aptly called treatment effects), like deleterious memory loss, are a steep price to pay.

This state of affairs is obviously unacceptable. The good news is that it is increasingly being recognized as unacceptable. The National Institute of Mental Health (NIMH) recently announced that it would no longer fund research based on diagnostic criteria as outlined in the DSM. Psychiatric disorders are the only medical conditions that – for historical reasons – are diagnosed entirely based on subjectively reported symptoms. Doing this for any other condition, e.g. cancer or infectious diseases, would obviously be absurd. Molecular biology has seen to that. In the 21st century, this simply will no longer do, and the NIMH is essentially willing to start over from scratch.

Enter Ketamine. When Ketamine was used as a battlefield anesthetic in Vietnam and the first Gulf war, it was anecdotally noted that wounded soldiers treated with it developed far fewer cases of PTSD than soldiers with similar injuries who were treated with other anesthetics. A couple of years ago, systematic studies showed that the vast majority of patients suffering from major depressive disorders made a rapid recovery when given a low, sub-anesthetic dose of Ketamine (most studies show a dose of 0.5 mg/kg to be effective).

Ketamine

The clinical effectiveness of Ketamine in these scientific reports sounds too good to be true, particularly when compared with anything else on the market. The effects take hold within days (if not hours, compared with weeks to months for SSRIs), they seem to work for most people, there seem to be few discernible side effects (bladder issues seem to be a concern, but mostly at “recreational” doses and frequencies of use), in stark contrast to – for instance – ECT, and it seems to be equally effective in the treatment of bipolar depression (which is notoriously hard to treat).

So what is the catch? Given the crushing disease burden inflicted by major depressive disorders, the question of why this treatment is not readily available arises immediately.

The issue does not seem to be primarily medical in nature. Ketamine *is* a dissociative anesthetic, so the immediate effects on conscious experience are rather extreme. Based on reports from people who have received low, sub-anesthetic doses of Ketamine, the Kantian a priori categories of space and time seem to unravel shortly after the injection. They report that it becomes obvious that shared reality is a construct – brought about by the normal operation of the brain – but that Ketamine suspends this normal construction process, allowing for different reconstructions of reality. At higher doses, the reports speak of “leaving flatland” and becoming aware of higher-dimensional objects that only appear to be separate when projected onto a low-dimensional space (such as the one we commonly perceive). Whether these experience reports sound scary or intriguing, there is no question that the experiences do not last very long. Given the short half-life of Ketamine and depending on individual metabolism and route of administration (IM or IV), the dissociative effects last for an hour or two, not longer.

Dissociative symptoms from Ketamine

One of the most remarkable figures I’ve ever seen. Dissociative symptoms over time. x-axis is nonlinear. Differences at all time points other than the 40 minute mark are not significant. Adapted from Diazgranados, Nancy, et al. “A randomized add-on trial of an N-methyl-D-aspartate antagonist in treatment-resistant bipolar depression.” Archives of general psychiatry 67.8 (2010): 793.

It is also reassuring that the patients could *answer* the CADSS, so they couldn’t have been too far gone. Intriguingly, while there was no significant difference between placebo and ketamine groups except at the 40 minute mark, *all* the mean scores of the placebo group seem to be slightly above those of the ketamine group. The CADSS features items like: “Do objects look different than you would expect?”. It is not inconceivable that the experience properly calibrated the scale for the Ketamine group.

Obviously, there are no studies on long-term effects in humans at this point, but similar studies in monkeys are encouraging. In animals that received comparable doses on a daily basis, the dose had to be relatively high and the period of exposure relatively long before impairment could be demonstrated. Moreover, the doses involved in the treatment of pain are usually considerably higher, without reports of long-term ill effects. To be clear: *any* potential neurotoxicity is obviously cause for serious concern. However, all medical decisions involve tradeoffs. Depression in itself is increasingly linked to neurotoxicity. Moreover, the spectre of neurotoxicity can lurk where one least expects it, e.g. from antibiotic treatments. In addition, Ketamine seems to potentially *reverse* depression- (or stress-) induced brain damage via synaptogenesis. The moral of the story is that one should not embark on a course of medical treatment unless the expected upside (far) exceeds the expected downside.

Of course, there is a rub. More than one, actually. The two biggest and largely uncontested issues seem to be an unclear mechanism of action and questionable sustainability.

The first issue is that we do not really know or understand how Ketamine brings about its antidepressant magic. Many theories are currently being explored in active research. Some of the hottest trails are NMDA antagonism, neurogenesis and dendritic sprouting. Personally, I believe there might be something to the notion that one is “growing a prefrontal forest”, strengthening prefrontal networks that are in turn able to better quench aberrant activity originating in evolutionarily older structures (amygdala, limbic system, etc.). One way or the other, glutamate seems to be involved. While it may sound unsettling that we do not understand the mechanism of a treatment, this situation is far from unique. As a matter of fact, we do not understand the mechanism of action of *any* anesthetic drug. Similarly, there is plenty of evidence that even SSRIs don’t work the way we thought they did. There is mounting empirical support for the notion that the serotonin action is largely incidental, and that neurogenesis is really behind their therapeutic effects.

The second issue – that of sustainability – is more serious. Curiously, the effects don’t seem to last. While Ketamine can rapidly pull someone out of a serious depression, the depression seems to return in time, requiring a “booster” shot to banish the demons again, if only for a while. The time until relapse differs from individual to individual, ranging from weeks to months, but it is intriguing that there is a time constant at all.

However, there are many chronic diseases that require daily administration of medications, including injections. In this regard, Ketamine isn’t even all that different from most other antidepressants, which are – more often than not – a pretty permanent deal. Many have to keep taking them, for fear of relapse. The real reason why these treatments are not more readily available seems to be economic in nature.

Ketamine is an extremely cheap drug, as it has been off patent for almost 40 years. Put differently, there is no money to be made here. The FDA is tasked with protecting the public from harmful treatments. Thus, the approval process is lengthy and costly. In reality, only a major pharmaceutical company has the financial resources to spearhead the approval of a drug. In the case of Ketamine, this is unlikely to happen, as these companies could never realistically expect to recover their expenses. To be clear, Ketamine already *is* FDA approved, but not for the treatment of depression. There are already some courageous pioneers who will administer Ketamine today (for a king’s ransom) in its off-label use for depression. This is not in itself unusual. Many drugs – once FDA approved – are prescribed for off-label uses. For instance, Modafinil was initially approved for the treatment of narcolepsy. Today, the vast majority of prescriptions are not issued by neurologists for narcoleptics, but rather by primary care physicians for people who feel a little tired – or simply because people want to use it. So in principle, Ketamine already is available for the treatment of depression, off-label. However, the overwhelming majority of psychiatrists are unlikely to touch a drug that is a PCP derivative and needs to be injected, no matter how effective. This problem is not unique to Ketamine. Once a drug has acquired a certain infamy, it is hard to change minds. For instance, Thalidomide is now being explored as a cancer treatment, precisely because it is such a potent inhibitor of angiogenesis. But that’s a hard sell, given its historical record.

Where does this leave us? In an uncomfortable (as millions are suffering right now), but hopeful position. The evidence for the antidepressant effectiveness of Ketamine (for whatever yet-to-be-understood reason) is so overwhelming that quite a few pharmaceutical companies are feverishly working on Ketamine analogues and delivery methods (e.g. nasal sprays) as well as alternative NMDA modulators that *can* be patented and would thus be worthwhile to put through the highly demanding FDA approval process. Preliminary results are so promising that one can reasonably hope to have truly effective antidepressants available within another decade or so. If this happens, a mental health revolution will be at hand. And it will be sorely needed. Having rapidly acting and unequivocally effective antidepressants widely available and covered by insurance (akin to antibiotics) will make all the difference.

Note: There is no question that Ketamine is a crude drug when it comes to addressing the pathology that underlies depression. Nevertheless, it is a promising and encouraging start, not necessarily the end. Further drug development will need to home in on the underlying biological target systems (which is why a mental illness classification based on biomarkers is so sorely needed). Moreover, it has not escaped my notice that this discussion has focused on the psychopharmacological aspects of depression. There are other aspects that are social, psychological and nutritional in nature, among others. The etiology is likely complex. Breakdown in social coping structures? Sedentary lifestyles? Overfeeding? Intense and chronic stress? Extreme social competition and comparison? Sleep deprivation? Light pollution? Hormone disruption? There certainly is a discussion to be had about these aspects, but not now. It will take a while for research to disentangle these causal links. Meanwhile, it is important to lighten the burden of disease. Thus, the focus here was deliberate, and we will save a deliberation of other factors for later.

On a final note, it is quite unsettling that virtually all truly effective treatments for mental disorders (e.g. Lithium, ECT, Ketamine, etc.) and psychoactive substances in general (e.g. LSD, Benzos, etc.) were discovered entirely by chance, by pure serendipity. Conversely, all mental health treatments *designed* to do a certain thing based on our current understanding of the nervous system (e.g. SSRIs, but not just SSRIs – one can always do worse) have basically failed to deliver. This suggests that we do not currently understand the nervous system very well. Recognizing this should make a strong case for pioneering (some call it basic, but there is nothing basic about it) neuroscience research.

PS: This is another installment in an ongoing series on how language really does matter. Most people suffering from depression would reasonably turn to antidepressants, not dissociative anesthetics, for help. But just because marketing calls them antidepressants doesn’t make it so (whatever one’s position, they are on average so ineffective that a vigorous debate about whether they are more effective than placebo is even possible. That is a debate no one felt the need to have regarding the effectiveness of penicillin. Regardless, there are so many vested interests involved here that the debate is likely to go on. So it is certainly premature to write “Listening to Ketamine”, even as people are sick of listening to overhyped BS. Yet the suffering involved is so severe that I remain hopeful that reason will prevail in the struggle to make depression history. The notion of having truly effective antidepressants available is a powerful one). As it turns out, the antidepressant effects of some dissociative anesthetics are in all likelihood much more potent than those of current “antidepressants”. Go figure.

Be that as it may, it is downright scandalous to have people suffer every day, some of them killing themselves, while a solution that is safe at the right dose is readily available, yet not allowed to be used. That this is even a possible – and ongoing – state of affairs does not instill confidence in the way things are in general. Downright scary, actually.

Update: The story has now hit the mainstream. Also, there is now solid evidence that repeated low dose administration of Ketamine seems to keep depression at bay, akin to maintenance ECT and without mounting side effects.

Update: In the piece, I expressed surprise at the fact that patients have not been more forceful in advocacy and outreach, demanding the FDA approval of this treatment on an emergency basis. This now seems to be happening. The website also compiles a – growing – directory of health care providers willing to administer Ketamine infusions. The suffering can be staggering: note this case of a woman who endured 273 (mostly bilateral) ECT treatments without an appreciable effect on her depression, but who experienced dramatic and sustained effects from low-dose Ketamine infusions on a maintenance schedule of one every 3 weeks. Note that dosage seems to be *critical*, with a very narrow therapeutic range between 0.4 and 0.6 mg/kg.

*It is possible that Ketamine acquired this reputation somewhat unfairly, as legislators might have confused it with the date-rape drug GHB when passing emergency legislation in August 1999 (classifying Ketamine as a Schedule III substance in the US). Interestingly enough, GHB is now legally available as a drug for narcolepsy, Xyrem.

Posted in Psychology, Science | 19 Comments

Dress images

TheOriginalDressImage

giphy

LandY

Posted in Uncategorized | Leave a comment

Can music elicit a visual motion aftereffect?

Briefly, if you look at a large moving scene for a while, you will experience things moving in the opposite direction afterwards. This “motion aftereffect” was already known to Aristotle, presumably from the visual inspection of waterfalls. It was rediscovered by Purkinje in the 19th century, on the occasion of witnessing a cavalry parade. Now, we were able to show that listening to ascending or descending musical scales produces a visual aftereffect in the expected (opposite) direction.

A cavalry parade, much like the one that inspired Purkinje and launched a thousand papers.

I anticipate getting a lot of grief for this, but I consider the study to have been executed in a valid and rigorous way, and it contributes to the often neglected study of multimodal effects, let alone multimodal higher-order effects. But they do exist.

Hedger SC, Nusbaum HC, Lescop O, Wallisch P, Hoeckner B (2013). Music can elicit a visual motion aftereffect. Attention, Perception & Psychophysics.

What is a motion aftereffect (MOA)? Have a look here or below.

MOA

To see a motion aftereffect, look at the red dot (Warning: Browser-dependent, the effect might be a little subtle if the animation renders in a choppy fashion.)
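
If the embedded animation doesn’t render smoothly in your browser, here is a rough sketch (not the original demo; all parameters are arbitrary) of a comparable display one could run locally: a grating drifts for a while and then freezes, at which point it may appear to creep in the opposite direction.

```python
# Rough motion-aftereffect demo sketch; all parameters are arbitrary.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

size, spatial_freq, drift_speed = 256, 6, 0.04     # pixels, cycles/image, cycles/frame
y = np.linspace(0, 1, size)
_, Y = np.meshgrid(y, y)                           # horizontal grating, drifting vertically

fig, ax = plt.subplots(figsize=(4, 4))
img = ax.imshow(np.sin(2 * np.pi * spatial_freq * Y), cmap="gray", vmin=-1, vmax=1)
ax.plot(size // 2, size // 2, "ro", markersize=6)  # fixation dot
ax.set_axis_off()

def update(frame):
    # Drift for the first ~15 s (450 frames at ~30 fps), then freeze the grating.
    phase = drift_speed * min(frame, 450)
    img.set_data(np.sin(2 * np.pi * (spatial_freq * Y + phase)))
    return (img,)

anim = FuncAnimation(fig, update, frames=600, interval=33, blit=False)
plt.show()
```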

Here are two figures that didn’t make it into the paper. Consider them as supplementary material. For an explanation of what the axes mean, etc. – see the paper itself. It won’t make much sense to go into that here without the context of the experimental setup.

The basic effect

Psychometric functions
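
For readers unfamiliar with the term, here is a minimal sketch (with entirely made-up numbers, not the data from the paper) of what fitting a psychometric function involves: a cumulative Gaussian is fit to the proportion of “upward” judgments, and the point of subjective equality (PSE) is read off; an aftereffect shows up as a shift of that curve.

```python
# Entirely made-up data: fit a cumulative Gaussian psychometric function.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Hypothetical test velocities (negative = downward) and the proportion of
# trials on which observers judged the test stimulus as moving upward.
velocity = np.array([-4, -3, -2, -1, 0, 1, 2, 3, 4], dtype=float)
p_upward = np.array([0.02, 0.05, 0.10, 0.25, 0.55, 0.80, 0.92, 0.97, 0.99])

def psychometric(x, mu, sigma):
    """Cumulative Gaussian: mu is the PSE, sigma governs the slope."""
    return norm.cdf(x, loc=mu, scale=sigma)

(mu, sigma), _ = curve_fit(psychometric, velocity, p_upward, p0=[0.0, 1.0])
print(f"PSE ~ {mu:.2f}, slope parameter ~ {sigma:.2f}")
# An aftereffect would show up as a shift of the fitted curve (and hence the
# PSE) after adaptation, relative to a no-adaptation baseline.
```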

Posted in Psychology, Science | Leave a comment

Bang or BAM? On respecting complex problems

There are simple problems that can be solved with a single bang. The task of understanding the (human) brain is not a simple problem. On the contrary, the classic quote

The brain, the masterpiece of creation, is almost unknown to us.

attributed to Nicolaus Steno in 1669 is – by and large – still very much true today. This is owed to the fact that brains – let alone human brains – are essentially unparalleled in terms of their complexity, both in their structure and in their function (the activity patterns they are able to produce).

If anything, the past 150 years or so of “modern” research on the brain have given us an appreciation for the magnificent scale of this complexity. A single synapse is awe-inspiringly complex. Each single neuron typically has thousands of such synapses (in addition to quite a few other functional parts). Each local circuit contains a plethora of many different varieties of neurons. In the interest of brevity, I will skip a few levels of organization here, but a whole (human) brain contains on the order of 100 billion neurons (or slightly fewer; regardless of the precise number, that is roughly 10 for every single person on the planet, or in the neighborhood of the number of stars in the galaxy), plus about an order of magnitude more glial cells (which might play functional roles as well), organized in intricate functional structures, constantly producing dynamic electrochemical activity patterns that in turn change the structure of the synapses and the neural connectivity that produced these patterns.
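
To put rough numbers on that scale (back-of-envelope arithmetic only, using the order-of-magnitude figures just quoted):

```python
# Order-of-magnitude arithmetic based on the figures quoted above.
neurons = 1e11               # ~100 billion neurons
synapses_per_neuron = 1e4    # "thousands" per neuron; take ~10,000 as a round number
world_population = 7e9       # roughly, at the time of writing

total_synapses = neurons * synapses_per_neuron
print(f"~{total_synapses:.0e} synapses in a single human brain")   # ~1e+15
print(f"~{neurons / world_population:.0f} neurons per person on the planet")
```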

From this concise outline, it should be obvious that understanding the human brain (or any brain for that matter) is not a simple problem.

A great deal has already been written about the Brain Activity Map project (BAM), so I will make this short.

As welcome as the money will be to researchers who have gotten used to funding rates in the single digits over the past decade, it is important to be realistic about what 3 billion dollars can buy you.

Depending on the number you use (total program cost, procurement cost, unit cost or flyaway cost), a single Northrop Grumman B-2 Spirit bomber costs on the order of 1-2 billion dollars. For the purposes of this argument, I think it not unreasonable to assume that 3 billion will buy you two operational ones. This is a conservative estimate. What matters for the purposes of this calculation is the ultimate cost to the taxpayer, which is – if anything – higher.

The US made 21 of them; 20 are left, after one crashed in Guam in 2008. Over 130 were originally ordered at the end of the cold war, but the order was cut back to 21 after the fall of the Soviet Union. Ever since they became available, they have participated in most major campaigns, e.g. the pacification of Yugoslavia in the late 1990s.

BANG or BAM? Can we understand the human brain for the cost of these?

As awe-inspiring as these marvels of advanced technology might be, and as unparalleled as their ability to rain down death and destruction with impunity is, they are designed to solve relatively simple (as in tractable and straightforward) problems.

To meaningfully understand the structure and function of the human brain, I think it will take money on the order of magnitude that would buy a fleet of B-2 Spirits large enough to blacken the sky. And that is just the funding aspect of it. Money will be necessary, but it won’t necessarily be sufficient. Could the Manhattan project have been pulled off without decades of “basic” (there is nothing basic about basic research) research and some fortuitous insights by Einstein himself? Probably not, no matter how much money one threw at the problem. There is a place for a “targeted science” approach in neuroscience, but I will focus on that in another piece.

It is not a bad thing to have challenging goals. It is also not a bad thing to spend money on research (effectively spending money on understanding the world around us). But it is important to be realistic about the magnitude of the challenge and the magnitude of funds devoted to it. There needs to be a balance. If not, money is either wasted, or one sets oneself up for failure, or both.

Human societies can accomplish a great deal if they put their mind to it (in close analogy to reliable computation with unreliable components). We got Buzz Aldrin to the moon and back, safely. But we could not protect him from being ravaged by depression for decades afterwards. Understanding the brain matters, and we do not understand it yet. Our ability to mitigate suffering originating from the human brain is at the very beginning. For instance, it has been suggested that current therapies for clinical depression (most based on manipulating levels of monoamine neurotransmitters) are beneficial for only one in ten patients.

We did build the B2 bomber. We do not yet understand the human brain. I think the latter can be done, but we need to get our priorities straight and have the scale of our challenges match the scale of our resources. This is a political discussion – the US annually spends hundreds of billions of dollars on defense. It is not a foregone conclusion that money spent on keeping world peace is necessarily misspent, but there are many competing policy goals. In times of scarce resources, nothing should be sacrosanct, all options should be on the table.

I’m all for getting serious about understanding the human brain. But in order to do so, we need to actually get serious about understanding the human brain.

What do we want? A bang? A big bang? A BAM? Or rather, a really, really BIG BAM?

If one wants to begin to understand the human brain or – with apologies to Churchill – reach the end of the beginning of understanding the human brain, one could argue that we need a really big BAM, or more than one.

From what we have already learned about the brain, it is clear that the gains in understanding are worth the tremendous cost and effort. However, it is equally clear that it won’t be possible to gain substantial further understanding on the cheap or quickly.

Adequate relative scaling matters. For prospects of success.

Does the brain deserve some respect?

Regardless: How much would an actionable understanding of brain function be worth to you? How much would you be willing to pay, as a personal share?

Update: It now seems that the numbers are in. Way to politicize neuroscience for $100 million. For comparison, people in the US paid 320 times that amount in overdraft fees in 2012 alone. It is not unreasonable to think in dimensions that scale with the magnitude of the problem, even if the absolute numbers start to get a little high. The total projected life-cycle cost for the F-35 program is just over 1.5 trillion dollars. The US built over 12,000 B-17 bombers to win WW2. They also built large numbers of B-24s, B-25s, B-26s and B-29s. It is all about how serious one wants to be about addressing an issue. An estimated billion people suffer from some kind of brain disorder right now, to say nothing of the untold billions who will suffer from one in the future. So even a multi-trillion dollar investment might not be a bad deal in terms of reduction of suffering, if it yields anything tangible. Can we afford it? Can we afford not to do this? Some people wonder how people in the dark ages got by without the benefit of understanding that the source of all their troubles lay with bacteria and viruses. I wonder if people in the future will ask the same question (mutatis mutandis) about us.
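
For concreteness, the comparisons in this update as rough arithmetic (figures as cited above, rounded, not freshly sourced):

```python
# Back-of-envelope comparison of the figures cited above (US dollars, rounded).
bam_initial = 100e6                      # initial BRAIN/BAM announcement
overdraft_fees_2012 = 320 * bam_initial  # "320 times that" in overdraft fees
b2_unit_cost = 1.5e9                     # mid-range of the 1-2 billion quoted earlier
f35_lifecycle = 1.5e12                   # projected F-35 program life-cycle cost

print(f"Overdraft fees (2012): ${overdraft_fees_2012 / 1e9:.0f} billion")
print(f"BRAIN budget in B-2 bombers: {bam_initial / b2_unit_cost:.2f}")
print(f"F-35 program / BRAIN budget: {f35_lifecycle / bam_initial:,.0f}x")
```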

That said, it is undoubtedly true that we are in urgent need of new tools to probe the brain, as our available methods are woefully inadequate for the task. And the new acronym – BRAIN – is also much improved. Hedged excitement as the most suitable way to move ahead?

Regardless, one needs to be cautious about overselling. The history of AI research provides a cautionary tale, showing that, in the long term, overly hyping something without being able to deliver inevitably leads to backlash, which leads to funding cuts (“winters“).

Posted in Science, Social commentary | 2 Comments

PSA: Your “sleep monitor” is probably anything but

As the “quantified self” (probably ill-named) movement gains steam, all kinds of apps that purport to measure important health-related physiological parameters are gaining popularity.

In principle, this development is to be welcomed: individual lifestyles and metabolisms are so heterogeneous across the population that most scientific studies on the matter are too noisy to offer more than very limited guidance for optimizing any individual lifestyle.

Having more data available is almost always to be welcomed, as it can inform decision making about lifestyle choices far beyond common sense (which is actually far less common than commonly assumed) and old wives’ tales. Moreover, the medical profession is more geared towards limiting downsides and minimizing suffering from acute harm than towards optimizing the upside of life. Preventative medicine is in its infancy.

However, these developments naturally also harbor significant risks. The only decisions worse than those made randomly are those based on bad data, as they tend to be systematically wrong, yet are defended with conviction.

Most scientists are properly trained in the gathering and interpretation of data, as this is how they make their living. As data collection and interpretation move into the mainstream, familiar (to scientists) concerns about the objectivity, reliability and validity of data come to the fore.

Giving a primer on research methods is beyond the scope of this piece (however helpful it might be). Instead, we will focus on one recent – alarming – development.

As we have observed before, language matters. Tremendously. This is also the case here.

In a nutshell, all devices that measure physiological parameters rely on proxies (which are what is actually measured). The validity of the measurement crucially relies on the tightness of the link between the proxy and the quality of interest. This can be tricky if the quality of interest is psychological or neurological in nature. For instance, current and past “lie detectors” are really only called that. In reality, they measure skin conductance changes, which are used as a proxy for sweat gland activity, which is used as a proxy for sympathetic nervous system activity, which is used as a proxy for the probability that someone is lying (because something is personally significant or unsettling). In principle, this is sound, as the autonomic nervous system is not under the voluntary control of most people. However, that’s a lot of proxies, so the link is tenuous at best.

The same is true for sleep, which is still for the most part a mystery. Even if you go in for a clinical “sleep study”, the way sleep is currently assessed is by “polysomnography“. Briefly, several sensors measure the electrical activity on the scalp (via EEG), muscular activity (via EMG), eye movements (via EOG), breathing rate, blood oxygenation and perhaps some other parameters. Of these, EEG, EMG and EOG – in combination – are most diagnostic of gross physiological states, such as sleep. For instance, an EEG that is dominated by high-amplitude, low-frequency waves (delta) characterizes deep sleep. But things can get tricky. The EEG patterns during REM sleep are relatively low amplitude and high frequency and can look quite desynchronized. To the untrained observer, it would be hard to distinguish REM sleep from the awake state based on the EEG alone. That’s where the other parameters come in. During REM, the EMG will show a lack of muscle tone (there is actually an active muscle paralysis going on) whereas the EOG will show characteristic rapid eye movements, both of which set it apart from the waking state. Similarly, it is hard to distinguish deep sleep from REM sleep if one were to look at the EMG alone. There isn’t much movement during either phase, even though these sleep states couldn’t be more different in any other way.

Polysomnography

The gist of this is that one really needs all three parameters in order to properly characterize sleep stages as they are defined now (as we don’t really understand sleep yet, I anticipate that further, perhaps neurotransmitter-based, metrics will come into common use in the future). At a minimum, one cannot forgo the EEG, as one won’t be able to distinguish REM from deep sleep without it. There is a reason it is called polysomnography. Many parameters need to be measured in order to characterize sleep.
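
To make the logic concrete, here is a toy sketch (not a clinical scoring algorithm; the features and cutoffs are invented) of why the three signals are needed jointly:

```python
# Toy rule-based stage call from three crude summary features (invented units).
def classify_epoch(eeg_delta_power, emg_tone, eog_rapid_movements):
    """Assign a coarse stage to a 30 s epoch.
    eeg_delta_power: relative delta power (0-1); emg_tone: muscle tone (0-1);
    eog_rapid_movements: whether rapid eye movements were detected (bool)."""
    if eeg_delta_power > 0.5:
        return "deep sleep (high-amplitude, low-frequency EEG)"
    if emg_tone < 0.2 and eog_rapid_movements:
        return "REM (wake-like EEG, but atonia plus rapid eye movements)"
    if emg_tone >= 0.2:
        return "wake or light sleep (EEG alone would be ambiguous)"
    return "indeterminate (more information needed)"

# An accelerometer effectively sees only something like emg_tone/movement,
# so REM and deep sleep -- both nearly motionless -- collapse into one bin.
print(classify_epoch(0.7, 0.10, False))   # -> deep sleep
print(classify_epoch(0.1, 0.05, True))    # -> REM
print(classify_epoch(0.1, 0.60, False))   # -> wake or light sleep
```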

And herein lies the rub. In order to measure the EEG, one has to get an electrode on the scalp. With the advent of wireless technology, brave companies like ZEO have pioneered this approach. While the results fall far short of full polysomnography – as one would do in a clinical setting – they are quite impressive. It is remarkable what modern signal processing can do with a single electrode. The correlations with polysomnographic recordings are high, suggesting that the measurements are reasonably valid. This kind of “at home” sleep measurement capability opens up the potential for all kinds of investigations, both for research and personal use.
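
As an illustration of the kind of feature one can extract from a single channel, here is a minimal sketch (synthetic signal, assumed sampling rate) that computes relative delta-band power, a crude indicator of deep sleep:

```python
# Synthetic single-channel EEG epoch; compute relative delta-band power.
import numpy as np
from scipy.signal import welch

fs = 256                                   # assumed sampling rate in Hz
t = np.arange(0, 30, 1 / fs)               # one 30 s epoch
rng = np.random.default_rng(0)
# Synthetic "deep sleep" epoch: strong 1.5 Hz delta plus broadband noise.
eeg = 40 * np.sin(2 * np.pi * 1.5 * t) + 10 * rng.standard_normal(t.size)

freqs, psd = welch(eeg, fs=fs, nperseg=4 * fs)
band = lambda lo, hi: np.trapz(psd[(freqs >= lo) & (freqs < hi)],
                               freqs[(freqs >= lo) & (freqs < hi)])
rel_delta = band(0.5, 4) / band(0.5, 30)
print(f"relative delta power: {rel_delta:.2f}")   # close to 1 for this epoch
```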

But this technology is not cheap and however important sleep might be, most people turned out to be reluctant to shell out much money for its measurement. Moreover, the ZEO device necessarily required a headband to be worn, and most people couldn’t be bothered. Consequently, ZEO (the company) is struggling for survival.

In contrast, all kinds of smartphone apps and devices that rely on accelerometers, which can be had for a few dollars, are booming. It is understandable that people want to minimize cost, do not want to bother with headbands, and prefer to repurpose devices they have anyway (e.g. phones, calorie trackers). However, one should *not* confuse these devices with sleep monitors, as they do no such thing. Claims that they allow one to “track”, “monitor” or “measure” sleep are disingenuous at best.

Measuring sleep is not trivial

Inferring sleep from vibrations on the surface of the bed is at least one step too far. A myriad of factors could confound these measures, all of which should be obvious, such as pets jumping on the bed, sharing the bed with a partner, etc. Most serious is the insurmountable hurdle of distinguishing REM sleep from deep sleep based on accelerometer data. Most currently available apps just lump the two together and call it “sleep depth”. This is at best inaccurate (as the two could not be more different physiologically) and at worst dangerous. For instance, it has been shown that a lot of deep sleep is restorative in terms of physical exertion, whereas too much REM sleep is not necessarily a good thing. Instead, it can be indicative of a major depressive episode. Sleep disturbances like that accompany almost all mood disorders. It is an extremely disconcerting prospect that someone with such a disorder could rely on measurements from such a pseudo-sleep-monitor to reassure themselves that their sleep is just fine, when it really is not.

To summarize, actigraph-based metrics can at best measure the quantity, but not the quality, of sleep. Claiming otherwise is (deliberately?) misleading.

No one likes criticism. But in this case, it is warranted. It is understandable why people do what they do, but in this case, I must stand fast. I’m not sure if it will make a dent, but it is imperative to stem this quite unfortunate development.

Of course, there is no harm in people using these cheap accelerometer-based devices if they understand their limitations. However, given how these are currently marketed, I doubt that most people are aware of this problem. It would be a good start to stop calling them “sleep monitors”.

It is up to every individual how seriously they take the development of their own life. As long as they can live with the outcome. Because they will have to live with it.

Update: ZEO has lost the struggle for survival. It is a terrible shame that cheap actigraphs killed the only device that came close to a home sleep monitor. Hopefully, this is just an indication that ZEO was ahead of its time and not indicative of how serious people are willing to get about sleep. Perhaps people *are* rational after all, though. If the vast majority of users is unable to interpret a hypnogram anyway, it makes sense to go with the option that is cheaper, more convenient and – apparently – simpler to interpret. As detailed above, this can – however – be dangerous, as excessive REM sleep is far less refreshing than commonly believed.

In the meantime, I bet there is a sizable minority who would be happy to spend quite a bit for genuine home sleep-monitoring capabilities sensu ZEO. What to do? Kickstarter to the rescue? Maybe mattress manufacturers could sponsor it – if their marketing claims are true, one *should* see a reliable improvement in sleep, as measured.

What ZEO could have given you

Posted in Data driven lifestyle, Optimization, Pet peeve, Social commentary, Technology | 7 Comments

Meet the netmonger – could it be you?

Netmongers (Excerpt from “Advice for a modern investigator”, chapter 5. Elsevier Press, 2011)

The most striking observation regarding this category is that it was entirely absent in the 1898 edition of this book. Meanwhile, it has become by far the most prevalent type, crowding out all the others. In terms of etiology, this can perhaps be explained by the fact that the prevalence of netmongers is closely tied to the presence of certain technologies, namely computers and internet access. In places where these are missing, netmongers are absent as well. Conversely, if fast internet access is available, all other types effectively convert to netmongers, particularly contemplators and bibliophiles, but also misfits and theorists.

What characterizes this – by now – extremely common and serious disease of the will?

Netmongers can be described as exhibiting pure akrasia. It is not the case that the netmonger is apathetic or lethargic. On the contrary, netmongers exhibit sustained and frenetic activity for long periods of time, during which they utilize the most powerful productivity tools ever conceived.  Moreover, they begin the day with the best of intentions to further their important scientific work. Yet, at the end of the day, they leave their place of business without having accomplished anything of relevance.

How is this possible?

This is best explained by looking at the typical day of the netmonger. When he arrives in his office bright and early, he first has to check his facebook page. After that, he checks his email for messages that might necessitate urgent attention. Both activities are repeated every 15 minutes, just in case. The netmonger has a lot of friends, so besides the link to the hilarious video on youtube, which he just watched, there is always something so extreme that he must share it with the rest of the world. Thus, he apprises his large crowd of followers of what is going on by posting it on twitter. Invariably, he receives numerous immediate responses, some of which refer to the latest blogs that are relevant to the issue at hand. Due diligence requires that he read them carefully before crafting a response. By now, some dissenters have voiced their opinion, and wikipedia has to be consulted in order to resolve the dispute. Meanwhile, new messages have arrived, and the circle begins afresh. Doing this for several hours, the netmonger is exhausted and takes a break, during which he reads CNN and the New York Times. Without fail, there is always a curious story to be found, which is itself linked to other stories that are of interest to netmongers. He cannot resist reading these as well. But even more interesting than the stories themselves are the 2500 comments associated with them, some of which are simply outrageous and require a ruthless rebuttal. Of course, there is so much to read that the netmonger is often forced to skim. Consequently, he remembers only very little of what he just read. At this point, there are 35 browser tabs open and it is almost time to go home. In any case, it is now too late to start anything serious. Having spent all day at work, the netmonger indulges in a little guilty pleasure – cracked.com – before going home.

The netmonger is hopelessly caught in the web

The tragedy of the netmonger is that this disease of the will mostly affects people who are genuinely attracted to information. Of course, the internet supplies information in such copious, interconnected and ever-changing amounts that its depths can never be plumbed, not even by the most severe netmonger.

Posted in Life, Social commentary, Technology | Leave a comment