The modern world, from its microchips to its architecture, is built upon the foundations of mathematics. It's easy to take that fact for granted, just as it's easy to take in stride the fact that, in today's universities, "applied math" majors are on the lookout for more ways to connect the mathematical with the practical. And yet the connection between math and reality is, when you think about it, actually quite strange. Writing in the math magazine Plus, mathematician Phil Wilson explores the implications of the "stunning fact" that "the world can be understood mathematically": "Why," he asks, "is applied maths even possible?"
No one's mystified by the way that language describes the world, albeit imperfectly: it's a human invention that's evolved, over time, to be fairly accurate and illuminating. Mathematics, though, seems like something else altogether, because of two apparently contradictory facts. First, there's the fact that math describes the world with breathtaking exactness, to the degree that we say that reality is "governed" by mathematical laws. And second, there's the fact that math also seems like its own autonomous world, full of intricate patterns and relationships that don't (yet) have any real-world analogues. How can math be simultaneously perfectly realistic and totally bizarre?
Wilson explains the different ways that philosophers of math, or "metamathematicians," have thought about these problems through the ages. Today, he argues, most mathematicians have settled on a form of Platonism. They understand themselves as exploring "what feels like a platonic realm -- they don't invent their mathematics, they discover it." Many believe that the physical world "emerges," somehow, out of a mathematical one, and that both worlds are, in a sense, real. Mathematicians keep their platonism under wraps, however, because it feels mystical and weird: "It is not natural," Wilson explains, "for a philosopher or scientist to wholeheartedly embrace such a view (even if they may wish to) since it tends to encourage the preservation of mystery rather than the drawing back of the obscuring veil." And yet there the mystery is, plain as day to anyone who thinks about it, and suggestive of all sorts of metaphysical and theological questions.
Wilson offers, unsurprisingly, no solution to the mystery -- but he does point to ways to think about it more clearly. One way is illustrated by the "three world diagram" proposed by the metamathematician Roger Penrose:
The diagram, Wilson writes, shows how at least some of the platonic, mathematical world is embedded or expressed in the physical world; how at least some of that physical world is embedded or expressed in our mental world; and how at least part of that mental world is embedded or expressed in the platonic world. It seems as though there are three worlds, or three levels of reality; "each world," Wilson concludes, "remains a mystery," especially when it comes to its relation to the others. A fascinating discussion: read the whole thing here.
What is -- or what was -- 'indie' film? In Indie: An American Film Culture, the film critic Michael Z. Newman tries to explain just what makes indie film indie. The style, mood, and structure of indie film are, he argues, easily duplicated by the major studios, and robbed of their indie-ness. Ultimately, indie film is really about the indie audience, a "film culture" unto itself, united in opposition to Hollywood and accustomed to thinking about movies in a certain way.
Pulp Fiction: indie film's "masterpiece."
Newman, a professor of film studies at the University of Wisconsin, is -- like most critics today -- heavily influenced by Pierre Bourdieu's classic book Distinction. In that book, Bourdieu argued that it's important to think about culture not just artistically, but also anthropologically. From an anthropological point of view, art is as much about social life as it is about artistic life. We enjoy music, books, and movies not just in themselves, but also because they identify us as certain kinds of people, and set us apart from all the other kinds. That's not to proclaim all art a sham, but it is to say that art is never pure: it's impossible to separate the art you see while visiting the ICA from the feeling of being a hip, in-the-know insider.
In a Bourdieuian frame of mind, Newman argues that the same is true for indie film. There are things that separate indie films from Hollywood ones: they de-emphasize plot, encouraging viewers to think about characters and their inner lives and social identities (think Lost in Translation); they self-consciously make reference to the world of culture, encouraging a feeling of cultural-insiderness (Blood Simple); and they use weird, elusive storytelling to create a playful atmosphere (Pulp Fiction). The key fact about these qualities, though, isn't that they're so great in themselves. Sometimes they are (Blue Valentine) and sometimes they aren't (Garden State). The key fact is that they work against the conventions of big-budget Hollywood movies.
Everybody knows this, of course; the question is, what does it mean? To Newman, it means that 'indie' is as much about a "film culture" as it is about the films themselves. Think about indie street style. Some kids actually look good in Vans, skinny jeans, and ironic t-shirts. Others look like schlubs. In both cases, they're broadcasting the same message: "We're different." And that message is only meaningful if there really is a community of like-minded people who have a substantive, interesting way of looking at the world. It's easy enough to duplicate indie style. But, if you live in a small town full of squares, it's impossible to duplicate the indie community, for whom that style means something.
According to Newman, that's more-or-less how indie film works. Indie movies rose to prominence in the 80s and 90s; in the 2000s, Hollywood studios started producing indie-styled movies (Sideways) using their "major-minor" studios (Fox Searchlight). No matter who makes them, though, the idea of indieness depends upon "a community with shared knowledge and expectations." That community is constantly changing its mind about which styles, forms, and motifs 'count' as indie. So, in the future, we can expect 'indieness' to change as the indie community changes, grows, and shifts.
Here in the U.S., Las Vegas casinos like Paris Las Vegas and The Venetian recreate foreign architecture, albeit in a cartoony, miniaturized form. That's weird enough -- but, over the last few years, Chinese architects have taken things a step further, designing full-scale, inhabited cities in a variety of European styles. Near Shanghai, Thames Town (pictured below) is one of a number of European-style towns; others are modeled on Barcelona and Venice. They're popular destinations for couples seeking wedding photos.
Now the trend has reached a new level of absurdity: According to Der Spiegel, an exact replica of an Austrian town is being built in China's Guangdong province. The eight hundred residents of Hallstatt, a UNESCO World Heritage site, have only just learned about the plans, and are, Der Spiegel writes, "scandalized."
Thames Town, near Shanghai. Photo by Huai-Chun Hsu.
China's internationally themed towns are, in essence, huge public-private real estate projects; most of them are part of a vast Shanghai-based effort called "One City, Nine Towns." Often, there's a certain logic to the choice of architectural scheme. The town of Anting, for example, is home to the German-Chinese joint venture Shanghai-Volkswagen; as a result, it's also home to Anting German Town, a development designed to look like a modern German city by Albert Speer & Partners GmbH. (The project has been largely unsuccessful: most of the buildings have remained empty.)
The Hallstatt project is a little different. The goal is to exactly duplicate every detail of the original town, including its lake. This has unnerved the residents of the real Hallstatt. Local clergy, for instance, want to know if the duplicate of their church will be respected as a religious space; some experts feel that the Chinese architects should have asked for permission. On the other hand, the town is relentlessly photographed by visitors anyway, and the Chinese version could stimulate even more tourism in the original town. Presumably, the Chinese architects accounted for some of the 800,000 visitors the town receives each year.
As construction hasn't started yet, there's still time for all these issues to get ironed out. The project might seem creepy, or misguided, but it doesn't seem to be underhanded. The Chinese architects, like everyone else in the world, simply love the way Hallstatt looks. Unsatisfied with two-dimensional representations, they want a three-dimensional one. [Via BldgBlog.]
Is gambling too much an addiction? According to the new, fifth edition of the Diagnostic and Statistical Manual -- due out in 2013 -- that's exactly what it is. The new DSM will move compulsive gambling from the "impulse control disorder" section to the one for "addiction and related disorders." The science journalist Dirk Hanson, writing at his blog, Addiction Inbox, explains just what that move means.
Fyodor Dostoevsky, a compulsive gambler, was frequently penniless after gambling sprees. After one such spree, in desperate need of money, he wrote Crime and Punishment and The Gambler simultaneously, completing both in 1866.
Hanson draws on a helpful recent article, "Disordered Gambling: Etiology, Trajectory, and Clinical Considerations," by Howard Shaffer and Ryan Martin of Harvard Medical School, published this spring in the Annual Review of Clinical Psychology. Martin and Shaffer put a lot of striking facts into relief. For example, despite an explosion in gambling opportunities over the last few decades -- on reservations, in convenience stores, and especially online -- the total number of American problem gamblers doesn't seem to have increased. This suggests that compulsive gambling isn't something that can happen to just anyone; instead, there's a part of the population that's particularly susceptible. Similarly, twin studies have shown that compulsive gambling runs in families. Another extraordinary finding is that medications can turn non-gambling Parkinson's patients into compulsive gamblers (a phenomenon beautifully covered by WNYC's Radiolab).
Compulsive gamblers, moreover, are often addicted to other things, too: "Pathological gamblers," Hanson writes, "are five and a half times more likely to have suffered from a substance abuse disorder." If someone is a heavy drinker, smoker, or even shopper, it's more likely that they'll become a heavy gambler.
All this suggests not only that gambling is an addiction, but also that we ought to rethink what addiction's all about. We might get addicted to particular activities, like gambling, but an addictive disposition comes first. The object is merely incidental. Shaffer and Martin advocate a "syndrome model of addiction," in which individual addictions are understood as "expressions" of an underlying problem. The goal, they write, is to avoid "the incorrect view that the object causes the addiction." This shift in our view of addiction, they hope, will pave the way for more practical, and flexible, treatment.
Since time immemorial, curious people have asked where the universe came from. Nowadays we have a secular answer: the Big Bang. And yet that answer, incredible as it may be, is only partially satisfying. After all, we can still ask where the Big Bang came from; and we can still wonder, sensibly enough, how something (the universe) could come from nothing (whatever came before it). In his new book, On Being, Peter Atkins, a British chemist and science writer, offers an intriguing answer to those questions. To understand how something can come out of nothing, he writes, you have to appreciate the fact that "there probably isn't anything here anyway" -- that "at a deep level there is nothing" in the universe, really. "The substrate of existence," he argues, "is nothing at all."
Consider electrical charge. In our universe, there are positively and negatively charged particles. How did all that charge come into being out of nothingness? It didn't, Atkins writes, since "the total charge is zero." The Big Bang merely separated out a uniform state of chargelessness into many individual instances of charge, positive and negative. The same goes for matter and energy generally: the total amount of matter and energy in the universe seems to be balanced out by huge amounts of "dark matter" and "dark energy," which express themselves in terms of gravitational attraction. The Big Bang didn't create all that energy, as such. Instead, it seems to have turned an initial Nothingness into a "much more interesting and potent" Nothingness -- a "Nothing that has been separated into opposites to give, thereby, the appearance of something."
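A concrete instance of the bookkeeping Atkins has in mind (my gloss, not an example from the book) is pair production, in which a chargeless photon becomes an electron and a positron; the total charge is the same zero before and after:

\[ \gamma \;\to\; e^{-} + e^{+}, \qquad Q_{\text{before}} = 0 = (-1) + (+1) = Q_{\text{after}}. \]

Nothing charged is conjured out of nowhere; a net-zero state is simply split into canceling opposites.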
How much, if anything, does that explain? "The separation of Nothing into opposites still needs explanation," Atkins concedes. Still, he writes, "it seems to me that such a process, though fearsomelessly difficult to explain, is less overwhelmingly fearsome than the process of positive, specific, munificent creation." The main point is that the Big Bang doesn't mark, necessarily, the creation of something out of nothing. If that happened at all -- and it may be, Atkins points out, that there has never been absolutely Nothing, in a total sense -- then it probably happened further back in the pre-cosmological past. Instead, the Big Bang marks the emergence of texture, differentiation, and particularity out of even, unchanging featurelessness. It's not something out of nothing, but interestingness out of boredom.
Over at The Atlantic's In Focus blog, Globe alumnus Alan Taylor is putting on "World War 2 in Photographs: A Retrospective in 20 Parts." The first two episodes, "Before the War" and "The Invasion of Poland and the Winter War," are already up. A new episode will be posted every Sunday. The photos so far are spectacular and, in many cases, little-seen:
In this powerful, succinct TED Talk, Eli Pariser -- the former Executive Director of MoveOn.org -- explains how automated, "algorithmic editors" built into Google, Facebook, and other sites surreptitiously tailor what you see on the web. There is no "objective" internet: The web, Pariser explains, looks different to each of us, and is subtly personalized to show us what we want to see.
“The best editing," he says, "gives us Justin Bieber and a little bit of Afghanistan; it gives us some information vegetables, it gives us some information dessert.” Algorithmic editors, on the other hand, show us only what pandering computer programs, without the "embedded ethics" of human editors, imagine that we'll enjoy. They prevent us from being our best selves.
History in the first person: The BBC's spectacular, daily Witness podcast interviews people who were there at great historical moments and has them tell their stories in ten minutes. A recent favorite: Nureyev defects to the West. (BBC)
Body worship in the Third Reich: Der Spiegel investigates a Nazi bestseller, entirely devoted to nude photography: "We want a strong and joyful affirmation of body awareness, because we need it to build a strong and self-confident race." (Der Spiegel)
Paul Krugman picks five books, starting with Isaac Asimov's Foundation: "The story is about these people, psychohistorians, who are mathematical social scientists and have a theory about how society works.... I was probably 16 when I read it and I thought, 'I want to be one of those guys!'” (So did I!) (Five Books)
Ezra Klein on what Inside Job got wrong: "Watching it, you’d think that the only people who missed the meltdown were corrupt fools, and the way to spot the next one is to have fewer corrupt fools. But that’s not true. Worse, it’s dangerously untrue." (The Washington Post)
What's Poetry Magazine doing with that $200 million gift? "'Poetry is not a moneymaker... And so the grand experiment here was to throw money into this art form that had no history of making money and see if poetry would be OK at the end of the day.' For years, what had been playing out at the foundation and magazine sounded like a multi-act play about the consequences of winning the lottery." (Chicago Tribune)
The Civil War and the Meaning of Life: By Drew Faust, historian-president of Harvard. "The seductiveness of war derives in part from its location on [the] boundary of the human, the inhuman, and the superhuman. It requires us to confront the relationship among the noble, the horrible, and the infinite; the animal, the spiritual, and the divine." (The New Republic)
[Image: Rudolf Nureyev.]
C.G.P. Grey -- of this winter's "The Differences Between the U.K., Britain, and England" video -- is back with "Coffee: Greatest Addiction Ever!"
Mic Wright of the tech blog Humans Invent catalogues the many artificial sounds built into common products and experiences. Many of the sounds we take for granted are actually engineered for effect -- like the satisfying thunk! of a slamming car door:
A car door is essentially a hollow shell with parts placed inside it. Without careful design the door frame amplifies the rattling of mechanisms inside. Car companies know that if buyers don’t get a satisfying thud when they close the door, it dents their confidence in the entire vehicle.... To produce the ideal clunk, car doors are designed to minimise the amount of high frequencies produced (we associate them with fragility and weakness) and emphasise low, bass-heavy frequencies that suggest solidity.
The effect is achieved in a range of different ways -- car companies have piled up hundreds of patents on the subject -- but usually involves some form of dampener fitted in the door cavity.
Other great examples are the artificially amplified roar of stadium crowds and the whirring of ATMs as they dispense dollar bills (in fact, the whirring comes from a speaker; cash machines are well-engineered and naturally silent). Read more here, and check out part two.
[Image: Waveform for a slamming car door, from Freesound.org.]
Writing at his spectacularly interesting blog, Ribbonfarm, the independent scholar Venkat Rao offers up a 7,000-word essay on "The History of the Corporation, 1600 to 2100," which is like a best-selling book in miniature. Rao argues that the corporation, after centuries of explosive growth, is now a sinking ship, essentially because it's run out of resources to exploit.
Rao divides the history of the corporation into three phases. First, roughly from 1600 to 1800, came the Mercantilist phase. During this period, huge corporations like England's East India Company and the Dutch V.O.C. ruled the waves. Increased naval power opened up whole new areas of the globe, and governments, taken by surprise, allowed private companies to take over huge swaths of it. The scarce resource was territory, and companies used private armies and navies to seize it. They could use government forces, too, because company employees often did double-duty as government officials: Robert Clive, for instance, was not only a high-ranking officer in the British Army, but also an employee of the East India Company. ("If you thought it was bad enough that Dick Cheney used to work for Halliburton before he took office," Rao suggests, "imagine if he'd worked there while in office.")
Towards the end of the 18th century, however, governments started to rein in corporations, horning in on their revenue streams by levying taxes. (A good example is the English tax on tea, an East India Company product; that tax, Rao notes, cost England her colonies.) The corporation shape-shifted. Having exploited all of the world's territory, corporations stumbled upon a new resource to exploit: time. Corporations discovered the power of technology to make life faster and more efficient. In this new, "Schumpeterian" phase of the corporation's life-cycle, corporations created more time, and then filled it. Labor-saving technologies created more time for the consumption of goods; many of the goods consumed were themselves labor-saving devices. From 1800 to 2000, Rao argues, modern societies entered into a feedback loop, in which new pockets of time were continually created and exploited. The zero-sum world of mercantilism was left behind; "'Progress,'" Rao writes, "had begun." The result was astonishing corporate growth -- not just in terms of profits, but in terms of attention and mind-share. Today, every day of our lives is crowded with some corporate product, designed either to create time or to fill it.
Unfortunately, it turns out that time and attention are finite resources, too. In America, he suggests, we reached "Peak Attention" several years ago. This is bad news for corporations. "As you find and capture most of the wild attention," he explains, "new pockets of attention become harder to find. Worse, you now have to cannibalize your own previous uses of captive attention. Time for TV must be stolen from magazines and newspapers. Time for specialized entertainment must be stolen from time devoted to generalized entertainment.... Due to the increasingly frantic zero-sum competition over attention, each new 'well' of attention runs out sooner."
What's next? Rao argues that the age of the corporation is over. There's plenty of room for life to improve -- but corporations, having conquered the spatio-temporal world, won't be able to take part in that improvement, which, Rao suggests, will be mostly "perspectival." Read the whole piece here.
Neuroscience is a vast field, but in the end the whole enterprise is motivated by a few central questions. One of them is, "Do we have free will?" A new study, published in the journal Neuron, has shed a little more light on that question. It suggests that we might need to rethink what "free will" really means.
Until now, the most important finding about free will has come from the famous "Libet experiment," devised by Benjamin Libet in 1983. Libet sat you in a chair, stuck electrodes on your head, and put a clock and a button in front of you. Whenever you felt like it, you could push the button; your only task was to notice when, according to the clock, you'd decided to push it. Libet found that your neurons started firing well in advance of your conscious decision-making: The surge in activity, or "readiness potential," started forming almost a full second before "W," your experienced moment of decision. W, Libet suggested, wasn't the present-tense sensation of making a decision, but the past-tense sensation of already having made one. This felt, to many observers, like a blow struck against the idea of free will.
Libet's experiment was thought-provoking, but imprecise. The followup study, by Itzhak Fried, Roy Mukamel, and Gabriel Kreiman, leverages new technologies that allow neuroscientists to monitor individual neurons, rather than huge brain regions. It also takes advantage of new clinical situations. Nowadays, super-small electrodes are sometimes inserted deep into the brains of epilepsy patients during pre-surgery diagnostic tests. Fried, Mukamel, and Kreiman asked these patients to perform a version of the Libet experiment.
What did they find? First, they discovered that W, the sensation of 'making a decision,' is correlated with activity in the motor areas of the brain -- not in some specific decision-making area. Second, they found that the W moment involves decreases in brain activity, as well as increases. Writing in Neuron, Patrick Haggard, a neuroscientist, argues that it's now "wrong to think of W as a prior intention, located at the very earliest moment of decision in an extended action chain. Rather, W seems to mark an intention-in-action, quite closely linked to action execution.... [occurring when] the brain transforms a prior plan into a motor act." The decreases in neural activity before W, meanwhile, suggest that the brain is set up to "tonically inhibit unwanted actions": the sensation of making a decision might not be about hatching a new plan, but about green-lighting one of many competing impulses.
What does all this suggest for the question of whether free will exists? That's probably the wrong question to ask. What the study does do is illuminate what "free will" really means. "Free will" isn't about some disembodied, hypothetical, abstract process of choosing; it's more about doing. Choosing -- at least in simple, immediate situations -- isn't about thinking a decision; it's about enacting one. With this in mind, it's a little less weird that the sensation of making a decision and the moment of enacting it are so closely linked.
So it looks as though our ideas about free will might need to change in a subtle way. That doesn't mean, though, that free will is going anywhere; in fact, there are many contexts in which we already understand free will this way. At Wimbledon this week, we'll be seeing top athletes playing at the highest levels: With every point, they'll be making split-second decisions not that dissimilar from the Libet decision. Clearly, those decisions will be made in a high-speed dance between body and brain. And yet we would never say that, because his body is involved in his decisions, an athlete doesn't have free will. Of course Roger Federer uses his body when he makes choices -- he's an athlete! So, it appears, are the rest of us.
Japanese haiku is a rare thing in the world of poetry: a world-famous, universally beloved verse form, practiced both by serious poets and schoolkids. Its present-day popularity is especially incredible given its ancient history. In Haiku Before Haiku, Steven Carter, a professor of Japanese literature at Stanford, charts the emergence of haiku as an art-form, and offers new translations of 320 poems from the period in which haiku was developing out of an earlier form called hokku.
Matsuo Basho's "Frog Haiku," one of the earlier haiku poems, composed in 1686.
Haiku, for all its simplicity, grew out of a complex tradition of Japanese collaborative poetry called renga. In renga, Carter explains, a group of poets -- sometimes more than a dozen -- gather under the supervision of a renga master, or sōshō. Each poet contributes a stanza in turn, with the sōshō guiding composition by mandating the use of particular words or the exploration of certain topics. In one renga session, the poets might produce as many as 100 linked stanzas, which mutate over time to take the renga through different movements. The first verse of the renga, called a hokku, is identical to a modern haiku.
Renga, Carter explains, was wildly popular as early as the twelfth century: "Records tell us that each spring, at temples such as Bishamondo and Joshoji and in the Washio area of the Eastern Hills of Kyoto, large numbers of renga enthusiasts of all social classes would gather for marathon linking sessions" supervised by "hana no moto rengashi, or Masters of Renga Beneath the Blossoms." It wasn't until Basho, in the late 17th century, that hokku came into its own as a verse form.
Hokku are a little harder to understand than haiku, because they were written in a collaborative environment. Many of them are supposed to commemorate the specific renga gathering at which they were written, and so make reference to particular people and situations. In Haiku Before Haiku, this makes for fascinating reading. Some hokku are energizing, like Asayama Bonto's poem from the turn of the 15th century: "I gaze at the moon -- / And every night is the night / I had waited for." Presumably this was composed while the assembled renga poets looked at the moon. Others focus the mind on the small instants that are still the hallmark of modern haiku, like this poem by the thirteenth-century renga master Junkaku: "Beneath a tree / autumn wind shows itself / In a single leaf."
Renga poetry faded in popularity during the Meiji period in Japan, in part because the idea of collaborative poetry gave way to a more Western conception of how poetry ought to be composed. In the popularity of haiku, though, something of that collaborative spirit remains.
Writing in The New York Review of Books, Yale history professor Timothy Snyder reviews four new books about the Holocaust: Holocaust: The Nazi Persecution and Murder of the Jews and Heinrich Himmler: Biographie, by Peter Longerich; Model Nazi: Arthur Greiser and the Occupation of Western Poland, by Catherine Epstein; and The “Final Solution” in Riga: Exploitation and Annihilation, 1941-1944 by Andrej Angrick and Peter Klein. Together, he writes, they constitute a "new approach to the Holocaust," and shed new light on one of history's most elusive questions: How, exactly, did Hitler go about ordering the final solution?
It's commonly understood that the plan for the Holocaust was hatched at the Wannsee Conference on January 20, 1942. Historians, however, have long questioned that idea. The Wannsee meeting lasted only about an hour and a half, and many activities we now understand as part of the Holocaust were already ongoing, including the construction of death camps, and the gassing of Jews at Chelmno. The Holocaust, in fact, was not centrally planned in meticulous detail. Instead, broad goals were established, and individuals down the chain of command were allowed to take many of the decisive steps.
It's this process of delegation and incentivization, Snyder explains, that is coming into increasing focus. The Nazis, he writes, developed sweeping, politically appealing plans -- proposing, for example, to fill Poland with German farmers. Local officials were charged with achieving these aims, which the historian Peter Longerich calls "positive solutions" to the problem of German racial impurity. Inevitably, though, the plans proved impossible to carry out: millions of Poles and Jews stood in the way, with more arriving every day as they fled Western Europe. Eventually local officials proposed "negative solutions," which were always approved. The key was that, from Himmler on down the chain of command, individuals were "regarded as responsible for German racial consolidation, the 'positive solution,' but in fact controlled the coercive power needed for the crucial 'negative solution,' the mass murder of Jews that we call the Holocaust."
The Holocaust, in other words, was "ordered" via a system of incentives and permissions, rather than by means of a detailed plan. That's why, Snyder explains, "Historians of Germany have pushed the date of the crucial decision to eliminate all Jews later and later, until it seems that it could go no further":
They debate whether the critical moment was June 1941 (which few now believe), or October 1941, or December 1941. [Now Peter] Longerich calmly pushes through late 1941 and January 1942, the month of the Wannsee Conference, without recording a moment from which the Holocaust as total extermination was inevitable. He believes that there was in fact no crucial moment when Hitler decided, or communicated his decision, to kill all Jews under German control. In his view, “we should abandon the notion that it is historically meaningful to try to filter the wealth of available historical material and pick out a single decision” that led to the Holocaust.
The Holocaust, Snyder concludes, was a vast, group effort, facilitated by the widespread adoption of "scapegoating and murder as the response by lower cadres to imprecise signals from above." Evil doesn't have to come in a neat, person-sized package: it can be embodied in a system, too.
A girl, her bike, and her badge: An aggressive motorist harasses the author of the "A Girl and Her Bike" blog -- only to discover, when she pulls out her badge, that she's a police officer.
Behind corporate walls, the masters of the universe weep: Even among executives, frustration with the awfulness of corporate life is boiling over, with "secret grief and hidden angst, bursting out in an extraordinary way." (The Independent)
A reading list for women in male-dominated fields: Compiled by Athene Donald, a British physicist. (Five Books)
The difficulty of creating an underwater dolphin translation computer: "Imagine if an alien species landed on Earth wearing elaborate spacesuits and walked through Manhattan speaking random lines from The Godfather to passers-by." (New Scientist)
Rematch: Last year at Wimbledon, John Isner and Nicolas Mahut played the longest tennis match in history (11 hours, 5 minutes spread over three days). This year they've been rematched. (GQ)
[Image: Still from Andy Warhol's Empire.]
What do the Egyptian revolution and the financial crisis have in common? Writing in the May/June issue of Foreign Affairs, Nassim Nicholas Taleb (of Fooled by Randomness and The Black Swan) and Mark Blyth (a political economist at Brown) argue that "the critical issue in both cases is the artificial suppression of volatility -- the ups and downs of life -- in the name of stability."
In markets and in politics, our governments go to great lengths to ensure a smooth ride for everyone. With the best of intentions, they bail out banks and prop up dictators. The problem, Taleb and Blyth argue, is that we come to believe that our artificially created stability is natural; we forget its origins in our own interventions. We sweep volatility under the rug, and then forget that we did so. Eventually, all that volatility comes back with a vengeance, catching everyone by pseudo-surprise:
It is both misguided and dangerous to push unobserved risks further into the statistical tails of the probability distribution of outcomes and allow these high-impact, low-probability 'tail risks' to disappear from policymakers' fields of observations.... Complex systems that have artificially suppressed volatility tend to become extremely fragile, while at the same time exhibiting no visible risks. In fact, they tend to be too calm and exhibit minimal variability as silent risks accumulate beneath the surface....
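A back-of-the-envelope way to see why suppression pushes risk into the tails (my illustration, not the authors'): suppose a system would naturally absorb small, independent shocks \(X_1, \dots, X_n\), each of typical size \(\sigma\). If an intervention stores them up instead, the eventual release is their sum, with

\[ \operatorname{Var}\!\Big(\sum_{i=1}^{n} X_i\Big) = n\sigma^2, \]

so the total risk is undiminished, but it now arrives as one rare shock of typical size \(\sqrt{n}\,\sigma\) rather than as \(n\) visible shocks of size \(\sigma\). Suppress a hundred days of ordinary volatility and the eventual correction is on the order of ten ordinary days' worth, all at once.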
The problem, essentially, is that "variation is information." If you act as though variation is undesirable, you end up living in a political or economic bubble.
So, what to do? Taleb and Blyth have two concrete suggestions. First, focus on causes, rather than on catalysts. The sub-prime mortgage crisis, for example, was the catalyst of our financial meltdown, but it wasn't the cause: the cause was a creeping overconfidence and unreality among financiers. You'll get a better sense of what's happening on a systemic level, they argue, if you allow the system to speak for itself. Second, be more willing to do nothing. It's "hard to justify inaction," they admit, especially "in a democracy where the incentive is to always promise a better outcome than the other guy" -- but, in many cases, it's only by not acting that you'll be able to see the big picture.
This isn't to say that volatility is a good in itself -- only that volatility needs to be coped with, rather than suppressed. Think, for example, of how insurance works: unpredictability is accounted for, and then steps are taken to mitigate its inevitable impact. Ideally, no one is under any illusions. Ignoring volatility, on the other hand, actually makes the world more dangerous. Taleb and Blyth quote Seneca: "Repeated punishment, while it crushes the hatred of a few, stirs the hatred of all," he wrote, "just as trees that have been trimmed throw out again countless branches."
(The article is behind the Foreign Affairs paywall, but if you're a subscriber you can read it here.)
This extraordinarily entertaining video shows the best bits of a debate amongst Christopher Hitchens, Sam Harris, and two rabbis, David Wolpe and Bradley Shavit Artson, on the question "Is there an afterlife?" The debate is from February of this year; unfortunately, it seems that Hitchens is now having trouble speaking. He puts in a bravura (and occasionally profane) performance here.
"As Princess Leia says in Star Wars -- I can quote literature and scripture, too...."
[Via The Browser.]
Today, June 16th, is Bloomsday: the single day narrated in James Joyce's Ulysses, now celebrated around the world. If you haven't read Ulysses -- or, if you've started it but haven't finished it -- one of the most extraordinary sections is the final chapter, in which Molly Bloom, waiting to fall asleep in bed, thinks for hours about every subject imaginable (it's often called her "soliloquy"). The chapter contains only eight sentences, and is some of the most beautiful writing of the twentieth century.
In my opinion, the best audio recording of Molly's soliloquy appears in the Naxos audiobook of the novel; it's read perfectly by the Irish actress Marcella Riordan. As it happens, you can listen to the last few minutes of her performance on YouTube. Here's the recording, with the text below. Molly thinks about nature and God, recalls her childhood in Gibraltar (she's half Spanish), and relives the moment she accepted her husband's proposal of marriage.
And yes, that is Marilyn Monroe reading Ulysses (and, it looks like, Molly's soliloquy). The photograph is by Eve Arnold, who took many wonderful photos of Monroe. Here's what Arnold had to say about the photograph:
We worked on a beach on Long Island. She was visiting Norman Rosten the poet.... I asked her what she was reading when I went to pick her up (I was trying to get an idea of how she spent her time). She said she kept Ulysses in her car and had been reading it for a long time. She said she loved the sound of it and would read it aloud to herself to try to make sense of it -- but she found it hard going. She couldn’t read it consecutively. When we stopped at a local playground to photograph she got out the book and started to read while I loaded the film. So, of course, I photographed her. It was always a collaborative effort of photographer and subject where she was concerned -- but almost more her input.
If Marilyn can read the book out of order, so can you! Marcella Riordan's reading of the end of the book, with the text below:
"God of heaven theres nothing like nature the wild mountains then the sea and the waves rushing then the beautiful country with the fields of oats and wheat and all kinds of things and all the fine cattle going about that would do your heart good to see rivers and lakes and flowers all sorts of shapes and smells and colours springing up even out of the ditches primroses and violets nature it is as for them saying theres no God I wouldnt give a snap of my two fingers for all their learning why dont they go and create something I often asked him atheists or whatever they call themselves go and wash the cobbles off themselves first then they go howling for the priest and they dying and why why because theyre afraid of hell on account of their bad conscience ah yes I know them well who was the first person in the universe before there was anybody that made it all who ah that they dont know neither do I so there you are they might as well try to stop the sun from rising tomorrow the sun shines for you he said the day we were lying among the rhododendrons on Howth head in the grey tweed suit and his straw hat the day I got him to propose to me yes first I gave him the bit of seedcake out of my mouth and it was leapyear like now yes 16 years ago my God after that long kiss I near lost my breath yes he said I was a flower of the mountain yes so we are flowers all a womans body yes that was one true thing he said in his life and the sun shines for you today yes that was why I liked him because I saw he understood or felt what a woman is and I knew I could always get round him and I gave him all the pleasure I could leading him on till he asked me to say yes and I wouldnt answer first only looked out over the sea and the sky I was thinking of so many things he didnt know of Mulvey and Mr Stanhope and Hester and father and old captain Groves and the sailors playing all birds fly and I say stoop and washing up dishes they called it on the pier and the sentry in front of the governors house with the thing round his white helmet poor devil half roasted and the Spanish girls laughing in their shawls and their tall combs and the auctions in the morning the Greeks and the jews and the Arabs and the devil knows who else from all the ends of Europe and Duke street and the fowl market all clucking outside Larby Sharons and the poor donkeys slipping half asleep and the vague fellows in the cloaks asleep in the shade on the steps and the big wheels of the carts of the bulls and the old castle thousands of years old yes and those handsome Moors all in white and turbans like kings asking you to sit down in their little bit of a shop and Ronda with the old windows of the posadas 2 glancing eyes a lattice hid for her lover to kiss the iron and the wineshops half open at night and the castanets and the night we missed the boat at Algeciras the watchman going about serene with his lamp and O that awful deepdown torrent O and the sea the sea crimson sometimes like fire and the glorious sunsets and the figtrees in the Alameda gardens yes and all the queer little streets and the pink and blue and yellow houses and the rosegardens and the jessamine and geraniums and cactuses and Gibraltar as a girl where I was a Flower of the mountain yes when I put the rose in my hair like the Andalusian girls used or shall I wear a red yes and how he kissed me under the Moorish wall and I thought well as well him as another and then I asked him with my eyes to ask again yes and then he asked me would I yes to say yes my mountain flower 
and first I put my arms around him yes and drew him down to me so he could feel my breasts all perfume yes and his heart was going like mad and yes I said yes I will Yes"
This incredible footage shows Argentina's Nahuel Huapi Lake. It's been covered with ash from the ongoing eruption of the Puyehue Volcano, which is located more than 100 miles away, in Chile. The volcano has grounded flights all over the region. This is what the lake usually looks like:
More pictures here. In many photos, there are large areas of clear water, and it appears that the ash might get blown ashore (where it enriches the soil). But the eruption, though it's slowing, isn't over yet. (And no: scuba diving in all that ash is not particularly safe!)
Via the excellent philosophy blog Think Tonk: Ali G. gets right to the heart of consequentialism in this clip. Consequentialism, as the Stanford Encyclopedia of Philosophy puts it, is the view that "whether an act is morally right depends only on the consequences of that act" -- not on any abstract principle of right and wrong, and not (necessarily) on intent. He gets there right at the end -- it's a photo-finish! (Ever-so-slightly not safe for work.)
David Lynch to open real-life Club Silencio in Paris: "Visitors will presumably be advised to avoid the dumpster out the back, where it is rumoured that a hobo lies in wait." (The Guardian)
American students don't know history: "Fewer than a quarter of American 12th-graders knew China was North Korea's ally during the Korean War." (WSJ)
A science reading list for summer, full of classics: "Since science doesn't move nearly as fast as most people think it does, great science books remain surprisingly timely." (Scientific American)
Idaho homeowners driven out by thousands of snakes: "The home was most likely built on a winter snake sanctuary, likely a snake den or hibernaculum." (Associated Press)
Seven Problems a Recovery Won't Fix: Including dumbification -- "a sapping and draining of the human thirst for great, world-changing achievement." (Harvard Business Review)
Bill Gates, not as you imagined him: "Rocking gently in his chair, he begins to sing: 'I wanna be a billionaire so freakin' bad. Buy all the things I never had. I wanna be on the cover of Forbes magazine. Smiling next to Oprah and the Queen....'" (The Daily Mail)
[Image: Club Silencio, from Mulholland Drive. Note: the hobo isn't behind the club, but behind Winkie's.]
Nowadays, notebooks are a little twee. Serious work gets done on a computer; pen and paper are for sketching eccentric people on the subway. But it was not always that way. Michael Canfield's extraordinary new book, Field Notes on Science and Nature, takes us back to a time when the notebook was a serious scientific tool: when paper-and-pencil field notebooks were "the most basic tool for studying the science of nature." Field Notes, by beautifully reproducing dozens of pages from field notebooks old and new, reveals the important role note-taking has played, and still plays, in scientific reasoning.
Canfield is a biologist at Harvard who specializes in caterpillar camouflage. He's also a historian of scientific note-taking, from Linnaeus through Darwin to the present day. In Field Notes, Canfield invites a dozen field scientists to share pages from their field notebooks, and to explain their note-taking methods in accompanying essays. The notebooks contain beautiful illustrations of plants and animals from around the world: lobsters, fish, lions, mushrooms, wild cats, even diagrams of underground vole tunnel systems.
Field notebooks serve a dizzying array of purposes. For many scientists, Canfield writes, they work more or less the way diaries work for the rest of us: "Taking time to write out an idea or observation forces us to pause and consider." But for others, the notebooks are an artistic opportunity, encouraging "drawing to observe," or a way of writing "letters to the future." E.O. Wilson, who contributes a foreword, thinks of the notebooks as a way of capturing, for "more sedentary spirits," "the rich and mostly unknown world that gives natural history its primal stimulus." In every case, the notebooks reveal the scientist's secret weapon: attention. Field notebooks serve to focus, direct, regularize, and, ultimately, refine a scientist's attention to the most minute details.
They also record the origins of the scientific impulse. Over and over, the scientists write not only about the pleasure of observing, but also about the satisfaction of learning to observe better, which is what led them to embark on scientific careers. Many explain how their note-taking skills first developed during childhood. The biologist Bernd Heinrich includes pages from his first field notebook, begun when he was seventeen: ornithological observations are nestled in between notes about his track meets. As he became a scientist, he explains, note-taking "took on a life of its own." It "made the difference between simply being a witness to nature and being one who identifies themes and questions."
[For the curious: you can browse scans of field notebooks from the Grinnell Survey, a huge naturalist effort in California, here.]
Writing in the newest issue of Academe Online, Eric Alterman parses the differences between professors, reporters, and think-tankers. "These three realms," he explains, "differently understand one categorical imperative: to tell the truth. Ah, but there’s the rub. What is 'truth'? Its meaning changes between locations."
Alterman should know: he's a professor (of English and journalism), a journalist (at The Nation and The Daily Beast), and a progressive think-tanker (at the Center for American Progress, the Nation Institute, and the World Policy Institute). Academics, he explains, are rigorous but frustratingly uncommunicative. Journalists are aggressively communicative, but rushed and uncritical. Think-tankers are organized and motivated, but also biased and disingenuous. Each group offers its own compromised version of the truth, shaped by its institutional structures: tenure, deadlines, grants.
The big story of the last few decades, Alterman argues, is that think-tankers have replaced professors as the source of authoritative ideas for journalists:
They have been able to succeed, in part, because most academics who retain a commitment to intellectual scrupulousness have lost the ability to speak beyond their narrow disciplines to the larger public.
For predictable reasons, Alterman is particularly alarmed by the rise of right-wing think-tanks, and his article has a message for progressives: hire more think-tankers! He ignores the progressive think-tank apparatus, of which he is a part, almost entirely.
But his article also has a message for universities that's beyond left and right. Professors now face stiff competition in public life from motivated, well-funded, and outward-facing think-tank researchers. If universities want to make a difference in the world, then their scholarship needs to adapt. It needs to be published more regularly, written more accessibly, and disseminated more widely, and in much greater volume. It's not enough for universities to hire public relations officers (which they've begun to do); the genres of scholarship themselves need to change.
Last week the Space Shuttle Endeavour launched into orbit on its final mission. Filmmaker Chase Heavener used video released by NASA to create this old-school, '60s-style launch film. It's a nice reminder that the space program wasn't always so prosaic -- in fact, it used to be downright psychedelic. The cameras are mounted on the Endeavour's booster rockets, and you get to watch as they drift down for an ocean landing.
The video is funny-shaped, so in order to watch it, you'll need to click the full-screen button -- the thing with the four arrows on the lower-right.
You've committed a crime, and been convicted. The judge offers you a choice: Five years, or five lashes with a rattan cane. Which would you choose? That's the question Peter Moskos asks in In Defense of Flogging. Moskos, a former Baltimore police officer, is now a professor at the John Jay College of Criminal Justice. His book is, as promised, a well-reasoned defense of flogging. It's also an attack upon the penal system. "Faced with the choice between hard time and the lash, the lash is better," he writes. "What does that say about prison?"
Moskos puts today's prisons in historical perspective, starting in the late 18th century, when corporal punishment was the norm. Though criminals were imprisoned, imprisonment was rarely a punishment in itself; instead, you'd be held in jail while awaiting trial, punishment, or execution. Jails were informal and even co-ed. Then, around the turn of the nineteenth century, religious reformers changed everything, arguing that flogging was inhumane, and that the goal of punishment should be moral rejuvenation. Over the next fifty years, corporal punishment was outlawed, and imprisonment became the justice system's primary tool. Criminals were moved to small, individual cells, in which they could meditate on their crimes and do penance. The new prisons were called, appropriately, "penitentiaries."
Almost immediately, Moskos explains, the penitential system ran aground. Prisoners didn't repent; in fact, the confinement and boredom made them crazy. (Charles Dickens, on tour in America, wrote that the prison cell was deeply inhumane -- it buried criminals alive in a "stone coffin.") To save money, larger prisons were built. Fast forward two hundred years, and you have a system of punishment that is, Moskos argues, vast, inhumane, ineffectual, and incoherent. "I can't think of another institution," he writes, "that has failed as mightily as the prison has."
Moskos is no punishment zealot -- in fact, he wants us to face up to just how much punishment we're already meting out. Most of the book is devoted to enumerating the horrors of prison. Prisons, he argues, are essentially state-run torture chambers, with the torturing outsourced to the inmates. Corporal punishment still goes on, just under the table. We may turn up our noses at corporal punishment, he argues, but only because we're willfully ignorant of what really goes on in American prisons. Against all the evidence, we continue to buy into the humanitarian founding myths of the prison system.
In fact, he writes, our system is more violent now than it's ever been. Effectively, we sentence child abusers "to torture followed by death." We condemn prisoners to insanity-inducing isolation, and "force straight men to have semiconsensual prison-gay sex." Ultimately, Moskos argues, it would be better to rewind the clock. Flogging is simpler, cheaper, and more humane; it puts punishment out in the open, where it belongs. Moskos suggests that many criminals could be offered the choice between time and flogging, with two lashes being equivalent to each year in jail. Victims, or judges, could be given veto power.
It's hard to say how serious Moskos is being (though my money is on "pretty serious"). Even if you aren't convinced that flogging is the future, though, Moskos' deeper argument is still compelling. The act of punishment, he argues, is inherently strange, uncomfortable, and unsettling; there's a natural impulse to hide it away. Our prison system, though, shows that this is a mistake. Today, he writes, Americans are like the citizens in the science-fiction film Soylent Green. In that movie, it turns out that a popular food is made of people. "So," Moskos writes, "is our system of corrections." Instead of piling on the prison terms, we need to start asking hard questions about the value and meaning of punishment. Until then, we'll never have a sensible prison system.
From the Monterey Bay Aquarium Research Institute, a beautiful video about the huge variety of jellyfish species: "Ecologically, they are even more adaptable than one would expect by looking only at the conspicuous bloom forming families and species that draw most of the attention."
Over at The New Republic, Jamie Holmes writes about an interesting application of cognitive science to poverty. For decades, psychologists have been exploring the fact that willpower is a finite cognitive resource -- exercising it now means you have less of it later. People living in poverty, he points out, must constantly exercise their willpower, making continual and agonizing financial trade-offs between, say, food and rent. Is it possible, he asks, that the poor live in a state of continual will-powerlessness? Does poverty, in a sense, erode free will?
Psychologists and economists have been exploring one particular source of stress on the mind: finances. The level at which the poor have to exert financial self-control, they have suggested, is far lower than the level at which the well-off have to do so. Purchasing decisions that the wealthy can base entirely on preference, like buying dinner, require rigorous tradeoff calculations for the poor. As Princeton psychologist Eldar Shafir formulated the point in a recent talk, for the poor, “almost everything they do requires tradeoff thinking. It’s distracting, it’s depleting... and it leads to error.”
...This suggests that we need to rethink our approaches to poverty reduction. Many of our current anti-poverty efforts focus on access to health, educational, agricultural, and financial services. Now, it seems, we need to start treating willpower as a scarce and important resource as well.
Well-off societies, Holmes notes, use various "commitment products" to keep "willpower costs" low. Educational savings accounts are good examples: they take away the cognitive costs of trade-off decisions -- you don't have to agonize about whether you should buy a new washing machine or save for college. The real insight here is that the lowering of willpower costs in one place enables you to exert more willpower in other places. In the rich world, we enjoy willpower liquidity. In the poor world, people suffer from willpower scarcity. More here, from behavioral economists Abhijit Banerjee and Sendhil Mullainathan.
My work explores the relationship between new class identities and urban spaces.
With influences as diverse as Derrida and John Cage, new synergies are crafted from both explicit and implicit textures.
Ever since I was a child I have been fascinated by the ephemeral nature of meaning. What starts out as vision soon becomes corroded into a dialectic of temptation, leaving only a sense of nihilism and the prospect of a new reality.
As shifting phenomena become frozen through emergent and diverse practice, the viewer is left with an insight into the limits of our world.
This was a long time coming, but was worth the wait: the Arty Bollocks Generator writes your artist's statement for you!
Wikileaks is a success story right out of 1990s science fiction: An international team of computer hackers, led by a mysterious genius with a sophisticated name, uncovers government secrets and disseminates them on the Net. It's an extraordinary effort by an extraordinary group of people. Writing in Dissent, Sarah Leonard asks: What motivates them? Is it politics -- or just a love of the game?
To a degree, Leonard writes, Julian Assange and his crew are political crusaders. Wikileaks first burst onto the scene last spring with the disturbing "Collateral Murder" video, which showed U.S. soldiers firing, from helicopters, upon a group of Iraqis that included a pair of Reuters journalists and two children. (The images were captured by a targeting camera on the helicopter; the legality of the air strikes is the subject of an excellent post at The New Yorker's News Desk blog.) Presenting the video at a press conference, Assange clearly wanted to motivate political change. "He offered context and opinion," Leonard writes; "he tracked down family members of victims beforehand. It was a fully formed story, with its own politics and purpose." The success of the leak, in short, seemed to depend upon a larger, political outcome: a change in sentiment about the war in Iraq.
Since then, Leonard writes, Wikileaks has abandoned political showmanship in favor of mere "data dumping." To Leonard, that suggests that stealing the information is, for the Wikileakers, way more fun than trying to understand it, or use it for any specific purpose. Rank-and-file Wikileakers, Leonard argues, are really apolitical. Motivated by "the antiauthoritarian tendencies of Web 2.0 enthusiasts," they're pursuing "the same program they have pursued with music, film, video games, books: making the information free." Wikileaking, in other words, is an end in itself -- a vast endeavor under the sign of hacker virtues like data "openness." The important thing to remember, Leonard argues, is that information isn't, in itself, political. Political change requires acting on what you know. "The most visible gatherings of warm, offline bodies in reaction to WikiLeaks," she points out, "have been to protest the persecution of Julian Assange himself."
That's not to diminish the power of a video like "Collateral Murder." It provided a rare, unfiltered glimpse of the war in Iraq. Leonard's point is that Wikileaks has contributed mainly to the spectacle of politics. Whether it will contribute to its substance is something no one knows just yet.
He talks about the (amazing) albino crocodiles at the end of his (excellent) Cave of Forgotten Dreams -- and the crowd goes wild!
The crocodiles sound crazy on The Colbert Report, but they are incredible when they appear in the movie.
Economic crises are complicated, but they often boil down to one simple problem: debt. Writing in The American Prospect, the economist Robert Kuttner argues that we're approaching our debt the wrong way. Stalled economies get moving, he explains, when debts are forgiven. But instead of forgiving debts, Western governments have sided with creditors. As a result, our economies are trapped in "debtor's prison" -- paying creditors at the expense of productivity, and prioritizing a senseless bubble over a substantive future.
Kuttner's argument is simple. When economic bubbles burst, nations face a choice: "Either the creditor class prevails at the expense of everyone else, or governments find ways to reduce the debt burden so that the productive power of the economy can recover." Creditors, obviously, want their debts repaid, and view anything less than full repayment as "the end of economic civilization." It's not, though: Throughout economic history, governments have figured out ways to restructure unusual or "economically perverse" debt. The restructuring of debt after World War II, for example, didn't undermine confidence in business norms -- instead, it allowed Western economies to rebound and create new wealth.
We tend to think about debt in moral terms: we want to know who's responsible for it. Creditors want to blame borrowers, and vice versa. It's easy, but not always right or useful, Kuttner argues, to think about debt this way. In a bubble, everyone is responsible -- and the crucial fact to grasp, Kuttner says, is that debt crises create a struggle "between the claims of the past and the potential of the future." Businesses, Kuttner points out, often choose the future, by restructuring their debts in times of crisis. (That's what General Motors did: its creditors made compromises to keep the factories running.) Hypocritically, however, "the business elite look askance when others -- homeowners, small nations, the entire economic system -- seek relief" from extraordinary debt. This, Kuttner argues, is a double standard. Whether we're talking about businesses or about consumers, the goal should be the same: "orderly relief from past debt so that... productive enterprise is not needlessly destroyed."
Understanding the economic crisis in terms of a choice between the past and the future puts some facts into sharp relief. Take the recent resurgence in bank profits. As the economist Michael Konczal points out in a response to Kuttner, that resurgence in profits comes entirely from the past. "Those profits aren't the reward for effectively allocating capital to a recovering economy," Konczal writes; instead, they come from "milking the bad debts of the housing and credit bubbles" -- bubbles that the financial sector helped to create. The banks, in short, are still living off the bubble. That's why they're still seeing bubble-level profits.
The resurgence in bank profits, via Felix Salmon.
Thinking in terms of the past and the future also highlights the senselessness of widespread foreclosure. Today -- since government programs designed to slow or avert foreclosures have been half-hearted at best -- underwater homeowners are struggling to pay back huge, bubble-sized mortgages. One way to understand this is that, by honoring those crazy housing prices, we are favoring creditors over borrowers. Another is that we are favoring the past over the future, by demanding that the debts of the past be repaid, no matter how bizarre they may be, at the expense of future productivity.
Kuttner's article has spurred a lot of discussion among the nation's most dynamic economists (see, for example, the responses from Matt Yglesias, Paul Krugman, Yves Smith, Adam Levitin, and Steve Waldman, among others). It's struck a nerve because the rhetoric of austerity and belt-tightening has come to seem ideological, rather than pragmatic. Essentially, Kuttner is reminding us about a basic principle of crisis policy: When, in times of crisis, values drop catastrophically, debts must shift, too. You can read his article here.
Apple plans to build a new, UFO-shaped building in Cupertino, California. It's huge and circular, runs off its own generator, and is large enough for 12,000 employees. You can watch Steve Jobs -- clad in his usual superhero costume of black mock turtleneck and jeans, and with his patented presentation style in full effect -- present the new building to the Cupertino City Council here:
It's strangely fascinating: a very prickly industrial titan at work in an everyday setting, interacting with everyday people. [Via Business Insider.]
If you're of a certain age, then one of your razor-sharp childhood memories may well be of unwrapping an original, brick-sized Nintendo Game Boy and playing its debut game, Tetris. (You may also remember the Tetris theme music -- which, it turns out, is actually based on a sad Russian folk song, "Korobeiniki.") Since that time, millions of man-hours have been devoted to those rotating Tetriminoes.
They weren't, it turns out, wasted hours: Jeremy Fordham, writing at The Beautiful Brain, recaps some of the research showing that Tetris actually transforms your gray matter, the 'plastic' part of your brain that changes as you learn. The main piece of evidence is a 2009 study showing that just thirty minutes of Tetris a day can produce positive changes in the brain. As Fordham explains, it's all about "the Tetris effect":
When a person initially starts to play Tetris, their brain consumes a huge amount of glucose in order to solve its fast-paced puzzles. Through consistent and limited daily practice, the brain begins to consume less glucose to perform just as well, if not better, at Tetris. After a few months the brain becomes so efficient at playing the game that it requires only a very small amount of fuel to perform the game’s rapid puzzle work.
The 'thickening' happens because the gray matter neurons become more interconnected: 'thicker' neural networks process Tetris problems faster. This isn't the only neurological aspect of Tetris, either: In 2009, a study at the University of Oxford found that playing Tetris after a traumatic event might diminish subsequent traumatic flashbacks. Tetris, the lead author suggests, "specifically interferes with the way sensory memories are laid down in the period after trauma." I'll go out on a limb and speculate: Tetris is probably the sort of structured input that the brain responds to especially well. If only the rest of the world were as simple!
Inequality matters -- From Stanford University, twenty facts about American inequality that everyone should know.
The Talk-O-Meter -- A new iPhone app can tell voices apart, and produces a handy chart that shows who's dominating the conversation.
Open-hearted -- Philosopher Ronald Aronson discovers how life without God can have meaning, by having open-heart surgery.
Parting shots -- Fire a journalist, expect a fantastic parting shot in return. Slate collects the best of them.
[Image: Roberto Bolaño.]
A. C. Grayling, Richard Dawkins, Niall Ferguson, Christopher Ricks, Peter Singer -- together, they constitute an all-star team of professors from across the liberal arts. This week they’ve announced that they’ll be teaching at the same college: the New College of the Humanities, a higher-education startup in London. It will be headed by Grayling, a philosopher and logician, and backed by a group of venture capitalists and businessmen. The college promises to bring American-style liberal-arts education to Britain, at American prices: the tuition is £18,000 a year, or about $30,000. That's $20,000 cheaper than Harvard, but twice as expensive as Oxford and Cambridge.
A. C. Grayling.
New College seems to be aiming for something halfway between a research university and a liberal arts college. Details are still a little murky, but it looks as though the star-studded professors (none of whom have given up their posts at their current universities) will be giving lectures, supported by "subject conveners" who are, essentially, distinguished-but-ordinary professors. That structure is reminiscent of an American research university. The student body, however, will be very small -- according to The Guardian, only 200 students will be admitted the first year -- and a lot of the instruction will be built around a one-on-one tutorial system.
This is a cautiously innovative model. As Tyler Cowen points out, the plan seems to be to "rent illustrious names rather than paying the whole set of fixed costs." And the course offerings are subtly different, with required courses in Logical and Critical Thinking, Science Literacy, Applied Ethics, and Professional Skills. The college will offer financial aid, but the incoming students will certainly be wealthy: writing at RichardDawkins.net, Grayling argues that a good education just is expensive. "If you look at what UK universities charge overseas students," he writes, "and at fees at US Ivy League universities, you get an idea of the true cost of a high quality higher education." This is a sensitive subject, as public subsidies for higher education in Britain are being slashed left and right.
Here in the U.S., of course, everyone is freaking out about our broken higher-ed system. We have an expensive, privatized high end and an underfunded, public low end. Britain, meanwhile, has always aimed for the middle. Much of the commentary on the College is, inevitably, going to focus on how it's funded. But Grayling seems less interested in the business aspects, and more interested in the intellectual ones. The goal, he explains, is "bridging the CP Snow gap" -- the gap, that is, between the two cultures of science and the humanities. Modern universities, he argues, are on their way to creating "a society that knows nothing of history, cares nothing about literature, and never asks great questions about life, society and value"; at the same time, they have failed "to bring extended examples of serious, disciplined, evidence-and-reason-based scientific styles of thinking into the humanities curriculum." The New College of the Humanities, he explains, aims to do both.
So there are two things to watch here. First: will the College's innovations in high-end education work, socially and fiscally? Second: can its intellectual program succeed? Most commentators seem dubious on both counts, but there's no need to rush to judgment: higher education needs more experimentation, not less.
The global war on drugs has failed, with devastating consequences for individuals and societies around the world.... Vast expenditures on criminalization and repressive measures directed at producers, traffickers and consumers of illegal drugs have clearly failed to effectively curtail supply or consumption. Apparent victories in eliminating one source or trafficking organization are negated almost instantly by the emergence of other sources and traffickers.
The commissioners say that we need to "break the taboo on debate and reform" when it comes to drugs, placing a far larger emphasis on treatment, and experimenting with decriminalization. Countries need to "end the criminalization, marginalization and stigmatization of people who use drugs but who do no harm to others," and "encourage experimentation by governments with models of legal regulation of drugs to undermine the power of organized crime."
It's an extraordinary report, created by an extraordinary group of commissioners: the list includes Kofi Annan, Paul Volcker, George Shultz, Ernesto Zedillo, Mario Vargas Llosa, Carlos Fuentes, and Richard Branson. It provides a largely international perspective. The war looks just as bad, however, from the domestic point of view: just listen to David Simon, the creator of The Wire:
I would decriminalize drugs in a heartbeat. I would put all the interdiction money, all the incarceration money, all the enforcement money, all of the pretrial, all the prep, all of that cash, I would hurl it as fast as I could into drug treatment and job training and jobs programs. I would rather turn these neighborhoods inward with jobs programs.... You talk honestly with some of the veteran and smarter detectives in Baltimore, the guys who have given their career to the drug war, including, for example, Ed Burns, who was a drug warrior for twenty years, and they’ll tell you, this war’s lost. This is all over but the shouting and the tragedy and the waste. And yet there isn’t a political leader with the stomach to really assess it for what it is.
Part of what makes the war on drugs so problematic, Simon argues, is that the "war" metaphor forecloses thinking. No one wants to lose a war. In fact, though, the drug problem is unique, even bizarre, and can't be thought about in terms of other sorts of problems: it's not a war, but a Franken-problem that's partly economic, partly epidemiological, partly social. If we dealt with the drug problem without the fog of war, we might be more adventurous in coming up with solutions.
I just can't get enough of time-lapse videos lately! Via Jason Kottke, here's an incredible video of thunderclouds forming and dissipating. It was filmed by the Australian artist Murray Fredericks, who specializes in capturing nature at its most minimal.
If you're curious about clouds, here's an interesting take on them from Steve Grand, the artificial life researcher. In Creation: Life and How to Make It, Grand explains that clouds aren't really things -- instead, it makes more sense to think of them as regions of space in which a cooler climate prevails. Water vapor enters the cloud and condenses, then evaporates when it leaves the cloud. To us, it looks as though a cloud is like a puff of steam -- a unique and consistent gathering of vapor particles moving through space. In fact, though, the 'contents' of a cloud are constantly changing as the cloud-space moves through the sky. Imagine the pool of light a flashlight makes as you shine it around a dark room: the pool of light moves, while its contents change:
The puffy, white, cumulus clouds that you see on a summer's day are constantly changing.... The vapour condenses at a certain height, moves up through the cloud and then re-evaporates as it begins to fall down the sides of the mushroom of convecting air. This is why clouds are such a paradox: we know that they contain many tons of water, and yet they float lazily over our heads as if they weigh nothing at all. In a very real sense, they do weigh nothing at all, since a cloud is just a name we give to a region of space, through which moist air passes and momentarily renders its water content visible.
Clouds are not unique in this respect. To take a frivolous example, if you dug a hole in the ground, and then repeatedly removed earth from one side and added it to the other, the hole would move along. Is it still the same hole?
"You," Grand argues, "are like a cloud: something that persists over long periods, while simultaneously being in flux. Matter flows from place to place and momentarily comes together to be you. Whatever you are, therefore, you are not the stuff of which you are made."
Justin Wilkinson, a NASA scientist, narrates this spellbinding compilation of videos shot by astronauts using their personal video cameras. These are the things they look at when they watch the Earth from above:
William the Marshal at work.
Glitzy, celebrity knights -- it sounds like a Monty Python skit. In fact, as Nigel Saul argues in History Today, the cult of celebrity, which we think of as uniquely modern, likely began with the chivalric tradition. In particular, Saul singles out William the Marshal, a twelfth-century knight who excelled in medieval tournaments and who may have been the first celebrity. William had "that certain glitziness which underlies and informs a relationship between the celebrity and an admiring audience":
Each year he and his friends would make their way round the tourneying grounds of France, practising their fighting skills, gaining in experience and winning names for themselves as they went. The Marshal’s exceptional gifts brought him to the notice of Henry II’s son, Henry, the Young King (1155-83), whose service he entered and with whom he achieved great things.... The Marshal made a point of playing to win. Wherever he went he was ruthless on the field, mastering tactics (such as grabbing his opponent’s horse’s reins) that eluded others. On the death of the Young King in 1183 he gained a place in the service of Henry’s brother Richard -- Richard the Lionheart (r. 1189-99) -- and on the latter’s accession scooped up yet more rewards. In 1189 he was awarded the hand in marriage of Isabel de Clare, heiress of the earldom of Pembroke, one of the richest inheritances in the kingdom. A great landed magnate, he was now on the road that would take him to the regency of England on the accession of the young Henry III in 1216.
The key fact is that everyone loved William: He was "someone whose appeal to the public transcends the sum of his or her deeds and achievements and turns as much on their personality and personal story." "Celebrity," Saul writes, "was found in that taste for showmanship, that touch of populism which gave the person who attained it a lasting place in the hearts of an adoring public.... It was in the Middle Ages... that the public adoration of famous men first bordered on what today we would recognise as ‘celebrity.’"
David Eagleman's new book, Incognito, looks more or less like every other neuroscience book. It has a catchy, slightly lurid subtitle (The Secret Lives of the Brain); it's suffused with the overpowering, gee-whiz rhetoric of science writing (everything is "rich," "wondrous," "surprising," "magical," "awe-inspiring" and so on); and it piles on the optical illusions, head injuries, and gotcha! psychology experiments that have become staples of the neuroscience genre.
All the fluff, however, hides a bracing surprise. Before Incognito draws to a close, Eagleman turns to his real subject: not, in fact, "the secret lives of the brain," but the deep, disturbing questions neuroscience raises about crime, punishment, and the organization of society. Neuroscience, Eagleman argues, by revealing the extent to which different people have different capacities for self-control and human connection, will force us to give up on "the myth of human equality"; in the process, it will upend the legal system, "which is built partially upon the premise that humans are all equal before the law."
Eagleman is a neuroscientist, specializing in the perception of time, and in synesthesia. He is also a prolific, imaginative writer: His previous book, Sum, consisted of forty neuroscientifically informed vignettes about the afterlife; it's been translated into 23 languages. In Incognito, Eagleman begins from a simple premise and imagines, almost science-fictionally, its conclusions. The premise is that much of what you think, choose, and do is driven by unconscious processes. Many of your preferences, thoughts, and intentions form without your conscious participation: they aren't the product of "you," but rather of "your brain."
In itself, of course, this can't be too surprising. Where, after all, would your thoughts and personality come from, if not from your brain? (It would be truly astonishing to find out that they come from somewhere else!) Meanwhile, we use our bodies all the time to do things, and don't find it at all unsettling. You use your legs, for instance, to walk from place to place -- and yet you would never say that it's really your legs that do the walking, rather than yourself. We use our legs to walk; in just the same way, we use our brains to choose, think, and act. The fact that brains are involved doesn't, in itself, make those thoughts any less our own.
The trouble, then, isn't that, as Incognito sometimes puts it, your brain is in charge rather than you. It's that you rely on your brain to do everything important, and, as Eagleman rather bluntly puts it, "all brains are not created equal." Brains are shaped by genes. And they change over time; they can be cultivated by education and experience, or ravaged by abuse or disease. It's usually obvious whose body is stronger or weaker: that's why heavyweight boxers don't fight lightweights. Neural inequality, however, has not been as obvious. Neuroscience, Eagleman argues, is about to change that.
Today, we acknowledge neural inequality in only the crudest ways: We protect criminals under 18, or with IQs of less than 70, from capital punishment. But, Eagleman writes, "As neuroscience improves, we will have a better ability to understand people along a spectrum, rather than in crude, binary categories. And this will allow us to tailor sentencing and rehabilitation for the individual rather than maintaining the pretense that all brains respond to the same incentives and deserve the same punishments."
Eventually, Eagleman argues, we will have to acknowledge that "criminal activity itself should be taken as evidence of brain abnormality." Eagleman envisions a time when we will sentence criminals based on their neural "modifiability," giving harsher sentences only to those who could, conceivably, change their behavior. At the same time, we will stop thinking about crime in terms of "blameworthiness." If someone couldn't have done otherwise than commit a crime, we might sequester him without punishing him, and without holding him, in a moral sense, responsible. Many criminals might turn out to be like Charles Whitman, who, taking aim from a tower at the University of Texas, killed 16 people and wounded 32 more. Whitman suspected that he was suffering from a mental illness, and requested, in a suicide note, that an autopsy be performed on his brain. Sure enough, the autopsy revealed a brain tumor; it had damaged Whitman's amygdala, which, Eagleman explains, "is involved in emotional regulation, especially as regards fear and aggression."
Incognito, obviously, is far from the last word on this subject. Eagleman seems too comfortable with a brave new world in which well-intentioned scientists determine just how much an individual can be expected to change, grow, and take responsibility for his own actions. People care, for profound reasons, about ideas like selfhood and equality, and can defend them in sophisticated, rigorous ways; the "myth of human equality" is not like one of the dubious "intuitions" Eagleman spends much of the book so joyfully overturning. But Incognito does the right thing by diving straight into the deep end and trying to swim. Eagleman, by imagining the future so vividly, puts into relief just how challenging neuroscience is, and will be.
David Eagleman will be appearing at Harvard Book Store this Friday.