Recent highlights from the ideas blog

By Joshua Rothman
August 14, 2011


Why we’re so darn nice

Human generosity is a bit of a puzzle for evolutionary biologists. There are sound reasons for people to be generous with one another - they may be part of the same family, or work together. Over the years, though, experiments have found that people are far more generous to strangers than those factors suggest they should be. Why are we so nice?

The evolutionary psychologists Andrew Delton, Max Krasnow, Leda Cosmides, and John Tooby think they’ve found the answer: Essentially, we’re generous with strangers because each meeting is a kind of wager about whether we’ll meet in the future. By modeling human interactions in a computer, the group has shown that it makes more sense to bet we’ll meet a stranger again - and, therefore, to be nice. Their paper has just been published in the Proceedings of the National Academy of Sciences.

To think through the problem, the researchers programmed a computer to simulate a large number of “agents,” individual people capable of interacting with one another. Some agents chose to cooperate when they met, others to cheat. Cheaters got short-term gains but, having cheated, were barred from interacting with the cheatee again. Cooperators, on the other hand, were allowed to have future interactions with those they met.

As in real life, each meeting meant making a guess: Will this person matter in my life? It’s a guess because, for many encounters, you simply can’t know. Cut in line at the movies, and you get in sooner - but you can never be totally sure that, next week, the guy behind you won’t be interviewing you for a job. Repeat meetings, the group points out, were even more likely in our evolutionary past, when we tended to live in smaller communities.

The researchers ran their simulation for tens of thousands of generations to figure out where the human-generosity meter would naturally be set. They found that, over time, it makes more sense to adopt a general attitude of generosity, in the hope that paying it forward now will pay back later. This suggests, they write, that “human generosity, far from being a thin veneer of cultural conditioning atop a Machiavellian core, may turn out to be a bedrock feature of human nature.”
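The setup described above - agents who either cooperate or cheat, with cheaters collecting a one-off gain but losing all future interactions - can be illustrated with a toy simulation. This is a sketch only; the payoffs, the repeat-meeting probability, and the selection scheme here are illustrative assumptions, not the model or parameters from the PNAS paper.

```python
import random

# Toy version of the agent-based setup described above. All numbers are
# illustrative assumptions, not the parameters from the PNAS paper.
REPEAT_PROB = 0.3   # chance any given "stranger" will be met again
CHEAT_GAIN = 1.0    # one-off payoff for cheating
COOP_GAIN = 0.6     # payoff per cooperative interaction
FUTURE_ROUNDS = 5   # further interactions a repeat partner allows

def encounter_payoff(cooperates, rng):
    """Payoff from one encounter with a stranger."""
    if not cooperates:
        return CHEAT_GAIN           # short-term gain; partner is lost
    payoff = COOP_GAIN
    if rng.random() < REPEAT_PROB:  # the wager pays off
        payoff += COOP_GAIN * FUTURE_ROUNDS
    return payoff

def evolve(pop_size=1000, generations=200, seed=0):
    rng = random.Random(seed)
    # Each agent is just a strategy: True = cooperate with strangers.
    pop = [rng.random() < 0.5 for _ in range(pop_size)]
    for _ in range(generations):
        fitness = [encounter_payoff(strategy, rng) for strategy in pop]
        # Fitness-proportional reproduction into the next generation.
        pop = rng.choices(pop, weights=fitness, k=pop_size)
    return sum(pop) / pop_size      # fraction of cooperators

share = evolve()
```

With these made-up numbers, betting on a future meeting gives cooperators an expected payoff of 0.6 + 0.3 × (5 × 0.6) = 1.5, beating the cheater’s one-off 1.0, so cooperation spreads through the simulated population - mirroring, in miniature, the qualitative result the researchers report.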

Down and out in 1933 and 2011

George Orwell’s early masterpiece, “Down and Out in Paris and London,” was published nearly 80 years ago, in 1933. To write the book, Orwell lived on the streets in two of Europe’s greatest cities. Writing for the BBC, Emma Jane Kirby retraces Orwell’s steps and finds that poverty in those cities hasn’t changed. In fact, today’s poor live, work, and suffer in much the same ways as in Orwell’s time.

In Paris, Kirby meets Modi, a plongeur (or dishwasher) from Morocco, who lives nearly the same life Orwell lived when he worked as a plongeur at a hotel restaurant in the early ’30s, sleeping four hours a night and making subsistence wages. She visits with Madame Jolivet, who stays off the streets by living in an infested garret for which her landlady charges an extortionate $2,484; while they talk, the landlady listens through the keyhole.

Orwell wrote that poverty “tangles you in a net of lies,” and Kirby finds this just as true for today’s poor, who struggle to keep up appearances and retain a little dignity. There’s only one obvious difference between then and now: In Orwell’s time, the very poor couldn’t afford to drink; today, they can.

Ultimately, Kirby concludes, in Orwell’s time and in ours, “The chief cruelty of homelessness is that it doesn’t dull the sensibilities of the man sleeping in the doorway but rather spitefully heightens them.”

A robot-friendly world

As robots become more commonplace, how will they get around and figure out what to do? According to Matt Jones, a designer and engineer at the consulting firm BERG, we’ll need to build a robot-readable world to help them.

Robots don’t see the world the same way we do: Their vision systems often pick out different sorts of details, and even see in different wavelengths. In some ways, robot vision is more detailed than ours; in others, it’s sketchier. It’s full of “strange opportunities and constraints”: infrared on the one hand, problems with depth perception on the other.

So, as more and more robots make it out into the real world, we can look forward to a changing built environment, one that communicates information to computers. In some cases this means machine-readable symbols, like bar codes; in other cases it might happen in subtle and even aesthetic ways, often below the radar of human attention.

Joshua Rothman is a graduate student and teaching fellow in the Harvard English department and an instructor in public policy at the Harvard Kennedy School of Government. He teaches novels and political writing.
