
Computing: It's Complicated!

Posted by Josh Rothman  August 16, 2011 01:59 PM


Most of us have a rough idea of how computers work -- we know that microchips and transistors are involved, somehow or other. The more profound question is: Why do computers work? Computers are essentially big adding machines equipped with virtual scratch pads; how is it that such fundamentally simple devices can fly airplanes, recognize faces, and compete on Jeopardy?

Computers can do all these things because the world, it turns out, contains very few problems that can't, ultimately, be broken down into a series of simple mathematical steps. The study of which processes can be worked out step-by-step in this way is called the theory of computability; it was developed by Alan Turing, Kurt Gödel, Alonzo Church, and other mathematical geniuses in the 1930s, and it's the foundation of the computerized world we live in today. In the intervening decades, though, computability has started to look more complicated. In a new paper, "Why Philosophers Should Care About Computational Complexity," Scott Aaronson, a computer scientist at MIT, explains why computational complexity theorists believe that some problems are practically uncomputable. He lays out some of the implications this might have for some of the biggest questions in philosophy, physics, biology, and economics. The paper is to be published in a forthcoming volume from the MIT Press, and it is a relentlessly fascinating -- and challenging -- read.


[Photo: Gödel and Einstein.]

Computational complexity theorists, Aaronson explains, agree that almost all problems are computable, given infinite time in which to do the computing. The key fact, though, is that some problems take more or less forever to solve, and therefore aren't computable in practice. Time matters when it comes to computation -- if a problem will take longer than the age of the Universe to compute, then it's not really computable in any meaningful sense of the term.
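
To get a feel for the arithmetic behind "longer than the age of the Universe," here is a back-of-the-envelope sketch (the machine speed and the brute-force framing are illustrative assumptions, not anything from Aaronson's paper): even a machine performing a billion billion steps per second is outrun by an exhaustive search of 2^n possibilities once n creeps past a hundred or so.

    # Back-of-the-envelope arithmetic for "longer than the age of the Universe":
    # how large can a brute-force search of 2**n possibilities get before even a
    # very fast (hypothetical) machine runs out of cosmic time?

    STEPS_PER_SECOND = 10**18                              # assumed: a billion billion steps per second
    AGE_OF_UNIVERSE_SECONDS = 13.8e9 * 365.25 * 24 * 3600  # roughly 13.8 billion years

    n = 1
    while 2**n / STEPS_PER_SECOND <= AGE_OF_UNIVERSE_SECONDS:
        n += 1

    print(f"Exhaustively checking 2**{n} possibilities already takes "
          f"longer than the Universe has existed.")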

Modern cryptography, which we use daily to send emails and shop online, is based on the fact that some problems take nearly forever to solve. Many modern codes rely on prime factorization, which you may remember from the SAT, because it's actually a super-complicated, practically uncomputable problem. Take a smallish number, like 1,591. If I give you its prime factors, 37 and 43, you can quickly verify that, multiplied together, they give you the target number. But try the problem the other way -- by asking "What are the prime factors of 1,591?" -- and you have a much lengthier problem to solve. In fact, the number of steps required to find the prime factors increases exponentially as you add digits, even though it remains pretty easy to multiply the two factors together. That, roughly speaking, is how many modern codes work: I can quickly verify that you have the right factors, or "keys"; at the same time, it's practically impossible for you to come up with the keys yourself.
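
To make that asymmetry concrete, here is a minimal Python sketch (the helper names are invented for illustration, and real cryptosystems use far larger numbers and far cleverer methods than trial division): verifying a proposed factorization takes a single multiplication, while finding the factors from scratch takes dozens of steps even for a number as small as 1,591 -- and the gap widens dramatically as the numbers grow.

    # A toy illustration of the verify-versus-find asymmetry: checking a proposed
    # factorization is one multiplication, while finding the factors by naive
    # trial division takes many more steps.

    def verify_factors(n, p, q):
        """One multiplication: do p and q really multiply out to n?"""
        return p * q == n

    def factor_by_trial_division(n):
        """Search for a factor the slow way, counting the candidates we try."""
        steps = 0
        d = 2
        while d * d <= n:
            steps += 1
            if n % d == 0:
                return d, n // d, steps
            d += 1
        return n, 1, steps  # no divisor found: n is prime

    print(verify_factors(1591, 37, 43))    # True, instantly
    print(factor_by_trial_division(1591))  # (37, 43, 36): 36 candidates tried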

To computational complexity theorists, cryptography appears to rely on a fundamental distinction in the wider world of math. Increasingly, it looks as though there are two basic kinds of problems. Some problems, like multiplying two numbers together, can be solved in "polynomial time" (with the grade-school method, a four-digit multiplication takes only about four times as long to compute as a two-digit one). Other problems, like prime factorization, seem to require "exponential time" (each digit you add to the problem multiplies the number of steps, so the total grows explosively). To grasp the distinction, Aaronson writes, think of the difference "between writing down a thousand-digit number and counting to that number." It takes only one step to write down a fifth digit -- but tens of thousands of new steps to get there by counting.
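
The arithmetic behind that comparison is easy to tabulate for yourself; this small sketch just counts both ways (one step per digit written, versus counting up through every number with that many digits) and shows the second column exploding while the first creeps along.

    # Aaronson's "write it down vs. count to it" comparison, in numbers: writing
    # a d-digit number takes d steps, while counting to it takes on the order of
    # 10**d steps -- exponential in the number of digits.

    for digits in range(1, 8):
        steps_to_write = digits            # one step per digit
        steps_to_count = 10**digits - 1    # count up to the largest d-digit number
        print(f"{digits} digits: write in {steps_to_write} steps, "
              f"count in {steps_to_count:,} steps")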

Are these two kinds of problems fundamentally different? Or is there a way to make the impossible problems possible? That's the question theorists wrestle with today: They hope to arrive at a fundamental proof that either bridges the gap between polynomial and exponential problems, or shows that the gap can't be bridged. If mathematicians could turn the exponential problems into polynomial ones, they could crack all the codes on the Internet (and, maybe, use computers to discover mathematical proofs). On the other hand, if they could prove that exponential problems must forever remain that way, they'd show that some problems are fundamentally beyond the reach of ordinary computers -- though quantum computers, which exploit the strange mathematics of quantum mechanics, might one day crack a few of them, prime factorization included.

The answer to this question is important, for practical and philosophical reasons. Practically, many exponential problems pose challenges to real-world endeavors, like understanding protein synthesis, building warehouses, and designing microchips. And philosophically, it'd be nice to know whether there really are practically uncomputable problems -- problems, in essence, that can't be solved by brute computing power, but which must be addressed, as Aaronson has written elsewhere, through "creative leaps."



 