Which is smarter at chess–humans or computers?

This brings us back to our original question here: Which is smarter at chess–humans or computers?

Neither. It’s the two together, working side by side.

We’re all playing advanced chess these days. We just haven’t learned to appreciate it.

Our tools are everywhere, linked with our minds, working in tandem. Search engines answer our most obscure questions; status updates give us an ESP-like awareness of those around us; online collaborations let far-flung collaborators tackle problems too tangled for any individual. We’re becoming less like Rodin’s Thinker and more like Kasparov’s centaurs. This transformation is rippling through every part of our cognition–how we learn, how we remember, and how we act upon that knowledge emotionally, intellectually, and politically. As with Cramton and Stephen, these tools can make even the amateurs among us radically smarter than we’d be on our own, assuming (and this is a big assumption) we understand how they work. At their best, today’s digital tools help us see more, retain more, communicate more. At their worst, they leave us prey to the manipulation of the toolmakers. But on balance, I’d argue, what is happening is deeply positive.

In a sense, this is an ancient story. The “extended mind” theory of cognition argues that the reason humans are so intellectually dominant is that we’ve always outsourced bits of cognition, using tools to scaffold our thinking into ever-more-rarefied realms. Printed books amplified our memory. Inexpensive paper and reliable pens made it possible to externalize our thoughts quickly. Studies show that our eyes zip around the page while performing long division on paper, using the handwritten digits as a form of prosthetic short-term memory. “These resources enable us to pursue manipulations and juxtapositions of ideas and data that would quickly baffle the unaugmented brain,” as Andy Clark, a philosopher of the extended mind, writes.

Granted, it can be unsettling to realize how much thinking already happens outside our skulls. Culturally, we revere the Rodin ideal–the belief that genius breakthroughs come from our gray matter alone. The physicist Richard Feynman once got into an argument about this with the historian Charles Weiner. Feynman understood the extended mind; he knew that writing his equations and ideas on paper was crucial to his thought. But when Weiner looked over a pile of Feynman’s notebooks, he called them a wonderful “record of his day-to-day work.” No, no, Feynman replied testily. They weren’t a record of his thinking process. They were his thinking process:

“I actually did the work on the paper,” he said.

“Well,” Weiner said, “the work was done in your head, but the record of it is still here.”

“No, it’s not a record, not really. It’s working. You have to work on paper and this is the paper. Okay?”

Every new tool shapes the way we think, as well as what we think about. The printed word helped make our cognition linear and abstract, along with vastly enlarging our stores of knowledge. Newspapers shrank the world; then the telegraph shrank it even more dramatically. With every innovation, cultural prophets bickered over whether we were facing a technological apocalypse or a utopia. Depending on which Victorian-age pundit you asked, the telegraph was either going to usher in an era of world peace (“It is impossible that old prejudices and hostilities should longer exist,” as Charles F. Briggs and Augustus Maverick intoned) or drown us in a Sargasso of idiotic trivia (“We are eager to tunnel under the Atlantic … but perchance the first news that will leak through into the broad, flapping American ear will be that the Princess Adelaide has the whooping cough,” as Thoreau opined). Neither prediction was quite right, of course, yet neither was quite wrong. The one thing that both apocalyptics and utopians understand and agree upon is that every new technology pushes us toward new forms of behavior while nudging us away from older, familiar ones. Harold Innis–the lesser-known but arguably more interesting intellectual midwife of Marshall McLuhan–called this the bias of a new tool. Living with new technologies means understanding how they bias everyday life.

What are the central biases of today’s digital tools? There are many, but I see three big ones that have a huge impact on our cognition. First, they allow for prodigious external memory: smartphones, hard drives, cameras, and sensors routinely record more information than any tool before them. We’re shifting from a stance of rarely recording our ideas and the events of our lives to doing it habitually. Second, today’s tools make it easier for us to find connections–between ideas, pictures, people, bits of news–that were previously invisible. Third, they encourage a superfluity of communication and publishing. This last feature has many surprising effects that are often ill understood. Any economist can tell you that when you suddenly increase the availability of a resource, people do more things with it, which also means they do increasingly unpredictable things. As electricity became cheap and ubiquitous in the West, its role expanded from things you’d expect–like night-time lighting–to the unexpected and seemingly trivial: battery-driven toy trains, electric blenders, vibrators. The superfluity of communication today has produced everything from a rise in crowd-organized projects like Wikipedia to curious new forms of expression: television-show recaps, map-based storytelling, discussion threads that spin out of a photo posted to a smartphone app, Amazon product-review threads wittily hijacked for political satire. Now, none of these three digital biases is immutable, because they’re the product of software and hardware, and can easily be altered or ended if the architects of today’s tools (often corporate and governmental) decide to regulate the tools or find they’re not profitable enough. But right now, these big effects dominate our current and near-term landscape.

In one sense, these three shifts–infinite memory, dot connecting, explosive publishing–are screamingly obvious to anyone who’s ever used a computer. Yet they also somehow constantly surprise us by producing ever-new “tools for thought” (to use the writer Howard Rheingold’s lovely phrase) that upend our mental habits in ways we never expected and often don’t apprehend even as they take hold. Indeed, these phenomena have already woven themselves so deeply into the lives of people around the globe that it’s difficult to stand back and take account of how much things have changed and why. While [here I map] out what I call the future of thought, it’s also frankly rooted in the present, because many parts of our future have already arrived, even if they are only dimly understood. As the sci-fi author William Gibson famously quipped: “The future is already here–it’s just not very evenly distributed.” This is an attempt to understand what’s happening to us right now, the better to see where our augmented thought is headed. Rather than dwell in abstractions, like so many marketers and pundits–not to mention the creators of technology, who are often remarkably poor at predicting how people will use their tools–I focus more on the actual experiences of real people.

To provide a concrete example of what I’m talking about, let’s take a look at something simple and immediate: my activities while writing the pages you’ve just read.



As I was working, I often realized I couldn’t quite remember a detail and discovered that my notes were incomplete. So I’d zip over to a search engine. (Which chess piece did Deep Blue sacrifice when it beat Kasparov? The knight!) I also pushed some of my thinking out into the open: I blogged admiringly about the Spanish chess-playing robot from 1915, and within minutes commenters offered smart critiques. (One pointed out that the chess robot wasn’t that impressive because it was playing an endgame that was almost impossible to lose: the robot started with a rook and a king, while the human opponent had only a king.) While reading Kasparov’s book How Life Imitates Chess on my Kindle, I idly clicked on “popular highlights” to see what passages other readers had found interesting–and wound up becoming fascinated by a section on chess strategy I’d only lightly skimmed myself. To understand centaur play better, I read long, nuanced threads on chess-player discussion groups, effectively eavesdropping on conversations of people who know chess far better than I ever will. (Chess players who follow the new form of play seem divided–some think advanced chess is a grim sign of machines’ taking over the game, and others think it shows that the human mind is much more valuable than computer software.) I got into a long instant-messaging session with my wife, during which I realized that I’d explained the gist of advanced chess better than I had in my original draft, so I cut and pasted that explanation into my notes. As for the act of writing itself? Like most writers, I constantly have to fight the procrastinator’s urge to meander online, idly checking Twitter links and Wikipedia entries in a dreamy but pointless haze–until I look up in horror and realize I’ve lost two hours of work, a missing-time experience redolent of a UFO abduction. So I’d switch my word processor into full-screen mode, fading my computer desktop to black so I could see nothing but the page, giving me temporary mental peace.

[Let’s] explore each of these trends. First off, there’s the emergence of omnipresent computer storage, which is upending the way we remember, both as individuals and as a culture. Then there’s the advent of “public thinking”: the ability to broadcast our ideas and the catalytic effect that has both inside and outside our minds. We’re becoming more conversational thinkers–a shift that has been rocky, not least because everyday public thought uncorks the incivility and prejudices that are commonly repressed in face-to-face life. But at its best (which, I’d argue, is surprisingly often), it’s a thrilling development, reigniting ancient traditions of dialogue and debate. At the same time, there’s been an explosion of new forms of expression that were previously too expensive for everyday thought–like video, mapping, or data crunching. Our social awareness is shifting, too, as we develop ESP-like “ambient awareness,” a persistent sense of what others are doing and thinking. On a social level, this expands our ability to understand the people we care about. On a civic level, it helps dispel traditional political problems like “pluralistic ignorance,” catalyzing political action, as in the Arab Spring.

Are these changes good or bad for us? If you asked me twenty years ago, when I first started writing about technology, I’d have said “bad.” In the early 1990s, I believed that as people migrated online, society’s worst urges might be uncorked: pseudonymity would poison online conversation, gossip and trivia would dominate, and cultural standards would collapse. Certainly some of those predictions have come true, as anyone who’s wandered into an angry political forum knows. But the truth is, while I predicted the bad stuff, I didn’t foresee the good stuff. And what a torrent we have: Wikipedia, a global forest of eloquent bloggers, citizen journalism, political fact-checking–or even the way status-update tools like Twitter have produced a renaissance in witty, aphoristic, haikuesque expression. If [I accentuate] the positive, that’s in part because we’ve been so flooded with apocalyptic warnings of late. We need a new way to talk clearly about the rewards and pleasures of our digital experiences–one that’s rooted in our lived experience and also detangled from the hype of Silicon Valley.

The other thing that makes me optimistic about our cognitive future is how much it resembles our cognitive past. In the sixteenth century, humanity faced a printed-paper wave of information overload–with the explosion of books that began with the codex and went into overdrive with Gutenberg’s movable type. As the historian Ann Blair notes, scholars were alarmed: How would they be able to keep on top of the flood of human expression? Who would separate the junk from what was worth keeping? The mathematician Gottfried Wilhelm Leibniz bemoaned “that horrible mass of books which keeps on growing,” which would doom the quality writers to “the danger of general oblivion” and produce “a return to barbarism.” Thankfully, he was wrong. Scholars quickly set about organizing the new mental environment by clipping their favorite passages from books and assembling them into huge tomes–florilegia, bouquets of text–so that readers could sample the best parts. They were basically blogging, going through some of the same arguments modern bloggers go through. (Is it enough to clip a passage, or do you also have to verify that what the author wrote was true? It was debated back then, as it is today.) The past turns out to be oddly reassuring, because a pattern emerges. Each time we’re faced with bewildering new thinking tools, we panic–then quickly set about deducing how they can be used to help us work, meditate, and create.



History also shows that we generally improve and refine our tools to make them better. Books, for example, weren’t always as well designed as they are now. In fact, the earliest ones were, by modern standards, practically unusable–often devoid of the navigational aids we now take for granted, such as indexes, paragraph breaks, or page numbers. It took decades–centuries, even–for the book to be redesigned into a more flexible cognitive tool, as suitable for quick reference as it is for deep reading. This is the same path we’ll need to tread with our digital tools. It’s why we need to understand not just the new abilities our tools give us today, but where they’re still deficient and how they ought to improve.

I have one caveat to offer. If you were hoping to read about the neuroscience of our brains and how technology is “rewiring” them, [I] will disappoint you.

This goes against the grain of modern discourse, I realize. In recent years, people interested in how we think have become obsessed with our brain chemistry. We’ve marveled at the ability of brain scanning–picturing our brain’s electrical activity or blood flow–to provide new clues as to what parts of the brain are linked to our behaviors. Some people panic that our brains are being deformed on a physiological level by today’s technology: spend too much time flipping between windows and skimming text instead of reading a book, or interrupting your conversations to read text messages, and pretty soon you won’t be able to concentrate on anything–and if you can’t concentrate on it, you can’t understand it either. In his book The Shallows, Nicholas Carr eloquently raised this alarm, arguing that the quality of our thought, as a species, rose in tandem with the ascendance of slow-moving, linear print and began declining with the arrival of the zingy, flighty Internet. “I’m not thinking the way I used to think,” he worried.

I’m certain that many of these fears are warranted. It has always been difficult for us to maintain mental habits of concentration and deep thought; that’s precisely why societies have engineered massive social institutions (everything from universities to book clubs and temples of worship) to encourage us to keep it up. It’s part of why only a relatively small subset of people become regular, immersive readers, and part of why an even smaller subset go on to higher education. Today’s multitasking tools really do make it harder than before to stay focused during long acts of reading and contemplation. They require a high level of “mindfulness”–paying attention to your own attention. While I don’t dwell on the perils of distraction [here], the importance of being mindful resonates throughout these pages. One of the great challenges of today’s digital thinking tools is knowing when not to use them, when to rely on the powers of older and slower technologies, like paper and books.

That said, today’s confident talk by pundits and journalists about our “rewired” brains has one big problem: it is very premature. Serious neuroscientists agree that we don’t really know how our brains are wired to begin with. Brain chemistry is particularly mysterious when it comes to complex thought, like memory, creativity, and insight. “There will eventually be neuroscientific explanations for much of what we do; but those explanations will turn out to be incredibly complicated,” as the neuroscientist Gary Marcus pointed out when critiquing the popular fascination with brain scanning. “For now, our ability to understand how all those parts relate is quite limited, sort of like trying to understand the political dynamics of Ohio from an airplane window above Cleveland.” I’m not dismissing brain scanning; indeed, I’m confident it’ll be crucial in unlocking these mysteries in the decades to come. But right now the field is so new that it is rash to draw conclusions, either apocalyptic or utopian, about how the Internet is changing our brains. Even Carr, the most diligent explorer in this area, cited only a single brain-scanning study that specifically probed how people’s brains respond to using the Web, and those results were ambiguous.

The truth is that many healthy daily activities, if you scanned the brains of people participating in them, might appear outright dangerous to cognition. Over recent years, professor of psychiatry James Swain and teams of Yale and University of Michigan scientists scanned the brains of new mothers and fathers as they listened to recordings of their babies’ cries. They found brain circuit activity similar to that in people suffering from obsessive-compulsive disorder. Now, these parents did not actually have OCD. They were just being temporarily vigilant about their newborns. But since the experiments appeared to show the brains of new parents being altered at a neural level, you could write a pretty scary headline if you wanted: BECOMING A PARENT ERODES YOUR BRAIN FUNCTION! In reality, as Swain tells me, it’s much more benign. Being extra fretful and cautious around a newborn is a good thing for most parents: Babies are fragile. It’s worth the trade-off. Similarly, living in cities–with their cramped dwellings and pounding noise–stresses us out on a straightforwardly physiological level and floods our system with cortisol, as I discovered while researching stress in New York City several years ago. But the very urban density that frazzles us mentally also makes us 50 percent more productive, and more creative, too, as Edward Glaeser argues in Triumph of the City, because of all those connections between people. This is “the city’s edge in producing ideas.” The upside of creativity is tied to the downside of living in a sardine tin, or, as Glaeser puts it, “Density has costs as well as benefits.” Our digital environments likely offer a similar push and pull. We tolerate their cognitive hassles and distractions for the enormous upside of being connected, in new ways, to other people.

I want to examine how technology changes our mental habits, but for now, we’ll be on firmer ground if we stick to what’s observably happening in the world around us: our cognitive behavior, the quality of our cultural production, and the social science that tries to measure what we do in everyday life. In any case, I won’t be talking about how your brain is being “rewired.” Almost everything rewires it. … The brain you had before you read this paragraph? You don’t get that brain back. I’m hoping the trade-off is worth it.

The rise of advanced chess didn’t end the debate about man versus machine, of course. In fact, the centaur phenomenon only complicated things further for the chess world–raising questions about how reliant players were on computers and how their presence affected the game itself. Some worried that if humans got too used to consulting machines, they wouldn’t be able to play without them. Indeed, in June 2011, chess master Christoph Natsidis was caught illicitly using a mobile phone during a regular human-to-human match. During tense moments, he kept vanishing for long bathroom visits; the referee, suspicious, discovered Natsidis entering moves into a piece of chess software on his smartphone. Chess had entered a phase similar to the doping scandals that have plagued baseball and cycling, except in this case the drug was software and its effect cognitive.

This is a nice metaphor for a fear that can nag at us in our everyday lives, too, as we use machines for thinking more and more. Are we losing some of our humanity? What happens if the Internet goes down: Do our brains collapse, too? Or is the question naive and irrelevant–as quaint as worrying about whether we’re “dumb” because we can’t compute long division without a piece of paper and a pencil?

Certainly, if we’re intellectually lazy or prone to cheating and shortcuts, or if we simply don’t pay much attention to how our tools affect the way we work, then yes–we can become, like Natsidis, overreliant. But the story of computers and chess offers a much more optimistic ending, too. Because it turns out that when chess players were genuinely passionate about learning and being creative in their game, computers didn’t degrade their human abilities. Quite the opposite: the machines helped them internalize the game much more profoundly and advance to new levels of human excellence.

Before computers came along, back when Kasparov was a young boy in the 1970s in the Soviet Union, learning grand-master-level chess was a slow, arduous affair. If you showed promise and you were very lucky, you could find a local grand master to teach you. If you were one of the tiny handful who showed world-class promise, Soviet leaders would fly you to Moscow and give you access to their elite chess library, which contained laboriously transcribed paper records of the world’s top games. Retrieving records was a painstaking affair; you’d contemplate a possible opening, use the catalog to locate games that began with that move, and then the librarians would retrieve records from thin files, pulling them out using long sticks resembling knitting needles. Books of chess games were rare and incomplete. By gaining access to the Soviet elite library, Kasparov and his peers developed an enormous advantage over their global rivals. That library was their cognitive augmentation.

But beginning in the 1980s, computers took over the library’s role and bested it. Young chess enthusiasts could buy CD-ROMs filled with hundreds of thousands of chess games. Chess-playing software could show you how an artificial opponent would respond to any move. This dramatically increased the pace at which young chess players built up intuition. If you were sitting at lunch and had an idea for a bold new opening move, you could instantly find out which historic players had tried it, then war-game it yourself by playing against software. The iterative process of thought experiments–“If I did this, then what would happen?”–sped up exponentially.

Chess itself began to evolve. “Players became more creative and daring,” as Frederic Friedel, the publisher of the first popular chess databases and software, tells me. Before computers, grand masters would stick to lines of attack they’d long studied and honed. Since it took weeks or months for them to research and mentally explore the ramifications of a new move, they stuck with what they knew. But as the next generation of players emerged, Friedel was astonished by their unusual gambits, particularly in their opening moves. Chess players today, Kasparov has written, “are almost as free of dogma as the machines with which they train. Increasingly, a move isn’t good or bad because it looks that way or because it hasn’t been done that way before. It’s simply good if it works and bad if it doesn’t.”

Most remarkably, it is producing players who reach grand master status younger. Before computers, it was extremely rare for teenagers to become grand masters. In 1958, Bobby Fischer stunned the world by achieving that status at fifteen. The feat was so unusual it was over three decades before the record was broken, in 1991. But by then computers had emerged, and in the years since, the record has been broken twenty times, as more and more young players became grand masters. In 2002, the Ukrainian Sergey Karjakin became one at the tender age of twelve.



So yes, when we’re augmenting ourselves, we can be smarter. We’re becoming centaurs. But our digital tools can also leave us smarter even when we’re not actively using them.

Joining the Conversation

1. Clive Thompson lists three shifts–infinite memory, dot connecting, and explosive publishing–that he believes have strongly affected our cognition. What exactly does he mean by these three shifts, and in what ways does he think they have changed our thinking?

2. Thompson starts paragraph 20 by saying, “Our tools are everywhere, linked with our minds, working in tandem.” What do you think? Does his statement reflect your own experience with technology?

3. In paragraphs 33–35, Thompson cites Nicholas Carr, whose views about technology differ from his. How does he respond to Carr–and how does acknowledging views he disagrees with help support his own position?

4. So what? Has Thompson convinced you that his topic matters? If so, how and where does he do so?

5. Write an essay reflecting on the ways digital technologies have influenced your own intellectual development, drawing from Thompson’s text and other readings in this chapter–and on your own experience–as support for your argument. Be sure to acknowledge views other than your own.
