Introduction
“In an age of advanced technology, inefficiency is the sin against the Holy Ghost.” —ALDOUS HUXLEY
“Complexity is a solvable problem in the right hands.” —JEFF JARVIS
Silicon Valley is guilty of many sins, but lack of ambition is not one of them. If you listen to its loudest apostles, Silicon Valley is all about solving problems that someone else—perhaps the greedy bankers on Wall Street or the lazy know-nothings in Washington—have created. “Technology is not really about hardware and software any more. It’s really about the mining and use of this enormous data to make the world a better place,” Eric Schmidt, Google’s executive chairman, told an audience of MIT students in 2011. Facebook’s Mark Zuckerberg, who argues that his company’s mission is to “make the world more open and connected,” concurs. “We don’t wake up in the morning with the primary goal of making money,” he proclaimed just a few months before his company’s rapidly plummeting stock convinced all but its most die-hard fans that Facebook and making money had parted ways long ago. What, then, gets Mr. Zuckerberg out of bed? As he told the audience of the South by Southwest festival in 2008, it’s the desire to solve global problems. “There are a lot of really big issues for the world to get solved and, as a company, what we are trying to do is to build an infrastructure on top of which to solve some of these problems,” announced Zuckerberg. In the last few years, Silicon Valley’s favorite slogan has quietly changed from “Innovate or Die!” to “Ameliorate or Die!” In the grand scheme of things, what exactly is being improved is not very important; being able to change things, to get humans to behave in more responsible and sustainable ways, to maximize efficiency, is all that matters. Half-baked ideas that might seem too big even for the naïfs at TED Conferences—that Woodstock of the intellectual effete—sit rather comfortably on Silicon Valley’s business plans. “Fitter, happier, more productive”—the refreshingly depressive motto of the popular Radiohead song from the mid-1990s—would make for an apt welcome sign in the corporate headquarters of its many digital mavens. 
Technology can make us better—and technology will make us better. Or, as the geeks would say, given enough apps, all of humanity’s bugs are shallow. California, of course, has never suffered from a deficit of optimism or bluster. And yet, the possibilities opened up by the latest innovations make even the most pragmatic and down-to-earth venture capitalists reach for their wallets. After all, when else will they get a chance to get rich by saving the world? What else would give them the thrill of working in a humanitarian agency (minus all the bureaucracy and hectic travel, plus a much better compensation package)? How will this amelioration orgy end? Will it actually accomplish anything? One way to find out is to push some of these nascent improvement efforts to their ultimate conclusions. If Silicon Valley had a designated futurist, her bright vision of the near future—say, around 2020 or so—would itself be easy to predict. It would go something like this: Humanity, equipped with powerful self-tracking devices, finally conquers obesity, insomnia, and global warming as everyone eats less, sleeps better, and emits more appropriately. The fallibility of human memory is conquered too, as the very same tracking devices record and store everything we do. Car keys, faces, factoids: we will never forget them again. No need to feel nostalgic, Proust-style, about the petite madeleines you devoured as a child; since that moment is surely stored somewhere in your smartphone—or, more likely, your smart, all-recording glasses—you can stop fantasizing and simply rewind to it directly. In any event, you can count on Siri, Apple’s trusted voice assistant, to tell you the truth you never wanted to face back then: all those madeleines dramatically raise your blood glucose levels and ought to be avoided. Sorry, Marcel! Politics, finally under the constant and far-reaching gaze of the electorate, is freed from all the sleazy corruption, backroom deals, and inefficient horse trading. 
Parties are disaggregated and replaced by Groupon-like political campaigns, where users come together—once—to weigh in on issues of direct and immediate relevance to their lives, only to disband shortly afterward. Now that every word—nay, sound—ever uttered by politicians is recorded and stored for posterity, hypocrisy has become obsolete as well. Lobbyists of all stripes have gone extinct as the wealth of data about politicians—their schedules, lunch menus, travel expenses—is posted online for everyone to review. As digital media make participation easier, more and more citizens ditch bowling alone—only to take up blogging together. Even those who’ve never bothered to vote in the past are finally provided with the right incentives—naturally, as a part of an online game where they collect points for saving humanity—and so they rush to use their smartphones to “check in” at the voting booth. Thankfully, getting there is no longer a chore; self-driving cars have been invented for the purpose of getting people from place to place. Streets are clean and shiny; keeping them that way is also part of an elaborate online game. Appeals to civic duty and responsibility to fellow citizens have all but disappeared—and why wouldn’t they, when getting people to do things by leveraging their eagerness to earn points, badges, and virtual currencies is so much more effective? Crime is a distant memory, while courts are overstaffed and underworked. Both physical and virtual environments—walls, pavements, doors, log-in screens—have become “smart.” That is, they have integrated the plethora of data generated by the self-tracking devices and social-networking services so that now they can predict and prevent criminal behavior simply by analyzing their users. And as users don’t even have the chance to commit crimes, prisons are no longer needed either. A triumph of humanism, courtesy of Silicon Valley. 
And then, there’s the flourishing new “marketplace” of “ideas.” Finally, the term “marketplace” no longer feels like a misnomer; cultural institutions have never been more efficient or responsive to the laws of supply and demand. Newspapers no longer publish articles that their readers are not interested in; the proliferation of self-tracking combined with social-networking data guarantees that everyone gets to read a highly customized newspaper (down to the word level!) that yields the highest possible click rate. No story goes unclicked, no headline untweeted; customized, individual articles are generated in the few seconds that pass between the click of a link and the loading of the page in one’s browser. The number of published books has skyrocketed—most of them are self-published—and they are perfectly efficient as well. Many even guarantee alternative endings—and in real time!—based on what the eye-tracking activity of readers suggests about their mood. Hollywood is alive and kicking; now that everyone wears smart glasses, a movie can have an infinite number of alternative endings, depending on viewers’ mood at a given moment as they watch. Professional critics are gone, having been replaced first by “crowds,” then by algorithms, and finally by customized algorithmic reviews—the only way to match films with customized alternative endings. The edgiest cultural publications even employ algorithms to write criticism of songs composed by other algorithms. But not all has changed: just like today, the system still needs imperfect humans to generate the clicks to suck the cash from advertisers. This brief sketch is not an excerpt from the latest Gary Shteyngart novel. Nor is it dystopian science fiction. In fact, there is a good chance that at this very moment, someone in Silicon Valley is making a pitch to investors about one of the technologies described above. Some may already have been built. 
A dystopia it isn’t; many extremely bright people—in Silicon Valley and beyond—find this frictionless future enticing and inevitable, as their memos and business plans would attest. I, for one, find much of this future terrifying, but probably not for the reasons you would expect. All too often, digital heretics like me get bogged down in finding faults with the feasibility of the original utopian schemes. Is perfect efficiency in publishing actually attainable? Can all environments be smart? Will people show up to vote just because they are playing a game? Such skeptical questions over the efficacy of said schemes are important, and I do entertain many of them in this book. But I think that we, the heretics, also need to take Silicon Valley innovators at their word and have just a bit more faith in their ingenuity and inventiveness. These, after all, are the same people who are planning to scan all the world’s books and mine asteroids. Ten years ago, both ideas would have seemed completely crazy; today, only one of them does. So perhaps we should seriously entertain the possibility that Silicon Valley will have the means to accomplish some of its craziest plans. Perhaps it won’t overthrow the North Korean regime with tweets, but it could still accomplish a lot. This is where the debate ought to shift to a different register: instead of ridiculing the efficacy of their means, we also need to question the adequacy of the innovators’ ends. My previous book, The Net Delusion, shows the surprising resilience of authoritarian regimes, which have discovered their own ways to profit from digital technologies. While I was—and remain—critical of many Western efforts to promote “Internet freedom” in those regimes, most of my criticisms have to do with the means, not the ends, of the “Internet freedom agenda,” presuming that the ends entail a better climate for freedom of expression and more respect for human rights. 
In this book, I have no such luxury, and I question both the means and the ends of Silicon Valley’s latest quest to “solve problems.” I contend here that Silicon Valley’s promise of eternal amelioration has blunted our ability to do this questioning. Who today is mad enough to challenge the virtues of eliminating hypocrisy from politics? Or of providing more information—the direct result of self-tracking—to facilitate decision making? Or of finding new incentives to get people interested in saving humanity, fighting climate change, or participating in politics? Or of decreasing crime? To question the appropriateness of such interventions, it seems, is to question the Enlightenment itself. And yet I feel that such questioning is necessary. Hence the premise of this book: Silicon Valley’s quest to fit us all into a digital straitjacket by promoting efficiency, transparency, certitude, and perfection—and, by extension, eliminating their evil twins of friction, opacity, ambiguity, and imperfection—will prove to be prohibitively expensive in the long run. For various ideological reasons to be explained later in these pages, this high cost remains hidden from public view and will remain so as long as we, in our mindless pursuit of this silicon Eden, fail to radically question our infatuation with a set of technologies that are often lumped together under the deceptive label of “the Internet.” This book, then, attempts to factor in the true costs of this highly awaited paradise and to explain why they have been so hard to account for. Imperfection, ambiguity, opacity, disorder, and the opportunity to err, to sin, to do the wrong thing: all of these are constitutive of human freedom, and any concentrated attempt to root them out will root out that freedom as well. 
If we don’t find the strength and the courage to escape the silicon mentality that fuels much of the current quest for technological perfection, we risk finding ourselves with a politics devoid of everything that makes politics desirable, with humans who have lost their basic capacity for moral reasoning, with lackluster (if not moribund) cultural institutions that don’t take risks and only care about their financial bottom lines, and, most terrifyingly, with a perfectly controlled social environment that would make dissent not just impossible but possibly even unthinkable. The structure of this book is as follows. The next two chapters provide an outline and a critique of two dominant ideologies—what I call “solutionism” and “Internet-centrism”—that have sanctioned Silicon Valley’s great ameliorative experiment. In the seven ensuing chapters, I trace how both ideologies interact in the context of a particular practice or reform effort: promoting transparency, reforming the political system, improving efficiency in the cultural sector, reducing crime through smart environments and data, quantifying the world around us with the help of self-tracking and lifelogging, and, finally, introducing game incentives—what’s known as gamification—into the civic realm. The last chapter offers a more forward-looking perspective on how we can transcend the limitations of both solutionism and Internet-centrism and design and employ technology to satisfy human and civic needs. Now, why oppose such striving for perfection? Well, I believe that not everything that could be fixed should be fixed—even if the latest technologies make the fixes easier, cheaper, and harder to resist. Sometimes, imperfect is good enough; sometimes, it’s much better than perfect. What worries me most is that, nowadays, the very availability of cheap and diverse digital fixes tells us what needs fixing. It’s quite simple: the more fixes we have, the more problems we see. 
And yet, in our political, personal, and public lives—much like in our computer systems—not all bugs are bugs; some bugs are features. Ignorance can be dangerous, but so can omniscience: there is a reason why some colleges stick to need-blind admissions processes. Ambivalence can be counterproductive, but so can certitude: if all your friends really told you what they thought, you might never talk to them again. Efficiency can be useful, but so can inefficiency: if everything were efficient, why would anyone bother to innovate? The ultimate goal of this book, then, is to uncover the attitudes, dispositions, and urges that comprise the solutionist mind-set, to show how they manifest themselves in specific projects to ameliorate the human condition, and to hint at how and why some of these attitudes, dispositions, and urges can and should be resisted, circumvented, and unlearned. For only by unlearning solutionism—that is, by transcending the limits it imposes on our imaginations and by rebelling against its value system—will we understand why attaining technological perfection, without attending to the intricacies of the human condition and accounting for the complex world of practices and traditions, might not be worth the price. 

CHAPTER ONE 

Solutionism and Its Discontents 

“In the future, people will spend less time trying to get technology to work … because it will just be seamless. It will just be there. The Web will be everything, and it will also be nothing. It will be like electricity. … If we get this right, I believe we can fix all the world’s problems.” —ERIC SCHMIDT 

“‘Solutionism’ [interprets] issues as puzzles to which there is a solution, rather than problems to which there may be a response.” —GILLES PAQUET 

“The overriding question, ‘What might we build tomorrow?’ blinds us to questions of our ongoing responsibilities for what we built yesterday.” —PAUL DOURISH AND SCOTT D. MAINWARING 

Have you ever peeked inside a friend’s trash can? I have. 
And even though I’ve never found anything worth reporting—not to the KGB anyway—I’ve always felt guilty about my insatiable curiosity. Trash, like one’s sex life or temporary eating disorder, is a private affair par excellence; the less said about it, the better. While Mark Zuckerberg insists that all activities get better when performed socially, it seems that throwing away the garbage would forever remain an exception—one unassailable bastion of individuality to resist Zuckerberg’s tyranny of the social. Well, this exception is no more: BinCam, a new project from researchers in Britain and Germany, seeks to modernize how we deal with trash by making our bins smarter and—you guessed it—more social. Here is how it works: The bin’s inside lid is equipped with a tiny smartphone that snaps a photo every time someone closes it—all of this, of course, in order to document what exactly you have just thrown away. A team of badly paid humans, recruited through Amazon’s Mechanical Turk system, then evaluates each photo. What is the total number of items in the picture? How many of them are recyclable? How many are food items? After this data is attached to the photo, it’s uploaded to the bin owner’s Facebook account, where it can also be shared with other users. Once such smart bins are installed in multiple households, BinCam creators hope, Facebook can be used to turn recycling into an exciting, game-like competition. A weekly score is calculated for each bin, and as the amounts of food waste and recyclable materials in the bins decrease, households earn gold bars and leaves. Whoever wins the most bars and tree leaves, wins. Mission accomplished; planet saved! Nowhere in the academic paper that accompanies the BinCam presentation do the researchers raise any doubts about the ethics of their undoubtedly well-meaning project. Should we get one set of citizens to do the right thing by getting another set of citizens to spy on them? 
Should we introduce game incentives into a process that has previously worked through appeals to one’s duties and obligations? Could the “goodness” of one’s environmental behavior be accurately quantified with tree leaves and gold bars? Should it be quantified in isolation from other everyday activities? Is it okay not to recycle if one doesn’t drive? Will greater public surveillance of one’s trash bins lead to an increase in eco-vigilantism? Will participants stop doing the right thing if their Facebook friends are no longer watching? Questions, questions. The trash bin might seem like the most mundane of artifacts, and yet it’s infused with philosophical puzzles and dilemmas. It’s embedded in a world of complex human practices, where even tiny adjustments to seemingly inconsequential acts might lead to profound changes in our behavior. It very well may be that, by optimizing our behavior locally (i.e., getting people to recycle with the help of games and increased peer surveillance), we’ll end up with suboptimal behavior globally, that is, once the right incentives are missing in one simple environment, we might no longer want to perform our civic duties elsewhere. One local problem might be solved—but only by triggering several global problems that we can’t recognize at the moment. A project like BinCam would have been all but impossible fifteen years ago. First, trash bins had no sensors that could take photos and upload them to sites like Facebook; now, tiny smartphones can do all of this on the cheap. Amazon didn’t have an army of bored freelancers who could do virtually any job as long as they received their few pennies per hour. (And even those human freelancers might become unnecessary once automated image-recognition software gets better.) 
Most importantly, there was no way for all our friends to see the contents of our trash bins; fifteen years ago, even our personal websites wouldn’t get the same level of attention from our acquaintances—our entire “social graph,” as the geeks would put it—that our trash bins might receive from our Facebook friends today. Now that we are all using the same platform—Facebook—it becomes possible to steer our behavior with the help of social games and competitions; we no longer have to save the environment at our own pace using our own unique tools. There is power in standardization! These two innovations—that more and more of our life is now mediated through smart sensor-powered technologies and that our friends and acquaintances can now follow us anywhere, making it possible to create new types of incentives—will profoundly change the work of social engineers, policymakers, and many other do-gooders. All will be tempted to exploit the power of these new techniques, either individually or in combination, to solve a particular problem, be it obesity, climate change, or congestion. Today we already have smart mirrors that, thanks to complex sensors, can track and display our pulse rates based on slight variations in the brightness of our faces; soon, we’ll have mirrors that, thanks to their ability to tap into our “social graph,” will nudge us to lose weight because we look pudgier than most of our Facebook friends. Or consider a prototype teapot built by British designer-cum-activist Chris Adams. The teapot comes with a small orb that can either glow green (making tea is okay) or red (perhaps you should wait). What determines the coloring? Well, the orb, with the help of some easily available open-source hardware and software, is connected to a site called Can I Turn It On? (http://www.caniturniton.com), which, every minute or so, queries Britain’s national grid for aggregate power-usage statistics. 
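The orb’s behavior amounts to a few lines of logic: poll the grid’s aggregate frequency every minute or so and switch the orb’s color against a 50 hertz baseline. The sketch below is a hypothetical reconstruction, not Adams’s actual code; the fetch function stands in for querying the grid site, and its wiring is invented.

```python
# Hypothetical sketch of the orb's polling logic. The fetch function is a
# stand-in for querying the grid site; only the decision rule comes from
# the project description: frequency above the 50 Hz baseline means spare
# capacity (glow green, tea is fine), below means strain (glow red, wait).

BASELINE_HZ = 50.0

def orb_color(frequency_hz, baseline=BASELINE_HZ):
    """Map one grid-frequency reading to an orb color."""
    return "green" if frequency_hz > baseline else "red"

def poll_once(fetch_frequency, set_color):
    """One step of the once-a-minute polling loop."""
    set_color(orb_color(fetch_frequency()))

# Usage with stubbed readings (a real orb would fetch from the grid site):
shown = []
poll_once(lambda: 50.04, shown.append)  # generation running ahead of demand
poll_once(lambda: 49.92, shown.append)  # demand running ahead of generation
```

Frequency works as a proxy because grids hold it near the nominal value: when demand outruns generation, the frequency sags below 50 hertz, so a single number can serve as a crude “is now a good time?” signal.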
If the frequency figure returned by the site is higher than the baseline of 50 hertz, the orb glows green; if lower, red. The goal here is to provide additional information for responsible teapot use. But it’s easy to imagine how such logic can be extended much, much further, BinCam style. Why, for example, not reward people with virtual, Facebook-compatible points for not using the teapot at times of high electricity usage? Or why not punish those who disregard the teapot’s warnings about high usage by publicizing their irresponsibility among their Facebook friends? Social engineers have never had so many options at their disposal. Sensors alone, without any connection to social networks or data repositories, can do quite a lot these days. The elderly, for example, might appreciate smart carpets and smart bells that can detect when someone has fallen over and inform others. Even trash bins can be smart in a very different way. Thus, a start-up with the charming name of BigBelly Solar hopes to revolutionize trash collecting by making solar-powered bins that, thanks to built-in sensors, can inform waste managers of their current capacity and predict when they would need to be emptied. This, in turn, can optimize trash-collection routes and save fuel. The city of Philadelphia has been experimenting with such bins since 2009; as a result, it cut its city-center garbage-collecting sorties from 17 to 2.5 times a week and reduced the number of staff from thirty-three to just seventeen, bringing in $900,000 in savings in just one year. Likewise, city officials in Boston have been testing Street Bump, an elaborate app that relies on accelerometers, the now ubiquitous motion detectors found in many smartphones, to map out potholes on Boston’s roads. The driver only has to turn the app on and start driving; the smartphone will do the rest and communicate with the central server as necessary. 
Thanks to a series of algorithms, the app knows how to recognize and disregard manhole covers and speed bumps, while diligently recording the potholes. Once at least three drivers have reported bumps in the same spot, the bump is recognized as a pothole. Likewise, Google relies on GPS-enabled Android phones to generate live information about traffic conditions: once you start using its map and disclose your location, Google knows where you are and how fast you are moving. Thus, it can make a good guess as to how bad the road situation is, feeding this information back into Google Maps for everyone to see. These days, it seems, just carrying your phone around might be an act of good citizenship. 

THE WILL TO IMPROVE (JUST ABOUT EVERYTHING!) 

That smart technology and all of our social connections (not to mention useful statistics like the real-time aggregate consumption of electricity) can now be “inserted” into our every mundane act, from throwing away our trash to making tea, might seem worth celebrating, not scrutinizing. Likewise, that smartphones and social-networking sites allow us to experiment with interventions impossible just a decade ago seems like a genuinely positive development. Not surprisingly, Silicon Valley is already awash with plans for improving just about everything under the sun: politics, citizens, publishing, cooking. Alas, all too often, this never-ending quest to ameliorate—or what the Canadian anthropologist Tania Murray Li, writing in a very different context, has called “the will to improve”—is shortsighted and only perfunctorily interested in the activity for which improvement is sought. 
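Street Bump’s three-driver rule, described above, can be sketched in a few lines. Everything here is a hypothetical reconstruction, not the app’s actual code: reports are snapped to a coarse grid cell so that nearby GPS fixes count as the same spot, and a cell is flagged once three distinct drivers report a bump there (the filtering of manhole covers and speed bumps is left out).

```python
from collections import defaultdict

# Hypothetical reconstruction of Street Bump's reporting threshold.
# Accelerometer "bump" reports are snapped to a coarse lat/lon grid cell
# so nearby GPS fixes count as the same spot; a cell is flagged as a
# pothole once three distinct drivers have reported it.

CELL_DEG = 0.0001   # grid-cell size in degrees, roughly 10 m (an assumption)
REPORTS_NEEDED = 3  # the three-driver threshold described in the text

def cell_for(lat, lon):
    """Snap a GPS fix to a coarse grid cell."""
    return (round(lat / CELL_DEG), round(lon / CELL_DEG))

class PotholeMap:
    def __init__(self):
        self.reports = defaultdict(set)  # cell -> ids of distinct drivers

    def report_bump(self, driver_id, lat, lon):
        """Record a driver's bump; return True once the cell counts as a pothole."""
        cell = cell_for(lat, lon)
        # A set, so repeated reports from the same driver don't pile up.
        self.reports[cell].add(driver_id)
        return len(self.reports[cell]) >= REPORTS_NEEDED

m = PotholeMap()
m.report_bump("car-1", 42.36010, -71.05890)
m.report_bump("car-2", 42.36008, -71.05893)
confirmed = m.report_bump("car-3", 42.36012, -71.05887)  # third distinct driver
```

Requiring several independent reports is what turns noisy per-phone sensor data into a usable signal: one jolt may be a pothole, a dropped phone, or a curb, but three drivers jolting in the same spot is probably the road.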
Recasting all complex social situations either as neatly defined problems with definite, computable solutions or as transparent and self-evident processes that can be easily optimized—if only the right algorithms are in place!—this quest is likely to have unexpected consequences that could eventually cause more damage than the problems they seek to address. I call the ideology that legitimizes and sanctions such aspirations “solutionism.” I borrow this unabashedly pejorative term from the world of architecture and urban planning, where it has come to refer to an unhealthy preoccupation with sexy, monumental, and narrow-minded solutions—the kind of stuff that wows audiences at TED Conferences—to problems that are extremely complex, fluid, and contentious. These are the kinds of problems that, on careful examination, do not have to be defined in the singular and all-encompassing ways that “solutionists” have defined them; what’s contentious, then, is not their proposed solution but their very definition of the problem itself. Design theorist Michael Dobbins has it right: solutionism presumes rather than investigates the problems that it is trying to solve, reaching “for the answer before the questions have been fully asked.” How problems are composed matters every bit as much as how problems are resolved. Solutionism, thus, is not just a fancy way of saying that for someone with a hammer, everything looks like a nail; it’s not just another riff on the inapplicability of “technological fixes” to “wicked problems” (a subject I address at length in The Net Delusion). It’s not only that many problems are not suited to the quick-and-easy solutionist tool kit. 
It’s also that what many solutionists presume to be “problems” in need of solving are not problems at all; a deeper investigation into the very nature of these “problems” would reveal that the inefficiency, ambiguity, and opacity—whether in politics or everyday life—that the newly empowered geeks and solutionists are rallying against are not in any sense problematic. Quite the opposite: these vices are often virtues in disguise. That, thanks to innovative technologies, the modern-day solutionist has an easy way to eliminate them does not make them any less virtuous. It may seem that a critique of solutionism would, by its very antireformist bias, be the prerogative of the conservative. In fact, many of the antisolutionist jibes throughout this book fit into the tripartite taxonomy of reactionary responses to social change so skillfully outlined by the social theorist Albert Hirschman. In his influential book The Rhetoric of Reaction, Hirschman argued that all progressive reforms usually attract conservative criticisms that build on one of the following three themes: perversity (whereby the proposed intervention only worsens the problem at hand), futility (whereby the intervention yields no results whatsoever), and jeopardy (where the intervention threatens to undermine some previous, hard-earned accomplishment). Although I resort to all three of these critiques in the pages that follow, my overall project does differ from the conservative resistance studied by Hirschman. I do not advocate inaction or deny that many (though not all) of the problems tackled by solutionists—from climate change to obesity to declining levels of trust in the political system—are important and demand immediate action (how exactly those problems are composed is, of course, a different matter; there is more than one way to describe each). 
But the urgency of the problems in question does not automatically confer legitimacy upon a panoply of new, clean, and efficient technological solutions so in vogue these days. My preferred solutions—or, rather, responses—are of a very different kind. It’s also not a coincidence that my critique of solutionism bears some resemblance to several critiques of the numerous earlier efforts to put humanity into too tight a straitjacket. Today’s straitjacket might be of the digital variety, but it’s hardly the first or the tightest. While the word “solutionism” may not have been used, many important thinkers have addressed its shortcomings, even if using different terms and contexts. I’m thinking, in particular, of Ivan Illich’s protestations against the highly efficient but dehumanizing systems of professional schooling and medicine, Jane Jacobs’s attacks on the arrogance of urban planners, Michael Oakeshott’s rebellion against rationalists in all walks of human existence, Hans Jonas’s impatience with the cold comfort of cybernetics, and, more recently, James Scott’s concern with how states have forced what he calls “legibility” on their subjects. Some might add Friedrich Hayek’s opposition to central planners, with their inherent knowledge deficiency, to this list. These thinkers have been anything but homogeneous in their political beliefs; Ivan Illich, Friedrich Hayek, Jane Jacobs, and Michael Oakeshott would make a rather rowdy dinner party. But these highly original thinkers, regardless of political persuasion, have shown that their own least favorite brand of solutionist—be it Jacobs’s urban planners or Illich’s professional educators—has a very poor grasp not just of human nature but also of the complex practices that this nature begets and thrives on. It’s as if the solutionists have never lived a life of their own but learned everything they know from books—and those books weren’t novels but manuals for refrigerators, vacuum cleaners, and washing machines. 
Thomas Molnar, a conservative philosopher who, for his smart and vehement critique of technological utopianism written in the early 1960s, also deserves a place in the antisolutionist pantheon, put it really well when he complained that “when the utopian writers deal with work, health, leisure, life expectancy, war, crimes, culture, administration, finance, judges and so on, it is as if their words were uttered by an automaton with no conception of real life. The reader has the uncomfortable feeling of walking in a dreamland of abstractions, surrounded by lifeless objects; he manages to identify them in a vague way, but, on closer inspection, he sees that they do not really conform to anything familiar in shape, color, volume, or sound.” Dreamlands of abstractions are a dime a dozen these days; what works in Palo Alto is assumed to work in Penang. It’s not that the solutions proposed are unlikely to work but that, in solving the “problem,” solutionists twist it in such an ugly and unfamiliar way that, by the time it is “solved,” the problem becomes something else entirely. Everyone is quick to celebrate victory, only no one remembers what the original solution sought to achieve. The ballyhoo over the potential of new technologies to disrupt education—especially now that several start-ups offer online courses to hundreds of thousands of students, who grade each other’s work and get no face time with instructors—is a case in point. Digital technologies might be a perfect solution to some problems, but those problems don’t include education—not if by education we mean the development of the skills to think critically about any given issue. Online resources might help students learn plenty of new facts (or “facts,” in case they don’t cross-check what they learn on Wikipedia), but such fact cramming is a far cry from what universities aspire to teach their students. 
As Pamela Hieronymi, a professor of philosophy at the University of California, Los Angeles (UCLA), points out in an important essay on the myths of online learning, “Education is not the transmission of information or ideas. Education is the training needed to make use of information and ideas. As information breaks loose from bookstores and libraries and floods onto computers and mobile devices, that training becomes more important, not less.” Of course, there are plenty of tools for increasing one’s digital literacy, but those tools go only so far; they might help you to detect erroneous information, but they won’t organize your thoughts into a coherent argument. Adam Falk, president of Williams College, delivers an even more powerful blow against solutionism in higher education when he argues that it would be erroneous to pretend that the solutions it peddles are somehow compatible with the spirit and goals of the university. Falk notes that, based on the research done at Williams, the best predictor of students’ intellectual success in college is not their major or GPA but the amount of personal, face-to-face contact they have with professors. According to Falk, averaging letter grades assigned by five random peers—as at least one much-lauded start-up in this space, Coursera, does—is not the “educational equivalent of a highly trained professor providing thoughtful evaluation and detailed response.” To pretend that this is the case, insists Falk, “is to deny the most significant purposes of education, and to forfeit its true value.” Here we have a rather explicit mismatch between the idea of education embedded in the proposed set of technological solutions and the time-honored idea of education still cherished at least by some colleges. In an ideal world, of course, both visions can coexist and prosper simultaneously. 
However, in the world we inhabit, where the administrators are as cost-conscious as ever, the approach that produces the most graduates per dollar spent is far more likely to prevail, the poverty of its intellectual vision notwithstanding. Herein lies one hidden danger of solutionism: the quick fixes it peddles do not exist in a political vacuum. In promising almost immediate and much cheaper results, they can easily undermine support for more ambitious, more intellectually stimulating, but also more demanding reform projects.
KOOKS AND COOKS
Once we leave the classroom and enter the kitchen, the limitations of solutionism are delineated in even sharper colors. Political philosopher Michael Oakeshott, conservative that he was, particularly liked emphasizing that cooking, like science or politics, is a very complex set of (mostly invisible) practices and traditions that guide us in preparing our meals. “It might be supposed that an ignorant man, some edible materials, and a cookery book compose together the necessities of a self-moved (or concrete) activity called cooking. But nothing is further from the truth,” he wrote in his 1951 essay “Political Education.” Rather, for Oakeshott the cookery book is “nothing more than an abstract of somebody’s knowledge of how to cook; it is the stepchild, not the parent of the activity.” “A cook,” he wrote in another essay, “is not a man who first has a vision of a pie and then tries to make it; he is a man skilled in cookery, and both his projects and his achievements spring from that skill.” Oakeshott didn’t much fear that our cooking habits would be destroyed by the proliferation of culinary literature; interpreting that literature was only possible within a rich tradition of cooking, so perusing such books might even strengthen one’s appreciation of the culinary culture.
Or, as he himself put it, “the book speaks only to those who know already the kind of thing to expect from it and consequently how to interpret it.” He was not against using the book; rather, he took issue with people who thought that the book—rather than the tradition that produced it—was the main actor here. Whatever rules, recipes, and algorithms the book contained, all of them made sense only when interpreted and applied within the cooking tradition. For Oakeshott, the cookbook was the end (or an output), not the start (or an input), of that tradition. An argument against rationalists who refused to acknowledge the importance of practices and traditions, rather than a celebration of cookery books, it’s a surprisingly upbeat moment in Oakeshott’s thought. However, one can only wonder if Oakeshott would need to revise his judgment today, now that cookery books have been replaced with the kinds of sophisticated gadgetry that would have Buckminster Fuller, the archsolutionist who never stopped fantasizing about the perfect kitchen, brimming with envy. Paradoxically, as technologies get smarter, the maneuvering space for interpretation—what Oakeshott thought would bring cooks in touch with the world of practices and traditions—begins to shrink and potentially disappear entirely. New, smarter technologies make it possible to finally position, as it were, the cookery book’s instructions outside the tradition; almost no knowledge is required to cook with their help. Today’s technologies are no longer dumb, passive appliances. Some of them feature tiny, sophisticated sensors that “understand”—if that’s the right word—what’s going on in our kitchens and attempt to steer us, their masters, in the right direction. Here is modernity in a nutshell: We are left with possibly better food but without the joy of cooking. British magazine New Scientist recently covered a few such solutionist projects.
Meet Jinna Lei, a computer scientist at the University of Washington who has built a system in which a cook is monitored by several video cameras installed in the kitchen. These cameras are clever: they can recognize the depth and shape of objects in their view and distinguish between, say, apples and bowls. Thanks to this benign surveillance, chefs can be informed whenever they have deviated from their chosen recipe. Each object has a number of activities associated with it—you don’t normally boil spoons or fry arugula—and the system tracks how well the current activity matches the object in use. “For example, if the system detects sugar pouring into a bowl containing eggs, and the recipe does not call for sugar, it could log the aberration,” Lei told New Scientist. To improve the accuracy of tracking, Lei is also considering adding a special thermal camera that would identify the user’s hands by body heat. The quest here is to turn the modern kitchen into a temple of modern-day Taylorism, with every task tracked, analyzed, and optimized. Solutionists hate making errors and love sticking to algorithms. That cooking thrives on failure and experimentation, that deviating from recipes is what creates culinary innovations and pushes a cuisine forward, is discarded as whimsical and irrelevant. For many such well-meaning innovators, the context of the practice they seek to improve doesn’t matter—not as long as efficiency can be increased. As a result, chefs are imagined not as autonomous virtuosi or gifted craftsmen but as enslaved robots who should never defy the commands of their operating systems. Another project mentioned in New Scientist is even more degrading. A group of computer scientists at Kyoto Sangyo University in Japan is trying to marry the logic of the kitchen to the logic of “augmented reality”—the fancy term for infusing our everyday environment with smart technologies. 
(Think of Quick Response Codes that can be scanned with a smartphone to unlock additional information or of the upcoming goggles from Google’s Project Glass, which use data streams to enhance your visual field.) To this end, the Japanese researchers have mounted cameras and projectors on the kitchen’s ceiling so that they can project instructions—in the form of arrows, geometric shapes, and speech bubbles guiding the cook through each step—right onto the ingredients. Thus, if you are about to cut a fish, the system will project a virtual knife and mark exactly where you ought to cut into the fish’s body. And there’s also a tiny physical robot that sits on the countertop. Thanks to the cameras, it can sense that you’ve stopped touching the ingredients and inquire if you want to move on to the next step in the recipe. Now, what exactly is “augmented” about such a reality? It may be augmented technologically, but it also seems diminished intellectually. At best, we are left with “augmented diminished reality.” Some geeks stubbornly refuse to recognize that challenges and obstacles—which might include initial ignorance of the right way to cut the fish—enhance rather than undermine the human condition. To make cooking easier is not necessarily to augment it—quite the opposite. To subject it fully to the debilitating logic of efficiency is to deprive humans of the ability to achieve mastery in this activity, to make human flourishing impossible and to impoverish our lives. A more appropriate solution here would not make cooking less demanding but make its rituals less rigid and perhaps even more challenging. This is not a snobbish defense of the sanctified traditions of cooking. In a world where only a select few could master the tricks of the trade, such “augmented” kitchens would probably be welcome, if only for their promise to democratize access to this art.
But this is not the world we inhabit: detailed recipes and instructional videos on how to cook the most exquisite dish have never been easier to find on Google. Do we really need a robot—not to mention surveillance cameras above our heads—to cook that stuffed turkey or roast that lamb? Besides, it’s not so hard to predict where such progress would lead: once inside our kitchens, these data-gathering devices would never leave, developing new, supposedly unanticipated functions. First, we’d install cameras in our kitchens to receive better instructions, then food and consumer electronics companies would tell us that they’d like us to keep the cameras to improve their products, and, finally, we’d discover that all our cooking data now resides on a server in California, with insurance companies analyzing just how much saturated fat we consume and adjusting our insurance premiums accordingly. Cooking abetted by smart technology could be a Trojan horse opening the way for far more sinister projects. None of this is to say that technology cannot increase our pleasure from cooking—and not just in terms of making our food tastier and healthier. Technology, used with some imagination and without the traditional solutionist fetishism of efficiency and perfection, can actually make the cooking process more challenging, opening up new vistas for experimentation and giving us new ways to violate the rules. Compare the impoverished culinary vision on offer in New Scientist with some of the fancy gadgetry embraced by the molecular gastronomy movement. From thermal immersion circulators for cooking at low temperature to printers with edible paper, from syringes used to produce weird noodles and caviar to induction cookers that send magnetic waves through metal pans, all these gadgets make cooking more difficult, more challenging, and more exciting. 
They can infuse any aspiring chef with great passion for the culinary arts—much more so than surveillance cameras or instruction-spewing robots. Strict adherence to recipes can produce predictable, albeit tasty, dishes—and occasionally this is just what we want. But such standardization can also make our kitchens as exciting as McDonald’s franchises. Celebrating innovation for its own sake is in bad taste. For technology truly to augment reality, its designers and engineers should get a better idea of the complex practices that our reality is composed of. As the molecular gastronomy example illustrates, to reject solutionism is not to reject technology. Nor is it to abandon all hope that the world around us can be ameliorated; technology could and should be part of this project. To reject solutionism is to transcend the narrow-minded rationalistic mind-set that recasts every instance of an efficiency deficit—like the lack of perfect, comprehensive instructions in the kitchen—as an obstacle that needs to be overcome. There are other, more fruitful, more humanistic, and more responsible ways to think about technology’s role in enabling human flourishing, but solutionists are unlikely to grasp them unless they complicate their dangerously reductionist account of the human condition.
PASTEUR AND ZYNGA
I’ll be the first to acknowledge that the problems posed by solutionism are not in any sense new; as already noted, generations of earlier thinkers have already addressed many related pitfalls and pathologies. And yet I feel that we are living through a resurgence of a very particular modern kind of solutionism.
Today the most passionate solutionists are not to be found in city halls and government ministries; rather, they are to be found in Silicon Valley, trying to take the lessons they have learned from “the Internet”—and there’s never been a more deceptively didactic source of great lessons about “life, the universe and everything” (to use Douglas Adams’s memorable phrase)—and put them into practice in various civic initiatives and plans to fix the bugs of humanity. Why the scare quotes around “the Internet”? In the afterword to my first book, The Net Delusion, I made what I now believe to be one of its main, even if overlooked, points: the physical infrastructure we know as “the Internet” bears very little resemblance to the mythical “Internet”—the one that reportedly brought down the governments of Tunisia and Egypt and is supposedly destroying our brains—that lies at the center of our public debates. The infrastructure and design of this network of networks do play a certain role in sanctioning many of these myths—for example, the idea that “the Internet” is resistant to censorship comes from the unique qualities of its packet-switching communication mechanism—but “the Internet” that is the bane of public debates also contains many other stories and narratives—about innovation, surveillance, capitalism—that have little to do with the infrastructure per se. French philosopher Bruno Latour, writing of Louis Pasteur’s famed scientific accomplishments, distinguished between Pasteur, the actual historical figure, and “Pasteur,” the mythical almighty character who has come to represent the work of other scientists and entire social movements, like the hygienists, who, for their own pragmatic reasons, embraced Pasteur with open arms. But anyone interested in writing the history of that period cannot just deploy the name “Pasteur” as an unproblematic, objective term; it needs to be disassembled so that its various parts can be studied in their own right. 
The story of how these disparate parts—including the actual Louis Pasteur—have become “Pasteur,” the national hero of France whom we see in textbooks, is what the history of science, at least in its Latourian vision, should aspire to uncover. Now, I do not set out to write history in this book. If I did, I would indeed try to show the contingency and fluidity of the very idea of “the Internet” and attempt to trace how “the Internet” has come to mean what it means today. In this book, I’m interested in a much narrower slice of this story; namely, I want to explore how “the Internet” has become the impetus for many of the contemporary solutionist initiatives while also being the blinkers that prevent us from seeing their shortcomings. In other words, I’m interested in why and how “the Internet” excites—and why and how it confuses. I want to understand why and how iTunes or Wikipedia—some of the core mythical components of “the Internet”—have become models to think about the future of politics. How have Zynga and Facebook become models to think about civic engagement? How have Yelp’s and Amazon’s reviews become models to think about criticism? How has Google become a model for thinking about business and social innovation—as if it had a coherent philosophy—so that books with titles like What Would Google Do? can become best sellers? The arrival of “the Internet” both boosted and vindicated many of the solutionist attitudes that I describe in this book. “The Internet” has allowed solutionists to significantly expand the scope of their interventions, running experiments on a much grander scale. It has also given rise to a new set of beliefs—what I call “Internet-centrism”—the chief of which is the firm conviction that we are living through unique, revolutionary times, in which the previous truths no longer hold, everything is undergoing profound change, and the need to “fix things” runs as high as ever. 
“The Internet,” in short, has supplied solutionists with ample ammunition to ratchet up their war on inefficiency, ambiguity, and disorder, while also providing some new justification for doing so. But it has also supplied them with a set of assumptions about both how the world works and how it should work, about how it talks and how it should talk, recasting many issues and debates in a decidedly Internet-centric manner. Internet-centrism relates to “the Internet” very much like scientism relates to science: its epistemology tolerates no dissenting viewpoints, while all recent history is just about how the great spirit of “the Internet” presents itself to us. This book, then, is an effort to liberate our technology debates from the many unhealthy and erroneous assumptions about “the Internet.” In this, it’s much more normative than history aspires to be. Following the work of Latour and Thomas Kuhn, many historians of science have come to accept that the idea of “Science” with a capital S is even more chock-full of myths than the idea of “the Internet.” They have made peace with this discovery, reasoning that, as long as there are scientists who think there is this “Science” with a capital S out there, such scientists are still worth studying, regardless of whether historians of science themselves actually share this belief. It’s an elegant and reassuring approach, but I find it very hard to pursue when thinking about “the Internet” and the corrosive influence that this idea is beginning to have on public discourse and the kinds of reform projects that are getting priority. In this sense, to point out the many limitations of solutionism without also pointing out the limitations of what I call “Internet-centrism” would not be very productive; without the latter, the former wouldn’t be half as powerful.
So before we can embark on discussing the shortcomings of solutionism in areas like politics or crime prevention, it’s worth getting a better grasp of the pernicious intellectual influence of Internet-centrism—a task we turn to in the next chapter. Revealing Internet-centrism for what it is will make debunking solutionism much less difficult.
CHAPTER TWO
The Nonsense of “the Internet”—and How to Stop It
“The internet is not territory to be conquered, but life to be preserved and allowed to evolve freely.” —NICOLAS MENDOZA, ALJAZEERA.COM
“What made Blockbuster close? The Internet. What made At the Movies get canceled? The Internet. Who went tromping across my lawn and ruined my petunias? The Internet.” —ERIC SNIDER, CINEMATICAL BLOG
These days, “the Internet” can mean just about anything. “The Next Battle for Internet Freedom Could Be over 3D Printing,” proclaimed the headline on TechCrunch, a popular technology blog, in August 2012. Given how fuzzy the very idea of “the Internet” is, derivative concepts like “Internet freedom” have become so all-encompassing and devoid of any actual meaning that they can easily cover the regulation of 3D printers, the thorny issues of net neutrality, and the rights of dissident bloggers in Azerbaijan. Instead of debating the merits of individual technologies and crafting appropriate policies and regulations, we have all but surrendered to catchall terms like “the Internet,” which try to bypass any serious and empirical debate altogether. Today, “the Internet” is regularly invoked to thwart critical thinking and exclude nongeeks from the discussion. Here is how one prominent technology blogger argued that Congress should not regulate facial-recognition technology: “All too many U.S.
lawmakers are barely beyond the stage of thinking that the Internet is a collection of tubes; do we really want these guys to tell Facebook or any other social media company how to run its business?” You see, it’s all so complex—much more complex than health care or climate change—that only geeks should be allowed to tinker with the magic tubes. “The Internet” is holy—so holy that it lies beyond the means of democratic representation. That facial-recognition technology developed independently of “the Internet” and has its roots in the 1960s research funded by various defense agencies means little in this context. Once part of “the Internet,” any technology loses its history and intellectual autonomy. It simply becomes part of the grand narrative of “the Internet,” which, despite what postmodernists say about the death of metanarratives, is one metanarrative that is doing all right. Today, virtually every story is bound to have an “Internet” angle—and it’s the job of our Internet apostles to turn those little anecdotes into fairy tales about the march of Internet progress, just a tiny chapter in their cyber-Whig theory of history. “The Internet”: an idea that effortlessly fills minds, pockets, coffers, and even the most glaring narrative gaps. Whenever you hear someone tell you, “This is not how the Internet works”—as technology bloggers are wont to inform everyone who cares to read their scribblings—you should know that your interlocutor believes your views to be reactionary and antimodern. But where is the missing manual to “the Internet”—the one that explains how this giant series of tubes actually works—that the geeks claim to know by heart? Why are they so reluctant to acknowledge that perhaps there’s nothing inevitable about how various parts of this giant “Internet” work and fit together? Is it really true that Google can’t be made to work differently? 
Tacitly, of course, the geeks do acknowledge that there is nothing permanent about “the Internet”; that’s why they lined up to oppose the Stop Online Piracy Act (SOPA), which—oh, the irony—threatened to completely alter “how the Internet works.” So, no interventions will work “on the Internet”—except for those that will. SOPA was a bad piece of legislation, but there’s something odd about how the geeks can simultaneously claim that the Internet is fixed and permanent and work extremely hard in the background to keep it that way. Their theory stands in stark contrast to their practice—a common modern dissonance that they prefer not to dwell on. “The Internet” is also a way to shift the debate away from more concrete and specific issues, essentially burying it in obscure and unproductive McLuhanism that seeks to discover some nonexistent inner truths about each and every medium under the sun. Consider how Nicholas Carr, one of today’s most vocal Internet skeptics, frames the discussion about the impact that digital technologies have on our ability to think deep thoughts and concentrate. In his best-selling book The Shallows, Carr worries that “the Internet” is making his brain demand “to be fed the way the Net fed it—and the more it was fed, the hungrier it became.” He complains that “the Net … provides a high-speed system for delivering responses and rewards … which encourage the repetition of both physical and mental actions.” The book is full of similar complaints. For Carr, the brain is 100 percent plastic, but “the Internet” is 100 percent fixed. Does “the Net” that Carr writes about actually exist? Is there much point in lumping together sites like Instapaper—which lets users save Web pages in order to read them later, in an advertising-free and undisturbed environment—and, say, Twitter? Is it inevitable that Facebook should constantly prompt us to check new links? Should Twitter reward us for tweeting links that we never open? Or punish us? 
Or do nothing—as is the case now? Many of these are open questions—and the way in which technology companies resolve them depends, in part, on what we, their users, tell them (provided, of course, we can get our own act together). There may be some business hurdles to making the digital services we use less distracting, but this is where one has to explore the world of political economy, not that of neuroscience, even if the latter is the much more fashionable of the two. Carr, however, refuses to abandon the notion of “the Net,” with its predetermined goals and inherent features; instead of exploring the interplay between design, political economy, and information science, he keeps telling us that “the Net” is, well, shite. Alas, it won’t get any better until we stop thinking that there is a “Net” out there. How can we account for the diversity of logics and practices promoted by digital tools without having to resort to explanations that revolve around terms like “the Net”? “The Net” is a term that should appear on the last—not first!—page of our books about digital technologies; it cannot explain itself. Like Marshall McLuhan before him, Carr wants to score, rank, and compare different media and come up with some kind of quasi-scientific pecking order for them (McLuhan went as far as to calculate sense ratios for each medium that he “studied”). This very medium-centric approach overlooks the diversity of actual practices enabled by each medium. One may hate television for excessive advertising—but then, a publicly supported broadcasting system may have no need for advertising at all; TV programs don’t always have to be interrupted by ads. Video games might make us more violent—but, once again, they can do so many other things in so many different ways that it seems unfair to connect them only to one function. There’s very little that the New York Times has in common with the Sun or that NPR shares with Rush Limbaugh.