The New Ludditism in Literature


In a recent essay in n+1, Benjamin Kunkel, in a wide-ranging consideration of technology's effects on contemporary culture and daily life, writes that the internet and its products feel forced upon us. For anyone who goes online daily—and increasingly that is most of us—there is a never-ending barrage of e-mail, articles of note (for their vulgarity or supposed profundity), amusing videos, invitations, profiles, photos, blog posts, news feeds, figurative "gifts," and the like—and most of it is free, available to be guzzled down with a click. It is nigh impossible to simply dip into the internet; the irony is that if you have any awareness of how to navigate it, this endless stream of content, digital companions, and e-communiqués grows more voluminous and oppressive, its depths cavernous and alluring, rather than simpler and more streamlined.

What does it take to separate us from these omnipresent digital phenomena, and will that separation one day be impossible, when gadgets, screens, and Wi-Fi are everywhere? Even now, the term "going off the grid" is often used as a jesting hypothetical, something done by eccentrics and believers in an impending apocalypse. In the etiquette of electronic social discourse, waiting a day or two to answer an e-mail now requires an explanation, if not an apology. "You don't have a 3G-enabled phone with e-mail?" my friend asked me a few months ago (an eternity, in technological terms). He was joking, of course, but there was also some truth there, a frustrating and niggling feeling that with my once-cutting-edge Motorola, I was somehow missing out. To my irritation, it took a moment to focus, pull back, and realize that no, I didn't need that.

Kunkel is correct that self-discipline is one of the great casualties of the internet age, but as thinking, independent beings, we have only ourselves to blame, and it is up to each individual to recover what might be lost. Not every technology is inherently neutral—consider Monsanto's "Terminator" seeds—but our laptops and e-mail clearly are. "No one is stopping you from stopping yourself," Kunkel writes. "It's just that many users of digital communications technology can't stop. An inability to log off is hardly the most destructive habit you could acquire, but it seems unlikely there is any more widespread compulsion among the professional middle-class and their children than lingering online."

The fear, as Kunkel attests, is that our willpower is inadequate, that, as in Infinite Jest or other visions of death-by-technology (it is no coincidence that many of these scenarios are found in books), we cannot resist our own creations. Technology is our reflecting pool, and each one of us is a potential Narcissus—isn't a social networking profile or a YouTube video gone viral proof of as much? How else to account for my Facebook friend, someone I barely know in fact, who has more than 2,200 pictures of herself online? The victims of this mania are—we are variously told—genuine emotional connection, privacy, attention spans, novel reading, and serious culture.

But not everyone is like this. Not everyone is as interconnected and digitally astute as those described in the previous paragraph, though a recent poll shows that only 14 percent of American adults use neither a cell phone nor the internet. Yet if we don't consider the demographic, financial, and even geographic elements of the technology gap, we risk succumbing to the solipsism commonly attributed to our culture. After all, many of the elderly or the poor aren't regular computer users. A kid living in poverty in East L.A. or East Timor may not have access to a cell phone or an internet-connected computer, though he might have a library card or a few books (hence some of the recent arguments that widespread adoption of e-readers could impinge on book access for the poor). However, the hopes of the digital age—and of the techno-evangelists who abet it, from Steve Jobs with his iPhone to President Obama with his plan to bring broadband to rural communities—are tied up, at least in part, in leveling this technological gap. Perhaps it's not spoken of much because the proliferation of high technology seems assured; or, on the other hand, we may simply neglect those who don't have the means to connect. Because if you're not online today, can you be part of the conversation?

One benefit of fiction reading is supposed to be its remove from the day-to-day. Reading allows us to explore leisurely that which bends toward the eternal—the ideas and truths plumbed and desperately sought by generation after generation of writer. If we don’t have the patience and discipline to read and consider novels, stories, poetry, it’s we who suffer, because the writers are still out there, doing their work, the younger ones trying to ignore their vibrating phones, e-mail notifications, and g-chats.

Something must endure, and the history books—or whatever we create to contain and record our history—will not, I hope, be concerned with David After Dentist or the latest celebrity nipple slip. That's not to say that they don't have their brief moments of (severely) relative importance, but these ephemera are clearly part of pop culture, not culture—full stop. But it also takes time, decades maybe, before we find what truly persists, despite our eagerness to anoint something as great or canonical. The acclaim heaped upon Irène Némirovsky's Suite Française was in part derived from the fact that she created acutely observed fictions about events nearly simultaneous with their occurrence; but importantly, it took more than half a century for her work to be discovered (as her daughter refused for so long to read her mother's notebooks), and her psychological acuity and powers of observation are better appreciated with the benefit of those decades of hindsight. They are made timely precisely because a gap of time exists between their creation and our reading of them. Decades of World War II scholarship and soul-searching provide the connective tissue that heightens our understanding.

Somehow "postmodern" remains the de facto descriptor for progressive, innovative work, even though the term itself was first used more than 130 years ago. We return to it because we have little else that can adequately supplant it, and its vague generality remains useful. Perhaps, also, it is in the very nature of the word to be continually relevant, or at least never outmoded, for it literally means "after modern," and modernity is a constantly changing edifice, if it even exists in the physical world. This may reflect a zeitgeist now so quicksilver and protean as to be unable to fairly diagnose itself. We don't have the time between the writing and the discovery, and so postmodernism endures, stretches out into the future, into an unstable and meaningless infinity.

The artistic movements of the past, whether Surrealism or Romanticism, existed in slower periods, in eras of lesser connectivity. They had time to incubate, in communities of artists living, working, loving, drinking, and dying in close proximity to one another. Letters were thought out and slow in the making. Manifestos were written, and they were often read and taken seriously. Another byproduct of our current condition—and this, too, may change at any moment, with any subsequent innovation or fad—is that our ease of communication opens wonderful opportunities for collaboration, but it doesn't breed a tendency toward artistic movements or greater ambitions. We create mash-ups, not epics; parodies, not song cycles; memes, not disciplines.

In the last 30-odd years of literature, we have seen something more akin to trends than broad-based movements. Whether it's maximalism, hysterical realism (originally coined as a term of derision), dirty realism, or absurdism, no single aesthetic view has managed to dominate the literary culture. This fragmentation is above all a blessing, connoting a tolerance for diversity in American literature. In the last decade, the rise of novelists Michael Chabon and Jonathan Lethem, who vehemently oppose and successfully transcend the artificial ghettos of genre, adds to this great leveling. And so fiction writers as distinct as Don DeLillo, Lorrie Moore, Marilynne Robinson, Cormac McCarthy, Toni Morrison, Aleksandar Hemon, George Saunders, Edward P. Jones, and numerous others like and wholly unlike them have stridden forward to assume prominent places in American letters.

But just when any mode of writing seems viable, when the marketplace, readers, and critics are capable of embracing these exceedingly diverse voices, our technology may threaten our ability to adequately capture the current state of our society. To be sure, our technological progress and the internet’s manifold offspring are awe-inducing, capable of allowing us to communicate in a hundred different ways, many of them useful. But as this tremendous atomization takes place, as blogs and text-messaging and social networks and web video and the synergistic interactions between these media proliferate, the problem arises of how to authentically integrate these modes of communication into fiction without awkwardly drawing attention to the writer’s use of them; how to determine what is essential to the times and what is a troubling distraction.

With these challenges—surely to multiply in the coming years—in mind, will the term "historical fiction" soon encompass anything set before 2009? Or 2015? Or maybe that threshold has passed, and a novel written about the (post)modern world can only seem contemporary if it takes place after 2006, replete with text messages, social networking, and the rest. Is it then possible for one of us to summon the powers of Némirovsky to create an image of the current age that persists beyond a day, a week, or the lifespan of distracting viral internet media? Will more than a handful be able to concentrate long enough to read and discuss it? When Joseph O'Neill, in his novel Netherland, used Google Earth to express how deeply his protagonist misses his family, Dwight Garner described the scene as "closely observed, emotionally racking, un-self-consciously in touch with how we live now." Will others like him be able to do the same, or will writers of literary fiction ignore the role of technology, relegate it to the sci-fi crowd, and thereby eventually make themselves sadly anachronistic?

A full-throated embrace of our light-speed communication is not all gain. There remains a risk that a society will be created around us that allows no choice but to participate, to be fully connected. Already, there is the creeping pressure to be always plugged in, as Kunkel—who doesn't own a cell phone—articulates, as do some of the authors whose work he surveys. I felt an uncomfortable sense of release recently when, during two separate weekends, I didn't use my computer or the internet (and hardly touched my cell phone). Release because I could immerse myself in books and magazines at my own pace, without any sense of having missed something, but uncomfortable because that release was needed at all and was made possible only because I was traveling.

Will there be a sufficient backlash among some populations—here I propose the unoriginal and slightly pejorative New Ludditism—that we demand something slower, more ordered, and more sustainable? If we can make similar demands of our food production, as the Slow Food ethos advocates, then why not with our communication and culture, or at least certain aspects of it? The choice, once again, may lie with the individual, since this is a time in which even the most fervent grassroots movements quickly lose their way, and techno-skepticism is unlikely to fare any better.

There must be a terminal velocity to the concomitant diminishment of our attention spans and the progression of our technology, be it the end of Moore's Law or the end of our productivity. I do wonder, though: will we still like ourselves when we get there, and will a healthy literary ecosystem be one of the losses along the way?

Published: June 26, 2009