Sunday, March 21, 2010

Vocabulary: bricolage

bri·co·lage (brē'kō-läzh', brĭk'ō-)
n.
Something made or put together using whatever materials happen to be available: "Even the decor is a bricolage, a mix of this and that" (Los Angeles Times).

[French, from bricole, trifle, from Old French, catapult, from Old Italian briccola, of Germanic origin.]

Texts Without Context By MICHIKO KAKUTANI

March 21, 2010
In his deliberately provocative — and deeply nihilistic — new book, “Reality Hunger,” the onetime novelist David Shields asserts that fiction “has never seemed less central to the culture’s sense of itself.” He says he’s “bored by out-and-out fabrication, by myself and others; bored by invented plots and invented characters” and much more interested in confession and “reality-based art.” His own book can be taken as Exhibit A in what he calls “recombinant” or appropriation art.

Mr. Shields’s book consists of 618 fragments, including hundreds of quotations taken from other writers like Philip Roth, Joan Didion and Saul Bellow — quotations that Mr. Shields, 53, has taken out of context and in some cases, he says, “also revised, at least a little — for the sake of compression, consistency or whim.” He acknowledges the sources of these quotations only in an appendix, which he says his publisher’s lawyers insisted he add.

“Who owns the words?” Mr. Shields asks in a passage that is itself an unacknowledged reworking of remarks by the cyberpunk author William Gibson. “Who owns the music and the rest of our culture? We do — all of us — though not all of us know it yet. Reality cannot be copyrighted.”

Mr. Shields’s pasted-together book and defense of appropriation underscore the contentious issues of copyright, intellectual property and plagiarism that have become prominent in a world in which the Internet makes copying and recycling as simple as pressing a couple of buttons. In fact, the dynamics of the Web, as the artist and computer scientist Jaron Lanier observes in another new book, are encouraging “authors, journalists, musicians and artists” to “treat the fruits of their intellects and imaginations as fragments to be given without pay to the hive mind.”

It’s not just a question of how these “content producers” are supposed to make a living or finance their endeavors, however, or why they ought to allow other people to pick apart their work and filch choice excerpts. Nor is it simply a question of experts and professionals being challenged by an increasingly democratized marketplace. It’s also a question, as Mr. Lanier, 49, astutely points out in his new book, “You Are Not a Gadget,” of how online collectivism, social networking and popular software designs are changing the way people think and process information, a question of what becomes of originality and imagination in a world that prizes “metaness” and regards the mash-up as “more important than the sources who were mashed.”

Mr. Lanier’s book, which makes an impassioned case for “a digital humanism,” is only one of many recent volumes to take a hard but judicious look at some of the consequences of new technology and Web 2.0. Among them are several prescient books by Cass Sunstein, 55, which explore the effects of the Internet on public discourse; Farhad Manjoo’s “True Enough,” which examines how new technologies are promoting the cultural ascendancy of belief over fact; “The Cult of the Amateur,” by Andrew Keen, which argues that Web 2.0 is creating a “digital forest of mediocrity” and substituting ill-informed speculation for genuine expertise; and Nicholas Carr’s book “The Shallows” (coming in June), which suggests that increased Internet use is rewiring our brains, impairing our ability to think deeply and creatively even as it improves our ability to multitask.

Unlike “Digital Barbarism,” Mark Helprin’s shrill 2009 attack on copyright abolitionists, these books are not the work of Luddites or technophobes. Mr. Lanier is a Silicon Valley veteran and a pioneer in the development of virtual reality; Mr. Manjoo, 31, is Slate’s technology columnist; Mr. Keen is a technology entrepreneur; and Mr. Sunstein is a Harvard Law School professor who now heads the White House Office of Information and Regulatory Affairs. Rather, these authors’ books are nuanced ruminations on some of the unreckoned consequences of technological change — books that stand as insightful counterweights to early techno-utopian works like Esther Dyson’s “Release 2.0” and Nicholas Negroponte’s “Being Digital,” which took an almost Pollyannaish view of the Web and its capacity to empower users.

THESE NEW BOOKS share a concern with how digital media are reshaping our political and social landscape, molding art and entertainment, even affecting the methodology of scholarship and research. They examine the consequences of the fragmentation of data that the Web produces, as news articles, novels and record albums are broken down into bits and bytes; the growing emphasis on immediacy and real-time responses; the rising tide of data and information that permeates our lives; and the emphasis that blogging and partisan political Web sites place on subjectivity.

At the same time it’s clear that technology and the mechanisms of the Web have been accelerating certain trends already percolating through our culture — including the blurring of news and entertainment, a growing polarization in national politics, a deconstructionist view of literature (which emphasizes a critic’s or reader’s interpretation of a text, rather than the text’s actual content), the prominence of postmodernism in the form of mash-ups and bricolage, and a growing cultural relativism that has been advanced on the left by multiculturalists and radical feminists, who argue that history is an adjunct of identity politics, and on the right by creationists and climate-change denialists, who suggest that science is an instrument of leftist ideologues.

Even some outspoken cheerleaders of Internet technology have begun to grapple with some of its more vexing side effects. Steven Johnson, a founder of the online magazine Feed, for instance, wrote in an article in The Wall Street Journal last year that with the development of software for Amazon.com’s Kindle and other e-book readers that enables users to jump back and forth between the book and other applications, he fears “one of the great joys of book reading — the total immersion in another world, or in the world of the author’s ideas — will be compromised.” He continued, “We all may read books the way we increasingly read magazines and newspapers: a little bit here, a little bit there.”

Mr. Johnson added that the book’s migration to the digital realm will turn the solitary act of reading — “a direct exchange between author and reader” — into something far more social and suggested that as online chatter about books grows, “the unity of the book will disperse into a multitude of pages and paragraphs vying for Google’s attention.”

WORRYING ABOUT the public’s growing attention deficit disorder and susceptibility to information overload, of course, is hardly new. It’s been 25 years since Neil Postman warned in “Amusing Ourselves to Death” that trivia and the entertainment values promoted by television were creating distractions that threatened to subvert public discourse, and more than a decade since writers like James Gleick (“Faster”) and David Shenk (“Data Smog”) described a culture addicted to speed, drowning in data and overstimulated to the point where only sensationalism and willful hyperbole grab people’s attention.

Now, with the ubiquity of instant messaging and e-mail, the growing popularity of Twitter and YouTube, and even newer services like Google Wave, velocity and efficiency have become even more important. Although new media can help build big TV audiences for events like the Super Bowl, they also tend to make people treat those events as fodder for digital chatter. More people are impatient to cut to the chase, and they’re increasingly willing to take the imperfect but immediately available product over a more thoughtfully analyzed, carefully created one. Instead of reading an entire news article, watching an entire television show or listening to an entire speech, growing numbers of people are happy to jump to the summary, the video clip, the sound bite — never mind if context and nuance are lost in the process; never mind if it’s our emotions, more than our sense of reason, that are engaged; never mind if statements haven’t been properly vetted and sourced.

People tweet and text one another during plays and movies, forming judgments before seeing the arc of the entire work. Recent books by respected authors like Malcolm Gladwell (“Outliers”), Susan Faludi (“The Terror Dream”) and Jane Jacobs (“Dark Age Ahead”) rely far more heavily on cherry-picked anecdotes — instead of broader-based evidence and assiduous analysis — than the books that first established their reputations. And online research enables scholars to power-search for nuggets of information that might support their theses, saving them the time of wading through stacks of material that might prove marginal but that might have also prompted them to reconsider or refine their original thinking.

“Reading in the traditional open-ended sense is not what most of us, whatever our age and level of computer literacy, do on the Internet,” the scholar Susan Jacoby writes in “The Age of American Unreason.” “What we are engaged in — like birds of prey looking for their next meal — is a process of swooping around with an eye out for certain kinds of information.”

TODAY’S TECHNOLOGY has bestowed miracles of access and convenience upon millions of people, and it’s also proven to be a vital new means of communication. Twitter has been used by Iranian dissidents; text messaging and social networking Web sites have been used to help coordinate humanitarian aid in Haiti; YouTube has been used by professors to teach math and chemistry. But technology is also turning us into a global water-cooler culture, with millions of people sending each other (via e-mail, text messages, tweets, YouTube links) gossip, rumors and the sort of amusing-entertaining-weird anecdotes and photographs they might once have shared with pals over a coffee break. And in an effort to collect valuable eyeballs and clicks, media outlets are increasingly pandering to that impulse — often at the expense of hard news. “I have the theory that news is now driven not by editors who know anything,” the comedian and commentator Bill Maher recently observed. “I think it’s driven by people who are” slacking off at work and “surfing the Internet.” He added, “It’s like a country run by ‘America’s Funniest Home Videos.’ ”

MSNBC’s new program “The Dylan Ratigan Show,” which usually focuses on business and politics, has a “While you were working ...” segment in which viewers are asked to send in “some of the strangest and outrageous stories you’ve found on the Internet,” and the most e-mailed lists on popular news sites tend to feature articles about pets, food, celebrities and self-improvement. For instance, at one point on March 11, the top story on The Washington Post’s Web site was “Maintaining a Sex Life,” while the top story on Reddit.com, a user-generated news link site, was “(Funny) Sexy Girl? Do Not Trust Profile Pictures!”

Given the constant bombardment of trivia and data that we’re subjected to in today’s mediascape, it’s little wonder that noisy, Manichean arguments tend to get more attention than subtle, policy-heavy ones; that funny, snarky or willfully provocative assertions often gain more traction than earnest, measured ones; and that loud, entertaining or controversial personalities tend to get the most ink and airtime. This is why Sarah Palin’s every move and pronouncement is followed by television news, talk-show hosts and pundits of every political persuasion. This is why Glenn Beck and Rush Limbaugh on the right and Michael Moore on the left are repeatedly quoted by followers and opponents. This is why a gathering of 600 people for last month’s national Tea Party convention in Nashville received a disproportionate amount of coverage from both the mainstream news media and the blogosphere.

Digital insiders like Mr. Lanier and Paulina Borsook, the author of the book “Cyberselfish,” have noted the easily distracted, adolescent quality of much of cyberculture. Ms. Borsook describes tech-heads as having “an angry adolescent view of all authority as the Pig Parent,” writing that even older digerati want to think of themselves as “having an Inner Bike Messenger.”

For his part Mr. Lanier says that because the Internet is a kind of “pseudoworld” without the qualities of a physical world, it encourages the Peter Pan fantasy of being an entitled child forever, without the responsibilities of adulthood. While this has the virtues of playfulness and optimism, he argues, it can also devolve into a “Lord of the Flies”-like nastiness, with lots of “bullying, voracious irritability and selfishness” — qualities enhanced, he says, by the anonymity, peer pressure and mob rule that thrive online.

Digital culture, he writes in “You Are Not a Gadget,” “is comprised of wave after wave of juvenilia,” with rooms of “M.I.T. Ph.D. engineers not seeking cancer cures or sources of safe drinking water for the underdeveloped world but schemes to send little digital pictures of teddy bears and dragons between adult members of social networks.”

AT THE SAME time the Internet’s nurturing of niche cultures is contributing to what Cass Sunstein calls “cyberbalkanization.” Individuals can design feeds and alerts from their favorite Web sites so that they get only the news they want, and with more and more opinion sites and specialized sites, it becomes easier and easier, as Mr. Sunstein observes in his 2009 book “Going to Extremes,” for people “to avoid general-interest newspapers and magazines and to make choices that reflect their own predispositions.”

“Serendipitous encounters” with persons and ideas different from one’s own, he writes, tend to grow less frequent, while “views that would ordinarily dissolve, simply because of an absence of social support, can be found in large numbers on the Internet, even if they are understood to be exotic, indefensible or bizarre in most communities.” He adds that studies of group polarization show that when like-minded people deliberate, they tend to reinforce one another and become more extreme in their views.

One result of this nicheification of the world is that consensus and common ground grow ever smaller, civic discourse gets a lot less civil, and pluralism — what Isaiah Berlin called the idea that “there are many different ends that men may seek and still be fully rational, fully men, capable of understanding each other and sympathizing and deriving light” from “worlds, outlooks, very remote from our own” — comes to feel increasingly elusive.

As Mr. Manjoo observes in “True Enough: Learning to Live in a Post-Fact Society” (2008), the way in which “information now moves through society — on currents of loosely linked online groups and niche media outlets, pushed along by experts and journalists of dubious character and bolstered by documents that are no longer considered proof of reality” — has fostered deception and propaganda and also created what he calls a “Rashomon world” where “the very idea of objective reality is under attack.” Politicians and voters on the right and left not only hold different opinions from one another, but often can’t even agree over a shared set of facts, as clashes over climate change, health care and the Iraq war attest.

THE WEB’S amplification of subjectivity applies to culture as well as politics, fueling a phenomenon that has been gaining hold over America for several decades, with pundits squeezing out reporters on cable news, with authors writing biographies animated by personal and ideological agendas, with tell-all memoirs, talk-show confessionals, self-dramatizing blogs and carefully tended Facebook and MySpace pages becoming almost de rigueur.

As for the textual analysis known as deconstruction, which became fashionable in American academia in the 1980s, it enshrined individual readers’ subjective responses to a text over the text itself, thereby suggesting that the very idea of the author (and any sense of original intent) was dead. In doing so, deconstruction uncannily presaged arguments advanced by digerati like Kevin Kelly, who in a 2006 article for The New York Times Magazine looked forward to the day when books would cease to be individual works but would be scanned and digitized into one great, big continuous text that could be “unraveled into single pages” or “reduced further, into snippets of a page,” which readers — like David Shields, presumably — could then appropriate and remix, like bits of music, into new works of their own.

As John Updike pointed out, Mr. Kelly’s vision would in effect mean “the end of authorship” — hobbling writers’ ability to earn a living from their published works, while at the same time removing a sense of both recognition and accountability from their creations. In a Web world where copies of books (and articles and music and other content) are cheap or free, Mr. Kelly has suggested, authors and artists could make money by selling “performances, access to the creator, personalization, add-on information” and other aspects of their work that cannot be copied. But while such schemes may work for artists who happen to be entrepreneurial, self-promoting and charismatic, Mr. Lanier says he fears that for “the vast majority of journalists, musicians, artists and filmmakers” it simply means “career oblivion.”

Other challenges to the autonomy of the artist come from new interactive media and from constant polls on television and the Web, which ask audience members for feedback on television shows, movies and music; and from fan bulletin boards, which often function like giant focus groups. Should the writers of television shows listen to fan feedback or a network’s audience testing? Does the desire to get an article on a “most e-mailed” list consciously or unconsciously influence how reporters and editors go about their assignments and approaches to stories? Are literary-minded novelists increasingly taking into account what their readers want or expect?

As reading shifts “from the private page to the communal screen,” Mr. Carr writes in “The Shallows,” authors “will increasingly tailor their work to a milieu that the writer Caleb Crain describes as ‘groupiness,’ where people read mainly ‘for the sake of a feeling of belonging’ rather than for personal enlightenment or amusement. As social concerns override literary ones, writers seem fated to eschew virtuosity and experimentation in favor of a bland but immediately accessible style.”

For that matter, the very value of artistic imagination and originality, along with the primacy of the individual, is increasingly being questioned in our copy-mad, postmodern digital world. In a recent Newsweek cover story pegged to the Tiger Woods scandal, Neal Gabler, the author of “Life: The Movie: How Entertainment Conquered Reality,” absurdly asserts that celebrity is “the great new art form of the 21st century.”

Celebrity, Mr. Gabler argues, “competes with — and often supersedes — more traditional entertainments like movies, books, plays and TV shows,” and it performs, he says, “in its own roundabout way, many of the functions those old media performed in their heyday: among them, distracting us, sensitizing us to the human condition, and creating a fund of common experience around which we can form a national community.”

However impossible it is to think of “Jon & Kate Plus Eight” or “Jersey Shore” as art, reality shows have taken over wide swaths of television, and memoir writing has become a rite of passage for actors, politicians and celebrities of every ilk. At the same time our cultural landscape is brimming over with parodies, homages, variations, pastiches, collages and other forms of “appropriation art” — much of it facilitated by new technology that makes remixing and cutting-and-pasting easy enough for a child.

It’s no longer just hip-hop sampling that rules in youth culture, but also jukebox musicals like “Jersey Boys” and “Rock of Ages,” and works like “The League of Extraordinary Gentlemen,” which features characters drawn from a host of classic adventures. Fan fiction and fan edits are thriving, as are karaoke contests, video games like Guitar Hero, and YouTube mash-ups of music and movie, television and visual images. These recyclings and post-modern experiments run the gamut in quality. Some, like Zachary Mason’s “Lost Books of the Odyssey,” are beautifully rendered works of art in their own right. Some, like J. J. Abrams’s 2009 “Star Trek” film and Amy Heckerling’s 1995 “Clueless” (based on Jane Austen’s “Emma”), are inspired reinventions of classics. Some fan-made videos are extremely clever and inventive, and some, like a 3-D video version of Picasso’s “Guernica” posted on YouTube, are intriguing works that raise important and unsettling questions about art and appropriation.

All too often, however, the recycling and cut-and-paste esthetic has resulted in tired imitations; cheap, lazy re-dos; or works of “appropriation” designed to generate controversy like Mr. Shields’s “Reality Hunger.” Lady Gaga is third-generation Madonna; many jukebox or tribute musicals like “Good Vibrations” and “The Times They Are A-Changin’ ” do an embarrassing disservice to the artists who inspired them; and the rote remaking of old television shows into films (from “The Brady Bunch” to “Charlie’s Angels” to “Get Smart”), not to mention the recycling of video games into movies (like “Tomb Raider” and “Resident Evil”) often seem as pointless as they are now predictable.

Writing in a 2005 Wired article that “new technologies redefine us,” William Gibson hailed audience participation and argued that “an endless, recombinant, and fundamentally social process generates countless hours of creative product.” Indeed, he said, “audience is as antique a term as record, the one archaically passive, the other archaically physical. The record, not the remix, is the anomaly today. The remix is the very nature of the digital.”

To Mr. Lanier, however, the prevalence of mash-ups in today’s culture is a sign of “nostalgic malaise.” “Online culture,” he writes, “is dominated by trivial mash-ups of the culture that existed before the onset of mash-ups, and by fandom responding to the dwindling outposts of centralized mass media. It is a culture of reaction without action.”

He points out that much of the chatter online today is actually “driven by fan responses to expression that was originally created within the sphere of old media,” which many digerati mock as old-fashioned and passé, and which is now being destroyed by the Internet. “Comments about TV shows, major movies, commercial music releases and video games must be responsible for almost as much bit traffic as porn,” Mr. Lanier writes. “There is certainly nothing wrong with that, but since the Web is killing the old media, we face a situation in which culture is effectively eating its own seed stock.”

FURTHER READING
Words, Version 2.0

A bookshelf for further reading:

REALITY HUNGER: A MANIFESTO
By David Shields
219 pages. Alfred A. Knopf. $24.95.

YOU ARE NOT A GADGET: A MANIFESTO
By Jaron Lanier
209 pages. Alfred A. Knopf. $24.95.

THE SHALLOWS: WHAT THE INTERNET IS DOING TO OUR BRAINS
By Nicholas Carr
288 pages. W. W. Norton & Company. $26.95. (Scheduled for release in June.)

TRUE ENOUGH: LEARNING TO LIVE IN A POST-FACT SOCIETY
By Farhad Manjoo
250 pages. John Wiley & Sons. $25.95.

THE AGE OF AMERICAN UNREASON
By Susan Jacoby
357 pages. Vintage Books. $15.95.

INFOTOPIA: HOW MANY MINDS PRODUCE KNOWLEDGE
By Cass R. Sunstein
273 pages. Oxford University Press. $15.95.

GOING TO EXTREMES: HOW LIKE MINDS UNITE AND DIVIDE
By Cass R. Sunstein
199 pages. Oxford University Press. $21.95.

THE CULT OF THE AMATEUR
By Andrew Keen
256 pages. Doubleday. $14.00.

Wednesday, March 10, 2010

What we can learn from Singapore's health-care model By Matt Miller

Wednesday, March 3, 2010; 10:45 AM

We interrupt Washington's feud over the president's "way forward" for a brief word on a path not taken, courtesy of the only rich nation that boasts universal coverage with health outcomes better than ours while spending one-fifth as much per person on health care. Introducing (drum roll please): Singapore.

Yes, it's an island city-state of just 5 million people. Yes, it's more or less a benevolent dictatorship. And, yes, until recently, bringing chewing gum into Singapore could land you in jail. But Singapore, a poor country a few decades ago, now boasts a higher per capita income (when adjusted for local purchasing power) than the United States. And here's the astonishing fact: Singapore spends less than 4 percent of its GDP on health care. We spend 17 percent (and Singapore's somewhat younger population doesn't begin to explain the difference). Matching Singapore's performance in our $15 trillion economy would free up $2 trillion a year for other public and private purposes.
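
As a rough check on that arithmetic, here is a sketch using only the approximate figures the column itself cites (health spending of about 17 percent of GDP here versus under 4 percent in Singapore, applied to a roughly $15 trillion economy):

```python
# Back-of-the-envelope check of the column's "$2 trillion a year" figure.
# All inputs are the column's approximate numbers, not precise data.
us_gdp = 15e12            # U.S. GDP, roughly $15 trillion
us_health_share = 0.17    # U.S. health spending, about 17% of GDP
sg_health_share = 0.04    # Singapore's health spending, under 4% of GDP

annual_savings = us_gdp * (us_health_share - sg_health_share)
print(f"Potential annual savings: ${annual_savings / 1e12:.2f} trillion")
# Prints about $1.95 trillion, i.e. the column's "roughly $2 trillion a year."
```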

Do I have your attention?

Today we can't find cash to recruit a new generation of great teachers, rebuild our roads and bridges, pay down the national debt, or invest in better airports, high-speed rail, a clean energy revolution or any of a hundred other things sensible patriots know we should do to renew the country. We can't do these things in large part because the Medical Industrial Complex vacuums up every spare dollar in sight. It's only slightly melodramatic to assert that if we could run our health-care system as efficiently as Singapore's, we could solve most of our other problems.

So how does Singapore do it?

In health circles it's always conservatives who bring up Singapore, because of the primacy it places on personal responsibility. According to Phua Kai Hong of the National University of Singapore, roughly one-third of health spending in Singapore is paid directly by individuals (who typically buy catastrophic coverage as well); in the United States, by contrast, nearly 90 percent is picked up by third-party insurers, employers and governments. Singaporeans make these payments out of earnings as well as from health savings accounts. The system is chock-full of incentives for thrift. If you want a private hospital room, for example, you pay through the nose; most people choose less expensive wards.

Conservatives are right: Singaporeans have the kind of "skin in the game" that promotes prudence.

But that's only half the story. There's also a massive public role. For starters, adequate savings for retirement and health expenses are mandated by government (employees must sock away 20 percent of earnings each year, to which employers add 13 percent). Public hospitals provide 80 percent of the acute care, setting affordable pricing benchmarks with which private providers compete. Supply-side rules that favor training new family doctors over pricey specialists are more extensive than similar notions Hillary Clinton pushed in the '90s. And in Singapore, if a child is obese, they don't get Rose Garden exhortations from the first lady. They get no lunch and mandatory exercise periods during school.

There's more (including an ample safety net for the poor), but you get the gist: Singapore achieves world-class results thanks to a bold, unconventional synthesis of liberal and conservative approaches. It's further to the left and further to the right than what President Obama or his foes now seek. The island's real ideology is pragmatic problem-solving. It works thanks to cultural traditions that let this eclectic blend flourish. The system is nurtured by talented, highly paid officials who have the luxury of governing for the long term without being buffeted much by politics.

We obviously can't transplant Singapore's approach wholesale to the United States. But the reason we can't emulate even some of Singapore's success has to do with that iron law of health-care politics: Every dollar of health-care "waste" is somebody's dollar of income. As a stable advanced democracy, we're so overrun by groups with stakes in today's waste that real efficiency gains are perennially blocked.

Any hope for something better starts with tallying the price of today's paralysis. Think about that $2 trillion the next time you see states, citing budget woes, shut the door to college on tens of thousands of poor American students. Or when the next firm moves jobs overseas because health costs here are soaring. Or when the next bridge collapses. Thanks, Medical Industrial Complex!

We return now to our regularly scheduled political battle, which (no matter the outcome, according to some projections) will leave health costs headed to more than 20 percent of GDP by 2019.

Matt Miller, a senior fellow at the Center for American Progress and co-host of public radio's "Left, Right & Center," writes a weekly column for The Post. He can be reached at mattino2@gmail.com.