Chasing Innovation
One early September day I finally managed to have a long conversation with David, a business innovation consultant in his midthirties. In 2012, after pursuing an undergraduate degree in physics and a graduate degree in design, David had founded Newfound, a design and innovation consultancy firm, together with two business partners. During the three months before our conversation, I had participated in different innovation workshops organized by Newfound in New York. David was the most verbal and articulate among the innovation consultants I worked with, and I was eager to have a one-on-one conversation with him. However, it was extremely difficult to schedule a meeting because his consultancy practice at the time of my fieldwork required him to travel extensively inside and outside the United States. One afternoon, however, he texted me to say that if I still wanted to interview him, he could make himself available during a two-hour slot the following morning. I immediately responded with “Yes!”

The next morning I arrived at the same shared workspace on the twenty-first floor of a new office building in Manhattan’s midtown in which Newfound’s workshops took place. I found David sitting in a nook next to a floor-to-ceiling window, looking at the busy street below. Upon seeing me, he smiled and without further ado said “Go for it.” I was not disappointed by our conversation. David responded in detail to each of my questions, taking them in directions that I had not anticipated. As our conversation drew to a close, I asked him what the participants in the innovation workshops he facilitates find the hardest to learn. “The hardest thing of all is finding ways to do this in your job,” he immediately responded. “Some people come to us and say, ‘I want a job in innovation.’ And I’m like, ‘There are no jobs in innovation! Go be innovative in whatever you do!’” David’s tone became frustrated. “The world deserves people who know something about a thing and then choose to innovate that thing. Like, HR managers should be innovative HR managers. And product managers should innovate methods of product management. People should be innovating in place!” He went on to explain that “people look at what we do and think of innovation as something separate from the field of knowledge and experience that they have. But, in fact,” he argued, “you gotta have a minefield of knowledge and experience to innovate—people, teams, organizations, change management.” David paused for a second and added, “and innovation processes: so I get to innovate innovation, you know?” he laughed. “I’ve been doing this for seven years, and I’m an amateur. So you cannot just show up and do this.”

After the interview, as I walked along Broadway to the Times Square subway station, I kept thinking about David’s words. David argued that innovation has become such a popular buzzword that everyone—companies and people—wanted to become innovative. His words resonated with some of the flickering advertisements, billboards, and storefronts that surrounded me on the busy street and that announced new products, services, and technologies by using some form of the word innovation. Indeed, it would be difficult to find a more ubiquitous trope than innovation in today’s business world.
A 2012 article in the Wall Street Journal presented data that pointed to innovation’s exponential increase in visibility:

A search of annual and quarterly reports filed with the Securities and Exchange Commission shows companies mentioned some form of the word “innovation” 33,528 times [in 2011], which was a 64% increase from five years before that. More than 250 books with “innovation” in the title have been published in the last three months, most of them dealing with business, according to a search of Amazon.com. . . . Apple Inc. and Google Inc. mentioned innovation 22 times and 14 times, respectively, in their most recent annual reports. But they were matched by Procter & Gamble Co. (22 times), Scotts Miracle-Gro Co. (21 times) and Campbell Soup Co. (18 times). . . . Four in 10 executives say their company now has a chief innovation officer. (Kwoh 2012)

However, later, sitting in the subway train on my way to Brooklyn, I realized that David suggested that the immense popularity of innovation has become a double-edged sword. People often came to him thinking they could innovate independently of domains of professional practice and in-depth knowledge of such context-specific domains because innovation’s popularity turned it into a reified notion, a kind of catchall phrase that was fast becoming devoid of meaning. His assessment resonates with a widespread suspicion. In tandem with the data about the rising ubiquity of the notion of innovation, the same Wall Street Journal article argued that “like the once ubiquitous buzzwords ‘synergy’ and ‘optimization,’ innovation is in danger of becoming a cliché—if it isn’t one already” (Kwoh 2012). A 2013 article in the Atlantic went as far as suggesting that although “mentions of innovation are resurgent,” “actual innovation might be in decline” (Green 2013). Quoting a George Mason University economist, the article argued that since the 1970s, “the forward march of technological progress has hit something of a dry spell, regardless of what all the talk about innovation may indicate.” It concluded with the puzzling fact that “measurable innovation might be on the decline, but, for some reason, we just can’t stop talking about it” (Green 2013).

Finally, back home in Park Slope, as I was transcribing the interview with David I noticed a third dimension to his commentary. Although David first argued that people erroneously think that there is “a job in innovation,” he then acknowledged that his job is precisely such a job. David first claimed that people should be innovating in their own domains of professional practice and that to do so they must have in-depth knowledge of these context-specific domains: the “people, teams, organizations, [and] change management.” At this point, however, he suddenly paused and added “and innovation processes,” thus turning the spotlight to himself. Knowledge of “innovation processes” was something innovation consultants must be intimately familiar with if they wanted to “innovate innovation”—that is, to offer their clients better and more advanced strategies of innovation. David’s expert knowledge was thus a metalevel kind of knowledge of innovation, one that could be applied not only to consumer products and processes across different domains but also to itself.
David belongs to the steadily growing number of innovation consultants, a professional group of people who help companies innovate their products, services, and structures by means of general, rule-governed innovation strategies that transcend specific contexts. As the Wall Street Journal article noted, “the innovation trend has given birth to an attendant consulting industry, and Fortune 100 companies pay innovation consultants $300,000 to $1 million for work on a single project, which can amount to $1 million to $10 million a year,” according to estimates (Kwoh 2012).

Thus, when I read the full transcription of the interview with David later that night, it struck me as raising more questions than answers. How can we explain the fact that, according to David, the popularity of the idea of innovation has led people to want to master innovation as if it were a “thing” that could be abstracted from the context of different business practices, when he presented himself as someone whose professional practice revolved around the development of innovation strategies that transcended the specific contexts of business practices? What should we make of the fact that David emphasized that he has been offering innovation consulting services for a number of years and yet still considers himself “an amateur. So you cannot just show up and do this,” when the notion that one can “just show up and do this” has been popularized in large part because of the many short innovation workshops and executive training sessions that innovation consultants such as David offer to business executives? Lastly, how should we reconcile the widespread suspicion that innovation has lost its specificity with the rise of numerous innovation consulting firms that have developed highly specific innovation strategies, as well as with the fact that many of these consulting firms have been successfully selling their services to different kinds of business organizations, from small start-ups to established Fortune 500 companies?

An Undertheorized Dimension of Post-Fordist Flexible Accumulation

Since the 1980s, post-Fordism has been at the center of critical studies of capitalism. In this context, scholars have focused on the nature and implications of the development of new strategies to further reduce the turnover time of capital, that is, the time it takes for capital to complete a cycle from the capitalist’s investment of capital in the means of production to the return of capital to the capitalist after the sale of commodities (Azari-Rad 1999). Their analytic focus has tended to be on the development of more efficient production and distribution technologies. For example, they have discussed the transition to a part-time and temporary labor force (Muehlebach and Shoshan 2012; Ho 2009), cheaper manufacturing of goods in small batches and new distribution systems such as just-in-time inventory-flow delivery systems (Elam 1994; Shead 2017), geographical dispersal and mobility (Esser and Hirsch 1994), and the ability to take advantage of up-to-date information through computerization and electronic means of communication (Zaloom 2006; Holmes and Marcus 2006). However, post-Fordist flexible accumulation depends on reducing the turnover time of capital not only via more efficient production and distribution technologies but also via a higher rate of product innovation.
Organizations must not only instantaneously respond to but also orchestrate and anticipate market changes by generating a constant stream of ideas for new products and services. The plethora of studies of post-Fordism has thus left relatively undertheorized a key dimension of post-Fordist flexible accumulation, namely, the “acceleration in the pace of product innovation together with the exploration of highly specialized and small-scale market niches” (Harvey 1990, 156). Against this backdrop, what kind of professional expertise might emerge in response to organizations’ need to routinize the fast production of ideas for new products and services? How might such an expertise help organizations generate solutions to future crises whose nature they cannot know in advance, namely, the introduction by their competitors of new products and services that can upend their operations? Put more broadly, what kind of professional expertise might help organizations prepare themselves for and constantly generate the unpredictable in a predictable way, the future in the present, the unknown by means of the known?

Scholars have argued at length that the post-Fordist development of more efficient production and distribution technologies has had concrete societal implications. For example, the transition to part-time and temporary labor forces has affected workers’ well-being in numerous ways (Muehlebach and Shoshan 2012). Against this backdrop, how might the promise of the fast innovation of any entity by means of abstract, rule-governed strategies affect cultural notions of newness as well as individuals’ relation to their world—including their own lives—when such a world and lives are seen through the prism of endless innovation within reach?

The very idea of the intentional design of organizational structures meant to routinize the fast production of new cultural entities takes us to a relatively uncharted theoretical domain in cultural anthropology. Anthropologists have tended to view innovation as the result of copying errors in the process of social learning and the diffusion of social practices (Boas 1896; Kroeber 1940) or as the contingent, loosely guided, and often unconscious product of individuals’ experimentation with existing practices and constraints (often glossed as “improvisation” or “emergence”) in response to unexpected new situations and crises (White 1943, 339–40; Mead 1953; Lévi-Strauss 1966, 17–19; Bateson 1967, 148; Bourdieu 1977, 79; Chibnik 1981; Gell 1998, 215; Hannerz 1992; Hallam and Ingold 2007; Pandian 2015).1 In light of this intellectual tradition, what theory might account for a professional expertise that turns on the ability to systematize the fast production of ideas for new cultural entities by means of the development of rule-governed strategies that become part of the organization’s everyday practice?

The Book’s Argument

Based on a four-year ethnographic study of routinized business innovation norms and practices as they find expression in the work of innovation consultants, I address these and additional questions, offering a threefold argument. First, the consultants I worked with were not selling their clients snake oil that entailed little more than the appearance of entrepreneurship and an organizational cool branded with an unspecific catchall phrase.
Rather, they were busy developing and helping their clients learn to implement highly specific and rule-governed strategies of generating and imagining ideas for new products and services. Such strategies problematize a number of assumptions about both the business organization and the creative imagination. On the one hand, scholars have rarely viewed creative imagination as one of business organizations’ key dimensions. Yet the rise of norms and practices of business innovation, in which ideation strategies play an important role, suggests that creative imagination is fast becoming one of business organizations’ key components. On the other hand, scholars have often conceptualized creative imagination in terms of fleeting liminality, evanescence, a radical individual property, and a horizon that is removed from the here and now. Yet many strategies of business innovation bring creative imagination into the office or conference room as a stable property or “technology” (Sneath, Holbraad, and Pedersen 2009) that a number of people can generate, share, and debate together for a sustained length of time. Business innovation thus turns out to be a sphere of professional practice that generates new cultural entities by reconciling a professional ethos—with its ideals of rationality, systematicity, and reliability—and a modern-Romantic creative ethos, with its ideals of unpredictable emergence (Wilf 2014a). The possibility of such reconciliation has captured the imagination of business executives and the wider public and played a key role in the rise of a professional class of innovation consultants.

Second, many consultants’ substantial achievements notwithstanding, contemporary business innovation takes place in an economic and organizational environment that prizes speed and instantaneous results. This environment significantly shapes the social life of business innovation. Clients’ pressure for immediate results pushes innovation consultants to streamline the production of insights and ideas for new products and services. They consequently abstract and decontextualize the innovation process from the market to which it purports to refer. Although consultants argue that their strategies are oriented toward and take into account the consumer, the latter is often erased in the process of innovation. I show two forms of this erasure. In the first, the innovation process discards the need to engage with end users altogether because of a belief that all the needed information about future innovative products and services already inheres in existing products and services. In the second, although the innovation process begins with data collected from end users, these data undergo textual transformations that gradually decontextualize them from any meaningful connection to users.

Third, in addition to decoupling the innovation process from the market, some strands of business innovation have become self-reflexive and self-sustaining professional practices whose role is to mediate post-Fordist normative ideals of speed, instantaneity, and creative flexibility both to innovators and to their clients in addition to, and often at the expense of, generating end results that can actually be monetized. The rhetorical power of such practices to signal to clients and to innovators that “innovation is now taking place” emanates from their multimodal resonance with widespread ideologies of organizational creativity.
This rhetorical power is responsible for business innovation’s contemporary status as a bulletproof panacea for any entity in need of innovation, including one’s life and self.

These different, interrelated, and sometimes contradictory dimensions of routinized business innovation underlie David’s commentary. David complained that people think that innovation is “a thing” that can be abstracted from contextual factors, yet he presented his own professional practice as one that has reached the kind of level of generality that makes innovation appear to be “a thing” that transcends contexts, a perception that has also been encouraged by business innovation’s self-reflexivity, reification, and decoupling from the market. David lamented the fact that people think they can quickly master the principles of innovation and that they do not understand that business innovation consists of highly specialized skills and procedures, yet it is innovation consultants who have formulated easily learnable principles and recipes of innovation, disseminated them in relatively short training sessions, and applied them in concrete innovation sessions to quickly generate insights.

Routinized business innovation is thus neither the empty shell that its detractors claim it to be nor the holy grail of organizational success that its supporters insist it is. Rather, innovation consultants constantly need to negotiate the tension between their desire to come up with specific practices that could lead to ideas for new monetizable products and services—a goal that requires time and sensitivity to context—and the need to speed up the innovation process and signal to their clients and to themselves that “innovation is now taking place”—an achievement that requires them to decontextualize, abstract, and reify the innovation process. To understand this complexity, an ethnographic approach that is sensitive to innovators’ everyday practice is needed. As the author of a recent Wired magazine article noted, “the overuse and generalization of the term ‘innovation’ has led to a loss of understanding of what it is we need when we say we need more innovation. We lose sight of the specific skills and behavior needed to be innovative. . . . We should start talking about innovation as a series of separate skills and behaviors” (O’Bryan 2013). Against this backdrop, I provide a detailed analysis of the skills and behaviors of business innovation consultants based on participant observation in a number of key institutional sites in which they develop, crystallize, and apply those skills and behaviors and inculcate them in the businesspeople who are later supposed to implement them in their own organizations. In doing so, I unpack both the potentialities and cultural contradictions of routinized business innovation and tease out their theoretical and practical implications.

Commodity Fetishism, “Unmet Consumer Needs,” and the Production of the Future

The study of business innovation provides an opportunity to engage with and contribute to critical studies of capitalism as a future-producing and future-oriented social configuration. One focus in this strand of research has been capitalism’s future-oriented discursive practices.
For example, in his study of biotechnological start-ups in the United States and India, Sunder Rajan highlights “the grammar of biocapital,” which he describes as a promissory futuristic discourse, an orientation to the future when there is nothing in the present that prefigures it (Sunder Rajan 2006). This orientation is based in an ideology and culture of risk taking (Sunder Rajan 2006, 110; Comaroff and Comaroff 2000; Appadurai 2011; Miyazaki 2007; Maurer 2002; Riles 2004; Preda 2009) and is a condition of possibility for biotechnological start-ups, which depend on significant capital investment when there are no tangible products and revenues in the present that can justify such an investment (see also Taussig, Hoeyer, and Helmreich 2013). Other studies have focused on the production of new subjectivities for capitalism. For example, Rudnyckyj has studied the ways in which moderate Muslims in Southeast Asia learn to reconfigure their approach to Islam and their understanding of themselves as Muslims and thus “to make the religion compatible with principles for corporate success found in Euro-American management texts, self-help manuals, and life-coaching sessions” (Rudnyckyj 2010). Similarly, Dumit has argued that the pharmaceutical industry expands its market by making Americans perceive themselves as subjects who are inherently ill and in need of chronic treatment (Dumit 2012).

I complement these studies by arguing that innovation consultants produce the future not only by means of discursive practices and the production of new subjectivities but also by engaging with existing products as future-producing sites in which this future already inheres in embryonic form, awaiting the innovator’s intervention to help it materialize by means of specific practices that involve the innovator’s corporeality and imagination. The conditions of possibility that underlie this approach include culturally specific notions of form, potentiality, evolution, determinism, and prediction.

This mode of producing the future provides an opportunity to engage with what Marx called commodity fetishism (Marx 1978). Marx argued that under capitalism, commodities appear to have a nature or life of their own that is reflected in their price. Although it is human labor that is responsible for products’ existence and “life,” this labor remains concealed from consumers (Horkheimer and Adorno 2002). Recent studies of commodity fetishism have tended to focus on branding, that is, strategies of imbuing specific products with quasi-human personality traits with which consumers can identify (Arvidsson 2006; Foster 2007; Lury 2004; Moore 2003; Manning 2010; Lee and LiPuma 2002; Gershon 2017). Against this backdrop, I theorize a different form of commodity fetishism in the course of which innovators conceptually transform existing products into quasi persons that are endowed with a unique creative potential for developing into new products. The innovator does not assign specific personality traits to a specific product but rather invests it with a potential for creative development that, to be sure, is responsible for its present form but also for its future, potentially highly different forms.
These innovation strategies migrate to spheres outside of the business world, too, such as that of self-help, where commodities and technologies eventually become models of creative development that human individuals are asked to emulate, as if products’ potential for creative development were antecedent to that of human individuals.

Business innovation’s future-oriented approach ultimately turns on efforts to tap into “unmet consumer needs.” Scholars have studied “consumer needs” and their production under capitalism primarily through the prism of the ways in which marketers and advertisers recruit consumers to specific roles and encourage them to experience and inhabit needs associated with those roles, which existing products can presumably satisfy (Mazzarella 2003; Moore 2003; Applbaum 2003; Lury 2004). I highlight instead the coconstitution of products and consumers in the course of the innovation process. Ideas for new products shape, and are shaped by, innovators’ ideas about consumers. Future products and “unmet consumer needs” thus come to share an interrelatedly emergent and contingent nature that is nevertheless shaped by the specific post-Fordist business environment in which it is anchored and by the innovator who mediates between them and whose self and expertise, too, are constituted in this process of mediation.

The Ethnographic Setting and Fieldwork

Beginning in April 2012, I conducted ethnographic fieldwork with four innovation consulting firms, mainly in New York City. The bulk of the fieldwork took place with two of these firms, Newfound and Brandnew.2

Newfound was founded in 2012. It has offered innovation corporate training as well as contract work with individual companies on specific projects. Since its foundation, it has collaborated with companies from the banking, apparel, food, education, and tourism sectors and industries on a wide range of projects. Its founders and facilitators base their expertise in design, business management, and advertising. They trace most of their professional lineage to design thinking, a highly influential user-centered design and innovation method that is widely associated with the iconic Silicon Valley innovation consultancy firm IDEO and Stanford’s Institute of Design.3 This lineage has ties to the psychological study of creativity in that design thinking’s key method of ideation—brainstorming—is embedded in the context of the psychological study of creative problem-solving in the mid-twentieth century (Osborn 1953, xiv). I attended four different innovation workshops given by Newfound, the longest of which extended to five weeks. Participants in these workshops came mostly from the start-up sector and the creative industries. The cost of Newfound’s workshops was in the range of a few hundred dollars. Each workshop was usually led by two facilitators and attended by fifteen participants. The shorter workshops focused on the transmission of abstract principles, whereas the longer ones were structured around specific problems presented by real clients. By trying to solve a client’s problem by means of the innovation strategies inculcated in a workshop, participants hoped to gain hands-on experience and what they considered to be crucial skills in the contemporary marketplace. Clients hoped to benefit from the insights generated in the workshops and were consequently willing to underwrite some of their costs.

Brandnew was founded in 1994.
Since its founding it has collaborated with major companies from different sectors on a vast spectrum of consumer products and services, one of which has become a standard of innovation in the field of consumer electronics. Its facilitators base their expertise in cognitive science and the study of creative problem-solving with a focus on engineering problems, in addition to business management. Participants in its workshops and training sessions tended to be senior executives in large, established companies, some of which were Fortune 500 companies. They were mostly C-level executives (e.g., chief innovation officers) with business management degrees. The cost of Brandnew’s workshops was in the range of a few thousand dollars. Each workshop was usually led by four facilitators and attended by twenty-five participants.

In addition to attending Brandnew’s workshops, I participated in a course on business innovation in one of the top five US business schools.4 The course focused on the core principles of Brandnew’s signature innovation strategy. It lasted six weeks and was attended by close to seventy students. It has been offered a few times a year at this school.

Although this book does not provide a systematic comparison between the two consultancies, when juxtaposed, Newfound and Brandnew offer a good view of the wide spectrum of routinized business innovation strategies and services that are now prevalent in the business world and of the wide range of executives and entrepreneurs interested in mastering and incorporating these strategies. Newfound’s focus on design thinking provides a window into an innovation strategy that has become highly popular both within and outside the business world. In contrast, Brandnew provides insights into consultancies that offer more specialized proprietary innovation strategies. The innovators who work for Brandnew explicitly reject design thinking and its adherence to brainstorming as a method of creative ideation in favor of a much more systematic, quasi-algorithmic approach to creative problem-solving inspired by the fields of engineering and cognitive science. Their “no-nonsense” approach found expression in the fact that the Brandnew workshops I attended took place in dull, windowless hotel conference rooms, whereas Newfound’s workshops took place in a trendy shared workspace in a new office building—the kind of workspace that has become identified with the start-up sector and the creative industries. Equipped with floor-to-ceiling windows, open spaces, long communal tables, espresso machines, games, and other elements of a “fun” atmosphere, this workspace reflects the younger demographics of Newfound’s clients, who stand in contrast to Brandnew’s clients, who tend to be senior executives in established companies.5

Inasmuch as the consulting firms I worked with collaborated with major companies in a wide array of sectors, they provide a platform from which it is possible to generalize about the normative ideals and practices of routinized business innovation in the contemporary moment. That said, I am not suggesting that the innovation strategies that these consulting firms develop and disseminate exhaust the entire spectrum of innovation practices that exist now. Indeed, it is important to emphasize that the scope of the analysis I present in this book is intentionally limited in two ways.
First, the routinized business innovation strategies developed by the innovation consultants I worked with are highly abstract, formalized, and characterized by rule-governed rationality. These features make those strategies applicable to products and services across different business sectors. Such strategies are thus different from “in-house,” frequently informal innovation strategies and routines developed by many companies that are meant to be applied only to the specific products and services those companies produce and that are not immediately relevant to companies in other business sectors (cf. Moeran and Christensen 2013).

Consider, for example, Google’s famous “20% time” policy, which Google’s founders, Larry Page and Sergey Brin, highlighted in their 2004 IPO letter: “We encourage our employees, in addition to their regular projects, to spend 20% of their time working on what they think will most benefit Google. This empowers them to be more creative and innovative” (quoted in D’Onfro 2015). To begin, this innovation strategy is highly unspecific, offering no clear procedures for innovation save for the general allocation of “free” time. Second, it is characterized by a low level of routinization. It does not get “formal management oversight—Googlers aren’t forced to work on additional projects and there are no written guidelines about it” (D’Onfro 2015). Indeed, it appears that only “10% of Googlers are using” this policy because “it became too difficult for employees to take time off from their normal jobs.” In addition, as a former top Google executive put it, for those who do use this policy, “it’s really 120% time,” that is, 20 percent of additional time on top of their normal jobs. Third, even if it were routinized and enforced, this innovation policy could not be easily applied in other business sectors, such as the pharmaceutical industry, where a highly different production model prevails and a high level of collaboration between many people and coordination with regulatory authorities is required. Thus, in comparison with “in-house,” informal innovation strategies and routines, innovation workshops and business school courses provide a vantage point from which it is possible to discern higher-level normative ideals and practices of business innovation as innovators reflexively construct and understand them.

The scope of the analysis I present in this book is intentionally limited in a second way. I focus primarily on the idea-generation dimension of routinized business innovation, although in practice successful innovation consists of other dimensions, such as market analysis, feasibility considerations, regulatory issues, and organizational politics and resources (Akrich, Callon, and Latour 2002). Indeed, it is indicative that although in practice idea generation plays a relatively minor role in the overall innovation process (Schumpeter 1943, 132), it has almost always remained the primary focus of the innovation strategies that the consultancies I worked with developed as well as the dimension that their clients were most eager to learn, master, and implement in their home organizations. Rather than assume that this discrepancy distorts the reality of business innovation, I take it to be an important dimension of this reality, one that is indicative of the cultural order of business innovation that begs for a detailed explanation and analysis.
The rise of innovation as a key dimension of the contemporary business world, as well as the public fascination with innovation, has largely been propelled by the fact that norms and practices of business innovation resonate with powerful ideologies of creative agency and selfhood in the modern West. A focus on the idea-generation dimension of the innovation process is thus justified both by this dimension’s saliency in the field of business innovation consulting services and by the role it plays in business innovation’s broader appeal outside the business world.

During my fieldwork I attended innovation workshops, training sessions, courses, and conferences. Although participants and facilitators in Brandnew’s and Newfound’s workshops were aware of my presence as an ethnographer, I engaged in data collection, ideation sessions, data analysis, and presentation of final insights to clients as a full participant. I was paired with other participants and worked in teams on specific innovation problems. I complemented these forms of direct participant observation with formal interviews and many informal conversations with innovation consultants and the participants in the innovation sessions and workshops they organized. Against the backdrop of the lack of specificity in critical discussions about business innovation, my purpose is to describe and give voice to what innovation consultants do and how they understand and explain to others what they do, as well as to anthropologically theorize this ethnographic material in order to better account for routinized business innovation as a salient contemporary cultural phenomenon.

Outline of Chapters

The book is divided into three parts that reflect its threefold argument. Chapters 1 and 2 focus on the concrete innovation strategies innovation consultants develop and inculcate as well as on the cultural contradictions with which they need to contend and the discursive means with which they do so. Chapters 3 and 4 focus on the ways in which the contemporary business environment’s emphasis on speedy and instantaneous results shapes the social life of innovation strategies, which become pervaded by decontextualization and increasingly decoupled from the market to which they purport to refer. Finally, chapters 5 and 6 focus on the self-reflexive and performative nature of business innovation, including the ways in which it shapes the innovator’s self and notions of selfhood in the wider public.

The second half of this introduction provides a detailed analysis of one historical context that explains the emergence of business innovation as a key dimension of the contemporary business world. This context revolves around a series of transformations in the ways in which organizational and management theorists understood and managed the business organization throughout the twentieth century with respect to the role of uncertainty. Until the mid-twentieth century, organizational and management theorists approached uncertainty as an undesirable feature of organizations, one that must be eliminated as soon as possible. In contrast, in the second half of the twentieth century, they began to conceptualize organizations as entities whose logic encompasses uncertainty as a natural component that provides a crucial resource for their survival by allowing them to cope with unforeseen events in their internal and external environments and even to generate unforeseen events in the form of ideas for new products.
Organizational design consequently focused on developing structures that could integrate and generate uncertainty as a routine dimension of organizations’ logic of operation by tapping into and harnessing employees’ creative agency. A number of organizational theorists turned to the creative arts in general and jazz music in particular in search of adequate organizational models.

Chapter 1 unpacks in detail the results of these conceptual transformations as they find expression in the radical productivity of the innovation strategies developed by consultants when they are enacted in practice. Drawing on ethnographic examples from one of Brandnew’s workshops in which its signature innovation strategy was inculcated and put to use, the chapter highlights two procedures that account for this strategy’s productivity. The first procedure helps the innovator imagine new products by means of the deformation of existing products according to a series of well-defined steps. The innovator’s strict adherence to a highly focused, rule-governed procedure of imagination and his complete agnosticism to the status of the entities that he deforms by means of this procedure account for the procedure’s potential for radical productivity. The second procedure is systematic abduction. Abduction is the reasoning process, typically theorized in the context of scientific practice, in the course of which the scientist, in view of a strange situation, forms a hypothesis such that if it were true the situation would cease to be strange. Faced with the deformed objects the innovator created by means of the first procedure, he must think of the functions those objects might be able to perform for a hypothetical consumer such that their strange forms would make sense. Brandnew’s innovation strategy in effect systematizes and professionalizes abduction.

The chapter contextualizes the emergence of this productive organizational structure in the broader hypercompetitive world of business innovation that is dominated by the idea that any existing business organization faces the immediate danger of being undone by up-and-coming competitors who are about to launch new “disruptive” products. This framework requires business organizations to prepare themselves for imminent crises whose exact nature they cannot know until they emerge by constantly producing the potential solutions for them in advance in the form of a steady stream of ideas for new products and services.

However, innovation consultants must not only develop productive innovation strategies but also contend with the macrosociological landscapes in which business innovation is anchored. Set within a specifically Western modern normative framework, consultants’ promise to build and foster a stable corporate culture of innovation and organizational creativity embodies a basic cultural contradiction because modern-Romantic normative ideals of creative agency connote unpredictability and resistance to formalization and routinization. Their promise to help corporations build a culture of innovation that will generate a stable pipeline of ideas for new products and services is thus the promise to routinize that which ideologically cannot be routinized and whose value lies precisely in its resistance to being routinized and professionalized.
Against this backdrop, chapter 2 draws on fieldwork in Brandnew’s workshop to analyze the ways in which workshop facilitators attempt to reframe this cultural contradiction and thus encourage workshop participants to inhabit the—on the surface, counterintuitive—idea that innovation can and should be routinized, formalized, and rationalized. They do so by means of different ritual communicative events. They first bring into being the specific macrosociological order that opposes a Romantic ethos (associated with mercurial human creativity) and a professional ethos (associated with rule-governed rationality). During the workshop this macrosociological order then becomes the basis for suggested transformations in the roles that participants inhabit with respect to innovation, namely, from associating innovation with a Romantic ethos at the beginning of the workshop to accepting at its end that a professional ethos can lead to successful innovation as a permanent feature of the organization.

In chapter 3 I argue that although it might appear that Romantic notions of creativity have been eradicated from Brandnew’s innovation strategy by means of the latter’s algorithmic-like structure, in practice those notions have become this strategy’s very condition of possibility, albeit in a different and rather hidden guise. Focusing on Brandnew’s innovation strategy as it is explained in a business management book, I argue that this strategy transforms human creativity, understood as an unruly property, into a manageable and reliable resource by displacing it from the innovator and consumer to the nonhuman elements of the innovation process, namely, the products and services that are in need of innovation. Brandnew’s consultants argue that all the information the innovator needs in order to generate ideas for future successful products can be found in the history of the evolution of existing successful products. This evolution reveals crucial information about products’ “creative potential” to develop into new products that will tap into consumers’ “unmet needs” before consumers know that they have those needs. The innovator consequently is not required to engage with consumers at all in the ideation phase of the innovation process, only with products. At stake is a double inversion in which the innovator transforms the product into a quasi person endowed with a unique creative potential for development and growth and the consumer into a static, inert, quasi object whose needs and wants emerge deterministically and can be algorithmically inferred in advance by the innovator based on the rule-governed analysis of the product. Creativity is thereby both retained and tamed.

The decontextualization that characterizes Brandnew’s innovation strategy can also be found in innovation strategies such as design thinking that, in contrast to it, explicitly emphasize the importance of empathizing with consumers by directly engaging with them. As I demonstrate in chapter 4, at the core of this decontextualization stands the most ubiquitously used material artifact in business innovation, namely, the Post-it note. Drawing on fieldwork in one of Newfound’s workshops, I argue that whereas existing explanations attribute the Post-it note’s ubiquity to the fact that it is a convenient tool with which to conduct brainstorming sessions, an important reason for its omnipresence lies elsewhere: it enables innovators to quickly generate insights in line with post-Fordist ideals of speed and instantaneity.
First, Post-it notes enable innovators to produce pseudodata and to decouple data from the market under the guise of its reflection. In the course of the innovation process, innovators represent data about consumers by means of a series of textual artifacts of decreasing dimensions until the data are represented in the form of single words and even single graphic sketches on single Post-it notes. This kind of representation results in decontextualization and pragmatic ambiguity, that is, signs that point to a wide spectrum of potential objects for those who are supposed to interpret them. Such ambiguity and decontextualization are one condition of possibility for the faster production of ideas for new products, because context is weight. Once the innovator loses the context, he or she can move through the ideation phase more quickly.

Second, Post-it notes’ weak adhesive properties enable the innovator to arrange such pseudodata on conventional visual templates of what a valid insight should look like. Such templates might include a two-by-two matrix or a Venn diagram. When innovators arrange and combine Post-it notes with one another on such templates, the result is a quickly generated “ritual insight,” that is, an insight that receives its validity from the conventional prestige of the ready-made visual template that underlies it. Thus, shaped by a post-Fordist business environment that mandates the quick production of insights, Newfound’s innovation strategy is pervaded by decontextualization, too, even though, as opposed to Brandnew’s strategy, it begins with, and purports to be focused on and empathetic toward, consumers.

Against this backdrop, in chapter 5 I address the puzzling fact that although business innovation is often decoupled from the market to which it purportedly refers, this decoupling has only partially undermined the perception of its value in and outside the business world. The reason lies in innovators’ efforts to signal to clients by means of different performative practices that “innovation is now taking place.” Drawing on fieldwork with both Brandnew and Newfound, I argue that innovators use specific material artifacts and communicative practices to mediate the notion that their expertise is based in the ideals of flexibility, speed, minimalism, free information flow, and organizational creativity. However, these acts of mediation also have unintended consequences. They clutter the work of innovation and create centers of gravity, opacity, and rigidness. In other words, they both mediate and undermine the ideals with which innovators would like to be associated. I explore this contradiction as it finds expression in innovators’ efforts to mediate their workspace, expert body of knowledge, thought processes, and selves as organizationally creative.

In chapter 6 I look at the migration of norms and practices of routinized business innovation outside the business world as a consequence of the rising prestige, visibility, and bulletproof status of those norms and practices. I provide an in-depth analysis of “life design,” a set of commercially successful strategies developed by business innovators to help individuals “innovate” their lives and thereby achieve happiness.
I argue that the same modern-Romantic notions of the self that provided innovation consultants with a model of creative potentiality and the cultural conditions of possibility for developing design thinking strategies for innovating technologies are now ironically being transformed as a result of the fact that the self has become the subject of those strategies, as if it were a technology in need of innovation. The chapter unpacks what reflexivity means for the self as technology, what constitutes a well-designed life, what prototyping potential future lives entails, how the normative ideals of speed and instantaneity that suffuse business innovation affect notions of self-transformation when one’s life is approached as an object of innovation, what the presentation of self in the quest for a well-designed life means when it is the object of brainstorming sessions, and what socioeconomic conditions of possibility enable such a method of “self-innovation,” to begin with.

In the conclusion I first tease out a number of theoretical points about routinized business innovation. I then provide a general sociological argument about the function that innovation consultancies perform in the business world, namely, the function of an institutional myth that organizations are ready to embrace as a ritualized, though not necessarily effective, way to cope with the uncertainty and ambiguity that pervade business innovation. The conclusion ends by drawing parallels between knowledge production in anthropology and the arguments made in the book about knowledge production in business innovation. Based on this comparison, I argue that business innovation provides a cautionary tale in light of which recent calls made by anthropologists to revamp and “innovate” anthropological training and work in the model of design should be critiqued.

“How Did You Get from the Village Vanguard to Wall Street?”

The person asking me this question, a chief innovation officer in a Fortune 500 company whom I met in an innovation workshop, was not interested in the actual route one should take if one wanted to go from the Village Vanguard jazz club, located in Manhattan’s West Village, to the city’s financial hub on and around Wall Street in Manhattan’s downtown. A New Yorker for many years, this person could probably generate the shortest and most efficient route in an instant. Rather, he asked me this question after I had described to him my previous research on the rise of academic jazz music programs in the United States (Wilf 2014a). His question was a figurative expression of his surprise at the, on the surface, total disconnect between my previous and current research. To him, the short physical distance between those iconic meccas of the jazz and business worlds was in inverse proportion to what he considered to be the long conceptual distance that separated them.

And yet, as I argue in the remainder of this chapter, throughout the twentieth century the conceptual distance between the two worlds has gradually become smaller as a result of a series of transformations in the ways in which organizational and management theorists understood and managed the business organization with respect to the role of creative uncertainty.
Those transformations culminated in the idea that business organizations should adopt some of the organizational features that are found in the creative arts in general and jazz music in particular if they want to boost their organizational creativity and potential to innovate. These conceptual shifts heralded the transformation of creativity into an alienable means of capitalist production and of business innovation into a key dimension of the business world, thus providing an important contextual and historical backdrop for the story told in this book.

Contexts and Histories: From Designing Predictability to Incorporating Uncertainty

Joseph Schumpeter provided an early and highly influential definition of business innovation. Innovation, according to Schumpeter, is the creation of any new economic structure that can be monetized and commercialized. Such structures can include “the introduction of new commodities[,] . . . technological change in the production of commodities already in use, the opening up of new markets or of new sources of supply, Taylorization of work, improved handling of material, the setting up of new business organizations such as department stores—in short, any ‘doing things differently’ in the realm of economic life—all these are instances of what we shall refer to by the term Innovation” (Schumpeter 1939, 84). The notion of innovation as something that is not limited to technological change in a narrow sense has recently found expression in Clayton Christensen’s (1997) highly influential book The Innovator’s Dilemma, in which he clarifies that “technology . . . means the processes by which an organization transforms labor, capital, materials, and information into products and services of greater value. . . . This concept of technology therefore extends beyond engineering and manufacturing to encompass a range of marketing, investment, and managerial processes. Innovation refers to a change in one of these technologies” (xiii).

Throughout the twentieth century (i.e., before the recent exponential rise in the number and visibility of innovation consultants), different kinds of professionals—such as psychologists, sociologists, economists, designers, and organizational theorists—had already developed and disseminated business innovation as a policy-driven concept (Godin 2008, 41; Scott 2003, 38). Two strands of research played a particularly important role in this history: (1) economics and (2) management and organizational research. Economists contributed to the study of innovation via the quantification and measurement of productivity in relation to technological change and its commercialization (Christensen 1997; Godin 2008, 34; Schumpeter 1939). Meanwhile, organization and management theorists worked to identify and design organizational models that could boost productivity.

Although organization studies did not exist as an institutionalized scholarly field until the late 1940s, by that time the subject already had important precursory work in the contributions of administrative and management theorists (such as Frederick Taylor) who, from the end of the nineteenth century and throughout the first half of the twentieth century, attempted to formulate managerial principles and rationalize and standardize production. These theorists approached organizations as instruments designed to attain specific and predetermined goals in the most efficient and rational way, which was itself amenable to clear formulation.
Taylor’s scientific management of production was “the culmination of a series of developments occurring in the United States between 1880 and 1920 in which engineers took the lead in endeavoring to rationalize industrial organizations” (Scott 2003, 38; see also Shenhav 1999). The image of the organization as a well-oiled machine, in which different parts work in precise and reliable coordination with one another and nothing is left to chance, governed these engineers’ vision. They restructured the tasks workers performed as well as the design of the workspace in an attempt to facilitate efficient and reliable coordination. Ultimately, they also restructured the principles of managerial decision-making. Taylor famously argued that under scientific management

arbitrary power, arbitrary dictation, ceases; and every single subject, large and small, becomes the question for scientific investigation, for reduction to law. . . . The man at the head of the business under scientific management is governed by rules and laws which have been developed through hundreds of experiments just as much as the workman is, and the standards which have been developed are equitable. (Quoted in Scott 2003, 39)

Their goal was to reduce uncertainty and even eliminate it entirely or, if it should arise, to resolve it by means of predetermined, rational procedures.

Industrial psychologists influenced subsequent approaches to organizational design. As opposed to their predecessors, they viewed the organization as a much more complex entity. They highlighted the existence of discrepancies between intended organizational goals and the goals organizations actually pursue and between the ideal of formal structure and the reality of informal structure. Key among those industrial psychologists was Elton Mayo, who, through a series of studies, demonstrated that individuals do not always function as atomistic, rational, and economic agents but rather follow a complex set of motivations that involve feelings and sentiments that are based in group solidarity (Scott 2003, 62). Mayo’s findings led to a heightened focus on the capacity of managerial leadership to influence the behavior of subordinates. Managers were encouraged to be more sensitive to workers’ psychological and social needs. This organizational perspective highlighted emotional control, anger management, empathy, and strong interpersonal skills as key managerial resources (Illouz 2008). It subsequently led to managerial notions such as job enrichment, employees’ participation in decision-making, and work satisfaction. In contrast to the rational system approach, this framework acknowledged uncertainty as a possible component of organizational reality. However, similar to the rational system approach, its goal was to train managers and restructure the work environment in such a way that this uncertainty would not arise or, if it should arise, could immediately be resolved by managers who were equipped with adequate emotional skills.

In contrast, the third dominant approach in organization studies, which emerged after World War II, conceptualized the business organization as an entity whose logic encompasses uncertainty and flexibility not as undesirable features but as natural components that provide a crucial resource for the organization’s survival (Scott 2003, 82–101).
Inspired by cybernetics and information theory, this approach emphasized the distinction between different systems in terms of their complexity. In less complex systems such as simple machines, the interdependence between parts is rigid, and the behavior of each part is highly constrained. These systems are nonreactive to their environment. They function well in stable environments and are suitable for the completion of predetermined, unchanging tasks according to predetermined schemes of operation. In contrast, in more complex systems such as social systems and business organizations, the interdependence between parts is less constrained. These systems are loosely coupled and flexible. Uncertainty is one of their key dimensions. Proponents of this approach viewed uncertainty as an organizational resource rather than an anomaly that must be eliminated as quickly as possible. They argued that complex systems can successfully cope with and even mobilize uncertainty because they are able to process informational input of different kinds—both internally and externally derived—and thus change their means for the attainment of specific goals and the goals themselves according to shifting contextual conditions (cf. Akrich, Callon, and Latour 2002, 189).

A significant share of organizational theory subsequently focused on determining proper work flows, control systems, and information-processing templates in relation to human individuals’ ability to manage and capitalize on uncertainty. Karl Weick, one of the key figures in this strand of research, argued that “the basic raw materials on which organizations operate are informational inputs that are ambiguous, uncertain, equivocal”; hence, the goal of organizing should be to establish “a workable level of certainty” in the context of which human individuals could function well (Weick 1969, 40; see also Scott 2003, 98). On the one hand, theorists pointed to the limitations of human individuals as information processors in terms of their “low channel capacity, lack of reliability, and poor computational ability”; on the other hand, they pointed to the advantages of “the human element,” such as “its large memory capacity, its large repertory of responses, its flexibility in relating these responses to information inputs, and its ability to react creatively when the unexpected is encountered” (Haberstroh 1965, 1176; see also Scott 2003, 95). They consequently defined the outstanding task for system designers as “how to create structures that will overcome the limitations and exploit the strengths of each system component, including the individual participants” (Scott 2003, 95). Inspired by nascent psychological research on creativity (Guilford 1950; Osborn 1953; Rossman 1935), they approached “individual participants,” especially their potential to act creatively, as crucial resources that can enable business organizations to function better vis-à-vis the increased uncertainty and volatility that characterize their institutional environment. Significantly, a key strand in this research agenda turned to the creative arts in general, and jazz improvisation in particular, as sources of inspiration for the design of organizational structures that could cultivate and tap into employees’ ability to respond flexibly to conflicting and ambiguous information inputs and “to react creatively when the unexpected is encountered” (Haberstroh 1965, 1176).
Theorists argued that the jazz template could provide inspiration for the design of business organizational structures that were not only flexible enough to cope with unexpected events but also capable of producing unexpected or “virtual” events in the form of novel ideas for new products. These ideas could then be developed into innovations in fields in which to remain stagnant is to perish (Akgun et al. 2007; Dyba 2000; Kamoche and Cunha 2001; Mantere, Sillince, and Hamalainen 2007; Moorman and Miner 1998).

Incorporating Jazz Improvisation

At the 1995 Academy of Management National Conference in Vancouver, British Columbia, a symposium titled “Jazz as a Metaphor for Organizing in the 21st Century” took place. The symposium consisted of a series of scholarly presentations, “a demonstration and discussion of jazz improvisation by panelists who were professional jazz musicians, followed by a concert and social event during which these musicians regaled the audience with superb jazz” (Meyer, Frost, and Weick 1998, 540). The presentations, together with additional articles on this topic, were eventually published in the top-tier journal Organization Science. The authors explained that the symposium had been organized in response to the significant changes in the nature of the challenges that organizations would have to cope with in the twenty-first century. As one author put it, to come up with organizational models that would be adequate to this changing environment,

we need a model of a group of diverse specialists living in a chaotic, turbulent environment; making fast, irreversible decisions; highly interdependent on one another to interpret equivocal information; dedicated to innovation and the creation of novelty. Jazz players do what managers find themselves doing: fabricating and inventing novel responses without a prescripted plan and without certainty of outcomes; discovering the future that their action creates as it unfolds. (Barrett 1998, 605)

Elsewhere the same author added:

The mechanistic, bureaucratic model for organizing—in which people do routine, repetitive tasks, in which rules and procedures are devised to handle contingencies, and in which managers are responsible for planning, monitoring and creating command and control systems to guarantee compliance—is no longer adequate. Managers will face more rather than less interactive complexity and uncertainty. This suggests that jazz improvisation is a useful metaphor for understanding organizations interested in learning and innovation. To be innovative, managers—like jazz musicians—must interpret vague cues, face unstructured tasks, process incomplete knowledge, and yet they must take action anyway. (Barrett 1998, 620; see also Weick 1998)

The term jazz music encompasses a wide range of stylistic genres and is thus a fuzzy category with porous boundaries. Almost all of the organizational theorists who turned to jazz in search of organizational models focused on straight-ahead jazz, in which a group of musicians improvise on a given (“standard”) tune that consists of a melody and a basic harmonic sequence (a string of chords). Players improvise on these minimal structures by using a stock of conventional building blocks such as short phrases and modes of articulation, which they combine in inventive ways and in response to the real-time contribution of their bandmates.
The real-time, improvised nature of this art form means that it is inherently an emergent phenomenon; that is, it results in new meaningful structures that to a great extent cannot be anticipated in advance. Although it is not creation ex nihilo, because mature improvisers must master different stylistic conventions and have a thorough knowledge of the canon, the actual outcome of group improvisation remains uncertain and unpredictable. Its creativity resides precisely in these features.

Organizational theorists who turned to the jazz metaphor usually relied on ethnographies of jazz improvisation or their own experience as semiprofessional jazz musicians to emphasize a number of jazz’s features that are related to the uncertainty that pervades it and that is constitutive of jazz’s very creativity. For example, Barrett (1998, 609–12) enumerates a number of features of jazz improvisation that are directly related to creative uncertainty and follows these with concrete advice on how these features can be used in organizational design. First, Barrett argues that jazz musicians intentionally disrupt their habituated playing patterns and put themselves in unfamiliar musical situations that are likely to produce errors and unexpected outcomes. They keep pushing themselves beyond their own comfort zone and thus ensure that their playing does not become stagnant and predictable. Second, musicians use the unexpected outcomes and errors that result from this emphasis as resources and musical opportunities to redefine the context: something that at one point seems like an error subsequently becomes coherent within this new context. In this way, musicians constantly generate and develop new meaningful structures. Third, musicians use minimal structures of communication and planning, which foster flexibility and indeterminacy. A player has only a tune’s basic harmonic structure and melody to improvise on, as well as the ongoing contribution of his bandmates. These minimal structures foster uncertainty of information. Fourth, the jazz band is structured around distributed task negotiation and synchronization between bandmates. This means that information constantly flows in all directions rather than hierarchically.

With respect to each of these features, Barrett makes concrete suggestions for organizational design whose goal is to infuse the business organization with creativity and to create the organizational conditions of possibility for continued innovation. First, organizational leaders must encourage and require their employees to abandon habituated modes of doing things and instead to take risks. Second, they must change their modes of evaluating their employees by treating the latter’s errors as an inseparable part of learning rather than as punishable events. This recommendation first treats errors as an inevitable outcome of learning and then embraces them as a resource. By creating “organizational climates that value errors as a source of learning . . . organizational leaders can create an aesthetic of imperfection and an aesthetic of forgiveness that construes errors as a source of learning that might open new lines of inquiry” (Barrett 1998, 619). Third, organizational leaders must develop the equivalent of minimal structures that will sustain maximum flexibility and maintain ambiguity while providing employees with sufficient orientation.
Such equivalent structures might be “credos, stories, myths, visions, slogans, mission statements, trademarks” (612). Fourth, organizational leaders must cultivate a work environment characterized by “distributed, multiple leadership in which people take turns leading various projects as their expertise is needed” (618).

These and similar recommendations were the outcome of the paradigmatic shift in organization studies that culminated in the realization that contingency and uncertainty have become part and parcel of the environments within which many organizations must function and that a flexible organizational structure has a better chance of coping with such environments for two reasons. First, a flexible organization can better respond to unexpected events in its external and internal environments. Second, it can generate unexpected events that are essential for innovation. Many programmatic calls for business organizations to adopt organizational models from the jazz world and the creative arts have been motivated by the hope that such models can foster a culture of innovation and new product development. If the jazz band enables musicians to produce unexpected events continuously and then to elaborate some of these events into new meaningful structures, then, it is hoped, an organization that adopts the jazz band’s organizational model might be able to produce unexpected ideas continuously and then develop them into viable innovations in business sectors and niches in which to remain stagnant is to perish. It is for this reason that organizational models inspired by jazz improvisation have been discussed predominantly in the context of “new product development in turbulent environments” (Akgun et al. 2007; Moorman and Miner 1998), “product innovation” (Kamoche and Cunha 2001), and the functioning of small software organizations (Dyba 2000) rather than in the context of organizational change in general (Mantere, Sillince, and Hamalainen 2007).

The distinction anthropologists have made between “possible uncertainty” and “potential uncertainty” can clarify the appeal of this organizational strategy. Whereas “possible uncertainty . . . is dependent on past knowledge, calculation, and evaluation (the chances of a particular risk being realized),” “potential uncertainty, by contrast, does not derive from the question of whether one future possibility or another will be realized (as in the case of possible uncertainty) but from a virtual domain with the capacity to generate a broad variety of actualizations” (Samimian-Darash 2013, 4, emphasis added). The “actualizations” that this “virtual domain” can generate may have never taken place before and hence are not known and cannot be known in advance. Creativity in jazz is based in “potential uncertainty.” Organizational theorists found inspiration in the idea that the jazz band functions as a “virtual domain” in which musicians can generate a wide variety of new and hitherto unthought-of musical events and then develop them into new structures whose full meaning becomes apparent only retrospectively because of their emergent nature:

The improviser can begin by playing a virtual random series of notes, with little or no intention as to how it will unfold. These notes become the material to be shaped and worked out, like pieces of a puzzle. The improviser begins to enter into a dialogue with her material: prior selections begin to fashion subsequent ones as these are aligned and reframed in relation to prior patterns.
(Barrett 1998, 615, emphasis added)

Organizational theorists found jazz’s “virtual domain” appealing in light of their belief that in turbulent environments that require organizations to incessantly develop new products, services, and structures, organizational structures that engage with uncertainty only in the form of calculating the chances of the realization of a particular possibility that is already formulated and imagined in advance (“possible uncertainty”) might not be very useful. Although managers certainly make conjectures about what might be “the next big thing” and are engaged in calculating the probability that this or that next “big thing” will actually materialize, to gain a competitive advantage in the marketplace, their organizations must also develop structures that can constantly generate new events that were not hitherto thought of and allow for their development into new products. Some of the innovators I worked with specifically referred to the stage in the innovation process in which ideas for new products and services are generated as “the virtual situation,” thus pointing to the fact that “potential uncertainty” rather than “possible uncertainty” was the form of uncertainty to which they oriented their professional practice.

The organizational need to foster this specific configuration of creative uncertainty and virtuality that will give rise to hitherto unimagined events explains why many organizational theorists have turned to the creative arts in general in search of new organizational models. Modern Western art is based in the Romantic idea of the creative, as opposed to the imitative, imagination (Abrams 1971). Whereas the imitative form of imagination entails the representation of existing worlds, the creative form of imagination entails the creation of new, hitherto unimagined worlds. I will discuss these and related modern-Romantic normative ideals of creativity in detail in subsequent chapters, for they provided some of the cultural conditions of possibility for the innovation strategies that the innovators I worked with developed. The intimate historical and cultural links between routinized business innovation and normative ideals and practices of creative agency have contributed to making business innovation a powerful cultural trope that has captured the imagination of business executives and the wider public. At the same time, these links have also produced significant complications for innovation consultants, who have to convince their potential clients that they have developed the means to routinize creativity and transform it from a mercurial human faculty, as it has long been understood in the modern Western popular imagination, into a reliable and stable organizational source of ideas for new products and services. Before unpacking these contradictions, however, it is first necessary to have a better sense of what routinized business innovation strategies actually look like and what their added value for business organizations might be.
All men are designers. All that we do, almost all the time, is design, for design is basic to all human activity. The planning and patterning of any act towards a desired, foreseeable end constitutes the design process. Any attempt to separate design, to make it a thing-by-itself, works counter to the inherent value of design as the primary underlying matrix of life. Design is composing an epic poem, executing a mural, painting a masterpiece, writing a concerto. But design is also cleaning and reorganizing a desk drawer, pulling an impacted tooth, baking an apple pie, choosing sides for a back-lot baseball game, and educating a child. Design is the conscious effort to impose meaningful order.

The order and delight we find in frost flowers on a window pane, in the hexagonal perfection of a honeycomb, in leaves, or in the architecture of a rose reflect man's preoccupation with pattern, the constant attempt to understand an ever-changing, highly complex existence by imposing order on it - but these things are not the product of design. They possess only the order we ascribe to them. The reason we enjoy things in nature is that we see an economy of means, simplicity, elegance and an essential rightness in them. But they are not design. Though they have pattern, order, and beauty, they lack conscious intention. If we call them design, we artificially ascribe our own values to an accidental side issue. The streamlining of a trout's body is aesthetically satisfying to us, but to the trout it is a by-product of swimming efficiency. The aesthetically satisfying spiral growth pattern found in sunflowers, pineapples, pine cones, or the arrangement of leaves on a stem can be explained by the Fibonacci sequence (each member is the sum of the two previous members: 1, 1, 2, 3, 5, 8, 13, 21, 34 ...; see the sketch after this passage), but the plant is only concerned with improving photosynthesis by exposing a maximum of its surface. Similarly, the beauty we find in the tail of a peacock, although no doubt even more attractive to a peahen, is the result of intra-specific selection (which, in the case cited, may even ultimately prove fatal to the species).

Intent is also missing from the random order system of a pile of coins. If, however, we move the coins around and arrange them according to size and shape, we add the element of intent and produce some sort of symmetrical alignment. This symmetrical order system is a favourite of small children, unusually primitive peoples, and some of the insane, because it is so easy to understand. Further shifting of the coins will produce an infinite number of asymmetrical arrangements which require a higher level of sophistication and greater participation on the part of the viewer to be understood and appreciated. While the aesthetic values of the symmetrical and asymmetrical designs differ, both can give ready satisfaction since the underlying intent is clear. Only marginal patterns (those lying in the threshold area between symmetry and asymmetry) fail to make the designer's intent clear. The ambiguity of these 'threshold cases' produces a feeling of unease in the viewer. But apart from these threshold cases there are an infinite number of possible satisfactory arrangements of the coins. Importantly, none of these is the one right answer, though some may seem better than others.
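A minimal computational sketch of the Fibonacci rule just described - each member the sum of the two before it - might look like this in Python (the function name and the printed line are illustrative additions, not part of the original text):

    # Generate the first n members of the Fibonacci sequence,
    # where each member is the sum of the two previous members.
    def fibonacci(n):
        members = []
        a, b = 1, 1
        for _ in range(n):
            members.append(a)
            a, b = b, a + b
        return members

    print(fibonacci(9))  # [1, 1, 2, 3, 5, 8, 13, 21, 34]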
Shoving coins around on a board is a design act in miniature because design as a problem-solving activity can never, by definition, yield the one right answer: it will always produce an infinite number of answers, some 'righter' and some 'wronger'. The 'rightness' of any design solution will depend on the meaning with which we invest the arrangement. Design must be meaningful. And 'meaningful' replaces the semantically loaded noise of such expressions as 'beautiful', 'ugly', 'cool', 'cute', 'disgusting', 'realistic', 'obscure', 'abstract', and 'nice', labels convenient to a bankrupt mind when confronted by Picasso's 'Guernica', Frank Lloyd Wright's Fallingwater, Beethoven's Eroica, Stravinsky's Le Sacre du printemps, Joyce's Finnegans Wake. In all of these we respond to that which has meaning.

The mode of action by which a design fulfils its purpose is its function. 'Form follows function', Louis Sullivan's battle cry of the 1880s and 1890s, was followed by Frank Lloyd Wright's 'Form and function are one'. But semantically, all the statements from Horatio Greenough to the German Bauhaus are meaningless. The concept that what works well will of necessity look well has been the lame excuse for all the sterile, operating-room-like furniture and implements of the twenties and thirties. A dining table of the period might have a top, well proportioned in glistening white marble, the legs carefully nurtured for maximum strength with minimum materials in gleaming stainless steel. And the first reaction on encountering such a table is to lie down on it and have your appendix extracted. Nothing about the table says: 'Dine off me.' Le style international and die neue Sachlichkeit have let us down rather badly in terms of human value. Le Corbusier's house as la machine à habiter and the packing-crate houses evolved in the Dutch De Stijl movement reflect a perversion of aesthetics and utility.

'Should I design it to be functional,' the students say, 'or to be aesthetically pleasing?' This is the most heard, the most understandable, and the most mixed-up question in design today. 'Do you want it to look good, or to work?' Barricades erected between what are really just two of the many aspects of function. It is all quite simple: aesthetic value is an inherent part of function. A simple diagram will show the dynamic actions and relationships that make up the function complex. It is now possible to go through the six parts of the function complex and to define every one of its aspects.

METHOD: The interaction of tools, processes, and materials. An honest use of materials, never making the material seem that which it is not, is good method. Materials and tools must be used optimally, never using one material where another can do the job less expensively and/or more efficiently. The steel beam in a house, painted a fake wood grain; the moulded plastic bottle designed to look like expensive blown glass; the 1967 New England cobbler's bench reproduction ('worm holes $1 extra') dragged into a twentieth-century living room to provide dubious footing for Martini glass and ash tray: these are all perversions of materials, tools, and processes. And this discipline of using a suitable method extends naturally to the field of the fine arts as well. Alexander Calder's 'The Horse', a compelling sculpture at the Museum of Modern Art in New York, was shaped by the particular material in which it was conceived.
Calder decided that boxwood would give him the specific colour and texture he desired in his sculpture. But boxwood comes only in rather narrow planks of small sizes. (It is for this reason that it traditionally has been used in the making of small boxes: hence its name.) The only way he could make a fair-sized piece of sculpture out of a wood that only comes in small pieces was to interlock them somewhat in the manner of a child's toy. 'The Horse', then, is a piece of sculpture, the aesthetic of which was largely determined by method. For the final execution at the Museum of Modern Art Calder chose to use thin slats of walnut, a wood similar in texture.

When early Swedish settlers in what is now Delaware decided to build, they had at their disposal trees and axes. The material was a round tree trunk, the tool an axe, and the process a simple kerf cut into the log. The inevitable result of this combination of tools, materials, and process is a log cabin. From the log cabin in the Delaware Valley of 1680 to Paolo Soleri's desert home in twentieth-century Arizona is no jump at all. Soleri's house is as much the inevitable result of tools, materials, and processes as is the log cabin. The peculiar viscosity of the desert sand where Soleri built his home made his unique method possible. Selecting a mound of desert sand, Soleri criss-crossed it with V-shaped channels cut into the sand, making a pattern somewhat like the ribs of a whale. Then he poured concrete in the channels, forming, when set, the roof-beams of the house-to-be. He added a concrete skin for the roof and bulldozed the sand out from underneath to create the living space itself. He then completed the structure by setting in car windows garnered from automobile junkyards. Soleri's creative use of tools, materials, and processes was a tour de force that gave us a radically new building method.

[Paolo Soleri: Carved earth form for the original drafting room and interior of the ceramics workshop. Photos by Stuart Weiner.]

Dow Chemical's 'self-generating' styrofoam dome is the product of another radical approach to building methods. The foundation of the building can be a 12-inch-high circular retaining wall. To this wall a 4-inch wide strip of styrofoam is attached which rises as it goes around the wall from zero to 4 inches in height, forming the base for the spiral dome. On the ground in the centre, motorized equipment operates two spinning booms, one with an operator and the other holding a welding machine. The booms move around, somewhat like a compass drawing a circle, and they rise with a spiralling motion at about 30 feet a minute. Gradually they move in towards the centre. A man sitting in the saddle feeds an 'endless' 4 x 4-inch strip of styrofoam into the welding machine, which heat-welds it to the previously hand-laid styrofoam. As the feeding mechanism follows its circular, rising, but ever-diminishing diameter path, this spiral process creates the dome. Finally, a hole 36 inches in diameter is left in the top, through which man, mast, and movement arm can be removed. The hole is then closed with a clear plastic pop-in bubble or a vent. At this point the structure is translucent, soft, but still entirely without doors or windows. The doors and windows are then cut (with a minimum of effort; in fact the structure is still so soft that openings could be cut with one's fingernail), and the structure is sprayed inside and out with latex-modified concrete.
The dome is ultra-lightweight, is secured to withstand high wind speeds and great snow loads, is vermin-proof, and is inexpensive. Several of these 54-foot-diameter domes can be easily joined together into a cluster. All these building methods demonstrate the elegance of solution possible with a creative interaction of tools, materials, and processes.

USE: 'Does it work?' A vitamin bottle should dispense pills singly. An ink bottle should not tip over. A plastic-film package covering sliced pastrami should withstand boiling water. As in any reasonably conducted home, alarm-clocks seldom travel through the air at speeds approaching five hundred miles per hour; 'streamlining' clocks is out of place. Will a cigarette lighter designed like the tail fin of an automobile (the design of that automobile was copied from a pursuit plane of the Korean War) give more efficient service? Look at some hammers: they are all different in weight, material, and form. The sculptor's mallet is fully round, permitting constant rotation in the hand. The jeweller's chasing hammer is a precision instrument used for fine work on metal. The prospector's pick is delicately balanced to add to the swing of his arm when cracking rocks. The ball-point pen with a fake polyethylene orchid surrounded by fake styrene carrot leaves sprouting out of its top, on the other hand, is a tawdry perversion of design for use.

But the results of the introduction of a new device are never predictable. In the case of the automobile, a fine irony developed. One of the earliest criticisms of the car was that, unlike 'old Dobbin', it didn't have the sense 'to find its way home' whenever its owner was incapacitated by an evening of genteel drinking. No one foresaw that mass acceptance of the car would put the American bedroom on wheels, offering everyone a new place to copulate (and privacy from supervision by parents and spouses). Nobody expected the car to accelerate our mobility, thereby creating the exurban sprawl and the dormitory suburbs that strangle our larger cities; or to sanction the killing of fifty thousand people per annum, brutalising us and making it possible, as Philip Wylie says, 'to see babies with their jaws ripped off on the corner of Maine and Maple'; or to dislocate our societal groupings, thus contributing to our alienation; and to put every yut, yahoo, and prickamouse from sixteen to sixty in permanent hock to the tune of $80 a month. In the middle forties, no one foresaw that, with the primary use function of the automobile solved, it would emerge as a combination status symbol and disposable, chrome-plated codpiece.

But two greater ironies were to follow. In the early sixties, when people began to fly more, and to rent standard cars at their destination, the businessman's clients no longer saw the car he owned and therefore could not judge his 'style of life' by it. Most of Detroit's Baroque exuberance subsided, and the automobile again came closer to being a transportation device. Money earmarked for status demonstration was now spent on boats, colour television sets, and other ephemera. The last irony is still to come: with carbon monoxide fumes poisoning our atmosphere, the electric car, driven at low speeds and with a cruising range of less than one hundred miles, reminiscent of the turn of the century, may soon make an anachronistic comeback. Anachronistic because the days of individual transportation devices are numbered.
The automobile gives us a typical case history of seventy years of the perversion of design for use.

NEED: Much recent design has satisfied only evanescent wants and desires, while the genuine needs of man have often been neglected by the designer. The economic, psychological, spiritual, technological, and intellectual needs of a human being are usually more difficult and less profitable to satisfy than the carefully engineered and manipulated 'wants' inculcated by fad and fashion. People seem to prefer the ornate to the plain as they prefer day-dreaming to thinking and mysticism to rationalism. As they seek crowd pleasures and choose widely travelled roads rather than solitude and lonely paths, they seem to feel a sense of security in crowds and crowdedness. Horror vacui is horror of inner as well as outer vacuum.

The need for security-through-identity has been perverted into role-playing. The consumer, unable or unwilling to live a strenuous life, can now act out the role by appearing caparisoned in Naugahyde boots, pseudo-military uniforms, voyageur's shirts, little fur jackets, and all the other outward trappings of Davy Crockett, Foreign Legionnaires, and Cossack Hetmans. (The apotheosis of the ridiculous: a 'be-your-own-Paul-Bunyan-kit, beard included', neglecting the fact that Paul Bunyan is the imaginary creature of an advertising firm early in this century.) The furry parkas and elk-hide boots are obviously only role-playing devices, since climatic control makes their real use redundant.

A short ten months after the Scott Paper Company introduced disposable paper dresses for 99c, it was possible to buy throwaway paper dresses ranging from $20 to $149.50. With increased consumption, the price of the 99c dress could have dropped to 40c. And a 40c paper dress is a good idea. Typically, industry perverted the idea and chose to ignore an important need-fulfilling function of the design: disposable dresses inexpensive enough to make disposability economically feasible for the consumer.

Greatly accelerated technological change has been used to create technological obsolescence. This year's product often incorporates enough technical changes to make it really superior to last year's offering. The economy of the market place, however, is still geared to a static philosophy of 'purchasing-owning' rather than a dynamic one of 'leasing-using', and price policy has not resulted in lowered consumer cost. If a television set, for instance, is to be an every-year affair, rather than a once-in-a-lifetime purchase, the price must reflect it. Instead, the real values of real things have been driven out by false values of false things, a sort of Gresham's Law of Design. As an attitude, 'Let them eat cake' has been thought of as a manufacturer's basic right. And by now people, no longer 'turned on' by a loaf of bread, can differentiate only between frostings.

Our profit-oriented and consumer-oriented Western society has become so overspecialised that few people experience the pleasures and benefits of full life, and many never participate in even the most modest forms of creative activity which might help to keep their sensory and intellectual faculties alive. Members of a 'civilised' community or nation depend on the hands, brains, and imaginations of experts.
But however well trained these experts may be, unless they have a sense of ethical, intellectual, and artistic responsibility, morality and an intelligent, 'beautiful', and elegant quality of life will suffer in astronomical proportions under our present-day system of mass production and private capital.

TELESIS: 'The deliberate, purposeful utilisation of the processes of nature and society to obtain particular goals' (American College Dictionary, 1961). The telesic content of a design must reflect the times and conditions that have given rise to it, and must fit in with the general human socio-economic order in which it is to operate. The uncertainties and the new and complex pressures in our society make many people feel that the most logical way to regain lost values is to go out and buy Early American furniture, put a hooked rug on the floor, buy ready-made phoney ancestor portraits, and hang a flint-lock rifle over the fireplace. The gas-light so popular in our subdivisions is a dangerous and senseless anachronism that only reflects an insecure striving for the 'good old days' by consumer and designer alike.

Our twenty-year love affair with things Japanese - Zen Buddhism, the architecture of the Ise Shrine and Katsura Imperial Palace, haiku poetry, Hiroshige and Hokusai block-prints, the music of koto and samisen, lanterns and sake sets, green tea liqueur and sukiyaki and tempura - has triggered an intemperate demand by consumers who disregard telesic aptness. By now it is obvious that our interest in things Japanese is not just a passing fad or fashion but rather the result of a major cultural confrontation. As Japan was shut off for nearly two hundred years from the Western world under the Tokugawa Shogunate, its cultural expressions flourished in a pure (although somewhat inbred) form in the imperial cities of Kyoto and Edo (now Tokyo). The Western world's response to an in-depth knowledge of things Japanese is comparable only to the European reaction to things classical, which we are now pleased to call the Renaissance. Nonetheless, it is not possible to translate things from one culture to another.

The floors of traditional Japanese homes are covered by floor mats. These mats are 3 x 6 feet in size and consist of rice straw closely packed inside a cover of woven rush. The long sides are bound with black linen tape. While tatami mats impose a module (homes are spoken of as six-, eight-, or twelve-mat homes), their primary purposes are to absorb sounds and to act as a sort of wall-to-wall vacuum cleaner which filters particles of dirt through the woven surface and retains them in the inner core of rice straw. Periodically these mats (and the dirt within them) are discarded, and new ones are installed. Japanese feet encased in clean, sock-like tabi (the sandal-like street shoe, or geta, having been left at the door) are also designed to fit in with this system. Western-style leather-soled shoes and spike heels would destroy the surface of the mats and also carry much more dirt into the house. The increasing use of regular shoes and industrial precipitation make the use of tatami difficult enough in Japan and absolutely ridiculous in the United States, where high cost makes periodic disposal and reinstallation ruinously expensive. But a tatami-covered floor is only part of the larger design system of the Japanese house.
Fragile, sliding paper walls and tatami give the house definite and significant acoustical properties that have influenced the design and development of musical instruments and even the melodic structure of Japanese speech, poetry, and drama. A piano, designed for the reverberating insulated walls and floors of Western homes and concert halls, cannot be introduced into a Japanese home without reducing the brilliance of a Rachmaninoff concerto to a shrill cacophony. Similarly, the fragile quality of a Japanese samisen cannot be fully appreciated in the reverberating box that constitutes the American house. Americans who try to couple a Japanese interior with an American living experience in their search for exotica find that elements cannot be ripped out of their telesic context with impunity.

ASSOCIATION: Our psychological conditioning, often going back to earliest childhood memories, comes into play and predisposes us towards, or provides us with antipathy against, a given value. Increased consumer resistance in many product areas testifies to design neglect of the associational aspect of the function complex. After two decades, the television-set industry, for instance, has not yet resolved the question of whether a television set should carry the associational values of a piece of furniture (a lacquered mah-jongg chest of the Ming Dynasty) or of technical equipment (a portable tube tester). Television receivers that carry new associations (sets for children's rooms in bright colours and materials, enhanced by tactilely pleasant but non-working controls and pre-set for given times and channels, clip-on swivel sets for hospital beds, etc., etc.) might not only clear up the astoundingly large back inventory of sets in warehouses, but also create new markets. And what shape is most appropriate to a vitamin bottle: a candy jar of the Gay Nineties, a perfume bottle, or a 'Danish modern' style salt shaker?

The response of many designers has been like that so unsuccessfully practised by Hollywood: the public has been pictured as totally unsophisticated, possessed of neither taste nor discrimination. A picture emerges of a moral weakling with an IQ of about 70, ready to accept whatever specious values the unholy trinity of Motivation Research, Market Analysis, and Sales have decided is good for him. In short, the associational values of design have degenerated to the lowest common denominator, determined more by inspired guesswork and piebald graphic charts than by the genuinely felt wants of the consumer.

Many products already successfully embody values of high associational content, either accidentally or 'by design'. The Sucaryl bottle by Raymond Loewy Associates for Abbott Laboratories communicates both table elegance and sweetening agent without any suggestion of being medicine-like. The Lettera 22 portable typewriter by Olivetti establishes an immediate aura of refined elegance, precision, extreme portability, and businesslike efficiency, while its two-toned carrying case of canvas and leather connotes 'all-climate-proof'. Abstract values can be communicated directly to everyone, and this can be simply demonstrated. If the reader is asked to choose which one of the figures below he would rather call Takete or Maluma (both are words devoid of all meaning in any known language), he will easily call the one on the right Takete (W. Koehler, Gestalt Psychology).

[Figures not reproduced: two abstract shapes, one rounded, one jagged and angular.]

Many associational values are really universal, providing for unconscious, deep-seated drives and compulsions.
Even totally meaningless sounds and shapes can, as demonstrated, mean the same thing to all of us. The unconscious relationship between spectator expectation and the configuration of the object can be experimented with and manipulated. This will not only enhance the 'chair-ness' of a chair, for instance, but also load it with associational values of, say, elegance, formality, portability, or what-have-you.

AESTHETICS: Here dwells the traditionally bearded artist, mythological figure, equipped with sandals, mistress, garret, and easel, pursuing his dream-shrouded designs. The cloud of mystery surrounding aesthetics can (and should) be dispelled. The dictionary definition, 'a theory of the beautiful, in taste and Art', leaves us not much better off than before. Nonetheless we know that aesthetics is a tool, one of the most important ones in the repertory of the designer, a tool that helps in shaping his forms and colours into entities that move us, please us, and are beautiful, exciting, filled with delight, meaningful. Because there is no ready yardstick for the analysis of aesthetics, it is simply considered to be a personal expression fraught with mystery and surrounded with nonsense. We 'know what we like' or dislike and let it go at that. Artists themselves begin to look at their productions as auto-therapeutic devices of self-expression, confuse licence and liberty, and forsake all discipline. They are often unable to agree on the various elements and attributes of design aesthetics.

If we contrast the 'Last Supper' by Leonardo da Vinci with an ordinary piece of wallboard, we will understand how both operate in the area of aesthetics. In the work of so-called 'pure' art, the main job is to operate on a level of inspiration, delight, beauty, catharsis ... in short, to serve as a propagandistic communications device for the Holy Church at a time when a largely pre-literate population was exposed to a few non-verbal stimuli. But the 'Last Supper' also had to fill the other requirements of function; aside from the spiritual, its use was to cover a wall. In terms of method it had to reflect the material (pigment and vehicle), tools (brushes and painting knives), and processes (individualistic brushwork) employed by Leonardo. It had to fulfil the human need for spiritual satisfaction. And it had to work on the associational and telesic plane, providing reference points from the Bible. Finally, it had to make identification through association easier for the beholder through such clichés as the racial type, garb, and posture of the Saviour.

['The Last Supper', by Leonardo da Vinci.]

Earlier 'Last Supper' versions, painted during the sixth and seventh centuries, saw Christ lying or reclining in the place of honour. For nearly a thousand years, the well-mannered did not sit at the table. Leonardo da Vinci disregarded the reclining position followed by earlier civilisations and painters for Jesus and the Disciples. To make the 'Last Supper' acceptable to Italians of his time, on an associational plane, Leonardo sat the crowd around the last supper table on chairs or benches in the proper positions of his (Leonardo's) time. Unfortunately the scriptural account of St John resting his head on the Saviour's bosom presented an unsolvable positioning problem to the artist, once everybody was seated according to the Renaissance custom.

On the other hand, the primary use of wallboard is to cover a wall.
But an increased choice of textures and colours applied by the factory shows that it, too, must fulfil the aesthetic aspect of function. No one argues that in a great work of art such as the 'Last Supper', prime functional emphasis is aesthetic, with use (to cover a wall) subsidiary. The main job of wallboard is its use in covering a wall, and the aesthetic assumes a highly subsidiary position. But both examples must operate in all six areas of the function complex.

Designers often attempt to go beyond the primary functional requirements of method, use, need, telesis, association, and aesthetics; they strive for a more concise statement: precision, simplicity. In a statement so conceived, we find a degree of aesthetic satisfaction comparable to that found in the logarithmic spiral of a chambered nautilus, the ease of a seagull's flight, the strength of a gnarled tree trunk, the colour of a sunset. The particular satisfaction derived from the simplicity of a thing can be called elegance. When we speak of an 'elegant' solution, we refer to something consciously evolved by men which reduces the complex to the simple. Euclid's Proof that the number of primes is infinite, from the field of mathematics, will serve: 'Primes' are numbers which are not divisible by any smaller number except 1, like 3, 17, 23, etc. One would imagine that as we get higher in the numerical series, primes would get rarer, crowded out by the ever-increasing products of small numbers, and that we would finally arrive at a very high number which would be the highest prime, the last numerical virgin.

Euclid's Proof demonstrates in a simple and elegant way that this is not true and that to whatever astronomical regions we ascend, we shall always find numbers which are not the product of smaller ones but are generated by immaculate conceptions, as it were. Here is the proof: assume that P is the hypothetically highest prime; then imagine a number equal to 1 x 2 x 3 x 4 ... x P. This number is expressed by the numerical symbol (P!). Now add 1 to it: (P! + 1). This number is obviously not divisible by P or by any number less than P except 1 (because they are all contained in (P!)); hence (P! + 1) is either a prime higher than P or it contains a prime factor higher than P ... Q.E.D. The deep satisfaction evoked by this proof is aesthetic as well as intellectual: a type of enchantment with the near-perfect.
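The arithmetic of the proof can also be checked directly. A minimal sketch in Python (the helper function below is an illustrative addition, not part of the original text) confirms for a few candidate values of P that every prime factor of P! + 1 exceeds P, just as the proof predicts:

    # For each hypothetical "highest prime" P, compute P! + 1 and find
    # its smallest prime factor by trial division; the proof implies
    # that this factor always exceeds P.
    from math import factorial

    def smallest_prime_factor(n):
        d = 2
        while d * d <= n:
            if n % d == 0:
                return d
            d += 1
        return n  # n itself is prime

    for P in [3, 5, 7, 11]:
        candidate = factorial(P) + 1
        factor = smallest_prime_factor(candidate)
        print(P, candidate, factor, factor > P)  # last column: always True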
The scholarly focus on texts had ideological consequences. With their attention directed to texts, scholars often went on to assume, often without reflection, that oral verbalization was essentially the same as the written verbalization they normally dealt with, and that oral art forms were to all intents and purposes simply texts, except for the fact that they were not written down. The impression grew that, apart from the oration (governed by written rhetorical rules), oral art forms were essentially unskillful and not worth serious study.

Not all, however, lived by these assumptions. From the mid-sixteenth century on, a sense of the complex relationships of writing and speech grew stronger (Cohen 1977). But the relentless dominance of textuality in the scholarly mind is shown by the fact that to this day no concepts have yet been formed for effectively, let alone gracefully, conceiving of oral art as such without reference, conscious or unconscious, to writing. This is so even though the oral art forms which developed during the tens of thousands of years before writing obviously had no connection with writing at all. We have the term ‘literature’, which essentially means ‘writings’ (Latin literatura, from litera, letter of the alphabet), to cover a given body of written materials – English literature, children’s literature – but no comparably satisfactory term or concept to refer to a purely oral heritage, such as the traditional oral stories, proverbs, prayers, formulaic expressions (Chadwick 1932–40, passim), or other oral productions of, say, the Lakota Sioux in North America or the Mande in West Africa or of the Homeric Greeks.

As noted above, I style the orality of a culture totally untouched by any knowledge of writing or print ‘primary orality’. It is ‘primary’ by contrast with the ‘secondary orality’ of present-day high-technology culture, in which a new orality is sustained by telephone, radio, television, and other electronic devices that depend for their existence and functioning on writing and print. Today primary oral culture in the strict sense hardly exists, since every culture knows of writing and has some experience of its effects. Still, to varying degrees many cultures and subcultures, even in a high-technology ambiance, preserve much of the mind-set of primary orality.

The purely oral tradition or primary orality is not easy to conceive of accurately and meaningfully. Writing makes ‘words’ appear similar to things because we think of words as the visible marks signaling words to decoders: we can see and touch such inscribed ‘words’ in texts and books. Written words are residue. Oral tradition has no such residue or deposit. When an often-told oral story is not actually being told, all that exists of it is the potential in certain human beings to tell it. We (those who read texts such as this) are for the most part so resolutely literate that we seldom feel comfortable with a situation in which verbalization is so little thing-like as it is in oral tradition. As a result – though at a slightly reduced frequency now – scholarship in the past has generated such monstrous concepts as ‘oral literature’. This strictly preposterous term remains in circulation today even among scholars now more and more acutely aware how embarrassingly it reveals our inability to represent to our own minds a heritage of verbally organized materials except as some variant of writing, even when they have nothing to do with writing at all.
The title of the great Milman Parry Collection of Oral Literature at Harvard University monumentalizes the state of awareness of an earlier generation of scholars rather than that of its recent curators. One might argue (as does Finnegan 1977, p. 16) that the term ‘literature’, though devised primarily for works in writing, has simply been extended to include related phenomena such as traditional oral narrative in cultures untouched by writing. Many originally specific terms have been so generalized in this way. But concepts have a way of carrying their etymologies with them forever. The elements out of which a term is originally built usually, and probably always, linger somehow in subsequent meanings, perhaps obscurely but often powerfully and even irreducibly. Writing, moreover, as will be seen later in detail, is a particularly pre-emptive and imperialist activity that tends to assimilate other things to itself even without the aid of etymologies.

Though words are grounded in oral speech, writing tyrannically locks them into a visual field forever. A literate person, asked to think of the word ‘nevertheless’, will normally (and I strongly suspect always) have some image, at least vague, of the spelled-out word and be quite unable ever to think of the word ‘nevertheless’ for, let us say, 60 seconds without adverting to any lettering but only to the sound. This is to say, a literate person cannot fully recover a sense of what the word is to purely oral people. In view of this pre-emptiveness of literacy, it appears quite impossible to use the term ‘literature’ to include oral tradition and performance without subtly but irremediably reducing these somehow to variants of writing.

Thinking of oral tradition or a heritage of oral performance, genres and styles as ‘oral literature’ is rather like thinking of horses as automobiles without wheels. You can, of course, undertake to do this. Imagine writing a treatise on horses (for people who have never seen a horse) which starts with the concept not of horse but of ‘automobile’, built on the readers’ direct experience of automobiles. It proceeds to discourse on horses by always referring to them as ‘wheelless automobiles’, explaining to highly automobilized readers who have never seen a horse all the points of difference in an effort to excise all idea of ‘automobile’ out of the concept ‘wheelless automobile’ so as to invest the term with a purely equine meaning. Instead of wheels, the wheelless automobiles have enlarged toenails called hooves; instead of headlights or perhaps rear-vision mirrors, eyes; instead of a coat of lacquer, something called hair; instead of gasoline for fuel, hay, and so on. In the end, horses are only what they are not. No matter how accurate and thorough such apophatic description, automobile-driving readers who have never seen a horse and who hear only of ‘wheelless automobiles’ would be sure to come away with a strange concept of a horse. The same is true of those who deal in terms of ‘oral literature’, that is, ‘oral writing’. You cannot without serious and disabling distortion describe a primary phenomenon by starting with a subsequent secondary phenomenon and paring away the differences. Indeed, starting backwards in this way – putting the car before the horse – you can never become aware of the real differences at all.
Although the term ‘preliterate’ itself is useful and at times necessary, if used unreflectively it also presents problems which are the same as those presented by the term ‘oral literature’, if not quite so assertive. ‘Preliterate’ presents orality – the ‘primary modeling system’ – as an anachronistic deviant from the ‘secondary modeling system’ that followed it.

In concert with the terms ‘oral literature’ and ‘preliterate’, we hear mention also of the ‘text’ of an oral utterance. ‘Text’, from a root meaning ‘to weave’, is, in absolute terms, more compatible etymologically with oral utterance than is ‘literature’, which refers etymologically to letters (literae) of the alphabet. Oral discourse has commonly been thought of even in oral milieus as weaving or stitching – rhapsōidein, to ‘rhapsodize’, basically means in Greek ‘to stitch songs together’. But in fact, when literates today use the term ‘text’ to refer to oral performance, they are thinking of it by analogy with writing. In the literate’s vocabulary, the ‘text’ of a narrative by a person from a primary oral culture represents a back-formation: the horse as an automobile without wheels again.

Given the vast difference between speech and writing, what can be done to devise an alternative for the anachronistic and self-contradictory term ‘oral literature’? Adapting a proposal made by Northrop Frye for epic poetry in The Anatomy of Criticism (1957, pp. 248–50, 293–303), we might refer to all purely oral art as ‘epos’, which has the same Proto-Indo-European root, wekw-, as the Latin word vox and its English equivalent ‘voice’, and thus is grounded firmly in the vocal, the oral. Oral performances would thus be felt as ‘voicings’, which is what they are. But the more usual meaning of the term epos, (oral) epic poetry (see Bynum 1967), would somewhat interfere with an assigned generic meaning referring to all oral creations. ‘Voicings’ seems to have too many competing associations, though if anyone thinks the term buoyant enough to launch, I will certainly aid efforts to keep it afloat. But we would still be without a more generic term to include both purely oral art and literature. Here I shall continue a practice common among informed persons and resort, as necessary, to self-explanatory circumlocutions – ‘purely oral art forms’, ‘verbal art forms’ (which would include both oral forms and those composed in writing, and everything in between), and the like.

At present the term ‘oral literature’ is, fortunately, losing ground, but it may well be that any battle to eliminate it totally will never be completely won. For most literates, to think of words as totally dissociated from writing is simply too arduous a task to undertake, even when specialized linguistic or anthropological work may demand it. The words keep coming to you in writing, no matter what you do. Moreover, to dissociate words from writing is psychologically threatening, for literates’ sense of control over language is closely tied to the visual transformations of language: without dictionaries, written grammar rules, punctuation, and all the rest of the apparatus that makes words into something you can ‘look’ up, how can literates live? Literate users of a grapholect such as standard English have access to vocabularies hundreds of times larger than any oral language can manage. In such a linguistic world dictionaries are essential.
It is demoralizing to remind oneself that there is no dictionary in the mind, that lexicographical apparatus is a very late accretion to language as language, that all languages have elaborate grammars and have developed their elaborations with no help from writing at all, and that outside of relatively high-technology cultures most users of languages have always got along pretty well without any visual transformations whatsoever of vocal sound.

Oral cultures indeed produce powerful and beautiful verbal performances of high artistic and human worth, which are no longer even possible once writing has taken possession of the psyche. Nevertheless, without writing, human consciousness cannot achieve its fuller potentials, cannot produce other beautiful and powerful creations. In this sense, orality needs to produce and is destined to produce writing. Literacy, as will be seen, is absolutely necessary for the development not only of science but also of history, philosophy, explicative understanding of literature and of any art, and indeed for the explanation of language (including oral speech) itself. There is hardly an oral culture or a predominantly oral culture left in the world today that is not somehow aware of the vast complex of powers forever inaccessible without literacy. This awareness is agony for persons rooted in primary orality, who want literacy passionately but who also know very well that moving into the exciting world of literacy means leaving behind much that is exciting and deeply loved in the earlier oral world. We have to die to continue living.

Fortunately, literacy, though it consumes its own oral antecedents and, unless it is carefully monitored, even destroys their memory, is also infinitely adaptable. It can restore their memory, too. Literacy can be used to reconstruct for ourselves the pristine human consciousness which was not literate at all – at least to reconstruct this consciousness pretty well, though not perfectly (we can never forget enough of our familiar present to reconstitute in our minds any past in its full integrity). Such reconstruction can bring a better understanding of what literacy itself has meant in shaping man's consciousness toward and in high-technology cultures. Such understanding of both orality and literacy is what this book, which is of necessity a literate work and not an oral performance, attempts in some degree to achieve.
EARLY AWARENESS OF ORAL TRADITION

The new awakening in recent years to the orality of speech was not without antecedents. Several centuries before Christ, the pseudonymous author of the Old Testament book that goes by his Hebrew nom de plume, Qoheleth ('assembly speaker'), or its Greek equivalent, Ecclesiastes, clearly adverts to the oral tradition on which his writing draws: 'Besides being wise, Qoheleth taught the people knowledge, and weighed, scrutinized, and arranged many proverbs. Qoheleth sought to find pleasing sayings, and to write down true sayings with precision' (Ecclesiastes 12:9–10).

'Write down . . . sayings.' Literate persons, from medieval florilegia collectors to Erasmus (1466–1536) or Vicesimus Knox (1752–1821) and beyond, have continued to put into texts sayings from oral tradition, though it is significant that at least from the Middle Ages and Erasmus' age, in western culture at least, most collectors culled the 'sayings' not directly from spoken utterance but from other writings. The Romantic Movement was marked by concern with the distant past and with folk culture. Since then, hundreds of collectors, beginning with James Macpherson (1736–96) in Scotland, Thomas Percy (1729–1811) in England, the Grimm brothers Jacob (1785–1863) and Wilhelm (1786–1859) in Germany, and Francis James Child (1825–96) in the United States, have worked over parts of oral or quasi-oral or near-oral tradition more or less directly, giving it new respectability. By the start of the twentieth century, the Scottish scholar Andrew Lang (1844–1912) and others had pretty well discredited the view that oral folklore was simply the left-over debris of a 'higher' literary mythology – a view generated quite naturally by the chirographic and typographic bias discussed in the preceding chapter.

Earlier linguists had resisted the idea of the distinctiveness of spoken and written languages. Despite his new insights into orality, or perhaps because of them, Saussure takes the view that writing simply represents spoken language in visible form (1959, pp. 23–4), as do Edward Sapir, C. Hockett and Leonard Bloomfield. The Prague Linguistic Circle, especially J. Vachek and Ernst Pulgram, noted some distinction between written and spoken language, although in concentrating on linguistic universals rather than developmental factors they made little use of this distinction (Goody 1977, p. 77).

THE HOMERIC QUESTION

Given a long-standing awareness of oral tradition among literates and given Lang's and others' demonstration that purely oral cultures could generate sophisticated verbal art forms, what is new in our new understanding of orality? The new understanding developed over various routes, but it can perhaps best be followed in the history of the 'Homeric question'. For over two millennia literates have devoted themselves to the study of Homer, with varying mixtures of insight, misinformation and prejudice, conscious and unconscious. Nowhere do the contrasts between orality and literacy or the blind spots of the unreflective chirographic or typographic mind show in a richer context. The 'Homeric question' as such grew out of the nineteenth-century higher criticism of Homer, which had matured together with the higher criticism of the Bible, but it had roots reaching back to classical antiquity. (See Adam Parry 1971, drawn on heavily here in the next few pages.)
Men of letters in western classical antiquity had occasionally shown some awareness that the Iliad and the Odyssey differed from other Greek poetry and that their origins were obscure. Cicero suggested that the extant text of the two Homeric poems was a revision by Pisistratus of Homer's work (which Cicero thought of, however, as itself a text), and Josephus even suggested that Homer could not write, but he did so in order to argue that Hebrew culture was superior to very ancient Greek culture because it knew writing, rather than to account for anything about the style or other features in the Homeric works.

From the beginning, deep inhibitions have interfered with our seeing the Homeric poems for what they in fact are. The Iliad and the Odyssey have been commonly regarded from antiquity to the present as the most exemplary, the truest and the most inspired secular poems in the western heritage. To account for their received excellence, each age has been inclined to interpret them as doing better what it conceived its poets to be doing or aiming at. Even when the Romantic Movement had reinterpreted the 'primitive' as a good rather than a regrettable stage of culture, scholars and readers generally still tended to impute to primitive poetry qualities that their own age found fundamentally congenial. More than any earlier scholar, the American classicist Milman Parry (1902–35) succeeded in undercutting this cultural chauvinism so as to get into the 'primitive' Homeric poetry on this poetry's own terms, even when these ran counter to the received view of what poetry and poets ought to be.

Earlier work had vaguely adumbrated Parry's in that the general adulation of the Homeric poems had often been accompanied by some uneasiness. Often the poems were felt to be somehow out of line. In the seventeenth century François HĂ©delin, AbbĂ© d'Aubignac et de Meimac (1604–76), in a spirit more of rhetorical polemic than of true learning, attacked the Iliad and the Odyssey as badly plotted, poor in characterization, and ethically and theologically despicable, going on to argue that there never had been a Homer and that the epics attributed to him were no more than collections of rhapsodies by others. The classical scholar Richard Bentley (1662–1742), famous for proving that the so-called Epistles of Phalaris were spurious and for indirectly occasioning Swift's antitypographic satire, The Battle of the Books, thought that there was indeed a man named Homer but that the various songs that he 'wrote' were not put together into the epic poems until about 500 years later in the time of Pisistratus. The Italian philosopher of history Giambattista Vico (1668–1744) believed that there had been no Homer but that the Homeric epics were somehow the creations of a whole people. Robert Wood (c. 1717–71), an English diplomat and archaeologist, who carefully identified some of the places referred to in the Iliad and the Odyssey, was apparently the first whose conjectures came close to what Parry finally demonstrated. Wood believed that Homer was not literate and that it was the power of memory that enabled him to produce this poetry. Wood strikingly suggests that memory played a quite different role in oral culture from that which it played in literate culture. Although Wood could not explain just how Homer's mnemonics worked, he does suggest that the ethos of Homeric verse was popular rather than learned. Jean Jacques Rousseau (1821, pp. 163–4), citing PĂšre Hardouin (neither mentioned by Adam Parry), thought it most likely that Homer and his contemporaries among the Greeks had no writing.
Rousseau does, however, see as a problem the message on a tablet which, in Book VI of the Iliad, Bellerophon carried to the King of Lycia. But there is no evidence that the 'signs' on the tablet calling for Bellerophon's own execution were in a true script (see below, pp. 83–5). In fact, in the Homeric account they sound like some sort of crude ideographs.

The nineteenth century saw the development of the Homeric theories of the so-called Analysts, initiated by Friedrich August Wolf (1759–1824) in his 1795 Prolegomena. The Analysts saw the texts of the Iliad and the Odyssey as combinations of earlier poems or fragments, and set out to determine by analysis what the bits were and how they had been layered together. But, as Adam Parry notes (1971, pp. xiv–xvii), the Analysts assumed that the bits being put together were simply texts, no alternative having suggested itself to their minds. Inevitably, the Analysts were succeeded in the early twentieth century by the Unitarians, often literary pietists, insecure cultists grasping at straws, who maintained that the Iliad and the Odyssey were so well structured, so consistent in characterization, and in general such high art that they could not be the work of an unorganized succession of redactors but must be the creation of one man. This was more or less the predominant opinion when Parry was a student and beginning to form his own opinions.

MILMAN PARRY'S DISCOVERY

Like much trail-blazing intellectual work, Milman Parry's grew out of insights as deep and sure as they were difficult to make explicit. Parry's son, the late Adam Parry (1971, pp. ix–lxii), has beautifully traced the fascinating development of his father's thought, from his MA thesis at the University of California at Berkeley in the early 1920s till his untimely death in 1935. Not every element in Parry's plenary vision was entirely new. The fundamental axiom governing his thought from the early 1920s on, 'the dependence of the choice of words and word-forms on the shape of the [orally composed] hexameter line' in the Homeric poems (Adam Parry 1971, p. xix), had been anticipated in the work of J. E. Ellendt and H. DĂŒntzer. Other elements in Parry's germinal insight had also been anticipated. Arnold van Gennep had noted formulary structuring in poetry of oral cultures of the present age, and M. Murko had recognized the absence of exact verbatim memory in oral poetry of such cultures. More importantly, Marcel Jousse, the Jesuit priest and scholar, who had been reared in a residually oral peasant milieu in France and who spent most of his adult life in the Middle East soaking up its oral culture, had sharply differentiated the oral composition in such cultures from all written composition. Jousse (1925) had styled oral cultures and the personality structures they produced verbomoteur ('verbomotor' – regrettably, Jousse's work has not been translated into English; see Ong 1967b, pp. 30, 147–8, 335–6). Milman Parry's vision included and fused all these insights and others to provide a provable account of what Homeric poetry was and of how the conditions under which it was produced made it what it was.
Parry's vision, however, even where partly anticipated by these earlier scholars, was his own, for when it initially presented itself to him in the early 1920s, he apparently did not even know of the existence of any of the scholars just mentioned (Adam Parry 1971, p. xxii). Doubtless, of course, subtle influences in the air at the time that had influenced earlier scholars were also influencing him.

As matured and demonstrated in his Paris doctoral dissertation (Milman Parry 1928), Parry's discovery might be put this way: virtually every distinctive feature of Homeric poetry is due to the economy enforced on it by oral methods of composition. These can be reconstructed by careful study of the verse itself, once one puts aside the assumptions about expression and thought processes engrained in the psyche by generations of literate culture. This discovery was revolutionary in literary circles and would have tremendous repercussions elsewhere in cultural and psychic history.

What are some of the deeper implications of this discovery, and particularly of Parry's use of the axiom earlier noted, 'the dependence of the choice of words and word-forms on the shape of the hexameter line'? DĂŒntzer had noted that the Homeric epithets used for wine are all metrically different and that the use of a given epithet was determined not by its precise meaning so much as by the metrical needs of the passage in which it turned up (Adam Parry 1971, p. xx). The appositeness of the Homeric epithet had been piously and grossly exaggerated. The oral poet had an abundant repertoire of epithets diversified enough to provide an epithet for any metrical exigency that might arise as he stitched his story together – differently at each telling, for, as will be seen, oral poets do not normally work from verbatim memorization of their verse.

Now, it is obvious that metrical needs in one way or another determine the selection of words by any poet composing in meter. But the general presumption had been that proper metrical terms somehow suggested themselves to the poetic imagination in a fluid and largely unpredictable way, correlated only with 'genius' (that is, with an ability essentially inexplicable). Poets, as idealized by chirographic cultures and even more by typographic cultures, were not expected to use prefabricated materials. If a poet did echo bits of earlier poems, he was expected to modulate these into his own 'kind of thing'. Certain practices, it is true, went against this presumption, notably the use of phrase books providing standard ways of saying things for those writing postclassical Latin poetry. Latin phrase books flourished, particularly after the invention of printing made compilations easily multipliable, and they continued to flourish far through the nineteenth century, when the Gradus ad Parnassum was much in use by schoolboys (Ong 1967b, pp. 85–6; 1971, pp. 77, 261–3; 1977, pp. 166, 178). The Gradus provided epithetic and other phrases from classical Latin poets, with the long and short syllables all conveniently marked for metrical fit, so that the aspirant poet could assemble a poem from the Gradus as boys might assemble a structure from an old Erector set or Meccano set or from a set of Tinker Toys. The over-all structure could be of his own making, but the pieces were all there before he came along. This kind of procedure, however, was viewed as tolerable only in beginners.
The competent poet was supposed to generate his own metrically fitted phrases. Commonplace thought might be tolerated, but not commonplace language. In An Essay on Criticism (1711) Alexander Pope expected the poet's 'wit' to guarantee that when he treated 'what oft was thought' he did it in such a way that readers found it 'ne'er so well expressed'. The way of putting the accepted truth had to be original. Shortly after Pope, the Romantic Age demanded still more originality. For the extreme Romantic, the perfect poet should ideally be like God Himself, creating ex nihilo: the better he or she was, the less predictable was anything and everything in the poem. Only beginners or permanently poor poets used prefabricated stuff.

Homer, by the consensus of centuries, was no beginner poet, nor was he a poor poet. Perhaps he was even a congenital 'genius', who had never been through a fledgling stage at all but could fly the moment he was hatched – like the precocious Mwindo, the Nyanga epic hero, the 'Little-One-Just-Born-He-Walked'. In any case, in the Iliad and the Odyssey Homer was normally taken to be fully accomplished, consummately skilled. Yet it now began to appear that he had had some kind of phrase book in his head. Careful study of the sort Milman Parry was doing showed that he repeated formula after formula. The meaning of the Greek term 'rhapsodize', rhapsōidein, 'to stitch song together' (rhaptein, to stitch; ōidē, song), became ominous: Homer stitched together prefabricated parts. Instead of a creator, you had an assembly-line worker.

This idea was particularly threatening to far-gone literates. For literates are educated never to use clichĂ©s, in principle. How to live with the fact that the Homeric poems, more and more, appeared to be made up of clichĂ©s, or elements very like clichĂ©s? By and large, as Parry's work had proceeded and was carried forward by later scholars, it became evident that only a tiny fraction of the words in the Iliad and the Odyssey were not parts of formulas, and to a degree devastatingly predictable formulas. Moreover, the standardized formulas were grouped around equally standardized themes, such as the council, the gathering of the army, the challenge, the despoiling of the vanquished, the hero's shield, and so on and on (Lord 1960, pp. 68–98). A repertoire of similar themes is found in oral narrative and other oral discourse around the world. (Written narrative and other written discourses use themes, too, of necessity, but the themes are infinitely more varied and less obtrusive.)

The entire language of the Homeric poems, with its curious mix of early and late Aeolic and Ionic peculiarities, was best explained not as an overlaying of several texts but as a language generated over the years by epic poets using old set expressions which they preserved and/or reworked largely for metrical purposes. After being shaped and reshaped centuries earlier, the two epics were set down in the new Greek alphabet around 700–650 BC, the first lengthy compositions to be put into this alphabet (Havelock 1963, p. 115). Their language was not a Greek that anyone had ever spoken in day-to-day life, but a Greek specially contoured through use by poets learning from one another generation after generation. (Traces of a comparable special language are familiar even today, for example, in the peculiar formulas still found in the English used for fairy tales.)
How could any poetry that was so unabashedly formulary, so constituted of prefabricated parts, still be so good? Milman Parry faced up squarely to this question. There was no use denying the now known fact that the Homeric poems valued and somehow made capital of what later readers had been trained in principle to disvalue, namely, the set phrase, the formula, the expected qualifier – to put it more bluntly, the clichĂ©.

Certain of these wider implications remained to be worked out later in great detail by Eric A. Havelock (1963). Homeric Greeks valued clichĂ©s because not only the poets but the entire oral noetic world or thought world relied upon the formulaic constitution of thought. In an oral culture, knowledge, once acquired, had to be constantly repeated or it would be lost: fixed, formulaic thought patterns were essential for wisdom and effective administration. But by Plato's day (427?–347 BC) a change had set in: the Greeks had at long last effectively interiorized writing – something which took several centuries after the development of the Greek alphabet around 720–700 BC (Havelock 1963, p. 49, citing Rhys Carpenter). The new way to store knowledge was not in mnemonic formulas but in the written text. This freed the mind for more original, more abstract thought. Havelock shows that Plato excluded poets from his ideal republic essentially (if not quite consciously) because he found himself in a new chirographically styled noetic world in which the formula or clichĂ©, beloved of all traditional poets, was outmoded and counterproductive.

All these are disturbing conclusions for a western culture that has identified closely with Homer as part of an idealized Greek antiquity. They show Homeric Greece cultivating as a poetic and noetic virtue what we have regarded as a vice, and they show that the relationship between Homeric Greece and everything that philosophy after Plato stood for was, however superficially cordial and continuous, in fact deeply antagonistic, if often at the unconscious rather than the conscious level. The conflict wracked Plato's own unconscious. For Plato expresses serious reservations in the Phaedrus and his Seventh Letter about writing, as a mechanical, inhuman way of processing knowledge, unresponsive to questions and destructive of memory, although, as we now know, the philosophical thinking Plato fought for depended entirely on writing. No wonder the implications here resisted surfacing for so long. The importance of ancient Greek civilization to all the world was beginning to show in an entirely new light: it marked the point in human history when deeply interiorized alphabetic literacy first clashed head-on with orality. And, despite Plato's uneasiness, at the time neither Plato nor anyone else was or could be explicitly aware that this was what was going on.

Parry's concept of the formula was worked out in the study of Greek hexameter verse. As others have dealt with the concept and developed it, various disputes have inevitably arisen as to how to contain or extend or adapt the definition (see Adam Parry 1971, p. xxviii, n. 1). One reason for this is that in Parry's concept there is a deeper stratum of meaning not immediately apparent from his definition of the formula, 'a group of words which is regularly employed under the same metrical conditions to express a given essential idea' (Adam Parry 1971, p. 272). This stratum has been explored most intensively by David E. Bynum in The Daemon in the Wood (1978, pp. 11–18, and passim).
Bynum notes that 'Parry's "essential ideas" are seldom altogether so simple as the shortness of Parry's definition or the usual brevity of formulas themselves, the conventionality of the epic style, or the banality of most formulas' lexical reference may suggest' (1978, p. 13). Bynum distinguishes between 'formulaic' elements and 'strictly formulary (exactly repeated) phrases' (cf. Adam Parry 1971, p. xxxiii, n. 1). Although these latter mark oral poetry (Lord 1960, pp. 33–65), in such poetry they occur and recur in clusters (in one of Bynum's instances, for example, high trees attend the commotion of a terrific warrior's approach – 1978, p. 18). The clusters constitute the organizing principles of the formulas, so that the 'essential idea' is not subject to clear, straightforward formulation but is rather a kind of fictional complex held together largely in the unconscious.

Bynum's impressive book focuses in great part on the elemental fiction which he styles the Two Tree pattern and which he identifies in oral narrative and associated iconography around the world, from Mesopotamian and Mediterranean antiquity through oral narrative in modern Yugoslavia, Central Africa, and elsewhere. Throughout, 'the notions of separation, gratuity, and an unpredictable danger' cluster around one tree (the green tree) and 'the ideas of unification, recompense, reciprocity' cluster about the other (the dry tree, hewn wood) – 1978, p. 145. Bynum's attention to this and other distinctively oral 'elemental fiction' helps us to make some clearer distinctions between oral narrative organization and chirographic-typographic narrative organization than have previously been possible. Such distinctions will be attended to in this book on grounds different from but neighboring on Bynum's.

Foley (1980a) has shown that exactly what an oral formula is and how it works depends on the tradition in which it is used, but that there is ample common ground in all traditions to make the concept valid. Unless it is clearly indicated otherwise, I shall understand formula and formulary and formulaic here as referring quite generically to more or less exactly repeated set phrases or set expressions (such as proverbs) in verse or prose, which, as will be seen, do have a function in oral culture more crucial and pervasive than any they may have in a writing or print or electronic culture. (Cf. Adam Parry 1971, p. xxxiii, n. 1.)

Oral formulaic thought and expression ride deep in consciousness and the unconscious, and they do not vanish as soon as one used to them takes pen in hand. Finnegan (1977, p. 70) reports, with apparently some surprise, Opland's observation that when Xhosa poets learn to write, their written poetry is also characterized by a formulaic style. It would in fact be utterly surprising if they could manage any other style, especially since formulaic style marks not poetry alone but, more or less, all thought and expression in primary oral culture. Early written poetry everywhere, it seems, is at first necessarily a mimicking in script of oral performance. The mind has initially no properly chirographic resources. You scratch out on a surface words you imagine yourself saying aloud in some realizable oral setting.
Only very gradually does writing become composition in writing, a kind of discourse – poetic or otherwise – that is put together without a feeling that the one writing is actually speaking aloud (as early writers may well have done in composing). As noted later here, Clanchy reports how even the eleventh-century Eadmer of Canterbury seems to think of composing in writing as 'dictating to himself' (1979, p. 218). Oral habits of thought and expression, including massive use of formulaic elements, sustained in use largely by the teaching of the old classical rhetoric, still marked prose style of almost every sort in Tudor England some two thousand years after Plato's campaign against oral poets (Ong 1971, pp. 23–47). They were effectively obliterated in English, for the most part, only with the Romantic Movement two centuries later. Many modern cultures that have known writing for centuries but have never fully interiorized it, such as Arabic culture and certain other Mediterranean cultures (e.g. Greek – Tannen 1980a), rely heavily on formulaic thought and expression still. Kahlil Gibran has made a career of providing oral formulary products in print to literate Americans who find novel the proverb-like utterances that, according to a Lebanese friend of mine, citizens of Beirut regard as commonplace.

CONSEQUENT AND RELATED WORK

Many of Milman Parry's conclusions and emphases have of course been somewhat modified by subsequent scholarship (see, for example, Stolz and Shannon 1976), but his central message about orality and its implications for poetic structures and for aesthetics has revolutionized for good Homeric studies and other studies as well, from anthropology to literary history. Adam Parry (1971, pp. xliv–lxxx) has described some of the immediate effects of the revolution which his father wrought. Holoka (1973) and Haymes (1973) have recorded many others in their invaluable bibliographical surveys. Although Parry's work has been attacked and revised in some of its details, the few totally unreceptive reactions to his work have mostly by now simply been put aside as products of the unreflective chirographic-typographic mentality which at first blocked any real comprehension of what Parry was saying and which his work itself has now rendered obsolete.

Scholars are still elaborating and qualifying the fuller implications of Parry's discoveries and insights. Whitman (1958) early supplemented them with his ambitious outline of the Iliad as structured by the formulaic tendency to repeat at the end of an episode elements from the episode's beginning; the epic is built like a Chinese puzzle, boxes within boxes, according to Whitman's analysis. For understanding orality as contrasted with literacy, however, the most significant developments following upon Parry have been worked out by Albert B. Lord and Eric A. Havelock. In The Singer of Tales (1960), Lord carried through and extended Parry's work with convincing finesse, reporting on lengthy field trips and massive taping of oral performances by Serbo-Croatian epic singers and of lengthy interviews with these singers. Earlier, Francis Magoun and those who studied with him and Lord at Harvard, notably Robert Creed and Jess Bessinger, were already applying Parry's ideas to the study of Old English poetry (Foley 1980b, p. 490).
Havelock's Preface to Plato (1963) has extended Parry's and Lord's findings about orality in oral epic narrative out into the whole of ancient oral Greek culture and has shown convincingly how the beginnings of Greek philosophy were tied in with the restructuring of thought brought about by writing. Plato's exclusion of poets from his Republic was in fact Plato's rejection of the pristine aggregative, paratactic, oral-style thinking perpetuated in Homer in favor of the keen analysis or dissection of the world and of thought itself made possible by the interiorization of the alphabet in the Greek psyche. In a subsequent work, Origins of Western Literacy (1976), Havelock attributes the ascendancy of Greek analytic thought to the Greeks' introduction of vowels into the alphabet. The original alphabet, invented by Semitic peoples, had consisted only of consonants and some semivowels. In introducing vowels, the Greeks reached a new level of abstract, analytic, visual coding of the elusive world of sound. This achievement presaged and implemented their later abstract intellectual achievements.

The line of work initiated by Parry has yet to be joined to work in the many fields with which it can readily connect. But a few important connections have already been made. For example, in his magisterial and judicious work on The Epic in Africa (1979), Isidore Okpewho brings Parry's insights and analyses (in this case as elaborated in Lord's work) to bear on the oral art forms of cultures quite different from the European, so that the African epic and the ancient Greek epic throw reciprocal light on one another. Joseph C. Miller (1980) treats African oral tradition and history. Eugene Eoyang (1977) has shown how neglect of the psychodynamics of orality has led to misconceptions about early Chinese narrative, and other authors collected by Plaks (1977) have examined formulary antecedents to literary Chinese narrative. Zwettler has dealt with Classical Arabic poetry (1977). Bruce Rosenberg (1970) has studied the survival of the old orality in American folk preachers. In a festschrift in honor of Lord, John Miles Foley (1981) has collected new studies on orality from the Balkans to Nigeria and New Mexico and from antiquity to the present. And other specialized work is now appearing.

Anthropologists have gone more directly into the matter of orality. Drawing not only on Parry and Lord and Havelock but also on others' work, including early work of my own on the effect of print on sixteenth-century thought processes (Ong 1958b – cited by Goody from a 1974 reprinting), Jack Goody (1977) has convincingly shown how shifts hitherto labeled as shifts from magic to science, or from the so-called 'prelogical' to the more and more 'rational' state of consciousness, or from LĂ©vi-Strauss's 'savage' mind to domesticated thought, can be more economically and cogently explained as shifts from orality to various stages of literacy. I had earlier suggested (1967b, p. 189) that many of the contrasts often made between 'western' and other views seem reducible to contrasts between deeply interiorized literacy and more or less residually oral states of consciousness.
The late Marshall McLuhan's well-known work (1962, 1964) has also made much of ear-eye, oral-textual contrasts, calling attention to James Joyce's precociously acute awareness of ear-eye polarities and relating to such polarities a great amount of otherwise quite disparate scholarly work brought together by McLuhan's vast eclectic learning and his startling insights. McLuhan attracted the attention not only of scholars (Eisenstein 1979, pp. x–xi, xvii) but also of people working in the mass media, of business leaders, and of the generally informed public, largely because of fascination with his many gnomic or oracular pronouncements, too glib for some readers but often deeply perceptive. These he called 'probes'. He generally moved rapidly from one 'probe' to another, seldom if ever undertaking any thorough explanation of a 'linear' (that is, analytic) sort. His cardinal gnomic saying, 'The medium is the message', registered his acute awareness of the importance of the shift from orality through literacy and print to electronic media. Few people have had so stimulating an effect as Marshall McLuhan on so many diverse minds, including those who disagreed with him or believed they did.

However, if attention to sophisticated orality-literacy contrasts is growing in some circles, it is still relatively rare in many fields where it could be helpful. For example, the early and late stages of consciousness which Julian Jaynes (1977) describes and relates to neurophysiological changes in the bicameral mind would also appear to lend themselves largely to much simpler and more verifiable description in terms of a shift from orality to literacy. Jaynes discerns a primitive stage of consciousness in which the brain was strongly 'bicameral', with the right hemisphere producing uncontrollable 'voices' attributed to the gods which the left hemisphere processed into speech. The 'voices' began to lose their effectiveness between 2000 and 1000 BC. This period, it will be noted, is neatly bisected by the invention of the alphabet around 1500 BC, and Jaynes indeed believes that writing helped bring about the breakdown of the original bicamerality. The Iliad provides him with examples of bicamerality in its unselfconscious characters. Jaynes dates the Odyssey a hundred years later than the Iliad and believes that wily Odysseus marks a breakthrough into the modern self-conscious mind, no longer under the rule of the 'voices'. Whatever one makes of Jaynes's theories, one cannot but be struck by the resemblance between the characteristics of the early or 'bicameral' psyche as Jaynes describes it – lack of introspectivity, of analytic prowess, of concern with the will as such, of a sense of difference between past and future – and the characteristics of the psyche in oral cultures not only in the past but even today. The effects of oral states of consciousness are bizarre to the literate mind, and they can invite elaborate explanations which may turn out to be needless. Bicamerality may mean simply orality. The question of orality and bicamerality perhaps needs further investigation.
As a result of the work just reviewed, and of other work which will be cited, it is possible to generalize somewhat about the psychodynamics of primary oral cultures, that is, of oral cultures untouched by writing. For brevity, when the context keeps the meaning clear, I shall refer to primary oral cultures simply as oral cultures.

Fully literate persons can only with great difficulty imagine what a primary oral culture is like, that is, a culture with no knowledge whatsoever of writing or even of the possibility of writing. Try to imagine a culture where no one has ever 'looked up' anything. In a primary oral culture, the expression 'to look up something' is an empty phrase: it would have no conceivable meaning. Without writing, words as such have no visual presence, even when the objects they represent are visual. They are sounds. You might 'call' them back – 'recall' them. But there is nowhere to 'look' for them. They have no focus and no trace (a visual metaphor, showing dependency on writing), not even a trajectory. They are occurrences, events.

To learn what a primary oral culture is and what the nature of our problem is regarding such a culture, it helps first to reflect on the nature of sound itself as sound (Ong 1967b, pp. 111–38). All sensation takes place in time, but sound has a special relationship to time unlike that of the other fields that register in human sensation. Sound exists only when it is going out of existence. It is not simply perishable but essentially evanescent, and it is sensed as evanescent. When I pronounce the word 'permanence', by the time I get to the '-nence', the 'perma-' is gone, and has to be gone.

There is no way to stop sound and have sound. I can stop a moving picture camera and hold one frame fixed on the screen. If I stop the movement of sound, I have nothing – only silence, no sound at all. All sensation takes place in time, but no other sensory field totally resists a holding action, stabilization, in quite this way. Vision can register motion, but it can also register immobility. Indeed, it favors immobility, for to examine something closely by vision, we prefer to have it quiet. We often reduce motion to a series of still shots the better to see what motion is. There is no equivalent of a still shot for sound. An oscillogram is silent. It lies outside the sound world.

For anyone who has a sense of what words are in a primary oral culture, or a culture not far removed from primary orality, it is not surprising that the Hebrew term dabar means 'word' and 'event'. Malinowski (1923, pp. 451, 470–81) has made the point that among 'primitive' (oral) peoples generally language is a mode of action and not simply a countersign of thought, though he had trouble explaining what he was getting at (Sampson 1980, pp. 223–6), since understanding of the psychodynamics of orality was virtually nonexistent in 1923. Neither is it surprising that oral peoples commonly, and probably universally, consider words to have great power. Sound cannot be sounding without the use of power. A hunter can see a buffalo, smell, taste, and touch a buffalo when the buffalo is completely inert, even dead, but if he hears a buffalo, he had better watch out: something is going on. In this sense, all sound, and especially oral utterance, which comes from inside living organisms, is 'dynamic'.
The fact that oral peoples commonly and in all likelihood universally consider words to have magical potency is clearly tied in, at least unconsciously, with their sense of the word as necessarily spoken, sounded, and hence power-driven. Deeply typographic folk forget to think of words as primarily oral, as events, and hence as necessarily powered: for them, words tend rather to be assimilated to things, 'out there' on a flat surface. Such 'things' are not so readily associated with magic, for they are not actions, but are in a radical sense dead, though subject to dynamic resurrection (Ong 1977, pp. 230–71).

Oral peoples commonly think of names (one kind of words) as conveying power over things. Explanations of Adam's naming of the animals in Genesis 2:20 usually call condescending attention to this presumably quaint archaic belief. Such a belief is in fact far less quaint than it seems to unreflective chirographic and typographic folk. First of all, names do give human beings power over what they name: without learning a vast store of names, one is simply powerless to understand, for example, chemistry and to practice chemical engineering. And so with all other intellectual knowledge. Secondly, chirographic and typographic folk tend to think of names as labels, written or printed tags imaginatively affixed to an object named. Oral folk have no sense of a name as a tag, for they have no idea of a name as something that can be seen. Written or printed representations of words can be labels; real, spoken words cannot be.

YOU KNOW WHAT YOU CAN RECALL: MNEMONICS AND FORMULAS

In an oral culture, restriction of words to sound determines not only modes of expression but also thought processes. You know what you can recall. When we say we know Euclidean geometry, we mean not that we have in mind at the moment every one of its propositions and proofs but rather that we can bring them to mind readily. We can recall them. The theorem 'You know what you can recall' applies also to an oral culture. But how do persons in an oral culture recall? The organized knowledge that literates today study so that they 'know' it, that is, can recall it, has, with very few if any exceptions, been assembled and made available to them in writing. This is the case not only with Euclidean geometry but also with American Revolutionary history, or even baseball batting averages or traffic regulations.

An oral culture has no texts. How does it get together organized material for recall? This is the same as asking, 'What does it or can it know in an organized fashion?'

Suppose a person in an oral culture would undertake to think through a particular complex problem and would finally manage to articulate a solution which itself is relatively complex, consisting, let us say, of a few hundred words. How does he or she retain for later recall the verbalization so painstakingly elaborated? In the total absence of any writing, there is nothing outside the thinker, no text, to enable him or her to produce the same line of thought again or even to verify whether he or she has done so or not. Aides-mĂ©moire such as notched sticks or a series of carefully arranged objects will not of themselves retrieve a complicated series of assertions. How, in fact, could a lengthy, analytic solution ever be assembled in the first place? An interlocutor is virtually essential: it is hard to talk to yourself for hours on end. Sustained thought in an oral culture is tied to communication.
But even with a listener to stimulate and ground your thought, the bits and pieces of your thought cannot be preserved in jotted notes. How could you ever call back to mind what you had so laboriously worked out? The only answer is: Think memorable thoughts. In a primary oral culture, to solve effectively the problem of retaining and retrieving carefully articulated thought, you have to do your thinking in mnemonic patterns, shaped for ready oral recurrence. Your thought must come into being in heavily rhythmic, balanced patterns, in repetitions or antitheses, in alliterations and assonances, in epithetic and other formulary expressions, in standard thematic settings (the assembly, the meal, the duel, the hero's 'helper', and so on), in proverbs which are constantly heard by everyone so that they come to mind readily and which themselves are patterned for retention and ready recall, or in other mnemonic form. Serious thought is intertwined with memory systems. Mnemonic needs determine even syntax (Havelock 1963, pp. 87–96, 131–2, 294–6).

Protracted orally based thought, even when not in formal verse, tends to be highly rhythmic, for rhythm aids recall, even physiologically. Jousse (1978) has shown the intimate linkage between rhythmic oral patterns, the breathing process, gesture, and the bilateral symmetry of the human body in ancient Aramaic and Hellenic targums, and thus also in ancient Hebrew. Among the ancient Greeks, Hesiod, who was intermediate between oral Homeric Greece and fully developed Greek literacy, delivered quasi-philosophic material in the formulaic verse forms that structured it into the oral culture from which he had emerged (Havelock 1963, pp. 97–8, 294–301).

Formulas help implement rhythmic discourse and also act as mnemonic aids in their own right, as set expressions circulating through the mouths and ears of all. 'Red in the morning, the sailor's warning; red in the night, the sailor's delight.' 'Divide and conquer.' 'To err is human, to forgive is divine.' 'Sorrow is better than laughter, because when the face is sad the heart grows wiser' (Ecclesiastes 7:3). 'The clinging vine.' 'The sturdy oak.' 'Chase off nature and she returns at a gallop.' Fixed, often rhythmically balanced, expressions of this sort and of other sorts can be found occasionally in print, indeed can be 'looked up' in books of sayings, but in oral cultures they are not occasional. They are incessant. They form the substance of thought itself. Thought in any extended form is impossible without them, for it consists in them.

The more sophisticated orally patterned thought is, the more it is likely to be marked by set expressions skillfully used. This is true of oral cultures generally from those of Homeric Greece to those of the present day across the globe. Havelock's Preface to Plato (1963) and fictional works such as Chinua Achebe's novel No Longer at Ease (1961), which draws directly on Ibo oral tradition in West Africa, alike provide abundant instances of thought patterns of orally educated characters who move in these oral, mnemonically tooled grooves, as the speakers reflect, with high intelligence and sophistication, on the situations in which they find themselves involved. The law itself in oral cultures is enshrined in formulaic sayings, proverbs, which are not mere jurisprudential decorations, but themselves constitute the law.
A judge in an oral culture is often called on to articulate sets of relevant proverbs out of which he can produce equitable decisions in the cases under formal litigation before him (Ong 1978, p. 5).

In an oral culture, to think through something in non-formulaic, non-patterned, non-mnemonic terms, even if it were possible, would be a waste of time, for such thought, once worked through, could never be recovered with any effectiveness, as it could be with the aid of writing. It would not be abiding knowledge but simply a passing thought, however complex. Heavy patterning and communal fixed formulas in oral cultures serve some of the purposes of writing in chirographic cultures, but in doing so they of course determine the kind of thinking that can be done, the way experience is intellectually organized. In an oral culture, experience is intellectualized mnemonically. This is one reason why, for a St Augustine of Hippo (AD 354–430), as for other savants living in a culture that knew some literacy but still carried an overwhelmingly massive oral residue, memory bulks so large when he treats of the powers of the mind.

Of course, all expression and all thought is to a degree formulaic in the sense that every word and every concept conveyed in a word is a kind of formula, a fixed way of processing the data of experience, determining the way experience and reflection are intellectually organized, and acting as a mnemonic device of sorts. Putting experience into any words (which means transforming it at least a little bit – not the same as falsifying it) can implement its recall. The formulas characterizing orality are more elaborate, however, than are individual words, though some may be relatively simple: the Beowulf-poet's 'whale-road' is a formula (metaphorical) for the sea in a sense in which the term 'sea' is not.

FURTHER CHARACTERISTICS OF ORALLY BASED THOUGHT AND EXPRESSION

Awareness of the mnemonic base of the thought and expression in primary oral cultures opens the way to understanding some further characteristics of orally based thought and expression in addition to their formulaic styling. The characteristics treated here are some of those which set off orally based thought and expression from chirographically and typographically based thought and expression, the characteristics, that is, which are most likely to strike those reared in writing and print cultures as surprising. This inventory of characteristics is not presented as exclusive or conclusive but as suggestive, for much more work and reflection are needed to deepen understanding of orally based thought (and thereby understanding of chirographically based, typographically based, and electronically based thought).

In a primary oral culture, thought and expression tend to be of the following sorts.

(i) Additive rather than subordinative

A familiar instance of additive oral style is the creation narrative in Genesis 1:1–5, which is indeed a text but one preserving recognizable oral patterning. The Douay version (1610), produced in a culture with a still massive oral residue, keeps close in many ways to the additive Hebrew original (as mediated through the Latin from which the Douay version was made):

In the beginning God created heaven and earth. And the earth was void and empty, and darkness was upon the face of the deep; and the spirit of God moved over the waters. And God said: Be light made. And light was made. And God saw the light that it was good; and he divided the light from the darkness. And he called the light Day, and the darkness Night; and there was evening and morning one day.
Nine introductory 'ands'. Adjusted to sensibilities shaped more by writing and print, the New American Bible (1970) translates:

In the beginning, when God created the heavens and the earth, the earth was a formless wasteland, and darkness covered the abyss, while a mighty wind swept over the waters. Then God said, 'Let there be light', and there was light. God saw how good the light was. God then separated the light from the darkness. God called the light 'day' and the darkness he called 'night'. Thus evening came, and morning followed – the first day.

Two introductory 'ands', each submerged in a compound sentence. The Douay renders the Hebrew we or wa ('and') simply as 'and'. The New American renders it 'and', 'when', 'then', 'thus', or 'while', to provide a flow of narration with the analytic, reasoned subordination that characterizes writing (Chafe 1982) and that appears more natural in twentieth-century texts. Oral structures often look to pragmatics (the convenience of the speaker – Sherzer, 1974, reports lengthy public oral performances among the Cuna incomprehensible to their hearers). Chirographic structures look more to syntactics (organization of the discourse itself), as GivĂłn has suggested (1979). Written discourse develops more elaborate and fixed grammar than oral discourse does because to provide meaning it is more dependent simply upon linguistic structure, since it lacks the normal full existential contexts which surround oral discourse and help determine meaning in oral discourse somewhat independently of grammar.

It would be a mistake to think that the Douay is simply 'closer' to the original today than the New American is. It is closer in that it renders we or wa always by the same word, but it strikes the present-day sensibility as remote, archaic, and even quaint. Peoples in oral cultures or cultures with high oral residue, including the culture that produced the Bible, do not savor this sort of expression as archaic or quaint. It feels natural and normal to them somewhat as the New American version feels natural and normal to us. Other instances of additive structure can be found across the world in primary oral narrative, of which we now have a massive supply on tape (see Foley, 1980b, for listing of some tapes).

(ii) Aggregative rather than analytic

This characteristic is closely tied to reliance on formulas to implement memory. The elements of orally based thought and expression tend to be not so much simple integers as clusters of integers, such as parallel terms or phrases or clauses, antithetical terms or phrases or clauses, epithets. Oral folk prefer, especially in formal discourse, not the soldier, but the brave soldier; not the princess, but the beautiful princess; not the oak, but the sturdy oak. Oral expression thus carries a load of epithets and other formulary baggage which high literacy rejects as cumbersome and tiresomely redundant because of its aggregative weight (Ong 1977, pp. 188–212).

The clichĂ©s in political denunciations in many low-technology, developing cultures – enemy of the people, capitalist war-mongers – that strike high literates as mindless are residual formulary essentials of oral thought processes.
One of the many indications of a high, if subsiding, oral residue in the culture of the Soviet Union is (or was a few years ago, when I encountered it) the insistence on speaking there always of 'the Glorious Revolution of October 26' – the epithetic formula here is obligatory stabilization, as were Homeric epithetic formulas 'wise Nestor' or 'clever Odysseus', or as 'the glorious Fourth of July' used to be in the pockets of oral residue common even in the early twentieth-century United States. The former Soviet Union still announced each year the official epithets for various loci classici in Soviet history.

An oral culture may well ask in a riddle why oaks are sturdy, but it does so to assure you that they are, to keep the aggregate intact, not really to question or cast doubt on the attribution. (For examples directly from the oral culture of the Luba in Zaire, see Faik-Nzuji 1970.) Traditional expressions in oral cultures must not be dismantled: it has been hard work getting them together over the generations, and there is nowhere outside the mind to store them. So soldiers are brave and princesses beautiful and oaks sturdy forever. This is not to say that there may not be other epithets for soldiers or princesses or oaks, even contrary epithets, but these are standard, too: the braggart soldier, the unhappy princess, can also be part of the equipment. What obtains for epithets obtains for other formulas. Once a formulary expression has crystallized, it had best be kept intact. Without a writing system, breaking up thought – that is, analysis – is a high-risk procedure. As LĂ©vi-Strauss has well put it in a summary statement, 'the savage [i.e. oral] mind totalizes' (1966, p. 245).

(iii) Redundant or 'copious'

Thought requires some sort of continuity. Writing establishes in the text a 'line' of continuity outside the mind. If distraction confuses or obliterates from the mind the context out of which emerges the material I am now reading, the context can be retrieved by glancing back over the text selectively. Backlooping can be entirely occasional, purely ad hoc. The mind concentrates its own energies on moving ahead because what it backloops into lies quiescent outside itself, always available piecemeal on the inscribed page. In oral discourse, the situation is different. There is nothing to backloop into outside the mind, for the oral utterance has vanished as soon as it is uttered. Hence the mind must move ahead more slowly, keeping close to the focus of attention much of what it has already dealt with. Redundancy, repetition of the just-said, keeps both speaker and hearer surely on the track.

Since redundancy characterizes oral thought and speech, it is in a profound sense more natural to thought and speech than is sparse linearity. Sparsely linear or analytic thought and speech are artificial creations, structured by the technology of writing. Eliminating redundancy on a significant scale demands a time-obviating technology, writing, which imposes some kind of strain on the psyche in preventing expression from falling into its more natural patterns. The psyche can manage the strain in part because handwriting is physically such a slow process – typically about one-tenth of the speed of oral speech (Chafe 1982). With writing, the mind is forced into a slowed-down pattern that affords it the opportunity to interfere with and reorganize its more normal, redundant processes.
Redundancy is also favored by the physical conditions of oral expression before a large audience, where redundancy is in fact more marked than in most face-to-face conversation. Not everyone in a large audience understands every word a speaker utters, if only because of acoustical problems. It is advantageous for the speaker to say the same thing, or equivalently the same thing, two or three times. If you miss the ‘not only . . .’ you can supply it by inference from the ‘but also . . .’. Until electronic amplification reduced acoustical problems to a minimum, public speakers as late as, for example, William Jennings Bryan (1860–1925) continued the old redundancy in their public addresses and by force of habit let them spill over into their writing. In some kinds of acoustic surrogates for oral verbal communication, redundancy reaches fantastic dimensions, as in African drum talk. It takes on the average around eight times as many words to say something on the drums as in the spoken language (Ong 1977, p. 101).

The public speaker’s need to keep going while he is running through his mind what to say next also encourages redundancy. In oral delivery, though a pause may be effective, hesitation is always disabling. Hence it is better to repeat something, artfully if possible, rather than simply to stop speaking while fishing for the next idea. Oral cultures encourage fluency, fulsomeness, volubility. Rhetoricians were to call this copia. They continued to encourage it, by a kind of oversight, when they had modulated rhetoric from an art of public speaking to an art of writing. Early written texts, through the Middle Ages and the Renaissance, are often bloated with ‘amplification’, annoyingly redundant by modern standards. Concern with copia remains intense in western culture so long as the culture sustains massive oral residue – which is roughly until the age of Romanticism or even beyond. Thomas Babington Macaulay (1800–59) is one of the many fulsome early Victorians whose pleonastic written compositions still read much as an exuberant, orally composed oration would sound, as do also, very often, the writings of Winston Churchill (1874–1965).

(iv) Conservative or traditionalist

Since in a primary oral culture conceptualized knowledge that is not repeated aloud soon vanishes, oral societies must invest great energy in saying over and over again what has been learned arduously over the ages. This need establishes a highly traditionalist or conservative set of mind that with good reason inhibits intellectual experimentation. Knowledge is hard to come by and precious, and society regards highly those wise old men and women who specialize in conserving it, who know and can tell the stories of the days of old. By storing knowledge outside the mind, writing and, even more, print downgrade the figures of the wise old man and the wise old woman, repeaters of the past, in favor of younger discoverers of something new.

Writing is of course conservative in its own ways. Shortly after it first appeared, it served to freeze legal codes in early Sumeria (Oppenheim 1964, p. 232). But by taking conservative functions on itself, the text frees the mind of conservative tasks, that is, of its memory work, and thus enables the mind to turn itself to new speculation (Havelock 1963, pp. 254–305).
Indeed, the residual orality of a given chirographic culture can be calculated to a degree from the mnemonic load it leaves on the mind, that is, from the amount of memorization the culture’s educational procedures require (Goody 1968a, pp. 13–14).

Of course oral cultures do not lack originality of their own kind. Narrative originality lodges not in making up new stories but in managing a particular interaction with this audience at this time – at every telling the story has to be introduced uniquely into a unique situation, for in oral cultures an audience must be brought to respond, often vigorously. But narrators also introduce new elements into old stories (Goody 1977, pp. 29–30). In oral tradition, there will be as many minor variants of a myth as there are repetitions of it, and the number of repetitions can be increased indefinitely. Praise poems of chiefs invite entrepreneurship, as the old formulas and themes have to be made to interact with new and often complicated political situations. But the formulas and themes are reshuffled rather than supplanted with new materials.

Religious practices, and with them cosmologies and deep-seated beliefs, also change in oral cultures. Disappointed with the practical results of the cult at a given shrine when cures there are infrequent, vigorous leaders – the ‘intellectuals’ in oral society, Goody styles them (1977, p. 30) – invent new shrines and with these new conceptual universes. Yet these new universes and the other changes that show a certain originality come into being in an essentially formulaic and thematic noetic economy. They are seldom if ever explicitly touted for their novelty but are presented as fitting the traditions of the ancestors.

(v) Close to the human lifeworld

In the absence of elaborate analytic categories that depend on writing to structure knowledge at a distance from lived experience, oral cultures must conceptualize and verbalize all their knowledge with more or less close reference to the human lifeworld, assimilating the alien, objective world to the more immediate, familiar interaction of human beings. A chirographic (writing) culture and even more a typographic (print) culture can distance and in a way denature even the human, itemizing such things as the names of leaders and political divisions in an abstract, neutral list entirely devoid of a human action context. An oral culture has no vehicle so neutral as a list. In the latter half of the second book, the Iliad presents the famous catalogue of the ships – over four hundred lines – which compiles the names of Grecian leaders and the regions they ruled, but in a total context of human action: the names of persons and places occur as involved in doings (Havelock 1963, pp. 176–80). The normal and very likely the only place in Homeric Greece where this sort of political information could be found in verbalized form was in a narrative or a genealogy, which is not a neutral list but an account describing personal relations (cf. Goody and Watt 1968, p. 32). Oral cultures know few statistics or facts divorced from human or quasi-human activity.

An oral culture likewise has nothing corresponding to how-to-do-it manuals for the trades (such manuals in fact are extremely rare and always crude even in chirographic cultures, coming into effective existence only after print has been considerably interiorized – Ong 1967b, pp. 28–9, 234, 258).
Trades were learned by apprenticeship (as they still largely are even in high-technology cultures), which means from observation and practice with only minimal verbalized explanation. The maximum verbal articulation of such things as navigation procedures, which were crucial to Homeric culture, would have been encountered not in any abstract manual-style description at all but in such things as the following passage from the Iliad i. 141–4, where the abstract description is embedded in a narrative presenting specific commands for human action or accounts of specific acts:

As for now a black ship let us draw to the great salt sea
And therein oarsmen let us advisedly gather and thereupon a hecatomb
Let us set and upon the deck Chryseis of fair cheeks
Let us embark. And one man as captain, a man of counsel, there must be.

(quoted in Havelock 1963, p. 81; see also ibid., pp. 174–5). Primary oral culture is little concerned with preserving knowledge of skills as an abstract, self-subsistent corpus.

(vi) Agonistically toned

Many, if not all, oral or residually oral cultures strike literates as extraordinarily agonistic in their verbal performance and indeed in their lifestyle. Writing fosters abstractions that disengage knowledge from the arena where human beings struggle with one another. It separates the knower from the known. By keeping knowledge embedded in the human lifeworld, orality situates knowledge within a context of struggle. Proverbs and riddles are not used simply to store knowledge but to engage others in verbal and intellectual combat: utterance of one proverb or riddle challenges hearers to top it with a more apposite or a contradictory one (Abrahams 1968; 1972). Bragging about one’s own prowess and/or verbal tongue-lashings of an opponent figure regularly in encounters between characters in narrative: in the Iliad, in Beowulf, throughout medieval European romance, in The Mwindo Epic and countless other African stories (Okpewho 1979; Obiechina 1975), in the Bible, as between David and Goliath (1 Samuel 17:43–7). Standard in oral societies across the world, reciprocal name-calling has been fitted with a specific name in linguistics: flyting (or fliting). Growing up in a still dominantly oral culture, certain young black males in the United States, the Caribbean, and elsewhere, engage in what is known variously as the ‘dozens’ or ‘joning’ or ‘sounding’ or by other names, in which one opponent tries to outdo the other in vilifying the other’s mother. The dozens is not a real fight but an art form, as are the other stylized verbal tongue-lashings in other cultures.

Not only in the use to which knowledge is put, but also in the celebration of physical behavior, oral cultures reveal themselves as agonistically programmed. Enthusiastic description of physical violence often marks oral narrative. In the Iliad, for example, Books viii and x would at least rival the most sensational television and cinema shows today in outright violence and far surpass them in exquisitely gory detail, which can be less repulsive when described verbally than when presented visually. Portrayal of gross physical violence, central to much oral epic and other oral genres and residual through much early literacy, gradually wanes or becomes peripheral in later literary narrative. It survives in medieval ballads but is already being spoofed by Thomas Nashe in The Unfortunate Traveller (1594).
As literary narrative moves toward the serious novel, it eventually pulls the focus of action more and more to interior crises and away from purely exterior crises. The common and persistent physical hardships of life in many early societies of course explain in part the high evidence of violence in early verbal art forms. Ignorance of physical causes of disease and disaster can also foster personal tensions. Since the disease or disaster is caused by something, in lieu of physical causes the personal malevolence of another human being – a magician, a witch – can be assumed and personal hostilities thereby increased. But violence in oral art forms is also connected with the structure of orality itself. When all verbal communication must be by direct word of mouth, involved in the give-and-take dynamics of sound, interpersonal relations are kept high – both attractions and, even more, antagonisms.

The other side of agonistic name-calling or vituperation in oral or residually oral cultures is the fulsome expression of praise which is found everywhere in connection with orality. It is well known in the much-studied present-day African oral praise poems (Finnegan 1970; Opland 1975) as all through the residually oral western rhetorical tradition stretching from classical antiquity through the eighteenth century. ‘I come to bury Caesar, not to praise him’, Marcus Antonius cries in his funeral oration in Shakespeare’s Julius Caesar (III. ii. 79), and then proceeds to praise Caesar in rhetorical patterns of encomium which were drilled into the heads of all Renaissance schoolboys and which Erasmus used so wittily in his Praise of Folly. The fulsome praise in the old, residually oral, rhetoric tradition strikes persons from a high-literacy culture as insincere, flatulent, and comically pretentious. But praise goes with the highly polarized, agonistic, oral world of good and evil, virtue and vice, villains and heroes.

The agonistic dynamics of oral thought processes and expression have been central to the development of western culture, where they were institutionalized by the ‘art’ of rhetoric, and by the related dialectic of Socrates and Plato, which furnished agonistic oral verbalization with a scientific base worked out with the help of writing. More will be said about this later.

(vii) Empathetic and participatory rather than objectively distanced

For an oral culture learning or knowing means achieving close, empathetic, communal identification with the known (Havelock 1963, pp. 145–6), ‘getting with it’. Writing separates the knower from the known and thus sets up conditions for ‘objectivity’, in the sense of personal disengagement or distancing. The ‘objectivity’ which Homer and other oral performers do have is that enforced by formulaic expression: the individual’s reaction is not expressed as simply individual or ‘subjective’ but rather as encased in the communal reaction, the communal ‘soul’. Under the influence of writing, despite his protest against it, Plato had excluded the poets from his Republic, for studying them was essentially learning to react with ‘soul’, to feel oneself identified with Achilles or Odysseus (Havelock 1963, pp. 197–233). Treating another primary oral setting over two thousand years later, the editors of The Mwindo Epic (1971, p. 37)
call attention to a similar strong identification of Candi Rureke, the performer of the epic, and through him of his listeners, with the hero Mwindo, an identification which actually affects the grammar of the narration, so that on occasion the narrator slips into the first person when describing the actions of the hero. So bound together are narrator, audience, and character that Rureke has the epic character Mwindo himself address the scribes taking down Rureke’s performance: ‘Scribe, march!’ or ‘O scribe you, you see that I am already going.’ In the sensibility of the narrator and his audience the hero of the oral performance assimilates into the oral world even the transcribers who are de-oralizing it into text.

(viii) Homeostatic

By contrast with literate societies, oral societies can be characterized as homeostatic (Goody and Watt 1968, pp. 31–4). That is to say, oral societies live very much in a present which keeps itself in equilibrium or homeostasis by sloughing off memories which no longer have present relevance.

The forces governing homeostasis can be sensed by reflection on the condition of words in a primary oral setting. Print cultures have invented dictionaries in which the various meanings of a word as it occurs in datable texts can be recorded in formal definitions. Words thus are known to have layers of meaning, many of them quite irrelevant to ordinary present meanings. Dictionaries advertise semantic discrepancies. Oral cultures of course have no dictionaries and few semantic discrepancies. The meaning of each word is controlled by what Goody and Watt (1968, p. 29) call ‘direct semantic ratification’, that is, by the real-life situations in which the word is used here and now. The oral mind is uninterested in definitions (Luria 1976, pp. 48–99). Words acquire their meanings only from their always insistent actual habitat, which is not, as in a dictionary, simply other words, but includes also gestures, vocal inflections, facial expression, and the entire human, existential setting in which the real, spoken word always occurs. Word meanings come continuously out of the present, though past meanings of course have shaped the present meaning in many and varied ways, no longer recognized.

It is true that oral art forms, such as epic, retain some words in archaic forms and senses. But they retain such words, too, through current use – not the current use of ordinary village discourse but the current use of ordinary epic poets, who preserve archaic forms in their special vocabulary. These performances are part of ordinary social life and so the archaic forms are current, though limited to poetic activity. Memory of the old meaning of old terms thus has some durability, but not unlimited durability. When generations pass and the object or institution referred to by the archaic word is no longer part of present, lived experience, though the word has been retained, its meaning is commonly altered or simply vanishes. African talking drums, as used for example among the Lokele in eastern Zaire, speak in elaborate formulas that preserve certain archaic words which the Lokele drummers can vocalize but whose meaning they no longer know (Carrington 1974, pp. 41–2; Ong 1977, pp. 94–5). Whatever these words referred to has dropped out of Lokele daily experience, and the term that remains has become empty.
Rhymes and games transmitted orally from one generation of small children to the next even in high-technology culture have similar words which have lost their original referential meanings and are in effect nonsense syllables. Many instances of such survival of empty terms can be found in Opie and Opie (1952), who, as literates, of course manage to recover and report the original meanings of the terms lost to their present oral users.

Goody and Watt (1968, pp. 31–3) cite Laura Bohannan, Emrys Peters, and Godfrey and Monica Wilson for striking instances of the homeostasis of oral cultures in the handing on of genealogies. Some decades ago among the Tiv people of Nigeria the genealogies actually used orally in settling court disputes were found to diverge considerably from the genealogies carefully recorded in writing by the British forty years earlier (because of their importance then, too, in court disputes). The later Tiv have maintained that they were using the same genealogies as forty years earlier and that the earlier written record was wrong. What had happened was that the later genealogies had been adjusted to the changed social relations among the Tiv: they were the same in that they functioned in the same way to regulate the real world. The integrity of the past was subordinate to the integrity of the present.

Goody and Watt (1968, p. 33) report an even more strikingly detailed case of ‘structural amnesia’ among the Gonja in Ghana. Written records made by the British at the turn of the twentieth century show that Gonja oral tradition then presented Ndewura Jakpa, the founder of the state of Gonja, as having had seven sons, each of whom was ruler of one of the seven territorial divisions of the state. By the time sixty years later when the myths of state were again recorded, two of the seven divisions had disappeared, one by assimilation to another division and the other by reason of a boundary shift. In these later myths, Ndewura Jakpa had five sons, and no mention was made of the two extinct divisions. The Gonja were still in contact with their past, tenacious about this contact in their myths, but the part of the past with no immediately discernible relevance to the present had simply fallen away. The present imposed its own economy on past remembrances.

Packard (1980, p. 157) has noted that Claude Lévi-Strauss, T. O. Beidelman, Edmund Leach and others have suggested that oral traditions reflect a society’s present cultural values rather than idle curiosity about the past. He finds this is true of the Bashu, as Harms (1980, p. 178) finds it also true of the Bobangi.

The implications here for oral genealogies need to be noted. A West African griot or other oral genealogist will recite those genealogies which his hearers listen to. If he knows genealogies which are no longer called for, they drop from his repertoire and eventually disappear. The genealogies of political winners are of course more likely to survive than those of losers. Henige (1980, p. 255), reporting on Ganda and Nyoro kinglists, notes that the ‘oral mode . . . allows for inconvenient parts of the past to be forgotten’ because of ‘the exigencies of the continuing present’. Moreover, skilled oral narrators deliberately vary their traditional narratives because part of their skill is their ability to adjust to new audiences and new situations or simply to be coquettish. A West African griot employed by a princely family
(Okpewho 1979, pp. 25–6, 247, n. 33; p. 248, n. 36) will adjust his recitation to compliment his employers. Oral cultures encourage triumphalism, which in modern times has regularly tended somewhat to disappear as once-oral societies become more and more literate.

(ix) Situational rather than abstract

All conceptual thinking is to a degree abstract. So ‘concrete’ a term as ‘tree’ does not refer simply to a singular ‘concrete’ tree but is an abstraction, drawn out of, away from, individual, sensible actuality; it refers to a concept which is neither this tree nor that tree but can apply to any tree. Each individual object that we style a tree is truly ‘concrete’, simply itself, not ‘abstract’ at all, but the term we apply to the individual object is in itself abstract. Nevertheless, if all conceptual thinking is thus to some degree abstract, some uses of concepts are more abstract than other uses. Oral cultures tend to use concepts in situational, operational frames of reference that are minimally abstract in the sense that they remain close to the living human lifeworld. There is a considerable literature bearing on this phenomenon. Havelock (1978a) has shown that pre-Socratic Greeks thought of justice in operational rather than formally conceptualized ways, and the late Anne Amory Parry (1973) made much the same point about the epithet amymōn applied by Homer to Aegisthus: the epithet means not ‘blameless’, a tidy abstraction with which literates have translated the term, but ‘beautiful-in-the-way-a-warrior-ready-to-fight-is-beautiful’.

No work on operational thinking is richer for the present purpose than A. R. Luria’s Cognitive Development: Its Cultural and Social Foundations (1976). At the suggestion of the distinguished Soviet psychologist Lev Vygotsky, Luria did extensive fieldwork with illiterate (that is, oral) persons and somewhat literate persons in the remoter areas of Uzbekistan (the homeland of Avicenna) and Kirghizia in the Soviet Union during the years 1931–2. Luria’s book was published in its original Russian edition only in 1974, forty-two years after his research was completed, and appeared in English translation two years later.

Luria’s work provides more adequate insights into the operation of orally based thought than had the theories of Lucien Lévy-Bruhl (1923), who concluded that ‘primitive’ (in fact, orally based) thought was ‘prelogical’ and magical in the sense that it was based on belief systems rather than on practical actuality, or than had the proposals of Lévy-Bruhl’s opponents such as Franz Boas (not George Boas, as erroneously in Luria 1976, p. 8), who maintained that primitive peoples thought as we do but used a different set of categories.

In an elaborate framework of Marxist theory, Luria attends to some degree to matters other than the immediate consequences of literacy, such as ‘the unregulated individualistic economy centered on agriculture’ and ‘the beginnings of collectivization’ (1976, p. 14), and he does not systematically encode his findings expressly in terms of oral-literacy differences. But despite the elaborate Marxist scaffolding, Luria’s report clearly turns in fact on the differences between orality and literacy. He identifies the persons he interviews on a scale ranging from illiteracy to various levels of moderate literacy and his data fall clearly into the classes of orally based versus chirographically based noetic processes.
The contrasts that show between illiterates (by far the larger number of his subjects) and literates as such are marked and certainly significant (often Luria notes this fact explicitly) and they show what work reported on and cited by Carothers (1959) also shows: it takes only a moderate degree of literacy to make a tremendous difference in thought processes.

Luria and his associates gathered data in the course of long conversations with subjects in the relaxed atmosphere of a tea house, introducing the questions for the survey itself informally, as something like riddles, with which the subjects were familiar. Thus every effort was made to adapt the questions to the subjects in their own milieu. The subjects were not leaders in their societies, but there is every reason to suppose that they had a normal range of intelligence and were quite representative of the culture. Among Luria’s findings the following may be noted as of special interest here.

(1) Illiterate (oral) subjects identified geometrical figures by assigning them the names of objects, never abstractly as circles, squares, etc. A circle would be called a plate, sieve, bucket, watch, or moon; a square would be called a mirror, door, house, apricot drying-board. Luria’s subjects identified the designs as representations of real things they knew. They never dealt with abstract circles or squares but rather with concrete objects. Teachers’ school students on the other hand, moderately literate, identified geometrical figures by categorical geometric names: circles, squares, triangles, and so on (1976, pp. 32–9). They had been trained to give school-room answers, not real-life responses.

(2) Subjects were presented with drawings of four objects, three belonging to one category and the fourth to another, and were asked to group together those that were similar or could be placed in one group or designated by one word. One series consisted of drawings of the objects hammer, saw, log, hatchet. Illiterate subjects consistently thought of the group not in categorical terms (three tools, the log not a tool) but in terms of practical situations – ‘situational thinking’ – without adverting at all to the classification ‘tool’ as applying to all but the log. If you are a workman with tools and see a log, you think of applying the tool to it, not of keeping the tool away from what it was made for – in some weird intellectual game. A 25-year-old illiterate peasant: ‘They’re all alike. The saw will saw the log and the hatchet will chop it into small pieces. If one of these has to go, I’d throw out the hatchet. It doesn’t do as good a job as a saw’ (1976, p. 56). Told that the hammer, saw, and hatchet are all tools, he discounts the categorical class and persists in situational thinking: ‘Yes, but even if we have tools, we still need wood – otherwise we can’t build anything’ (ibid.). Asked why another person had rejected one item in another series of four that he felt all belonged together, he replied, ‘Probably that kind of thinking runs in his blood’.

By contrast an 18-year-old who had studied at a village school for only two years not only classified a similar series in categorical terms but insisted on the correctness of the classification under attack (1976, p. 74). A barely literate worker, aged 56, mingled situational grouping and categorical grouping, though the latter predominated.
Given the series axe, hatchet, sickle to complete from the series saw, ear of grain, log, he completed the series with the saw – ‘They are all farming tools’ – but then reconsidered and added about the grain, ‘You could reap it with the sickle’ (1976, p. 72). Abstract classification was not entirely satisfying.

At points in his discussions Luria undertook to teach illiterate subjects some principles of abstract classification. But their grasp was never firm, and when they actually returned to working out a problem for themselves, they would revert to situational rather than categorical thinking (1976, p. 67). They were convinced that thinking other than operational thinking, that is, categorical thinking, was not important, uninteresting, trivializing (1976, pp. 54–5). One recalls Malinowski’s account (1923, p. 502) of how ‘primitives’ (oral peoples) have names for the fauna and flora that are useful in their lives but treat other things in the forest as unimportant generalized background: ‘That is just “bush”.’ ‘Merely a flying animal.’

(3) We know that formal logic is the invention of Greek culture after it had interiorized the technology of alphabetic writing, and so made a permanent part of its noetic resources the kind of thinking that alphabetic writing made possible. In the light of this knowledge, Luria’s experiments with illiterates’ reactions to formally syllogistic and inferential reasoning are particularly revealing. In brief, his illiterate subjects seemed not to operate with formal deductive procedures at all – which is not the same as to say that they could not think or that their thinking was not governed by logic, but only that they would not fit their thinking into pure logical forms, which they seem to have found uninteresting. Why should they be interesting? Syllogisms relate to thought, but in practical matters no one operates in formally stated syllogisms.

Precious metals do not rust. Gold is a precious metal. Does it rust or not? Typical responses to this query included: ‘Do precious metals rust or not? Does gold rust or not?’ (peasant, 18 years of age); ‘Precious metal rusts. Precious gold rusts’ (34-year-old illiterate peasant) (1976, p. 104).

In the Far North, where there is snow, all bears are white. Novaya Zembla is in the Far North and there is always snow there. What color are the bears? Here is a typical response: ‘I don’t know. I’ve seen a black bear. I’ve never seen any others. . . . Each locality has its own animals’ (1976, pp. 108–9). You find what color bears are by looking at them. Who ever heard of reasoning out in practical life the color of a polar bear? Besides, how am I sure that you know for sure that all bears are white in a snowy country? When the syllogism is given to him a second time, a barely literate 45-year-old chairman of a collective farm manages ‘To go by your words, they should all be white’ (1976, p. 114). ‘To go by your words’ appears to indicate awareness of the formal intellectual structures. A little literacy goes a long way. On the other hand, the chairman’s limited literacy leaves him more comfortable in the person-to-person human lifeworld than in a world of pure abstractions: ‘To go by your words. . . .’ It is your responsibility, not mine, if the answer comes out in such a fashion.

Referring to work by Michael Cole and Sylvia Scribner in Liberia (1973), James Fernandez (1980) pointed out that a syllogism is self-contained: its conclusions are derived from its premises only.
He notes that persons not academically educated are not acquainted with this special ground rule but tend rather in their interpretation of given statements, in a syllogism as elsewhere, to go beyond the statements themselves, as one does normally in real-life situations or in riddles (common in all oral cultures). I would add the observation that the syllogism is thus like a text, fixed, boxed-off, isolated. This fact dramatizes the chirographic base of logic. The riddle belongs in the oral world. To solve a riddle, canniness is needed: one draws on knowledge, often deeply subconscious, beyond the words themselves in the riddle.

(4) In Luria’s fieldwork, requests for definitions of even the most concrete objects met with resistance. ‘Try to explain to me what a tree is.’ ‘Why should I? Everyone knows what a tree is, they don’t need me telling them’, replied one illiterate peasant, aged 22 (1976, p. 86). Why define, when a real-life setting is infinitely more satisfactory than a definition? Basically, the peasant was right. There is no way to refute the world of primary orality. All you can do is walk away from it into literacy.

‘How would you define a tree in two words?’ ‘In two words? Apple tree, elm, poplar.’ ‘Say you go to a place where there are no cars. What will you tell people [a car is]?’ ‘If I go, I’ll tell them that buses have four legs, chairs in front for people to sit on, a roof for shade and an engine. But when you get right down to it, I’d say: “If you get in a car and go for a drive, you’ll find out.”’ The respondent enumerates some features but turns back ultimately to personal, situational experience (1976, p. 87). By contrast, a literate collective-farm worker, aged 30: ‘It’s made in a factory. In one trip it can cover the distance it would take a horse ten days to make – it moves that fast. It uses fire and steam. We first have to set the fire going so the water gets steaming hot – the steam gives the machine its power. . . . I don’t know whether there is water in a car, must be. But water isn’t enough, it also needs fire’ (1976, p. 90). Although he was not well informed, he did make an attempt to define a car. His definition, however, is not a sharp-focused description of visual appearance – this kind of description is beyond the capacity of the oral mind – but a definition in terms of its operations.

(5) Luria’s illiterates had difficulty in articulate self-analysis. Self-analysis requires a certain demolition of situational thinking. It calls for isolation of the self, around which the entire lived world swirls for each individual person, removal of the center of every situation from that situation enough to allow the center, the self, to be examined and described. Luria put his questions only after protracted conversation about people’s characteristics and their individual differences (1976, p. 148). A 38-year-old man, illiterate, from a mountain pasture camp was asked (1976, p. 150), ‘What sort of person are you, what’s your character like, what are your good qualities and shortcomings? How would you describe yourself?’ ‘I came here from Uch-Kurgan, I was very poor, and now I’m married and have children.’ ‘Are you satisfied with yourself or would you like to be different?’ ‘It would be good if I had a little more land and could sow some wheat.’ Externals command attention. ‘And what are your shortcomings?’ ‘This year I sowed one pood of wheat, and we’re gradually fixing the shortcomings.’ More external situations.
‘Well, people are different – calm, hot-tempered, or sometimes their memory is poor. What do you think of yourself?’ ‘We behave well – if we were bad people, no one would respect us’ (1976, p. 15). Self-evaluation modulated into group evaluation (‘we’) and then was handled in terms of expected reactions from others. Another man, a peasant aged 36, asked what sort of person he was, responded with touching and humane directness: ‘What can I say about my own heart? How can I talk about my character? Ask others; they can tell you about me. I myself can’t say anything.’ Judgement bears in on the individual from outside, not from within.

These are a few samples from Luria’s many, but they are typical. One could argue that responses were not optimal because the respondents were not used to being asked these kinds of questions, no matter how cleverly Luria could work them into riddle-like settings. But lack of familiarity is precisely the point: an oral culture simply does not deal in such items as geometrical figures, abstract categorization, formally logical reasoning processes, definitions, or even comprehensive descriptions, or articulated self-analysis, all of which derive not simply from thought itself but from text-formed thought. Luria’s questions are schoolroom questions associated with the use of texts, and indeed closely resemble or are identical with standard intelligence test questions got up by literates. They are legitimate, but they come from a world the oral respondent does not share.

The subject’s reactions suggest that it is perhaps impossible to devise a test in writing or even an oral test shaped in a literate setting that would assess accurately the native intellectual abilities of persons from a highly oral culture. Gladwin (1970, p. 219) notes that the Puluwat Islanders in the South Pacific respect their navigators, who have to be highly intelligent for their complex and demanding skill, not because they consider them ‘intelligent’ but quite simply because they are good navigators. Asked what he thought of a new village school principal, a Central African responded to Carrington (1974, p. 61), ‘Let’s watch a little how he dances’. Oral folk assess intelligence not as extrapolated from contrived textbook quizzes but as situated in operational contexts.

Plying students or anyone else with analytic questions of this sort appears at a very late stage of textuality. Such questions are in fact missing not only from oral cultures, but also from early writing cultures. Written examination questions came into general use (in the West) only well after print had worked its effects on consciousness, thousands of years after the invention of writing. Classical Latin has no word for an ‘examination’ such as we ‘take’ today and try to ‘pass’ in school. Until the past few generations in the West, and still in perhaps most of the world today, academic practice has demanded that students in class ‘recite’, that is, feed back orally to the teacher statements (formulas – the oral heritage) that they had memorized from classroom instruction or from textbooks (Ong 1967b, pp. 53–76). Proponents of intelligence tests need to recognize that our ordinary intelligence test questions are tailored to a special kind of consciousness, one deeply conditioned by literacy and print, ‘modern consciousness’ (Berger 1978).
A highly intelligent person from an oral or residually oral culture might be expected normally to react to Luria’s type of question, as many of his respondents clearly did, not by answering the seemingly mindless question itself but by trying to assess the total puzzling context (the oral mind totalizes): What is he asking me this stupid question for? What is he trying to do? (See also Ong 1978, p. 4). ‘What is a tree?’ Does he really expect me to respond to that when he and everyone else has seen thousands of trees? Riddles I can work with. But this is no riddle. Is it a game? Of course it is a game, but the oral person is unfamiliar with the rules. The people who ask such questions have been living in a barrage of such questions from infancy and are not aware that they are using special rules.

In a society with some literacy, such as that of Luria’s subjects, illiterates can and often do of course have experience of literately organized thinking on the part of others. They will, for example, have heard someone read written compositions or have heard conversations such as only literates can engage in. One value of Luria’s work is that it shows that such passing acquaintanceship with literate organization of knowledge has, at least so far as his cases show, no discernible effect on illiterates. Writing has to be personally interiorized to affect thinking processes.

Persons who have interiorized writing not only write but also speak literately, which is to say that they organize, to varying degrees, even their oral expression in thought patterns and verbal patterns that they would not know of unless they could write. Because it does not follow these patterns, literates have considered oral organization of thought naive. Oral thinking, however, can be quite sophisticated and in its own way reflective. Navaho narrators of Navaho folkloric animal stories can provide elaborate explanations of the various implications of the stories for an understanding of complex matters in human life from the physiological to the psychological and moral, and are perfectly aware of such things as physical inconsistencies (for example, coyotes with amber balls for eyes) and the need to interpret elements in the stories symbolically (Toelken 1976, p. 156). To assume that oral peoples are essentially unintelligent, that their mental processes are ‘crude’, is the kind of thinking that for centuries brought scholars to assume falsely that because the Homeric poems are so skillful, they must be basically written compositions.

Nor must we imagine that orally based thought is ‘prelogical’ or ‘illogical’ in any simplistic sense – such as, for example, in the sense that oral folk do not understand causal relationships. They know very well that if you push hard on a mobile object, the push causes it to move. What is true is that they cannot organize elaborate concatenations of causes in the analytic kind of linear sequences which can only be set up with the help of texts. The lengthy sequences they produce, such as genealogies, are not analytic but aggregative. But oral cultures can produce amazingly complex and intelligent and beautiful organizations of thought and experience. To understand how they do so, it will be necessary to discuss some of the operations of oral memory.

ORAL MEMORIZATION

Verbal memory skill is understandably a valued asset in oral cultures.
But the way verbal memory works in oral art forms is quite different from what literates in the past commonly imagined. In a literate culture verbatim memorization is commonly done from a text, to which the memorizer returns as often as necessary to perfect and test verbatim mastery. In the past, literates have commonly assumed that oral memorization in an oral culture normally achieved the same goal of absolutely verbatim repetition. How such repetition could be verified before sound recordings were known was unclear, since in the absence of writing the only way to test for verbatim repetition of lengthy passages would be the simultaneous recitation of the passages by two or more persons together. Successive recitations could not be checked against each other. But instances of simultaneous recitation in oral cultures were hardly sought for. Literates were happy simply to assume that the prodigious oral memory functioned somehow according to their own verbatim textual model.

In assessing more realistically the nature of verbal memory in primary oral cultures, the work of Milman Parry and Albert Lord again proved revolutionary. Parry’s work with the Homeric poems focused the issue. Parry demonstrated that the Iliad and the Odyssey were basically oral creations, whatever circumstances governed their commitment to writing. At first blush, this discovery would seem to have confirmed the assumption of verbatim memorization. The Iliad and the Odyssey were strictly metrical. How could a singer produce on demand a narrative consisting of thousands of dactylic hexameter lines unless he had them memorized word for word? Literates who can recite lengthy metrical works on demand have memorized them verbatim from texts.

Parry (1928, in Parry 1971), however, laid the grounds for a new approach that could account for such production very well without verbatim memorization. As has been seen in Chapter 2, he showed that the hexameters were made up not simply of word-units but of formulas, groups of words for dealing with traditional materials, each formula shaped to fit into a hexameter line. The poet had a massive vocabulary of hexameterized phrases. With his hexameterized vocabulary, he could fabricate correct metrical lines without end, so long as he was dealing with traditional materials. Thus in the Homeric poems, for Odysseus and Hector and Athena and Apollo and the other characters the poet had epithets and verbs which would fit them into the meter neatly when, for example, any one of them had to be announced as saying something. Metephē polymētis Odysseus (there spoke up clever Odysseus) or prosephē polymētis Odysseus (there spoke out clever Odysseus) occurs 72 times in the poems (Milman Parry 1971, p. 51). Odysseus is polymētis (clever) not just because he is this kind of character but also because without the epithet polymētis he could not be readily worked into the meter. As earlier noted, the appositeness of these and other Homeric epithets has been piously exaggerated. The poet had thousands of other similarly functioning metrical formulas that could fit into his varying metrical needs almost any situation, person, thing, or action. Indeed, most words in the Iliad and the Odyssey occur as parts of identifiable formulas.

Parry’s work showed that metrically tailored formulas controlled the composition of the ancient Greek epic and that the formulas could be shifted around quite handily without interfering with the story line or the tone of the epic.
Did oral singers actually shift the formulas, so that individual metrically regular renditions of the same story differed in wording? Or was the story mastered verbatim, so that it was rendered the same way at every performance? Since pretextual Homeric poets had all been dead for well over two thousand years, they could not be taped for direct evidence. But direct evidence was available from living narrative poets in modern (former) Yugoslavia, a country adjacent to and in part overlapping ancient Greece. Parry found such poets composing oral epic narrative, for which there was no text. Their narrative poems, like Homer’s, were metric and formulaic, although their verse meter happened to be a different one from the ancient Greek dactylic hexameter. Lord continued and extended Parry’s work, building up the massive collection of oral recordings of modern Yugoslav narrative poets now in the Parry Collection at Harvard University.

Most of these living South Slavic narrative poets – and indeed all of the better ones – are illiterate. Learning to read and write disables the oral poet, Lord found: it introduces into his mind the concept of a text as controlling the narrative and thereby interferes with the oral composing processes, which have nothing to do with texts but are ‘the remembrance of songs sung’ (Peabody 1975, p. 216).

Oral poets’ memory of songs sung is agile: it was ‘not unusual’ to find a Yugoslav bard singing ‘from ten to twenty ten-syllable lines a minute’ (Lord 1960, p. 17). Comparison of the recorded songs, however, reveals that, though metrically regular, they were never sung the same way twice. Basically the same formulas and themes recurred, but they were stitched together or ‘rhapsodized’ differently in each rendition even by the same poet, depending on audience reaction, the mood of the poet or of the occasion, and other social and psychological factors.

Orally recorded interviews with the twentieth-century bards supplemented records of their performances. From these interviews, and from direct observation, we know how the bards learn: by listening for months and years to other bards who never sing a narrative the same way twice but who use over and over again the standard formulas in connection with the standard themes. Formulas are of course somewhat variable, as are themes, and a given poet’s rhapsodizing or ‘stitching together’ of narratives will differ recognizably from another’s. Certain turns of phrase will be idiosyncratic. But essentially, the materials, themes and formulas, and their use belong in a clearly identifiable tradition. Originality consists not in the introduction of new materials but in fitting the traditional materials effectively into each individual, unique situation and/or audience.

The memory feats of these oral bards are remarkable, but they are unlike those associated with memorization of texts. Literates are usually surprised to learn that the bard planning to retell the story he has heard only once wants often to wait a day or so after he has heard the story before he himself repeats it. In memorizing a written text, postponing its recitation generally weakens recall. An oral poet is not working with texts or in a textual framework. He needs time to let the story sink into his own store of themes and formulas, time to ‘get with’ the story.
In recalling and retelling the story, he has not in any literate sense ‘memorized’ its metrical rendition from the version of the other singer – a version long gone forever when the new singer is mulling over the story for his own rendition (Lord 1960, pp. 20–9). The fixed materials in the bard’s memory are a float of themes and formulas out of which all stories are variously built.

One of the most telling discoveries in Lord’s work has been that, although singers are aware that two different singers never sing the same song exactly alike, nevertheless a singer will protest that he can do his own version of a song line for line and word for word any time, and indeed, ‘just the same twenty years from now’ (Lord 1960, p. 27). When, however, their purported verbatim renditions are recorded and compared, they turn out to be never the same, though the songs are recognizable versions of the same story. ‘Word for word and line for line’, as Lord interprets (1960, p. 28), is simply an emphatic way of saying ‘like’. ‘Line’ is obviously a text-based concept, and even the concept of a ‘word’ as a discrete entity apart from a flow of speech seems somewhat text-based. Goody (1977, p. 115) has pointed out that an entirely oral language which has a term for speech in general, or for a rhythmic unit of a song, or for an utterance, or for a theme, may have no ready term for a ‘word’ as an isolated item, a ‘bit’ of speech, as in, ‘The last sentence here consists of twenty-six words’. Or does it? Maybe there are twenty-eight. If you cannot write, is ‘text-based’ one word or two? The sense of individual words as significantly discrete items is fostered by writing, which, here as elsewhere, is diaeretic, separative. (Early manuscripts tend not to separate words clearly from each other, but to run them together.)

Significantly, illiterate singers in the widely literate culture of modern former Yugoslavia develop and express attitudes toward writing (Lord 1960, p. 28). They admire literacy and believe that a literate person can do even better what they do, namely, recreate a lengthy song after hearing it only once. This is precisely what literates cannot do, or can do only with difficulty. As literates attribute literate kinds of achievement to oral performers, so oral performers attribute oral kinds of achievement to literates.

Lord early showed (1960) the applicability of oral-formulaic analysis to Old English (Beowulf), and others have shown various ways in which oral-formulaic methods help explain oral or residually oral composition in the European Middle Ages, in German, French, Portuguese, and other languages (see Foley 1980b). Fieldwork across the globe has corroborated and extended the work done by Parry and, far more extensively, by Lord in Yugoslavia. For example, Goody (1977, pp. 118–19) reports how, among the LoDagaa of northern Ghana, where the Invocation to the Bagre, like the Lord’s Prayer among Christians, is ‘something everyone “knows”’, the renditions of the invocation are nevertheless by no means stable. The Invocation consists of only ‘a dozen lines or so’, and, if you know the language, as Goody does, and pronounce the opening phrase of the Invocation, your hearer may take up the refrain, correcting any mistakes he or she finds you making.
However, taping shows that the wording of the Invocation can vary significantly from one recitation to the other, even in the case of recitations by the same individual, and even in individuals who will correct you when your version does not correspond to their (current) version.

Goody’s findings here, and the findings of others (Opland 1975; 1976), make it clear that oral peoples at times do try for verbatim repetition of poems or other oral art forms. What is their success? Most often it is minimal by literate standards. From South Africa Opland (1976, p. 114) reports earnest efforts at verbatim repetition and the results: ‘Any poet in the community will repeat the poem which is in my limited testing at least sixty per cent in correlation with other versions.’ Success hardly matches ambition here. Sixty per cent accuracy in memorization would earn a pretty low mark in schoolroom recitation of a text or in an actor’s rendition of a play’s script. Many instances of ‘memorization’ of oral poetry adduced as evidence of ‘prior composition’ by the poet, such as the instances in Finnegan (1977, pp. 76–82), seem to be of no greater verbatim accuracy. In fact, Finnegan claims only ‘close similarity, in places amounting to word-for-word repetition’ (1977, p. 76) and ‘much more verbal and line-for-line repetition than one might expect from the Yugoslav analogy’ (1977, p. 78; on the value of these comparisons and the ambiguous significance of ‘oral poetry’ in Finnegan, see Foley 1979).

More recent work, however, has brought to light some instances of more exact verbatim memorization among oral peoples. One is an instance of ritual verbalization among the Cuna, off the Panama coast, reported by Joel Sherzer (1982). In 1970 Sherzer had taped a lengthy magic puberty rite formula being taught by a man who was a girls’ puberty rite specialist to other such specialists. He returned in 1979 with a transcription he had made of the formula and found that the same man could match the transcription verbatim, phoneme for phoneme. Although Sherzer does not state how widespread or durable the exact verbatim formula in question was within any given group of formula experts over a period of time, the instance he gives is a clear-cut one of success with verbatim reproduction. (The instances referred to by Sherzer 1982, n. 3, from Finnegan 1977, as already indicated here above, all appear ambiguous at best, and thus not equatable with his own instance.)

Two other instances comparable to Sherzer’s show verbatim reproduction of oral materials fostered not by a ritualized setting but by special linguistic or musical constraints. One is from Somali classical poetry, which has a scansion pattern seemingly more complex and rigid than that of ancient Greek epic, so that the language cannot be varied so readily. John William Johnson notes that the Somali oral poets ‘learn the rules of prosody in a manner very similar, if not identical to the way they learn grammar itself’ (1979b, p. 118; see also Johnson 1979a). They can no more state what the metrical rules are than they can state the rules of Somali grammar. The Somali poets do not normally compose and perform at the same time, but work out a composition in private, word-for-word, which they afterwards recite in public themselves or pass on to another to recite. This again is a clear instance of oral verbatim memorization.
How stable the verbalization is over any period of time (several years, a decade or so) apparently remains to be investigated.

The second instance shows how music may act as a constraint to fix a verbatim oral narrative. Drawing on his own intensive fieldwork in Japan, Eric Rutledge (1981) reports on a still extant, but vestigial, Japanese tradition, in which an oral narrative, The Tale of the Heike, is chanted to music, with some few ‘white voice’ sections unaccompanied instrumentally and some purely instrumental interludes. The narrative and musical accompaniment are memorized by apprentices, who begin as young children working with an oral master. The masters (there are not many left) undertake to train their apprentices in verbatim recitation of the chant through rigorous drill over several years, and succeed remarkably, though they themselves make changes in their own recitations of which they are unaware. Certain movements in the narrative are more error-prone than others. At some points the music stabilizes the text completely, but at others it generates errors of the same sorts found in manuscript copying, such as those produced by homoioteleuton – a copyist (or oral performer) skips from one occurrence of a concluding phrase to a later occurrence of the same concluding phrase, leaving out the intervening material. Again, we have here cultivated verbatim rendition of a sort, less than totally invariable, but noteworthy.

Although in these instances the production of oral poetry or other oral verbalization by consciously cultivated memorization is not the same as the oral-formulaic practice in Homeric Greece or the modern former Yugoslavia or in countless other traditions, verbatim memorization apparently does not at all free the oral noetic processes from dependence on formulas, but if anything increases the dependence. In the case of Somali oral poetry, Francesco Antinucci has shown that this poetry has not merely phonological, metrical constraints, but also syntactic constraints. That is to say, only certain specific syntactic structures occur in the lines of the poems: in the instances Antinucci presents, only two types of syntactic structures out of the hundreds possible (1979, p. 148). This is certainly formulaic composition with a vengeance, for formulas are nothing if not ‘constraints’ and here we are dealing with syntactic formulas (which are also found in the economy of the poems that Parry and Lord worked with). Rutledge (1981) notes the formulaic character of the material in the Heike chants, which indeed are so formulary as to contain many archaic words the meanings of which the masters do not even know. Sherzer (1982) also calls specific attention to the fact that the utterances he finds recited verbatim are made up of formulaic elements similar to those in oral performances of the ordinary, rhapsodic, nonverbatim type. He suggests that we think of a continuum between the ‘fixed’ and the ‘flexible’ use of formulaic elements. Sometimes formulaic elements are managed in an effort to establish verbatim sameness; sometimes they work to implement a certain adaptability or variation (though users of the formulaic elements, as Lord has shown, may generally think of what is in fact ‘flexible’ or variable use as being ‘fixed’ use). Sherzer’s suggestion certainly is a wise one. Oral memorization deserves more and closer study, especially in ritual.
Sherzer's verbatim instances are from ritual, and Rutledge hints in his paper and states explicitly in a letter to me (22 January 1982) that the Heike chants are ritualistic in setting. Chafe (1982), treating specifically the Seneca language, suggests that ritual language as compared to colloquial language is like writing in that it 'has a permanence which colloquial language does not. The same oral ritual is presented again and again: not verbatim, to be sure, but with a content, style, and formulaic structure which remain constant from performance to performance'. There can be little doubt, all in all, that in oral cultures generally, by far most of the oral recitation falls toward the flexible end of the continuum, and even in ritual. Even in cultures which know and depend on writing but retain a living contact with pristine orality, that is, retain a high oral residue, ritual utterance itself is often not typically verbatim. 'Do this in memory of me', Jesus said at the Last Supper (Luke 22:19). Christians celebrate the Eucharist as their central act of worship because of Jesus' directive. But the crucial words that Christians repeat as Jesus' words in fulfilling this directive (that is, the words 'This is my body . . . ; this is the cup of my blood . . .') do not appear in exactly the same way in any two places where they are cited in the New Testament. The early Christian Church remembered, in pretextual, oral form, even in her textualized rituals, and even at those very points where she was commanded to remember most assiduously.

Statements are often made about verbatim oral memorization of the Vedic hymns in India, presumably in complete independence of any texts. Such statements, so far as I know, have never been assessed in view of the findings of Parry and Lord and related findings concerning oral 'memorization'. The Vedas are lengthy collections and old, probably composed between 1500 and 900 or 500 BC – the variance that must be allowed in possible dates shows how vague are present-day contacts with the original settings in which grew the hymns, prayers, and liturgical formulas that make up these collections. Typical references still cited today to attest to verbatim memorization of the Vedas are from 1906 or 1927 (Kiparsky 1976, pp. 99–100), before any of Parry's work was completed, or from 1954 (Bright 1981), before Lord's (1960) and Havelock's work (1963). In The Destiny of the Veda in India (1965) the distinguished French Indologist and translator of the Rig-Veda Louis Renou does not even advert to the kinds of questions that arise in the wake of Parry's work. There is no doubt that oral transmission was important in the history of the Vedas (Renou 1965, pp. 25–6 – #26 – and notes, pp. 83–4). Brahman teachers or gurus and their students devoted intensive effort to verbatim memorization, even crisscrossing the words in various patterns to ensure oral mastery of their positions in relation to one another (Basham 1963, p. 164), though whether this latter pattern was used before a text had been developed appears an insoluble problem. In the wake of the recent studies of oral memory, however, questions arise as to the ways in which memory of the Vedas actually worked in a purely oral setting – if there ever was such a setting for the Vedas totally independent of texts. Without a text, how could a given hymn – not to mention the totality of hymns in the collections – be stabilized word for word, and that over many generations?
Statements, made in good conscience by oral persons, that renditions are word for word the same, as we have seen, can be quite contrary to fact. Mere assertions, frequently made by literates, that such lengthy texts were retained verbatim over generations in a totally oral society can no longer be taken at face value without verification. What was retained? The first recitation of a poem by its originator? How could the originator ever repeat it word for word the second time and be sure he had done so? A version which a powerful teacher worked up? This appears a possibility. But his working it up in his own version shows variability in the tradition, and suggests that in the mouth of another powerful teacher more variations might well come wittingly or unwittingly. In point of fact, the Vedic texts – on which we base knowledge of the Vedas today – have a complex history and many variants, facts which seem to suggest that they hardly originated from an absolutely verbatim oral tradition. Indeed, the formulaic and thematic structure of the Vedas, conspicuous even in translations, relates them to other oral performances we know, and indicates that they warrant further study in connection with what has been discovered recently about formulaic elements, thematic elements, and oral mnemonics. Peabody's work (1975) already directly encourages such study in its examination of relations between the older Indo-European tradition and Greek versification. For example, the incidence of high redundancy or its lack in the Vedas could itself be an indication of the degree to which they are of more or less oral provenance (see Peabody 1975, p. 173).

In all cases, verbatim or not, oral memorization is subject to variation from direct social pressures. Narrators narrate what audiences call for or will tolerate. When the market for a printed book declines, the presses stop rolling but thousands of copies may remain. When the market for an oral genealogy disappears, so does the genealogy itself, utterly. As noted above (pp. 48–9), the genealogies of winners tend to survive (and to be improved), those of losers tend to vanish (or to be recast). Interaction with living audiences can actively interfere with verbal stability: audience expectations can help fix themes and formulas. I had such expectations enforced on me a few years ago by a niece of mine, still a tiny child young enough to preserve a clearly oral mindset (though one infiltrated by the literacy around her). I was telling her the story of 'The Three Little Pigs': 'He huffed and he puffed, and he huffed and he puffed, and he huffed and he puffed'. Cathy bridled at the formula I used. She knew the story, and my formula was not what she expected. 'He huffed and he puffed, and he puffed and he huffed, and he huffed and he puffed', she pouted. I reworded the narrative, complying with the audience's demand for what had been said before, as other oral narrators have often done.

Finally, it should be noted that oral memory differs significantly from textual memory in that oral memory has a high somatic component. Peabody (1975, p. 197) has observed that 'From all over the world and from all periods of time . . . traditional composition has been associated with hand activity. The aborigines of Australia and other areas often make string figures together with their songs. Other peoples manipulate beads on strings. Most descriptions of bards include stringed instruments or drums'.
(See also Lord 1960; Havelock 1978a, pp. 220–2; Biebuyck and Mateene 1971, frontispiece.) To these instances one can add other examples of hand activity, such as gesturing, often elaborate and stylized (Scheub 1977), and other bodily activities such as rocking back and forth or dancing. The Talmud, though a text, is still vocalized by highly oral Orthodox Jews in Israel with a forward-and-backward rocking of the torso, as I myself have witnessed.

The oral word, as we have noted, never exists in a simply verbal context, as a written word does. Spoken words are always modifications of a total, existential situation, which always engages the body. Bodily activity beyond mere vocalization is not adventitious or contrived in oral communication, but is natural and even inevitable. In oral verbalization, particularly public verbalization, absolute motionlessness is itself a powerful gesture.

VERBOMOTOR LIFESTYLE

Much in the foregoing account of orality can be used to identify what can be called 'verbomotor' cultures, that is, cultures in which, by contrast with high-technology cultures, courses of action and attitudes toward issues depend significantly more on effective use of words, and thus on human interaction, and significantly less on non-verbal, often largely visual input from the 'objective' world of things. Jousse (1925) used his term verbomoteur to refer chiefly to ancient Hebrew and Aramaic cultures and surrounding cultures, which knew some writing but remained basically oral and word-oriented in lifestyle rather than object-oriented. We are expanding its use here to include all cultures that retain enough oral residue to remain significantly word-attentive in a person-interactive context (the oral type of context) rather than object-attentive. It should, of course, be noted that words and objects are never totally disjunct: words represent objects, and perception of objects is in part conditioned by the store of words into which perceptions are nested. Nature states no 'facts': these come only within statements devised by human beings to refer to the seamless web of actuality around them.

The cultures which we are here styling verbomotor are likely to strike technological man as making all too much of speech itself, as overvaluing and certainly overpracticing rhetoric. In primary oral cultures, even business is not business: it is fundamentally rhetoric. Purchasing something at a Middle East souk or bazaar is not a simple economic transaction, as it would be at Woolworth's and as a high-technology culture is likely to presume it would be in the nature of things. Rather, it is a series of verbal (and somatic) maneuvers, a polite duel, a contest of wits, an operation in oral agonistic. In oral cultures a request for information is commonly interpreted interactively (Malinowski 1923, pp. 451, 470–81), as agonistic, and, instead of being really answered, is frequently parried. An illuminating story is told of a visitor in County Cork, Ireland, an especially oral region in a country which in every region preserves massive residual orality. The visitor saw a Corkman leaning against the post office. He went up to him, pounded with his hand on the post office wall next to the Corkman's shoulder, and asked, 'Is this the post office?' The Corkman was not taken in.
He looked at his questioner quietly and with great concern: ''Twouldn't be a postage stamp you were lookin' for, would it?' He treated the enquiry not as a request for information but as something the enquirer was doing to him. So he did something in turn to the enquirer to see what would happen. All natives of Cork, according to the mythology, treat all questions this way. Always answer a question by asking another. Never let down your oral guard.

Primary orality fosters personality structures that in certain ways are more communal and externalized, and less introspective than those common among literates. Oral communication unites people in groups. Writing and reading are solitary activities that throw the psyche back on itself. A teacher speaking to a class which he feels and which feels itself as a close-knit group, finds that if the class is asked to pick up its textbooks and read a given passage, the unity of the group vanishes as each person enters into his or her private lifeworld. An example of the contrast between orality and literacy on these grounds is found in Carothers' report (1959) of evidence that oral peoples commonly externalize schizoid behavior where literates interiorize it. Where literates often manifest schizoid tendencies (loss of contact with the environment) by psychic withdrawal into a dreamworld of their own (schizophrenic delusional systematization), oral folk commonly manifest theirs by extreme external confusion, leading often to violent action, including mutilation of the self and of others. This behavior is frequent enough to have given rise to special terms to designate it: the old-time Scandinavian warrior going 'berserk', the Southeast Asian person running 'amok'.

THE NOETIC ROLE OF HEROIC 'HEAVY' FIGURES AND OF THE BIZARRE

The heroic tradition of primary oral culture and of early literate culture, with its massive oral residue, relates to the agonistic lifestyle, but it is best and most radically explained in terms of the needs of oral noetic processes. Oral memory works effectively with 'heavy' characters, persons whose deeds are monumental, memorable and commonly public. Thus the noetic economy of its nature generates outsize figures, that is, heroic figures, not for romantic reasons or reflectively didactic reasons but for much more basic reasons: to organize experience in some sort of permanently memorable form. Colorless personalities cannot survive oral mnemonics. To assure weight and memorability, heroic figures tend to be type figures: wise Nestor, furious Achilles, clever Odysseus, omnicompetent Mwindo ('Little-One-Just-Born-He-Walked', KĂĄbĂștwa-kĂ©nda, his common epithet). The same mnemonic or noetic economy enforces itself still where oral settings persist in literate cultures, as in the telling of fairy stories to children: the overpoweringly innocent Little Red Riding Hood, the unfathomably wicked wolf, the incredibly tall beanstalk that Jack has to climb – for non-human figures acquire heroic dimensions, too. Bizarre figures here add another mnemonic aid: it is easier to remember the Cyclops than a two-eyed monster, or Cerberus than an ordinary one-headed dog (see Yates 1966, pp. 9–11, 65–7). Formulary number groupings are likewise mnemonically helpful: the Seven Against Thebes, the Three Graces, the Three Fates, and so on. All this is not to deny that other forces besides mere mnemonic serviceability produce heroic figures and groupings. Psychoanalytic theory can explain a great many of these forces.
But in an oral noetic economy, mnemonic serviceability is a sine qua non, and, no matter what the other forces, without proper mnemonic shaping of verbalization the figures will not survive. As writing and eventually print gradually alter the old oral noetic structures, narrative builds less and less on 'heavy' figures until, some three centuries after print, it can move comfortably in the ordinary human lifeworld typical of the novel. Here, in place of the hero, one eventually encounters even the antihero, who, instead of facing up to the foe, constantly turns tail and runs away, as the protagonist in John Updike's Rabbit, Run. The heroic and marvelous had served a specific function in organizing knowledge in an oral world. With the control of information and memory brought about by writing and, more intensely, by print, you do not need a hero in the old sense to mobilize knowledge in story form. The situation has nothing to do with a putative 'loss of ideals'.

THE INTERIORITY OF SOUND

In treating some psychodynamics of orality, we have thus far attended chiefly to one characteristic of sound itself, its evanescence, its relationship to time. Sound exists only when it is going out of existence. Other characteristics of sound also determine or influence oral psychodynamics. The principal one of these other characteristics is the unique relationship of sound to interiority when sound is compared to the rest of the senses. This relationship is important because of the interiority of human consciousness and of human communication itself. It can be discussed only summarily here. I have treated the matter in greater fullness and depth in The Presence of the Word, to which the interested reader is referred (1967b, Index).

To test the physical interior of an object as interior, no sense works so directly as sound. The human sense of sight is adapted best to light diffusely reflected from surfaces. (Diffuse reflection, as from a printed page or a landscape, contrasts with specular reflection, as from a mirror.) A source of light, such as a fire, may be intriguing but it is optically baffling: the eye cannot get a 'fix' on anything within the fire. Similarly, a translucent object, such as alabaster, is intriguing because, although it is not a source of light, the eye cannot get a 'fix' on it either. Depth can be perceived by the eye, but most satisfactorily as a series of surfaces: the trunks of trees in a grove, for example, or chairs in an auditorium. The eye does not perceive an interior strictly as an interior: inside a room, the walls it perceives are still surfaces, outsides.

Taste and smell are not much help in registering interiority or exteriority. Touch is. But touch partially destroys interiority in the process of perceiving it. If I wish to discover by touch whether a box is empty or full, I have to make a hole in the box to insert a hand or finger: this means that the box is to that extent open, to that extent less an interior. Hearing can register interiority without violating it. I can rap a box to find whether it is empty or full or a wall to find whether it is hollow or solid inside. Or I can ring a coin to learn whether it is silver or lead. Sounds all register the interior structures of whatever it is that produces them. A violin filled with concrete will not sound like a normal violin. A saxophone sounds different from a flute: it is structured differently inside.
And above all, the human voice comes from inside the human organism which provides the voice's resonances. Sight isolates, sound incorporates. Whereas sight situates the observer outside what he views, at a distance, sound pours into the hearer. Vision dissects, as Merleau-Ponty has observed (1961). Vision comes to a human being from one direction at a time: to look at a room or a landscape, I must move my eyes around from one part to another. When I hear, however, I gather sound simultaneously from every direction at once: I am at the center of my auditory world, which envelops me, establishing me at a kind of core of sensation and existence. This centering effect of sound is what high-fidelity sound reproduction exploits with intense sophistication. You can immerse yourself in hearing, in sound. There is no way to immerse yourself similarly in sight.

By contrast with vision, the dissecting sense, sound is thus a unifying sense. A typical visual ideal is clarity and distinctness, a taking apart (Descartes' campaigning for clarity and distinctness registered an intensification of vision in the human sensorium – Ong 1967b, pp. 63, 221). The auditory ideal, by contrast, is harmony, a putting together. Interiority and harmony are characteristics of human consciousness. The consciousness of each human person is totally interiorized, known to the person from the inside and inaccessible to any other person directly from the inside. Everyone who says 'I' means something different by it from what every other person means. What is 'I' to me is only 'you' to you. And this 'I' incorporates experience into itself by 'getting it all together'. Knowledge is ultimately not a fractioning but a unifying phenomenon, a striving for harmony. Without harmony, an interior condition, the psyche is in bad health.

It should be noted that the concepts interior and exterior are not mathematical concepts and cannot be differentiated mathematically. They are existentially grounded concepts, based on experience of one's own body, which is both inside me (I do not ask you to stop kicking my body but to stop kicking me) and outside me (I feel myself as in some sense inside my body). The body is a frontier between myself and everything else. What we mean by 'interior' and 'exterior' can be conveyed only by reference to experience of bodiliness. Attempted definitions of 'interior' and 'exterior' are inevitably tautological: 'interior' is defined by 'in', which is defined by 'between', which is defined by 'inside', and so on round and round the tautological circle. The same is true with 'exterior'. When we speak of interior and exterior, even in the case of physical objects, we are referring to our own sense of ourselves: I am inside here and everything else is outside. By interior and exterior we point to our own experience of bodiliness (Ong 1967b, pp. 117–22, 176–9, 228, 231) and analyze other objects by reference to this experience.

In a primary oral culture, where the word has its existence only in sound, with no reference whatsoever to any visually perceptible text, and no awareness of even the possibility of such a text, the phenomenology of sound enters deeply into human beings' feel for existence, as processed by the spoken word. For the way in which the word is experienced is always momentous in psychic life. The centering action of sound (the field of sound is not spread out before me but is all around me) affects man's sense of the cosmos.
For oral cultures, the cosmos is an ongoing event with man at its center. Man is the umbilicus mundi, the navel of the world (Eliade 1958, pp. 231–5, etc.). Only after print and the extensive experience with maps that print implemented would human beings, when they thought about the cosmos or universe or 'world', think primarily of something laid out before their eyes, as in a modern printed atlas, a vast surface or assemblage of surfaces (vision presents surfaces) ready to be 'explored'. The ancient oral world knew few 'explorers', though it did know many itinerants, travelers, voyagers, adventurers, and pilgrims.

It will be seen that most of the characteristics of orally based thought and expression discussed earlier in this chapter relate intimately to the unifying, centralizing, interiorizing economy of sound as perceived by human beings. A sound-dominated verbal economy is consonant with aggregative (harmonizing) tendencies rather than with analytic, dissecting tendencies (which would come with the inscribed, visualized word: vision is a dissecting sense). It is consonant also with the conservative holism (the homeostatic present that must be kept intact, the formulary expressions that must be kept intact), with situational thinking (again holistic, with human action at the center) rather than abstract thinking, with a certain humanistic organization of knowledge around the actions of human and anthropomorphic beings, interiorized persons, rather than around impersonal things. The denominators used here to describe the primary oral world will be useful again later to describe what happened to human consciousness when writing and print reduced the oral-aural world to a world of visualized pages.

ORALITY, COMMUNITY AND THE SACRAL

Because in its physical constitution as sound, the spoken word proceeds from the human interior and manifests human beings to one another as conscious interiors, as persons, the spoken word forms human beings into close-knit groups. When a speaker is addressing an audience, the members of the audience normally become a unity, with themselves and with the speaker. If the speaker asks the audience to read a handout provided for them, as each reader enters into his or her own private reading world, the unity of the audience is shattered, to be re-established only when oral speech begins again. Writing and print isolate. There is no collective noun or concept for readers corresponding to 'audience'. The collective 'readership' – this magazine has a readership of two million – is a far-gone abstraction. To think of readers as a united group, we have to fall back on calling them an 'audience', as though they were in fact listeners. The spoken word forms unities on a large scale, too: countries with two or more different spoken languages are likely to have major problems in establishing or maintaining national unity, as today in Canada or Belgium or many developing countries.

The interiorizing force of the oral word relates in a special way to the sacral, to the ultimate concerns of existence. In most religions the spoken word functions integrally in ceremonial and devotional life. Eventually in the larger world religions sacred texts develop, too, in which the sense of the sacral is attached also to the written word. Still, a textually supported religious tradition can continue to authenticate the primacy of the oral in many ways. In Christianity, for example, the Bible is read aloud at liturgical services.
For God is thought of always as 'speaking' to human beings, not as writing to them. The orality of the mindset in the Biblical text, even in its epistolary sections, is overwhelming (Ong 1967b, pp. 176–91). The Hebrew dabar, which means word, means also event and thus refers directly to the spoken word. The spoken word is always an event, a movement in time, completely lacking in the thing-like repose of the written or printed word. In Trinitarian theology, the Second Person of the Godhead is the Word, and the human analogue for the Word here is not the human written word, but the human spoken word. God the Father 'speaks' to his Son: he does not inscribe him. Jesus, the Word of God, left nothing in writing, though he could read and write (Luke 4:16). 'Faith comes through hearing', we read in the Letter to the Romans (10:17). 'The letter kills, the spirit [breath, on which rides the spoken word] gives life' (2 Corinthians 3:6).

WORDS ARE NOT SIGNS

Jacques Derrida has made the point that 'there is no linguistic sign before writing' (1976, p. 14). But neither is there a linguistic 'sign' after writing if the oral reference of the written text is adverted to. Though it releases unheard-of potentials of the word, a textual, visual representation of a word is not a real word, but a 'secondary modeling system' (cf. Lotman 1977). Thought is nested in speech, not in texts, all of which have their meanings through reference of the visible symbol to the world of sound. What the reader is seeing on this page are not real words but coded symbols whereby a properly informed human being can evoke in his or her consciousness real words, in actual or imagined sound. It is impossible for script to be more than marks on a surface unless it is used by a conscious human being as a cue to sounded words, real or imagined, directly or indirectly.

Chirographic and typographic folk find it convincing to think of the word, essentially a sound, as a 'sign' because 'sign' refers primarily to something visually apprehended. Signum, which furnished us with the word 'sign', meant the standard that a unit of the Roman army carried aloft for visual identification – etymologically, the 'object one follows' (Proto-Indo-European root, sekw-, to follow). Though the Romans knew the alphabet, this signum was not a lettered word but some kind of pictorial design or image, such as an eagle, for example. The feeling for letter names as labels or tags was long in establishing itself, for primary orality lingered in residue, as will be seen, centuries after the invention of writing and even of print. As late as the European Renaissance, quite literate alchemists using labels for their vials and boxes tended to put on the labels not a written name, but iconographic signs, such as various signs of the zodiac, and shopkeepers identified their shops not with lettered words but with iconographic symbols such as the ivy bush for a tavern, the barber's pole, the pawnbroker's three spheres. (On iconographic labeling, see Yates 1966.) These tags or labels do not at all name what they refer to: the words 'ivy bush' are not the word 'tavern', the word 'pole' is not the word 'barber'. Names were still words that moved through time: these quiescent, unspoken symbols were something else again. They were 'signs', as words are not.
Our complacency in thinking of words as signs is due to the tendency, perhaps incipient in oral cultures but clearly marked in chirographic cultures and far more marked in typographic and electronic cultures, to reduce all sensation and indeed all human experience to visual analogues. Sound is an event in time, and 'time marches on', relentlessly, with no stop or division. Time is seemingly tamed if we treat it spatially on a calendar or the face of a clock, where we can make it appear as divided into separate units next to each other. But this also falsifies time. Real time has no divisions at all, but is uninterruptedly continuous: at midnight yesterday did not click over into today. No one can find the exact point of midnight, and if it is not exact, how can it be midnight? And we have no experience of today as being next to yesterday, as it is represented on a calendar. Reduced to space, time seems more under control – but only seems to be, for real, indivisible time carries us to real death. (This is not to deny that spatial reductionism is immeasurably useful and technologically necessary, but only to say that its accomplishments are intellectually limited, and can be deceiving.) Similarly, we reduce sound to oscillograph patterns and to waves of certain 'lengths', which can be worked with by a deaf person who can have no knowledge of what the experience of sound is. Or we reduce sound to script and to the most radical of all scripts, the alphabet.

Oral man is not so likely to think of words as 'signs', quiescent visual phenomena. Homer refers to them with the standard epithet 'winged words' – which suggests evanescence, power, and freedom: words are constantly moving, but by flight, which is a powerful form of movement, and one lifting the flier free of the ordinary, gross, heavy, 'objective' world. In contending with Jean Jacques Rousseau, Derrida is of course quite correct in rejecting the persuasion that writing is no more than incidental to the spoken word (Derrida 1976, p. 7). But to try to construct a logic of writing without investigation in depth of the orality out of which writing emerged and in which writing is permanently and ineluctably grounded is to limit one's understanding, although it does produce at the same time effects that are brilliantly intriguing but also at times psychedelic, that is, due to sensory distortions. Freeing ourselves of chirographic and typographic bias in our understanding of language is probably more difficult than any of us can imagine, far more difficult, it would seem, than the 'deconstruction' of literature, for this 'deconstruction' remains a literary activity. More will be said about this problem in treating the internalizing of technology in the next chapter.
katebushwick · 5 years
Text
Plugging in: Power sockets, standards and the valencies of national habitus
This article examines why it is not possible to plug a British plug directly into a Dutch power socket. The author traces the development of the British Standard BS1363 plug-socket assembly and compares it to the European arrangement in order to demonstrate how the 'banal nationalism' of everyday life is formed and can be seen to be manifest in everyday utilitarian artefacts. A concept of nationhood framed in terms of 'national habitus' is elaborated: the nation is envisaged as being constituted by the socio-technological infrastructures upon which it depends, infrastructures formed by the governmentalities that have allowed them to come into being; and it is suggested that such a system will operate at a range of scales or valencies. By tracing the history of the two examples given, the intention is to demonstrate how each national habitus depends upon a particular technical development shaped by social forces.
In a cheap hotel room in Amsterdam I am trying to finish a presentation. It is late at night and the piece has to be ready for the next day. I have nearly finished, but there are a few important tasks to complete. On the screen of my little netbook, a message-box appears: 'Plug in or find another power source.' As I have done a thousand times before, without really taking my mind off the work I am trying to do, I take the cable from my bag and slide the power connector into the socket on the side of the machine. I reach down the cable and find the plug. Looking up, I suddenly stop. I am staring at two round holes in the wall and I am holding a plug with three flat rectangular pins. I have no adaptor.

Why did this happen? What made these two objects, both of which perform the same function, so different in their national contexts? Britain and the Netherlands are not far apart, either geographically or culturally, yet this simple piece of technology, an alternating current (AC) plug-socket assembly, which I need to live my daily life, that works perfectly well at home, ceases to function at all when I cross the border. If there is no adaptor to marry up the two systems, I am excluded in a way that is more material than symbolic, even if it is revealed as rich in meaning in the process.

A plug-socket assembly is a device that allows electrically operated machinery to connect to the mains power supply in a building. It comprises the male plug on the appliance, which has metal pins that fit into the female socket connection tubes housed in the wall. This apparatus thus allows for the conduction of electricity from a large-scale infrastructural utility supply to an individual appliance. In the scenario described above, my interaction with such everyday technology is revealed to be implicit in a much bigger entity. Such a stoppage, a small-scale action baffled by a conflict of protocol, of 'how things are done', speaks not only of how such material interactions create the conditions that give rise to the subjective sensation of belonging, but also indicates how this sensation is dependent upon the infrastructure I inhabit. This article investigates how these two plug-socket assemblies came to be as they are; it examines the way in which small-scale everyday acts are dependent upon larger grids of action and power (in this case, quite literally); and though there is not space here to discuss in detail the phenomenological terrain formed by the interaction of such scales, the intention is to demonstrate how unnoticed everyday things, such as the humble plug, act to constitute a sense of being part of the nation, in a way that is dependent upon the material and socio-political history that shaped them. Despite the fact that the earliest manifestation of standardisation took place in the USA (a plug-socket standard was established there in 1917; see Schroeder, 1986), for the sake of clarity this discussion is limited to the Dutch and British examples I encountered in my moment in Amsterdam. With the British Standard 1363 electrical plug-socket assembly, it is easy to identify what code of conduct resulted in the metal and plastic arrangement we have now.
There is one document, Post War Building Study 11: Electrical Installations, drawn up by a central government committee in 1944 in the run-up to the nationalisation of electricity provision in the UK, that can be identified as the source of the strict specifications for the polarised, earthed, three-pin, appliance-side-fused arrangement that is in use today (MoW, 1944: see Figure 1). Examination of the circumstances that led up to the drafting of this report and the specifications it lays out thus makes it possible to see how the British standard plug-socket assembly was brought into being by the explicit agency of the state at a point in history where the safety of users was regarded as a defining factor in its design.

The initial electrification of the Netherlands depended not upon domestically produced technology, but French and German equipment derived from American standards (Buiter, 1997; Jonnes, 2004; Van der Vleuten, 2010), and was shaped by a range of commercial interests administered by municipalities with a high degree of fiscal and strategic control (Bos, 2010; Doyer, 1916; Vereeniging van Directeuren van Electriciteitsbedrijven [VDEN], 1926). The standard socket also appeared at a different time to the standard plug (IECEE, 1963[1957]). Consequently the socket I found in my hotel room in the Netherlands was less directly defined by the action of central specification, a quality that is discernible in the strangely unsatisfying relationship between the simplest form of plug and the larger socket housing into which it fits (see Figure 2), which reveals the absence of central state control in its production. The pressures that militated for the Dutch plug-socket assembly are thus less directly identified and the evidence has had to be adduced from contemporary reports and secondary sources that deal with the process of electrification (Bos, 2010; Buiter, 1997; De Rijk, 2009; Doyer, 1916; Van der Vleuten, 2010; VDEN, 1926). Thus the approaches to the standardisation of the connection of appliances to the wider infrastructure, that is the governance of conduction at the domestic level, so to speak, can be seen and experienced in the devices with which we have been left. Through this comparison it is proposed that the electrical plug-socket assembly can be understood as an example of 'banal nationalism', whereby such small and unnoticed things take the form they do because they are the product of much larger scale processes; that something as abstract as nation is present in the day-to-day materiality of an electrical plug; that modes of conduct and government are coded into the tangible things we use to live our lives, and these things take the form they do for historical reasons.

Infrastructure and the banalities of national habitus

It is very difficult to imagine modern life without electricity. As is illustrated by the situation that begins this article, much of what we do on a day-to-day level is dependent upon the supply of electricity, but it is only when such availability is frustrated that it comes into consciousness. Daily life in contemporary culture is reliant upon a whole technical infrastructure that supplies electricity, of which, in normal conditions, we tend to be unaware.
Such an infrastructure can, in these terms, be regarded as a Large Technical System, the parameters of which can be understood to include not just the mechanics of the system, the generators, cables and fuses and the like, but also the users, decision makers and social effects of the operation of the system. This is because the technological apparatus we live with to a large degree establishes the conditions of how we may act and behave. As Brian Larkin (2008: 5) observes, infrastructures 'are the material forms that allow for exchange over space', and are thus the physical structures that allow for much of our social lives beyond that which is tangibly immediate. Paul Dourish and Genevieve Bell (2007: 417) argue that infrastructure and everyday life are established as coextensive, in that infrastructure as a concept 'encompasses not just technological but also the social and the cultural structures of experience'. Yet it should be noted that such an 'infrastructure of experience', in its broadest sense, is dependent upon the actual embedding of a range of material infrastructures into everyday space that 'shapes our experience of that space and provides a framework through which our encounters with space take on meaning'.

Belonging is experienced at a range of scales. The closest register of home is arguably the domestic environment, which is peopled by individuals who are intimately known to us. There is also the wider sense of the community, which has broader geographical range, and those who make up this perceived grouping have varying degrees of status in terms of the extent to which they are known. The nation, then, is a scale of perceived community that coincides with the legal and bureaucratic structures that could properly be said to be of the state. Max Weber's (1978: 181) observation that there is an important distinction between the state and the nation, whereby the state is a political institution, the nation a cultural community, does not exclude the intersection of these entities. As Peter Taylor (1994: 156) has argued, the nation state emerged in the 19th century as a way of transforming populations into societies, more or less cohesive social groupings, constituted by practical and ethical regulatory systems. Thus infrastructural networks such as communications facilities, transport links and of course utilities came to be essential mechanisms of power in the constitution of the modern nation state.

Figure 2. The Dutch 'Schuko' two-pin socket with a 'Europlug' inserted.

As Manuel DeLanda (2006) observes, it was the establishment of the material infrastructure, the literal wiring-up of the nation, that allowed it to come together as a functional entity, and as Ernest Gellner (1983) notes, the establishing of cultural norms that make up the nation can only occur in a mass culture, and such a form of society develops as a feature of modernity. That is, it is the technical elaboration of the state as a physical entity that creates the conditions in which the nation as a concept and a sensation can be narrated and internalised by individuals. At the scale of everyday interaction, 'nation' is not just an abstraction: it can also be apprehended as an iterative physical phenomenon. Social existence depends upon the material; we use things to perform the actions and routines of daily life.
As Richard Johnson (1993: 168) has argued, the nexus of meanings that constitute the nation, in its multifarious versions, depends on a supporting structure of common practices: 'the experience of waged and domestic labour, the preparation and consumption of food and drink, the forms of intimacy, leisure and sociability in family, community and region' that in certain circumstances are 'culturally nationalised' and 'made to mean a nation'. Yet beneath such varying degrees of explicit meaning making lie more hidden and implicit forms of association and attachment. Michel Foucault (2001: 341) suggests that the exercise of power can best be understood not simply as the exertion of force from above, but as control through the 'conduct of conducts'; this is then a 'management of possibilities' whereby what can be done and the manner in which it can be achieved serve to determine the forms that life can take. For Foucault, such 'governmentality' is not just about the nature of formal state control (although it includes this); rather it is concerned with matrixes of strategies that have developed throughout modernity to deal with what Mitchell Dean (2007: 103) describes as the 'emergence of a set of problems specific to the issue of population'. Thus, as Dean observes, any attempt to study such a phenomenon must pay attention to 'the actual rationalities and techniques' through which government is achieved, as the way in which the conduct of conduct is established, and indeed establishes particular ways of being in any specific circumstance.

The valencies of everyday life

That which is most revisited in the course of everyday life, the things that we rely upon to live our lives, often remains unnoticed (Highmore, 2001). This is not least because of their very ubiquity and, as Leigh Star (1999: 377) notes, 'many aspects of infrastructure are singularly unexciting'. Consequently, we tend not to register the existence of things such as mains power sockets until they are needed, yet they become objects of extreme attention when they are absent. In this way, they are the substrate of the physical landscape of subjectivity that makes us feel we belong somewhere. There is a collectivity in what we experience, and most of this is a form of what Michael Billig (1995: 6) talks of as a 'banal nationalism', whereby the mechanisms of the nation operate at the level of the 'endemic condition'. Such are plugs and power sockets. They are just what we live with, but in this they are what makes us feel we belong. It is an example of 'what we do here' that is so everyday, and thus seemingly a natural element in our lived experience, that we do not notice the specific form that its functional solution takes within the ethos of the milieu in which we encounter it.

This is not then to argue for a simplistic form of technological determinism whereby material technologies absolutely delimit the nature of social action. Rather it is to suggest that there is a dynamic relationship between the infrastructure of life in its everyday manifestations and the internal life-world of the subjects who inhabit such terrain. Although it is through the work of Pierre Bourdieu (1987) that the concept of habitus has gained prominence in the UK, it was previously elaborated in the context of national belonging by Norbert Elias (2000[1939]). As Giselinde Kuipers (2011: 3) observes, the term habitus refers to the learned practices and standards that have become so much part of ourselves that they 'feel self-evident and natural'.
In this context, national habitus is useful in the analysis of localised ways of being, those cultural practices that 'end at the border', because it makes it possible to ask the question, as Kuipers does: 'under what conditions does such a national ground-tone in behaviour, institutions and standards emerge?' (p. 4). With an everyday object such as the plug-socket assembly, the confluence of technical possibilities, regulatory practices and cultural norms that have served to form the object's life-course means that it is of a specific type in a particular place; there is a cultural and social history to any such artefact that accounts for its nature in any location. In mapping this process and its effects it is then necessary to identify the nature of the interactions between the elements in play. Erik Van der Vleuten (2009) observes that the study of Large Technical Systems allows not just for an understanding of their workings at the level of mechanics but also of their sociotechnical functioning. This is useful because it breaks down the artificial distinction between people and things, or rather it regards people as things within the operation of the system. To such ends, Bruno Latour (2007: 63), in describing the principles of Actor-Network Theory, suggests that all the constituents involved in any interaction, whether human or non-human, can be regarded as actors or 'actants', which then collectively constitute the whole as a network of interdependent and interacting agencies. Thus the people who inhabit a space can be seen as the social actors who operate in this arena; yet, in Latour's terms, the material elements with which they interact can also be understood to be acting agents – to be actants – in the constitution and performance of power relationships, which then happen at a range of scales or valencies.

People and tangible things are not the only actants in any given situation; social life is also made up of social entities such as nation, and the individual subject is not the smallest component in any interaction (DeLanda, 2006: 32). Thus the set of actants that make up this field of activity and interaction can be understood to include the nation as a large entity, the individual selves that perceive themselves to be acting in this arena, and all the components of the technical apparatus, from the biggest alternator to the smallest fuse. However, these parts exist at differently ordered valencies. Actants that function at the level of large-scale industrial production fit with other actants of this valency; elements that function at the level of everyday use then marry with other parts of this scale. What then is necessary for the functioning of the whole is that there should be a congruence between the valencies, an ordering logic which allows for the levels to interact, thus making it possible for the system to operate as a single entity.

The national machine

AC power is a form of electrical conduction whereby the flow of the current constantly reverses back and forth from positive to negative; this is in contrast to direct current (DC), in which the current flows only in one direction. We have AC power sockets in our houses because in the early days of electrical generation, in the latter part of the 19th century, this form of supply was found to have an advantage over DC. Early DC generation systems were only suitable for small-scale, localised production.
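The physics behind this advantage can be sketched briefly (a back-of-envelope illustration using round numbers of my own, not figures drawn from the sources cited in this article). For a required delivered power $P$ carried at voltage $V$, the current drawn is $I = P/V$, and the power lost as heat in a transmission line of resistance $R$ is

$$P_{\text{loss}} = I^{2}R = \left(\frac{P}{V}\right)^{2} R$$

so that raising the voltage tenfold cuts line losses a hundredfold. Delivering 10 kW through a line of resistance 1 Ω at 100 V requires 100 A and dissipates 10 kW in the cable itself; at 10 kV it requires only 1 A and dissipates a single watt. AC, unlike early DC, could be stepped up and down with transformers, which is what allowed it to exploit this scaling.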
Since AC was produced at a higher voltage it could be transmitted over larger distances and was thus suited to the supply of electricity as a mass utility.1 In the late 19th century, the electrification of Europe was being driven by companies using technology developed by the American inventor and entrepreneur Thomas Alva Edison, through the French-based arm of his organisation, Continental Edison, and the German company, AEG (Buiter, 1997; Hughes, 1983; Jonnes, 2004). As Edison noted, electrification was a complex process, both in technical and business terms, since for the system to work there needed to be standardisation across the network. He observed:

It was not only necessary that the lamps should give light and the dynamos generate current, but the lamps must be adapted to the current of the dynamos, and the dynamos must be constructed to give the character of current required by the lamps, and likewise all parts of the system must be constructed with reference to all other parts, since, in one sense, all the parts form one machine. (Edison cited in Carr, 2009: 26)

Thus each component was not to be conceptualised and designed as a discrete entity in itself; rather it had to be thought of as part of the machine as a whole. This mechanism then had to be integrated into the daily life of those who would use such power, if power was to be conducted into the home. As Dean (2007: 82) notes, conduct can be used both as a noun and a verb. As a verb, it is taken to mean 'to lead, guide and direct'. As a noun it is suggested that it implies 'behaviour, action, comportment, and may give rise to the embodied repertoire of such that sociologists call habitus' (original emphasis). Thus the history of the electrification of nations is the charting of the development of the infrastructural nature of habitus.

In the early 20th century, when standards for electrical supply were being established, the nations that were being electrified had reached a certain degree of bureaucratic and technological complexity that allowed for such a process to be undertaken. There was a level of social and technical integration that created the conditions in which such large-scale, multi-disciplinary, inter-sector projects could be undertaken (Hughes, 1983). In these circumstances, the norms and standards required by such a process were necessarily set at a certain level, one dictated by the extent of social and technical interconnection and interdependence that is the boundary of protocological maintenance. When such developments as the expansion of electricity generation and the concomitant growth of consumer appliances took place, the boundaries of integration were demarcated by the extent of nation states, because this form of entity was the highest level of abstraction allowed by the political, legal and bureaucratic power structures of the time. Thus standardisation took place at the valency of the nation, as each country essentially established a national 'machine' of electricity supply and usage. There were, however, significant differences in the structure of such governmentalities in each nation since, in the UK, power essentially resided at the valency of the centralised nation state, whilst in the Netherlands much power devolved to the regional governments (Lesage, 2008).

The Dutch approach to the generation and distribution of electricity was established early in the 20th century as a range of different interests attempted to become the dominant players in the industry.
Private companies, regional power producers and municipal electricity suppliers all vied for advantage in the market, just as the state sought to regulate it (Doyer, 1916; Verstegen, 1986). What was to become the Dutch national grid had developed during the First World War, as directors of the provincial electricity boards sought to create a broader network to supply energy to replace the domestic use of coal, which was in short supply at this time (Centraal Verslag, 1918; De Rijk, 2009). Early development had been geographically unbalanced, as power stations had been built in areas where high sales had been expected. Areas such as Amsterdam and Leiden were amply supplied whilst regions such as Drenthe and Zeeland were largely excluded from the electricity networks (Buiter, 1997). Thus, to ensure that the countryside would be electrified as well, the provinces began to draw up local bye-laws, and by the 1920s it was clear that it would be the provincial governments who would control the electrical power grid that provided power to the nation (VDEN, 1926). So the municipalities were able to gain dominance by pioneering the shift from local and district facilities to region-wide provision through a process of laying the cable and building the infrastructure (Van der Vleuten, 2010). In the Netherlands, therefore, electrification happened under the aegis of regional government, but crucially for the establishment of the two-pin unpolarised plug-socket assembly as the standard at this time, in the process they relied upon the components and technical capabilities that were commercially available (Buiter, 1997).

In Britain, the National Grid was established by Act of Parliament in 1926 in the wake of the first World Power Conference (WPC) held in London two years earlier. In response to the conference's call for standardisation, the British government set up a committee under Lord Weir. This was tasked with creating a framework for the nationally integrated provision of electricity, and it was suggested that the various electricity supply companies should still own and operate the generating facilities, but a Central Electricity Board (CEB) should be established which would be responsible for connecting the most efficient stations into what was described at the time as a 'national gridiron' of high-voltage transmission lines (Sinclair, 1985). Despite opposition to the parliamentary bill from the British Electrical and Allied Manufacturers' Association, the government were joined both by the Liberals and the socialists, with Clement Attlee and Herbert Morrison being instrumental in passing what even the Conservative Prime Minister at the time described as 'by far the most socialistic piece of legislation ever known' (see Cochrane, 1985: 15). Attlee's and Morrison's experience as members of the Joint Electricity Authorities (JEA), the bodies which preceded the unification of the new National Grid, suggested that not only was the availability of electricity a force that could change how people lived, but that the generation of electricity, and the control of this, could be a motive force in the creation of a fairer society. At the WPC the delegates had spoken glowingly of the transformative power of electricity (Bozell, 1924). A contemporary article in The Times (1924: 19) observed that the general feeling was that the condition of Europe after the war was 'amazingly confused', as it seemed clear that there was not enough wealth left in the world to maintain pre-war standards of living.
Thus electricity seemed to be a technological solution to this problem, and to the socialists it appeared to be literally the power that could change how people lived and worked. Although it was only later, after the Second World War, that a Labour government would be able to truly nationalise the system, what was important was that the material infrastructure being put in place laid the groundwork for the post-war expansion that followed (Cochrane, 1985).

Plugging in

Overhead wires and underground cables are the national electrical mechanism at a certain scale. Yet we do not generally experience the national machine at this level. We may see them from a distance, but seldom do we encounter high-voltage power lines close-up. This is why we do not recognise them as being of our habitus; the scale is wrong. It is at the valency whereby the home connects to the grid that we subjectively experience the electrically powered nature of our culture. As Fred Schroeder (1986: 526) acutely observes, the 'commonplace act of plugging an electrical appliance into a wall outlet can be regarded as an act that connects directly into an industrial system', even if on a day-to-day level the users are no more likely to think of its attachment to a power plant than they are to think of forests when turning the pages of a newspaper. It is at the plug-socket assembly where we experience electrification. Yet despite the thoroughness of companies such as Edison's, the nature of the actual connection of the appliance to the wall was not initially specified. The point at which large-scale supply became use in the home thus took many different forms, and this was a situation that was functionally and commercially untenable (Mellanby, 1957; Mullins, 2006).

On the level of generation and mass supply it was the introduction of the rotary converter that allowed for the provision of electricity as a standardised product. Invented in 1888 by a former Edison engineer, Charles Bradley, this was a device that could turn one type of current into another and transform a range of voltages, frequencies and phases into one form of output (Blalock, 2013). It regulated the flow, meaning that essentially the whole of the supply side of electrification was operating on one protocol, making it suitable for utility provision. For mass use, however, what was needed was the standardisation of the connection between production and consumption. With electricity, voltage is the pressure supplied and amperage (amps) is the rate of flow. The mass-provision of electricity, particularly in an AC system, is the control of flow: what is put in, where it can go and how fast it can go there. Conduction cannot take place without connection; for current to flow, all the elements of the circuit must be connected. To do this, the connections need to be compatible; they need to be operating on the same protocol. Thus when nations were being electrified there was a need for the connectors that unified the appliances with the grid to be standardised if the system was to be maximised and a market established. The first patents for commercially viable electrical plugs and sockets were registered in 1883–1884. The two-pin form, of which the Dutch type is a version, then appeared in 1885 and was listed in a catalogue produced by the GEC corporation in 1889 (Mellanby, 1957: 165). To begin with, the number of sockets that allowed electricity to enter the house was limited by technical capacity.
In the UK and the Netherlands, even up until the early 1930s, the wiring supplied as standard to the vast majority of houses only supported a maximum of six ceiling fittings and one wall socket (Van der Vleuten, 2010: 91; Women’s Institute, 1992: 69). Consequently, domestic consumers would use a range of adaptors that fitted into light sockets and split wall connections to allow them to use more than one appliance at a time (see Figure 3).

[Figure 3. Pre-standardisation UK plugs and sockets. Notice the bayonet fitting light-socket adaptor (second from left). Such fittings were commonplace before the introduction of standard wall-mounted fittings.]

Thus the mobile consumer appliances that were available were inconvenient and often dangerous to use. So, although electric irons and curling tongs quickly caught on, it was some time after the initial phase of electrification that changes in planning law and the increased collaboration between manufacturers served to create the conditions in which electrical consumer goods could become a mass market in either nation (De Rijk, 2009; Hughes, 1983). That is to say, for domestic mobile appliances to become a viable proposition, both those installing the sockets in the houses and those fitting the plugs to the devices needed to be working to the same standard.

Standard connections

When a British Standard 1363 plug is pushed into its socket fitting, it does so with a satisfying ‘snap’. This is because the top earth pin on the plug is longer than the lower live and neutral pins, and as it slides into its housing it engages a shutter within the socket, which slots down with an action that allows the other pins to slide home into their connectors. When the plug is withdrawn, the shutter clicks back up and covers the holes in the socket (see Figure 4).

The British plug is like this because it was designed as one unit made to a standard set by a politically orientated bureaucratic process that did not take place in the Netherlands: the centralised, state-administered nationalisation of the electricity supply. In both Britain and the Netherlands, the initial phase of electrification was dominated by commercially driven differentiation, a process which each state attempted to regulate but did not initiate (Buiter, 1997; Cochrane, 1985; Hannah, 1977; Van der Vleuten, 2010; VDEN, 1926). In the Netherlands, as in the rest of continental Europe, the round two-pin plug-socket assembly developed as the dominant design as this was, with some variations, the connection used by the companies that had derived their technology from Edison’s (Jonnes, 2004). Such an unpolarised, symmetrical system has a disadvantage, however, in that it has no earth contact, thus making it more dangerous to use. An earth or ‘ground’ connection is necessary as it protects the user if the insulation becomes compromised, allowing the current to flow away harmlessly. This problem was solved by Albert BĂŒttner of the German company Bayerische Elektrozubehör GmbH. Working from an idea by Werner von Siemens, BĂŒttner developed a socket with earth clips at the top and bottom which make contact when a plug with corresponding contacts is pushed into its recessed housing (Schwedt, 2013). The ‘Schuko’ (‘Schutzkontakt’, or ‘safety-connection’) socket then quickly became the standard across the electrifying regions of continental Europe dominated by those companies that were working with the two-pin standard plug derived from Edison’s (Hughes, 1983; Jonnes, 2004).
This was not least because the ‘Schuko’ will work not only with the earthed plugs made specifically for it, but it will also take any appropriately sized ungrounded, round two-pinned plug. This versatility then meant that this coupling of plug and socket could become the commercial de facto standard across Europe.

[Figure 4. The BS1363 socket seen from the inside. The socket on the left has the shutter held in the closed position by a spring. The right hand socket has a plug inserted, pushing the shutter down.]

However, although the popularity of the ‘Schuko’ as a socket type had delivered an effective commercial norm across the region, there was still a great deal of variation in terms of the plugs that would fit it, with each country having a different type (International Electrotechnical Commission [IEC], 2014). In the early 1930s, the Internationale Fragen-Kommission, a group of electricity suppliers from 12 countries, met in the Netherlands to attempt to bring some form of standardisation to electrical fittings. In co-operation with the IEC, Technical Committee 23 was formed to address this need. Given the complexity of the task of bringing together disparate national systems, and with no real power or mandate, progress was slow. Meetings in Torquay in the UK in June 1938 and in Paris in June 1939 did not even result in an agreed agenda, and the war inevitably halted work completely. Although 1957 saw the release of the International Commission on the Rules for the Approval of Electrical Equipment (IECEE) Publication 83, Standard for Plugs and Socket-Outlets for Domestic and Similar Purposes, this was little more than a technical report that detailed what was actually in use. It was not until the second edition of this document in 1963 that what is now referred to as the ‘Europlug’ (the standard round two-pin) made an appearance as CEE 7/16 (IECEE, 1963[1957]). What was noteworthy about this standard is that it specifies the plug only, as it is the already existent Schuko to which it marries (IEC, 2014). Thus this was not a national standard for the whole plug-socket assembly that was being established; rather, it was a ratification of a commercial fact on the ground whereby good enough connections were being made from what was available, within national norms.

In the UK, after World War II, the newly elected Labour government set up a British Electricity Authority over 14 Area Boards, and the grid as a whole passed into full public ownership in April 1948 (Cochrane, 1985). Prior to this, in 1947, ‘British Standard 1363 Fused Plugs and Shuttered Socket Outlets’ was introduced, based on principles laid out in the Ministry of Works publication, Post War Building Study 11: Electrical Installations, the report of the Electrical Installations Committee (EIC), published in 1944. Initially convened two years earlier, the EIC had been formed by the newly appointed Minister of Works and Planning, John Reith, as part of a series of reports on standards for reconstruction, with the remit ‘of securing a comprehensive and co-ordinated review of building techniques for the guidance of those who would be responsible for the direction and organisation of building after the war’ (see Coles, 2012). When Lord Reith, as he was at this time, took up this governmental position in 1941, Britain had been devastated by the Blitz and German bombing had destroyed a great deal of the housing stock.
There was thus a need for a mass programme of house building, and the Post War Building Studies were to act as a large-scale analysis, plan and basis for regulation for this project. Reith came fresh from the experience of building the BBC, essentially from scratch. This had been a huge technical and logistical exercise, one for which there had been no blueprint. Thus he clearly had an appreciation of the scale of planning, and the grasp of detail, necessary for such an undertaking to be achievable. He also brought with him a paternalist ethos fostered at the BBC, one which asserted that ‘public-service motive’ and a ‘sense of moral obligation’ necessitate that the agency of the state should be directed to utilitarian ends; that the benefits of modernity, in this case the fruits of peace, should devolve to all; and that power should do its best for the common man. This then shaped the remit of the committee, which in turn set the parameters of the technical system that could result from this exercise.

In a series of 22 meetings, the committee debated the particulars of the new wiring arrangements and thrashed out proposals. Their area of responsibility covered ‘the supply of electricity for all purposes from the point of entry of the current at the property boundary to the point of its delivery to an appliance’, as well as having responsibility for all electrical household appliances and electrically operated telecommunications (MoW, 1944: 1). Recognising that the rapid expansion of electrical appliances in the home was a trend that was likely to continue, they suggested a need for a dramatic increase in the socket-outlets in post-war housing. To these ends, paragraph 76 of the study introduced the single-pole fused ring-final circuit. It was proposed that ‘all socket outlets should be supplied from a “ring circuit” which, starting and ending at the fuse terminal at the consumer’s supply control will pass through each room in turn’ (p. 18). This was thus a departure from pre-war practice whereby the sockets in a dwelling would be wired in a radial pattern with each outlet running back to a central hub.

The new configuration was primarily suggested for economic reasons, although the technical advantages soon became apparent. Since each socket in a ring circuit provides two independent conductors for live, neutral and earth, it thus provides two ways in which the electricity can flow. Because the load is split across the different routes, the amperage in each direction is half of the total, and this allows for the use of copper wire with half the current-carrying capacity, which is much cheaper (a worked sketch of this arithmetic follows below). Yet when protected by high-rated overcurrent circuit breakers, ring circuits can also supply a large number of sockets. With the use of such high-amperage circuit breakers (in this case typically 30A), it was therefore necessary to incorporate a fuse of lower amperage on the appliance side of the electrical system (which in the British system was set at a maximum of 13A). This feature was thus not primarily a safety measure for the user of the appliance; rather, it was a necessity if the ring main was to be secured.

Post War Building Study 11: Electrical Installations made one proposal that was identifiably driven by a high value being ascribed to safety and the protection of the population.
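Before moving on, an aside of mine rather than the report’s: a minimal sketch of the current-division arithmetic behind the committee’s economy argument, under the simplifying assumption of a single load drawing total current I, connected at a fraction x of the cable run around the ring.

% The two legs of the ring form parallel paths with resistances proportional
% to x and (1 - x); by the current-divider rule the leg covering fraction x carries
\[ I_{x} = (1 - x)\,I, \qquad I_{1-x} = x\,I. \]
% In the idealised midpoint case (x = 1/2) each leg carries
\[ I_{x} = I_{1-x} = \tfrac{I}{2}, \]
% which is the 'half the current-carrying capacity' claim in the text. For loads
% close to the consumer unit the split is uneven, so the scheme also depends on
% appliances being distributed around the ring; and the high (30 A) breaker rating
% is likewise why, as the text notes, a lower-rated fuse (maximum 13 A) was
% needed on the appliance side.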
Despite shuttered sockets having been available since at least 1906, it was the ‘anti-flash’ shuttered socket first introduced by MK Electrics in 1926 that can be recognised as the forerunner of the flat three-pin British assembly (Mellanby, 1957). This innovation was claimed as an improvement on its unshuttered predecessors primarily because when the plug was inserted the initial resistance caused by the presence of the shutter, and the tightness of the connection tubes, ensured that it would always drive home with a ‘snap’ as the shutter opened; at the same time this action cut off the arc on withdrawal of the plug. The report thus proposed that all plug-socket assemblies connected to the ring-main should be of this form, expressly ‘to ensure the safety of young children’, to prevent them inserting their fingers into the connection tubes (MoW, 1944: 20).

As Foucault (1991: 76) notes, there is a tendency to assume that the world comes to us in the only inevitable form, one that is ‘self-evident, universal and necessary’. Through what could be called the re-singularisation of the object, the charting of its history, the ‘connections, encounters, supports, blockages, plays of force’ and ‘strategies’ that allowed it to come into being can be factored back in. Both the BS1363 and the Schuko two-pin arrangement were put together from the technology that existed at the point at which they became the dominant design in each nation. Once such a standard has been arrived at, it is very difficult to change it and incorporate new features, even if they would improve its function (Arthur, 1989). Such a component of culture, in this case a significant though often unnoticed feature of the homes we inhabit, remains as the physical manifestation of the material ideological pressures that brought it into being in the form that now endures. In this way, the possibilities of the banal nationalism of the act of plugging-in in a particular geographical region have been conditioned by the historical contingencies that have determined the conduct of conduction in that place.

Conclusion

When I failed to plug my BS 1363 into the Schuko socket on the wall, this happened because different modalities of governance, determined by historical circumstance, had militated in favour of particular socio-technical conjunctions in each region where the apparatuses were produced. The UK plug is a device that is the direct product of a paternalist, public-service ethos. The need to rebuild after the war, and the desire of the mass of the population to have freely available electrical energy in their homes, established the conditions in which the state was forced to find a solution. The nationalisation of the electricity supply then created the conditions in which a state-mandated standard could be established. A bureaucratic structure, the government committee, through setting values (arguably reproducing those upon which its remit was established), dictated the criteria upon which technical decisions could be made. Technological capacity then specified what hardware could supply this defined need. Economic limitations militated towards the introduction of the domestic ring-main; this allowed for a number of sockets to be installed in the home; this needed fused plugs, so the plug housing had to be large enough to accommodate the fuse, and it needed to be possible to change the fuse, should it blow.
The perceived need for a shuttering system then dictated that there should be a three-pin arrangement with a longer shaft at the top. Thus the BS 1363 plug I held in my hand was a socialist artefact produced in 1947 as part of what can literally be described as an act of social engineering, one intended to protect the young and provide safety for all. The Dutch power socket I was left staring at is the physical expression of a practical accommodation achieved in 1926; the plug that is meant to connect with this takes the form it does because of a similar process, whereby a de facto standard was achieved through a commercial accommodation to what works. Thus the large, over-engineered, safety-conscious British plug would not connect with the material expediencies of the Dutch socket. Such lines of descent may not be consciously registered, but the physicality of the artefacts, the actualisation of what was and is possible within specific governmentalities of conduction, takes effect all the same.

In my hotel room in Amsterdam I experienced the shock of one who found himself unpreparedly outside his national habitus, that space where the learned practices and standards of everyday life are so self-evident and natural that they are not noticed. A defining feature of habitus is the fact that the infrastructure of daily life is not registered because of its very banality; that when we feel we belong somewhere this is in part because we have ceased to notice its physical qualities because they just ‘are’. Yet the infrastructure of experience is to a large degree determined by real, physical infrastructures that allow us to live our lives in the manner that we do. Such material networks of flow, the power sockets, plugs and other technical paraphernalia of daily life are governed by certain standards, the protocols and conducts of conduct that determine the limits of possibility. In this way, the valencies of national habitus, in all their banality, can be glimpsed, and in this moment of recognition it is possible to apprehend for a moment how such small things shape how we live.
katebushwick · 5 years
Text
[Photo: Communal agendas, Oregon. A Mien pickers’ encampment. Here Mien recalled village life and escaped the confinements of California cities.]

One cold October night in the late 1990s, three Hmong American matsutake pickers huddled in their tent. Shivering, they brought their gas cooking stove inside to provide a little warmth. They went to sleep with the stove on. It went out. The next morning, all three were dead, asphyxiated by the fumes. Their deaths left the campground vulnerable, haunted by their ghosts. Ghosts can paralyze you, taking away your ability to move or speak. The Hmong pickers moved away, and the others soon moved too.

The U.S. Forest Service did not know about the ghosts. They wanted to rationalize the pickers’ camping area, to make it accessible to police and emergency services, and easier for campground hosts to enforce rules and fees. In the early 1990s, Southeast Asian pickers had camped where they pleased, like everyone else who visits the national forests. But whites complained that Southeast Asians left too much litter. The Forest Service responded by shunting the pickers to a lonely access road. At the time of the deaths, the pickers were camped all along the road. But soon afterward, the Forest Service built a great grid, with numbered camping spaces, scattered portable toilets, and, after many complaints, a large tank of water at the (rather distant) campground entrance. The campsites had no amenities, but the pickers—escaping from the ghosts—quickly made them their own. Mimicking the structure of the refugee camps in Thailand where many had spent more than a decade, they segregated themselves into ethnic groups: on one end, Mien and then those Hmong willing to stay; half a mile away, Lao and then Khmer; in an isolated hollow, way back, a few whites. The Southeast Asians built structures of slim pine poles and tarps and put their tents inside, sometimes with the addition of wood stoves. As in rural Southeast Asia, possessions were hung from the rafters, and an enclosure gave privacy for dip baths. In the center of the camp, a big tent sold hot bowls of pho. Eating the food, listening to the music, and observing the material culture, I thought I was in the hills of Southeast Asia, not the forests of Oregon.

The Forest Service’s idea about emergency access did not work out as it imagined. A few years later, someone called emergency services on behalf of a critically wounded picker. Regulations aimed only at the mushroom camp required the ambulance to wait for police escort before entering. The ambulance waited for hours. When the police finally showed up, the man was dead. Emergency access had not been limited by terrain but by discrimination. This man, too, left a dangerous ghost, and no one slept near his campsite except Oscar, a white man and one of the few local residents to seek out Southeast Asians, who did it once, drunk, on a dare. Oscar’s success in getting through the night led him to try picking mushrooms on a nearby mountain, sacred to local Native Americans and the home of their ghosts. But the Southeast Asians I knew stayed away from that mountain. They knew about ghosts.

Oregon’s center of matsutake commerce in the first decade of the twenty-first century was a place not marked on any map, “in the middle of nowhere.” Everyone in the trade knew where it was, but it wasn’t a town or a recreation site; it was officially invisible.
Buyers had established a cluster of tents along the highway, and every evening pickers, buyers, and field agents gathered there, turning it into a theater of lively suspense and action. Because the place is self-consciously off the map, I decided to make up a name to protect people’s privacy, and to add some characters from the up-and-coming matsutake trading spot down the road. My composite field site is “Open Ticket, Oregon.”

“Open ticket” is actually the name of a mushroom-buying practice. In the evening after returning from the woods, pickers sell their mushrooms for the buyer’s price per pound, adjusted in relation to the mushroom’s size and maturity, its “grade.” Most wild mushrooms carry a stable price. But the price of matsutake shoots up and down. Within the night, the price may easily shift by $10 per pound or more. Within the season, price shifts are much greater. Between 2004 and 2008, prices shifted between $2 and $60 per pound for the best mushrooms—and this range is nothing compared with earlier years. “Open ticket” means that a picker may return to the buyer for the difference between the original price paid and a higher price offered on the same night. Buyers—who earn a commission based on the poundage they buy—offer open ticket to entice pickers to sell early in the evening, rather than waiting to see if prices will rise. Open ticket is testimony to the unspoken power of pickers to negotiate buying conditions. It also illustrates the strategies of buyers, who continually try to put each other out of business. Open ticket is a practice of making and affirming freedom for both pickers and buyers. It seems an apt name for a site of freedom’s performance.

For what is exchanged every evening is not just mushrooms and money. Pickers, buyers, and field agents are engaged in dramatic enactments of freedom, as they separately understand it, and they exchange these, encouraging each other, along with their trophies: money and mushrooms. Sometimes, indeed, it seemed to me that the really important exchange was the freedom, with the mushroom-and-money trophies as extensions—proofs, as it were—of the performance. After all, it was the feeling of freedom, galvanizing “mushroom fever,” that energized buyers to put on their best shows and pressed pickers to get up the next dawn to search for mushrooms again.

But what is this freedom about which pickers spoke? The more I asked about it, the more unfamiliar it became to me. This is not the freedom imagined by economists, who use that term to talk about the regularities of individual rational choice. Nor is it political liberalism. This mushroomers’ freedom is irregular and outside rationalization; it is performative, communally varied, and effervescent. It has something to do with the rowdy cosmopolitanism of the place; freedom emerges from open-ended cultural interplay, full of potential conflict and misunderstanding. I think it exists only in relation to ghosts. Freedom is the negotiation of ghosts on a haunted landscape; it does not exorcise the haunting but works to survive and negotiate it with flair.

Open Ticket is haunted by many ghosts: not only the “green” ghosts of pickers who died untimely deaths; not only the Native American communities removed by U.S.
laws and armies; not only the stumps of great trees cut down by reckless loggers, never to be replaced; not only the haunting memories of war that will not seem to go away; but also the ghostly appearance of forms of power—held in abeyance—that enter the everyday work of picking and buying. Some kinds of power are there, but not there; this haunting is a place from which to begin to understand this multiply culturally layered enactment of freedom. Consider these absences that make Open Ticket what it is:

Open Ticket is far from the concentration of power; it is the opposite of a city. It is missing social order. As Seng, a Lao picker, put it, “Buddha is not here.” Pickers are selfish and greedy, he said; he was impatient to return to the temple where things were properly arranged. But, meanwhile, Dara, a Khmer teenager, explained that this is the only place she can grow up away from the violence of gangs. Yet Thong is a (former?) Lao gang member; I think he is getting away from warrants for his arrest. Open Ticket is a hodgepodge of flights from the city. White Vietnam vets told me they wanted to be away from crowds, which sparked flashbacks from the war and uncontrollable panic attacks. Hmong and Mien told me they were disappointed in America, which had promised them freedom but instead crowded them into tiny urban apartments; only in the mountains could they find the freedom they remembered from Southeast Asia. Mien in particular hoped to reconstitute a remembered village life in the matsutake forest. Matsutake picking was a time to see dispersed friends and to be away from the constraints of crowded families. Nai Tong, a Mien grandmother, explained that her daughter called her every day to beg her to come home to take care of the grandchildren. But she calmly repeated that she had at least to make up the money for her picking permit; she could not go back yet. The important bits were left unsaid in those calls: Escaping from apartment life, she had the freedom of the hills. The money was less important than the freedom. Matsutake picking is not the city, although haunted by it.

Picking is also not labor—or even “work.” Sai, a Lao picker, explained that “work” means obeying your boss, doing what he tells you to. In contrast, matsutake picking is “searching.” It is looking for your fortune, not doing your job. When a white campground owner, sympathetic to the pickers, talked to me about how the pickers deserved more because they work so hard, getting up at dawn and staying through sun and snow, something nagged at me about her view. I had never heard a picker talk like that. No pickers I met imagined the money they gained from matsutake as a return on their labor. Even Nai Tong’s time babysitting was more akin to work than mushroom picking.

Tom, a white field agent who had spent years as a picker, was particularly clear about rejecting labor. He had been an employee of a big timber company, but one day he put his equipment in his locker, walked out the door, and never looked back. He moved his family into the woods and earned from what the land would give him. He has gathered cones for seed companies and trapped beaver for skins. He has picked all kinds of mushrooms—not to eat but to sell, and he has taken his skills into the buying scene. Tom tells me how liberals have ruined American society; men no longer know how to be men.
The best answer is to reject what liberals think of as “standard employment.” Tom goes to great lengths to explain to me that the buyers he works with are not employees but independent businessmen. Even though he gives them large amounts of cash every day to buy mushrooms, they can sell to any field agent—and I know they do. It’s an all cash business, too, without contracts, so if a buyer decides to abscond with his cash, he says, there is nothing he can do about it. (Amazingly, buyers who abscond often come back to deal with another field agent.) But the scales he lends buyers for weighing mushrooms, he points out, are his; he could call the police about the scales. He tells the story of a recent buyer who absconded with several thousand dollars—but made the mistake of taking the scale. Tom drove down the road in the direction he believed the buyer took, and, sure enough, there was the scale abandoned by the side of the road. The cash was gone of course; but that was the risk of independent business.

Pickers bring many kinds of cultural heritage to their rejection of labor. Mad Jim celebrates his Native American ancestors in matsutake picking. After many jobs, he said, he was working as a bartender on the coast. A Native woman walked in with a $100 bill; shocked, he asked where she got it. “Picking mushrooms,” she told him. Jim went out the next day. It wasn’t easy to learn: he crawled through the brush; he followed animals. Now he knows how to stalk the dunes for the matsutake buried deep in the sand. He knows where to look under tangled rhododendron roots in the mountains. He has never gone back to wage work.

Lao-Su works in a Wal-Mart warehouse in California when he is not picking matsutake, making $11.50 an hour. To get that pay rate, however, he had to agree to work without medical benefits. When he hurt his back on the job and was unable to lift merchandise, he was given a long leave to recover. While he hopes the company will take him back, he says he gets more money from matsutake picking than from Wal-Mart anyway, despite the fact that the mushroom season is only two months long. Besides, he and his wife look forward to joining the vibrant Mien community in Open Ticket every year. They consider it a vacation; on weekends, their children and grandchildren sometimes come up to join them in picking.

Matsutake picking is not “labor,” but it is haunted by labor. So, too, property: Matsutake pickers act as if the forest was an extensive commons. The land is not officially a commons. It is mainly national forest, with some adjacent private land, all fully protected by the state. But the pickers do their best to ignore questions of property. White pickers are particularly aggravated by federal property and do their best to thwart restrictions on using it. Southeast Asian pickers are generally warmer to government, expressing wishes that it would do more. Unlike white pickers, many of whom are proud of picking without a permit, most Southeast Asians register with the Forest Service for permission to pick. However, the fact that law enforcement tends to single out Asians for infractions even without evidence—as one Khmer buyer put it, “driving while being Asian”—makes it seem less worth the effort to stay within the law. Not many do.

Vast lands without boundary markers make staying in approved picking zones quite difficult, as I found from my own experience. Once, a sheriff staked out my car to catch me without a permit when I returned with mushrooms.
Even as an avid reader of maps, I had been unable to tell whether this place was on or off limits.1 I was lucky; I was just at the border. But it wasn’t marked. Once, too, after I had pleaded with a Lao family for days to take me picking, they agreed, if I would drive. We chugged through forest on unmarked dirt roads for what seemed hours before they told me we had arrived at the place they wanted to pick. When I pulled over, they asked me why I wasn’t trying to hide the car. Only then did I realize that we were surely trespassing. The fines are steep. During my fieldwork, the fine for picking in a national park was $2,000 on the first offense. But law enforcement is thin on the ground, and the roads and trails are many. The national forest is crisscrossed with abandoned logging roads; these make it possible for pickers to travel across extensive forestland. Young men, too, are willing to hike many miles, looking for the most isolated mushroom patches—perhaps on forbidden lands, perhaps not. When the mushrooms get to the buyers, no one asks.2

But what is “public property” if not an oxymoron? Certainly, the Forest Service has trouble with it in these times. Legislation requires that public forests be thinned for fire protection for a square mile around private inholdings; this requires a lot of public funds to save a few private assets.3 Meanwhile, private timber companies do that thinning, making further profits from public forests. And, while logging is allowed within Late Successional Reserves, pickers are forbidden—because no one has found funds for an environmental impact assessment. If pickers have trouble sorting out which kinds of lands are off-limits, they are not alone in their confusion. The difference between the two kinds of confusion is also instructive. The Forest Service is asked to uphold property, even if it means neglecting the public. The pickers do their best to hold property in abeyance as they pursue a commons haunted by the possibility of their own exclusion.

Freedom/haunting: two sides of the same experience. Conjuring a future full of pasts, a ghost-ridden freedom is both a way to move on and a way to remember. In its fever, picking escapes the separation of persons and things so dear to industrial production. The mushrooms are not yet alienated commodities; they are effects of the pickers’ freedom. Yet this scene only exists because the two-sided experience has purchase in a strange sort of commerce. Buyers translate freedom trophies into trade through dramatic performances of “free market competition.” Thus market freedom enters freedom’s jumble, making the holding in abeyance of concentrated power, labor, property, and alienation seem strong and effective.

It’s time to get back to the buying in Open Ticket. It’s late afternoon, and some of the white field agents are sitting around joking. They accuse each other of lying and call each other “vultures” and “Wile E. Coyote.” They are right. They agree to open at the price of $10 a pound for number one mushrooms, but almost no one does. The minute the tents open, the competition is on. The field agents call their buyers to offer opening prices—perhaps $12 or even $15 if they agreed on $10. It is up to the buyers to report back about what is happening in the buying tents. Pickers come in and ask about the prices. But the price is a secret—unless you are a regular seller, or, alternatively, you are already showing your mushrooms.
Other buyers send their friends, disguised as pickers, to find the price, so it is not something to tell just anyone. Then, when a buyer wants to raise prices, to beat the competition, he or she is supposed to call the field agent. If not, the buyer will have to pay the price difference from his or her commission—but this is a tactic many are willing to try. Soon enough, calls ricochet between pickers, buyers, and field agents. The prices are shifting. “It’s dangerous!” one field agent would tell me as he stalked around the buying area, watching the scene. He could not talk to me during the buying; it demanded his full attention. Barking commands into his cell phone, each tried to stay ahead—and to trip up the others. Meanwhile, field agents are on the phone to their bulking companies and exporters, learning how high they can go. It’s exciting and exacting work to put the others out of business as well as one can. “Imagine the time before cell phones!” one field agent reminisced. Everyone lined up at the two public phone booths, trying to get through as the prices changed. Even now, every field agent surveys the buying field like a general on an old-fashioned battlefield, his phone, like a field radio, constantly at his ear. He sends out spies. He must react quickly. If he raises the price at the right time, his buyers will get the best mushrooms. Better yet, he might push a competitor to raise the price too high, forcing him to buy too many mushrooms, and, if all goes really right, to close down for a few days. There are all kinds of tricks. If the price spikes, a buyer can get pickers to take his mushrooms to sell to other buyers: Better the money than the mushrooms. There will be rude laughter for days, fuel for another round of calling the others liars—and yet, no one goes out of business despite all these efforts.4 This is a performance of competition—not a necessity of business. The point is the drama.

Let’s say it’s dark now, and pickers are lined up to sell at a buying tent. They have picked this buyer not only because of his prices, but because they know he is a skilled sorter. Sorting is just as important as basic prices, because a buyer assigns a grade to each mushroom, and the price depends on the grade. And what an art sorting is! Sorting is an eye-catching, rapid-fire dance of the arms with the legs held still. White men make it look like juggling; Lao women—the other champion buyers—make it look like Royal Lao dancing. A good sorter knows a lot about the mushrooms just from touching them. Matsutake with insect larvae will spoil the batch before it arrives in Japan; it is essential that the buyer refuse them. But only an inexperienced buyer cuts into the mushroom to look for larvae. Good buyers know from the feel. They can also smell the provenance of the mushroom: its host tree; the region it comes from; other plants, such as rhododendron, which affect the size and shape. Everyone enjoys watching a good buyer sort. It is a public performance full of prowess. Sometimes pickers photograph the sorting. Sometimes they also photograph their best mushrooms, or the money, especially when it is hundred dollar bills. These are trophies of the chase.

Buyers try to assemble “crews,” that is, loyal pickers, but pickers do not feel the obligation to continue selling to any buyer. So buyers court pickers, using ties of kinship, language, and ethnicity, or special bonuses.
Buyers offer pickers food and coffee—or, sometimes, stronger beverages, such as alcoholic tonics laced with herbs and scorpions. Pickers sit around eating and drinking outside buyers’ tents; where they share common war experiences with the buyers, the camaraderie may last until late at night. But such groups are evanescent; all it takes is a rumor of a high price or a special deal, and pickers are off to another tent, another circle. Yet the prices are not so different. Might performance be part of the point? Competition and independence mean freedom for all.

Sometimes pickers have been known to wait, sitting in their pickup trucks with their mushrooms, because they are dissatisfied with everyone’s prices. But they must sell before the evening is over; they cannot keep the mushrooms. Waiting too is part of the performance of freedom: freedom to search wherever one pleases—holding propriety, labor, and property at arm’s length; freedom to bring one’s mushrooms to any buyer, and for the buyers, to any field agent; freedom to put the other buyers out of business; freedom to make a killing or lose it all.

Once I told an economist about this buying scene, and he was excited, telling me this was the true and basic form of capitalism, without the pollution of powerful interests and inequalities. This was real capitalism, he said, where the playing field was level, as it should be. But is Open Ticket’s picking and buying capitalism? The problem is that there isn’t any capital. There is a lot of money changing hands, but it slips away, never forming an investment. The only accumulation is happening downstream, in Vancouver, Tokyo, and Kobe, where exporters and importers use the matsutake trade to build their firms. Open Ticket’s mushrooms join streams of capital there, but they are not procured in what seems to me a capitalist formation.

But there are clearly “market mechanisms”: or are there? The whole point of competitive markets, according to economists, is to lower prices, forcing suppliers to procure goods in more efficient ways. But Open Ticket’s buying competition has the explicit goal of raising prices. Everyone says so: pickers, buyers, bulkers. The purpose of playing with prices is to see if the price can be increased, so that everyone at Open Ticket benefits. Many seem to think that there is an ever-flowing spring of money in Japan, and the goal of competitive theater is to force open the pipes so that the money will flow to Open Ticket. Old timers all remember 1993, when the price of matsutake in Open Ticket rose briefly to $600 a pound in the hands of pickers. All you had to do was find one fat button, and you had $300!5 Even after that high, they say, in the 1990s a single picker could make several thousand dollars in one day. How might access to that flow of money be opened again? Open Ticket buyers and bulkers stake their bets on competition to raise prices.

It seems to me that there are two framing circumstances that allow this set of beliefs and practices to flourish. First, American businessmen have naturalized the expectation that the U.S. government will apply muscle on their behalf: As long as they perform “competition,” the government will twist the arms of foreign business partners to make sure American companies get the prices and market share they want.6 Open Ticket matsutake trading is much too small and inconspicuous to get that kind of government attention.
Still, it is within this national expectation that buyers and bulkers engage in competitive performances to get the Japanese to offer them the best prices. As long as they show themselves properly “American,” they expect to succeed. Second, Japanese traders are willing to put up with such displays as signs of what the importer I mentioned called “American psychology.” Japanese traders expect to work in and around strange performances; if this is what brings in the goods, it should be encouraged. Later, exporters and importers can translate the exotic products of American freedom into Japanese inventory—and, through inventory, accumulation. What is this “American psychology” then? There are too many people and histories in Open Ticket to plunge directly into the coherence through which we usually imagine “culture.” The concept of assemblage—an open-ended entanglement of ways of being—is more useful. In an assemblage, varied trajectories gain a hold on each other, but indeterminacy matters. To learn about an assemblage, one unravels its knots. Open Ticket’s performances of freedom require following histories that stretch far beyond Oregon but show how Open Ticket’s entanglements might have come into being.
[Photo: Communal agendas, Oregon. Foraging with a rifle. Most pickers have terrible stories of surviving war. The freedom of the mushroom camps emerges out of varied histories of trauma and displacement.]

The freedom about which so many pickers and buyers speak has far-flung referents as well as local ones. In Open Ticket, most explain their commitments to freedom as stemming from terrifying and tragic experiences in the U.S.-Indochina War and the civil wars that followed. When pickers talk about what shaped their lives, including their mushroom picking, most talk about surviving war. They are willing to brave the considerable dangers of the matsutake forest because it extends their living survival of war, a form of haunted freedom that goes everywhere with them. Yet engagements with war are culturally, nationally, and racially specific. The landscapes pickers construct vary with their legacies of engagement with war. Some pickers wrap themselves in war stories without ever having lived through war. One wry Lao elder explained why even young Lao pickers wear camouflage: “These people weren’t soldiers; they’re just pretending to be soldiers.” When I asked about the dangers of being invisible to white deer hunters, a Hmong picker evoked a different imaginary: “We wear camouflage so we can hide if we see the hunters first.” If they saw him, hunters might hunt him, he implied. Pickers navigate the freedom of the forest through a maze of differences. Freedom as they described it is both an axis of commonality and a point from which communally specific agendas divide. Despite further differences within such agendas, a few portraits can suggest the varied ways the matsutake hunt is energized by freedom. This chapter extends my exploration of what pickers and buyers meant by freedom by turning to the stories they told about war.

Frontier romanticism runs high in the mountains and forests of the Pacific Northwest. It is common for whites to glorify Native Americans and identify with the settlers who tried to wipe them out. Self-sufficiency, rugged individualism, and the aesthetic force of white masculinity are points of pride. Many white mushroom pickers are advocates of U.S. conquest abroad, limited government, and white supremacy. Yet the rural northwest has also gathered hippies and iconoclasts. White veterans of the U.S.-Indochina War bring their war experiences into this rough and independent mix, adding a distinctive mixture of resentment and patriotism, trauma and threat. War memories are simultaneously disturbing and productive in forming this niche. War is damaging, they tell us, but it also makes men. Freedom can be found in war as well as against war. Two white veterans suggest the range of how freedom is expressed.

Alan felt lucky when an aggravated childhood injury caused him to be sent home from Indochina. For the next six months he served as a driver on an American base. One day he received orders to return to Vietnam. He drove his jeep back to the depot and walked out of the base, AWOL. He spent the next four years hiding in the Oregon mountains, where he gained a new goal: to live in the woods and never pay rent. Later, when the matsutake rush came, it suited him perfectly. Alan imagines himself as a gentle hippie who works against the combat culture of other vets. Once he went to Las Vegas and had a terrible flashback when surrounded by Asians at the casino. Life in the forest is his way of keeping clear of psychological danger.
Not all war experience is so benign. When I first met Geoff I was overjoyed to find someone with so much knowledge about the forest. Telling me of the pleasures of his childhood in eastern Washington, he described the countryside with a passionate eye for detail. My enthusiasm to work with Geoff was transformed, however, when I talked with Tim, who explained that Geoff had served a long and difficult tour in Vietnam. Once, his group had jumped from a helicopter into an ambush. Many of the men were killed, and Geoff was shot through the neck but, miraculously, survived. When Geoff came home, he screamed so much at night that he could not stay home, and so he returned to the woods. But his war years were not over. Tim described a time when he and Geoff had surprised a group of Cambodian pickers on a mushroom patch Geoff thought of as one of his special places. Geoff had opened fire, and the Cambodians scrambled into the bushes to get away. Once Tim and Geoff shared a cabin, but Geoff spent the night brooding and sharpening his knife. “Do you know how many men I killed in Vietnam?” he asked Tim. “One more wouldn’t make a bit of difference.”

White pickers imagine themselves not only as violent vets but also as self-sufficient mountain men: loners, tough, and resourceful. One point of connection with those who did not fight is hunting. One white buyer, too old for Vietnam but a strong supporter of U.S. wars, explained that hunting, like war, builds character. We spoke of then Vice President Cheney, who had shot a friend while bird hunting; it was through the ordinariness of accidents such as this that hunting makes men, he said. Through hunting, even noncombatants can experience the forest landscape as a site for making freedom.

Cambodian refugees cannot easily join established Pacific Northwest legacies; they have had to make up their own histories of freedom in the United States. Such histories are guided not only by U.S. bombardment and the subsequent terrors of the Khmer Rouge regime and civil war, but also by their moment of entry into the United States: the shutting down of the U.S. welfare state in the 1980s. No one offered Cambodians stable jobs with benefits. Like other Southeast Asian refugees, they had to make something from what they had—including their war experiences. The matsutake boom made forest foraging, with its opportunities for making a living through sheer intrepidness, an appealing option.

What then is freedom? One white field agent, exalting the pleasures of war, suggested I speak with Ven, a Cambodian who, the field agent said, would show me that even Asians love U.S. imperial war. Given that Ven spoke to me with this introduction, I was not surprised by his endorsement of American freedom as a military quest. Yet our conversation took turns that I don’t imagine the field agent would have expected, and yet it echoed other Cambodians in the forest. First, in the confusions of the Cambodian civil war, it was never quite clear on which side one was fighting. Where white vets imagined freedom on a starkly divided racial landscape, Cambodians told stories in which war bounced one from one side to the other without one’s knowledge. Second, where white vets sometimes took to the hills to live out war’s traumatic freedom, Cambodians offered a more optimistic vision of recovery in the forests of American freedom.

At the age of thirteen, Ven left his village to join an armed struggle. His goal was to repel Vietnamese invaders.
He says he did not know the national affiliations of his group; he later found it to be a Khmer Rouge affiliate. Because of his youth, the commander befriended him and he was kept safe, close to the leaders. Later, however, the commander fell out of favor, and Ven became a political detainee. His group of detainees was sent to the jungle to fend for themselves. By chance, this turned out to be an area Ven knew from his fighting days. Where others saw empty jungle, he knew the concealed paths and forest resources. At this point in the story, I expected him to say that he escaped, especially since he was beaming with pride about his jungle knowledge. But no: He showed the group a hidden spring, without which they would not have had fresh water. Perhaps there was something empowering about this forest detention, even in its coercions. Returning to the forest draws from this spark—but only, he explained, in the safety of American imperial freedom.

Other Cambodians spoke about mushroom foraging as healing from war. One woman described how weak she was when she first came to the United States; her legs were so frail that she could hardly walk. Mushroom foraging has brought back her health. Her freedom, she explained, is freedom of motion.

Heng told me about his experiences in a Cambodian militia. He was the leader of thirty men. But while patrolling one day he stepped on a land mine, which blew off his leg. He begged his comrades to shoot him, since the life of a one-legged man in Cambodia was beyond what he imagined as human. Through luck, however, he was picked up by a UN mission and transported to Thailand. In the United States he gets along well on his artificial leg. Still, when he told his relatives that he would pick mushrooms in the forest, they scoffed. They refused to take him with them, since, they said, he would never be able to keep up. Finally, an aunt dropped him off at the base of a mountain, telling him to find his own way. He found mushrooms! Ever since, the matsutake harvest has been an affirmation of his mobility. Another of his buddies is missing the other leg, and he jokes that together in the mountains, they are “complete.”

The Oregon mountains are both a cure for and a connection to old habits and dreams. I was startled into seeing this one day when I asked Heng about deer hunters. I had been picking by myself that afternoon when suddenly shots rang out nearby. I was terrified; I didn’t know which way to run. I asked Heng about it later. “Don’t run!” he said. “To run shows that you are afraid. I would never run. That’s why I am a leader of men.” The woods are still full of war, and hunting is its reminder. The fact that almost all the hunters are white, and that they tend to be contemptuous of Asians, makes the parallels to war yet more apparent.

This theme was even more consequential for Hmong pickers, who, unlike most Cambodians, identify as hunters as well as hunted. During the U.S.-Indochina War, the Hmong became the front line of the U.S. invasion of Laos. Recruited by General Vang Pao, whole villages gave up agriculture to subsist on CIA airdrops of food. The men called in U.S. bombers, putting their bodies on the line so that Americans could destroy the country from the skies.1 It is not surprising that this policy exacerbated tensions between the Lao targets of the bombing and the Hmong. Hmong refugees have done relatively well in the United States, but war memories run strong.
The landscapes of wartime Laos are very much alive for Hmong refugees, and this shapes both the politics of freedom and freedom’s everyday activities. Consider the case of Hmong hunter and U.S. Army sharpshooter Chai Soua Vang. In November 2004, he climbed into a deer blind in a Wisconsin forest just as the white landowners were touring the property. The landowners confronted him, telling him to leave. It seems they shouted racial epithets, and someone shot at him. In response, he shot eight of them with his semiautomatic rifle, killing six. The story was news, and the main tenor in which it was told was outrage. CBS News quoted local Deputy Tim Zeigle, who said Vang was “chasing after [the landowners] and killing them. He hunted them down.”2 Hmong community spokesmen immediately took their distance from Vang and focused on saving the reputation of the Hmong people. Although younger Hmong spoke up against racism in the trial that followed Vang’s arrest, no one publicly suggested why Vang might have assumed a sharpshooter’s stance to eliminate his adversaries. The Hmong I spoke with in Oregon all seemed to know, and to empathize. What Vang did appeared utterly familiar; he could have been a brother or a father. Although Vang was too young to have participated in the U.S.-Indochina War, his actions showed how well he was socialized in the landscapes of that war. There every man who was not a comrade was an enemy, and war meant to kill or be killed. The elder men of the Hmong community still live very much in the world of these battles; at Hmong gatherings, the logistics of particular battles—the topography, timing, and surprises—are the subject of men’s conversations. One Hmong elder whom I had asked about his life used the opportunity to tell me about how to throw back grenades and what to do if you are shot. The logistics of wartime survival were the substance of his life.

Hunting recalls the familiarity of Laos for Hmong in the United States. The Hmong elder explained his coming of age in Laos: as a boy, he had learned to hunt, and he used his hunting skills in jungle fighting. Now in the United States, he teaches his sons how to hunt. Hunting brings Hmong men into a world of tracking, survival, and manhood.

Hmong mushroom pickers are comfortable in the forest because of hunting. Hmong rarely get lost; they use the forest-navigation skills they know from hunting. The forest landscape reminds older men of Laos: Much is different, but there are wild hills and the necessity of keeping your wits about you. Such familiarity brings the older generation back to pick each year; like hunting, this is a chance to remember forest landscapes. Without the sounds and smells of the forest, the elder told me, a man dwindles. Mushroom picking layers together Laos and Oregon, war and hunting.

The landscapes of war-torn Laos suffuse present experience. What seemed to me nonsequiturs shocked me into awareness of such layers: I asked about mushrooms, and Hmong pickers answered by telling me of Laos, of hunting, or of war. Tou and his son Ger kindly took my assistant Lue and me for many a matsutake hunt. Ger was an exuberant teacher, but Tou was a quiet elder. As a result, I valued the things he said all the more. One afternoon after a long and pleasurable forage, Tou collapsed into the front seat of the car with a sigh. Lue translated from Hmong. “It’s just like Laos,” Tou said, telling us of his home.
His next comment made no sense to me: “But it’s important to have insurance.” It took me the next half hour to figure out what he meant. He offered a story: A relative of his had gone back to Laos for a visit, and the hills had so drawn him that he left one of his souls behind when he returned to the United States. He soon died as a result. Nostalgia can cause death, and then it’s important to have life insurance, because that allows the family to buy the oxen for a proper funeral. Tou was experiencing the nostalgia of a landscape made familiar by hiking and foraging. This is also the landscape of hunting—and of war.

As Buddhists, ethnic Lao tend to object to hunting. Instead, Lao are the businessmen of the mushroom camps. Most Southeast Asian mushroom buyers are Lao. In the campgrounds, Lao have opened noodle tents, gambling, karaoke, and barbeque shops. Many of the Lao pickers I met originated from or were displaced to Laotian cities. They are often lost in the woods. But they enjoy the risks of mushroom picking and explain it as an entrepreneurial sport.

I first started thinking about cultural engagements with war when I was hanging out with Lao pickers. Camouflage is popular among Lao men. Most are further covered by protective tattoos—some gained in the army, some in gangs, and some in martial arts. Lao rowdiness is the justification for Forest Service rules that disallow gunfire in the campgrounds. Compared with other picker groups, the Lao I met seemed less wounded by the actual moment of war—and yet more involved in its simulation in the forest. But what is a wound? U.S. bombing in Laos displaced 25 percent of the rural population, forcing fleeing refugees into cities—and, when possible, abroad.3 If Lao refugees in the United States have some characteristics of camp followers, is this not also a wound?

Some Lao pickers grew up in army families. Sam’s father served in the Royal Lao Army; he was set to follow in his father’s footsteps by enlisting in the U.S. Army. The fall before his recruitment he joined some friends for a last hurrah—picking mushrooms. He made so much money that he called off his army plans. He even brought his parents to pick. He also discovered the pleasures of illegal picking one season when he made $3,000 in one day by trespassing on national park lands. Like white pickers, the Lao I knew looked for out-of-bounds and hidden matsutake patches. (In contrast, Cambodian, Hmong, and Mien pickers more often used careful observation in well-known common spots.) Lao pickers also—again like whites—took pleasure in boasting of their forays outside the law and their ability to get out of scrapes. (Other pickers went outside the law more quietly.) As entrepreneurs, Lao were mediators, with all the pleasures and dangers of mediation.

In my own inexperience, I found the entrepreneurial grasp of combat readiness a confusing set of juxtapositions. Yet I could tell it somehow worked as advocacy for high-risk enterprise. Thong, a strong and handsome man in his mid-thirties, seemed to me a man of contradictions: a fighter, a fine dancer, a reflective thinker, a judgmental critic. Because of his strength, Thong picks in high, inaccessible places. He told of his encounter with a policeman who stopped him for speeding one night more than forty miles from the mushroom camp. He told the policeman to go ahead and impound his car; he would walk through the frozen night. The policeman gave in, he said, and let him go.
When Thong said that mushroom pickers are in the forest to escape warrants, I thought he might be speaking for himself. So, too, until quite recently, he was married. In the process of getting a divorce, he quit a well-paying job to pick mushrooms. At the least, I believe he aimed to escape the obligations of child support. The contradictions multiply. He went out of his way to express contempt for pickers who abandon their children for the forest. He is not in touch with his children. Meta thinks a lot about Buddhism. He spent two years in a monastery; returned to the world, he works to renounce material things. Mushroom picking is a way to do this work of renunciation. Most of his belongings are in his car. Money comes to him easily but disappears just as easily. He does not mire himself in possession. This does not mean he is ascetic in a Western sense. When he is drunk, he sings a tender tenor karaoke. Only among Lao pickers did I meet children of mushroom pickers who, as adults, became mushroom pickers themselves. Paula first came picking with her parents, who later moved to Alaska. But she maintains her parents’ social networks in the Oregon forests, thus earning the room for maneuver claimed by much more seasoned pickers. Paula is daring. She and her husband arrived ready to pick ten days before the U.S. Forest Service opened the season. When the police caught them with mushrooms in their truck, her husband pretended that he couldn’t speak English, while Paula berated the officials. Paula is cute and looks like a child; she can get away with more sass than others. Still, I was surprised at the chutzpah she claimed. She said she dared the police to interfere with her activities. They asked her where she found the mushrooms. “Under green trees.” Where were these green trees? “All trees are green trees,” she insisted. Then she pulled out her cell phone and started calling her supporters. What is freedom? U.S. immigration policy differentiates “political refugees” from “economic refugees,” granting asylum only to the former. This requires immigrants to endorse “freedom” as a condition of their entry. Southeast Asian Americans had the opportunity to learn such endorsements in refugee camps in Thailand, where many spent years preparing themselves for U.S. immigration. As the Lao buyer quoted at the beginning of this chapter quipped in explaining why he picked the United States rather than France: “In France they have two kinds, freedom and communist. In the U.S. they just have one kind: freedom.” He went on to say that he prefers mushroom picking to a steady job with a good income—he has been a welder—because of the freedom. Lao strategies for enacting freedom contrast sharply with those of the other picker group that vies for the title “most harassed by the law”: Latinos. Latino pickers tend to be undocumented migrants who fit mushroom foraging into a year-round schedule of outdoor work. During mushroom season many live hidden in the forest instead of in the legally required industrial camps and motels where identification and picking permits might be checked. Those I knew had multiple names, addresses, and papers. Mushroom arrests could lead not just to fines but also to loss of vehicles (for faulty papers) and deportation. Instead of sassing the law, Latino pickers tried to stay out of the way, and, if caught, to juggle papers and sources of legitimation and support. 
In contrast, most Lao pickers, as refugees, are citizens and, embracing freedom, hustle for more room. Contrasts such as these motivated my search to understand the cultural engagements with war that shape the practices of freedom of white veterans and Cambodian, Hmong, and Lao refugees. Veterans and refugees negotiate American citizenship through endorsing and enacting freedom. In this practice, militarism is internalized; it infuses the landscape; it inspires strategies of foraging and entrepreneurship. Among commercial matsutake pickers in Oregon, freedom is a “boundary object,” that is, a shared concern that yet takes on many meanings and leads in varied directions.4 Pickers arrive every year to search out matsutake for Japanese-sponsored supply chains because of their overlapping yet diverging commitments to the freedom of the forest. Pickers’ war experiences motivate them to come back year after year to extend their living survival. White vets enact trauma; Khmer heal war wounds; Hmong remember fighting landscapes; Lao push the envelope. Each of these historical currents mobilizes the practice of picking mushrooms as the practice of freedom. Thus, without any corporate recruitment, training, or discipline, mountains of mushrooms are gathered and shipped to Japan.
Everything about Open Ticket surprised me, but especially the feel of Southeast Asian village life in the middle of the Oregon forest. My disorientation was only amplified when I found a different group of matsutake pickers: Japanese Americans. Despite many differences from my Chinese American background, Japanese Americans felt familiar to me, like family. Yet this ease struck me sharply, a splash of cold water.
[Photo caption: Communal agendas, Oregon. Preparing matsutake for a sukiyaki dinner at the predominantly Japanese American Buddhist Church. For Japanese Americans, matsutake picking is a cultural legacy and a tool for building cross-generational community ties.]
I realized that something huge and perplexing had happened to U.S. citizenship between early- and late-twentieth-century immigrations. A wild new cosmopolitanism has inflected what it means to be an American: a jostling of unassimilated fragments of cultural agendas and political causes from around the world. My surprise, then, was not the ordinary shock of cultural difference. American precarity—living in ruins—is in this unstructured multiplicity, this uncongealed confusion. No longer a melting pot, we live with unrecognizable others. And if I tell this story within Asian American worlds, do not think it stops there. This cacophony is the feel of precarious living for both white and colored Americans—with repercussions around the world. It is most clearly seen, however, in relation to its alternatives, such as assimilation. The first people to go “matsutake crazy” in Oregon were Japanese who came to the region in that short window of opportunity between the banishment of the Chinese in 1882 and the “Gentlemen’s Agreement” stopping Japanese immigration in 1907.2 Some of the first Japanese immigrants worked as loggers and found matsutake in the forest. When they settled into farming, they returned to the forest every season: for warabi ferns in the spring, fuki shoots in the summer, and matsutake in the fall. By the early twentieth century, matsutake outings—picnic lunches with matsutake foraging—were a popular leisure activity, as celebrated in the poem that opens this chapter. Uriuda’s poem is a useful signpost of both pleasures and dilemmas. The matsutake hunters drive cars into the mountains; they are enthusiastic Americans even as they retain Japanese sensibilities. Like others who ventured out of Meiji Japan, the immigrants were serious translators, learning other cultures. Beside themselves, they became children—in both American and Japanese ways. Then something changed: World War II. Since arriving in the United States, Japanese had struggled over bans against citizenship and land ownership. Despite this, they had succeeded at farming—especially with labor-intensive fruits and vegetables, such as cauliflower, which needed to be shaded from light, and berries, which needed hand picking. World War II broke that trajectory, removing them from their farms. Oregon’s Japanese Americans were interned in “War Relocation Camps.” Their citizenship dilemmas were turned inside out. I first heard Uriuda’s poem sung in Japanese in a classical style during a gathering of Japanese Americans celebrating their matsutake heritage in 2006. The elderly man who sang it had first learned classical singing when he was interned in the camps. Indeed, many “Japanese” hobbies flourished there. 
But even as it was possible to pursue Japanese hobbies, the camps changed what it meant to be Japanese in the United States. When they came back after the war, most had lost access to their possessions and their farms. (Juliana Hu Pegues notes that the same year Japanese American farmers were sent away to camps, the United States opened the Bracero program to bring in Mexican farm laborers.)3 They were treated with suspicion. In response, they did their best to become model Americans. As one man recalled, “We stayed away from everything Japanese-y. If you had a pair of [Japanese] slippers, you left them at the door when you went out.” Japanese daily habits were not for public display. Young people stopped learning Japanese. Total immersion into American culture was expected, without bicultural extensions, and children led the way. Japanese Americans became “200 percent American.”4 At the same time, Japanese arts had flourished in the camps. Traditional poetry and music, in decline before the war, were revived. Camp activities became the basis for postwar clubs. These would be private leisure activities. Japanese culture, matsutake picking included, became increasingly popular, but it formed a segregated addition to the performance of American selves. “Japanese-ness” flourished only as an American-style hobby. Perhaps you can catch a glimmer of my disconcertment. Japanese American matsutake pickers are quite different from Southeast Asian refugees—and I can’t explain the difference away by “culture” or by “time” spent in the United States, the usual sociological stories of differences among immigrants. Second-generation Southeast Asian Americans are nothing like Japanese American Nisei in their performance of citizenship. The difference has to do with historical events—indeterminate encounters, if you will—in which relations between immigrant groups and the demands of citizenship are formed. Japanese Americans were subject to coercive assimilation. The camps taught them that to be an American required serious work in transforming oneself from inside out. Coercive assimilation showed me its contrast: Southeast Asian refugees have become citizens in a moment of neoliberal multiculturalism. A love for freedom may be enough to join the American crowd. The contrast hit me in a personal way. My mother came to study in the United States from China just after World War II, when the two countries were allies; after the triumph of communism in China, the U.S. government did not let her go home. Through the 1950s and early 1960s, our family, like other Chinese Americans, was under FBI surveillance as possible enemy aliens. Thus my mother, too, learned a coercive assimilation. She learned to cook hamburgers, meatloaf, and pizza, and when she had children, she refused to allow us to learn Chinese, even though she was still struggling with English. She believed that if we spoke Chinese, our English might show the trace of an accent, revealing us as not quite American. It was unsafe to be bilingual, to carry one’s body in the wrong way, or to eat the wrong foods. When I was a child, my family used the term “American” to mean white, and we watched Americans carefully as sources of both emulation and cautionary tales. In the 1970s, I joined Asian American student groups whose participants were of Chinese, Japanese, and Filipino origin; even our most radical politics took for granted the coerced assimilation each of these groups had experienced. 
My background thus prepared me for an easy empathy with the Japanese American matsutake pickers I met in Oregon: I felt comfortable with their way of being Asian American. The elders were second-generation immigrants who spoke hardly a word of Japanese, and who were as likely to go out for cheap Chinese food as to prepare traditional Japanese dishes. They were proud of their Japanese heritage—as witnessed in their devotion to matsutake. But that pride was expressed in self-consciously American ways. Even the matsutake dishes we cooked together were cosmopolitan hybrids that violated every Japanese culinary principle. In contrast, I had been utterly unprepared to discover the Asian American cultures of Open Ticket’s matsutake camps. Mien camps struck me with particular force because they reminded me not of the Asian America I knew but of some combination of my mother’s remembered China and the villages in Borneo where I had done fieldwork. Mien come to the Cascades in multigenerational groups of kin and neighbors with the explicit aim of recuperating village life. They remain committed to differences that mattered in Laos; because Lao sit on the floor, Mien sit on the low stools my mother still longs for as a reminder of China. They refuse raw vegetables—that’s for Lao—but prepare soups and sautés with chopsticks, as do Chinese. No meatloaf or hamburgers are cooked in Mien mushroom camps. Because so many Southeast Asians are gathered together, deliveries of Asian vegetables from California family garden plots arrive all the time. Every evening, cooked dishes are exchanged with neighbors, and visitors talk over smoking bongs into the night. When I saw one of my Mien hosts squatting in a sarong and shelling overripe yard-long beans or sharpening her machete, I felt transported to the upland villages in Indonesia where I first learned about Southeast Asia. This wasn’t the United States that I knew. The other Southeast Asian groups in Open Ticket are less dedicated to recreating village life; some are from cities, not villages. Still, they have one thing in common with these Mien: a lack of interest in—even an unfamiliarity with—the kind of American assimilation with which I grew up. I wondered, How did they get away with this? At first, I was awed, and perhaps a little jealous. Later, I recognized that they had been asked to assimilate too, in a different mode. This is where freedom and precarity come back into the story: freedom coordinates wildly diverse expressions of American citizenship, and it provides the only official rudder for precarious living. But this means that between the arrival of Japanese Americans and the coming of Lao and Cambodian Americans something important has changed in the relationship of the state and its citizens. The pervasive quality of Japanese American assimilation was shaped by the cultural politics of the U.S. welfare state from the New Deal through the late twentieth century. The state was empowered to order people’s lives with attractions as well as coercion. Immigrants were exhorted to join the “melting pot,” to become full Americans by erasing their pasts. Public schools were a venue for making Americans. The affirmative action policies of the 1960s and 1970s not only opened schools but also made it possible for minorities educated in public schools to find professional placements despite their racial exclusion from networks of influence. Japanese Americans were cajoled as well as prodded into the American fold. 
It is the erosion of this apparatus of state welfare that most simply helps to explain why the Southeast Asian Americans of Open Ticket have developed such a different relationship to American citizenship. Since the mid-1980s, when they arrived as refugees, all kinds of state programs have been dismantled. Affirmative action has been criminalized, funds cut for public schools, unions chased out, and standard employment has become a vanishing ideal for anyone, much less entry-level workers. Even if they had managed to become perfect copies of white Americans, there would be few rewards. And the immediate challenges of making a living loom. In the 1980s, the refugees had few resources and needed public assistance. Yet welfare in the strict sense was being radically downsized. In California, the destination of many Open Ticket Southeast Asians, eighteen months became the limit for state assistance. Many of the Lao and Cambodian Americans in Open Ticket received some language education and job training, but rarely of a sort that actually helped them get a job. They were left to find their own way in American society.5 For those few who had Western-style educations, English, or money, there were options. The rest were in the difficult position of finding traction for the resources and skills they had, such as, for example, surviving a war. The freedom they had endorsed to enter the United States had to be translated into livelihood strategies. Histories of survival shaped what they could use as livelihood skills. It is a tribute to their resourcefulness that they used them. But this also created differences among the refugees. Consider some of these differences. A Lao buyer from a family of businesswomen in the capital city, Vientiane, explained that she decided to leave because communism was bad for profits. Vientiane is on the Mekong River, across from Thailand, and leaving meant finding a night to swim the river. She could have been shot; she had a young daughter to carry. Still, despite the danger, the experience showed her that she must seize opportunities. The freedom that pushed her toward the United States was the freedom of the market. In contrast, Hmong pickers were adamant about freedom as anticommunism combined with ethnic autonomy. Older Hmong in Open Ticket had fought for General Vang Pao’s CIA army in Laos. The middle-aged had spent years after the communist victory going back and forth between refugee camps in Thailand and rebel camps in Laos. Both these life trajectories combined jungle survival and ethnopolitical loyalty. These were skills that could be used in the United States for kin-based investments, for which Hmong Americans have become known. Sometimes such commitments need to be revived—by life in the wild. Everyone I talked to dreamed about livelihood strategies self-consciously tied to their ethnic and political stories. No one in Open Ticket thought immigration meant erasing one’s past to become an American. An ethnic Lao from northeastern Cambodia would like to run trucks between Cambodia and Laos. An ethnic Khmer from Vietnam, whose family crossed the border to defend Cambodia, thought his family’s patriotism made him a good candidate for a military career. While many of these dreams would remain unfulfilled, they told me something about dreaming: these were not the new start we still call “the American dream.” The more you stare at it, the more the idea that you should start over to become an American seems strange. 
What was this American dream then? Clearly, it was more than an effect of economic policy. Might it have been a version of Christian conversion, American-style, in which the sinner opens up to God and resolves to banish his former sinful life? The American dream requires relinquishing one’s old self, and perhaps this is one form of conversion. Protestant revivalism has been key to composing the “we” of the American polity since the American Revolution.6 Furthermore, Protestantism guided the twentieth-century project of American secularization—designed to reject illiberal Christianity while promoting unmarked liberal forms. Susan Harding has shown how U.S. public education in the mid-twentieth century was shaped by projects of secularization, in which some versions of Christianity were promoted as examples of “tolerance,” while other versions were parochialized as exotic remnants of earlier times.7 In its secular forms, then, this cosmological politics exceeds Christianity; to be an American, you must convert, not to Christianity, but to American democracy. In the mid-twentieth century, assimilation was a project of this American Protestant secularism. Immigrants were expected to “convert” by taking on the full array of white American bodily practices and speech habits. Speech was particularly important—the speaking of the “we.” That’s why my mother wouldn’t let me learn Chinese. It would be a sign of the devil, so to speak, peeking out of my American habitus. This is the conversion wave that hit Japanese Americans after World War II. It did not necessarily mean becoming a Christian. The Japanese Americans I worked with are mainly Buddhists. Indeed, Buddhist “churches” (as some participants call them) help tie the community together. The one I visited is a curious hybrid. The hall for weekly worship has a colorful Buddhist altar in front. But the rest of the room is an exact model of an American Protestant church. There are rows of wooden pews, complete with holders on the seatbacks for hymnals and announcements. The basement has space for Sunday School classes and for fundraising dinners and bake sales. The core congregation is Japanese American, but they are proud to have a white pastor, whose Buddhism augments their American identity. The congregation’s “American” conversion sponsors religious legibility. Contrast Open Ticket’s Southeast Asian refugees. Thinking through cosmological politics, they were also “converted” to American democracy. They each had a conversion ritual in a Thai refugee camp—the interview that allowed them to enter the United States. At this interview, they were required to endorse “freedom” and to show their anticommunist credentials. Else they would be enemy aliens: outside the fold. To enter the country, a rigorous assertion of freedom was necessary. The refugees might not know much English, but they needed one word: freedom. In addition, some of Open Ticket’s Hmong and Mien Americans have converted to Christianity. Yet—as Thomas Pearson has shown for Vietnamese Montagnard-Dega refugees in North Carolina—they have, from a U.S. Protestant point of view, a strange kind of Christian practice.8 The point of conversion for an American Protestant is to be able to say, “I once was lost, but now I have accepted God.” Instead, the refugees say, “Communist soldiers pointed at me, but God made me invisible.” “War scattered my family in the jungle, but God brought us back together.” God operates like indigenous spirits, warding off danger. 
Instead of needing interior transformation, the converts I met came under protection through endorsing freedom. Again the contrast: A centripetal (in-spinning) logic of conversion drew my family and my Japanese American friends into an inclusive, expansive United States of assimilative Americanization. A centrifugal (out-spinning) logic of conversion, held together by a single boundary object, freedom, shaped Open Ticket’s Southeast Asian refugees. These two kinds of conversion can coexist. Yet each was carried on a distinct historical wave of citizenship politics. It seems quite predictable, then, that these two kinds of matsutake pickers do not mix. Japanese Americans picked commercially at the beginning of Japan’s import boom; but by the late 1980s, they were overtaken by white and Southeast Asian pickers. Now they pick for their friends and family rather than to sell. Matsutake is a treasured gift and a food that confirms one’s Japanese cultural roots. And matsutake picking is fun—a chance for elders to show off their knowledge, for kids to play in the woods, and for everyone to share delicious bento lunches. This kind of leisure is possible because the Japanese Americans I accompanied had entered a class niche of urban employment. When they returned from the camps after World War II, as I explained, they had lost their access to farms. Still, many resettled as close to the places they knew as they could. Some became factory workers and were able to join newly integrated unions. Others opened small restaurants or worked in hotels. It was a time of growing wealth for Americans. Their children went to public schools and became dentists, pharmacists, and store managers. Some married white Americans. Yet people keep track of each other; the community is close. Matsutake help maintain the community even though no one depends on them to defray living expenses. One of the best-loved matsutake forests of this community is a pine-studded, moss-covered valley, as smooth and clean as the grounds of a Japanese temple. Japanese Americans are proud of how carefully they maintain the area for both people and plants. Even the foraging areas of the deceased are remembered and respected. In the mid-1990s, a bold, white bulker-buyer from Open Ticket brought a load of commercial pickers to this area. The commercial pickers were not used to careful harvesting; they needed to cover a lot of ground to make the day’s pick. They tore up the moss and left the place a mess. A confrontation ensued. Japanese Americans brought in the Forest Service, who advised the buyer that commerce inside national forests is prohibited. The buyer accused the agency of racial discrimination. “Why should Japanese have special rights?” he reminisced to me, still sore. Finally, the Forest Service closed the area to commercial picking. The buyer went back to Open Ticket. But without enforcement, commercial pickers still sneak in, and hostilities between Japanese and Southeast Asian Americans still smolder. Clearly, they are different kinds of Asian Americans. As one Japanese American picker unself-consciously quipped, “The forests were great until the Asians came.” Who? Let me return to Southeast Asian pickers’ freedom. Certainly, it includes sneaking into forbidden places when one can get away with it. But freedom is more than personal daring; it is an engagement with an emerging political formation. 
I am sure I am not the only product of integration who was taken by surprise by the strength of twenty-first-century resentment of this program, particularly by rural whites, who feel left out and left behind. Some white pickers and buyers call their position “traditionalism.” They oppose integration; they want to savor their own values, without contamination from others. They also call this “freedom.” This is not a multicultural plan. And yet, ironically, it has helped bring to life the most cosmopolitan cultural formation the United States has ever known. The new traditionalists reject racial mixing and the muscular legacy of the welfare state that made mixing possible—through coercive assimilation. As they dismantle assimilation, new formations emerge. Without central planning, immigrants and refugees hold on to their best chances to make a living: their war experiences, languages, and cultures. They join American democracy through that single word, “freedom.” They are free, indeed, to continue transnational politics and trade; they may plot to overthrow foreign regimes and stake their fortunes on international fashions. In contrast to earlier immigrants, they need not study to become American from inside out. In the wake of the welfare state, this concurrence of freedom agendas—in all its unruly diversity—has seized the time. And what better participants in global supply chains! Here are nodes of ready and willing entrepreneurs, with and without capital, able to mobilize their ethnic and religious fellows to fill almost any kind of economic niche. Wages and benefits are not needed. Whole communities can be mobilized—and for communal reasons. Universal standards of welfare hardly seem relevant. These are projects of freedom. Capitalists looking for salvage accumulation, take note.
I have been arguing that commercial mushroom picking exemplifies the general condition of precarity—and in particular of livelihood without “regular jobs.” But how did we get into a situation in which so few jobs with wages and benefits are available, even in the world’s richest country? Worse yet, how did we lose the expectation and taste for such jobs? This is a recent situation; many white pickers knew such jobs, or at least such expectations, from their earlier lives. Something changed. This chapter makes the bold assertion that the view from a neglected commodity chain can illuminate this surprisingly abrupt—and global—change. But isn’t matsutake economically negligible? Shouldn’t it offer only the view from a frog in a well? On the contrary: the modest success of the Oregon-to-Japan matsutake commodity chain is the tip of an iceberg, and following the iceberg to its underwater girth brings up forgotten stories that still grip the planet. Things that seem small often turn out to be big. It is the very negligible quality of the matsutake commodity chain that hid it from the view of twenty-first-century reformers, thus preserving a late-twentieth-century history that shook the world.
[Photo caption: Translating value, Tokyo. Matsutake, calculator, telephone: still life at an intermediate wholesaler’s booth.]
This is the history of encounters between Japan and the United States that shaped the global economy. Shifting relations between U.S. and Japanese capital, I argue, led to global supply chains—and to the end of expectations of progress aimed toward collective advancement. Global supply chains ended expectations of progress because they allowed lead corporations to let go of their commitment to controlling labor. Standardizing labor required education and regularized jobs, thus connecting profits and progress. In supply chains, in contrast, goods gathered from many arrangements can lead to profits for the lead firm; commitments to jobs, education, and well-being are no longer even rhetorically necessary. Supply chains require a particular kind of salvage accumulation, involving translation across patches. The modern history of U.S.-Japanese relations is a counterpoint of call-and-response that spread this practice around the world. Two bookends frame the tale. In the mid-nineteenth century, U.S. ships threatened Edo Bay in order to “open” the Japanese economy for American businessmen; this sparked a Japanese revolution that overturned the national political economy and pushed Japan into international commerce. Japanese refer to the indirect upending of Japan through the icon of the “Black Ships” that carried the U.S. threat. This icon is useful in considering what happened—in reverse—150 years later, at the end of the twentieth century, when the threat of Japan’s commercial power indirectly upended the U.S. economy. Scared by the success of Japanese investments, American business leaders destroyed the corporation as a social institution and propelled the U.S. economy into the world of Japanese-style supply chains. One might call this “Reverse Black Ships.” In the great wave of mergers and acquisitions of the 1990s, with their corporate reshufflings, the expectation that U.S. corporate leaders ought to provide employment disappeared. Instead, labor would be outsourced elsewhere—into more and more precarious situations. 
The matsutake commodity chain linking Oregon and Japan is just one of many global outsourcing arrangements inspired by the success of Japanese capital between the 1960s and the 1980s. This history was quickly covered up. In the 1990s, American businessmen reclaimed preeminence in the world economy, while the Japanese economy fell drastically. By the twenty-first century, Japan’s economic power had been forgotten, and progress, fueled by American ingenuity, appeared to account for the global shift to outsourcing. This is where a humble commodity chain comes in to help us cut through obfuscations. What economic models allowed its organizational forms to emerge? The only way to answer this question is to follow twentieth-century Japanese economic innovations. These were not created in isolation: they formed from tensions and dialogues across the Pacific. The matsutake commodity chain places us firmly in U.S.-Japanese economic interactions, and from here we can notice this chunk of forgotten history. In what follows, I let the thread of the story unroll quite far from matsutake. Yet at each step I need the chain’s reminders to resist the lull of current erasures. This is not just a story, then, but also a method: big histories are always best told through insistent, if humble, details. Money can open the tale. Both the U.S. dollar and the yen came into being in a world dominated by Spanish pesos, minted since the sixteenth century from the exploitation of Latin American silver. Neither the United States nor Japan was an early player, as the United States only came into existence in the eighteenth century, and Japan was ruled by inward-looking lords, who strictly regulated foreign trade, from the seventeenth to the nineteenth centuries. The grand futures of neither the dollar nor the yen were evident at their births. By the mid-nineteenth century, however, the dollar had gained the clout of imperial gunboats deployed in its name. U.S. businessmen resented the tight control over foreign trade exerted by the Tokugawa shogunate.1 In 1853, Matthew Perry, commodore of the U.S. Navy, took up their cause by leading a fleet of armed ships to Edo Bay. Intimidated by this show of force, the shogunate signed the Convention of Kanagawa in 1854, which opened ports for U.S. trade.2 Japanese elites were aware of the subjugation of China in the wake of that country’s opposition to British “free trade” opium. To avoid war, they signed away their rights. But domestic crisis followed, resulting in the toppling of the shogunate. A new era opened with the brief civil war known as the Meiji Restoration. The winning group looked to Western modernity for their inspiration. In 1871, the Meiji government established the yen as the Japanese national currency, intending it to move within European and American circuits. Thus the dollar, indirectly, helped give birth to the yen. Meiji-era elites were not satisfied, however, to let foreigners control trade. They quickly worked to learn Western conventions and to establish their own firms as domestic equivalents to foreign ones. The government brought in foreign experts and sent young men abroad to study Western languages, laws, and trading practices. The young men came home and established professions, industries, banks, and trading companies, which flourished in Japan’s push for “the modern.” The new money was embedded in new contract laws, political forms, and debates about value. 
Meiji Japan was full of entrepreneurial energies, and international trade quickly emerged as an important sector of the economy.3 Japan lacked natural resources for industrialization, and the importation of raw materials was seen as an essential service for the building of the nation. Trading was among the most successful Meiji enterprises, and it became associated with rising new industries such as the production of cotton thread and textiles. Meiji-era traders saw their job as mediating between Japan and foreign economic worlds. Traders were trained through experience in foreign countries, gaining the doubled cultural agility that allowed them to negotiate across radical difference. Their work exemplifies Satsuka’s concept of “translation,” in which learning another culture both bridges and maintains difference.4 The new traders learned how commodities were traded in other places, and they used that knowledge to make advantageous contracts for Japan. In the terms economists use, they were specialists in “imperfect markets,” that is, markets in which information is not freely available to all buyers and sellers. Meiji-era traders coordinated markets across national borders; they also worked across incommensurable value systems. As Japanese have continued to imagine a “Japan” that exists in dynamic difference with something called “the West,” this understanding of international trading as translation has persisted, informing contemporary business practices. Trading creates capitalist value through its work of translation. Meiji-era traders associated themselves with industrial enterprises. Industry needed raw materials gained through trade; trade and industry flourished together. In the early twentieth century, the boom economy associated with World War I allowed large conglomerates to form, encompassing banking, mining, industry, and foreign trade.5 In contrast to twentieth-century U.S. corporate giants, these conglomerates, the zaibatsu, were coordinated by finance capital, not production: Banking and trading were central to their mission. From the first they were involved with government business (Mitsui, for example, had provided the money to overthrow the shogunate);6 in the run-up to World War II, pressed by Japanese nationalists, the zaibatsu became increasingly entangled with imperial expansion. When Japan lost the war, the zaibatsu were the first targets of the U.S. occupation.7 The yen lost its value; the Japanese economy was in shambles. In the first days of the occupation, it seemed that the United States was favoring smaller firms, and even the advancement of labor. Soon enough, however, the American occupiers arranged for the rehabilitation of once-disgraced nationalists and rebuilt the Japanese economy as a bulwark against communism. It was in this climate that associations of banks, industrial enterprises, and specialists in trade formed again, although less formally, as keiretsu “enterprise groups.”8 At the heart of most enterprise groups was a general trading company in partnership with a bank.9 The bank transferred money to the trading company, which, in turn, made smaller loans to its associated enterprises. The bank did not have to monitor these small loans, which the trading company used to facilitate the formation of supply chains. This model is well suited to stretching across national borders. Trading companies advanced loans—or equipment, technical advice, or special marketing agreements—to their supply chain partners overseas. 
The trading company’s job was to translate goods procured in varied cultural and economic arrangements into inventory. It is hard not to see in this arrangement the roots of the current hegemony of global supply chains, with their associated form of salvage accumulation.10 I first learned about supply chains in studying logging in Indonesia, and this is a place to see how the Japanese supply-chain model works.11 During Japan’s building boom in the 1970s and 1980s, Japanese imported Indonesian trees to make plywood construction molds. But no Japanese cut down Indonesian trees. Japanese general trading companies offered loans, technical assistance, and trade agreements to firms from other countries, which cut logs to Japanese specifications. This arrangement had many advantages for Japanese traders. First, it avoided political risk. Japanese businessmen were aware of the political difficulties of Chinese Indonesians who, resented for their wealth and willingness to cooperate with the more ruthless policies of the Indonesian government, were targets in periodic riots. Japanese businessmen evaded such difficulties for themselves by advancing money to Chinese Indonesians, who made the deals with Indonesian generals and took the risks. Second, the arrangement facilitated transnational mobility. Japanese traders had already deforested the Philippines and much of Malaysian Borneo by the time they got to Indonesia. Rather than adapting to a new country, the traders could merely bring in agents willing to work with them in each location. Indeed, Filipino and Malaysian loggers, financed by Japanese traders, were ready and able to go to work in cutting down Indonesian trees. Third, supply-chain arrangements facilitated Japanese trade standards while ignoring environmental consequences. Environmentalists looking for targets could find only a grab bag of varied companies, many Indonesian; no Japanese were in the forests. Fourth, supply-chain arrangements accommodated illegal logging as a layer of subcontracting, which harvested trees protected by environmental regulations. Illegal loggers sold their logs to the larger contractors, who passed them on to Japan. No one need be responsible. And—even after Indonesia started its own plywood businesses, in a supply-chain hierarchy modeled on Japanese trade—the wood was so cheap! The cost could be calculated without regard to the lives and livelihoods of loggers, trees, or forest residents. Japanese trading companies made the logging of Southeast Asia possible. They were equally busy with other commodities and in other parts of the world.12 Let me return to the early post–World War II period when these arrangements were emerging to see how this system developed. Some of the first postwar supply chains from Japan made use of ties with Japan’s former colony, Korea. At this time, the United States was the world’s richest country and the best destination for every country’s wares, but it had imposed a strict quota on goods imported from Japan. Historian Robert Castley tells the story of how Japan helped build South Korea’s economy to avoid U.S. quotas.13 By transferring light industry to South Korea, Japanese traders could export more products freely to the United States. Yet Japanese direct investment was resented in Korea. Thus Japan adopted what Castley calls a “putting out” approach. 
“It involved merchants (or firms) supplying subcontractors with loans, credit, machinery and equipment to produce or finish goods, which would be sold in distant markets by the merchant.”14 Castley notes the power of traders and bankers in this strategy: “the Japanese offered long-term contracts with overseas suppliers and frequently loans for the development of resources.”15 This form of expansion, he says, was a form of political as well as economic security in Japan. The putting-out system transferred less profitable manufacturing sectors and older technologies to South Korea, clearing the way for Japanese businesses to upgrade. According to this model, which Japanese proponents later graced with the image of “flying geese,” Korean businesses would always be one cycle of innovation behind Japan.16 But all would be flying forward, in part because Koreans could then transfer their own outdated manufacturing sectors to the poorer countries of Southeast Asia, allowing Koreans to inherit new rounds of Japanese innovation. South Korean elites were happy to benefit from Japanese capital—some of it transferred as war reparations. The resulting business networks formed models for the transnational expansion of capital in Japan, including the work of the Japanese-controlled Asian Development Bank. By the 1970s, many kinds of supply chains snaked in and out of Japan. General trading companies organized cross-continental supply chains for raw materials, becoming some of the richest companies in the world. Banks sponsored enterprises across Asia with links to Japan. Meanwhile, producers had organized their own supply chains, sometimes called “vertical keiretsu” in the English-language literature. Car companies, for example, subcontracted the development and manufacture of parts, saving costs. Mom-and-pop suppliers made industrial components at home. Salvage accumulation and supply-chain subcontracting had grown together. The combined result was so successful that U.S. businesses, and their government supporters, could feel the heat. The success of Japanese cars was particularly painful to American pundits who had become used to thinking of the U.S. economy in relation to its cars. The appearance of Japanese cars in the United States, and the related decline of Detroit’s car companies, sparked public awareness of Japan’s rising economic fortunes. Some business leaders jumped to learn from Japanese success, showing interest in “quality control” and “corporate culture.”17 Other business leaders sought U.S. reprisals against Japan. A wave of public fear emerged. One index was the 1982 murder of Chinese American Vincent Chin, mistaken for a Japanese by unemployed white autoworkers in Detroit.18 The threat posed by Japan unleashed a U.S. revolution. Reverse Black Ships overturned the U.S. order of things, but through U.S. efforts. Empowered by public fears of U.S. decline, a small group of activist stockholders and business school professors, who might otherwise have never gained a hearing, were allowed to dismantle American corporations.19 The activists of the 1980s “shareholders’ revolution” reacted to what they saw as the erosion of U.S. power. To regain it, they aimed to take back corporations for their owners, the stockholders, rather than leaving them in the hands of professional managers. They began to buy up corporations to strip them of assets and resell them. 
By the 1990s, the movement had won; the radical chic of “leveraged buyouts” became the mainstream investment strategy of “mergers and acquisitions.” As corporations rid themselves of all but their most profitable sectors, most of what had once been inside those corporations was contracted to distant suppliers. Supply chains, and thus commitment to their distinctive form of salvage accumulation, took off as the dominant form of capitalism in the United States. This worked so well for investors that by the turn of the century, U.S. business leaders had forgotten that this shift was part of a struggle for position and had recast it as the leading edge of an evolutionary process. They were busy cramming the world into this process, and had, indeed, made headway in enforcing an American version in Japan.20 To understand how Japan’s threat had faded requires going back a bit—and allowing money to emerge as a protagonist of the story. In the 1980s and 1990s, lots of things shifted because of confrontations between the dollar and the yen. In 1949, the yen was pegged to the U.S. dollar as part of the Bretton Woods agreements. As the Japanese economy flourished, in part through nonreciprocated exports to the United States, the U.S. balance of payments with Japan suffered.21 From the U.S. perspective, the yen was “undervalued,” making Japanese goods cheap in the United States and U.S. exports to Japan too dear there. U.S. anxieties about the yen were one small part of the situation in 1971 that led to the U.S. abandonment of the gold standard. In 1973, the yen was allowed to float. Then in 1979, the U.S. raised interest rates, attracting investment in the dollar and keeping its value high. Because the Japanese economy continued to export to the United States, the Japanese government bought and sold U.S. dollars to keep the price of the yen low. In the first half of the 1980s, capital flowed out of Japan, keeping the yen weak in relation to the dollar. By 1985, U.S. business leaders had panicked about this situation. In response, the U.S. engineered an international agreement, the Plaza Accord. The value of the dollar was lowered, and the yen rose. By 1988, the yen had doubled its value in relation to the dollar. Japanese consumers could buy almost everything abroad—including matsutake. National pride rose; this was the moment of The Japan That Can Say No.22 However, the situation made it difficult for Japanese companies to export their goods, which now were priced too high. Japanese companies responded by sending more production abroad. So did their suppliers in South Korea, Taiwan, and Southeast Asia, also reeling from the change in currency values. Supply chains traveled everywhere. Here’s how two American sociologists describe the situation: Faced with the sudden increase of the dollar value of their factor inputs, and eager to keep their prices low and thus maintain their contracts with American retailers, Asian businesses quickly began to diversify. Most of Taiwan’s light industries . . . moved to . . . mainland China, but also to Southeast Asia. . . . Large segments of Japanese export-oriented industries moved to Southeast Asia. In addition some firms, such as Toyota, Honda, and Sony, established portions of their business in North America. South Korean businesses also moved labor-intensive operations to Southeast Asia, as well as to other developing countries in Latin America and central Europe. 
In each place that they established their new businesses, low-price supplier networks began to form.23 The Japanese national economy went into shock—first with the “bubble economy” of inflated real estate and stock prices in the late 1980s, then the “lost decade” of recession in the 1990s, then the further financial crisis of 1997.24 But supply chains took off as never before: not just Japanese-sponsored chains but chains from all Japan’s supplier sites, which now had their own chains. Supply-chain capitalism became a presence around the world. But Japan was no longer in charge. One company’s history sharply etches the change between Japanese and U.S. leadership of global supply chains: Nike, the trendsetting brand of athletic shoes. Nike began as a U.S. outpost of a Japanese distribution chain for athletic shoes. (Distribution is an element of many Japanese supply chains.) Subject to the disciplines of the Japanese trading regime, Nike learned the supply-chain model. But Nike slowly began to transform it, American style. Instead of making value through trade as translation, Nike would use American advantages in advertising and branding. When Nike’s founders established their independence from their Japanese chain, they added style—in the form of the Nike “swoosh” and advertisements featuring black American sports heroes. Learning from their Japanese experience, however, they never considered manufacturing shoes. “We don’t know the first thing about manufacturing. We are marketers and designers,” explained one Nike vice-president.25 Instead, they contracted with the proliferating supply networks developing across Asia, making good use of the post-1985 profusion of “low-price supplier networks” mentioned above. By the early twenty-first century, the company had contracts with more than nine hundred factories, and it had become a symbol of both the excitement and the terrors of supply-chain capitalism. To speak of Nike evokes the horrors of sweatshops, on the one hand, and the pleasures of designer brands, on the other. Nike has succeeded in making this contradiction seem particularly American. But Nike’s rise from a Japanese supply chain reminds us of the pervasive legacy of Japan. That legacy is clear in the matsutake supply chain, too small and too specialized to attract the intervention of American big business. Yet the chain stretches to North America, enrolling Americans as suppliers rather than as chain directors. Nike on its head! How were Americans convinced to take on such a lowly role? As I have explained, no one in Oregon thinks of him- or herself as an employee of a Japanese business. The pickers, buyers, and field agents are there for freedom. But freedom has come to mobilize the poor only through the freeing of American livelihoods from expectations of employment—a result of the transpacific dialogue between U.S. and Japanese capital. In the matsutake commodity chain, then, we see the history I have been describing: Japanese traders, searching for local partners; American workers, released from the hope for regular jobs; translations across aspirations, allowing American freedom to assemble Japanese inventory. I have been arguing that the organization of the commodity chain allows us to notice this history, which otherwise might be obscured by hype about U.S. global leadership. 
When humble commodities are allowed to illuminate big histories, the world economy is revealed as emerging within historical conjunctures: the indeterminacies of encounter. If conjunctures make history, everything rests on moments of coordination—the translations that allow Japanese investors to profit from American foraging, just as pickers take advantage of Japanese wealth. How are mushrooms that are foraged for freedom transformed into inventory? I return to Open Ticket—and its commodity chain.
Indeterminacy has a rich legacy in human appreciation of mushrooms. American composer John Cage wrote a set of short performance pieces called Indeterminacy, many of which celebrate encounters with mushrooms.1 Hunting wild mushrooms, for Cage, required a particular kind of attention: attention to the here and now of encounter, in all its contingencies and surprises. Cage’s music was all about this “always different” here and now, which he contrasted to the enduring “sameness” of classical composition; he composed to get the audience to listen as much to ambient sounds as to composed music. In one famous composition, 4'33", no music is played at all, and the audience is forced to just listen. Cage’s attention to listening as things occurred brought him to appreciate indeterminacy. The Cage quotation with which I began this chapter is his translation of seventeenth-century Japanese poet Matsuo Basho’s haiku, “matsutake ya shiranu ki no ha no hebari tsuku,” which I have seen translated as “Matsutake; And on it stuck / The leaf of some unknown tree.”2 Cage decided that the indeterminacy of encounter was not clear enough in such translations. First he settled on “That that’s unknown brings mushroom and leaf together,” which nicely expresses the indeterminacy of encounter. But, he thought, it is too ponderous. “What leaf? What mushroom?” can also take us into that open-endedness that Cage so valued in learning from mushrooms.3 Indeterminacy has been equally important in what scientists learn from mushrooms. Mycologist Alan Rayner finds the indeterminacy of fungal growth one of the most exciting things about fungi.4 Human bodies achieve a determinate form early in our lives. Barring injury, we’ll never be all that different in shape from what we were as adolescents. We can’t grow extra limbs, and we’re stuck with the one brain we’ve each got. In contrast, fungi keep growing and changing form all their lives. Fungi are famous for changing shape in relation to their encounters and environments. Many are “potentially immortal,” meaning they die from disease, injury, or lack of resources, but not from old age. Even this little fact can alert us to how much our thoughts about knowledge and existence just assume determinate life form and old age. We rarely imagine life without such limits—and when we do we stray into magic. Rayner challenges us to think with mushrooms, otherwise. Some aspects of our lives are more comparable to fungal indeterminacy, he points out. Our daily habits are repetitive, but they are also open-ended, responding to opportunity and encounter. What if our indeterminate life form were not the shape of our bodies but rather the shape of our motions over time? Such indeterminacy expands our concept of human life, showing us how we are transformed by encounter. Humans and fungi share such here-and-now transformations through encounter. Sometimes they encounter each other. As another seventeenth-century haiku put it: “Matsutake / Taken by someone else / Right in front of my nose.”5 What person? What mushroom? The smell of matsutake transformed me in a physical way. The first time I cooked them, they ruined an otherwise lovely stir-fry. The smell was overwhelming. I couldn’t eat it; I couldn’t even pick out the other vegetables without encountering the smell. I threw the whole pan away and ate my rice plain. After that I was cautious, collecting but not eating. Finally, one day, I brought the whole load to a Japanese colleague, who was head over heels in delight. She had never seen so much matsutake in her life. Of course she prepared some for dinner. First, she showed me how she tore apart each mushroom, not touching it with a knife. The metal of the knife changes the flavor, she said, and, besides, her mother told her that the spirit of the mushroom doesn’t like it. Then she grilled the matsutake on a hot pan without oil. Oil changes the smell, she explained. Worse yet, butter, with its strong smell. Matsutake must be dry grilled or put into a soup; oil or butter ruins it. She served the grilled matsutake with a bit of lime juice. It was marvelous! The smell had begun to delight me. Over the next few weeks, my senses changed. It was an amazing year for matsutake, and they were everywhere. Now, when I caught a whiff, I felt happy. I lived for several years in Borneo, where I had had a similar experience with durian, that marvelously stinky tropical fruit. The first time I was served durian I thought I would vomit. But it was a good year for durian, and the smell was everywhere. Before long I found myself thrilled by the smell; I couldn’t remember what had sickened me. Similarly, matsutake: I could no longer remember what I had found so disturbing. Now it smelled like joy. I’m not the only one who has that reaction. Koji Ueda runs a beautifully trim vegetable shop in Kyoto’s traditional market. During the matsutake season, he explained, most people who come into the store don’t want to buy (his matsutake are expensive); they want to smell. 
Just coming into the store makes people happy, he said. That's why he sells matsutake, he said: for the sheer pleasure it gives people. Perhaps the happiness factor in smelling matsutake is what pressed Japanese odor engineers to manufacture an artificial matsutake smell. Now you can buy matsutake-flavored potato chips and matsutake-flavored instant miso soup. I've tried them, and I can sense a distant memory of matsutake at the edge of my tongue, but it's nothing like encountering a mushroom. Still, many Japanese have only known matsutake in this form, or as the frozen mushrooms used in matsutake rice or matsutake pizza. They wonder what the fuss is all about and feel indulgently critical toward those who go on and on about matsutake. Nothing can smell all that good. Matsutake lovers in Japan know this scorn and cultivate a defensive exuberance about the mushroom. The smell of matsutake, they say, recalls times past that these young people never knew, much to their detriment. Matsutake, they say, smells like village life and a childhood visiting grandparents and chasing dragonflies. It recalls open pinewoods, now crowded out and dying. Many small memories come together in the smell. It brings to mind the paper dividers on village interior doors, one woman explained; her grandmother would change the papers every New Year and use them to wrap the next year's mushrooms. It was an easier time, before nature became degraded and poisonous. Nostalgia can be put to good uses. Or so explained Makoto Ogawa, the elder statesman of matsutake science in Kyoto. When I met him, he had just retired. Worse yet, he had cleaned out his office and thrown away books and scientific articles. But he was a walking library of matsutake science and history. Retirement had made it easier for him to talk about his passions. His matsutake science, he explained, had always involved advocacy for both people and nature. He had dreamed that showing people how to nurture matsutake forests might revitalize connections between city and countryside—as urban people became interested in rural life, and villagers had a valuable product to sell. Meanwhile, even as matsutake research could be funded by economic excitement, it had many benefits for basic science, especially in understanding relations among living things in changing ecologies. If nostalgia was a part of this project, so much the better. This was his nostalgia too. He took my research team to see what once was a thriving matsutake forest behind an old temple. Now the hill was alternately dark with planted conifers and choked with evergreen broadleaf trees, with only a few dying pines. We found no matsutake. Once, he recalled, that hillside was teeming with mushrooms. Like Proust's madeleines, matsutake are redolent with temps perdu. Dr. Ogawa savors nostalgia with considerable irony and laughter. As we stood in the rain beside the matsutake-less temple forest, he explained the Korean origin of Japanese regard for matsutake. Before you hear the story, consider that there is no love lost between Japanese nationalists and Koreans. For Dr. Ogawa to remind us that Korean aristocrats started Japanese civilization works against the grain of Japanese desire. Besides, civilization, in his tale, is not all for the good. Long before they came to central Japan, Dr. Ogawa related, Koreans had cut down their forests to build temples and fuel iron forging.
They had developed in their homeland the human-disturbed open pine forests in which matsutake grow long before such forests emerged in Japan. When Koreans expanded to Japan in the eighth century, they cut down forests. Pine forests sprang up from such deforestation, and with them matsutake. Koreans smelled the matsutake—and they thought of home. The first nostalgia: the first love of matsutake. It was in longing for Korea that Japan's new aristocracy first glorified the now famous autumn aroma, Dr. Ogawa told us. No wonder, too, that Japanese abroad are so obsessed with matsutake, he added. He ended with a funny story about a Japanese American matsutake hunter he met in Oregon who, in a badly garbled mixture of Japanese and English, saluted Dr. Ogawa's research, saying, "We Japanese are matsutake crazy!" Dr. Ogawa's stories tickled me because they situated nostalgia, but they also drove home another point: matsutake grows only in deeply disturbed forests. Matsutake and red pine are partners in central Japan, and both grow only where people have caused significant deforestation. All over the world, indeed, matsutake are associated with the most disturbed kinds of forests: places where glaciers, volcanoes, sand dunes—or human actions—have done away with other trees and even organic soil. The pumice flats I walked in central Oregon are in some ways typical of the kind of land matsutake knows how to inhabit: land on which most plants and other fungi can find no hold. On such impoverished landscapes, the indeterminacies of encounter loom. What pioneer has found its way here, and how can it live? Even the hardiest of seedlings is unlikely to make it unless it finds a partner in an equally hardy fungus to draw nutrients from the rocky ground. (What leaf? What mushroom?) The indeterminacy of fungal growth matters too. Might it encounter the roots of a receptive tree? A change in substrate or potential nutrition? Through its indeterminate growth, the fungus learns the landscape. There are humans to encounter as well. Will they inadvertently nurture the fungus while cutting firewood and gathering green manure? Or will they introduce hostile plantings, import exotic diseases, or pave the area for suburban development? Humans matter on these landscapes. And humans (like fungi and trees) bring histories with them to meet the challenges of the encounter. These histories, both human and not human, are never robotic programs but rather condensations in the indeterminate here and now; the past we grasp, as philosopher Walter Benjamin puts it, is a memory "that flashes in a moment of danger."6 We enact history, Benjamin writes, as "a tiger's leap into that which has gone before."7 Science studies scholar Helen Verran offers another image: Among Australia's Yolngu people, she relates, the recollection of the ancestors' dreaming is condensed for present challenges in a rite at the climax of which a spear is thrown into the center of the storytellers' circle. The toss of the spear merges the past in the here and now.8 Through smell, all of us know that spear's throw, that tiger's leap. The past we bring to encounters is condensed in smell. To smell childhood visits with one's grandparents condenses a great chunk of Japanese history, not just the vitality of village life in the mid-twentieth century, but the nineteenth-century deforestation that came before, denuding the landscape, and the urbanization and abandonment of the forests that later followed.
While some Japanese may smell nostalgia in the forests made by their disturbances, this is not, of course, the only feeling that people bring to such wild places. Consider the smell of matsutake again. It is time to tell you that most people of European origin can't stand the smell. A Norwegian gave the Eurasian species its first scientific name, Tricholoma nauseosum, the nauseating Trich. (In recent years, taxonomists made an exception to usual rules of precedence to rename the mushroom, acknowledging Japanese tastes, as Tricholoma matsutake.) Americans of European descent tend to be equally unimpressed by the smell of the Pacific Northwest's Tricholoma magnivelare. "Mold," "turpentine," "mud," white pickers said, when I asked them to characterize the smell. More than one moved our conversation to the foul smell of rotting fungi. Some were familiar with California mycologist David Arora's characterization of the smell as "a provocative compromise between 'red hots' and dirty socks."9 Not exactly something you would want to eat. When Oregon's white pickers prepare the mushroom as food, they pickle it or smoke it. The processing masks the smell, making the mushroom anonymous. It is not surprising, perhaps, that U.S. scientists have studied the smell of matsutake to see what it repels (slugs), but Japanese scientists have studied the smell to consider what it attracts (some flying insects).10 Is it the "same" smell if people bring such different sensibilities to the encounter? Does that problem stretch to slugs and gnats as well as people? What if noses—as in my experience—change? What if the mushroom too can change through its encounters? Matsutake in Oregon associate with many host trees. Oregon pickers can distinguish the host tree with which a particular matsutake has grown—partly from the size and shape, but partly from the smell. The subject came up one day when I examined some truly bad-smelling matsutake being offered for sale. The picker explained that he found these mushrooms under white fir, an unusual host tree for matsutake. Loggers, he said, call white fir "piss fir" because of the bad smell the wood emits when you cut it. The mushrooms smelled as bad as a wounded fir. To me, they did not smell like matsutake at all. But wasn't this smell some piss fir–matsutake combination, made in the encounter? There is an intriguing nature-culture knot in such indeterminacies. Different ways of smelling and different qualities of smell are wrapped up together. It seems impossible to describe the smell of matsutake without telling all the cultural-and-natural histories condensed together in it. Any attempt at definitive untangling—perhaps like artificial matsutake scent—is likely to lose the point: the indeterminate experience of encounter, with its tiger's leap into history. What else is smell? The smell of matsutake wraps and tangles memory and history—and not just for humans. It assembles many ways of being in an affect-laden knot that packs its own punch. Emerging from encounter, it shows us history-in-the-making. Smell it.

I first heard of matsutake from mycologist David Arora, who studied matsutake camps in Oregon between 1993 and 1998. I was looking for a culturally colorful global commodity, and Arora's stories of matsutake intrigued me. He told me of the buyers who set up tents by the side of the highway to buy mushrooms at night. "They have nothing to do all day, so they'll have plenty of time to talk to you," he ventured. And there the buyers were—but so much more!
In the big camp, I seemed to have stepped into rural Southeast Asia. Mien wearing sarongs boiled water in kerosene cans over stone tripods and hung strips of game and fish over the stove to dry. Hmong all the way from North Carolina brought home-canned bamboo shoots for sale. Lao noodle tents sold not only pho but also the most authentic laap I had eaten in the United States, all raw blood, chilies, and intestines. Lao karaoke blared from battery-powered speakers. I even met a Cham picker, although he did not speak Cham, which I thought perhaps I could manage from its closeness to Malay. Mocking my linguistic limitations, a Khmer teenager wearing grunge boasted that he spoke four languages: Khmer, Lao, English, and Ebonics. Local Native Americans sometimes came to sell their mushrooms. There were also both whites and Latinos, although most avoided the official camp, staying in the woods alone or in small groups. And visitors: A Sacramento Filipino followed Mien friends up here one year, although he said he never got the point. A Portland Korean thought maybe he might join. Yet there was something not at all cosmopolitan about the scene as well: A rift separated these pickers and buyers from shops and consumers in Japan. Everyone knew that the mushrooms (except for a small percentage bought for Japanese American markets) were going to Japan. Every buyer and bulker longed to sell directly to Japan—but none had any idea how. Misconceptions about the matsutake trade both in Japan and in other supply sites proliferated. White pickers swore that the value of the mushrooms in Japan was as an aphrodisiac. (While matsutake in Japan do have phallic connotations, no one eats them as a drug.) Some complained about the Chinese Red Army, which, they said, drafted people to pick, which depressed global prices. (Pickers in China are independent, just as in Oregon.) When someone discovered extremely high prices in Tokyo on the Internet, no one realized that these prices referred to Japanese matsutake. One exceptional bulker, of Chinese origin and fluent in Japanese, whispered to me about these misunderstandings—but he was an outsider. Except for this man, Oregon pickers, buyers, and bulkers were completely in the dark about the Japanese side of the trade. They made up fantasy landscapes of Japan, and they did not know how to assess them. They had their own matsutake world: a patch of practices and meanings that brought them together as matsutake suppliers—but did not inform the mushrooms' further passage. This rift between U.S. and Japanese segments of the commodity chain guided my search. Different processes for making and accessing value characterized each segment. Given this diversity, what makes this part of that global economy we call capitalism?

It may seem odd to want to tackle capitalism with a theory that stresses ephemeral assemblages and multidirectional histories. After all, the global economy has been the centerpiece of progress, and even radical critics have described its forward-looking motion as filling up the world. Like a giant bulldozer, capitalism appears to flatten the earth to its specifications. But all this only raises the stakes for asking what else is going on—not in some protected enclave, but rather everywhere, both inside and out. Impressed by the rise of factories in the nineteenth century, Marx showed us forms of capitalism that required the rationalization of wage labor and raw materials.
Most analysts have followed this precedent, imagining a factory-driven system with a coherent governance structure, built in cooperation with nation-states. Yet today—as then—much of the economy takes place in radically different scenes. Supply chains snake back and forth not only across continents but also across standards; it would be hard to identify a single rationality across the chain. Yet assets are still amassed for further investment. How does this work?

[Photograph: Capitalist edge effects, Oregon. Pickers line up to sell matsutake to a roadside buyer. Precarious livelihoods show themselves at the edges of capitalist governance. Precarity is that here and now in which pasts may not lead to futures.]

A supply chain is a particular kind of commodity chain: one in which lead firms direct commodity traffic.1 Throughout this part, I explore the supply chain linking matsutake pickers in the forests of Oregon with those who eat the mushrooms in Japan. The chain is surprising and full of cultural variety. The factory work through which we know capitalism is mainly missing. But the chain illuminates something important about capitalism today: Amassing wealth is possible without rationalizing labor and raw materials. Instead, it requires acts of translation across varied social and political spaces, which, borrowing from ecologists' usage, I call "patches." Translation, in Shiho Satsuka's sense, is the drawing of one world-making project into another.2 While the term draws attention to language, it can also refer to other forms of partial attunement. Translations across sites of difference are capitalism: they make it possible for investors to accumulate wealth. How do mushrooms foraged as trophies of freedom become capitalist assets—and later, exemplary Japanese gifts? Answering this question requires attention to the unexpected assemblages of the chain's component links, as well as the translation processes that draw the links together into a transnational circuit. Capitalism is a system for concentrating wealth, which makes possible new investments, which further concentrate wealth. This process is accumulation. Classic models take us to the factory: factory owners concentrate wealth by paying workers less than the value of the goods that the workers produce each day. Owners "accumulate" investment assets from this extra value. Even in factories, however, there are other elements of accumulation. In the nineteenth century, when capitalism first became an object of inquiry, raw materials were imagined as an infinite bequest from Nature to Man. Raw materials can no longer be taken for granted. In our food procurement system, for example, capitalists exploit ecologies not only by reshaping them but also by taking advantage of their capacities. Even in industrial farms, farmers depend on life processes outside their control, such as photosynthesis and animal digestion. In capitalist farms, living things made within ecological processes are coopted for the concentration of wealth. This is what I call "salvage," that is, taking advantage of value produced without capitalist control. Many capitalist raw materials (consider coal and oil) came into existence long before capitalism. Capitalists also cannot produce human life, the prerequisite of labor. "Salvage accumulation" is the process through which lead firms amass capital without controlling the conditions under which commodities are produced.
Salvage is not an ornament on ordinary capitalist processes; it is a feature of how capitalism works.3 Sites for salvage are simultaneously inside and outside capitalism; I call them "pericapitalist."4 All kinds of goods and services produced by pericapitalist activities, human and nonhuman, are salvaged for capitalist accumulation. If a peasant family produces a crop that enters capitalist food chains, capital accumulation is possible through salvaging the value created in peasant farming. Now that global supply chains have come to characterize world capitalism, we see this process everywhere. "Supply chains" are commodity chains that translate value to the benefit of dominant firms; translation between noncapitalist and capitalist value systems is what they do. Salvage accumulation through global supply chains is not new, and some well-known earlier examples can clarify how it works. Consider the nineteenth-century ivory supply chain connecting central Africa and Europe as told in Joseph Conrad's novel Heart of Darkness.5 The story turns around the narrator's discovery that the European trader he much admired has turned to savagery to procure his ivory. The savagery is a surprise because everyone expects the European presence in Africa to be a force for civilization and progress. Instead, civilization and progress turn out to be cover-ups and translation mechanisms for getting access to value procured through violence: classic salvage. For a brighter view of supply-chain translation, consider Herman Melville's account of the nineteenth-century procurement of whale oil for Yankee investors.6 Moby-Dick tells of a ship of whalers whose rowdy cosmopolitanism contrasts sharply with our stereotypes of factory discipline; yet the oil they obtain from killing whales around the world enters a U.S.-based capitalist supply chain. Strangely, all the harpooners on the Pequod are unassimilated indigenous people from Asia, Africa, America, and the Pacific. The ship is unable to kill a single whale without the expertise of people who are completely untrained in U.S. industrial discipline. But the products of this work must eventually be translated into capitalist value forms; the ship sails only because of capitalist financing. The conversion of indigenous knowledge into capitalist returns is salvage accumulation. So too is the conversion of whale life into investments. Before you conclude that salvage accumulation is archaic, let me turn to a contemporary example. Technological advances in managing inventory have energized today's global supply chains; inventory management allows lead firms to source their products from all kinds of economic arrangements, capitalist and otherwise. One firm that helped put such innovations in place is the retail giant Wal-Mart. Wal-Mart pioneered the required use of universal product codes (UPCs), the black-and-white bars that allow computers to know these products as inventory.7 The legibility of inventory, in turn, means that Wal-Mart is able to ignore the labor and environmental conditions through which its products are made: pericapitalist methods, including theft and violence, may be part of the production process. With a nod to Woody Guthrie, we might think about the contrast between production and accounting through the two sides of the UPC tag.8 One side of the tag, the side with the black-and-white bars, allows the product to be minutely tracked and assessed.
The other side of the tag is blank, indexing Wal-Mart's total lack of concern with how the product is made, since value can be translated through accounting. Wal-Mart has become famous for forcing its suppliers to make products ever more cheaply, thus encouraging savage labor and destructive environmental practices.9 Savage and salvage are often twins: Salvage translates violence and pollution into profit. As inventory moves increasingly under control, the requirement to control labor and raw materials recedes; supply chains make value from translating values produced in quite varied circumstances into capitalist inventory. One way of thinking about this is through scalability, the technical feat of creating expansion without the distortion of changing relations. The legibility of inventory allows scalable retail expansion for Wal-Mart without requiring that production be scalable. Production is left to the riotous diversity of nonscalability, with its relationally particular dreams and schemes. We know this best in "the race to the bottom": the role of global supply chains in promoting coerced labor, dangerous sweatshops, poisonous substitute ingredients, and irresponsible environmental gouging and dumping. Where lead firms pressure suppliers to provide cheaper and cheaper products, such production conditions are predictable outcomes. As in Heart of Darkness, unregulated production is translated in the commodity chain, and even reimagined as progress. This is frightening. At the same time, as J. K. Gibson-Graham argue in their optimistic reach toward a "postcapitalist politics," economic diversity can be hopeful.10 Pericapitalist economic forms can be sites for rethinking the unquestioned authority of capitalism in our lives. At the very least, diversity offers a chance for multiple ways forward—not just one. In her insightful comparison between the supply chains for French green beans (haricots verts) that link West Africa with France and East Africa with Great Britain, respectively, geographer Susanne Freidberg offers a sense of how supply chains, drawing variously on colonial and national histories, may encourage quite different economic forms.11 French neocolonial schemes mobilize peasant cooperatives; British supermarket standards encourage expatriate scam operations.12 Within and across differences such as these, there is room for building a politics to confront and navigate salvage accumulation. But following Gibson-Graham to call this politics "postcapitalist" seems to me premature. Through salvage accumulation, lives and products move back and forth between noncapitalist and capitalist forms; these forms shape each other and interpenetrate. The term "pericapitalist" acknowledges that those of us caught in such translations are never fully shielded from capitalism; pericapitalist spaces are unlikely platforms for a safe defense and recuperation. At the same time, the more prominent critical alternative—shutting one's eyes to economic diversity—seems even more ridiculous in these times. Most critics of capitalism insist on the unity and homogeneity of the capitalist system; many, like Michael Hardt and Antonio Negri, argue that there is no longer a space outside of capitalism's empire.13 Everything is ruled by a singular capitalist logic. As for Gibson-Graham, this claim is an attempt to build a critical political position: the possibility of transcending capitalism.
Critics who stress the uniformity of capitalism's hold on the world want to overcome it through a singular solidarity. But what blinders this hope requires! Why not instead admit to economic diversity? My goal in bringing up Gibson-Graham and Hardt and Negri is not to dismiss them; indeed, I think they are perhaps the early twenty-first century's most trenchant anticapitalist critics. Furthermore, by setting out strongly contrasting goal posts between which we might think and play, they jointly do us an important service. Is capitalism a single, overarching system that conquers all, or one segregated economic form among many?14 Between these two positions, we might see how capitalist and noncapitalist forms interact in pericapitalist spaces. Gibson-Graham advise us, quite correctly I think, that what they call "noncapitalist" forms can be found everywhere in the midst of capitalist worlds—rather than just in archaic backwaters. But they see such forms as alternatives to capitalism. Instead, I would look for the noncapitalist elements on which capitalism depends. Thus, for example, when Jane Collins reports that workers in Mexican garment assembly factories are expected to know how to sew before they begin their jobs, because they are women, we are offered a glimpse of noncapitalist and capitalist economic forms working together.15 Women learn to sew growing up at home; salvage accumulation is the process that brings this skill into the factory to the benefit of owners. To understand capitalism (and not just its alternatives), then, we can't stay inside the logics of capitalists; we need an ethnographic eye to see the economic diversity through which accumulation is possible. It takes concrete histories to make any concept come to life. And isn't mushroom collecting a place to look, after progress? The rifts and bridges of the Oregon-to-Japan matsutake commodity chain show capitalism achieved through economic diversity. Matsutake foraged and sold in pericapitalist performances become capitalist inventory as they are sent to Japan a day later. Such translation is the central problem of many global supply chains. Let me begin by describing the first part of the chain.16 Americans don't like middlemen, who, they say, just rip off value. But middlemen are consummate translators; their presence directs us to salvage accumulation. Consider the North American side of the commodity chain that brings matsutake from Oregon to Japan. (The Japanese side—with its many middlemen—will be considered later.) Independent foragers pick the mushrooms in national forests. They sell to independent buyers, who sell, in turn, to bulkers' field agents, who sell to other bulkers or to exporters, who sell and ship, at last, to importers in Japan. Why so many middlemen? The best answer may be a history. Japanese traders began importing matsutake in the 1980s, when the scarcity of matsutake in Japan first became clear. Japan was bursting with investment capital, and matsutake were prime luxuries, equally suitable as perks, gifts, or bribes. American matsutake were still an expensive novelty in Tokyo, and restaurants competed to get some. Emerging matsutake traders in Japan were like other Japanese traders of that time, ready to use their capital to organize supply chains. The mushrooms were expensive, so the incentives for suppliers were good. North American traders remember the 1990s as a time of extraordinary prices—and high-risk gambling.
If a supplier was able to hit the Japanese markets correctly, the payoff was huge. But with an inconsistent and easy-to-spoil forest product and rapidly changing demand, the possibilities for total wipeout were also great. Everyone spoke of those days in casino metaphors. One Japanese trader compared the importers then to the Mafia in international ports after World War I: It was not just that the importers were gambling but that they were also catalyzing gambling—and keeping the gambling going. Japanese importers needed local know-how, and they began through alliances with exporters. In the Pacific Northwest, the first exporters were Asian Canadians in Vancouver—and because of their precedent, most U.S. matsutake continue to be exported by their firms. These exporters were not interested only in matsutake. They shipped seafood, or cherries, or log homes to Japan; matsutake were added to those activities. Some—especially the Japanese immigrants—told me they added matsutake to sweeten long-term relations with importers. They were willing to ship matsutake at a loss, they said, to keep their relations intact. Alliances between exporters and importers formed a basis for the transpacific trade. But the exporters—experts in fish, or fruit, or timber—knew nothing about how to get the mushrooms. In Japan, matsutake come to the market via an agricultural cooperative, or from individual farmers. In North America, matsutake are scattered across enormous national (U.S.) or commonwealth (Canadian) forests. This is where the small companies that I call "bulkers" come in; bulkers gather mushrooms to sell to exporters. Bulkers' field agents buy mushrooms from "buyers" who buy from pickers. Field agents, like buyers, must know the terrain and the people likely to search it. In the earliest days of the U.S. Pacific Northwest matsutake trade, most field agents, buyers, and pickers were white men who found solace in the mountains, such as Vietnam veterans, displaced loggers, and rural "traditionalists" who rejected liberal urban society. After 1989, an increasing number of refugees from Laos and Cambodia came to pick, and field agents had to stretch their abilities to work with Southeast Asians. Southeast Asians eventually became buyers, and a few became field agents. Working around each other, the whites and Southeast Asians found a common vocabulary in "freedom," which could mean many things dear to each group, even if they were not the same. Native Americans found resonance, but Latino pickers did not share the rhetoric of freedom. Despite this variation, the overlapping concerns of self-exiled whites and Southeast Asian refugees became the heartbeat of the trade; freedom brought out the matsutake. Through shared concerns with freedom, the U.S. Pacific Northwest became one of the world's great matsutake exporting areas. Yet this way of life was segregated from the rest of the commodity chain. Bulkers and buyers longed to export matsutake directly to Japan but did not succeed. Neither buyers nor bulkers could get beyond the already difficult exchange with Canadian exporters of Asian origin, for whom English was not often a first language. They complained about unfair practices, but in fact they were useless at the cultural translation necessary for the making of inventory. For it is not just language that separates pickers, buyers, and bulkers in Oregon from Japanese traders; it is the conditions of production.
Oregon mushrooms are contaminated with the cultural practices of "freedom." The story of an exception makes the point. "Wei" first went to Japan from his native China to study music; when he found he could not make a living, he entered the Japanese vegetable import trade. He became fluent in Japanese, although still prickly about some features of life in Japan. When his company wanted someone to go to North America, he volunteered. This is how he became an idiosyncratic combination of field agent, bulker, and exporter. He goes to the matsutake area to watch the buying, just like other field agents, but he has a direct line to Japan. Unlike the other field agents, he is constantly on the phone with Japanese traders, gauging opportunities and prices. He also talks to Japanese Canadian exporters, although he does not sell his mushrooms through them; because he can talk to them in Japanese, they constantly ask him to explain conditions in the field, including the behavior of the field agents whose mushrooms they buy. Meanwhile, the other field agents refuse to include him in their company and conspire against his buyers. He is not welcomed into their discussions, and, indeed, is shunned by the freedom-loving mountain men. Unlike the other field agents, Wei pays his buyers a salary, rather than a commission. He demands the loyalty and discipline of employees, refusing them the freewheeling independence of the other buyers. He buys matsutake for particular shipments, with particular characteristics, rather than buying for the pleasure and prowess of free competition, as the others do. He is already making inventory in the buying tents. His difference highlights the distinctiveness of the freedom assemblage as a patch. As international matsutake commerce entered the twenty-first century, regularization was afoot in Japan. Prices there stabilized as supply chains in many countries developed, as rankings of foreign matsutake congealed, and as perk-money in Japan diminished and the demand for matsutake became more specialized. The prices of Oregon matsutake in Japan became relatively stable—considering, of course, that matsutake is still a wild product with an irregular supply. However, this stability was not reflected in Oregon, where prices continued to roller-coaster, even if never returning to 1990s' highs. When I talked to Japanese importers about this discrepancy, they explained it as a matter of American "psychology." An importer who specialized in Oregon matsutake was thrilled to show me photographs from his visits and reminisce about his Wild West experiences in Oregon. White and Southeast Asian pickers and buyers, he explained, would not produce mushrooms without the excitement of what he called an "auction," and the more the price fluctuated, the better the buying. (In contrast, he said, Mexican pickers in Oregon were willing to accept a constant price, but the others dominated the trade.) His job was to facilitate American peculiarities; his company had a parallel specialist in Chinese matsutake, whose job was to accommodate Chinese quirks. By facilitating varied cultural economies, his company could build its business through mushrooms from around the world. It was this man's expectation of the necessity of cultural translation that first alerted me to the problem of salvage accumulation. In the 1970s, Americans expected the globalization of capital to mean the spread of U.S. business standards all over the world.
In contrast, Japanese traders had become specialists in building international supply chains and using them as mechanisms of translation to bring goods into Japan without Japanese production facilities or employment standards. As long as these goods could be made into legible inventory in their transit to Japan, Japanese traders could use them to accumulate capital. By the end of the century, Japanese economic power had slipped, and twentieth-century Japanese business innovations were eclipsed by neoliberal reforms. But no one cares to reform the matsutake commodity chain; it is too small and too "Japanese." Here is a place, then, to look for the Japanese trading strategies that rocked the world. At their center is translation between diverse economies. Traders as translators become masters of salvage accumulation. Before taking on translation, however, I need to visit the freedom assemblage.

One cold October night in the late 1990s, three Hmong American matsutake pickers huddled in their tent. Shivering, they brought their gas cooking stove inside to provide a little warmth. They went to sleep with the stove on. It went out. The next morning, all three were dead, asphyxiated by the fumes. Their deaths left the campground vulnerable, haunted by their ghosts. Ghosts can paralyze you, taking away your ability to move or speak. The Hmong pickers moved away, and the others soon moved too. The U.S. Forest Service did not know about the ghosts. They wanted to rationalize the pickers' camping area, to make it accessible to police and emergency services, and easier for campground hosts to enforce rules and fees. In the early 1990s, Southeast Asian pickers had camped where they pleased, like everyone else who visits the national forests. But whites complained that Southeast Asians left too much litter. The Forest Service responded by shunting the pickers to a lonely access road.

[Photograph: Communal agendas, Oregon. A Mien pickers' encampment. Here Mien recalled village life and escaped the confinements of California cities.]

At the time of the deaths, the pickers were camped all along the road. But soon afterward, the Forest Service built a great grid, with numbered camping spaces, scattered portable toilets, and, after many complaints, a large tank of water at the (rather distant) campground entrance. The campsites had no amenities, but the pickers—escaping from the ghosts—quickly made them their own. Mimicking the structure of the refugee camps in Thailand where many had spent more than a decade, they segregated themselves into ethnic groups: on one end, Mien and then those Hmong willing to stay; half a mile away, Lao and then Khmer; in an isolated hollow, way back, a few whites. The Southeast Asians built structures of slim pine poles and tarps and put their tents inside, sometimes with the addition of wood stoves. As in rural Southeast Asia, possessions were hung from the rafters, and an enclosure gave privacy for dip baths. In the center of the camp, a big tent sold hot bowls of pho. Eating the food, listening to the music, and observing the material culture, I thought I was in the hills of Southeast Asia, not the forests of Oregon. The Forest Service's idea about emergency access did not work out as it imagined. A few years later, someone called emergency services on behalf of a critically wounded picker. Regulations aimed only at the mushroom camp required the ambulance to wait for police escort before entering. The ambulance waited for hours.
When the police finally showed up, the man was dead. Emergency access had not been limited by terrain but by discrimination. This man, too, left a dangerous ghost, and no one slept near his campsite except Oscar, a white man and one of the few local residents to seek out Southeast Asians, who did it once, drunk, on a dare. Oscar's success in getting through the night led him to try picking mushrooms on a nearby mountain, sacred to local Native Americans and the home of their ghosts. But the Southeast Asians I knew stayed away from that mountain. They knew about ghosts.

Oregon's center of matsutake commerce in the first decade of the twenty-first century was a place not marked on any map, "in the middle of nowhere." Everyone in the trade knew where it was, but it wasn't a town or a recreation site; it was officially invisible. Buyers had established a cluster of tents along the highway, and every evening pickers, buyers, and field agents gathered there, turning it into a theater of lively suspense and action. Because the place is self-consciously off the map, I decided to make up a name to protect people's privacy, and to add some characters from the up-and-coming matsutake trading spot down the road. My composite field site is "Open Ticket, Oregon." "Open ticket" is actually the name of a mushroom-buying practice. In the evening after returning from the woods, pickers sell their mushrooms for the buyer's price per pound, adjusted in relation to the mushroom's size and maturity, its "grade." Most wild mushrooms carry a stable price. But the price of matsutake shoots up and down. Within the night, the price may easily shift by $10 per pound or more. Within the season, price shifts are much greater. Between 2004 and 2008, prices shifted between $2 and $60 per pound for the best mushrooms—and this range is nothing compared with earlier years. "Open ticket" means that a picker may return to the buyer for the difference between the original price paid and a higher price offered on the same night. Buyers—who earn a commission based on the poundage they buy—offer open ticket to entice pickers to sell early in the evening, rather than waiting to see if prices will rise. Open ticket is testimony to the unspoken power of pickers to negotiate buying conditions. It also illustrates the strategies of buyers, who continually try to put each other out of business. Open ticket is a practice of making and affirming freedom for both pickers and buyers. It seems an apt name for a site of freedom's performance. For what is exchanged every evening is not just mushrooms and money. Pickers, buyers, and field agents are engaged in dramatic enactments of freedom, as they separately understand it, and they exchange these, encouraging each other, along with their trophies: money and mushrooms. Sometimes, indeed, it seemed to me that the really important exchange was the freedom, with the mushroom-and-money trophies as extensions—proofs, as it were—of the performance. After all, it was the feeling of freedom, galvanizing "mushroom fever," that energized buyers to put on their best shows and pressed pickers to get up the next dawn to search for mushrooms again. But what is this freedom about which pickers spoke? The more I asked about it, the more unfamiliar it became to me. This is not the freedom imagined by economists, who use that term to talk about the regularities of individual rational choice. Nor is it political liberalism.
This mushroomers' freedom is irregular and outside rationalization; it is performative, communally varied, and effervescent. It has something to do with the rowdy cosmopolitanism of the place; freedom emerges from open-ended cultural interplay, full of potential conflict and misunderstanding. I think it exists only in relation to ghosts. Freedom is the negotiation of ghosts on a haunted landscape; it does not exorcise the haunting but works to survive and negotiate it with flair. Open Ticket is haunted by many ghosts: not only the "green" ghosts of pickers who died untimely deaths; not only the Native American communities removed by U.S. laws and armies; not only the stumps of great trees cut down by reckless loggers, never to be replaced; not only the haunting memories of war that will not seem to go away; but also the ghostly appearance of forms of power—held in abeyance—that enter the everyday work of picking and buying. Some kinds of power are there, but not there; this haunting is a place from which to begin to understand this multiply culturally layered enactment of freedom. Consider these absences that make Open Ticket what it is: Open Ticket is far from the concentration of power; it is the opposite of a city. It is missing social order. As Seng, a Lao picker, put it, "Buddha is not here." Pickers are selfish and greedy, he said; he was impatient to return to the temple where things were properly arranged. But, meanwhile, Dara, a Khmer teenager, explained that this is the only place she can grow up away from the violence of gangs. Yet Thong is a (former?) Lao gang member; I think he is getting away from warrants for his arrest. Open Ticket is a hodgepodge of flights from the city. White Vietnam vets told me they wanted to be away from crowds, which sparked flashbacks from the war and uncontrollable panic attacks. Hmong and Mien told me they were disappointed in America, which had promised them freedom but instead crowded them into tiny urban apartments; only in the mountains could they find the freedom they remembered from Southeast Asia. Mien in particular hoped to reconstitute a remembered village life in the matsutake forest. Matsutake picking was a time to see dispersed friends and to be away from the constraints of crowded families. Nai Tong, a Mien grandmother, explained that her daughter called her every day to beg her to come home to take care of the grandchildren. But she calmly repeated that she had at least to make up the money for her picking permit; she could not go back yet. The important bits were left unsaid in those calls: Escaping from apartment life, she had the freedom of the hills. The money was less important than the freedom. Matsutake picking is not the city, although haunted by it. Picking is also not labor—or even "work." Sai, a Lao picker, explained that "work" means obeying your boss, doing what he tells you to. In contrast, matsutake picking is "searching." It is looking for your fortune, not doing your job. When a white campground owner, sympathetic to the pickers, talked to me about how the pickers deserved more because they work so hard, getting up at dawn and staying through sun and snow, something nagged at me about her view. I had never heard a picker talk like that. No pickers I met imagined the money they gained from matsutake as a return on their labor. Even Nai Tong's time babysitting was more akin to work than mushroom picking.
Tom, a white field agent who had spent years as a picker, was particularly clear about rejecting labor. He had been an employee of a big timber company, but one day he put his equipment in his locker, walked out the door, and never looked back. He moved his family into the woods and earned from what the land would give him. He has gathered cones for seed companies and trapped beaver for skins. He has picked all kinds of mushrooms—not to eat but to sell, and he has taken his skills into the buying scene. Tom tells me how liberals have ruined American society; men no longer know how to be men. The best answer is to reject what liberals think of as "standard employment." Tom goes to great lengths to explain to me that the buyers he works with are not employees but independent businessmen. Even though he gives them large amounts of cash every day to buy mushrooms, they can sell to any field agent—and I know they do. It's an all-cash business, too, without contracts, so if a buyer decides to abscond with his cash, he says, there is nothing he can do about it. (Amazingly, buyers who abscond often come back to deal with another field agent.) But the scales he lends buyers for weighing mushrooms, he points out, are his; he could call the police about the scales. He tells the story of a recent buyer who absconded with several thousand dollars—but made the mistake of taking the scale. Tom drove down the road in the direction he believed the buyer took, and, sure enough, there was the scale abandoned by the side of the road. The cash was gone of course; but that was the risk of independent business. Pickers bring many kinds of cultural heritage to their rejection of labor. Mad Jim celebrates his Native American ancestors in matsutake picking. After many jobs, he said, he was working as a bartender on the coast. A Native woman walked in with a $100 bill; shocked, he asked where she got it. "Picking mushrooms," she told him. Jim went out the next day. It wasn't easy to learn: he crawled through the brush; he followed animals. Now he knows how to stalk the dunes for the matsutake buried deep in the sand. He knows where to look under tangled rhododendron roots in the mountains. He has never gone back to wage work. Lao-Su works in a Wal-Mart warehouse in California when he is not picking matsutake, making $11.50 an hour. To get that pay rate, however, he had to agree to work without medical benefits. When he hurt his back on the job and was unable to lift merchandise, he was given a long leave to recover. While he hopes the company will take him back, he says he gets more money from matsutake picking than from Wal-Mart anyway, despite the fact that the mushroom season is only two months long. Besides, he and his wife look forward to joining the vibrant Mien community in Open Ticket every year. They consider it a vacation; on weekends, their children and grandchildren sometimes come up to join them in picking. Matsutake picking is not "labor," but it is haunted by labor. So, too, property: Matsutake pickers act as if the forest was an extensive commons. The land is not officially a commons. It is mainly national forest, with some adjacent private land, all fully protected by the state. But the pickers do their best to ignore questions of property. White pickers are particularly aggravated by federal property and do their best to thwart restrictions on using it. Southeast Asian pickers are generally warmer to government, expressing wishes that it would do more.
Unlike white pickers, many of whom are proud of picking without a permit, most Southeast Asians register with the Forest Service for permission to pick. However, the fact that law enforcement tends to single out Asians for infractions even without evidence—as one Khmer buyer put it, "driving while being Asian"—makes it seem less worth the effort to stay within the law. Not many do. Vast lands without boundary markers make staying in approved picking zones quite difficult, as I found from my own experience. Once, a sheriff staked out my car to catch me without a permit when I returned with mushrooms. Even as an avid reader of maps, I had been unable to tell whether this place was on or off limits.1 I was lucky; I was just at the border. But it wasn't marked. Once, too, after I had pleaded with a Lao family for days to take me picking, they agreed, if I would drive. We chugged through forest on unmarked dirt roads for what seemed hours before they told me we had arrived at the place they wanted to pick. When I pulled over, they asked me why I wasn't trying to hide the car. Only then did I realize that we were surely trespassing. The fines are steep. During my fieldwork, the fine for picking in a national park was $2,000 on the first offense. But law enforcement is thin on the ground, and the roads and trails are many. The national forest is crisscrossed with abandoned logging roads; these make it possible for pickers to travel across extensive forestland. Young men, too, are willing to hike many miles, looking for the most isolated mushroom patches—perhaps on forbidden lands, perhaps not. When the mushrooms get to the buyers, no one asks.2 But what is "public property" if not an oxymoron? Certainly, the Forest Service has trouble with it in these times. Legislation requires that public forests be thinned for fire protection for a square mile around private inholdings; this requires a lot of public funds to save a few private assets.3 Meanwhile, private timber companies do that thinning, making further profits from public forests. And, while logging is allowed within Late Successional Reserves, pickers are forbidden—because no one has found funds for an environmental impact assessment. If pickers have trouble sorting out which kinds of lands are off-limits, they are not alone in their confusion. The difference between the two kinds of confusion is also instructive. The Forest Service is asked to uphold property, even if it means neglecting the public. The pickers do their best to hold property in abeyance as they pursue a commons haunted by the possibility of their own exclusion. Freedom/haunting: two sides of the same experience. Conjuring a future full of pasts, a ghost-ridden freedom is both a way to move on and a way to remember. In its fever, picking escapes the separation of persons and things so dear to industrial production. The mushrooms are not yet alienated commodities; they are effects of the pickers' freedom. Yet this scene only exists because the two-sided experience has purchase in a strange sort of commerce. Buyers translate freedom trophies into trade through dramatic performances of "free market competition." Thus market freedom enters freedom's jumble, making the holding in abeyance of concentrated power, labor, property, and alienation seem strong and effective. It's time to get back to the buying in Open Ticket. It's late afternoon, and some of the white field agents are sitting around joking.
They accuse each other of lying and call each other "vultures" and "Wile E. Coyote." They are right. They agree to open at the price of $10 a pound for number one mushrooms, but almost no one does. The minute the tents open, the competition is on. The field agents call their buyers to offer opening prices—perhaps $12 or even $15 if they agreed on $10. It is up to the buyers to report back about what is happening in the buying tents. Pickers come in and ask about the prices. But the price is a secret—unless you are a regular seller, or, alternatively, you are already showing your mushrooms. Other buyers send their friends, disguised as pickers, to find the price, so it is not something to tell just anyone. Then, when a buyer wants to raise prices, to beat the competition, he or she is supposed to call the field agent. If not, the buyer will have to pay the price difference from his or her commission—but this is a tactic many are willing to try. Soon enough, calls ricochet between pickers, buyers, and field agents. The prices are shifting. "It's dangerous!" one field agent would tell me as he stalked around the buying area, watching the scene. He could not talk to me during the buying; it demanded his full attention. Barking commands into his cell phone, each tried to stay ahead—and to trip up the others. Meanwhile, field agents are on the phone to their bulking companies and exporters, learning how high they can go. It's exciting and exacting work to put the others out of business as well as one can. "Imagine the time before cell phones!" one field agent reminisced. Everyone lined up at the two public phone booths, trying to get through as the prices changed. Even now, every field agent surveys the buying field like a general on an old-fashioned battlefield, his phone, like a field radio, constantly at his ear. He sends out spies. He must react quickly. If he raises the price at the right time, his buyers will get the best mushrooms. Better yet, he might push a competitor to raise the price too high, forcing him to buy too many mushrooms, and, if all goes really right, to close down for a few days. There are all kinds of tricks. If the price spikes, a buyer can get pickers to take his mushrooms to sell to other buyers: Better the money than the mushrooms. There will be rude laughter for days, fuel for another round of calling the others liars—and yet, no one goes out of business despite all these efforts.4 This is a performance of competition—not a necessity of business. The point is the drama. Let's say it's dark now, and pickers are lined up to sell at a buying tent. They have picked this buyer not only because of his prices, but because they know he is a skilled sorter. Sorting is just as important as basic prices, because a buyer assigns a grade to each mushroom, and the price depends on the grade. And what an art sorting is! Sorting is an eye-catching, rapid-fire dance of the arms with the legs held still. White men make it look like juggling; Lao women—the other champion buyers—make it look like Royal Lao dancing. A good sorter knows a lot about the mushrooms just from touching them. Matsutake with insect larvae will spoil the batch before it arrives in Japan; it is essential that the buyer refuse them. But only an inexperienced buyer cuts into the mushroom to look for larvae. Good buyers know from the feel.
They can also smell the provenance of the mushroom: its host tree; the region it comes from; other plants, such as rhododendron, which affect the size and shape. Everyone enjoys watching a good buyer sort. It is a public performance full of prowess. Sometimes pickers photograph the sorting. Sometimes they also photograph their best mushrooms, or the money, especially when it is hundred-dollar bills. These are trophies of the chase. Buyers try to assemble "crews," that is, loyal pickers, but pickers do not feel the obligation to continue selling to any buyer. So buyers court pickers, using ties of kinship, language, and ethnicity, or special bonuses. Buyers offer pickers food and coffee—or, sometimes, stronger beverages, such as alcoholic tonics laced with herbs and scorpions. Pickers sit around eating and drinking outside buyers' tents; where they share common war experiences with the buyers, the camaraderie may last until late at night. But such groups are evanescent; all it takes is a rumor of a high price or a special deal, and pickers are off to another tent, another circle. Yet the prices are not so different. Might performance be part of the point? Competition and independence mean freedom for all. Sometimes pickers have been known to wait, sitting in their pickup trucks with their mushrooms, because they are dissatisfied with everyone's prices. But they must sell before the evening is over; they cannot keep the mushrooms. Waiting too is part of the performance of freedom: freedom to search wherever one pleases—holding propriety, labor, and property at arm's length; freedom to bring one's mushrooms to any buyer, and for the buyers, to any field agent; freedom to put the other buyers out of business; freedom to make a killing or lose it all. Once I told an economist about this buying scene, and he was excited, telling me this was the true and basic form of capitalism, without the pollution of powerful interests and inequalities. This was real capitalism, he said, where the playing field was level, as it should be. But is Open Ticket's picking and buying capitalism? The problem is that there isn't any capital. There is a lot of money changing hands, but it slips away, never forming an investment. The only accumulation is happening downstream, in Vancouver, Tokyo, and Kobe, where exporters and importers use the matsutake trade to build their firms. Open Ticket's mushrooms join streams of capital there, but they are not procured in what seems to me a capitalist formation. But there are clearly "market mechanisms": or are there? The whole point of competitive markets, according to economists, is to lower prices, forcing suppliers to procure goods in more efficient ways. But Open Ticket's buying competition has the explicit goal of raising prices. Everyone says so: pickers, buyers, bulkers. The purpose of playing with prices is to see if the price can be increased, so that everyone at Open Ticket benefits. Many seem to think that there is an ever-flowing spring of money in Japan, and the goal of competitive theater is to force open the pipes so that the money will flow to Open Ticket. Old-timers all remember 1993, when the price of matsutake in Open Ticket rose briefly to $600 a pound in the hands of pickers. All you had to do was find one fat button, and you had $300!5 Even after that high, they say, in the 1990s a single picker could make several thousand dollars in one day. How might access to that flow of money be opened again?
Open Ticket buyers and bulkers stake their bets on competition to raise prices. It seems to me that there are two framing circumstances that allow this set of beliefs and practices to flourish. First, American businessmen have naturalized the expectation that the U.S. government will apply muscle in their behalf: As long as they perform “competition,” the government will twist the arms of foreign business partners to make sure American companies get the prices and market share they want.6 Open Ticket matsutake trading is much too small and inconspicuous to get that kind of government attention. Still, it is within this national expectation that buyers and bulkers engage in competitive performances to get the Japanese to offer them the best prices. As long as they show themselves properly “American,” they expect to succeed. Second, Japanese traders are willing to put up with such displays as signs of what the importer I mentioned called “American psychology.” Japanese traders expect to work in and around strange performances; if this is what brings in the goods, it should be encouraged. Later, exporters and importers can translate the exotic products of American freedom into Japanese inventory—and, through inventory, accumulation.

What is this “American psychology” then? There are too many people and histories in Open Ticket to plunge directly into the coherence through which we usually imagine “culture.” The concept of assemblage—an open-ended entanglement of ways of being—is more useful. In an assemblage, varied trajectories gain a hold on each other, but indeterminacy matters. To learn about an assemblage, one unravels its knots. Open Ticket’s performances of freedom require following histories that stretch far beyond Oregon but show how Open Ticket’s entanglements might have come into being.

In France they have two kinds, freedom and communist. In the U.S. they just have one kind: freedom.
—Open Ticket Lao buyer, explaining why he came to the United States, not France

The freedom about which so many pickers and buyers speak has far-flung referents as well as local ones. In Open Ticket, most explain their commitments to freedom as stemming from terrifying and tragic experiences in the U.S.-Indochina War and the civil wars that followed. When pickers talk about what shaped their lives, including their mushroom picking, most talk about surviving war. They are willing to brave the considerable dangers of the matsutake forest because it extends their living survival of war, a form of haunted freedom that goes everywhere with them. Yet engagements with war are culturally, nationally, and racially specific. The landscapes pickers construct vary with their legacies of engagement with war.

[Photo caption: Communal agendas, Oregon. Foraging with a rifle. Most pickers have terrible stories of surviving war. The freedom of the mushroom camps emerges out of varied histories of trauma and displacement.]

Some pickers wrap themselves in war stories without ever having lived through war. One wry Lao elder explained why even young Lao pickers wear camouflage: “These people weren’t soldiers; they’re just pretending to be soldiers.” When I asked about the dangers of being invisible to white deer hunters, a Hmong picker evoked a different imaginary: “We wear camouflage so we can hide if we see the hunters first.” If they saw him, hunters might hunt him, he implied. Pickers navigate the freedom of the forest through a maze of differences.
Freedom as they described it is both an axis of commonality and a point from which communally specific agendas divide. Despite further differences within such agendas, a few portraits can suggest the varied ways the matsutake hunt is energized by freedom. This chapter extends my exploration of what pickers and buyers meant by freedom by turning to the stories they told about war.

Frontier romanticism runs high in the mountains and forests of the Pacific Northwest. It is common for whites to glorify Native Americans and identify with the settlers who tried to wipe them out. Self-sufficiency, rugged individualism, and the aesthetic force of white masculinity are points of pride. Many white mushroom pickers are advocates of U.S. conquest abroad, limited government, and white supremacy. Yet the rural northwest has also gathered hippies and iconoclasts. White veterans of the U.S.-Indochina War bring their war experiences into this rough and independent mix, adding a distinctive mixture of resentment and patriotism, trauma and threat. War memories are simultaneously disturbing and productive in forming this niche. War is damaging, they tell us, but it also makes men. Freedom can be found in war as well as against war. Two white veterans suggest the range of how freedom is expressed.

Alan felt lucky when an aggravated childhood injury caused him to be sent home from Indochina. For the next six months he served as a driver on an American base. One day he received orders to return to Vietnam. He drove his jeep back to the depot and walked out of the base, AWOL. He spent the next four years hiding in the Oregon mountains, where he gained a new goal: to live in the woods and never pay rent. Later, when the matsutake rush came, it suited him perfectly. Alan imagines himself as a gentle hippie who works against the combat culture of other vets. Once he went to Las Vegas and had a terrible flashback when surrounded by Asians at the casino. Life in the forest is his way of keeping clear of psychological danger.

Not all war experience is so benign. When I first met Geoff I was overjoyed to find someone with so much knowledge about the forest. Telling me of the pleasures of his childhood in eastern Washington, he described the countryside with a passionate eye for detail. My enthusiasm to work with Geoff was transformed, however, when I talked with Tim, who explained that Geoff had served a long and difficult tour in Vietnam. Once, his group had jumped from a helicopter into an ambush. Many of the men were killed, and Geoff was shot through the neck but, miraculously, survived. When Geoff came home, he screamed so much at night that he could not stay home, and so he returned to the woods. But his war years were not over. Tim described a time when he and Geoff had surprised a group of Cambodian pickers on a mushroom patch Geoff thought of as one of his special places. Geoff had opened fire, and the Cambodians scrambled into the bushes to get away. Once Tim and Geoff shared a cabin, but Geoff spent the night brooding and sharpening his knife. “Do you know how many men I killed in Vietnam?” he asked Tim. “One more wouldn’t make a bit of difference.”

White pickers imagine themselves not only as violent vets but also as self-sufficient mountain men: loners, tough, and resourceful. One point of connection with those who did not fight is hunting. One white buyer, too old for Vietnam but a strong supporter of U.S. wars, explained that hunting, like war, builds character.
We spoke of then Vice President Cheney, who had shot a friend while bird hunting; it was through the ordinariness of accidents such as this that hunting makes men, he said. Through hunting, even noncombatants can experience the forest landscape as a site for making freedom.

Cambodian refugees cannot easily join established Pacific Northwest legacies; they have had to make up their own histories of freedom in the United States. Such histories are guided not only by U.S. bombardment and the subsequent terrors of the Khmer Rouge regime and civil war, but also by their moment of entry into the United States: the shutting down of the U.S. welfare state in the 1980s. No one offered Cambodians stable jobs with benefits. Like other Southeast Asian refugees, they had to make something from what they had—including their war experiences. The matsutake boom made forest foraging, with its opportunities for making a living through sheer intrepidness, an appealing option.

What then is freedom? One white field agent, exalting the pleasures of war, suggested I speak with Ven, a Cambodian who, the field agent said, would show me that even Asians love U.S. imperial war. Given that Ven spoke to me with this introduction, I was not surprised by his endorsement of American freedom as a military quest. Yet our conversation took turns that I don’t imagine the field agent would have expected, and yet it echoed other Cambodians in the forest. First, in the confusions of the Cambodian civil war, it was never quite clear on which side one was fighting. Where white vets imagined freedom on a starkly divided racial landscape, Cambodians told stories in which war bounced one from one side to the other without one’s knowledge. Second, where white vets sometimes took to the hills to live out war’s traumatic freedom, Cambodians offered a more optimistic vision of recovery in the forests of American freedom.

At the age of thirteen, Ven left his village to join an armed struggle. His goal was to repel Vietnamese invaders. He says he did not know the national affiliations of his group; he later found it to be a Khmer Rouge affiliate. Because of his youth, the commander befriended him and he was kept safe, close to the leaders. Later, however, the commander fell out of favor, and Ven became a political detainee. His group of detainees was sent to the jungle to fend for themselves. By chance, this turned out to be an area Ven knew from his fighting days. Where others saw empty jungle, he knew the concealed paths and forest resources. At this point in the story, I expected him to say that he escaped, especially since he was beaming with pride about his jungle knowledge. But no: He showed the group a hidden spring, without which they would not have had fresh water. Perhaps there was something empowering about this forest detention, even in its coercions. Returning to the forest draws from this spark—but only, he explained, in the safety of American imperial freedom.

Other Cambodians spoke about mushroom foraging as healing from war. One woman described how weak she was when she first came to the United States; her legs were so frail that she could hardly walk. Mushroom foraging has brought back her health. Her freedom, she explained, is freedom of motion. Heng told me about his experiences in a Cambodian militia. He was the leader of thirty men. But while patrolling one day he stepped on a land mine, which blew off his leg.
He begged his comrades to shoot him, since the life of a one-legged man in Cambodia was beyond what he imagined as human. Through luck, however, he was picked up by a UN mission and transported to Thailand. In the United States he gets along well on his artificial leg. Still, when he told his relatives that he would pick mushrooms in the forest, they scoffed. They refused to take him with them, since, they said, he would never be able to keep up. Finally, an aunt dropped him off at the base of a mountain, telling him to find his own way. He found mushrooms! Ever since, the matsutake harvest has been an affirmation of his mobility. Another of his buddies is missing the other leg, and he jokes that together in the mountains, they are “complete.”

The Oregon mountains are both a cure for and a connection to old habits and dreams. I was startled into seeing this one day when I asked Heng about deer hunters. I had been picking by myself that afternoon when suddenly shots rang out nearby. I was terrified; I didn’t know which way to run. I asked Heng about it later. “Don’t run!” he said. “To run shows that you are afraid. I would never run. That’s why I am a leader of men.” The woods are still full of war, and hunting is its reminder. The fact that almost all the hunters are white, and that they tend to be contemptuous of Asians, makes the parallels to war yet more apparent.

This theme was even more consequential for Hmong pickers, who, unlike most Cambodians, identify as hunters as well as hunted. During the U.S.-Indochina War, the Hmong became the front line of the U.S. invasion of Laos. Recruited by General Vang Pao, whole villages gave up agriculture to subsist on CIA airdrops of food. The men called in U.S. bombers, putting their bodies on the line so that Americans could destroy the country from the skies.1 It is not surprising that this policy exacerbated tensions between the Lao targets of the bombing and the Hmong. Hmong refugees have done relatively well in the United States, but war memories run strong. The landscapes of wartime Laos are very much alive for Hmong refugees, and this shapes both the politics of freedom and freedom’s everyday activities.

Consider the case of Hmong hunter and U.S. Army sharpshooter Chai Soua Vang. In November 2004, he climbed into a deer blind in a Wisconsin forest just as the white landowners were touring the property. The landowners confronted him, telling him to leave. It seems they shouted racial epithets, and someone shot at him. In response, he shot eight of them with his semiautomatic rifle, killing six. The story was news, and the main tenor in which it was told was outrage. CBS News quoted local Deputy Tim Zeigle, who said Vang was “chasing after [the landowners] and killing them. He hunted them down.”2 Hmong community spokesmen immediately took their distance from Vang and focused on saving the reputation of the Hmong people. Although younger Hmong spoke up against racism in the trial that followed Vang’s arrest, no one publicly suggested why Vang might have assumed a sharpshooter’s stance to eliminate his adversaries. The Hmong I spoke with in Oregon all seemed to know, and to empathize. What Vang did appeared utterly familiar; he could have been a brother or a father. Although Vang was too young to have participated in the U.S.-Indochina War, his actions showed how well he was socialized in the landscapes of that war. There every man who was not a comrade was an enemy, and war meant to kill or be killed.
The elder men of the Hmong community still live very much in the world of these battles; at Hmong gatherings, the logistics of particular battles—the topography, timing, and surprises—are the subject of men’s conversations. One Hmong elder whom I had asked about his life used the opportunity to tell me about how to throw back grenades and what to do if you are shot. The logistics of wartime survival were the substance of his life.

Hunting recalls the familiarity of Laos for Hmong in the United States. The Hmong elder explained his coming of age in Laos: as a boy, he had learned to hunt, and he used his hunting skills in jungle fighting. Now in the United States, he teaches his sons how to hunt. Hunting brings Hmong men into a world of tracking, survival, and manhood. Hmong mushroom pickers are comfortable in the forest because of hunting. Hmong rarely get lost; they use the forest-navigation skills they know from hunting. The forest landscape reminds older men of Laos: Much is different, but there are wild hills and the necessity of keeping your wits about you. Such familiarity brings the older generation back to pick each year; like hunting, this is a chance to remember forest landscapes. Without the sounds and smells of the forest, the elder told me, a man dwindles.

Mushroom picking layers together Laos and Oregon, war and hunting. The landscapes of war-torn Laos suffuse present experience. What seemed to me non sequiturs shocked me into awareness of such layers: I asked about mushrooms, and Hmong pickers answered by telling me of Laos, of hunting, or of war. Tou and his son Ger kindly took my assistant Lue and me for many a matsutake hunt. Ger was an exuberant teacher, but Tou was a quiet elder. As a result, I valued the things he said all the more. One afternoon after a long and pleasurable forage, Tou collapsed into the front seat of the car with a sigh. Lue translated from Hmong. “It’s just like Laos,” Tou said, telling us of his home. His next comment made no sense to me: “But it’s important to have insurance.” It took me the next half hour to figure out what he meant. He offered a story: A relative of his had gone back to Laos for a visit, and the hills had so drawn him that he left one of his souls behind when he returned to the United States. He soon died as a result. Nostalgia can cause death, and then it’s important to have life insurance, because that allows the family to buy the oxen for a proper funeral. Tou was experiencing the nostalgia of a landscape made familiar by hiking and foraging. This is also the landscape of hunting—and of war.

As Buddhists, ethnic Lao tend to object to hunting. Instead, Lao are the businessmen of the mushroom camps. Most Southeast Asian mushroom buyers are Lao. In the campgrounds, Lao have opened noodle tents, gambling, karaoke, and barbeque shops. Many of the Lao pickers I met originated from or were displaced to Laotian cities. They are often lost in the woods. But they enjoy the risks of mushroom picking and explain it as an entrepreneurial sport.

I first started thinking about cultural engagements with war when I was hanging out with Lao pickers. Camouflage is popular among Lao men. Most are further covered by protective tattoos—some gained in the army, some in gangs, and some in martial arts. Lao rowdiness is the justification for Forest Service rules that disallow gunfire in the campgrounds.
Compared with other picker groups, the Lao I met seemed less wounded by the actual moment of war—and yet more involved in its simulation in the forest. But what is a wound? U.S. bombing in Laos displaced 25 percent of the rural population, forcing fleeing refugees into cities—and, when possible, abroad.3 If Lao refugees in the United States have some characteristics of camp followers, is this not also a wound?

Some Lao pickers grew up in army families. Sam’s father served in the Royal Lao Army; he was set to follow in his father’s footsteps by enlisting in the U.S. Army. The fall before his recruitment he joined some friends for a last hurrah—picking mushrooms. He made so much money that he called off his army plans. He even brought his parents to pick. He also discovered the pleasures of illegal picking one season when he made $3,000 in one day by trespassing on national park lands. Like white pickers, the Lao I knew looked for out-of-bounds and hidden matsutake patches. (In contrast, Cambodian, Hmong, and Mien pickers more often used careful observation in well-known common spots.) Lao pickers also—again like whites—took pleasure in boasting of their forays outside the law and their ability to get out of scrapes. (Other pickers went outside the law more quietly.) As entrepreneurs, Lao were mediators, with all the pleasures and dangers of mediation. In my own inexperience, I found the entrepreneurial grasp of combat readiness a confusing set of juxtapositions. Yet I could tell it somehow worked as advocacy for high-risk enterprise.

Thong, a strong and handsome man in his mid-thirties, seemed to me a man of contradictions: a fighter, a fine dancer, a reflective thinker, a judgmental critic. Because of his strength, Thong picks in high, inaccessible places. He told of his encounter with a policeman who stopped him for speeding one night more than forty miles from the mushroom camp. He told the policeman to go ahead and impound his car; he would walk through the frozen night. The policeman gave in, he said, and let him go. When Thong said that mushroom pickers are in the forest to escape warrants, I thought he might be speaking for himself. So, too, until quite recently he was married. In the process of getting a divorce, he quit a well-paying job to pick mushrooms. At the least, I believe he aimed to escape the obligations of child support. The contradictions multiply. He went out of his way to express contempt for pickers who abandon their children for the forest. He is not in touch with his children.

Meta thinks a lot about Buddhism. Meta spent two years in a monastery; returned to the world, he works to renounce material things. Mushroom picking is a way to do this work of renunciation. Most of his belongings are in his car. Money comes to him easily but disappears just as easily. He does not mire himself in possession. This does not mean he is ascetic in a Western sense. When he is drunk, he sings a tender tenor karaoke.

Only among Lao pickers did I meet children of mushroom pickers who, as adults, became mushroom pickers themselves. Paula first came picking with her parents, who later moved to Alaska. But she maintains her parents’ social networks in the Oregon forests, thus earning the room for maneuver claimed by much more seasoned pickers. Paula is daring. She and her husband arrived ready to pick ten days before the U.S. Forest Service opened the season.
When the police caught them with mushrooms in their truck, her husband pretended that he couldn’t speak English, while Paula berated the officials. Paula is cute and looks like a child; she can get away with more sass than others. Still, I was surprised at the chutzpah she claimed. She said she dared the police to interfere with her activities. They asked her where she found the mushrooms. “Under green trees.” Where were these green trees? “All trees are green trees,” she insisted. Then she pulled out her cell phone and started calling her supporters.

What is freedom? U.S. immigration policy differentiates “political refugees” from “economic refugees,” granting asylum only to the former. This requires immigrants to endorse “freedom” as a condition of their entry. Southeast Asian Americans had the opportunity to learn such endorsements in refugee camps in Thailand, where many spent years preparing themselves for U.S. immigration. As the Lao buyer quoted at the beginning of this chapter quipped in explaining why he picked the United States rather than France: “In France they have two kinds, freedom and communist. In the U.S. they just have one kind: freedom.” He went on to say that he prefers mushroom picking to a steady job with a good income—he has been a welder—because of the freedom.

Lao strategies for enacting freedom contrast sharply with those of the other picker group that vies for the title “most harassed by the law”: Latinos. Latino pickers tend to be undocumented migrants who fit mushroom foraging into a year-round schedule of outdoor work. During mushroom season many live hidden in the forest instead of in the legally required industrial camps and motels where identification and picking permits might be checked. Those I knew had multiple names, addresses, and papers. Mushroom arrests could lead not just to fines but also to loss of vehicles (for faulty papers) and deportation. Instead of sassing the law, Latino pickers tried to stay out of the way, and, if caught, juggle papers and sources of legitimation and support. In contrast, most Lao pickers, as refugees, are citizens and, embracing freedom, hustle for more room.

Contrasts such as these motivated my search to understand the cultural engagements with war that shape the practices of freedom of white veterans and Cambodian, Hmong, and Lao refugees. Veterans and refugees negotiate American citizenship through endorsing and enacting freedom. In this practice, militarism is internalized; it infuses the landscape; it inspires strategies of foraging and entrepreneurship. Among commercial matsutake pickers in Oregon, freedom is a “boundary object,” that is, a shared concern that yet takes on many meanings and leads in varied directions.4 Pickers arrive every year to search out matsutake for Japanese-sponsored supply chains because of their overlapping yet diverging commitments to the freedom of the forest. Pickers’ war experiences motivate them to come back year after year to extend their living survival. White vets enact trauma; Khmer heal war wounds; Hmong remember fighting landscapes; Lao push the envelope. Each of these historical currents mobilizes the practice of picking mushrooms as the practice of freedom. Thus, without any corporate recruitment, training, or discipline, mountains of mushrooms are gathered and shipped to Japan.
Mushroom at the End of the World 29-45
The problem of precarious survival helps us see what is wrong. Precarity is a state of acknowledgment of our vulnerability to others. In order to survive, we need help, and help is always the service of another, with or without intent. When I sprain my ankle, a stout stick may help me walk, and I enlist its assistance. I am now an encounter in motion, a woman-and-stick. It is hard for me to think of any challenge I might face without soliciting the assistance of others, human and not human. It is unselfconscious privilege that allows us to fantasize—counterfactually—that we each survive alone. If survival always involves others, it is also necessarily subject to the indeterminacy of self-and-other transformations. We change through our collaborations both within and across species. The important stuff for life on earth happens in those transformations, not in the decision trees of self-contained individuals. Rather than seeing only the expansion-and-conquest strategies of relentless individuals, we must look for histories that develop through contamination. Thus, how might a gathering become a “happening”?

Collaboration is work across difference, yet this is not the innocent diversity of self-contained evolutionary tracks. The evolution of our “selves” is already polluted by histories of encounter; we are mixed up with others before we even begin any new collaboration. Worse yet, we are mixed up in the projects that do us the most harm. The diversity that allows us to enter collaborations emerges from histories of extermination, imperialism, and all the rest. Contamination makes diversity. This changes the work we imagine for names, including ethnicities and species. If categories are unstable, we must watch them emerge within encounters. To use category names should be a commitment to tracing the assemblages in which these categories gain a momentary hold.4 Only from here can I return to meeting Mien and matsutake in a Cascades forest. What does it mean to be “Mien” or to be “forest”? These identities entered our meeting from histories of transformative ruin, even as new collaborations changed them.

Oregon’s national forests are managed by the U.S. Forest Service, which aims to conserve forests as a national resource. Yet the conservation status of the landscape has been hopelessly confused by a hundred-year history of logging and fire suppression. Contamination creates forests, transforming them in the process. Because of this, noticing as well as counting is required to know the landscape.

Oregon’s forests played a key role in the U.S. Forest Service’s early-twentieth-century formation, during which foresters worked to find kinds of conservation that timber barons would support.5 Fire suppression was the biggest result: Loggers and foresters could agree on it. Meanwhile, loggers were eager to take out the ponderosa pines that so impressed white pioneers in the eastern Cascades. The great ponderosa stands were logged out by the 1980s. It turned out that they could not reproduce without the periodic fires the Forest Service had stopped. But firs and spindly lodgepole pines were flourishing with fire exclusion—at least if flourishing means spreading in ever denser and more flammable thickets of live, dead, and dying trees.6 For several decades, Forest Service management has meant, on the one hand, trying to make the ponderosas come back, and, on the other, trying to thin, cut, or otherwise control flammable fir and lodgepole thickets.
Ponderosa, fir, and lodgepole, each finding life through human disturbance, are now creatures of contaminated diversity. Surprisingly, in this ruined industrial landscape, new value emerged: matsutake. Matsutake fruit especially well under mature lodgepole, and mature lodgepole exists in prodigious numbers in the eastern Cascades because of fire exclusion. With the logging of ponderosa pines and fire exclusion, lodgepoles have spread, and despite their flammability, fire exclusion allows them a long maturity. Oregon matsutake fruit only after forty to fifty years of lodgepole growth, made possible by excluding fire.7 The abundance of matsutake is a recent historical creation: contaminated diversity.

And what are Southeast Asian hill people doing in Oregon? Once I realized that almost everyone in the forest was there for explicitly “ethnic” reasons, finding out what these ethnicities implied became urgent. I needed to know what created communal agendas that included mushroom hunting; thus I followed the ethnicities they named for me. The pickers, like the forests, must be appreciated in becoming, not just counted. Yet almost all U.S. scholarship on Southeast Asian refugees ignores ethnic formation in Southeast Asia. To counteract this omission, allow me an extended story. Despite their specificity, Mien stand in here for all the pickers—and the rest of us too. Transformation through collaboration, ugly and otherwise, is the human condition.

The distant ancestors of Kao’s Mien community are imagined as emerging already in contradiction and on the run. Moving through the hills of southern China to hide from imperial power, they also treasured imperial documents exempting them from taxation and corvĂ©e. A little more than a hundred years ago, some moved farther out of the way—into the northern hills of what are now Laos, Thailand, and Vietnam. They brought a distinctive script, based on Chinese characters and used for writing to spirits.8 As both refusal and acceptance of Chinese authority, the script is a neat expression of contaminated diversity: Mien are Chinese, and not Chinese. Later they would learn to be Lao/Thai, but not Lao/Thai, and then American, and not American. Mien are not known for their respect for national boundaries; communities have repeatedly crossed back and forth, especially when armies threaten. (Kao’s uncle learned Chinese and Lao from cross-border movement.) Yet, despite this mobility, Mien are hardly an autonomous tribe, free from the control of the state. Hjorleifur Jonsson has shown how Mien lifeways have repeatedly changed in relation to state agendas. In the first half of the twentieth century, for example, Mien in Thailand organized their communities around the opium trade. Only large, polygynous households controlled by powerful senior men could keep hold of the opium contracts. Some households had one hundred members. The Thai state did not mandate this family organization; it arose from the Mien encounter with opium. In a similarly unplanned process in the late twentieth century, Mien in Thailand came to identify as an “ethnic group” with distinctive customs; Thai policy toward minorities made this identity possible.
Meanwhile, along the Laos/Thailand border, Mien slipped back and forth, evading state policy on both sides even while being shaped by it.9 Those cross-boundary Asian hills have known many peoples, and Mien sensibilities have developed in engagement with these shifting groups as all have negotiated imperial governance and rebellion, licit and illicit trade, and millennial mobilization.

To understand how Mien came to be matsutake pickers requires considering their relationship with another group now in the Oregon forests, Hmong. Hmong are like Mien in many ways. They also ran south from China; they also crossed borders and occupied the high altitudes suited to commercial opium farming; they also value their distinctive dialects and traditions. A mid-twentieth-century millennial movement started by an illiterate farmer produced a completely original Hmong script. This was the time of the U.S.-Indochina War, and Hmong were in the thick of it. As linguist William Smalley points out, discarded military ordnance in the area would have exposed this inspired farmer to English, Russian, and Chinese writing, and he might also have seen Lao and Thai.10 Emerging from the trash of war, this distinctive and multiply derivative Hmong script, like that of the Mien, is a wonderful icon for contaminated diversity.

Hmong are proud of their patrilineal clan organization, and, according to ethnographer William Geddes, clans have been key to forming long-distance ties among men.11 Clan relations allowed military leaders to recruit outside their face-to-face networks. This proved relevant when the United States took over imperial oversight after the French defeat by Vietnamese nationalists in 1954, thus inheriting the loyalty of French-trained Hmong soldiers. One of those soldiers became General Vang Pao, who mobilized Hmong in Laos to fight in behalf of the United States, becoming what 1970s CIA director William Colby called “the biggest hero of the Vietnam War.”12 Vang Pao recruited not just individuals but villages and clans into the war. Although his claims to represent Hmong disguised the fact that Hmong also fought for the communist Pathet Lao, Vang Pao made his cause simultaneously a Hmong cause and a U.S. anticommunist cause. Through his control over opium transport, bombing targets, and CIA rice drops, as well as his charisma, Vang Pao generated enormous ethnic loyalty, consolidating one kind of “Hmong.”13 It is hard to think of a better example of contaminated diversity.

Some Mien fought in Vang Pao’s army. Some followed Hmong to the Ban Vinai refugee camp Vang Pao helped to have established in Thailand after he fled Laos following the U.S. withdrawal in 1975. But the war did not give Mien the sense of ethnic-political unity it gave Hmong. Some Mien fought for other political leaders, including Chao La, a Mien general. Some left Laos for Thailand long before the communist victory in Laos. Jonsson’s oral histories of Mien in the United States suggest that what are often imagined as innocent “regional” groupings of Laotian Mien—northern Mien, southern Mien—refer to divergent histories of forced resettlement by Vang Pao and Chao La, respectively.14 War, he argues, creates ethnic identities.15 War forces people to move but also cements ties to reimagined ancestral cultures. Hmong helped to stimulate the mix, and Mien came to participate. In the 1980s, Mien who had crossed from Laos to Thailand joined U.S.
programs to bring anticommunists from Southeast Asia to the United States and allow them, through refugee status, to become citizens. The refugees arrived in the United States just as welfare was being cut; they were offered few resources for livelihood or assimilation. Most of those from Laos and Cambodia had neither money nor Western education; they moved into off-the-grid jobs such as matsutake picking. In the Oregon woods, they use skills honed in Indochinese wars. Those experienced in jungle fighting rarely get lost, since they know how to find their way in unfamiliar forests. Yet the forest has not stimulated a generic Indochinese—or American—identity. Mimicking the structure of Thai refugee camps, Mien, Hmong, Lao, and Khmer keep their places separate. Yet white Oregonians sometimes call them all “Cambodians,” or, with even more confusion, “Hong Kongs.” Negotiating multiple forms of prejudice and dispossession, contaminated diversity proliferates.

I hope that at this point you are saying, “This is hardly news! I can think of plenty of similar examples from the landscape and people around me.” I agree; contaminated diversity is everywhere. If such stories are so widespread and so well known, the question becomes: Why don’t we use these stories in how we know the world? One reason is that contaminated diversity is complicated, often ugly, and humbling. Contaminated diversity implicates survivors in histories of greed, violence, and environmental destruction. The tangled landscape grown up from corporate logging reminds us of the irreplaceable graceful giants that came before. The survivors of war remind us of the bodies they climbed over—or shot—to get to us. We don’t know whether to love or hate these survivors. Simple moral judgments don’t come to hand.

Worse yet, contaminated diversity is recalcitrant to the kind of “summing up” that has become the hallmark of modern knowledge. Contaminated diversity is not only particular and historical, ever changing, but also relational. It has no self-contained units; its units are encounter-based collaborations. Without self-contained units, it is impossible to compute costs and benefits, or functionality, to any “one” involved. No self-contained individuals or groups assure their self-interests oblivious to the encounter. Without algorithms based on self-containment, scholars and policymakers might have to learn something about the cultural and natural histories at stake. That takes time, and too much time, perhaps, for those who dream of grasping the whole in an equation. But who put them in charge?

If a rush of troubled stories is the best way to tell about contaminated diversity, then it’s time to make that rush part of our knowledge practices. Perhaps, like the war survivors themselves, we need to tell and tell until all our stories of death and near-death and gratuitous life are standing with us to face the challenges of the present. It is in listening to that cacophony of troubled stories that we might encounter our best hopes for precarious survival. This book tells a few such stories, which take me not only to the Cascades but also to Tokyo auctions, Finnish Lapland, and a scientist’s lunchroom, where I am so excited I spill my tea. Following all these stories at once is as challenging—or, once one gets the hang of it, as simple—as singing a madrigal in which each singer’s melody courses in and out of the others.
Such interwoven rhythms perform a still lively temporal alternative to the unified progress-time we still long to obey. To listen to and tell a rush of stories is a method. And why not make the strong claim and call it a science, an addition to knowledge? Its research object is contaminated diversity; its unit of analysis is the indeterminate encounter. To learn anything we must revitalize arts of noticing and include ethnography and natural history. But we have a problem with scale. A rush of stories cannot be neatly summed up. Its scales do not nest neatly; they draw attention to interrupting geographies and tempos. These interruptions elicit more stories. This is the rush of stories’ power as a science. Yet it is just these interruptions that step out of the bounds of most modern science, which demands the possibility for infinite expansion without changing the research framework.

[Photo caption: Conjuring time, Tokyo. Arranging matsutake for auction at the Tsukiji wholesale market. Turning mushrooms into inventory takes work: commodities accelerate to market tempos only when earlier ties are severed.]

Arts of noticing are considered archaic because they are unable to “scale up” in this way. The ability to make one’s research framework apply to greater scales, without changing the research questions, has become a hallmark of modern knowledge. To have any hope of thinking with mushrooms, we must get outside this expectation. In this spirit, I lead a foray into mushroom forests as “anti-plantations.”

The expectation of scaling up is not limited to science. Progress itself has often been defined by its ability to make projects expand without changing their framing assumptions. This quality is “scalability.” The term is a bit confusing, because it could be interpreted to mean “able to be discussed in terms of scale.” Both scalable and nonscalable projects, however, can be discussed in relation to scale. When Fernand Braudel explained history’s “longue durĂ©e” or Niels Bohr showed us the quantum atom, these were not projects of scalability, although they each revolutionized thinking about scale. Scalability, in contrast, is the ability of a project to change scales smoothly without any change in project frames. A scalable business, for example, does not change its organization as it expands. This is possible only if business relations are not transformative, changing the business as new relations are added. Similarly, a scalable research project admits only data that already fit the research frame. Scalability requires that project elements be oblivious to the indeterminacies of encounter; that’s how they allow smooth expansion. Thus, too, scalability banishes meaningful diversity, that is, diversity that might change things.

Scalability is not an ordinary feature of nature. Making projects scalable takes a lot of work. Even after that work, there will still be interactions between scalable and nonscalable project elements. Yet, despite the contributions of thinkers such as Braudel and Bohr, the connection between scaling up and the advancement of humanity has been so strong that scalable elements receive the lion’s share of attention. The nonscalable becomes an impediment. It is time to turn attention to the nonscalable, not only as objects for description but also as incitements to theory. A theory of nonscalability might begin in the work it takes to create scalability—and the messes it makes.
One vantage point might be that early and influential icon for this work: the European colonial plantation. In their sixteenth- and seventeenth-century sugarcane plantations in Brazil, for example, Portuguese planters stumbled on a formula for smooth expansion. They crafted self-contained, interchangeable project elements, as follows: exterminate local people and plants; prepare now-empty, unclaimed land; and bring in exotic and isolated labor and crops for production. This landscape model of scalability became an inspiration for later industrialization and modernization. The sharp contrast between this model and the matsutake forests that form the subject of this book is a useful platform from which to build a critical distance from scalability.1

Consider the elements of the Portuguese sugarcane plantation in colonial Brazil. First, the cane, as Portuguese knew it: Sugarcane was planted by sticking a cane in the ground and waiting for it to sprout. All the plants were clones, and Europeans had no knowledge of how to breed this New Guinea cultigen. The interchangeability of planting stock, undisturbed by reproduction, was a characteristic of European cane. Carried to the New World, it had few interspecies relations. As plants go, it was comparatively self-contained, oblivious to encounter. Second, cane labor: Portuguese cane-growing came together with their newly gained power to extract enslaved people from Africa. As cane workers in the New World, enslaved Africans had great advantages from growers’ perspectives: they had no local social relations and thus no established routes for escape. Like the cane itself, which had no history of either companion species or disease relations in the New World, they were isolated. They were on their way to becoming self-contained, and thus standardizable as abstract labor. Plantations were organized to further alienation for better control. Once central milling operations were started, all operations had to run on the time frame of the mill. Workers had to cut cane as fast as they could, and with full attention, just to avoid injury. Under these conditions, workers did, indeed, become self-contained and interchangeable units. Already considered commodities, they were given jobs made interchangeable by the regularity and coordinated timing engineered into the cane.

Interchangeability in relation to the project frame, for both human work and plant commodities, emerged in these historical experiments. It was a success: Great profits were made in Europe, and most Europeans were too far away to see the effects. The project was, for the first time, scalable—or, more accurately, seemingly scalable.2 Sugarcane plantations expanded and spread across the warm regions of the world. Their contingent components—cloned planting stock, coerced labor, conquered and thus open land—showed how alienation, interchangeability, and expansion could lead to unprecedented profits. This formula shaped the dreams we have come to call progress and modernity. As Sidney Mintz has argued, sugarcane plantations were the model for factories during industrialization; factories built plantation-style alienation into their plans.3 The success of expansion through scalability shaped capitalist modernization. By envisioning more and more of the world through the lens of the plantation, investors devised all kinds of new commodities.
Eventually, they posited that everything on earth—and beyond—might be scalable, and thus exchangeable at market values. This was utilitarianism, which eventually congealed as modern economics and contributed to forging more scalability—or at least its appearance.

Contrast the matsutake forest: unlike sugarcane clones, matsutake make it evident that they cannot live without transformative relations with other species. Matsutake mushrooms are the fruiting bodies of an underground fungus associated with certain forest trees. The fungus gets its carbohydrates from mutualistic relations with the roots of its host trees, for whom it also forages. Matsutake make it possible for host trees to live in poor soils, without fertile humus. In turn, they are nourished by the trees. This transformative mutualism has made it impossible for humans to cultivate matsutake. Japanese research institutions have thrown millions of yen into making matsutake cultivation possible, but so far without success. Matsutake resist the conditions of the plantation. They require the dynamic multispecies diversity of the forest—with its contaminating relationality.4

Furthermore, matsutake foragers are far from the disciplined, interchangeable laborers of the cane fields. Without disciplined alienation, no scalable corporations form in the forest. In the U.S. Pacific Northwest, foragers flock to the forest following “mushroom fever.” They are independent, finding their way without formal employment. Yet it would be a mistake to see matsutake commerce as a primitive survival; this is the misapprehension of progress blinders. Matsutake commerce does not occur in some imagined time before scalability. It is dependent on scalability—in ruins. Many pickers in Oregon are displaced from industrial economies, and the forest itself is the remains of scalability work. Both matsutake commerce and ecology depend on interactions between scalability and its undoing.

The U.S. Pacific Northwest was the crucible of U.S. timber policy and practice in the twentieth century. This region attracted the timber industry after it had already destroyed midwestern forests—and just as scientific forestry became a power in U.S. national governance. Private and public (and, later, environmentalist) interests battled it out in the Pacific Northwest; the scientific-industrial forestry on which they tenuously agreed was a creature of many compromises. Still, here is a place to see forests treated as much like scalable plantations as they might ever be. During the heyday of joint public-private industrial forestry in the 1960s and 1970s, this meant monocrop even-aged timber stands.5 Such management took a huge amount of work. Unwanted tree species, and indeed all other species, were sprayed with poison. Fires were absolutely excluded. Alienated work crews planted “superior” trees. Thinning was brutal, regular, and essential. Proper spacing allowed maximum rates of growth as well as mechanical harvesting. Timber trees were a new kind of sugarcane: managed for uniform growth, without multispecies interference, and thinned and harvested by machines and anonymous workers. Despite its technological prowess, the project of turning forests into plantations worked out unevenly at best.
Earlier, timber companies had made a killing by just harvesting the most expensive trees; when national forests were opened for logging after World War II, they continued “high grading,” a practice dignified under standards that said mature trees were better replaced by fast-growing youngsters. Clear-cutting, or “even-aged management,” was introduced to move beyond the inefficiencies of such pick-and-choose harvesting. But the regrowing trees of scientific-industrial management were not so inviting, profit-wise. Where the great timber species had earlier been maintained by Native American burning, it was difficult to reproduce the “right” species. Firs and lodgepole pines grew up where great ponderosas had once held dominance. Then the price of Pacific Northwest timber plummeted. Without easy pickings, timber companies began to search elsewhere for cheaper trees. Without the political clout and funds of big timber, the region’s Forest Service districts lost funding, and maintaining plantation-like forests became cost-prohibitive. Environmentalists started going to the courts, asking for stricter conservation protections. They were blamed for the crashing timber economy, but the timber companies—and most of the big trees—had already left.6

By the time I wandered into the eastern Cascades, in 2004, fir and lodgepole had made great advances across what once had been almost pure stands of ponderosa pine. Although signs along the highways still said “Industrial Timber,” it was hard to imagine industry. The landscape was covered with thickets of lodgepole and fir: too small for most timber users; not scenic enough for recreation. But something else had emerged in the regional economy—matsutake. Forest Service researchers in the 1990s found that the annual commercial value of the mushrooms was at least as much as the value of the timber.7 Matsutake had stimulated a nonscalable forest economy in the ruins of scalable industrial forestry.

The challenge for thinking with precarity is to understand the ways projects for making scalability have transformed landscape and society, while also seeing where scalability fails—and where nonscalable ecological and economic relations erupt. It is key to take note of the careers of both scalability and nonscalability. But it would be a huge mistake to assume that scalability is bad and nonscalability is good. Nonscalable projects can be as terrible in their effects as scalable ones. Unregulated loggers destroy forests more rapidly than scientific foresters. The main distinguishing feature between scalable and nonscalable projects is not ethical conduct but rather that the latter are more diverse because they are not geared up for expansion. Nonscalable projects can be terrible or benign; they run the range.

New eruptions of nonscalability do not mean that scalability has disappeared. In an era of neoliberal restructuring, scalability is increasingly reduced to a technical problem rather than a popular mobilization in which citizens, governments, and corporations should work together. As chapter 4 explores, the articulation between scalable accounting and nonscalable workplace relations is increasingly accepted as a model for capitalist accumulation. Production does not have to be scalable as long as elites are able to regularize their account books. Can we keep sight of the continuing hegemony of scalability projects while immersing ourselves in the forms and tactics of precarity?
Part 2 of this book traces the interplay between scalable and nonscalable in forms of capitalism in which scalable accounting allows nonscalable labor and natural resource management. In this “salvage” capitalism, supply chains organize the translation process in which wildly diverse forms of work and nature are made commensurate—for capital. Part 3 returns to matsutake forests as anti-plantations in which transformative encounters create the possibilities of life. The contaminated diversity of ecological relations takes center stage.

But first, a foray into indeterminacy: the central feature of the assemblages I follow. So far, I’ve defined assemblages in relation to their negative features: their elements are contaminated and thus unstable; they refuse to scale up smoothly. Yet assemblages are defined by the strength of what they gather as much as their always-possible dissipation. They make history. This combination of ineffability and presence is evident in smell: another gift of the mushroom.
Anti-Crisis by Janet Roitman
“Normalcy, Never Again” is the title of the speech penned for an address to be delivered by Martin Luther King Jr. on the steps of the Lincoln Memorial on August 28, 1963. That day, however, Martin Luther King Jr. deviated from his “Normalcy—Never Again” text, instead improvising what is now known as the “I Have a Dream” speech.1 I learned of the original, official title of his address on the very day of his birthday on January 15, 2009. Five days later, deeply conscious of King’s legacy and his dream on the Washington Mall, Barack Obama, only just anointed as the forty-fourth president of the United States, defined contemporary American history in terms of crisis: “We are in the midst of crisis.”2
Like King’s “normalcy, never,” Obama’s crisis is used to characterize a moment in history so as to mark off a new age, or what is characterized as a “journey.” This journey, defined by Obama in terms of “struggle” and “sacrifice,” is historical insofar as it pertains to an economic and political conjuncture. And yet, after giving an inventory of the historical facts of crisis—homes lost, jobs shed, businesses shuttered—Obama added a qualifier: “These are the indicators of crisis,” he said, “subject to data and statistics. Less measurable but no less profound is a sapping of confidence across our land—a nagging fear that America’s decline is inevitable, and that the next generation must lower its sights.” He then concluded: “This is the source of our confidence—the knowledge that God calls upon us to shape an uncertain destiny.” Such knowledge in the face of uncertainty implies that the historical crisis entails, or perhaps constitutes, a transhistorical journey, being, as he insisted in his closing words, a matter of hope, promise, and grace: “With hope and virtue, let us brave once more the icy currents and endure what storms may come. Let it be said by our children’s children that when we were tested we refused to let this journey end, that we did not turn our back nor did we falter; and with eyes fixed on the horizon and God’s grace upon us, we carried forth that great gift of freedom and delivered it safely to future generations.” Crisis is a historical event as much as it is an enduring condition of life and even the grounds for a transcendent human condition.
Obama noted in his address that the lived experience of what is deemed “crisis” should not be reduced to an ensemble of socioeconomic indicators. He sought to convey to the American public that he would face their present conditions of life as entailing an experience of crisis. His secular narrative of human history is conjugated with a Christian narrative of witnessing. And yet it clearly echoes self-described secular accounts in the social sciences that attempt to relate the ways in which history can be characterized as crisis; the ways that social life can be said to be in crisis; and the ways that crisis becomes an imperative, or a device for understanding how to act effectively in situations that belie, for the actors, a sense of possibility (Mbembe and Roitman 1995). But here the question arises: if crisis designates something more than a historical conjuncture, what is the status of that term? How did crisis, once a signifier for a critical, decisive moment, come to be construed as a protracted historical and experiential condition? The very idea of crisis as a condition suggests an ongoing state of affairs. But can one speak of a state of enduring crisis? Is this not an oxymoron?
In reflecting upon the status of this term as the most common and most pervasive qualifier of contemporary historical conditions—and manner of denoting “history” itself—this book sets the stage for a general inquiry into the status of “crisis” in social science theory and writing and therefore offers a departure, not a resolution.3 In what follows, I am not concerned to theorize the term “crisis” or to come up with a working definition of it. Rather than essentialize it so as to make better use of it, I seek to understand the kinds of work the term “crisis” is or is not doing in the construction of narrative forms. Likewise, I am not concerned to demonstrate that crisis signifies something new in contemporary narrative accounts or that it now has a novel status in a history of ideas. I will not offer a review of the literature on crisis, nor will I show how contemporary usages of the term “crisis” are wrong and hence argue for a true, or more correct meaning.4
What I will consider is how crisis is constituted as an object of knowledge. Crisis is an omnipresent sign in almost all forms of narrative today; it is mobilized as the defining category of historical situations, past and present. The recent bibliography in the social sciences and popular press is vast; crisis texts are a veritable industry.5 The geography of crisis has come to be world geography CNN-style: crisis in Afghanistan, crisis in Darfur, crisis in Iran, crisis in Iraq, crisis in the Congo, crisis in Cairo, crisis in the Middle East, crisis on Main Street. But beyond global geopolitics, crisis qualifies the very nature of events: humanitarian crisis, environmental crisis, energy crisis, debt crisis, financial crisis, and so forth. Through the term “crisis,” the singularity of events is abstracted by a generic logic, making crisis a term that seems self-explanatory. As I hope to make clear in what follows, crisis serves as the noun-formation of contemporary historical narrative; it is a non-locus from which to claim access to both history and knowledge of history. In other words, crisis is mobilized in narrative constructions to mark out or to designate “moments of truth”; it is taken to be a means to access historical truth, and even a means to think “history” itself. Such moments of truth are often defined as turning points in history, when decisions are taken or events are decided, thus establishing a particular teleology. And similarly, though seemingly without recourse to teleology, crisis moments are defined as instances when normativity is laid bare, such as when the contingent or partial quality of knowledge claims—principles, suppositions, premises, criteria, and logical or causal relations—are disputed, critiqued, challenged, or disclosed. It follows that crisis is posited as an epistemological impasse and, as we will see below, is claimed to found the possibility for other historical trajectories or even for a (new) future.
Barack Obama invoked the revelatory power of crisis in this way: as a moment that reveals truth, the crisis denoted by the limits—or “bursting”—of the so-called financial bubble divulged alleged “false value” and offered the hope of reestablishing or relocating “true value,” or what we like to think of as the fundamentals of the economy and the proper trajectory of history, both being dependent on adequate knowledge claims. As a category denoting a moment of truth in this way, and despite presumptions that crisis does not imply, in itself, a definite direction of change, the term “crisis” signifies a diagnostic of the present; it implies a certain telos because it is inevitably, though most often implicitly, directed toward a norm. Evoking crisis entails reference to a norm because it requires a comparative state for judgment: crisis compared to what? That question evokes the significance of crisis as an axiological problem, or the questioning of the epistemological or ethical grounds of certain domains of life and thought.
This book inquires into the significance of crisis in-and-of-itself. Instead of starting with particular crises—the crisis of Africa, the financial crisis, the crisis of subjectivity, the neoliberal crisis—and then rushing to explain their causes and fundaments, I first ask questions of the concept of crisis itself.6 To do so, I explore how we think crisis came to be a historical concept: I ask how crisis achieves its status as a historico-philosophical concept, and I ask how we practice that very premise in narrations of history and in the determinations of what even counts as history. To explore the orthodoxy of crisis—the conventional historiography of the term and its consequential practice—I take an impudent and somewhat puzzling step. In the pages that follow, we meet up with Reinhart Koselleck and Robert Shiller, Thomas Hobbes and David Harvey, John Locke and Michael Lewis, the Masonic lodges and the hedge fund managers. We shift from prophecy and prognosis to risk-based pricing and adjustable rate mortgages, from epochal consciousness to asset bubbles, from judgment and critique to foreclosures and forbearance. We move between the conceptual history of crisis and the practice of crisis analysis, from historiography to contemporary financial history. There is no rush to explain the crisis. Instead, what follows is a deliberate review of the conventional account of the emergence of crisis as a historico-philosophical concept and an examination of how that concept is therefore practiced in contemporary accounts of financial crisis, permitting and enabling certain narrations and giving rise to certain questions, but not others.
While most financial analysts and homeowners are not necessarily aware of the historico-philosophical status of the term “crisis,” this book indicates that the lines drawn between academic and popular crisis narrations are not as bold as is presumed. This book attempts to erase, or at least lighten, those lines. It does so by putting on par academic analyses of financial crisis and so-called popular accounts of financial crisis. In 2007–9, accession to crisis—or credence in the claim that “this is crisis”—led to a frenzy of academic analyses by economists, sociologists, political scientists, historians, and anthropologists, all attempting to explain “the crisis.” It likewise inspired a host of journalistic and novelistic accounts of the financial crisis of 2007–9.7 A cross-reading of these literatures gives insight into how the technical “facts” of the financial crisis become folk wisdom or, better, tacit knowledge—and this, despite the “mutually inconsistent narratives” that can be gleaned from the dizzying variety of accounts (Lo 2012). Indeed, the very distinction between expert and lay relies on stable subject positions that are not tenable. Where, for instance, do we draw the lines between experts and laypersons, academics and commoners? Accountants and corporate managers are not necessarily academic economists, but are they considered laypersons with respect to financial analysis? Are lawyers, engineers, and mathematicians working in private nonfinancial firms or laboratories to be considered laypersons in contrast to academic economists and financial analysts? Are economic anthropologists housed in universities laypersons in relation to their colleagues in economics departments?
One might reply that the real laypersons are those holding mortgages, those who have been foreclosed upon. But even here, the dark line drawn between academic and lay must be blanched. In his brilliant, carefully crafted elaboration of an anthropology of the contemporary, Paul Rabinow explains and illustrates the “mode of adjacency” necessary to anthropological inquiry, the goal of which is “identifying, understanding, and formulating something actual neither by directly identifying with it nor by making it exotic” (2008, 49, my emphasis). He notes the disjuncture between “those authorized to pronounce prescriptive speech acts” and those who are not—between, let’s say, financial analysts and journalists, on the one hand, and homeowners, on the other. And he concludes (79): “Thus, while many of the serious speech acts about the moral landscape are produced by actors who are reflective about their own positions, the anthropologist can approach their discourses and practices like those of any other. Theorists, philosophers, ethicists, scientists, and the like can thus qualify for inclusion in the category that used to be called ‘natives.’”8 While the present book is not based on anthropological fieldwork of the practitioners of crisis analysis, it takes its cue from Rabinow’s sense of “untimely work.” I suspend judgment about expert claims to crisis so as to see how those very (expert) claims and (lay) accession to those claims serve not radical change, as expected with crisis, but rather the affirmation of long-standing principles, thereby precluding certain thoughts and acts, such as the outright refutation of the very idea of foreclosure as a germane or valid concept and action. This book tacks between the historiography of the concept of crisis and recent interpretations of what is now known as the subprime mortgage crisis, excavating the epistemological bases for certain claims (“this is crisis”) and reflecting upon how those claims engender certain types of action or practice (devaluation, foreclosure) and not others (human protest-chains around homes, the denial of the very legibility of the terms “foreclosure” and “forbearance”). In that way, this book is out of synch with the “hyperoccupied lives” (Rabinow 2008, 47) of those producing feverish crisis pronouncements, urgent crisis analyses, and clamorous crisis pamphlets—out of step with those seeking to manage or overcome the crisis.
Both Martin Luther King Jr. and Barack Obama attempt to inaugurate new historical times with reference to the concept of crisis. The redemptive and utopian quality of their historical narrations speaks to the normative and teleological nature of the concept of crisis, which, taken to be the grounds for both the human sciences and critique, is likewise construed as the grounds for transformative action, as will be made clear below.
The following account of the ways in which crisis is conceived as a historical concept—as both a particular entry point into history and as a means to reveal historical truth—makes clear how crisis is posited “as” history itself. In other words, in the social sciences, when history is taken to be immanent to social relations, crisis serves as the term that enables the very elaboration of such history. This founding role of the concept of crisis in social science narration and in the constitution and elaboration of history itself is set forth by the late German historian Reinhart Koselleck, author of perhaps the only conceptual history of crisis, which thus serves as the authoritative historiography. As I outline in detail in the chapters that follow, Koselleck provides an illustration of the temporalization of history, or the emergence of “history” as a temporal category. He attributes the emergence of the category of history as a temporality to the concomitant displacement of the term “crisis,” arguing that, by the end of the eighteenth century, crisis is the basis for the claim that one can judge history by means of a diagnosis of time. Koselleck likewise maintains that both this claim and this judgment entail a specific historical consciousness—a consciousness that posits history as a temporality upon which one can act. For this historical consciousness, crisis is a criterion for what counts as “history”; crisis signifies change, such that crisis “is” history; and crisis designates “history” as such. In this way, crisis achieves the status of a historico-philosophical concept; it is the means by which history is located, recognized, comprehended, and even posited.
I take Reinhart Koselleck’s remarkable conceptual history of crisis to be indicative of the practice of the concept of crisis. His account of how crisis achieves status as a historico-philosophical concept likewise illustrates the practice of the premise of crisis, or how it serves a set of interlocking determinations: what counts as an event, the status of an event, the qualification of history itself, and the basis of narration. I refer to Koselleck’s conceptual history on two registers: as the orthodox historiography of the term and as an account that, itself, partakes of a conventional practice of historiography, which presupposes criteria for what counts as an event and premises as to what can be narrated—or the means to distinguish between “a properly historical account of reality and a nonhistorical or ahistorical or antihistorical account” (White 2002, xii). Less concerned with the question of whether or not Koselleck’s rendering of the emergence of historical consciousness is correct or accurate, I dwell instead on the question of how the term “crisis” is posited as fundamental to this very idea of historical consciousness and to a metaphysics of history. My point is not that crisis is false or merely a constructed basis for narration; my aim is to raise questions about the status of the concept of crisis as a founding term for the elaboration of “history” per se—history being the ultimate locus of significance and the ontological status of historical temporality being taken for granted. In its practice, as we learn from Koselleck, crisis is figured as judgment: judging time in terms of analogous intervals and judging history in terms of its significance. But it equally serves expectations for world-immanent justice, or the faith that history is the ultimate form of judgment. I ask herein—inspired by Koselleck and yet putting the question to him as well—what is the burden of proof for such judgments?
By way of response, I consider the forms of critique that are necessarily engendered by crisis narrations. Critique and crisis are cognates, as Reinhart Koselleck (1988) reminds us: crisis is the basis of social and critical theory. Being bound to its cognate (critique), the concept of crisis denotes the prevailing and fairly peculiar belief that history could be alienated in terms of its philosophy—that one could perceive a dissonance between historical events and representations of those events. Crisis-claims evoke a moral demand for a difference between the past and the future. And crisis-claims evoke the possibility for new forms of historical subjectivity, transpiring through determinations of the limits of reason and knowledge. That is, crisis, or the disclosure of epistemological limits, occasions critique. This desire for (temporal) difference is described by scholars new and old as a moral task or an ethical demand, being based on a perceived discrepancy between nature and reason, technical developments and moral positioning, knowledge and human interests, constituted categories and epistemological limits, or a critical consciousness of the present state of affairs.9 No matter its quality, the discrepancy is taken to be an aporia; it establishes the formal or logical possibility of crisis. And in all cases, both prognosis and the very apprehension of history are defined by the negative occupation of an immanent world: what went wrong? For critical historical consciousness—or the specific, historical way of knowing that the world has “history”—historical significance is discerned in terms of epistemological or ethical failure. Without an inviolate transcendental realm—God, reason, truth—from which to signify human history, or because observation takes place from within immanence, we effectively assume a negative occupation of the immanent world.10
By excavating the crisis term in the critique-and-crisis cognate, by marking their co-constitution, I hope to draw attention to the means by which crisis serves as a distinction or transcendental placeholder in the occupation of an immanent world. In the words of William Rasch (2002, 20), inspired by Niklas Luhmann, “In a world where descriptions proliferate and faith in the authority of reason has gone the way of faith in the authority of God, contingency becomes the transcendental placeholder.”11 As we will see below, crisis serves as a transcendental placeholder because it is a means for signifying contingency; it is a term that allegedly allows one to think the “otherwise.” Though not concerned with the term “crisis,” Rasch presents my point of departure clearly: “If . . . moral codes (commandments), Holy Scripture, papal and royal edicts, and the voice of prophets and visionaries no longer deliver direct evidence of the transcendent realm, but rather become historicized and seen as socially constructed artifacts, the task of reclaiming authority must be negotiated within the domain of an immanence that has been loosed from its transcendent anchorage. The world is as it is, but it could be otherwise. How that ‘otherwise’ is to be thought becomes the ‘quasi-transcendental’ task of an immanence trying to think itself” (Rasch 2000, 130, my emphasis). The concept of crisis is crucial to the “how” of thinking otherwise. And as a term that serves the practice of unveiling supposed underlying contradictions, or latencies, it is a distinction that transcends oppositions and dichotomies. Therefore, this book designates anti-crisis: there is not “crisis” versus “noncrisis,” both of which can be observed empirically; rather, crisis is a logical observation that generates meaning in a self-referential system, or a non-locus from which to signify contingency and paradox.12 And the judgment of crisis is necessarily a post hoc interrogation: what went wrong? Crisis is posited as an a priori; the grounds for knowledge of crisis are neither questioned nor made explicit. And hence contemporary narratives of crisis elude two questions: How can one know crisis in history? And how can one know crisis itself?
Crisis is a historical “super concept” (Oberbegriffe) (Koselleck 2006, 392) that, to my mind, raises questions rather than facilitating answers. If crisis denotes a critical, decisive moment, or a turning point, does this not imply a certain philosophy of history? And what does it take to posit the very idea that meaning or thought can be in a state of crisis? Moreover, when crisis is posited as the very condition of contemporary situations, is it not the case that certain questions become possible while others are foreclosed? This book explores these questions.
To do so, we embark on a trek over the anxious terrain of crisis narration. This trek is one of observation: we observe how academic and nonacademic observers themselves observe economic and financial actors, both human and technical, which they locate, define, and interpret as having produced crisis. We observe, then, the blind spot of second-order observation. Moreover, through this survey of the practice of crisis in contemporary narrations of “the 2007–9 financial crisis,” we see how accession to crisis engenders certain narrations and note how the term enables and forecloses various kinds of questions. Through this review of a host of recent narratives of financial crisis, I am not seeking to establish the relative veracity of these accounts; I am not interested in whether or not certain purported explanations of “the crisis” are more or less tenable. Although I do explore questions relating to the production of value and risk, and the status of subprime and houses, I do so only insofar as these terms constitute the grammar of financial crisis narratives. The point of this grand tour of crisis narratives is not to determine the best way to decipher the crisis or to establish who “got it right” in recent analyses. The point is to demonstrate how the term “crisis” establishes the conditions of possible histories and to indicate how it is a blind spot in social science narrative constructions.
We thus take a journey through a wide-ranging array of interpretations, each of which claims a particular tradition: liberal economy, neo-Keynesian, neo-Marxist, cultural studies, and cultural economy. All proceed from the question, what went wrong? All search for origins, sources, roots, causes, reasons . . . none waver in their faith in crisis, a term that is posited without question or doubt. All seek to demonstrate deviations from the proper course of history and distortions in human knowledge and practice—the discrepancy between the world and human knowledge of the world. Crisis signifies a purportedly observable chasm between “the real,” on the one hand, and what is variously portrayed in the accounts reviewed below as fictitious, erroneous, or an illogical departure from the real, on the other. The chasm signifies a supposed dissonance between empirical history and a philosophy of history—between truly grounded material value, on the one hand, and hypothetical judgments and evaluations, on the other.13 What is at issue is our alienation from history and the potential for revelation of true value and the true significance of events—of redemption, emancipation, deliverance. I ask: how can we claim to represent that chasm? What is the basis of a claim to know the locus of our alienation from underlying value, from material value, from real value, from truth value?
To conclude this expedition over the terrain of crisis narration, I put a set of particularly pragmatic questions to the narratives that I review herein: When does a credit (asset) become a debt (toxic asset)? How do we distinguish the former from the latter? At what point do houses figured as equity become figured as a debt? At what point do subprime mortgage bonds transform from an asset to a liability? And the ultimate question: When does the judgment of crisis obtain? We see, by putting these questions to contemporary crisis narratives, how crisis, in itself, cannot be located or observed as an object of first-order knowledge. The observation “money” is a first-order observation based on a distinction (money/not money); the statements “I lost money” or “Lost money is a crisis” are second-order observations. A first-order observation (money) does not indicate how the distinction (money/not money) was made; and the distinction (how the observation was made) is necessarily the object of a second-order observation.14 But taking note of crisis as a distinction, or as a second-order operation, does not amount to denying crisis. The point is to take note of the effects of the claim to crisis, to be attentive to the effects of our very accession to that judgment. Crisis engenders certain forms of critique, which politicize interest groups. This is a politics of crisis. Would not crisis, if it effectively obtained, engender not merely critique of existing relations and practices, but rather occasion the reorganization and transformation of the very boundary between “the economic” and “the political,” and, more significant, the transformation of the very intelligibility of constitutive terms, such as “debt,” “liquidity,” and “risk”? In assuming crisis as a point of departure, we remain closed off in a politics of crisis. We can ask, echoing the Occupy Wall Street movement, who should bear the burden of fading prosperity? But other constitutive questions, related to the production of effective practice, remain unarticulated, such as, how did debt come to be figured as an asset class in the first place?
To answer this latter question, I turn to the few studies of the production of value through market devices and financial infrastructures that help us to account for the efficacy of economic and financial practices, which sustain the production of value—figured as debt. Here, instead of financial crisis due to irrational speculation, corrupt culture, erroneous policy, faulty regulation, defective models, missed forecasting, or systemic failure and underlying contradictions, we have an accounting of specific practices and the production of positive—or, better, practical—knowledge, such that the claim to crisis becomes a particular (political) solution to what is declared a problem for certain domains of life. These rare observations of the production of economic and financial value without positing crisis help us to grasp how “crisis” is less a claim about error in valuation than a judgment about value. But noncrisis accounts cannot be taken as distinct “alternative” narrations insofar as they do not provide evidence against “X account of crisis” so as to prove or affirm “Y account of crisis.” In that sense, my turn to these accounts is a thought experiment: this exercise explores the grounds of narrative without crisis, but these are not alternative explanations because crisis is not their object. Doubtless, this thought experiment risks reproducing the “problem of meaning”—or the belief that there is a discrepancy between history and representations of history—insofar as it raises the possibility of narrating history otherwise.15 But here I want to underscore that critique and crisis are cognates, and so want to bring to our attention the forms of critique engendered by crisis narratives. We see that these forms of critique rest on assumptions about how categories like “the market” or “finance” should function and therefore generate conjecture about how deviations from “true” market or financial value were produced; they do not account for the ways that such value is produced in the first place. In other words, when crisis is posited as an a priori, it obviates accounts of positive, pragmatic spaces of calculative possibility. I therefore raise the possibility of noncrisis narratives and explore how possible, alternative narratives about houses and their worth might be generated without recourse to a “sociology of error” (Bloor 1991, 12), without constructing a post hoc narrative of denunciation or post hoc judgments of deviation and failure.16
Ultimately, I invite the reader to put less faith in crisis, which means asking what is at stake with crisis in-and-of-itself. “Crisis” is a term that is bound up in the predicament of signifying human history, often serving as a transcendental placeholder in ostensible solutions to that problem. In that sense, the term “crisis” serves as a primary enabling blind spot for the production of knowledge. That is, crisis is a point of view, or an observation, which itself is not viewed or observed. I apprehend the concept of crisis through the metaphor of a blind spot so as to grasp crisis as an observation that, like all observations or cognitions, does not account for the very conditions of its observation.17 Consequently, making that blind spot visible means asking questions about how we produce significance for ourselves. At least, it means asking about how we produce “history.” At most, it means asking how we might construct accounts without discerning historical significance in terms of ethical failure. Thus we might ask: what kind of narrative could be produced where meaning is not everywhere a problem?18 An answer to that question, no matter how improbable, as we will see below, requires, as a first, inaugural step, consideration of the ways in which crisis, as an enabling blind spot for the production of knowledge, entails unremitting and often implicit judgment about latencies, or errors and failings that must be eradicated and, hopefully, overcome.
Anti-Crisis
Judgment Day
“What does it mean, and what does it take to envisage a society as breaking down?” This is Michael Taussig’s question (1992, 17, emphasis in original), which he puts to himself in his reflections on terror in Colombia during the 1990s. In thinking alongside him, I de-emphasize the problem of meaning and the process of breakdown, such that the question becomes, what does it take to envisage society as breaking down? Such visions could only arise in counter-distinction to imagined alternative societies. Without them, we could not allow for such a judgment: the affirmation “this society is breaking down” requires a comparison, a comparative state of affairs.
The very etymology of the term “crisis” speaks to that requirement of judgment. Though the details of its semantic history can be found in many places, it is worth reiterating that its etymology is said to originate with the ancient Greek term krinô (to separate, to choose, to cut, to decide, to judge), which suggested a definitive decision. It is said to have had significance in the domains of law, medicine, and theology, with the medical signification prevailing by the fifth and fourth centuries BC. Associated with the Hippocratic school (Corpus Hippocratum) as part of a medical grammar, crisis denoted the turning point of a disease, or a critical phase in which life or death was at stake and called for an irrevocable decision. Significantly, crisis was not the disease or illness per se; it was the condition that called for decisive judgment between alternatives. But the term “crisis” no longer clearly signifies a singular moment of decisive judgment; we now presume that crisis is a condition, a state of affairs, an experiential category. Today, crisis is posited as a protracted and potentially persistent state of ailment and demise.
And yet, while crisis serves as the basis of a great deal of writing on topics ranging from conjunctural affairs to the human condition, extremely scant attention has been given to the conceptual histories of the term.1 Indeed, we have but one significant conceptual history, elaborated by Reinhart Koselleck and now taken to be the conventional historiography of the term. His inquiry into how crisis achieves status as a historico-philosophical concept gives insight into how we practice the premise of crisis—how it serves an ensemble of interlocking determinations: what counts as an event, the status of an event, the qualification of history itself, and the basis of narration. Koselleck is best known as a founder and practitioner of Begriffsgeschichte, a practice of historiography devoted to the study of the fundamental concepts that have given rise to, and partake of, a specific concept of “history” and a distinctly historical consciousness.2 Koselleck is crucially concerned with the historicity of concepts and the emergence of historical self-understanding, or what he takes to be the development of historical consciousness—the philosophical and experiential awareness of one’s own historical formation and of the historical quality of that knowledge itself. Through this “cultural achievement,” the concept of history emerges as a philosophical and experiential category: “European society began to think and act as if it existed in history, as if its ‘historicity’ was a feature, if not the defining feature of its identity” (White 2002, x). Koselleck characterizes this emergence in terms of discontinuity and rupture; historical consciousness marks off Neuzeit (the modern age, modernity) and prompts a guiding question for conceptual history: what kind of experience is entailed by this historical consciousness? (Koselleck 2004; and cf. Tribe 2004, xvii). He asserts (2002, 111), for instance, that the production of historical time “is subjectively enacted in humans as historical beings,” by which he means that there is a consequent “compulsion to coordinate past and future so as to be able to live at all.”3 The experience of this temporal differentiation between past and future generates a concomitant differential between experience and expectation—the source of crisis. For Koselleck, Neuzeit is experienced as problematic—riddled with crisis or in permanent crisis—because of the constant, allegedly accelerating production of both this open future and the utopian hope of fulfillment.4
Koselleck is interested in the semantic power of concepts; concepts impart meaning and experience in relation to a complex semantic network, through which we can apprehend historical transformations. In tracing this emergence of historical consciousness and Neuzeit as an experience of crisis, his conceptual history of crisis describes a decisive shift in the semantics of the term, which he claims transpired between Hippocratic medical grammar and Christian exegesis.5 Not surprisingly, one did not replace the other: in the elaboration of Christian theology, with reference to the New Testament and alongside Aristotelian legal language, krisis was paired with judicium and came to signify judgment before God, which Koselleck characterizes as possibly the unsurpassable signification of crisis in the course of its conceptual history (2002, 237; 2006, 358–59).6 Through the history of its conceptual displacements—involving the elaboration of semantic webs as opposed to a linear development of substitutions, and which I have drastically abbreviated7—the term “crisis” entailed a prognosis, which increasingly came to imply a prognosis of time.
Koselleck’s conceptual history of crisis illustrates how, over the course of the eighteenth century, a spatial metaphor comes to be a historical concept through the temporalization of the Last Judgment. His account of this complex semantic shift is part of his larger oeuvre on the emergence of the European concept of history and the ways in which associated historico-political concepts (e.g., progress) thematize time.8 Koselleck argues that prior to the achievement of this shift, crisis did not have a time; it was not historically dated and it did not signify historical dates. While throughout the seventeenth century the term had a range of political applications related to the body politic, constitutional order, and military situations, by the late eighteenth century, its religious connotation is exacerbated, though in a “post-theological mode,” or as a philosophy of history (Koselleck 2006, 370). Through its semantic history, crisis, as a concept, sheds its apocalyptic meaning: “It turns into a structural category of Christianly understood history pure and simple; eschatology is, so to speak, historically monopolized” (2002, 242).9
With the temporalization of history—or the process by which, since the late eighteenth century, time is no longer figured as a medium in which histories take place, but rather is itself conceived as having a historical quality—history no longer occurs in time; rather, time itself becomes an active, transformative (historical) principle ([1979] 2004, 236, and 2002, 165–67). For Koselleck, the temporalization of the Last Judgment is the temporalization of history: prophecy is displaced by prognosis.10 According to this (highly contested) interpretation (which relies on a “Christian conception of supersession”), prophecy involves symbols of what is already known and entails expectation in constant similitude, while prognosis, to the contrary, generates novel events.11 Rational prognosis related to intrinsic possibilities hinges on an imagined novel time that is in flight.12 Crisis serves this transposition from prophecy to prognosis, or the “channeling of millennial expectations,” because it becomes the basis for claims that one can interpret the entire course of history via a diagnosis of time.
Such evaluations about a putative temporal situation require knowledge of both the past and the future, which implies that, as a concept that has been integral to the temporalization of history, crisis entails a theory of time. In effect, time is constituted as historical through crisis; more than just a novel manner of defining and representing history per se, the temporalization of history amounts to a temporal shift in experience.13 The very notion of a historical perspective, which allows for the identification and judgment of a temporal situation, presupposes that history has a temporal quality. And, in similar fashion, the historical perspective itself is taken to have a temporal quality, making the truth of history contingent, not given once and for all. That now familiar point is based on the assumption that time is constantly being produced and that it is always new: the future is fundamentally open.14
But this constant production of the new, or of new time, is not without the production of new pasts. In order to incorporate new experiences into one’s own history—inspired by the awareness of an elsewhere and by the very idea that one constructs history, or in order to account for nonsimultaneous (diverse) and yet simultaneous (chronologically) histories15—one must be able to conceive of the past in terms of its radical or fundamental difference. Crisis comes to signify the marking out of “new time” insofar as it denotes a unique, immanent transition phase, or a specific historical epoch. The somewhat odd practice of the retrospective recognition of the past as new—an epoch can only be recognized as such (i.e., in its “true significance” for history) ex post facto—allegedly distinguishes this “epochal consciousness” and the philosophy of history of the late eighteenth century. In effect, Koselleck’s account of this historical consciousness and philosophy of history presupposes that, because time is not manifest and thus cannot be intuited, we necessarily draw on terms from the spatial realm. This notion of time as a formal, a priori condition of intuition, associated with Kant, can be contrasted to a notion of subjective historical times, or simultaneous, plural objects defined by their own measure. Koselleck’s point is that historical concepts are dependent upon metaphorical language and a spatial referent: “To talk about history and time is difficult for a reason that has to do with more than ‘history.’ Time cannot be intuited (ist anschauungslos). If a historian brings past events back to mind through his language, then the listener or reader will perhaps associate an intuition with them as well. But does he thereby have an intuition of past time? Hardly so, or only in a metaphorical use of language, for instance, in the sense in which one speaks of the time of the French Revolution without thereby making visible anything specifically temporal” (Koselleck 2002, 102). And of course the temporal significance of such concepts is always experienced and apprehended in terms of retrospective effects.16
Crisis, as a historical concept, refers to the retrospective effects of events and to their constitutive presuppositions. For what Koselleck calls the “epochal consciousness” that arises by the late eighteenth century, crisis is a criterion for what counts as “history” and it is a means of signifying change. It is a means of designating history in-and-of-itself.17 While typical to an eighteenth-century philosophy of history and a corresponding conceptualization of history in terms of progress, this epochal consciousness is nevertheless very familiar to us; it is in keeping with contemporary usage of the term as a turning point in any particular history, or as an iterative, periodizing concept. In this instance, crisis is defined as both entirely specific (because it defines a historical epoch) and as structural recurrence (because it establishes and fulfills the notion that historical change takes place in analogous forms). In sum, crisis acquires a historico-philosophical dimension and becomes, by the end of the eighteenth century, a freestanding historico-philosophical concept. Thereafter, one speaks of crisis pure and simple; it is the means by which history can be located and understood.18 This history, which is for Koselleck specifically “modern,” is constituted out of its own conditions of knowledge and action: the criteria of time.19 This new concept of history in-and-for-itself nonetheless requires a referent from which movement, transformation, and change—historical change itself—can be posited. It is in that sense that crisis is the means to “access” history and to qualify “history” as such: crisis marks history and crisis generates history.
What we forget when invoking this technical or scholastic sense of the term is its theological genealogy, which Koselleck recalls: this manner of marking both a threshold and the possibility of analogous forms that translate specificity into a general logic is the occasion for the claim to “offer historically immanent patterns of interpretation for crises that are theoretically able to do without the intervention of God” (2002, 24; and see 2004, 40–41, 240; 2006, 371).20 Koselleck affirms a secularization narrative and a Christian conception of supersession that is nicely recapitulated by Kathleen Davis (2008, 18): “Wrested from God and claimed for the ‘world,’ time only becomes truly historical through a political-theological tear that inaugurates a new ‘age’—a tear that thereby defines the relation of the world and time, and that paradoxically occupies a transcendent position by virtue of banishing transcendence.” Crisis serves that narrative, as Koselleck maintains, but it thereby becomes, as Davis says with respect to the practice of historical periodization, “its own logic, a self-identity that, through rupture rather than through presence, supplies the necessary platform for a claim to sovereignty” (2008, 18).21 To be sure, acquiescence to the truth of the notion of history as immanent does not imply that history is no longer postulated as a predicament. With reference to a host of witnesses of the impending or affirmed crises, including Robespierre, Rousseau, Diderot, Thomas Paine, Burke, Herder, Fichte, Saint-Simon, Auguste Comte, Lorenz von Stein, Schleiermacher, Schlegel, and Marx and Engels, Koselleck declares: “That the crisis in which one currently finds oneself could be the last, great, and unique decision, after which history would look entirely different in the future—this semantic option is taken up more and more frequently the less the absolute end of history is believed to be approaching with the Last Judgment. To this extent, it is a question of recasting a theological principle of belief. It is expected of world-immanent history itself” (2002, 243, my emphasis; and see 2002, 243–44; 2006, 370–97).
What is expected of history? With the temporalization of the Last Judgment, history, in its immanence, becomes a problem of meaning. If the emergence of crisis as a historical concept occluded practices of prophecy in favor of practices of prognosis, as Koselleck argues, this raises the issue of the burden of proof for meaning in history, and for the meaning, or significance, of history itself. Koselleck comments on this burden of proof, invoking Schiller’s influential dictum: “World history is the judgment of the world.”22
This model is compatible with fate, which in Herodotus appears behind all individual histories and which can be read again and again as the consummation of a world-immanent justice. However, Schiller’s dictum raises a greater claim. An inherent justice, one which acquires almost a magical air, is not only required of individual histories but of all world history in toto. Logically, every injustice, every incommensurability, every unatoned crime, every senselessness and uselessness is apodictically excluded. Thus the burden of proof for the meaning of this history increases enormously. It is no longer historians who, because of their better knowledge, believe themselves to be able to morally judge the past ex post facto, but rather it is assumed that history, as an acting subject, enforces justice. (2002, 241; 2006, 371)23
Through the invocation of the term “crisis” as a historically unique transition phase, which would mark an epoch, historical experience is likewise generalized as a logical recurrence—the historian is the judge of events. And yet history itself is posited as serving the ultimate form of judgment, a judgment we take to be effected, retrospectively, through acts and errors. (Tellingly and perhaps evocatively, Schiller’s dictum originated in a love poem he composed about a missed opportunity.) But knowledge about that past—glorious consummation or disgraced failure—distinguishes the possible, open future, which is a problem. Judging time (sorting change from stasis, perceiving intervals) and judging history (diagnosing demise or improvement, defining winners and losers) is a matter of prognosis. And such prognosis depends upon the stabilization of “a single concept limited to the present with which to capture a new era that may have various temporal beginnings and whose unknown future seems to give free scope to all sorts of wishes and anxieties, fears and hope” (Koselleck 2006, 372).
The very notion that one could judge historical time (that it presents itself to us as an objective entity to be judged) and that history is defined by a teleology of justice (that there are winners and losers, errors and victories) conjures an extraordinarily self-conscious mode of being. The emergence of this particular form of historical self-consciousness is the motivating spirit of Reinhart Koselleck’s overall intellectual project; but it is most notably the subject of his remarkable first book, Critique and Crisis, in which he presents a conceptual history of the mutual constitution of those cognates: critique and crisis.24 His aim is to illustrate that this historical self-consciousness is related to what he defines as a specifically “modern” attitude toward politics.25 Koselleck is fundamentally interested in what constitutes politics, as is reflected in his accounting of the emergence of historical consciousness in terms of contestation over concepts.
Koselleck apprehends this contestation through two key terms: morality and politics. He puts forth the counterintuitive argument that over the course of the eighteenth century, a novel distinction was formulated between the respective categories of morality and politics, which allowed for what he terms the “exclusion of morality from politics” and what he takes to be the Enlightenment style of critique. His lengthy rendering of the positions and debates that ran the course of at least a century both assembles and differentiates various celebrated figures: Barclay, d’Aubigné, Hobbes, Locke, Simon, Bayle, Schiller, Voltaire, Diderot, the Illuminati, the Encyclopédistes, Turgot, Rousseau, Raynal, Paine. He intends to relate the now familiar story of the emergence of a bourgeois public (an account most frequently attributed to Jürgen Habermas), or to account for the emergent “self-understanding” of this public as a distinct realm that constitutes “society” and is invested with Natural Law, thus marking off a self-proclaimed “moral society” from politics.26 The disassociation between political and moral authority that he describes is generally assumed to be an actual foundational moment in Western intellectual history, marking off what has been called “The Great Separation,” or “The Crisis” that allegedly marks off new time—that of secular history (cf., for example, Lilla 2007).27 This much-critiqued “triumphalist history of the secular” (Asad 1993, 25) assumes the effective disassociation between the religious and the political realms, between the juridical and the religious, or between political and moral authority.28
Thus this apparent contradiction in terms, this “anti-political politics” put forth by Koselleck as the basis for critique, is contingent upon acceptance of the notion that morality marks out a pre-political or non-political realm: “The internal and fundamental precondition for anti-state activity, namely the moral distancing from politics, becomes transformed into the ostensibly non-political basis of the fight against Absolutism” (Koselleck 1988, 96). This notional separation between morality (conscience) and politics (the state) has consequences for manners of positing social change, which come to be understood as transpiring through changes in moral positions, or via rational persuasion and the telos of reason, and thus from outside the institutions of the state. Thus, by way of illustration, Koselleck describes the conversion of the Masonic lodges and the Republic of Letters from “enclaves of internal exile” in the realm of the absolutist state to “centres of moral authority” in eighteenth-century France. In the political transformation of these moral societies, claims to “political legitimacy [grow] out of moral innocence” (Koselleck 1988, 95)29—a statement about politics that rings as a truism to our twenty-first-century ears, perhaps most recently in Barack Obama’s inauguration speech.
Of course, as I have signaled already, one can put many questions to Koselleck’s historiography. Does he not assume both the efficacy and historical adequacy of “the Enlightenment” as a political project? Does he not assume a premodern versus modern distinction, which could be undermined via alternative periodizations, categorizations, and narratives? Does he not make use of particular personae as reductive examples of a style of thought? Does he not portray the distinction between morality and politics in absolute terms, which is a fallacy? Doesn’t his conceptual history partake of a teleological understanding of historical development? And does he not affirm a misleading—and even Orientalist—divide between modern historical consciousness, on the one hand, and a theological Middle Ages incapable of history, on the other?30
Bedrich Loewenstein (1976), for example, takes issue with Koselleck’s depiction of an “absolute contrast” between morality and politics, and raises the question of the historical adequacy of the Enlightenment project. I agree entirely with his point that Koselleck does not question the historical adequacy of the Enlightenment as a political project. However, the norms of reason and humanity still serve as principles of justification for politics, or for a form of political utopia, today, as Loewenstein himself acknowledges.31 More significantly, although in a related manner, Koselleck’s brand of transition narrative is grounded in a fundamental period division between a theologically determined Middle Ages, which supposedly does not apprehend historical temporality, on the one hand, and a modern historical consciousness, which is allegedly determined by the apprehension of historical temporality, on the other (Davis 2008). The presumption or substantiation of this divide between the religious Middle Ages (the religious past) and secular modernity (the secular present) maps onto the geopolitical divide between a timeless elsewhere and a fully historical present (Said 1979; Fabian 1983; Asad 1998, 2003; Anidjar 2006; and especially Davis 2008).32
These reasons to take issue with Koselleck’s narration of the emergence of historical consciousness gesture to questions we can also ask about the status of the concept of crisis in historical narration—most obviously, its role as an iterative, periodizing concept. But we can likewise note that Koselleck’s preoccupation is not fundamentally empirical; he seeks to thematize emergence, or the prospect of radical newness (Neuzeit) and the concomitant problem of political legitimacy.33 If we do not take Koselleck’s account to be authoritative history—if not taken as a truth correspondence theory of history, but rather as orthodox historiography that is indicative of the practice of the concept of crisis—we see how Koselleck’s illustration sheds light on the various fault lines that have given rise to a form of political utopianism based on a discrepancy between immorality and innocence, or between what is thought to be contrived and what is taken as natural.
To better grasp the claim that political legitimacy issues from moral innocence, it is worth summarizing, albeit schematically, Koselleck’s presentation of debates that raged over the course of the seventeenth and eighteenth centuries about the nature of natural law and moral law. For Koselleck, the early European debate over natural law and moral law centers on the seemingly unrelated questions of the subordination of action to conscience, on the one hand, and the permanent state of religious warfare, on the other. Hobbes in particular addresses these two subjects, arguing that, regardless of the good intentions of the various Christian communities, their respective claims to exclusiveness and the “subjective plurality” of the “authority of conscience” inevitably lead to civil war. Appeals to conscience, taken as private, subjective opinion, are the source of strife; they do not guarantee peace even when that opinion is motivated by the highest good of peace. For Hobbes, as is well known, peace requires a sovereign who is not subject to private conscience. Koselleck takes pains to underscore Hobbes’s historical starting point: civil war. The call for the state was a call for the end of civil wars, not a call for progress; a rationally derived destiny was not man’s historical destiny because, for Hobbes, history was the history of sectarian warfare.34 Peace was not equated with progress; rather, the concern for peace was the context in which faith in moral progress developed.35
Koselleck’s contention is that while Hobbes insists upon the necessity of politics for peace, he sets the distinction between morality and politics at the heart of absolutism.36 The substance or content of the law does not establish its legality; instead, only formal legality itself is the rational basis for political obedience and peace, making political morality a matter of obeying raison d’état, or the strict validity of formal law. The move from the realm of private beliefs to public authority obtains through the social contract. Koselleck (1988, 31) comments, “The public interest, about which the sovereign alone has the right to decide, no longer lies in the jurisdiction of conscience. Conscience, which becomes alienated from the State, turns into private morality.”37 This notion of a private inner space as the natural site for the formulation of critique—the all-too-familiar moral or ethical subject—is what he refers to as the “apolitical politics” of the Enlightenment.
Koselleck moves forward in fairly linear fashion to note that, when the very possibility of differentiating between civil and private religion is questioned, most significantly, for instance, by Locke, the then-novel notion of “civil conscience” is given value.38 Subjective opinion, described as having the character of laws (moral law, as opposed to divine or civil law), becomes the standard for judgment of both oneself and others. Citizens, not sovereigns, establish moral law via their judgments; and the legality of such law is likewise a matter of pure judgment.39 Contrary to Hobbes, for whom there is a fundamental distinction between the private and public realms, Locke proposes that the public realm is constituted by the private realm. What is at stake is not the substance of the law, which was dictated by God and nature, but the validity of the law, which was subject to evaluation through reason and thus to judgment by bourgeois society. “Moral turns of mind are . . . interpreted by [Locke] in their social function—but not in order to deduce the State, as Hobbes had done, but to turn them into constant acts of judgement by the rising society” (Koselleck 1988, 57, my emphasis).
Koselleck traces the politicization of these debates, especially through the Masonic lodges and Bayle’s Republic of Letters in France, to show how they involved utopian ideologies. For Bayle, the seventeenth-century skeptic known for his theorizations of the egalitarian Republic of Letters, reason, as the basis for judgment, was distinct from theology and revelation. He argued, for instance, that textual analysis entailed the systematic and persistent demand for reasons such that errors could be exposed or revealed, which of course implies that what is established on the grounds of reason is always subject to falsification as an error in judgment. Falsification, as a function of reason, constitutes an ever-present horizon, such that the very reliability of reason is constantly assailed, making reason by and large a matter of faith.40
The triumph of reason through the pure authority of private verdicts over both politics and the state entails a notion of historical progress that is necessarily a form of moral progress posing the ultimate challenge of emancipation. Self-rule, as an ethical principle, is generalized as a public, political demand, based on the assumption that “inner freedom” is realized in the external world. This principle amounts to the plainly incongruous demand for “a complete and total liberation of human beings from human rule” (Koselleck 2002, 250). What would be the burden of proof for such a demand? Koselleck notes that this burden of proof, as produced through reason, would have to be free of logical self-contradiction. By the end of the eighteenth century, the grounds for such proof had shifted from natural law to the historical future: “The transformation from personal rule into rational custodianship may be empirically demonstrated: such an expected, contested, and anticipated liberation of human beings from human subordination, in other words, their redemption within history or the negation of alienation” (2002, my emphasis). This European challenge, he argues, became a world historical challenge.41
Koselleck’s general point (doubtless evident to my contemporaries) is that political utopianism entails a philosophy of history: the morally just and rational planning of history coincides in a hoped-for future, and the achievement of that future requires an interpretation of the relationship of the present to the past. He asks, as noted in the previous section of this chapter, what history itself might be, if it is established from the distance of time. And he replies that it is a matter of a moral demand for a difference between the past and the future (Koselleck 1988, 98–137; 2002, 110–44). If history comes to be defined as “where time itself occurs,” or “in the relationship between past and future, which always constitutes an elusive present,” the correlation between experience and expectation is necessarily transformed. Koselleck (2002, 111–12) clarifies this transformation via two problematic but generally accepted examples of historical modes of prognosis: “Until the early modern period, it was a general principle derived from experience that the future could bring nothing fundamentally new. Until the expected end of the world, sinful human beings (as seen from a Christian perspective) would not change; until then, the nature of man (as seen from a humanist perspective) would remain the same. For that reason, it was possible to issue prognoses, because the factors of human action or the naturally possible forms of government (as seen from an Aristotelian viewpoint) remained fundamentally the same.” However, he concludes, “a prognosis that in principle expects the same as what has always been possible so far is no real prognosis at all.” Koselleck (112–13) notes that Kant, as an exemplar, “assumes that the future will be different from the past because it is supposed to be different” and that this expectation is ultimately “a moral demand for a difference between past and future.”
Regardless of whether or not this is a veritable empirical accounting of the past—whether or not the so-called pre- and early moderns truly expected nothing whatsoever from the future—this notion of a radical temporal distinction that engenders and is even necessary to transformation is a fundamental postulate for both historiography and the notion of effective critique.42 This demand for a temporal difference can be described in terms of a notion of progress as a moral task; and it is based on an alleged discrepancy between scientific or technological progress, on the one hand, and the moral positioning of human beings, on the other; or between honorable social emancipation and suspicious economic or political technologies. Morality must respond and constantly adapt to the exigencies of knowledge; it is posited as always insufficient or inadequate.43 This discrepancy between morality and knowledge is taken to be an aporia, and it is signified by the term "crisis." And this signification refers to the formal, or logical, possibility of crisis, as found in the thought of Aristotle, Hegel, and Marx. In the expectation of temporal difference, it implies or entails an ethical imperative, be that explicit or not.44
For Koselleck, political utopianism as a philosophy of history—or the positing of a transcendent that accommodates the idea that humanity can devise its own destiny—actually produces "crisis," and it does so in two ways.45 First, it is a philosophy of history that allows one to posit the very possibility of a "break" with the past. His central thesis is that, with the French Revolution, the conviction that conclusions about the past are necessary to an understanding of the future is challenged by the idea that the future is to be apprehended as indiscernible. The Revolution thus represents "the crisis of the Enlightenment," or a new mode of consciousness of history as crisis.46 Second, and equally novel, is the practical mode of social action that this historical consciousness entails: one can act "on" history to transform it, which, for Koselleck, denotes a distinctly modern way of postulating the relationship between theory and practice.47
The concept of critique, as understood by the end of the eighteenth century—that is, not as criticism of the state or of political policy, but as a judgment of the validity of institutions and concepts themselves—informs this manner of understanding the relationship between theory and practice. As a universal standard of judgment, through the exercise of reason to resolve historical contingency, critique engendered what Koselleck sees as a form of "hypocrisy" because the depiction of political crisis as the logical outcome of historical progress obscures the contingent political significance of such critique.48 Hypocrisy lies in the simultaneous "critique of all power" and yet the desire to achieve "the power to judge all mankind" (Edwards 2006, 438). In that sense, perpetual critique—of oneself via moral conscience and of the world against a standard of reason—is coterminous with a perpetual state of crisis. Critique makes the future "a maelstrom," says Koselleck (1988, 109), astutely. "If criticism is the ostensible resting point of human thought, then thought becomes a restless exercise in movement" (108). In other words, the constant quest to authenticate the supreme authority of reason transpires through the perpetual process of critique, which entails infinite renewal and is based on the idea of duty toward the future and motivated by faith in the yet-to-be-discovered truth.
To summarize, in his demonstration of the mutual constitution of the cognate concepts, critique and crisis, Koselleck apprehends the Enlightenment not as a sociopolitical organization but rather as an ethos that formed around key concepts, such as "state," "society," "politics," "morality." This formation depended fundamentally on the temporalization of history, for which the concept of crisis was crucial. For Koselleck, by the eighteenth century, "crisis" denoted a freestanding, primarily historical concept. Its emergence as such was concurrent with the gradual establishment of "history" as a discipline—or with the practice of political and social history as the diagnosis of time. Effectively, Reinhart Koselleck provides a remarkable inquiry into the process of temporalization and the metaphysics of history.49
But then one can ask, whose history? and what history? Koselleck's account clearly serves the self-narration or self-constitution of "modernity" as (sovereign) History. And, alongside the question of "other people's" histories, or other histories, is the issue of taking myriad ontic possibilities into account, of displacing humanist or human-centric history. As Kathleen Davis maintains regarding such narration and its attendant practice of periodization, "the paradox of a self-constituting modernity is folded into the cut of periodization itself, and the 'modern' can emerge as unproblematically sovereign" (2008, 87). This is an iteration of the claim I make herein about the term "crisis." Similarly to Kathleen Davis, I take Koselleck's work to be "both an example of and a factor in critical theory's difficulty with addressing, and sometimes even recognizing, events that defy preconceived concepts of religion, secularism, democracy, and politics" (2008, 87, and 5–6). However, not being preoccupied with so-called modernity or the problem of secularization and its constitutive terms, I do not dwell on the concepts of religion, democracy, and politics. Instead, I raise a question that is crucially related to her characterization of Koselleck's method and narrative, though with my sights trained on the term "crisis": what counts as an event? I likewise take up the dilemma of critical theory, though only insofar as "critique" is the cognate of crisis, to which I turn below. As we shall see, and as Koselleck has made clear, crisis invokes a moral demand for a difference between the past and the future. Critical historical consciousness—or the specific, historical way of knowing the world as "history"—discerns historical significance in terms of ethical failures: what went wrong?50
the test
With reason as our judge, we are consumed with the problem of establishing the validity of claims to social or political critique, which makes both moral righteousness and faith in deliverance the uncertain terms of our historical self-consciousness. Of course, the grounds for human progress have been an object of suspicion for several centuries. Historical narratives produced by "Enlightenment rationalists" themselves displayed the form of irony associated with a self-conscious, self-critical awareness and an ethics of skepticism (see Burke 1969; White 1973).51 And by the end of the nineteenth century, despite faith in technological progress, the search for general causes in history, or a philosophy of history, was deemed by many a forsaken enterprise. The very notion of historical progress and pretensions to a science of history generally were abandoned, especially after the scourge of the First World War.52 But what is obscured in denunciations of the notion of historical progress and the disavowal of noncontingent grounds for judgment is the way in which the temporal understanding of action and history, or theory and practice, remains contingent upon the concept of crisis. The concept of crisis is bound to its cognate "critique" and is established, as a concept, through the very widespread but strange idea that history could be alienated in terms of its philosophy—that is, that one could perceive a dissonance between historical events and representations of those events.
One might suppose that contemporary modes of critique take into
account the problem of assuming a dissonance between history and a philosophy of history, or even simply between history and morality. Since the time of the differentiation of reason as a category of history, ostensibly initiated during the eighteenth century, reason itself has been posited as a problem: reason cannot claim a position from which to transcend history, or an Archimedean point of observation and validation; it is a wholly contingent mode of observation and yet it is taken to be the ultimate means to overcoming the condition of contingency. The centrality of this dilemma is evident through centuries of debate over the possibility of a legitimate or valid place of criticism, of the critic, and of critique. Criticism operates through judgment, making the critic somehow distant or even apparently autonomous from social practice. Or, in denial of judgment as a feasible or defensible operation, critique is taken to be praxis, or a form of practice. Debates about these positions run through the legendary work of scholars associated with the Institute for Social Research, or the Frankfurt School, who sought to secure the grounds for immanent critique, or non-foundationalist grounds for political action. Ultimately, whether judgment or praxis, immanent critique takes as its object the discrepancy between how things are, on the one hand, and how they ought to be or how they could be, on the other (see, for example, Adorno 1973; Horkheimer 1974; Habermas 1984–87).53
Habermas (1984–87, 1987, 1982), for instance, maintains that, without a theory of history, there can be no immanent critique. In order to secure the normative foundations of critique, where norms are universally valid and binding statements, one must elaborate normatively justified goals—how things ought to be. Because the contingency of immanence is paralyzing, it must be transcended via formal structures of rationality, which can be related to historically or culturally situated (linguistic) reason.54 For Habermas, the very critique of reason, which demonstrates that there are no epistemological or philosophical foundations for securing rationality beyond its contingent or partial manifestation, is itself a rational critique, or "performative contradiction" arising from self-referentiality. Thus the problem of self-grounding, or the legitimation of theories in terms of the very distinctions (e.g., rational versus irrational) that permit their
elaboration, leads to infinite regress. This dilemma of self-grounding and legitimation is what Habermas takes to be the "crisis of modernity." In keeping with a long-standing tradition of critical theory, or analyses of the generalization of instrumental reason and modes of domination inherent to late capitalism, his scholarship is defined by the problem of meaning ("lost meaning") and alienation. The object of critique is a distorted pattern of historical development. The crisis of modernity signifies the discrepancy between the world and what the world ought to be, as well as the problem of securing the grounds for such critique.
And this disquiet over the grounds for knowledge of this discrepancy persists in the work of contemporary scholars, navigating in the wake of critical theory and the unshakable, impossible question, what is critique? In reply to that very question, Judith Butler, in a reflection on Michel Foucault's writing, says: "Critique is always a critique of some instituted practice, discourse, episteme, institution, and it loses its character the moment in which it is abstracted from its operation and made to stand alone as a purely generalizable practice. But if this is true, this does not mean that no generalizations are possible or that, indeed, we are mired in particularisms. On the contrary, we tread here in an area of constrained generality, one which broaches the philosophical, but must, if it is to remain critical, remain at a distance from that very achievement" (2002, 212).55 For Butler—following Foucault's own grappling with the unshakable, impossible "What is critique?" (the title of an essay he delivered in 1978, himself revisiting the labors of Kant's 1784 text, "Was ist Aufklärung?" ["What Is Enlightenment?"])56—critique entails suspension of judgment; and it "offers a new practice of values based on that very suspension" (2002, 212). This suspension of judgment means that "the primary task of critique will not be to evaluate whether its objects—social conditions, practices, forms of knowledge, power, and discourse—are good or bad, valued highly or demeaned, but to bring into relief the very framework of evaluation itself" (2002, 214, and cf. Foucault 1997a, 59). A critical relation to norms inheres in a reflexive relationship to modes of categorization and the forms of rationality that organize and give sense and significance to practice (Foucault 1997b, 315–17). Thus, although this practice of critique elides the judgment between the way the world is, on the one hand, and the way the world ought to be, on the other, it reiterates a Kantian conception of critique as an operation for ascertaining and revealing the limits of reason—though this is achieved not through a transcendental deduction, but by seeking the limits of knowledge (Foucault 1997b, 315).57 But why seek the limits of knowledge? Not just for pure transgression, as Butler explains: "One does not drive to the limits for a thrill experience, or because limits are dangerous and sexy, or because it brings us into a titillating proximity with evil. One asks about the limits of ways of knowing because one has already run up against a crisis within the epistemological field in which one lives" (215, my emphasis). This is a "tear in the fabric of our epistemological web," the "discursive impasse from which the necessity and urgency of critique emerges" (215)—Koselleck's rupture, the a priori of the event, "a means for a future" (Foucault 1997a, 42).
So when the grounds for critical reason are deserted for the even more unstable lands of partial and local truths, crisis is not solved. To the contrary, the concept of crisis becomes a prime mover in poststructuralist thought: while truth cannot be secured, it is nonetheless performed in moments of crisis, when the grounds for truth claims are supposedly made bare and the limits of intelligibility are potentially subverted or transgressed. Thus, for example, epistemological crisis is defined by Judith Butler (tautologically?) as a "crisis over what constitutes the limits of intelligibility" (1993, 138). Many academic authors, including myself (Roitman 2005), take crisis to be the starting point for narration.58 Inspired by the work of Foucault, we assume that if we start with the disciplinary concepts or techniques that allow us to think ourselves as subjects—that enable us to tell the truth about ourselves—then limits to ways of knowing necessarily entail epistemological crises.59 For Butler, then, subject formation transpires through crisis: that is, crisis, or the disclosure of epistemological limits, occasions critique, and potentially gives rise to counter-normativities that speak the unspeakable (1999, 2004, 307–8; and see Boland 2007). For Foucault, crisis signifies a discursive impasse and the potential for a new form of historical subject.
For both, crisis is productive; it is the means to transgress and is necessary for change or transformation. In keeping with this, because reason has no end other than itself, the decisive duty of critique is essentially to produce crisis—to engage in the permanent critique of one's self "through a practice of knowledge that is foreign to one's own." To be in critical relation to normative life is a form of ethics and a virtue (Foucault 1985, 1997a, 1997b, 303–19). In the words of Simon Critchley, who sees crisis as necessary for politics, or for producing a "critical consciousness of the present," philosophy would have no purpose in a world without crisis: "the real crisis would be a situation where crisis was not recognized" (1999, 12). If the grounds for truth are necessarily contingent or partial, and if philosophy thus has no intrinsic object, its authority only possibly emerges as such in moments of crisis, which are defined as the "time when philosophy happens."60
Meaning, significance, and truth are of course problems—it seems that they constitute our condition of crisis and are addressed by reflection on the possibility for critique.61 But this category of crisis, so integral to the production of new forms and the very intelligibility of the subject, is never problematized despite its cognate and historical-semantic relationship to critique. Apparently, for scholars past and present, attention to the problem of the grounds for critique has eclipsed the seemingly less imperative question of the grounds for positing crisis. The imperative to critique or even to sustain a critical relation to normativity ironically risks ontologizing the category of crisis. This is curious: Why should crisis, as a category, be so self-evident? How is it that the grounds for critique became the defining problem of epistemology while the grounds for thinking the human condition in terms of crisis did not?62 Although I cannot answer such a broad question, it is worth noting that its effects are with us today. Indeed, even for those who renounce the possibility and duty of critique, crisis is self-evident. Thus the very first section of Bruno Latour's influential book We Have Never Been Modern (1993) is entitled "Crisis," referring to "the crisis of the critical stance" but never problematizing the very grounds for the concept of crisis. One might conjecture: if modernity has never obtained, then crisis has not either.
Unable to establish the noncontingent grounds from which to claim critique, truth is necessarily immanent and critique is consigned to the constant unveiling of latencies.63 The latter have been characterized in terms of invisible relations, sediments of tradition, false consciousness, ideologies, subjectivization, naturalized categories, or normalization. Even when the criterion for truth is no longer defined in terms of the logic of noncontradiction, or internal consistency, critique is thought to occur through paradox: through the purging of contradiction and paradox; through the commitment to obstinately demonstrate the paradox of power, or the necessary exclusions (the Other, non-sovereigns) that expose the foundations of power to be contingent suppositions; or through the confidence that paradox is a manifestation of conditions of crisis, and hence for critique and transformation, thus seemingly resolving or at least provisionally settling conditions of paradox . . .
If by paradox we mean "a permissible and meaningful statement that leads nonetheless to antinomies or undecidability (or, more strictly, a demonstrable proposition that has such consequences)" (Luhmann 2002, 142), then an ample conceptual history or, better, genealogy of the concept of crisis would account for the antinomies, or how crisis has come to be a manner of signifying such a state of affairs.64 A paradox that is said to be an antinomy "produces a self-contradiction by accepted ways of reasoning. It establishes that some tacit and trusted pattern of reasoning must be made explicit and henceforward be avoided or revised" (Quine 1966, 5). This kind of paradox "brings on the crises in thought" (5). And such crises are seen to be the bases for critique. When faced with two equally valid or persuasive propositions, which are irreducible the one to the other, critique is elaborated in the disjuncture between "is" and "ought" or between "is" and "could be."65 This disjuncture denotes the formal possibility of crisis: the contradiction that drives dialectical methods typical to social science narrative (Marx and Hegel being the obvious examples) as well as the dichotomies (subject/object, theory/practice, validity/value, intelligible/empirical, transcendence/immanence) that are at the foundation of all social theory and social science narration.66 The paradox at hand is best described as that established between the invariance of logical truth and the constitutive mutability of our experience of that truth. And because we can only observe or differentiate—that is, produce these very dichotomies—from within immanence, we effectively assume a negative occupation of the immanent world (Fuchs 1989, 24, cited by Rasch 2000, 109; and see Luhmann [1992] 1998; Deleuze and Guattari 1996, 35–60).67
This self-referential or nonempirical moment of empirical knowledge (rejected by Quine, the empiricist) does not deny the existence of an external world or order of being, nor does it indicate pure self-reference or the inaccessibility of that world/order: "There is an external world . . . but we have no direct contact with it. Without knowing, cognition could not reach the external world. In other words, knowing is only a self-referential process. Knowledge can only know itself, although it can—as if out of the corner of its eye—determine that this is only possible if there is more than only cognition. Cognition deals with an external world that remains unknown and has to, as a result, come to see that it cannot see what it cannot see" (Luhmann 1990a, 64–65).68 Without delving unnecessarily into Luhmann's theory of knowledge, it is important to note that his approach is situated "beyond realism and constructivism": "Concepts are not empirical theories and, hence, cannot be true or false. They constitute the language in which we formulate empirical statements (about the degree of inflation) and substantive theories that are either causal (about the causes and effects of inflation), or functional in nature (about the functions of inflation, i.e., about the problems solved by inflation). Conceptual frameworks constitute . . . the non-empirical (intensional, grammatical or self-referential) aspect of empirical theories" (Christis 2001, 345).69 Concepts originate not in the structure of experience of a Kantian transcendental subject, or in the knowing subject, but rather in observing systems, or empirical "epistemic subjects," which are interfaces between biological, psychic, or social systems and environments.70 There is no transcendental point of departure for Luhmann; there are only self-referential systems.
Without a correspondence between knowledge and the external world or order of things, Luhmann posits the concept of observation as a means of designation via distinctions. A first observation (money) requires a prior distinction (money/not money), which can only be the object of a second-order observation. As Luhmann has demonstrated consistently, this is not a matter of empirical observations, but rather a matter of logical observations, which are distinctions and which are meaning-constituting. Crisis is just one distinction. It is a postulate that brings a descriptive situation "under conceptual control" (Luhmann 2002, 38).71 This basic operation allows some thing to become distinct and hence intelligible—the observation gives rise to a form, and thus meaning (Luhmann [1992] 1998, 46–55). Significantly, the concept of observation or distinction does not proceed from binaries or oppositions. For example, my claim is that it cannot be the case that there is crisis/noncrisis, which then permits observations. Rather, "crisis" is a distinction that transcends oppositions between knowledge and experience, or subject and object; it is a distinction that generates meaning precisely because it contains its own self-reference.72 As Luhmann says, "What can be distinguished by means of these distinctions will become 'information'" (1990b, 131). That is to say, from my point of view, the term "crisis" establishes second-order observation; it is not an object of first-order observation.73 Crisis is a means to externalize self-reference. This external reference for judgment in a necessarily self-referential system—or a distinction that generates and refers to an "inviolate level" of order (not crisis)—is seen to be contingent (historical crises) and yet is likewise posited as beyond the play of contingency, being a logical necessity that is affirmed in paradox (the formal possibility of crisis).74
Without doing justice to the depths of Luhmann's work, suffice it to underscore the point that, in a world that is posited as an immanent field of observations, one is necessarily in a self-referential system, which is unavoidably paradoxical (Luhmann 1990b, 123–43; 1995, 56–57; 1998; 2002, 130–33; Deleuze 1994). In other words, if we take ourselves to be without a position from which to observe society in its totality, there can be no universal principles, but only self-referential principles, which are paradoxical.75 What Luhmann communicates to us is that such self-referential paradoxes are unavoidable, and there is no cause to prohibit or avoid them. Habitually posited as a logical contradiction, paradox is a foundational sign for an order without an origin. This means, as he says, that "all knowledge and all action have to be founded on paradoxes and not on principles; on the self-referential unity of the positive and the negative—that is, on an ontologically unqualifiable world" (Luhmann 2002, 101; see also 86–87 and 142–43). Without recognition of these conditions of paradox, standards for evaluating social conditions produce descriptions and judgments in terms of pathology—that is, as deficient but not as merely paradoxical (cf. Luhmann 1990b, 136–37).76
Crisis is a blind spot that enables the production of knowledge.77 It is a distinction that, perhaps at least since the late eighteenth century, and like all latencies, is not seen as simply paradox, but rather as an error or deformation—a discrepancy between the world and knowledge of the world. But if we take crisis to be a blind spot, or a distinction, which makes certain things visible and others invisible, it is merely an a priori. Crisis is claimed, but it remains a latency; it is never itself explained because it is necessarily further reduced to other elements, such as capitalism, economy, neoliberalism, finance, politics, culture, subjectivity.78 In that sense, crisis is not a condition to be observed (loss of meaning, alienation, faulty knowledge); it is an observation that produces meaning. More precisely, it is a distinction that secures "a world" for observation or, in Obama's terms, it secures the grounds for witnessing and testing.
How do people maintain deeply held moral identities in a seemingly immoral social environment? Cultural sociologists and social psychologists have focused on how individuals cope with contexts that make acting on moral motivations difficult by building supportive networks and embedding themselves in communities of like-minded people. In this article, however, the author argues that actors can achieve a moral "sense of one's place" through a habitus that leverages the material dimensions of place itself. In particular, he shows how one community of radical environmental activists makes affirming moral identities centered on living "naturally" seem like "second nature," even in a seemingly unnatural and immoral urban environment, by reconfiguring their physical world. The author shows how nonhuman objects serve as proofs of moral labor, markers of moral boundaries, and reminders of moral values, playing both a facilitating and constraining role in moral life.
INTRODUCTION
How do people for whom living "morally" is a key part of their identity leverage the apparent moral challenges posed by their environment to sustain a sense of moral selfhood? The relationship between moral values, action, and social context is a long-standing area of inquiry for social psychologists (see Blasi 1980; Hardy and Carlo 2005; Lizardo and Strand 2010), but it also bears heavily on a range of sociological analyses. Members of impoverished inner-city minorities (Duneier 1999; Anderson 2003; Liebow 2003) or the working class (Lamont 2000; Sayer 2005) frequently confront the low status afforded to them by society by asserting their moral worth. Employees in nonprofit hospitals or hospices must balance a commitment to health care as a social right with pressure to economize on or limit treatment (Livne 2014; Reich 2014). Political activists, too, must weigh wanting to change the world with living in a social milieu where most do not share their worldviews. This article shows how such actors may make living morally seem like "second nature" by drawing on the material world.2

I approach morality as a set of individual or collective beliefs that specify the kinds of persons or actions that are "good" or "right"—evaluations that apply to actors across different situations and over time (Tavory 2011, p. 273). This definition sets up the central problematic of this article: how actors, in the face of situations that appear to make living up to the range of their motivations to act morally difficult, nonetheless achieve a sense that maintaining a moral identity is a habitual, relatively unproblematic, and sustainable second nature. A long line of thinking within cultural sociology, frequently drawing on Durkheim, has focused on how an individual sense of living morally is facilitated by group life.4 This article argues, however, that the material characteristics of place can provide resources for sustaining a sense of moral selfhood. In particular, I show how nonhuman objects can serve as proofs of the substance and significance of moral labor, markers of boundaries that distinguish moral actors from those they perceive as less moral, and totemic reminders of moral commitments. At the same time, these material proofs, reminders, and markers add a dimension of unpredictability to moral life that actors must manage (see Latour 2005). In short, I argue that one way individuals can achieve what Bourdieu (1990, p. 295) might call a moral "sense of one's place"—a degree of comfort with the possibilities and limits of living up to a moral identity that nonetheless allows for the ongoing development of a moral identity through the creation of new moral practices—is through a habitus that constructs and is constructed in dialogue with material objects.

I develop this argument through an ethnographic study of the moral lives of "freegans" in New York City. Freeganism is a small emerging movement within radical environmentalism in the United States and Western Europe whose participants attempt to dramatize the unsustainability and excesses of mass consumerism by claiming to minimize their participation in the capitalist economy and living off its waste (Edwards and Mercer 2007; Gross 2009).
Freegans are best known for publicly "dumpster diving" and redistributing discarded but edible food from supermarkets, but freegan practices also include gardening in abandoned lots; creating and repairing bicycles, clothing, or furniture from discarded materials; foraging for wild food in urban parks; and limiting paid employment in favor of full-time activism. Freegans are ideologically heterogeneous: some describe themselves as anarchists while others evince a more reformist critique of capitalism's excesses. Nonetheless, nearly all frame their activism as centered on a deep, moral motivation to live more "naturally." Jeff, a tall, muscular white freegan in his mid-20s with a degree in filmmaking, explained: "My vision is that eventually we live in a world where we don't have any of this modern technology. Live with the land, on the land, and everything we get comes from nature. Civilization is fundamentally, inherently crazy and unsustainable, and eventually it exhausts itself. I think we can be mature, responsible beings, but still be wild animals. That's what other animals on the planet do, why should we be any different?" Jeff's description of freeganism harkened to the "back-to-the-land" communalism that flourished in the 1960s, except in one obvious respect: Jeff, and the other freegans studied in this article, all live in New York City. In fact, Jeff continued to work at a job he said he loathed in order to make monthly rent payments to a landlord he claimed was exploitative, so he could live in a city he characterized as a "black hole sucking up the resources of the planet."

Yet the apparently problematic human environment of the city was nonetheless necessary for freegans' practices, such as publicly displaying and politicizing wasted food. What is more, I argue that the very adversity of the city, when combined with the physical resources the freegans made out of their environment, allowed freegans to carve out a sense of moral place in the city. For all his rhetoric, there was an evident comfort and familiarity in the way Jeff navigated the streets of Brooklyn on the bike he built from abandoned parts, combed the curb looking for useful waste, and cultivated a garden amid slabs of broken concrete behind a local community center. Jeff's everyday habitus belied this discursive clash between the ideals of living "naturally" and the reality of life in a city. In fact, living naturally in the city seemed like second nature thanks to one of the city's apparently most problematic features: waste.

I begin this article with a review of recent literature on morality, which has emphasized interaction and group life as sustaining moral identities and motivating moral action. I then reconsider Durkheim's later work on totemism and Bourdieu's work on practical action, supported by more recent work in cultural sociology, to reemphasize the role of the material world in moral life. I theorize how nonhumans can serve as proofs of moral labor, markers of moral boundaries, and reminders of moral values. I then explore freegans' contradictory relationship to urban life, showing how freegans make living naturally central to their identities yet live in a city that appears to make doing so difficult. I then demonstrate how freegans invert the seeming "problems" posed by the city, turning it into a place in which morality can seem second nature, through engagement with the physical world.
Nonhuman proofs, markers, and reminders are not just props or conduits for the construction of moral selves, but active players that both enable and constrain moral action, findings I reflect on in the conclusion.

THE MATERIALITY OF MORALITY

Moral Identities, Motivations, and the Habitus

Theories of the relationship between moral values, moral action, and social context have undergone several paradigm shifts within post-Parsonian sociology (see Lizardo and Strand 2010). "Tool kit theorists" recognize the frequent divergence between what people say and do and thus reject the notion that a coherent moral worldview shapes action (Swidler 1986, 2001; DiMaggio 1997). Instead, individual action is patterned by an external scaffolding of cultural codes, roles, and institutions from which individuals draw in a situational, ad hoc fashion. This approach to the relationship between values and actions presents "morality" as, foremost, justifications for actions undertaken for potentially nonmoral reasons (Lamont 1992, 2000; Boltanski and Thévenot 2006). From the perspective of tool kit theory, asking how individuals act in ways they see as "moral" in social contexts that make doing so difficult does not really make sense. If "moral responsibilities are not fixed, but are improvised" (Sanghera, Ablezova, and Botoeva 2011, p. 169; see also Brown 2009; Turowetz and Maynard 2010), the problem becomes one of situational impression management rather than bringing action into accordance with some inner moral "core" (see Goffman 1959). Actors might need to explicate gaps between beliefs and actions but feel little need to close those gaps themselves in the name of some stable moral sense of self.

More recent work within social psychology and sociology, however, has asserted a more systematic relationship between moral beliefs and actions. As Vaisey (2009) argues, deeply internalized, but not necessarily verbalized, moral worldviews may "motivate" action across time and across social contexts. From this perspective, the "problem" of maintaining a moral sense of self becomes more comprehensible: actors carry relatively constant moral motivations but confront environments that vary in the degree to which they facilitate acting on them. Even if individuals can live with contradictory moral commitments, struggle to articulate what those commitments are, and hold them alongside nonmoral desires, the ability to act on moral beliefs can nonetheless be an important source of personal "ontological security" (Giddens 2009, p. 50).

While these two visions of moral action appear incompatible, social psychologists have partly bridged them by suggesting that the relationship between moral norms and action may depend on "moral identities" (Blasi 1980; Monroe 2001; Hardy and Carlo 2005; Reynolds and Ceranic 2007). Nearly everyone sees himself or herself as a "moral" person and thus feels some need to account for his or her actions in terms of shared moral codes. At the same time, the degree to which acting morally is central to the conception of the self—and, as such, plays a motivating role—is variable (Monroe 2001; Aquino et al. 2009; Stets and Carter 2012). Disparities between motivations and actions might be primarily a concern for individuals with a high degree of moral identity—such as, I will show, the freegans—for whom not being able to act morally is injurious to the sense of self (Burke and Stets 2009, pp. 69, 80).
How do individuals with a high level of moral identity interact with the world around them in practice? Vaisey (2009) observes that to constantly reevaluate one's lifestyle vis-à-vis moral values would be "cognitively overwhelming." Instead, as ethnographers in the Bourdieusian tradition have argued, becoming a "moral" actor with a "moral" identity entails the development of a "moral habitus," a "thoroughly embodied and practical form of moral subjectivity" (Winchester 2008, p. 1755; see also Ignatow 2009; Abramson and Modzelewski 2011). This moral habitus is more deliberately cultivated and less deeply ingrained than the primary habitus but nonetheless serves as a powerful subjective and behavioral force (Wacquant 2014, p. 6). Although Bourdieu himself was skeptical that moral norms were the basis for action (for a critique, see Sayer [2005]), this extension of habitus captures important points that have appeared elsewhere in the sociological literature on morality. Moral beliefs and identities are not just prior to moral action but are constructed in a dialectical fashion through action, creating a sense of one's moral place relative to the surrounding social structure (Winchester 2008, p. 1755). Moral assumptions and beliefs are often intuitive and embodied rather than discursively articulated (Sayer 2005, pp. 42–43; Abend 2014, pp. 30, 55). And even as morality can constrain individual action, it can be generative of new practices (Joas 2000, pp. 14, 66).

When the everyday moral habitus and the actor's position in social space are aligned, actors are like a "fish in water" that "does not feel the weight of the water, and takes the world about itself for granted" (Wacquant and Bourdieu 1992, p. 127). In such situations, following the motivating impulses of one's moral identity becomes "second nature," something "experienced as non-problematic—expected, understood, [and] navigable" (Martin 2000, p. 197). This happens not just through occasional situations when actors can make themselves feel they are "moral enough" but through the ongoing dialectic of everyday habitus and social environment.

Group Life and a Moral Sense of Place

Bourdieu's (1984, 1990; Wacquant and Bourdieu 1992) work usually emphasized the homology between the mental structures of the habitus and the "rules of the game" in the surrounding field. Nonetheless, it is clear that the specific moral habitus and the avenues of action open to it are not always congruent (Sayer 2005, pp. 26, 44). To be a committed Muslim in a Christian country or an animal rights activist at an event catered for meat eaters entails adjustments to a pure enactment of moral motivations. What is the consequence of these situations? Bourdieu suggested that one result could be "hysteresis"—a habitus ill adapted to action in a particular social environment (Bourdieu 1990, p. 62; Lizardo and Strand 2010, p. 221).5 But while Bourdieu is often read as describing a habitus that stems from and thus reproduces the outside world (see Sallaz and Zavisca 2007, p. 25; Wacquant 2014, p. 5), Bourdieu (1990, p. 61) himself points out that the social world and the habitus are constructed together. Agents can generate contexts in which, even as a fish out of water in the wider society, their moral habitus can align with its social milieu.
For example, Vaisey and Lizardo (2010) show how actors "prune" their social networks to increase interactions with others who share their moral worldviews.6 Participants in deviant communities, for example, often differentiate themselves on the basis of moral criteria of personal or collective worth, which almost by definition put them "out of place" in society (Becker 1963; Goffman 1963; Moon 2012). Subcultural participants can sustain their opposition to conventional norms partly through group life, which provides "free spaces" and rituals that reinforce identities and motivations and create contexts for acting on them (Fischer 1975; Fine and Kleinman 1979).7 Recent work has more explicitly argued that the appeal of subcultures stems from the simultaneous development of an individual moral habitus and the structures, rules, and rituals of deviant group life (Wacquant 2004; Abramson and Modzelewski 2011).

5 This is similar to "identity theorists'" suggestion that an "unverifiable" identity is liable to be replaced (Burke and Stets 2009, p. 80): frustrated freegans, e.g., reverting to their identity as middle-class urban denizens or more moderate political agents.

6 Identity theory, as cited above, makes a similar point about how actors search out situations in which salient identities are likely to be confirmed (Burke and Stets 2009, p. 73).

These conclusions are consistent with a long line of sociological thinking on morality. Drawing on Durkheim's (1997) analysis of suicide, for example, sociologists of religion and health have focused on how the presence of social ties facilitates individual moral worth, meaning, and self-preservation (Idler and Kasl 1992; Maimon and Kuhl 2008; Wray, Colen, and Pescosolido 2011). Offering one canonical reading of Durkheim's analysis, Bellah (1973, p. xliii) concludes that "it is the very intensity of group interaction itself that produces social ideas and ideals and . . . it is from the warmth of group life that they become compelling and attractive to individuals." In addition to providing "warmth" through social integration, groups also exert regulation, shaping and constraining the ability of actors to diverge from their moral motivations or abandon their moral identities (Durkheim 1997).

This literature thus offers a clear prediction that can be brought to bear on empirical material. If freegans have achieved any sense of living morally as second nature, it likely stems from having created groups or interactional spaces that bring their moral habitus in line with the social environment. This is not the same as saying that group life is purely harmonious, only that it affords individuals the opportunity to act out moral motivations in ways that affirm moral identities. As noted in the introduction, however, I posit another, material route to finding a moral sense of place.

Materiality and Moral Second Nature

Durkheim's thinking about morality evolved over the course of his life (see Abend 2008). Although he maintained that "society . . . is the source and the end of morality" (Durkheim 1953, p. 59), in Elementary Forms he explored more circuitous connections between individual and group moral life. In fact, although morality is derived from society, its power stems from the fact that it is perceived as extrasocial, coming from "something greater than us" (1965, p. 257). Along the same lines, in Suicide, he insisted that "man cannot live without attachment to some object which transcends and survives him" (1997, p. 210).
Hence, we invariably see morality as originating not in society but in external entities, such as gods, or abstract concepts, like "nature" (Durkheim 1953, p. 79).8

7 The same point has been made for social movements (Hirsch 1990; Polletta 1999; O'Hearn 2009).

8 Durkheim's argument in Elementary Forms for "primitive" societies is analogous to his argument about "advanced" ones, in which the moral regulation of society must come from an entity outside of it: the state (see also Durkheim 1957).

It is from this interplay of the social and nonsocial in moral life that Durkheim's conception of totems originates. Actors make totems out of the desire to represent the impersonal social forces that they see as acting on them. Thus, although totems are "the source of the moral life of the clan" (1965, p. 219), they are nonetheless always, in part, tied to something outside the clan, such as wild animals (p. 87). Far from being simple outgrowths of moral life, totems exert moral influence over individuals, as evidenced by prohibitions on eating animals of the totemic species. Consequently, the religious forces Durkheim describes are "physical as well as human, moral as well as material" (p. 254).

Subsequent work provides a further basis for considering the material world in moral life. Drawing on Lévi-Strauss's (1962) critique of Durkheim, Jerolmack (2013, p. 14) shows that the animals and plants drawn on in totemic religion are not just "good to think with" but enable qualitatively different ways of thinking, perceiving, and classifying the social world. The implication is that the objects coded as "moral" are not just arbitrarily pulled from the environment but instead are selected on the basis of moral beliefs and reworked through moral practices. In fact, groups in a "moral minority," like the freegans, may indeed be pushed to draw on items that are not coded as moral by the dominant group—such as, I show later, waste.

A central contribution of this article is that relationships between humans and the material world may not just enhance or contribute to the confirmation of moral identities in group life but may actually themselves become the basis of an individual's moral sense of place. The notion of a practical reworking of the material environment is an important element of habitus (Lizardo and Strand 2010, p. 211), but I break from Bourdieu's (1990, pp. 71, 76, 273) assumption of a three-way homology between the subjective habitus and the objective social and material world. Instead, an actor whose moral habitus is out of sync with the behavioral expectations and patterns of the social environment may nonetheless be like a "fish in water" with respect to his or her ongoing reordering of physical space or material milieu. At the same time, linking the dialogical relationship between habitus and environment to developments elsewhere in sociology, I insist that objects are not just passive props in a social morality play. Instead, as Latour (2005, pp. 10, 74) argues, objects may actually do things that social actors cannot and can transform rather than simply transmit the meanings that humans attribute to them.9 I focus on three distinctive roles that objects can—and, as hinted at by the existing literature, do—play in moral life: (1) proofs of moral labor, (2) markers of moral boundaries, and (3) reminders of moral commitments.
9 Although I agree with Latour that objects "make a difference," I make no claims to the existence of a "flat" world in which objects live moral lives or are intentional or reflexive in the same way as humans (see Jerolmack and Tavory 2014).

Moral proofs.—Recent work in the symbolic interactionist tradition has shown how behaviors toward nonhumans can reflect, anticipate, and even prompt human action (Tavory 2010). Jerolmack (2013, chap. 5; Jerolmack and Tavory 2014), for example, explores how urban pigeon handlers' relationships with birds can serve to foster new human connections. Yet even if we accept the Durkheimian notion that the roots of moral values themselves always reside in social life, this does not mean that all moral action is directed toward or made with reference to other human beings. Pigeon handlers—like an animal shelter employee or art conservator—may very well have moral identities founded on their relationships with the birds themselves. I draw on the study by Boltanski and Thévenot (2006, p. 131), who argue that moral justifications must be buttressed by moral "proofs," which in turn are "based on objects that are external to persons."10 But, once again, morality is not just about proving that we are moral to others. Actors with strong moral identities in social worlds that make acting on moral motivations difficult must also find ways of proving their morality to themselves. In this respect, having tangible, physical evidence of moral action can be a crucial confirmation of the depth of moral commitments, even while other actions or objects can contradict them.

Moral markers.—The drawing of boundaries between groups and individuals graded on a hierarchy of moral worth is a key aspect of moral life (Lamont 1992, 2000; Edgell, Gerteis, and Hartmann 2006; Sherman 2006; Sanghera et al. 2011). But what makes the "symbolic" boundaries of morality "real"? Cultural sociologists have argued that symbolic meanings are stabilized and transmitted through physical carriers (Mukerji 1994; Molotch 2003; McDonnell 2010). Indeed, a range of research has suggested that physical objects can make boundaries a more consistent presence in social life than discursive expressions.11 I show how freegans use material identifiers to distinguish themselves from others, even when placed in social situations (like jobs) in which acting on the moral motivation to live "naturally" seems difficult. Yet precisely because of their material presence, objects can also invoke moral boundaries when human actors do not intend to do so (see Tavory 2010). The "wrong" object—like a Wal-Mart bag carried into a "fair-trade" shop (Brown 2009, p. 872)—can highlight discrepancies between moral values and action to both external audiences and actors themselves.

10 Identity theory also considers "resources"—physical objects alongside social relationships—as crucial for "identity verification" (Burke and Stets 2009, chap. 5). However, these authors quite explicitly move away from viewing material resources as distinctive from social ones in their functions.

11 The role of physical markers in constructing boundaries has been shown in studies on subcultures (Hebdige 1979, p. 78), class differentiation (Goffman 1959, p. 36; Bourdieu 1984, p. 184), or religion (Winchester 2008, p. 1770; Tavory 2010).

Moral reminders.—As Durkheim (1965, p. 391) noted,
society "cannot be assembled all the time." Totems serve to remind the individual of his or her moral motivations even when that individual is outside of the social context from which those motivations originated. We might predict that, in a modern city, where individuals move rapidly between different groups and locations (see Simmel [1903] 1971), such "totems," far from being primitive holdovers, might actually become more important in sustaining moral identities. Indeed, Jerolmack and Tavory (2014, p. 73) argue that "everyday totemism" reaches far beyond religious life. Interactions with even "mundane" nonhumans such as pigeons (or more obviously signifying ones, such as flags or clothing) can allow humans to connect with social groups "in absentia." Once again, though, we should go beyond simply seeing objects as a proxy for social ties, or what Durkheim (1997) described as "social integration." Objects can also step into the other role Durkheim envisioned for the group: moral "regulation," one of "monitoring, oversight, and guidance" (Wray et al. 2011, p. 508). As I show, nonhuman objects can forcefully remind freegans of their moral identities, even when they are outside the freegan group itself, and in contexts in which freegans might prefer to set them aside temporarily.

DIVING IN: METHODOLOGY

I elaborate my theoretical argument about the relationship between moral identities, moral motivations, the habitus, and physical objects with an empirical study of how freegans rework their material environment. This study is based on nearly two years of ethnographic fieldwork with the activist organization freegan.info in New York City, between 2007 and 2009. Over this time, I attended scores of freegan.info events: "trash tours" (publicly announced dumpster dives open to newcomers and media), wild food foraging expeditions in city parks, collaborative sewing "skillshares," "feasts" held in freegans' homes, and monthly organizing meetings. As time went on, I began to spend time in the freegan bike workshop and freegan "office"—really, a nook in the cluttered, windowless basement of a converted warehouse in Brooklyn—which led to more interactions outside of formal group events. In spring 2009, I conducted 20 interviews, which constituted nearly a complete census of active members of the freegan.info group who self-identified as "freegan."12 I also analyzed several thousand e-mails from freegan.info's "freeganworld" listserve (which has over a thousand subscribers), giving me a better sense of freegan ideology and practice across contexts. In 2012, I returned to New York and conducted follow-up interviews and observations.

12 I defined "active" group members as people who attended freegan events over a period of at least three months. Only two such individuals declined to be interviewed.

Fieldwork initially focused on freegans' public, performative claims-making. The centrality of nature to freegans' moral worldviews, and their discursive critiques of urban life, emerged through the course of observation. These findings led me to ask whether and how freegans carved out a moral place in a city they frequently characterized as immoral. As time went on, I attempted to test emerging hypotheses derived from theory through fieldwork, a process of theoretical reconstruction congruent with the extended case method (Burawoy 1998). As Tavory (2011, p. 289) observes,
As Tavory (2011, p. 289) observes, "the less the environment is built to cater to a specific category of people, the more moral situations would arise in these people's lives." I thus view freegans—with the apparently gaping chasm between their articulated moral identities and the reality of the urban environment—as a strategic research site for examining in accentuated form how living in an adverse context can actually become the basis of a moral sense of place.

A crucial objective of participant observation was getting past the ad hoc reasons freegans offered for their behavior to identify any underlying motivations, which Vaisey (2009) argues are best identified with forced-choice survey questions. But his argument assumes that sociologists must ultimately rely on some kind of verbalized representation to study moral beliefs and behaviors. Using participant observation, however, we can actually see patterns of behavior and identify trends that reveal the underlying motivations behind them by "sampling" across a range of situations and moments in time (Jerolmack and Khan 2014). Ethnography thus is a valuable technique for studying morality "in the wild" (Hitlin and Vaisey 2010, p. 11), as actors deal with practical moral conundrums and conflicts.

Nonetheless, the concern with the materiality of morality adopted here poses problems for ethnographic examination. The value of observing moral action rather than asking about it stems from the notion that meanings are made "between individuals" rather than "by or within individuals" (Jerolmack and Khan 2014, p. 200). However, I assert that moral motivations are also acted out between individuals and nonhuman objects. By definition, though, any situation I could access involved at least two humans: the observer and the observed. I adopted three strategies to evaluate if, how, and why freegans' actions were directed at objects. First, I looked for the unintended material "traces" (Latour 2005, p. 193) of freegan practices. Second, by quite literally "getting my hands dirty" at freegan events—by, for example, eating discarded food—I gradually gained access to the more unguarded and candid moments of freegans' lives. Finally, I also began to engage in freegan practices on my own, including subsisting almost entirely on discarded food for a six-month period. Through embodying freegan morality myself, I more fully understood how everyday relationships to the physical world could help sustain a sense of moral place in an apparently adverse context.

FREEGAN MORAL IDENTITY AND THE CONTRADICTIONS OF URBAN LIFE

Consistent with the definitions of morality cited above, freegans invoked deeply rooted, cross-situational and cross-temporal identities founded on the "right" or "wrong" way to live to explain their involvement in freeganism. David, a bearded white male in his early 30s, claimed that—despite growing up in a conventional, middle-class home—"[I] always felt like I had to minimize my impact and live as nonviolently as possible. I've basically always been an anarchist." Prior to discovering freeganism, three-quarters of freegan.info participants reported their primary activist involvement as animal rights, a movement whose participants are overwhelmingly motivated by moral beliefs (Jasper and Nelkin 1992).
Most moved beyond veganism when they realized the moral limits of a vegan diet: continued support for environmentally destructive agriculture or poor working conditions in the food industry.

While freegans' worldviews were undoubtedly shaped by their early involvement in other social movements and activist networks, freegans nonetheless experienced their motivations to act morally as a permanent, intrinsic part of their identities. As Jeff articulated, "I was always radical. Sometimes it was latent, sometimes it wasn't encouraged, sometimes it was covered up by other things. But I was always radical." My own observations of freegan.info participants during an extended period of time (over five years) suggested that living morally, for them, was not just a temporary project. Instead, as one put it, "Realizing what you believe and trying to live that is very complicated and something that a lot of people—especially myself—are going to spend the rest of our lives trying to figure out." Whether or not they still identified as freegans, when I returned in 2012, all of the reinterviewed informants offered examples of how trying to live morally continued to structure their lives.

More than just rhetoric, freegans' moral beliefs were built into their everyday practices, or habitus. David began dumpster diving when he realized that even organic farming killed small mammals and insects. Although I could not verify his claim not to have bought food for 13 years, I never saw him acquire food any other way than "dumpstering." At various times, I also observed him spending hours searching for and dismantling mousetraps, meticulously picking live flies off of wax paper, and berating other residents of his shared house for poisoning bedbugs. Madison quit a job with a six-figure salary and sold a luxurious Manhattan apartment after having her "mind blown" at a freegan.info event. Perhaps most notably, even though it was, as one freegan put it, "horrifying and disgusting" to others, most freegans regularly recovered and ate wasted food because they perceived purchasing food as morally anathema. Freegans, then, appeared to be individuals with strong moral identities who made their capacity to act on their moral motivations a core and enduring part of their sense of self.

But what did living morally actually mean? The definition of freeganism on freegan.info's website is a sprawling list of virtues and vices: "Freegans embrace community, generosity, social concern, freedom, cooperation, and sharing in opposition to a society based on materialism, moral apathy, competition, conformity, and greed" (http://freegan.info). But in interviews as well as in public events that explained freeganism to those unfamiliar with the movement, freegans frequently focused on a moral imperative to live more "naturally." As Benjamin—a freegan activist in his mid-20s who lived in a squatted building in Brooklyn—explained, "We're just so disconnected from it [nature]. One of the goals [of freeganism] is just connecting with each other and connecting with the rest of life on earth, connecting with the earth itself." Freegans evoked humanity's fall from grace, central to Judeo-Christian moral narratives, and made nature central to the story.
As Evie, a speech pathologist in a public hospital and lifetime Palestinian liberation activist, articulated during one meeting, "There was a point where human beings stepped out of nature and decided to control nature," and it was at this point that the seeds for a whole range of social ills—mass consumption, exploitation of animals and humans, and ecological devastation—were planted. Nature provided both a centerpiece of freegan discourse and a guiding principle for new freegan.info projects. Proposing that the group start an urban garden, Guadalupe noted, "My ideal is a little different than just having a mini-farm. I'm very interested in letting the plants that just naturally grow in the area do their thing and even help them grow. This includes 'weeds.' I don't really believe in the concept of an undesirable plant. I believe in biodiversity."

That freegans would evoke nature as central to their moral worldviews was unsurprising. The power of nature as moral principle has a long history in strands of American culture (Nash 1973; Cronon 1996), often in opposition to very different framings of "the good" in terms of consumption, competition, or free-market capitalism. Unsurprisingly, sociologists have shown that beliefs about nature—for example, freegans' claim that living naturally meant not consuming animals—are culturally and temporally variable social constructs (Greider and Garkovich 1994; Freudenburg, Frickel, and Gramling 1995; Fourcade 2011). Yet, paradoxically, research also shows that the appeal of nature as a framework for determining right and wrong stems from the popular belief that nature is free from social influence (Bell 1994, p. 7; Jerolmack 2013, pp. 134–35). This was precisely the sense in which freegans used nature: to refer to something that was immutable, primordial, and uncontestable, a moral concept "outside of us" in the Durkheimian sense.

Freegans' discursive commitment to living naturally and the reality of freegan.info as a group based in a city built by humans would thus seem to be in direct contradiction. Indeed, when I asked freegans in interviews about their views of urban life, they often repeated a familiar American cultural trope that valorizes the natural aspects of rural life and demonizes the city (see Hummon 1990). One freegan characterized the city as an "evil haven of decadence and debauchery"; another described it as "incredibly psychologically destructive" because it separated residents from natural spaces. Ryan speculated that "rates of depression are so high in America because we're in a city, and we aren't in some heavily forested area being spontaneous and finding wild asparagus."

On a more practical level, aspects of the urban environment made conforming to some elements of the officially articulated freegan identity difficult. According to the movement's informal manifesto Why Freegan? and the homepage of freegan.info, freegans engaged in a "total boycott" of the capitalist economic system, meeting as many needs as possible outside the market. For example, freegans claimed they could live for "free" by squatting illegally in abandoned (or "wasted") buildings. Yet in New York, property values are high enough that abandoned buildings are rarely left unoccupied for long, and the police actively search for and remove unlawful occupants.
As such, even though "true freegans don't pay rent," as one told me, the reality was that nearly all of them did. Some had eliminated rent payments, but only by buying a home outright. The result was an admission that, for all their political and moral commitments, there were many parts of freegans' urban lives that they could not control. As one freegan told me, "there are so many things I see that I can't change. I can't change the way the building I live in operates. I know that if I lived elsewhere, I would do things completely differently."

A similar apparent divergence between articulated values and avenues for moral action existed with respect to employment. One of the pamphlets that freegan.info passed out during events on public sidewalks averred, "Freegans are able to greatly reduce or altogether eliminate the need to constantly be employed." But nearly all freegans maintained conventional, waged occupations, because needing money was an "unpleasant reality" as long as they lived in a city. While some had found employment in activist organizations or nonprofits, others worked in more clearly problematic fields like product design. As Evie, herself a homeowner, admitted, "I'm freegan in lots of little things in my life. But at the end of the day, I am paying taxes and funding a couple of enormous wars, and sort of everything bad that's going on in the world." Reflecting on the divergence between freegans' moral ideals and the exigencies of urban life, one freegan sighed, "Manhattan is one giant contradiction."

CLAIMING CITY CONTRADICTION, FINDING URBAN NATURE

How did freegans respond to this apparent disparity? When confronted with the gap between the two, most freegans offered to others what might be framed as a moral justification: they stayed in the city because it was an efficacious site for their activism. At one "freeganism 101" event, a newcomer asked Jeff why he hadn't moved to the countryside. He replied, "Setting up a commune out in the country would be good for me, but I don't know how that would be for the overall resistance. I definitely want to get out of the city eventually . . . [but] there's a lot of work that needs to be done in all different places . . . and lots of it needs to happen here, and not in the country." In truth, it is difficult to imagine freegans' political tactics outside of an urban context. Cities concentrate retailers in a small geographic area, allowing freegans to organize public, performative dumpster dives for passersby and the media and to recover a wide range of goods relatively easily.13 In a sense, what looks like moral contradiction is thus inherent in freeganism: the movement depends on the unnatural urban environment in order to protest the economic and social system that the city symbolizes, all in the name of living more naturally.

If we view "morality" through the lens of tool kit theory, freegans' explanation for the gap between beliefs and practice, and the various ways they deploy that reasoning in interaction, could be the focal point of analysis. Or we could view freegans as satisfied with reaching a certain, suboptimal threshold of acting morally (see Gigerenzer 2010). I have argued, however, that the notion of moral habitus implies that action creates a more ongoing sense that living out a moral identity is "second nature." Did this process of finding a moral place in the city happen through group life—that is, social dynamics within freegan.info?
Certainly, as Durkheim would suggest, freegan.info provided a space where freegans could freely discuss, develop, and reinforce moral beliefs that would otherwise struggle to find a hearing. During feasts, for example, freegans held freewheeling debates about elements of their natural ethos, such as whether humans were "primordial vegetarians," if they should return to agriculture or revert to foraging, or whether human beings should voluntarily go extinct.

Yet, despite the sharing of freegan skills I describe below, freegan.info was less successful in helping freegans act on their moral motivations. Freegan meetings were often filled with announcements that one or another practice or product had turned out to be environmentally destructive or exploitative, leading to a new escalation of what moral living entailed with little sense of how to achieve it. In 2008, Rob and Leslie, two core freegan.info activists, attempted to extend freegan principles and address concerns that freegan.info was facilitating an insufficient range of anticapitalist or ecoconscious practices by founding a collective household for anarchists in Brooklyn. The space, "Surrealestate," hosted a bike workshop, community meals with dumpster-dived food, and activist fund-raisers. Yet even that space charged rent, which led many in the group to reject the very notion that it was "freegan." Others alleged that the project constituted first-wave gentrification. This acrimony was emblematic of a frequently evoked "basic lack of trust" within freegan.info, which I saw play out in strident arguments during freegan.info meetings. The overall sense, as one person told me, was that "there's no real freegan community."

13. In five years of monitoring the freegan world e-mail list, I have not encountered a single mention of rural freeganism. The one academic account I can find of rural freegans notes that they frequently go into a nearby city in order to dumpster dive (Gross 2009, p. 61).

While the presence of conflict certainly does not invalidate the possibility of a social group providing a moral sense of place—Durkheim, after all, never claimed that groups had to be harmonious—other evidence also suggests otherwise. In interviews, many freegans claimed that "true" freegans engaged in practices like dumpster diving "on their own"—not just at freegan.info events. As one explained, "Freegan.info is just a side project to the real thing, which is being freegan itself." Some freegans even experienced the group as a barrier to living morally: in 2008, Guadalupe, a Latina mother from a low-income background, announced that she would be "stepping back" because she had spent so much time with the group that she had been unable to dumpster dive enough to support her family and thus was buying food—a situation she saw as morally untenable. When I returned in 2012, freegan.info had collapsed under the weight of internal discord, yet most freegans described how they continued to deepen their understanding of what was required to maintain a moral identity and thus faced the same challenges in affirming that identity—albeit without any support from the group itself.

What is more, while at its height freegan.info met only a few times a month, clashes between freegans' moral habitus and the social environment of the city were frequent.
In a society where many social situations involve buying something—from a beer to a movie ticket—being a freegan meant either profound isolation or constant violations of freegan principles. As Barbara told me, "You can sit in a room of five or ten people, and they're talking about bargains and sales and 'Where'd you buy that?' and what the latest technology is, and you can really feel like you don't want to participate at all, or that you have to guard it [your freeganism]." Benjamin elaborated how the ideology behind his freeganism fed into a feeling of alienation and disaffection: "I always stand around in a room full of people and think, 'Oh my God, no one is an anticapitalist here.' I feel so alone, I feel so out of place. . . . It's so lonely. It's depressing as hell to live here [in New York]." Others reported an involuntarily shrinking social network as nonfreegans were pushed away from them and few new freegans appeared to fill the holes.

Despite their deeply rooted moral identities and the barriers that social existence in the city posed to acting on them, though, freegans still insisted they were living morally. Perhaps more importantly, many debates about abstract principles did not translate into anxiety in day-to-day life, suggesting that freegans were not among those actors who "churn through their moral narratives in their internal conversations almost obsessively" (Sayer 2005, p. 29). In their daily lives, both within and outside freegan.info, freegans showed few signs of a Bourdieusian "hysteresis," suggesting that their habitus and environment were, in a sense, aligned. As I show, though, the environment they were aligning with may not have been primarily a social one.

Freegans could rarely articulate how they managed to find a moral sense of place in the city, except that it had something to do with nature and the city itself. As one told me, "Freeganism . . . it's a way of downscaling the city somehow. It tells me, 'Okay, I can live small here.'" Rob, a tall freegan with a shock of curly red hair, speculated, "Within the city, nature is a park, a tree, or a bug. Or maybe it's noises or creepy things or shadows. That's nature to me. Freeganism is a way of relating to nature in the city. It lets things happen organically. Everyone is part of the equation. It ends up being just, sort of, magic. People are like nature and there are all sorts of varieties and uniquenesses in any situation." As I argue in the rest of this article, freegans made a city seemingly full of contradictions into a "common-sense world" (Bourdieu 1990, p. 58), within which living naturally was second nature, by practically appropriating and reconfiguring their material world into moral proofs, markers, and reminders.

MAKING THE CITY "SECOND NATURE"

Moral Proofs and Natural Resources

Freegans' "wild food foraging tours" through city parks were, in large part, neither political nor practical. Foraging events lacked the performative critique of capitalism that made freegan trash tours appealing to the media. They were not particularly helpful in allowing freegans to survive "outside" capitalism either. Ryan, an experienced forager, got only a fraction of his calories from it; Guadalupe remarked that dumpsters have "tastier food." Yet whenever Ryan announced his willingness to lead a tour, the group was invariably enthusiastic and turnout high.
The appeal, I argue, stemmed from the way tours functioned as a kind of "nature work" (Fine 1998, p. 4), a directed process of relating to the physical environment that enabled freegans to see the city as providing natural resources that functioned as tangible proofs of their efforts.

On one tour along the northwest edge of Manhattan, a visiting activist from California commented how, to his surprise, the plants the group was finding were identical to those he found in his home state, despite vast differences in climate. Ryan replied, "There's lots of biodiversity in the rainforest, but there's unique species here [in the city] too." Both presented the city as an ecosystem, replete with its own species, flows of resources, and cycles of food availability. Wild food foraging tours were not just a way of imagining the city as a natural ecosystem, but of treating it as such through concrete and material—yet, as the notion of habitus suggests, simultaneously also symbolically and morally laden—practices. As Ryan admonished the group, "Here you see a bunch of ostrich ferns growing in a clump together. If you know to only pick half of them, they'll grow back. But pick all of them, and it dies." At another point, Ryan's guidance more directly touched on a key moral motivation for freegans—finding value in waste. Motioning to a downed tree, he observed, "Lots of things that look like waste aren't waste when you look a little closer." He took us to the other side and revealed edible mushrooms growing on it, which freegans then picked—in moderation.

Expeditions to find burdock root and edible flowers were not the only moments in which freegans approached the city as a natural resource base that furnished proofs of their ability to live naturally. They also did so with respect to human-made urban waste. Of course, despite a social scientific trope of waste as "urban metabolism" (Heynen, Kaika, and Swyngedouw 2006), there is nothing superficially natural about New York's vast waste disposal apparatus. Indeed, in their public events, freegans often went to great lengths to emphasize the highly unnatural social processes that created waste. As one freegan explained to a group of 15 newcomers on a trash tour, "It's not individuals, it's the system [that produces waste]. The stores are trying to extract surplus value, to borrow a Marxist term. But our system ends up with a huge amount of waste and unrecognized costs."

While in their deliberate, planned events waste served as a symbol of all that was wrong with the city, in everyday practice waste became a fixed aspect of the physical environment. One weekend, I joined Benjamin and Lucie, two young freegans, for a free art festival on Governor's Island. We had been discussing the recent closure of the Occupy Wall Street encampment, and I commented that the island had large tracts of open space that could be occupied. Benjamin replied, pensively, "Yeah, but what would you eat? You'd have to go into the city to dumpster [dive], and there are only ferries on the weekend." Lucie laughed, "You remember that food comes from places other than dumpsters, right? You could farm it." "Oh right," Benjamin replied, "I forgot." In effect, the social origins of food waste had receded to the background in a moral habitus that drew its power from treating waste as a natural resource.

The availability of garbage depends on the vicissitudes of store employees and sanitation workers, yet for self-described "urban foragers" like freegans, it was nature that provided the waste.
Noted one freegan, "The difference between foraging and agriculture is trying to control nature, versus preparing yourself to respond to whatever nature throws at you." Although waste in New York is so abundant that freegans could easily eat only prepared food or only organic produce if they wanted, freegans nonetheless often "rescued" unappealing items and turned them into food. One autumn evening, the group uncovered dozens of ears of dried, ornamental corn. When one newcomer moved to put them back in the garbage—assuming they were inedible—Madison snatched them. The next week, she returned having transformed them into hominy: a time-consuming and impractical move, but one that affirmed a moral identity that, as she put it, allowed her to make use of "whatever nature throws at you." While freegan political activities were a direct challenge to urban social institutions, freegan nature-work transformed the environment in more subtle ways, through developing a habitus that would allow freegans to partly subsist on precisely what their nature-work on the city made available.

While freegans' self-description as "urban foragers" and their labeling of waste as a "natural" resource might seem strained, these discourses were tied to concrete practices. One freegan observed how the often unreflective, ingrained habits of a dumpster diver paralleled those she envisioned foragers—the reference point for her moral motivations—as having: "When you go dumpster diving . . . you do things in the natural way. It's like . . . going in the forest to find food. . . . You need to explore, first, to find good spots. Then you need to really work for your food: it's harder, you need to open bags, to search, to climb into a dumpster. . . . It's always surprising. You don't know what you're going to find. It makes it more natural. It's like going back to the time when people would go into natural spaces to get food." For her, dumpster diving was "natural"—and, therefore, also in her eyes, moral—precisely because it required effort. It was precisely the adversity of place that allowed her to have a "sense of her place" that she could envision as analogous to life in a forest.

As the quote suggested, even as freegans imbued the urban waste stream with moral meaning, the physical characteristics of the waste stream itself required ongoing readjustments. This was particularly evident with respect to the way the rhythms of the urban waste disposal system structured freegans' time. While a grocery store might be open 16 hours a day, the window of time for dumpster diving is just a few hours between when stores close and garbage trucks appear. One night, I was working in the freegan office with David—who did not cook and usually ate directly from dumpsters—when he looked at his computer and declared, "It's 8:30. We can almost go dumpstering." Eating like a forager meant gathering food at the inconvenient times it was available and going without otherwise. Needless to say, frequent changes in stores' disposal practices themselves pushed freegans to reconfigure their routines to shift to new sites or new times. For freegans working normal jobs, this was not necessarily easy—which was, perhaps, part of what made it meaningful.
Marion, a woman who had been "surviving" from waste reclamation for more than five years, despite having a significant income, explained, "I try to project and say 'This is what I have, I probably won't go on this day because of the weather.' But I have to plan in advance to make sure I'm prepared. . . . It gets laborious, to stay on the street, late late at night, day after day. So I try to limit it to get what I need, at least. It can so easily turn into still [being] on the street at 1:30 in the morning. It's exhausting for me." In order to act on their conceptualization of living naturally, freegans had to conform to the rhythm of waste metabolism on a seasonal as well as daily basis. Back-to-school shopping season, for example, was one of the only times freegans could dumpster dive office supplies. Barbara—a tenured and, by her own admission, well-paid public school teacher—noted that the "only" time she could find instant oatmeal was during move-out days from college dorms. While she could certainly have bought instant oatmeal and no one in the group would know the difference, it appeared that she didn't. Instead, for two Saturdays in a row, I found her alone in the dumpsters of New York University looking for oatmeal. Her solitary efforts suggested that, insofar as urban waste functioned as a moral "proof," she herself was the primary audience.

At times, freegans' public denunciation of waste and their treatment of waste as a finite natural resource base were overtly in tension. In 2009, Ryan lamented, "There has been less waste lately. . . . No more bulk boxes with one bottle broken and the rest intact but slimy." Some speculated that the decline in waste output was a result of the economic downturn. Others, though, returned to ecological metaphors, noting that a particular "fertile" chain of stores in Murray Hill had been "overharvested" and thus become "exhausted" by the overly frequent "exploitation" of local divers. A lack of care toward the natural resource base that waste represented, then, could serve as a sign of a habitus gone awry. In a context in which physical rather than social relationships were key to affirming moral identities, these circumstances threatened freegans' "identity kit" (Goffman 1961, pp. 14–21). Some freegans even embarked on a series of collective efforts—including a futile visit to the stores' managers asking them to "give back the garbage"—to rectify the situation.

Indeed, throughout my time with the group, there was an ongoing conflict between those who wanted to call attention to waste in order to grow the movement and those who wanted to keep it hidden in order to ensure their ability to maintain themselves on the system's margins. This conflict played out in practice: while some welcomed others to join them on dives outside those scheduled by freegan.info, others would hide evidence of their activities out of fear that nonfreegan divers would discover their favorite spots. Paradoxically, the nature of freegans' resource base—and its dependence on store managers' and employees' fickle actions—meant that freegans' political actions threatened to deprive them of the very objects they used to prove to themselves that they were living naturally.
Such tensions are constitutive of moral life, insofar as we recognize that moral identities exist alongside other, nonmoral identities (Stryker and Burke 2000, p. 290), and the dispositions of the habitus are only partly coherent and integrated, owing to their construction within multiple environments (Wacquant 2014, p. 6).

Moral Markers and Human Nature

Urban waste was not just a proof of freegans' moral identities but also a way of physically differentiating freegans from both the capitalist mainstream and other animal rights or environmental activists, with whom freegans made common political cause but whom they saw as morally wanting. Speaking in front of an otherwise receptive audience—an undergraduate class on food, waste, and sustainability at NYU—David lectured about the uselessness of formal education: "We live in a profoundly deskilled society. We've been infantilized, and very few of us know how to do anything outside of our little narrow box of employment." Real skills, he observed, were those that would allow humans to survive in nature—skills that freegans were already developing. "We have false ideas about what constitutes fresh food," he noted. "A lot of food tastes better when it looks worse. But those are not the tactile and aesthetic qualities people look for when they purchase produce."

During the trash tour after the presentation, David pulled me over to a bin filled with discarded tofu, chicken, and cheese from the store's hot food salad bar. He commented, "A lot of vegans would just leave this here, but look." David plunged into the mixture and pulled out a sauce-covered white chunk and explained how to identify whether it was meat on the basis of the way it broke when crushed between the thumb and forefinger. For him, living naturally off the city's resource base—rather than unnaturally from its supermarkets—required connecting with another version of nature: human nature, embodied in corporal practice (Ignatow 2009, p. 100). Indeed, successful urban foraging required all the senses to be constantly if not always consciously attuned to the physical surroundings in a natural way, because edible items were signaled not by neon signs but by more subtle and difficult to discern hints: lumpy plastic bags or the faint smell of food.

The above example was not the only time that David used physical objects as a marker of moral distinction. At monthly "Really Really Free Markets," where freegans would gather with other scavengers and anarchists to swap surplus household items, freegan.info would often provide a buffet replete with carefully washed, aesthetically pleasing dumpster-dived food. But when I accompanied David to more mainstream animal rights conferences—where he was a frequent gadfly—he reversed the style of presentation. He would make a show of the fact that freegans' flyers were printed on the back sides of "rescued" paper—in sharp contrast to the glossy pamphlets of the Humane Society of the United States—and flaunt that the food on offer at the table was past its sell-by date, not free of genetically modified organisms or organic, and obviously from a dumpster.

These objects helped David balance a political desire to be present at the conference with a moral motivation to distance himself from mainstream veganism, which he saw as a "bourgeois ideology that worships consumption." Certainly, anyone in attendance who spoke to David would become aware of his views, but, on one level, they didn't need to because the boundary was materially manifest.
A similar duality appeared during regular trash tours, when freegans used expensive and desirable recovered items—like still-bagged organic coffee—as an "interactional hook" (Tavory 2010, p. 57). The lure of free stuff would temporarily drag passersby into freegans' political project. Yet if, when the freegans revealed the foods' origins, the others expressed disgust, these objects instantiated moral boundaries.

Although, in these instances, freegans' self-differentiation was overt, their moral boundary marking through sensory relationships to food was a more implicit part of their everyday habitus. In response to a query about food safety, Marion quipped, "I never look at the sell-by date, it's irrelevant to me. It's about the condition of the food: you smell it, you taste it, and if it's horrible, don't [eat it]." Eating safely meant cultivating knowledge of the material properties of food, knowledge that freegans claimed had been lost with urbanization: "Not knowing about food, and thinking about safety standards, that comes from living in the city. . . . If you take a yogurt, and you don't know what it is and you don't know how it's made, and all you know is the expiration date, then after the expiration date you'll throw it away. If you know how a yogurt works, you know it could be good two months after. You just taste it." Media and bystanders frequently queried whether dumpster divers ever got sick. Invariably, freegans responded that no one ever ailed from recovered food, asserting first their own knowledge of food—which set them apart from the incompetence of the ordinary consumer—and then a more general claim about the real nature of the human body. As Guadalupe told one reporter, "People in this country are a lot more freaked out about dirt than they need to be. We need a little dirt in our lives for our immune systems to be strong."

These comments were not just bluster. Freegan.info as an organization discouraged participants from eating straight in front of the camera, for fear of the media's propensity to splice together images to maximize dumpster diving's "ick factor." Outside the public eye, though, freegans would often spend hours debating politics and revolutionary strategy while eating directly from the trash bin. My own meals with freegans in their homes, as well as glances into freegans' refrigerators, suggested a striking willingness to eat over-the-hill and rotten food. In effect, these scavenged items were exemplars of how "the most mundane objects . . . [can become] a form of stigmata, tokens of a self-imposed exile" (Hebdige 1979, p. 2) from the still essentially middle-class world in which freegans lived and worked. And, in a Latourian sense, these objects occasionally "acted back" in unpredictable ways: although reticent to admit it, some freegans could recount how their embodied confidence that they were conforming to humanity's more resilient internal nature led them to eat food that left them sick for days.

Freegans' moral habitus of relating to physical objects could help maintain boundaries when the more conventional aspects of their lives threatened to erase them. From 2007 to 2009, freegan.info operated a bicycle workshop in the cramped basement space of an anarchist "infoshop" in a low-income neighborhood of Bedford-Stuyvesant.
Leslie, a college-educated "radical social ecologist" in her early 30s who was one of the shop's main volunteers, described how her first visit to the space was "exhilarating" because, for the first time in her life, she realized that she could "build and create things and figure out how to do stuff, solve problems, use tools." Rob, who had a degree in computer science from an elite private university, offered a similar assessment of how the skills he learned in the workshop—skills his classmates lacked—brought him closer to human nature. "Bike repair really got me into working with my hands," he explained, "which is, like, so critical to being a human being—to be able to manipulate your environment and physical things. You don't get that in school." For Rob, the bikes that came out of that space were materializations of freegan values. Through problem solving and careful repair, decaying discarded parts became bikes that could provide sustainable transportation for decades. But the bikes were also markers of difference. In a rapidly gentrifying neighborhood that freegans saw as full of "hipsters" riding "expensive fixed gears," freegan bikes were almost ostentatiously worn looking and ugly.

Some particularly unconventional activities for which freegans themselves had little explanation made sense as projects that developed the habitus and the physical environment in a morally affirming way. After one freegan feast in Jeff's apartment, eight of us stayed around to watch Ryan conduct a "skillshare" for the group. Ryan removed a handful of yucca leaves from his backpack and placed them on the floor. He demonstrated how to scrape the flesh off the leaves, which isolated the internal fibers. These, he explained, could be woven into rope. After half an hour, Ryan had created a drawstring for his hat, while the rest had only a few sloppy, short strands of fibers to show for their efforts. Nonetheless, the group was so enthralled by the event that, immediately after, they began discussing plans for similar training in canning and preserving fruit, sewing clothes, and making wine. The moment was one of Durkheimian (1965, p. 236) "collective effervescence," in which the social affirmation of freegans' distinctive moral identities was amplified with palpable markers.

These same objects could act as markers of moral difference in self-evidently "nonfreegan" situations. Freegans could—and did—ride their salvaged bikes to work or take dumpster-dived food to potlucks with nonfreegan friends, giving a moral tinge to otherwise problematic situations. Barbara described writing her lesson plans on the back sides of paper that she pulled from other teachers' recycling (or waste) bins, a practice she readily noted set her apart despite their shared participation in paid employment. But Barbara's "quirks" could have unintended consequences: she recounted that once, after sitting down with her dumpster-dived lunch, a colleague stood up and walked away, announcing, "I will sit here with my clean food." Here, the "waste" Barbara at other times used to draw moral boundaries evoked them when she had not intended to, providing an unintended "mold" for interaction (see Jerolmack and Tavory 2014). While for freegans objects recovered from the garbage could set them apart as moral, for nonfreegans they could invoke "pollution rules" that made them "wicked object[s] of moral reprobation" (Douglas 1966, p. 170).
Freegans could thus not seamlessly "enlist" the physical world (see Latour 2005). Indeed, the use of these objects as moral markers could give freegans a sense of place in the urban environment even as it deepened their sense of being out of place in their social milieu.

Moral Reminders in the Urban Frontier

Finally, physical objects functioned as "moral reminders" for freegans' moral motivations, including those developed or shared within the group, outside the group context. Like so many other self-identified freegans, Lola, an itinerant art student who had come to New York in the summer of 2008, claimed to see the city as the antithesis of morality, averring, "I think that the urban culture is what I'm opposed to." And, like other freegans, she also offered proof that she could turn the harshly unnatural city into a natural urban frontier. Referencing her bike, she told me, "Bicycling is such a freeing feeling. You're in direct contact with nature. The physical aspect of it is amazing. It feels to me like breaking through some kind of invisible barrier. . . . You can't fall asleep on a fixed gear [bicycle]. You can't just ignore things that are going on. You can't just look up at the stars; it's actually being in contact and being directly involved with what is happening." To Lola, nature was something with which she could be in "direct contact" in the city, found not by "look[ing] up at the stars" but by engaging with her more immediate, built environs.

Lola expressed particular pride at her fixed-gear bike: she built it herself, which to her meant that "I know every part of it and understand why and how everything works." As with becoming an "expert" on food, understanding the material properties of her bicycle was crucial to Lola's moral identity as someone living a more natural life than other urban denizens. More than that, though, her bicycle seemed to function as a personal totem, a ward keeping the immoral forces of the city at bay. In the summer of 2008, Lola spent a stint house-sitting a luxurious apartment on the Upper West Side. She invited me over, and I noted that she had crammed her bicycle into a tiny corner of her bedroom rather than leaving it elsewhere in the capacious apartment. She confided, "It felt really weird to stay here, so I brought my bike into my bedroom with me, just as a reminder." Here was a moment when the clash between values and environment threatened to make her feel quite literally out of place, until Lola reworked that place in a small but tangible way.

All freegans juggled tensions between their political ideals and everyday lives, but these contradictions were particularly acute for Ryan. Despite helping Jeff and David organize an "antitechnology" conference in 2009, Ryan had a degree in computer science and was working 40 hours a week in Connecticut programming touch-screen computers that, in his own words, "made it easier for rich people to watch TV." That he was not just an ordinary college-educated computer programmer, though, was inscribed on his person. When Ryan showed up at one freegan meeting in midsummer, he was wearing a backpack that he had built out of bicycle tire inner tubes and was clad in sandals he put together from a discarded fire hose. Attached to his backpack was a trowel he told me he used to dig up edible plants he finds on long bicycle trips, one of which brought him to some of the most remote regions of northern Canada.
He emphasized the importance of his sensuous relationship to the materials: "When I buy something I really need, I don't feel like I own it. I'm afraid to sew it, patch it up. This backpack, I can feel it. I know what's wrong with it; I know what's right with it. If something's not working, I can cut it up and make it work for me in a new way. It's all about ownership. . . . Once you make something, you can control exactly what it's going to do." When I pressed Ryan as to why these skills were so important, he demurred: "I don't know where exactly my learning is going towards." A comment he made more informally, though, was telling: "I came straight from work," motioning to his backpack and shoes, suggesting that he had worn them to his rich clients' houses. While, in such contexts, Ryan probably could not raise his "anticivilization" beliefs, his evident skill in dealing with physical objects reminded him that he was, in his own mind, more a rugged frontiersman than an urban professional.

This was not the only time I saw freegans draw on practices toward material objects to remind themselves and others of their moral commitments in moments when these self-conceptions felt threatened. One December evening, I attended a freegan feast in Madison's Brooklyn flat, which she had purchased after quitting her corporate job. I noted my surprise that Madison's building had a doorman; she replied, "I know, I didn't feel great about it either, but look at what I did with it." She then walked me around the flat showing how nearly every item of furniture had been taken "right off the street." Analogously, Barbara once confessed to me something she had been hiding from the group: that she had recently taken a flight for a vacation. "Have you ever dumpster dived a plane?" she whispered, before taking from her backpack complimentary food, napkins, and utensils she had acquired while walking past the first-class seating area. She did not show the items to others in the group. Instead, as she suggested, she recovered them because the objects themselves reminded her of an opportunity to actualize her moral motivations at an unexpected moment.

Although some uses of physical objects as reminders were deliberate, materials could call on freegans to put their environment back in its moral place when they were not intending to do so. One cold winter night, we approached the back side of a Food Emporium, where, from a distance, it was clear there was a larger than usual amount of food. As we walked up, Barbara exclaimed, "Oh my god, this is going to be outrageous." It was: the store was evidently destocking, and so large quantities of unexpired, nonperishable goods were on the sidewalk. This night's event was supposed to be a "trash trailblaze"—where the group would quickly investigate new potential spots and then move on—but the group lingered long after everyone had taken what they could carry. When I asked Madison why we stayed, she opined, "It's like an elephant graveyard. Right now, we're just mourning the food." Although it was ultimately store employees who put the waste on the curb and freegans who decided to imbue the waste with symbolic meaning, it was the wasted objects themselves that redirected freegan behavior. At other moments, these reminders had a more positive valence.
In contrast to a modern industrial food system built on standardization and predictability, freegans embraced the unscripted moments of dumpster diving, averring that "it's always unpredictable; that's part of the adventure of it!" Reflecting Fine's (1998, p. 49) conclusion that "meaningful experiences of nature must include uncertainty," I witnessed firsthand the excitement that emerged whenever there was a rare find, like a box of tempeh or a pomegranate—their unexpected appearances potent reminders that freegans were not shopping or even growing food, but doing something they saw as fundamentally more natural.

Waste could capture freegans' energy even when they were not with freegan.info. Although food is wasted at predictable places and times, other items freegans need to find in order to avoid spending money—clothes, toiletries, and appliances, to name a few—appear more stochastically. The "dumpster eye," as one described it, was at times only at the margins of freegan consciousness (see Tavory 2010, p. 56), but the right garbage could unexpectedly bring it to the forefront, breaking down barriers between when they were or were not acting on their freegan moral motivations. When I began to dumpster dive more myself, I realized that traversing the city on foot—often regardless of my intentions—took much longer than it had previously, as I zigzagged across streets in order to examine any garbage that looked remotely promising.

Some admitted that their practice of freeganism bordered on hoarding, because they felt a strong compulsion to "rescue" only marginally useful items. Observed one freegan, "In my apartment, we have all sorts of things lying around, because you never know when you're going to need to build this or fix that. You just keep everything." This ethos of "making do and getting by," many freegans claimed, harkened not just to prehistoric foragers but, more recently, to homesteaders on the American frontier. But living out these values could be taxing: "I get tired of trying to save the world," sighed Barbara, after spending an hour trying to find someone to take a shoe rack she had found on the sidewalk.

Objects demanded freegans' time and attention in other ways as well. While in the previous section I noted how building bikes from discarded parts was part of what helped freegans "mark" themselves as living more naturally, they were also a source of constant frustration. Salvaged bikes were constantly breaking down and needing new scavenged parts, which themselves would not last long. Similarly, the implacable materiality of food—namely, the fact that it perishes, and if it has been "rescued" from a dumpster, it perishes quickly—often led freegans to spend significant time paring moldy fruit, recooking and transforming old vegetables, or redistributing excess bread. Although on a purely rational level freegans knew that "rewasting" food had no additional negative environmental impact, they nonetheless exhorted themselves—often in private—to "not waste the waste." This embodied set of practices reworked freegans' world in a way they sensed as natural, yet it threatened to remind them of that world's very "unnaturalness" as these objects returned to a wasted state.
CONCLUSION: MATERIALS, NATURE, AND MORALITY

Although freeganism as a political movement is an intrinsically urban phenomenon, the social dimensions of city life—finding a place to live, working, and interacting with others—posed substantial barriers to individual freegans acting on moral motivations with which their identities were closely bound. Freegan.info as a group provided ongoing reinforcements of freegans' moral motivations—much as the Durkheim-inspired conclusions of literatures on social movement "free spaces" and subcultures would suggest—but it only infrequently provided them with a social environment aligned with them. Nevertheless, freegans were able to achieve a sense of their place in the city, one that made living morally frequently unremarked and second nature. They did so through a habitus that both drew on and reconstructed the physical environment in line with their frequently unarticulated and varied conceptions of "nature." While freeganism is no doubt an idiosyncratic movement, these findings have implications for studies on materiality, nature, and morality.

Material objects can play a significant, and distinctive, role in social life. As recent work has shown, objects are not mere bearers of cultural meanings but can actively reshape those meanings (Latour 2005; McDonnell 2010; Jerolmack and Tavory 2014). I have added the assertion that material objects—or, more generally, the nonsocial—can be the ends of moral life. In truth, "bringing materiality back in"—to evoke a sociological cliché—is consistent with common sense. Although "waste" is not a common object of moral concern, it is nonetheless arguable that significant moral action is directed toward nonhuman entities, such as "gods" or "nations" (see Cerulo 2009). Physical representations of those entities, such as idols or flags, can call forth powerful moral commitments. Yet the moments when objects proved uncooperative—when bikes broke down, food rotted, or others interpreted waste in a radically different fashion—also speak to the complexities, limits, and risks of the material world in sustaining a moral self. The three roles of objects I have demonstrated here provide a basis for further research into the extent and role of the material world in moral life.

The fact that freegans made living morally seem like second nature through their interactions with waste itself has intriguing implications. On one hand, waste's banality would seem to reaffirm Durkheim's (1965, p. 52) assertion that mundane objects—ranging from "a rock, a tree, a spring, a pebble, a piece of wood, [or] a house"—can be imbued with moral meaning. A crucifix around the neck could be a significant marker of moral boundaries; an old photo a potent reminder of familial commitments; a carefully sorted recycling bin proof of an ecological identity. Yet that freegans chose waste was not random. Waste for freegans was "polyvocal" (McDonnell 2010, p. 1803): at once a symbol of capitalist immorality and privately a resource for moral living. Waste evokes intensely negative emotional and moral meanings in broader Western culture (see Douglas 1966; Abbott 2014). In a group that set itself up in opposition to mainstream (im)morality, using waste provided an effective way to leverage the adversity of the environment.
High-end green consumption may be just a cover for elite distinction (see Johnston 2008; Elliott 2013), but low-end salvaging is a way of abnegating a social status perceived as immoral through contaminating oneself with negatively coded objects.

These findings also bear on literatures examining the social construction of nature. Sociologists have largely moved beyond older nature-city binaries, convincingly showing that urban denizens can have meaningful experiences of nature even in a modern metropolis (Wachsmuth 2012; Jerolmack 2013). Some "radical constructivists" have gone further to claim that "in a fundamental sense, there is nothing unnatural about New York City" (Harvey 1996, p. 186; see also Heynen et al. 2006). Yet my findings remind us of an important caveat: whether or not nature is "constructed" from a social scientific point of view, freegans would doubtlessly say that nature's power as a grounding for morality stems from the fact that they perceived it as not constructed and not coming from society. Freegans, like many modern-day environmentalists and ecoconscious citizens, drew on nature as a potent, transcendent ideal, much as others might appeal to Christianity or socialism.

Urban homesteaders, gardeners, or dumpster divers are not simply "thinking" nature into existence, however. Nature is made through practice and interaction (Fine 1998; Jerolmack 2013). While these interactions are invariably shaped by social characteristics (Bell 1994; Jerolmack 2013)—freegans' visions of nature, for example, reflected a distinctively Western and middle-class worldview—physical objects were also a key and indispensable component of these constructions. Indeed, in the absence of physical referents, freegans' construction of the city as natural would lack credibility, both to themselves and to others. By focusing on the physical material out of which nature is made, we can understand that, while nature may be socially constructed, it is not constructed effortlessly or evenly. Even if freegans' capacity to imbue the city with natural meaning supports a constructivist viewpoint, freegans implicitly understand that rendering the city natural is more difficult than, say, doing the same to a rural farm. Further research should examine how deploying the notoriously nebulous culture code of nature is facilitated or blocked by different physical environments.

Finally, this article speaks to the resurgent sociological interest in morality. I have offered an intervention into perennial debates about how moral beliefs relate to action by arguing that, although the two are rarely perfectly in sync, a moral habitus can nonetheless draw on the challenging aspects of the environment to create a context for acting on moral motivations. I do not want to imply that achieving an affirming sense of one's moral place is inevitable or in all cases necessary; actors—including those who, like freegans, appear to have strong moral identities—can and do live with glaring contradictions. I do, however, concur with those recent studies that suggest that at least some actors do have an internalized moral core and do make serious, if inconsistent, efforts to live up to it. Morality should not just be studied in terms of achieving a particular and often unattainable bar of "right" but also as part of the ongoing striving for the "good" (Joas 2000, p. 168).
By thinking in terms of a moral habitus, we can refocus on this striving’s generativity of new practices, the formation of moral beliefs and identities through action, and the notion that living morally can be an almost subconscious second nature. Freegans had a sense they were living naturally but rarely could explicitly explain how. If freegans did manage to rework their physical environment in a way that gave them a sense of moral place, it came at a price. Living morally was something intrinsically desirable, yet at the same time, they recognized that morality could interfere with other things they desired, ranging from maintaining social relationships to being efficacious activists. They thus remind us that, as Durkheim ([1914] 1973, p. 152) observed, “we cannot pursue moral ends without causing a split within ourselves, without offending the instincts and the penchants that are the most deeply rooted in our bodies.” The material dimensions of morality confirm that, precisely because morality is seen as coming from things outside of ourselves, making morality second nature often comes into conflict with the “first nature” of other identities or motivations. In the end, in motivating action that transforms the world, morality often presents a barrier—perhaps a physical one—to actions that would remake the world for other reasons and to other ends.
katebushwick · 5 years
Text
The Political Lives of Dead Bodies
To ask this question exposes one to a flourishing literature on “the body,” much of it inspired by feminist theory and philosophy,11 as well as potentially to poststructuralist theories about language and “floating signifiers.” I will not take up the challenge of this literature here but will limit myself instead to some observations about bodies as symbolic vehicles that I think illuminate their presence in postsocialist politics.12 Bones and corpses, coffins and cremation urns, are material objects. Most of the time, they are indisputably there, as our senses of sight, touch, and smell can confirm. As such, a body’s materiality can be critical to its symbolic efficacy: unlike notions such as “patriotism” or “civil society,” for instance, a corpse can be moved around, displayed, and strategically located in specific places. Bodies have the advantage of concreteness that nonetheless transcends time, making the past immediately present. Their “thereness” undergirded the founding and continuity of medieval monasteries, providing tangible evidence of a monastery’s property right to donated lands.13 That is, their corporeality makes them important means of localizing a claim (something they still do today, as I suggest in chapter 3). They state unequivocally, as Peter Brown notes, “Hic locus est.”14 This quality also grounded their value as relics. The example of relics, however, immediately complicates arguments based on the body’s materiality: if one added together all the relics of St. Francis of Assisi, for instance, one would get rather more than the material remains of one dead man. So it is not a relic’s actual derivation from a specific body that makes it effective but people’s belief in that derivation. In short, the significance of corpses has less to do with their concreteness than with how people think about them. A dead body is meaningful not in itself but through culturally established relations to death and through the way a specific dead person’s importance is (variously) construed.15 Therefore, I turn to the properties of corpses that make them, in LĂ©vi-Strauss’s words, “good to think” as symbols. Bodies—especially those of political leaders—have served in many times and places worldwide as symbols of political order. Literature in both historiography and anthropology is rife with instances of a king’s death calling into question the survival of the polity. More generally, political transformation is often symbolized through manipulating bodies (cutting off the head of the king, removing communist leaders from mausoleums). We, too, exhibit this conception, in idioms such as “the body politic.” A body’s symbolic effectiveness does not depend on its standing for one particular thing, however, for among the most important properties of bodies, especially dead ones, is their ambiguity, multivocality, or polysemy. Remains are concrete, yet protean; they do not have a single meaning but are open to many different readings. Because corpses suggest the lived lives of complex human beings, they can be evaluated from many angles and assigned perhaps contradictory virtues, vices, and intentions. While alive, these bodies produced complex behaviors subject to much debate, which produces further ambiguity. As with all human beings, one’s assessment of them depends on one’s disposition, the context one places them in (brave or cowardly compared with whom, for instance), the selection one makes from their behaviors in order to outline their “story,” and so on.
Dead people come with a curriculum vitae or rĂ©sumé—several possible rĂ©sumĂ©s, depending on which aspect of their life is being considered. They lend themselves to analogy with other people’s rĂ©sumĂ©s. That is, they encourage identification with their life story, from several possible vantage points. Their complexity makes it fairly easy to discern different sets of emphasis, extract different stories, and thus rewrite history. Dead bodies have another great advantage as symbols: they don’t talk much on their own (though they did once). Words can be put into their mouths—often quite ambiguous words—or their own actual words can be ambiguated by quoting them out of context. It is thus easier to rewrite history with dead people than with other kinds of symbols that are speechless. Yet because they have a single name and a single body, they present the illusion of having only one significance. Fortifying that illusion is their materiality, which implies their having a single meaning that is solidly “grounded,” even though in fact they have no such single meaning. Different people can invoke corpses as symbols, thinking those corpses mean the same thing to all present, whereas in fact they may mean different things to each. All that is shared is everyone’s recognition of this dead person as somehow important. In other words, what gives a dead body symbolic effectiveness in politics is precisely its ambiguity, its capacity to evoke a variety of understandings.16 Let me give an example. On June 16, 1989, a quarter of a million Hungarians assembled in downtown Budapest for the reburial of Imre Nagy, Hungary’s communist prime minister at the time of the 1956 revolution.17 For his attempts to reform socialism he had been hanged in 1958, along with four members of his government, and buried with them in unmarked graves, without coffins, facedown. From the Hungarian point of view, this is a pretty ignominious end.18 Yet now he and those executed with him were reburied, faceup in coffins, with full honors and with tens of thousands in attendance. Anyone watching Hungarian television on that June 16 would have seen a huge, solemn festivity, carefully orchestrated, with many foreign dignitaries as well as three Communist Party leaders standing near the coffins (the Communist Party of Hungary had not yet itself become a corpse). The occasion definitely looked official (in fact it was organized privately), and it rewrote the history—given only one official meaning for forty years—of Nagy’s relation to the Hungarian people. Although the media presented a unified image of him, there was no consensus on what Nagy’s reburied corpse in fact meant.
Susan Gal, analyzing the political rhetoric around the event, finds five distinct clusters of imagery, some of it associated with specific political parties or groups:20 (1) nationalist images emphasizing national unity around a hero of the nation (nationalist parties soon found these very handy); (2) religious images (which could be combined with the nationalist ones) emphasizing rebirth, reconciliation, and forgiveness, and presenting Nagy as a martyr rather than a hero; (3) various images of him as a communist, as the first reform communist, and as a true man of the people, his reburial symbolizing the triumph of a humane socialist option and the death of a cruel Stalinist one; (4) generational images, presenting him as the symbol of the younger generation whose life chances had been lost with his execution (this group would soon become the Party of Young Democrats); and (5) images associated with the ideas of truth, conscience, and rehabilitation, so that his reburial signified clearing one’s name and telling the story of one’s persecution—an opportunity to rewrite one’s personal history. (That some people presented communist Prime Minister Nagy as an anticommunist hero shows just how complex his significances could be.) Perhaps attendance at Nagy’s funeral was so large, then, because he brought together diverse segments of the population, all resonating differently to various aspects of his life. And perhaps so many political formations were able to participate because all could legitimate a claim of some kind through him, even though the claims themselves varied greatly.21 This, it seems to me, is the mark of a good political symbol: it has legitimating effects not because everyone agrees on its meaning but because it compels interest despite (because of?) divergent views of what it means. Aside from their evident materiality and their surfeit of ambiguity, dead bodies have an additional advantage as symbols: they evoke the awe, uncertainty, and fear associated with “cosmic” concerns, such as the meaning of life and death.22 For human beings, death is the quintessential cosmic issue, one that brings us all face to face with ultimate questions about what it means to be—and to stop being—human, about where we have come from and where we are going. For this reason, corpses lend themselves particularly well to politics in times of major upheaval, such as the postsocialist period. The revised status of religious institutions in postsocialist Eastern Europe reinforces that connection, for religions have long specialized in dealing with ultimate questions. Moreover, religions monopolize the practices associated with death, including both formal notions of burial and the “folk superstitions” that all the major faiths so skillfully integrated into their rituals. Except in the socialist period, East Europeans over two millennia have associated death with religious practices. A religious reburial nourishes the dead person both with these religious associations and with the rejection of “atheist” communism. Politics around a reburied corpse thus benefits from the aura of sanctity the corpse is presumed to bear, and from the implicit suggestion that a reburial (re)sacralizes the political order represented by those who carry it out. Their sacred associations contribute to another quality of dead bodies as symbols: their connection with affect, a significant problem for social analysis. Anthropologists have long asked, Wherein lies the efficacy of symbols?
How do they engage emotions?23 The same question troubles other social sciences as well: Why do some things and not others work emotionally in the political realm? It is asked particularly about symbols used to evoke national identifications; Benedict Anderson, for instance, inquires why national meanings command such deep emotional responses and why people are “ready to die for these inventions.”24 The link of dead bodies to the sacred and the cosmic—to the feelings of awe aroused by contact with death—seems clearly part of their symbolic efficacy. One might imagine that another affective dimension to corpses is their being not just any old symbol: unlike a tomato can or a dead bird, they were once human beings with lives that are to be valued. They are heavy symbols because people cared about them when they were alive, and identify with them. This explanation works best for contemporary deaths, such as the Yugoslav ones I discuss in chapter 3. Many political corpses, however, were known and loved in life by only a small circle of people; or—like Serbia’s Prince Lazar or Romania’s bishop Inochentie Micu (whose case I examine in the next chapter)—they lived so long ago that any feelings they arouse can have nothing to do with them as loved individuals. Therefore I find it insufficient to explain their emotional efficacy merely by their having been human beings. Perhaps more to the point is their ineluctable self-referentiality as symbols: because all people have bodies, any manipulation of a corpse directly enables one’s identification with it through one’s own body, thereby tapping into one’s reservoirs of feeling. In addition (or as a result), such manipulations may mobilize preexisting affect by evoking one’s own personal losses or one’s identification with specific aspects of the dead person’s biography. This possibility increases wherever national ideologies emphasize ideas about suffering and victimhood, as do nearly all in Eastern Europe.25 These kinds of emotional effects are likely enhanced when death’s “ultimate questions,” fear, awe, and personal identifications are experienced in public settings—for example, mass reburials like those of Imre Nagy or the Yugoslav skeletons from World War II. Finally, I believe the strong affective dimension of dead-body politics also stems from ideas about kinship and proper burial. Kinship notions are powerful organizers of feeling in all human societies; other social forms (such as national ideologies) that harness kinship idioms profit from their power. Ideas about proper burial often tie kinship to cosmic questions concerning order in the universe, as well. I will further elaborate on this suggestion later in this chapter and in chapter 3. Dead bodies, I have argued, have properties that make them particularly effective political symbols. They are thus excellent means for accumulating something essential to political transformation: symbolic capital.26 (Given the shortage of investment capital in postsocialist countries and the difficulties of economic reform, perhaps the symbolic variety takes on special significance!) The fall of communist parties devalued much of what had served as political or symbolic capital, opening a wide field for competition in which success depends on finding and accumulating new capital resources. Dead bodies, in short, can be a site of political profit.
In saying this, I am partly talking about the process of establishing political legitimacy, but by emphasizing symbolic capital I mean to keep at the forefront of my discussion the symbolic elements of that process.

REORDERING WORLDS OF MEANING

In considering the symbolic properties of corpses, I have returned repeatedly to their “cosmic” dimension.27 I do so because I believe this emphasis suits what I observed earlier about the significance of the events of 1989: they mark an epochal shift in the international system, one whose effects pose fundamental challenges to people’s hitherto meaningful existence. This is true worldwide, but especially in the former socialist bloc. All human beings act within certain culturally shaped background expectations and understandings, often not conscious, about what “reality” is.28 One might call these their sense of cosmic order, or their general understanding of their place in the universe.29 By this I mean, for instance, ideas about where people in general and our people in particular came from; who are the most important kinds of people, and how one should behave with them; what makes conduct moral or immoral; what are the essential attributes of a “person”; what is time, and how does it flow (or not); and so on. Following current anthropological wisdom, however, I do not see these cosmic conceptions strictly as “ideas,” in the cognitive realm alone. Rather, they are inseparable from action in the world—they are beliefs and ideas materialized in action. This is one way (the way I prefer) of defining culture. Unfortunately, nearly all nonanthropologists understand “culture” as cognition, ideas—a meaning I want to avoid.30 Hence, instead of using “culture,” I speak of “worlds of meaning” or simply “worlds” (though not in the sense of “lifeworld” that is specific to phenomenology and the recent work of JĂŒrgen Habermas). “World,” as I intend it, seeks to capture a combination of “worldview” and associated action-in-the-world, people’s sense of a meaningful universe in which they also act. Their ideas and their action constantly influence one another in a dynamic way. In moments of major transformation, people may find that new forms of action are more productive than the ones they are used to, or that older forms make sense in a different way, or that ideals they could only aspire to before are now realizable. Such moments lead to reconfiguring one’s world; the process can be individual and collective, and it is often driven by the activities of would-be elites (in competition with one another). Students of the demise of Soviet-style party-states have tended to pose the problems of postsocialist transformation as creating markets, making private property, and constructing democracy. This frame permits two things: one can absorb the postsocialist examples into a worldwide “transition to democracy,” and one can emphasize technical solutions to the difficulties encountered (“shock therapy,” writing constitutions, election-management consulting, training people in new ways of bookkeeping, etc.). I believe the postsocialist change is much bigger. It is a problem of reorganization on a cosmic scale, and it involves the redefinition of virtually everything, including morality, social relations, and basic meanings.
It means a reordering of people’s entire meaningful worlds.31 Although my phrasing may seem exaggerated, without this perspective I doubt that we can grasp the magnitude of what 1989 has meant for those living through it: a rupture in their worlds of meaning, their sense of cosmic order. The end of Party rule was a great shock to people living in the former socialist countries. This was not because everyone had internalized the Communist Party’s own cosmology and organization of things: far from it. The history of Party rule throughout the region was a long struggle between what Party leaders wanted and what everyone else was prepared to live with. Practices, expectations, and beliefs quite antithetical to the Party’s dictates jostled with those the Party promoted. Nevertheless, daily life proceeded within or against certain constraints, opportunities, and rules of the game that the political system had established, and these formed a set of background expectations framing people’s lives. The events of 1989 disrupted these background expectations in ways that many people in the region found disorienting (even if some of them also found therein new opportunities). They could no longer be sure what to say in what contexts, how to conduct politics with more than one political party, how to make a living in the absence of socialist subsidies and against spiraling inflation, and so on. They found their leisurely sense of time’s passage wholly unsuited to the sudden crunch of tasks they had to do. Moreover, their accustomed relations with other people became suddenly tense. Quarrels over property, for example, severed long-amicable bonds between siblings and neighbors; new possibilities for enrichment altered friendships; and increasing numbers of parents saw their plans for security and retirement evaporate as more of their children headed abroad. In these circumstances, people of all kinds could no longer count on their previous grasp of how the world works. Whether consciously or not, they became open to reconsidering (either on their own or with the help of political, cultural, and religious elites) their social relations and their worlds of meaning. This is what I mean when I speak of reordering meaningful worlds. I believe dead-body politics plays a part in that process, and that to examine it will clarify my project of animating the study of politics. My conceptualization here resonates with Durkheim’s, particularly the Durkheim of the Elementary Forms of the Religious Life, which is among other things a treatise on the possibilities for moral regeneration in human societies.32 The resemblance is not fortuitous. First, Durkheim wrote during a time of great moral ferment in France; his work aimed expressly to comment upon that ferment and contribute to quieting it. His situation then reminds one of the 1990s postsocialist situation. Second (and for that very reason), some scholars consider Durkheim the only major theorist apt for thinking about political and moral renewal.33 Although I gladly second him in that endeavor, and although some of my proposals in this book (such as the theme of proper burial) hint at a Durkheimian reflex, I part company with him in regard to the conscience collective; I look not for shared mentalities but for conflict among groups over social meanings. Reordering worlds can consist of almost anything—that’s what a “world” means.
To reorder worlds of meaning implicates all realms of activity: social relations, political ideas and behavior, worldviews, economic action. Far more domains of life might be included under this rubric than I have time to explore, and dead bodies can serve as loci for struggling over new meanings in any of them. For my purposes in this book, I will emphasize their role in the following areas: struggles to endow authority and politics with sacrality or a “sacred” dimension; contests over what might make the postsocialist order a moral one; competing politicizations of space and time; and reassessments of identities (especially national ones) and social relations. I discuss a fifth possible domain central to postsocialist transformation—property relations—together with the others, for it enters into all of them. Yet another domain that figures centrally in Eastern Europe’s transformation but cannot be treated here is the obverse of death, namely [re]birth. The politics of abortion, for instance, has agitated nearly all postsocialist countries, as pro-natalist nationalists strive for demographic renewal of their nations following what they see to be socialism’s “murderous” abortion policies.34 In each of these domains, dead bodies serve as sites of political conflict related to the process of reordering the meaningful universe. The conflicts involve elites of many kinds and the populations they seek to influence, in the altered balance of power that characterizes the period since 1989. I will explain what I intend by these rubrics, briefly for the first three and at greater length for the fourth.

Authority, Politics, and the Sacred

The meaningful worlds of human beings generally include sets of values concerning authority—values like the monarch’s divinity, orderly bureaucratic procedure, a leader’s charisma, full democratic participation, the scientific laws of progress, and so on. Like Weber, we can speak of different ways of acknowledging authority as modes of legitimation, and in considering social change we can ask how one group of legitimating values gives way to another. Unlike Weber, who tended to see the sacred as part of only some modes of authority, I (and many other anthropologists) would hold that authority always has a “sacred” component, even if it is reduced merely to holding “as sacred” certain secular values. This was certainly true of socialist regimes, which sought assiduously to sacralize themselves as guardians of secular values, especially the scientific laws of historical progress. Because their language omitted notions of the sacred, however, both outsiders and their own populations tended to view them as lacking a sacred dimension.35 Part of reordering meaningful worlds since 1989, then, is to sacralize authority and politics in new ways. A ready means of presenting the postsocialist order as something different from before has been to reinsert expressly sacred values into political discourse. In many cases, this has meant a new relation between religion and the state, along with a renewal of religious faith.36 Reestablishing faith or relations with a church enables both political parties and individuals to symbolize their anticommunism and their return to precommunist values. This replaces the kind of sacredness that undergirded the authority of communist parties and serves to sacralize politics in new ways. In chapter 2 I describe a conflict that has arisen around the connection of church with politics in Romania (and other Orthodox countries).
Among the conflict’s many facets are struggles over the sacralization of politics, and reburying a dead body is part of them.

Moral Order

Use of religious idioms may also be part of remaking the world as a moral place. Because communist parties proclaimed themselves custodians of a particular moral order, the supersession of communism reopens concepts of political morality, both for politicians and others who want to claim it, and for ordinary citizens concerned with the behavior of those they live among. In the first few years following 1989, the route to new moral orders passed chiefly through stigmatizing the communist one: all who presented themselves either as opposed to communism or as its victims were ipso facto making a moral claim. Many of these claims led to attempts at assessing blame or accountability and at achieving revenge, compensation, or restitution. Depending on who organizes and executes the process, the moral order implied in pursuing accountability can strengthen a new government, garner international support for a party to a dispute, or restore dignity to individual victims and their families. Society’s members may see enforcing accountability as part of moral “purification”: the guilty are no longer shielded, the victims can tell of their suffering, and the punishment purifies a public space that the guilty had made impure. Alternatively, the moral outcome may be seen as lying not in purification but in compensation for wrongs acknowledged. Foremost among the means for this was the question of restoring private property ownership, as something morally essential to a new anticommunist order. Efforts to establish accountability thus served to draw up a moral balance sheet, to settle accounts, as a condition of making the postsocialist order a moral one. Assessing blame and demanding accountability can occur at many sites, one of them being dead bodies. (In chapter 3, I discuss a particularly stark instance of this, former Yugoslavia, where rival exhumations produced reciprocal charges of genocide and acts of revenge that fueled the breakup of the Yugoslav state.) Another form of “accounting” that implicates dead bodies involves efforts to determine “historical truth,” which many accuse socialism of having suppressed. An example is the reburial of Imre Nagy, mentioned above, which sought to reestablish historical truth about Nagy’s place in Hungarian history, as part of creating a new moral universe. His example leads us to an additional means of reordering worlds, namely, giving new values to space and time.

Reconfiguring Space and Time

As scholars ranging from Durkheim to Elias to Leach have argued, what we call space and time are social constructs.37 All human societies show characteristic ways of conceptualizing and organizing them; any one society may contain multiple ways, perhaps differentiated by activity or social group.38 When I speak of how space and time can be resignified, I have in mind two distinct possibilities: the more modest one of changing how space and time are marked or punctuated, and the more momentous one of transforming spatiality and temporality themselves. Socialism attempted both, the latter by imposing entirely new rules on the uses of space and creating temporalities that were arrhythmic and apocalyptic instead of the cyclical and linear rhythms they displaced.39 I will leave that subject to chapter 3 and will briefly discuss changes in temporal and spatial punctuation now.
We might think of both space and time using the metaphor of a geological landscape. Any landscape contains more potential landmarks than are noted by those who pass through it. When I speak of “punctuating” or “marking” space and time, I mean highlighting a specific set of landmarks—using this rock or that hill (or date, or event) as a point of reference, instead of some other rock or hill (or date or event), or some other feature altogether, such as a railway crossing. Influencing the kinds of features selected are such things as one’s position relative to them (a rock is a useful landmark only from a certain angle or distance), cultural factors (some groups find trees more meaningful than rocks), local economies (hunter-gatherers will notice items a traveling salesman will miss), and so on. If we put our landscape on “fast forward,” the landscape itself transforms, hills and mountains rising up or subsiding while valleys are etched and floras change type. The constantly changing relief presents still other possibilities for establishing landmarks. I think of such spatiotemporal landmarks as aspects of people’s meaningful worlds; modifying the landmarks is part of reordering those worlds. For example, as I observed in the introduction, among the most common ways in which political regimes mark space are by placing particular statues in particular places and by renaming landmarks such as streets, public squares, and buildings. These provide contour to landscapes, socializing them and saturating them with specific political values: they signify space in specific ways. Raising and tearing down statues gives new values to space (resignifies it), just as does renaming streets and buildings. Another form of resignifying space comes from changes in property ownership, which may require adding border stones and other markers to differentiate landscapes that socialism had homogenized. Where the political change includes creating entire nation-states, as in ex-Yugoslavia and parts of the Soviet Union, resignifying space extends further: to marking territories as “ours” and setting firm international borders to distinguish “ours” from “theirs.” The location of those borders is part of the politics of space, and dead bodies have been active in it. As for time, among the usual ways of altering its political values are by creating wholly new calendars, as in the French Revolution (whose first casualties included clocks themselves40); by establishing holidays to punctuate time differently; by promoting activities that have new work rhythms or time discipline; and by giving new contours to the “past” through revising genealogies and rewriting history.41 Since 1989, the last of these has been very prominent in “overcoming” the socialist past and (as some people see it) returning to a “normal” history. I view this historical revision, too, as an aspect of reordering worlds, and one important means of doing it has been to reposition dead bodies.

National Identities and Social Relations

The worlds of meaning that human beings inhabit include characteristic organizations of what we call “identities.”42 In the contemporary United States, people are thought to hold several identities, the most commonly mentioned being class, occupation, race, gender, and ethnic identity; in other times and places, these would have been less salient than kin-based identifications, or rank in a system of feudal estates. Especially prominent in the East European region have been national identifications.
Contrary to popular opinion, I and others have argued that socialism did not suppress these identifications but reinforced them in specific ways.43 They remain prominent in the postsocialist period, as groups seek to reorganize their interrelations following the demise of their putative identities as “socialist men,” now superseded by “anticommunist” as a basic political identification. Sharp conflict around national identities has arisen above all from the dissolution of the Yugoslav and Soviet federations, as new nation-states take their place. Conflicts to (re)define national identities implicate contests over time and space, for statues and revised histories often celebrate specific sites and dates as national. I find it helpful to assimilate national identities into the larger category of social relations within which I think they belong: kinship. In my view, the identities produced in nation-building processes do not displace those based in kinship but—as any inspection of national rhetorics will confirm—reinforce and are parasitic upon them. National ideologies are saturated with kinship metaphors: fatherland and motherland, sons of the nation and their brothers, mothers of these worthy sons, and occasionally daughters. Many national ideologies present their nations as large, mostly patrilineal kinship (descent) groups that celebrate founders, great politicians, and cultural figures as not just heroes but veritable “progenitors,” forefathers—that is, as ancestors. Think of George Washington, “Father of His Country,” and AtatĂŒrk, “Father Turk.” (I say “patrilineal” because, as numerous scholars have observed, nearly all the “ancestors” recognized in national ideologies are male.44) Nationalism is thus a kind of ancestor worship, a system of patrilineal kinship, in which national heroes occupy the place of clan elders in defining a nation as a noble lineage. This view is not original with me. It appears in the work of anthropologists Edmund Leach, David Schneider, and Meyer Fortes,45 and in Benedict Anderson’s suggestion that we treat nationalism “as if it belonged with ‘kinship’ and ‘religion,’ rather than with ‘liberalism’ or ‘fascism.’ ”46 Given this view, the work of contesting national histories and repositioning temporal landmarks implies far more than merely “restoring truth”: it challenges the entire national genealogy. This happens quite visibly in reburying a dead body, an act that inserts the dead person differently as an ancestor (more central or more peripheral) within the lineage of honored forebears. My focus on corpses enables me to push this argument even further and to speak of the proper burials of ancestors, which include revering them as cultural treasures.

Any human community consists not only of those now living in it but also, potentially, of both ancestors and anticipated descendants. In a wry statement by a Montenegrin poet we see part of this nicely: “We Montenegrins are a small population even if you count our dead.” Different human groupings place different emphases on these three segments of possible community—dead, living, and yet unborn. Imperial China, for example, is renowned for having made ancestors into real actors in the world of the living, while in other societies ancestors are crucial points of reference for the living but inhabit their own world (though they may enter ours on occasion). Pro-natalist nationalist ideologies, by contrast, are preoccupied with descendants, connected to ancestors in an endless chain through time.
In many human communities, to set up right relations between living human communities and their ancestors depends critically on proper burial.47 Because the living not only mourn their dead but also fear them as sources of possible harm, special efforts are made to propitiate them by burying them properly. The literature of anthropology contains many examples of burial practices designed to set relations with dead ancestors on the right path, so that the human community—which includes both dead and living—will be in harmony. Gillian Feeley-Harnik writes of such ancestor practices in Madagascar: “Ancestors are made from remembering them. Remembering creates a difference between the deadliness of corpses and the fruitfulness of ancestors. The ancestors respond by blessing their descendants with fertility and prosperity.”48 Their harmonious coexistence is about more than just getting along: it is part of an entire cosmology, part of maintaining order in the universe. All human groups have ideas and practices concerning what constitutes a “good death,” how dead people should be treated, and what will happen if they are not properly cared for. In what direction should the feet of the corpse be pointed? Who should wash it, and how should it be dressed? Can one say the name of the deceased person or not? How much time should elapse before burial? Is alcohol allowed at the wake? May the body be cremated without killing the person’s chances for resurrection? What things must be said at the funeral? What kinds of gifts should be exchanged, and with whom? If one of these things is not done correctly, what will happen? Proper burials have myriad rules and requirements, and these are of great moment, for they affect the relations of both living and dead to the universe that all inhabit. Southern, Central, and Eastern Europe offer many examples of such conceptual worlds.49 Although specific beliefs and practices vary widely across the region, for illustrative purposes they display sufficient commonalities to be treated together. What goes into a proper burial? Kligman, Lampland, and RĂ©v report50 from contemporary Transylvanian and Hungarian ethnography that villagers there believe the soul of the deceased person watches the funeral, and if it is dissatisfied, it will return and punish the living by creating havoc, often in the form of illness. Was enough money thrown into the coffin? Were the burial clothes fine and comfortable? Was the deceased’s favorite pipe put into the casket? If the person died unmarried, was a wedding also performed at his funeral? Various parts of the funeral ritual (the orientation of the body as it leaves the house, the reciprocal asking of forgiveness between living and dead, etc.) aim specifically to prevent a disgruntled soul from coming back. The possibilities for mayhem are much graver if the deceased had no burial at all. In addition, for months and years after the funeral these villagers offer regular prayers and ritual meals to propitiate the dead and keep them quiet, believing that a well-fed, contented soul will protect its earthly kin.51 One still finds ritual practices of this kind, for instance, in Transylvania and the former Yugoslavia.
Every year a week after Easter, villagers go to the graves of kin in the cemetery, bearing special food cooked for the occasion; they sit on the graves and eat, offering the food to their dead.52 For these people it is not enough that the dead be properly buried: the living must keep feeding their dead kin so as to ensure the ancestors’ blessing and continued goodwill, which are essential to a well-ordered universe.53 From research in the Polish/Ukrainian borderland, Oltenia (Romania), and elsewhere we learn that a dead person who does not receive a proper burial has a number of options.54 He may become a “walking dead man,” annoy his family members, try to sleep with his wife, and seek to inflict retribution on those who wronged him. Or he may become a vampire. (These job choices are the preserve chiefly of males; unhappy dead females take on other forms.) One way or another, he makes the lives of his earthly relatives and neighbors unpleasant; they must either give him a proper burial (if he had none) or (if already buried) dig him up and cut off his head or drive a stake through his heart. Concern for the well-being of ancestors and other dead is thus crucial to peaceful living and to an orderly universe; proper burial helps to ensure these. The idea that properly treated ancestors become protective spirits (or even saints) is found from Russia westward into Hungary, as is fear that a vengeful spirit will torment the living unless suitably placated. Such notions easily acquire deeper religious significance. Tumarkin describes, for instance, the link between the souls of ancestors and saints: in a Russian peasant house, icons often hang opposite the hearth, where the ancestors’ souls are thought to reside. Russian Christianity absorbed forms of ancestor worship, which became an important part of cults of the saints; indeed, Russian peasants have long understood saints to be their adored forefathers who sacrificed themselves for future generations. “To light a candle for the saints,” Tumarkin observes, “was to enter into spiritual discourse with the protective spirits of the past.”55 Ideas about proper burial figure even in present-day dead-body politics. An example is the debates around whether to remove Lenin’s mummy from its mausoleum in Moscow’s Red Square. Like the corpse of Imre Nagy, Lenin’s has been the object of much politicking. Although the idea of removing him and burying him somewhere else56 is not new, during the 1990s it was proposed and debated with increasing vigor. (The debate was briefly sidetracked by a report in Forbes magazine, also carried on U.S. TV programs such as ABC’s Evening News, that Lenin was to be sold for hard currency at international auction.57) Having initially opposed the idea, Yeltsin later changed his mind, suggesting in 1993 and again in 1997 that Lenin be removed from Red Square for burial.58 Then came the attacks on the statues of Tsar Nicholas and Peter the Great, fatal in the former instance; both were motivated, as I said, by opposition to Lenin’s burial. The Russian Orthodox Church came out on the side of burying Lenin but refrained from stating whether the church would bury him as a “Christian.” Meanwhile, the Duma voted to denounce the project for his removal, and the question of who (Yeltsin by presidential decree, the Federal Assembly, or the people by referendum) should make the final decision was tossed around like a hot potato.
A poll taken that June showed clearly who favored burial and who did not: a majority supported the idea, and 32 percent opposed it; the latter were concentrated among supporters of Communist Party leader Gennady Zyuganov and some nationalists.59 One could say a great deal more on the politics behind Lenin’s mummy (as does Vladislav Todorov, in a lengthy and often hilarious discussion60). Market forces also have their effect. The embalmers who own the secret formula for Lenin reportedly took on after-hours work, catering to the fashions of newly wealthy Russians wanting to be embalmed; this moonlighting gives them another source of income, now that state funds for tending Lenin’s mummy have dried up, and subsidizes their continuing to work on him.61 But also important to determining Lenin’s fate are ideas about what makes for a proper burial. Their relevance comes from the decidedly religious underpinnings of the Lenin cult, and from notions about the divine origin of the authority of the tsars (to whom Lenin was often compared).62 An embalmed and not-buried Lenin offends Russian Orthodox sensibility, according to which every dead person should be interred, with very specific rites.63 For Russians, as for others discussed above, if someone is not buried or is buried improperly (or if abnormal people are given a “normal” burial), then bad things will happen.64 Because an unburied body is a source of things not being quite right in the cosmos, this is in itself sufficient reason to place Lenin firmly in the soil. But the debate is complicated by another set of beliefs, one having to do with saints. In Russian Orthodox doctrine, a dead person is revealed to be a saint not only through miracles but also because the corpse does not putrefy. As is true in many parts of the world,65 it used to be common Orthodox practice to exhume the dead after a certain time (three, five, or seven years was customary), wash the bones, and rebury them with a special liturgy. This ritual is still performed in some areas, including rural Greece.66 If upon digging up a Russian corpse one found that it had not decayed, its preservation was a clear sign of sainthood.67 Even though the incorruptibility of Lenin’s corpse is a human achievement, he is still touched by these associations: dead people whose bodies have not decayed are holy.68 From the religious point of view, then, one can see that Lenin’s mummy should be buried, lest bad things happen, and at the same time that it should not be buried but be exposed under glass, as befits a saint. In either case, the rationale has not just religious backing but roots in ideas about ongoing relations between the living and their dead. The only group excluded from arguments of this kind is the Communist Party, but it has ingeniously exploited other aspects of popular belief. In the parliamentary debate over what to do with Lenin, one of the communist participants reminded his audience that in 1941 Russian archaeologists had dug up the body of Tamerlane, about whom it was said that anyone who disturbed his grave would be cursed. Shortly thereafter, the Nazis overran the Soviet Union.
The deputy concluded by asking what might happen if they now disturbed Lenin’s casket to bury him!69 All these different and contradictory views about reburial are available for use in a political contest that I believe is enriched by including them, to enchant the kind of political analysis we might do on Lenin’s corpse.70 I should clarify my aims in making these points about “proper burial”: I both am and am not making an argument about the continuity of older beliefs and practices. Given that years of official atheism and relentless modernization have eroded many beliefs recorded in earlier ethnographic work, I would be foolish to presume continuity. Nonetheless, as Gail Kligman’s wonderful book The Wedding of the Dead shows clearly for northern Romania in the 1980s, popular ideas such as those I have described were not erased during the socialist period.71 Even Moscow intellectuals who think themselves beyond such “superstitions” can feel that there is something uncomfortably out of order about Lenin’s unburied corpse.72 But we should think about these seeming continuities carefully. Some practices that appear to be constant may actually have changed: for example, Andreesco and Bacou describe the modifications that distinguish burial practices in Oltenia (southern Romania) today from those of decades ago.73 Assuming the trappings of modernity may mean that people no longer feed their ancestors, but they may still think it important to recognize them. More important, however, is that some “traditional” practices are in fact reinforced (if not, indeed, invented, in Hobsbawm and Ranger’s famous formulation74) by their present setting. Andreesco and Bacou indicate that far from suppressing older burial practices, Romanian socialism amplified some of them.75 One reason might be that because religious burial violated official atheism, to bury one’s dead properly was a form of resistance to official religious policy. A similar point emerges from an early 1990s article in the New York Times, which reported that in Serbia as of the 1980s, practices involving hospitality and feasting in connection with the dead increased, as villagers began building entire houses on the graves of their relatives. These often lavish structures, with a coffin in the basement and regular feasting above, “so the spirit of the deceased has something to eat and drink,” had less to do with tradition than with competitive displays among neighbors and against the Party elite.76 Thus, by invoking older beliefs and practices, I am not affirming unbroken continuity; the practices may be rejuvenated, attenuated, or simply invoked in discourse. What is most important about them is that those changes or invocations refer to practices that have a history (or histories). That history makes available numerous associations derived from earlier, precommunist times, forming a broader cultural system that shapes the possibilities for present political action. Political transformation may give “traditional” ideas new urgency—for example, proper burial and harmonious relations among kin may be especially powerful politically for those living through postsocialist times that have wrought such havoc on social relations among kinsmen, owing to conflicts over property restitution (which implicates kin above all others77).
Ideas about proper burial, then, even if no longer held in a form identical to ideas from the past, enter into the penumbra of meanings that politicians and others can draw upon, alter, and intensify. These ideas and practices thereby inflect what can be done with dead-body symbolism.78 The great stability of mortuary practices, mentioned earlier, lends further credence to this claim. I have one final point to make about proper burial. The point is specific to the cases of famous dead, such as BartĂłk, the heart of Bulgaria’s Tsar Boris, and Romanian bishop Inochentie Micu (see chapter 2), who have returned from abroad. And I believe it applies to such cases not just in Eastern Europe but elsewhere as well. Even when ideas about vampires and the undead have gone out of style, one common rule about proper burial still in force is that our “sons” must be buried on “our” soil, lest we be plagued by misfortune arising from the soul’s continued distress. The notion of repossessing “our” dead is common worldwide, as is evident from customs of warfare that return dead soldiers to their home countries. (Think of the ongoing preoccupation, in U.S. politics, with MIAs from the Korean and Vietnamese wars.) In such cases reburial at home may be presented simply as a matter of proper rest for the deceased, the idea that it prevents misfortune remaining at best implicit. We see this with a home-bound skeleton of a very different sort, that of the Sioux chief Long Wolf, brought back in September 1997 from London (where he had been “stranded” for 105 years) to his ancestral burial grounds in South Dakota. One of the Sioux who traveled to London to retrieve him observed after the funeral, “It means he’s set free. He’ll be among his own people. His bones will remain with us. The spirit remains with the bones, and the bones will finally be at rest among his own.”79
Not every relic or object that moves is part of this aspect of postcolonialism, and not all bodies and objects are equally worth retrieving. The ones that are, however, are usually the bodies of persons thought to have contributed something DEAD BODIES ANIMATE THE STUDY OF POLITICS  special to their national history or culture. Adapting Greenfield, I would call them “cultural treasures.” In many parts of the world it seems to have become very important to bring “our” treasures—whether they are valued objects or physical remains—back home where they “belong.” The imagery of possession so often used inclines me to assimilate them to a worldwide concern with property rights—in this case, rights to cultural property. This argument suggests that repatriating dead bodies in the postsocialist period is part of refurbishing (and fighting over) national identities by bringing “our cultural treasures” home for a proper burial—a burial that binds people to their national territories in an orderly universe.82 These repatriations refurbish national identities by “nationalizing” symbolic capital that had entered global circuits, thus affirming the individuality of East European nation-states too long seen from without as barely distinguishable clones of international Soviet-style communism. Where the repatriates are world-famous, they may bring world respect, countering the arrogance of foreigners inclined to say, for instance, “Who would have thought that Romania, of all places, could produce cultural geniuses like Ionesco, Enescu, Eliade, and BrĂąncusži!” This outcome is especially likely where the dead person himself has requested the homecoming (usually in a will), as is true of a number of the repatriated corpses. Perhaps the more respectable image they bring thereby will help their countries to be judged “European” and, thus, worthy of EU membership. No matter whence the impetus for repatriations—from families of the deceased wanting royalties (as with BartĂłk; see introduction, note ïœłïœ·), from wills, or from governing parties hoping to consolidate a reputation as guardians of the national heritage—they draw wider notice and enhance the nation’s global image. It is as if repatriating these cultural treasures and giving them proper burial localizes part of the symbolic capital they contain, just as postsocialist economies seek to attach themselves to international circuits in ways that will enable them to hold onto some of the profits for themselves. As I suggested at the beginning of this chapter, the corporeality of dead bodies facilitates such localizing claims. Their reburial participates in reordering meaningful worlds that are simultaneously conceptual, political, and economic. 
After 1989, the ordering principles of daily life and the basic rules of the game in Soviet-bloc politics ceased to hold. The result was a high level of political conflict and disagreement as newly forming groups with vulnerable constituencies jockeyed for advantage in new political fields. An always fragile balance of political forces now underwent a profound shift, a shift so momentous that it warranted truly cosmic imagery and raised all manner of culturally deep concerns. What is the order of our world now that the Communist Party has fallen? Whom do we wish to recognize as our ancestors, now that Marx, Lenin, and local communists are out, and what genealogies do we wish to rewrite? How should we position ourselves relative to other people—who, that is, are our kin and trusted associates? How can we reset our moral compass? Who is to blame for what has happened, and how should they be punished? Trying to resolve questions of this kind is what it means to reorder meaningful worlds. I have emphasized here the following aspects of that process: endowing postsocialist politics with a sense of the sacred, working toward a new moral order, assessing blame and seeking compensation, resignifying spatial and temporal landmarks and international borders, seeking modes of national self-affirmation and of connection with ancestors. Given all this, I think it is not too much to speak of reordering worlds of meaning as what is at stake in reburying the dead.

CONCLUSION

Let me recapitulate the arguments I have been making. My aims in the book as a whole are the descriptive one of presenting some material about the political “lives” of dead bodies in Eastern Europe and the former Soviet Union, and the analytical one of showing how we might think about that material within an enchanted, enlivened sense of politics. I see dead bodies as one of many vehicles through which people in postsocialist societies reconfigure their worlds of meaning in the wake of what I (and, I believe, they) regard as a profoundly disorienting change in their surroundings. The widespread disorientation offered tremendous opportunity to people seeking power, as well; the challenge for them was to form new political arenas, invent new rules of the game, and build new political identifications, all in fierce competition with other would-be elites. None of these outcomes, however, could simply be imposed. Only an alchemy mixing new political strategies with meanings already available would produce alternative political arrangements. I have suggested that the meanings already available included ideas about kinship, history, proper burial, and national identity; about authority, morality, space, and time. All these are important sites of new meaning-creation, by means of which political opportunists and disoriented citizens alike strive to reorder their meaningful worlds; moreover, dead bodies connect with all of them. Not every theme I have raised is relevant to every politicized corpse: different themes illuminate different cases, as I try to show in my handling of the cases I discuss in chapters 2 and 3. There is no uniform interpretation of the political lives of dead bodies. My aim in this chapter has been to suggest a variety of ways for thinking about dead-body politics, to offer a loose framework for approaching examples whose details vary.
Only sometimes will we clarify the meaning of one or another case through ideas about proper burial, for example, or through looking at the multiple résumés of their lives, as in the case of Nagy. Many things make Nagy’s case unique in comparison with other reburials.83 To understand any given case, one might find it helpful to ask what in present and past contexts gives what multiplicity of meanings to the résumé of that particular corpse: How does his complex biography make him a good instrument for revising history? What in his manifold activities encourages identification from a variety of people? Answering such questions will often, but not always, elucidate why some dead bodies rather than others become useful political symbols in transitional moments. Why, you might inquire, do I go to such lengths to interpret dead bodies? Why isn’t it sufficient to see them simply as part of legitimating postsocialist polities?84 What is the payoff of all my talk about “meaningful worlds” and ancestor worship and burial practices, especially given my reluctance to see such practices as having continuity throughout communist rule? I believe I am in part discussing processes of legitimation, attempting to state more precisely what goes into them. But many of the reburials I discuss were initiated not by political leaders eager to establish new legitimacies but by humbler people hoping to rectify their worlds. Moreover, to label an event “legitimating” does not end the inquiry; it invites us to ask how that event legitimates what, and at whose initiative. In trying to explain why and how dead bodies work in postsocialist politics, I have presented legitimation as a process that employs symbols; in speaking of dead bodies as unusually ambiguous, protean symbols, I have pointed to the multiple possibilities lodged in a given corpse-qua-symbol that make it unusually effective in politics; and in discussing ancestors and burial rites, I have stressed that these symbols have histories, often deep ones, that further multiply the associations they provide as resources for creating meaning and legitimacy in moments of political contention. Thus my argument throughout this book concerns how we might think of legitimation in less rationalistic and more suitably “cosmic” terms, showing it as a rich, complex, and disputatious process of political meaning-creation—that is, as politics animated. Is anything in these processes specific to the postsocialist context, distinguishing its many instances from uses of dead bodies elsewhere? I see three ways of answering this question in the affirmative. First, although corpses can be effective political symbols anywhere, they are pressed into the service of political issues specific to a given polity. For postsocialism, this means issues such as property restitution, political pluralization, religious renewal, and national conflicts tied to building nation-states. Such issues are found in other contexts, too, but in most postsocialist ones they occur simultaneously. This is an obvious argument for the specificity of postsocialist dead bodies, but not a strong one. Second, dead bodies—inherently yoking past with present—are especially useful and effective symbols for revising the past. To be sure, political transformation often involves such revision: indeed, communist parties revised pasts extensively.
In Eastern Europe, however, rewriting history has been perhaps unusually necessary because of powerful pressures to create political identities based expressly on rejecting the immediate past. The pressures came not just from popular revulsion with communism but also from desires to persuade Western audiences to contribute the aid and investment essential to reconstruction. The revisionist histories that corpses and bones embodied were therefore central to dramatizing the end of Communist Party rule. Finally, I believe dead bodies are uncommonly lively in the former socialist bloc because of the vastness of the transformations there that make bodies worth fighting over, annexing, and resignifying. The specificity of postsocialist corpses lies in the magnitude of the change that has animated them. The axis mundi has shifted; whole fields of the past await the plowshare of revisionist pens, as well as the tears of those whose dead lie there insufficiently mourned. A change so momentous and far-reaching requires especially heavy, effective symbols, symbols such as dead bodies. I am suggesting, then, that the specificity of postsocialist dead-body politics, compared with examples from elsewhere, is a matter not of kind but of degree. The remaining two chapters treat specific cases with the tools I think best suited to them from those I have mentioned. The two chapters are organized very differently: one in the manner of a chronological narrative and the other more like a network of ideas that double back on themselves; the differences in organization are part of the message I hope to convey by the end of the book. In both chapters I strive to bring in the delights of anthropology, too often ignored in the literature on postsocialism: a respect for wide variability on a small scale; close attention to how these particulars intersect with contemporary global processes—how everyday and large-scale forces intersect in particular skeletons in the wake of communism’s collapse; and ideas about ancestors, about “proper burial,” about the cosmos, morality, and blame, about time and space, and about death and rebirth. I hope the result will demonstrate how we might enchant our sense of the political and enliven our understanding of politics in the postsocialist world.
What is an icon? Icons are symbolic condensations (Freud, 1949: 51). They root generic, social meanings in a specific and ‘material’ form. They allow the abstraction of morality to be subsumed, to be made invisible, by aesthetic shape. Meaning is made iconically visible as something beautiful, sublime, ugly, even as the banal appearance of mundane ‘material life’.
Iconic consciousness occurs when an aesthetically shaped materiality signifies social value. Contact with this aesthetic surface, whether by sight, smell, taste, sound or touch, provides a sensual experience that transmits meaning. The iconic is about experience, not communication. To be iconically conscious is to understand without knowing, or at least without knowing that one knows. It is to understand by feeling, by contact, by the ‘evidence of the senses’ rather than the mind.
Iconicity depends on feeling consciousness. George Herbert Mead once wrote that the ‘content of consciousness is feeling’. He described this as the ‘fund of unexplored social organization which enables us to act more surely’, pointing to its nondiscursive quality as allowing subjectivity to mediate impersonal modernity. ‘We go to strange cities and move about unknown men’, he suggested, ‘without perhaps presenting to ourselves the ideas of one of them, and yet’, he continued, we ‘successfully recognize and respond to each attitude and gesture which our passing intercourse involves’ (Mead, 2001: 67). Mead protests that such a feeling consciousness ‘is not sensuous’, but he protests too much, betraying how deeply resistant modern moralists are to the aesthetic moment in modern life. For Mead social feelings can only be located in ‘mind’, not in the feelings of the heart or the sensations of the body. These are best left not to social theorists and philosophers, but to aesthetes.
The surface, or form, of a material object is a magnet, a vacuum cleaner that sucks the feeling viewer into meaning. For thinkers who do concern themselves with feeling consciousness, these surfaces, in their beauty, sublimity, or ugly banality, are themselves the principal objects of fascination. Such thinkers resist the interplay between surface and depth, ignoring how aesthetic surfaces allow transitions to social meaning.
With icons, the signified (an idea) is made material (a thing). The signified is no longer only in the mind, something thought of, but something experienced, something felt, in the heart and the body. The idea becomes an object in time and space, a thing. More precisely, it seems to be a thing. For, as aesthetic shapes, things are the middles of semiotic process. Insofar as the thing becomes invested with social meaning, it becomes archetypical. As something, it is transformed into a signifier, setting off a semiosis that subsumes every thing into meaning and every meaning into thing.
The Status of the Material
The theory of iconic consciousness poses itself resolutely against the materialism that continues to pervade modern thought, in the highest realms and in the everyday. Materialism reduces materiality to things, ignoring the aesthetic construction of material surfaces and their experience via feeling consciousness. This reduction is deeply rooted in the relentless utilitarianism of everyday life, which insists on the concrete, on the practical, efficient, and useful. The counterparts in theoretical reflection to such everyday consciousness are concepts such as realism, practice, information, utility, cost and benefit, cognition and truth. It is not easy to dislodge such deeply misguided yet socially productive beliefs, but the effort must be made.
Even as we are ruthlessly critical of materialism, however, we should learn to be energetically enthusiastic about materiality. For the 20th century, understanding nonmaterial structures of meaning was an extraordinary accomplishment. Resisting the hegemony of modern practical consciousness, Durkheim initiated the project of analytically separating meaning from social structure. To give culture its autonomy is to learn to recognize, with Ricoeur, the yearnings of the soul, and, with Dilthey, the continuing vitality of the spirit (Dilthey, 1976; Durkheim, 1911; Ricoeur, 1976). Today, at the beginning of the 21st century, we must try to understand how meaning, soul, and spirit manifest themselves through materiality.
Saussure rightly insisted that the sound of language, in itself, carries no meaning. How sound connects with concepts is arbitrary. Pure sound is only a signifier; its meaning is determined by internally organized signifieds, self-regulating relations of concepts. But this insight should not obscure the significance of sound. Words, after all, are sounds of meaning. Phonetics matters. It also has autonomy. The science that Jakobson called poetics concerns the internal sounds and rhythms of speaking and hearing, and how they affect the construal of meaning. We must be able not only to think but to hear and feel speech – to make music.1 Otherwise, we would not have these rather ugly sense organs sticking out on either side of our head!
There is more than mind. The meanings of the things we see are invisible to the naked eye, but the visual is not unimportant for that. Can we ignore the sensuousness of sight, the patterns of line, curve, and symmetry, the shadings of light and dark, the vividness of color? The textures of touch, the odors of smell, the compulsions of taste? The evolution of the humanoid brain’s neo-cortex enabled extended memory and reflexive thought, the ability to think and interpret that set off the human race. But the other mid- and hindbrains remain, and so does the autonomic system. We retain our more primitive capacities, though these five senses may, in some part, be less developed in human beings. We are human, but, as Nietzsche suggests, we are also ‘all too human’.
After inventing the realist philosophy of science, Rom Harré has turned his back upon it, condemning its materialism as a reduction that overlooks the invisible strands of meaning that mark not only science but even the supposed materialism of economic life (Harré, 2002).2 Harré calls ‘stuff’ the objects that occupy space and time, denying, now, that such merely material things can act in an independent way. Every piece of stuff belongs to a category, ‘an ephemeral attribute of a flow of symbolic interactions among active people competent in the conventions of a certain cultural milieu’. A material object ‘is transformed from a piece of stuff into a social object’, Harré asserts, only ‘by its embedment in a narrative’. It is by such ‘narrative binding’ that ‘bits of coloured cloth become flags [and] clothes become uniforms’.
All this is deeply true, providing new and powerful ammunition against the obsessively practical, realist consciousness of modern thought and times. Still, I wonder whether these materialities are, in fact, merely what Rom Harré calls ‘affordances’ that ‘constrain the uses to which such things can be put in the local narratives’? The sensuous surface of things seems more important than simply a means to the end of meaning. Is it not the sensuous surface of stuff that allows us to see, hear, and touch their narrative bindings? For Harré, the geography of the Nile valley was merely an ‘indirect source’ of Pharaonic social order. It could not be the direct source, he insists, because cultural structures have autonomy. I would suggest, to the contrary, that it was the overflowing physicality of the Nile that allowed the complex metaphysics of Pharaonic Egypt to be sensuously experienced. Culture gives material things ‘magic powers’, Harré believes, which are ‘not an effect of the physical properties of the thing’. Yet is not materiality at the center of enchantment? Is it not the ‘illusion’ that physical things do, in fact, have character and agency that makes their symbolic power seem magical and extraordinary rather than real and mundane? Stuff matters. What would Mozart’s opera have been without its magic flute? Without seeing it, hearing it, knowing it was always there?
To recover the material is not to recover the thing-in-itself, but, rather, the texture of a thing’s aesthetic surface, for it is through aesthetic surface that things are experienced. The philosophical case against idealism has it that there is a something implacable about a chair. We cannot wish the chair away or walk through it. It exists in time and space. But is it not this particular chair we see, touch, and feel, and that remains in our minds as a materiality? The chair is not a chair as such. It is a particularly formed chair.
The Status of the Aesthetic
The theory of iconic consciousness recovers the aesthetic within everyday life, against the notion that art either is, or must be, radically separated if rationality or morality is to be sustained. That it must be separated is famously taken to have been Kant’s argument about the normative structure of modernity: when we are in the world of the beautiful or the sublime, we can sustain neither the distance nor the disinterest demanded by objectivity.3 Universalism, whether in scientific or moral criticism, depends on the view from nowhere (Nagel, 1987). The empirical possibility for sustaining such distantiation is undermined if actors ‘fall’ into the objects they observe (because they feel them); if, when they face putatively separated objects, actors feel as if they are not separated from them, but that the objects are becoming subjectified themselves.
Weber made Kant historical with his argument for disenchantment. Under the conditions of moral, religious, and technological rationalization, every value sphere, including the aesthetic, is sliced and diced, cast out on its own. Magic has forsaken the modern world. But if there is iconic consciousness, then, while totemism may have been transformed and radically pluralized, it has hardly been effaced. Bits of stuff still seem magical, and not only because they are placed inside of stories. The material surfaces of things are experienced aesthetically. It is materiality that allows feeling consciousness to be connected to things.
The argument for feeling consciousness, for a cultural materiality, creates middle ground between Derrida and the romantic early Marx, who wistfully spoke of the ‘sensuous object’ overcoming the materialism that marked alienation in capitalist society. Attacking the philosophy of presence, Derrida pointed to absence, demoting the visible and material to the status of signifiers linked to invisible signifieds. But if presence can indeed be known only in relation to absence, how else can absence be known except by experiencing presence? In Bill Brown’s thing philosophy, he protests Derrida’s absence, arguing on behalf of a ‘sensuous or metaphysical presence’. Martin Seel likewise insists on the importance of ‘appearing’. If poststructuralism demands contextualization, Seel writes, then the aesthetic creates decontextualization, an effect of appearance (Brown, 2001; Seel, 2005).
These briefs for an aesthetics of things are powerful, but they are also one-sided. They develop not just an argument for aesthetic recovery but for aesthetic redemption, not just for the aesthetic but for aestheticism. They demand not just for the aesthetic surface to be given full citizenship alongside the moral depth, but for the aesthetic as an alternative vision. In doing so, they paradoxically reinstate the separate spheres argument they are so fervently fighting against. Standing firmly on the ground of this division, Martin Jay (2003) warns against bringing the aesthetic back into everyday life. Haunted, like every Habermasian exponent of critical theory, by the specters of Heidegger and Nazism, Jay associates aesthetic consciousness with reaction and irrationality.
Hans Ulrich Gumbrecht is Jay’s perfect foil. Severely rejecting Kant and joyfully embracing Heidegger, Gumbrecht condemns ‘meaning effects’ as ratiocinative and conceptual. He champions ‘presence effects’, not only because they reveal the granular texture of materiality, but because they provoke a ‘crisis’ vis-à-vis the ugly and routine banality of the modern world. To experience the aesthetic corporeality of things allows ‘intimate’ feelings that are normally ‘inaccessible to us’. Reconnected to the ‘ground’ of earth, we experience the ‘unconcealment of Being’, beyond doing and having. For Gumbrecht, the aesthetic is a defamiliarization process. It is Proust’s hypnotically arresting madeleine, the awesome gates to Shinto temples, the shockingly ‘beautiful run, pitch, throw, or jump’ that creates the ‘moment of intensity’ in the midst of a game, the ‘special feeling’ that allows us to step outside a merely instrumental or ‘interested’ position (Gumbrecht, 2006a, 2006b).
These contemporary longings are, contra Jay, not dangerous. Gumbrecht and his fellow postmodern aesthetes carefully acknowledge the competing worlds of democracy, law, and the morally abstract. The problem is empirical, not moral. Even as surface and depth must be analytically separated, they need also to be empirically intertwined. Presence and absence may inform antagonistic philosophical perspectives, but they are not antithetical in the empirical sense. The thrills and fears experienced by feeling consciousness are not the product of aesthetic surface alone; they are informed by the meaning structures that lie beneath. Gumbrecht asks whether the aesthetic is ‘a switch between different actual frames’ or ‘a switch towards the awareness of a pre-existing frame’s character’. We would answer that it is the latter: it is conscious awareness that changes, not the actual frame. Aesthetic experience is always there, even when we don’t focus upon it. So is the moral experience that it conceals and makes visible at the same time, even if we are not morally self-conscious in any way.
It is the purpose of art to make the aesthetic dimension explicit, to bring it into our conscious minds so that we experience it knowingly and reflect upon it. The availability of such specifically ‘aesthetic’ experience is limited. Not everybody can stand before Giacometti’s Standing Woman and know what they are seeing. It takes an artistic education. But if the experience of art is limited, the surface experience of aesthetic things is not. Men, or boys for that matter, don’t need an artistic education to skillfully rank passing women on the proverbial ten-point scale. Nor do modern women have any problem evaluating the hotness of some guy.
In both everyday aesthetics and high art there is the same interplay of the unique and the general, the contingent and the a priori. Surfaces are specific and idiosyncratic in their object reference. We see this fashion model, not some other one. That we see this ‘hotty’ and that ‘babe’, and not any others, is, of course, the very point of such designations. At the same time, such aesthetic representations are generic, connecting us to shared meanings, to culture structures. This model is a specific type of fashion model, a version, a specification of the more general form; so are the sexy man and woman particular examples of their species.
An object’s aesthetic power inserts the general into the specific, making the abstract concrete in a compelling and original way. In high culture, this is the challenge for the artist. In the world of the everyday, it is the challenge for the designer, and also for the lay person, the bricoleur who assembles his or her objects, laying them out or putting them on, as in DIY, or ‘do it yourself’.
Consider the following conversation between two women who encounter each other on the street outside a salon:
‘I love the way you’ve done your hair.’
‘I like this new style, don’t you?’
‘You’ve done a great job with it.’
‘I found a new product.’
‘Where did you find it?’
‘In Elle, and there it was, in the front window of the salon.’
‘Well, your hair looks good. That product’s special. I’ve never seen that color in your hair. You look amazing!’

If the artist gets the combination right, his object becomes great art. It can be hung in any home or museum, anywhere, at any time. It has achieved a ‘surplus of meaning’; uniquely compelling in the here and now, it can also be compelling in the later and faraway.4 When the designer and lay person get it right, their surface assemblages draw us into the discourse of society. It is not just the form that excites, but the experiences of meaning that forms carry.5
The Status of the Moral
It is difficult to get surface and depth right, to embrace aesthetic sensation without antagonism to structured meaning, to give culture its autonomy without sloughing off material form. If aestheticism exhibits the first fallacy, moralism manifests the second.
Emile Durkheim was the founding father, with Max Weber, of cultural social science, though, as we have seen, he sharply disagreed with the German thinker’s idea that a radical epistemological break marked the transition from tradition to modernity. Durkheim devoted his major cultural work, The Elementary Forms of Religious Life (1911), to examining the symbolic classifications and rituals of ancient totemic religion, but he avowed this research offered profound insights into the secular symbolic forces of the present day. Symbols of the sacred-good and profane-evil, he asserted, continue to structure modern life, providing the moral glue that informs collective rituals and sustains social solidarity.
As I have mentioned earlier in this essay, these late-Durkheimian ideas inspired the outpouring of social semiotic and cultural-sociological research that increasingly marked 20th-century intellectual life. Only rarely, however, has this line of thinking reached into the realm of material things. The reasons have been, in some part, the topic of this essay. They have to do with the empirical and philosophical ambiguities of materialism and ideality, and, indeed, of the very notion of modernity itself. These ambiguities and limitations were manifest in Durkheim’s own writings. He generally resisted exploring the relation between ‘religion’ and ‘material’ life. He tended to write off primitive economic activity as nonsocial and the modern as egotistical. Almost always, he wanted to get beyond what he regarded as the merely visible, material shell of things to the invisible, the spiritual and moral kernel underneath.
This makes it all the more important to recover a brief moment in Durkheim’s later writing that can be read in a strikingly different way. At one of the axial points in Elementary Forms, while addressing the origins of mana – the spiritual force manifested in sacred totems – Durkheim becomes remarkably interested in the totem as material form. He takes notice of how the wooden surfaces of totems are formed and shaped, observing that ‘totemism places figurative representations in the first rank of the things it considers sacred’ (1911: 190). When he describes ‘what the totem amounts to’, he emphasizes ‘the tangible form in which that intangible substance [mana] is represented’ (p. 201). Throughout this discussion, Durkheim writes of ‘material substances’ (p. 204) and ‘tangible intermediaries’ (p. 232). Because morality is abstract, and can be ‘imagined only with difficulty’, we can ‘comprehend’ spiritual feelings ‘only in connection with a concrete object’ (p. 232). Material culture is particularly marked in totemism. When the Crow people affirm that they ‘are crows’, it is because they believe that mana has the ‘outward form of the crow’ (p. 201). It is only this outward form of the totem that ‘is available to the senses’ (p. 222). They ‘attach themselves’ to this ‘concrete object’, and display it everywhere, ‘engraved on the cult implements, on the sides of the rocks, on shields’ (p. 222).
In these passages on totemism, Durkheim opens the door to culture in its material form. ‘Emblematizing’ is easier, he writes, when symbols are ‘inscribed on things that are durable’ (p. 232). The transfer between moral depth and material surface ‘is much more complete and more pronounced’, he suggests, ‘whenever the symbol is something simple [and] well-defined’ (p. 222). Durable, simple, and well-defined – this is as close as Durkheim gets to exploring the aesthetic construction of surface, the material texture of feeling that allows deep meaning structures to be experienced in a sensuous way. While Durkheim is clearly aware of feeling consciousness and aesthetic surface, it is also clear that he has little understanding of how they actually work. He has opened the door, but he has barely stepped inside.
If Durkheim had stepped inside, he would have discovered that philosophers of aesthetics had already decorated the room. In 1735, when Alexander Baumgarten created this new branch of philosophy, he defined it as ‘a science of how things are to be known by the senses’ (cited in Guyer, 2005: 3). In the preceding decades there had been increasing excitement about the artistic, as not only the realm of the beautiful but the sublime. The notion of the sublime had been around since the Roman Longinus, and in the 17th century Boileau applied it to the lofty style of rhetoric and poetry. British thinkers, however, took the idea in a new direction, emphasizing the wild, the emotive, and the darkly transcendental as an antidote to the suffocating restrictions of French neoclassicism.6 This expansion triggered a new interest in the sensuous pleasures provided by contact with forms. For Shaftesbury, ‘the beautiful, the fair, the comely, were never in the matter but in the art and design, never in the body itself but in the form of forming power’ (Cooper, 1999, cited in Guyer, 2005: 11). For Hutcheson, the experience of form is ‘justly called a Sense’ because ‘pleasure is different from any Knowledge of Principles, Proportions, Causes, or the Usefulness of the Object’ (1973, cited in Guyer, 2005: 23).
It was vital for these new aesthetic philosophers to connect sensible form to moral depth. They argued that aesthetic feeling is binary and that ‘beautiful’ and ‘sublime’ provide sensual homologies with moral ideas. In fact, these aesthetic sensibilities are often presented as moral expressions themselves. Forms are beautiful when lines, shapes, colors, and light are pleasing and attractive, the ‘qualities in bodies’, as Burke writes, that ‘cause love, or some passion similar to it’ (1990: 83). The aesthetic sense of the beautiful, in other words, calls out moral feelings for the sacred-good. Its moral antithesis, the evil-profane, is animated by the aesthetic sublime, which Burke describes as ‘whatever [is] fitted in any sort to excite the ideas of pain, and danger [or] is in any sort terrible’ (p. 36). The beautiful is about romance and sympathy, the sublime about tragedy and deceit. The beautiful is small, quiet, soft, round, and proportionate; the sublime is vast, loud, hard, angular, and unbalanced.7 Every moral binary is attached to a pairing in aesthetic life.
With his critical investigations at the end of the 18th century, Kant is supposed to have straightened all this aesthetic sentimentality out, to have separated sharply, and once and for all, the sense of form from the substantive commitments of reason and morality, finally giving to each independent sphere what it is due. What Kant actually seems to have done, however, is quite different. He defined the aesthetic in such a distinctive and particular manner that it could be closely rewound with the rational and moral again.8
Kant does, of course, emphasize that what pleases the senses is pure form, that shapes signify nothing by themselves, that the ‘determining ground is the feeling of the subject and not a concept of an object’ (2000: section 17:116). If this were not the case, if forms were actually dependent on a determinate concept, then the significance of the aesthetic, its independent function, would be greatly reduced. Then, the meaning of the artistic object could be known before it was experienced, and the very point of experiencing would be lost. It is precisely because it is not, in fact, regulated by a determinate concept that the aesthetic imagination involves free play.
But to freely experience forms as pleasurable, Kant is at pains also to suggest, is to recall the self-determining autonomy that distinguishes judgments of a different, more rational and moral kind. For this reason, aesthetic judgment actually allows us to experience core features of these other domains, and in a powerful manner that they could never have articulated themselves. It is the quality of avoiding determination by rational thought or moral understanding, not absolute dissociation from them, that makes an experience aesthetic, the very freedom from a priori determination that, subsequent to the aesthetic experience, allows greater conceptual and moral development in turn. In Anthropology from a Pragmatic Point of View, Kant puts the connection very simply:
‘The entire use of the beautiful arts is that they present moral propositions of reason in their full glory and powerfully support them’ (cited in Guyer, 2005: 181).
The aesthetic-cum-moral binary of beautiful and sublime has continued to inform the philosophy of aesthetics pretty much up to the present day.9 True, as Arthur Danto emphatically suggests, the emergence of such 20th-century practices as abstract, surreal, pop, banal, conceptual, and performative art has demonstrated that beautiful and sublime do not capture the extraordinary range of possible aesthetic products.10 Yet, contra Danto, these binary categories continue to provide the fundamental categories of sensuous experience, as either homologies or antinomies of moral evaluation, even as the referents of that experience radically change.11
What is more challenging to this understanding of high art is the status of the aesthetic everyday. How can morality and rationality be connected with the aesthetic amidst the conventions and typifications that mark mundane experience, where there is neither the free play of sensuous interpretation nor the ascetic autonomy of self-determination?
I have addressed this question in earlier sections of this essay, and other writings as well (see Alexander, 2008). The challenge I would like to address here, however, is whether an answer can be provided in the context of aesthetic philosophy itself. One might, for example, have recourse to Schopenhauer’s (1966) anti-Kantian brief for contemplation over reason, his stoic suggestion that only a ‘will-less knowing’ can slough off the burdens of reason and individuality that distort and alienate the modern world.
Yet, while Schopenhauer is certainly right to suggest that an aesthetic attitude can permeate modern life, his world-rejecting aestheticism misses how central aesthetic experience is to everyday moral and cognitive modes. We do not need to give up on self, reason, morality, or society to gain access to mundane sensuous experience. It is already there. Everyday experience is iconic, which means that self, reason, morality, and society are continuously defined in aesthetic, deeply experiential ways.12
For a powerful example of just how this trick is turned, we can, in fact, return to Kant, not to his systematic late treatise but to Observations on the Feeling of the Beautiful and the Sublime, the precritical work he published 25 years before. Kant here confronts the aesthetic as, in his words, an ‘observer’ rather than a ‘philosopher’. In a casual and lively style that addresses aesthetic representations of such everyday matters as sex, gender, nation, civilization, and race, the essay reveals how conventional morality is enabled by aesthetic experience and legitimated by the binary discourse of the beautiful and the sublime. Rather than analyzing how the sensual surface elides moral depth, this early Kantian discourse exemplifies it, and sometimes in altogether disturbing ways.
As in the later work, in this youthful writing Kant also pays homage to sense experience as independent and significant, asserting ‘it does not matter so much what the understanding comprehends, but what the feeling senses’ (1960: 72, emphasis added). Confronting Edmund Burke, whose Philosophical Enquiry had appeared only a few years before, Kant declares his ambition, in the very first sentence of Observations, to relate the binaries of aesthetic experience to actors’ subjective ‘dispositions to be moved’ rather than to ‘the nature of external things’ (p. 45). His topic is to be ‘the feeling of the sublime and that of the beautiful’ (p. 46), not the nature of beautiful or sublime objects themselves.
At first, Kant seems faithful to this promise: ‘The sight of a mountain whose snow-covered peak rises above the clouds, the description of a raging storm, or Milton’s portrayal of the infernal kingdom’, he writes, ‘arouse enjoyment but with horror; on the other hand, the sight of flower-strewn meadows, valleys with winding brooks and covered with grazing flocks, the description of Elysium, or Homer’s portrayal of the girdle of Venus, also occasion a pleasant sensation but one that is joyous and smiling’ (p. 47, emphasis added). In this passage, active constructions of everyday sense perception trigger artistic portrayals of that experience, which make use of the categories of beautiful and sublime. Immediately after this, however, Kant loses his way, presenting beautiful and sublime as actual characteristics of objects themselves. He discovers the structural qualities of the aesthetic and connects them to moral forms, but he does so in an essentializing way.
Tall oaks and lonely shadows in a sacred grove are sublime; flower beds, low hedges and trees trimmed in figures are beautiful. Night is sublime, day is beautiful [and] the shining day stimulates busy fervor and a feeling of gaiety. The sublime moves, the beautiful charms. The mien of a man who is undergoing the full feeling of the sublime is earnest, sometimes rigid and astonished. On the other hand the lively sensation of the beautiful proclaims itself through shining cheerfulness in the eyes, through smiling features, and often through audible mirth. (p. 47, emphasis added)
The moral connection is maintained, but the autonomy of its aesthetic construction has disappeared. Aesthetic form is reduced to a reflection of moral quality. ‘Sublime attributes stimulate esteem’, Kant writes, ‘but beautiful ones, love’ (p. 51). ‘Friendship has mainly the character of the sublime’, he maintains, while ‘love between the sexes, that of the beautiful’ (p. 52). The aesthetic surface now has the effect simply of naturalizing moral qualities. It allows them to be experienced sensuously, to be felt as if they were physical, real, and true: ‘Dark coloring and black eyes are more closely related to the sublime, blue eyes and blonde coloring to the beautiful’ (p. 54).
What’s so interesting about this reduction of the aesthetic to the moral is that it provides a classical demonstration of essentialism, of how surface and depth are intertwined in everyday social life, not only in Kant’s times but in our own today. The ideal-typical representation of this everyday essentialism – and from our contemporary point of view, its moral and political nadir – is the confident manner in which Kant employs surface/depth to reproduce the gender and racial stereotypes of his day. He waxes eloquently about how the moral qualities of women allow them to be ‘known by the mark of the beautiful’ – ‘her figure in general is finer, her features more delicate and gentler, and her mien more engaging and more expressive’ (p. 76, emphasis added). The binary qualities of the aesthetic, in other words, are here discovered as ingrained moral qualities. Women, Kant writes, naturally ‘prefer the beautiful to the useful’, a ‘strong inborn feeling for all that is beautiful’. From ‘very early they have a modest manner about themselves [and] know how to give themselves a fine demeanor ... at an age when our well-bred male youth is still unruly, clumsy, and confused’ (p. 77). Affirming that ‘the moral composition makes itself discernible in the mien or facial features’, Kant declares ‘she whose features show qualities of beauty is agreeable’ and ‘in her face she portrays a tender feeling and a benevolent heart’ (p. 87, original emphasis). That women are thought beautiful is not due to aesthetic and moral convention. They are beautiful because they are, well, women! Kant jokes that if a woman goes against her nature, trying to appropriate the ‘diligent, fundamental, and deep understanding’ of men, then she ‘might as well even have a beard’ (p. 78).
But there is something serious at stake. If the physical is a sure sign of the moral underneath, then not only gender but racial profiling is naturally the order of the day. Kant makes the extraordinary claim that, outside of Europe, the ability to identify the beautiful with the feminine does get lost. ‘If we examine the relations of the sexes in [other] parts of the world’, he declares, ‘we find that the European alone has found the secret of decorating with so many flowers the sensual charm’. In contrast with the European man’s ‘very decorous’ construction of women, ‘the inhabitant of the Orient is of a very false taste’. Because he has ‘no sense of the morally beautiful’, the Oriental ‘thrives on all sorts of amorous grotesqueries’. Kant asks: ‘In the land of the black, what better can one expect?’ (pp. 112–13).
This simple question reveals the normative risk in the interpenetration of aesthetic surface and moral depth. On the one hand, as critical thinkers we must beware of assuming that a ‘look’ naturally expresses anything. On the other hand, even if we now clearly understood that it does not, iconic consciousness inevitably makes it seem that way.
The Status of the Real
It has been in order to confront this moral and political ambiguity that modern critical thinkers have asserted the primacy of the real. One way of combating the moralistic fallacy, on the one hand, and the aesthetic fallacy, on the other, has been to declare that icons are neither. It is to say that they are real. This is not Kant’s modernity but an empiricist one, a prototypically modern alternative to moralism and aestheticism that begins with Locke and reaches its take-off point in the 19th century, with the birth of photography and the emergence and simultaneous self-critique of capitalism in its industrial form. The dialectical relationship between surface and depth is here supplanted not by disbalance but by displacement. Scientific truth can be substituted for moral and aesthetic claims. Neither is now necessary, for we now have access to the thing in itself. It is this realist claim that lurks beneath Peirce’s theory of iconic as compared with symbolic meaning and, in the 20th century, the persistent claim for the denotative rather than connotative status, not only of photography but film, as in André Bazin’s argument that the ‘ontology’ of cinema is realism (Peirce, 1931–58).13
It was in much the same spirit that Marx argued for the fetishistic character of commodities in capitalist societies. The product is not valued for its use but for the possessive desire it stimulates, a desire fed by wish-fulfilling fantasy and hope. Fetishism camouflages the ‘real’ meaning of commodities, Marx insisted, a meaning which is actually exchange value and, more deeply, the exploitative relations of production.
Only after critical social science discovers this reality can a new economy be established that will produce goods only according to their use value. In the last two decades these claims have been empirically confronted by powerful investigations demonstrating how capitalism actually sustains ‘decommodification’, for example: Kopytoff’s (1986) demonstration of singularization; Miller’s (1987, 1998) research on shopping as gift-giving; Campbell’s (1987) historical archeology linking consumerism to romanticism and hedonism; and the ethnographies of Csikszentmihalyi and Rochberg-Halton (1981: 55–89), and Woodward (2003), documenting the noninstrumental meanings that attach to things in the home.
The Status of the Spiritual
This realist critique of surface/depth has the unintended effect of seeming to give credence to materialism as an anti-aesthetic and anti-moral form, whether through social realism, social engineering, or socialism. A more far-reaching strain of this critical tradition attacks the very orientation to the material object itself. The repulsion for ‘indulgent’ materialism, for putting faith in external objects, for ‘mindless’ consumption or, indeed, for consumption tout court has permeated the axial age civilizations, motivating a demand for world-withdrawal, whether in an ascetic or a mystical form. The claim is that man loses himself when he makes idols, that humans must seek the divine not in material forms but in the abstract spirit, the only pathway by which they will find the divine, not outside, but within themselves.
To make the iconic the enemy of the spirit is to engage not in iconicism but in iconoclasm, the breaking of idols, a practice that extends from the ancient Jews to the Puritans who made the modern world. William Mitchell has suggested that it was Charles de Brosses’s Du culte des dieux fétiches that introduced the horror of the fetish into Western accounts of primitive totem religion. Totemism was ‘more ancient than idolatry properly so called’, de Brosses asserted, because it was the most ‘savage and coarse, worshipping stones, vegetables, and animals’. Fetish-worshippers were people in whom ‘the memory of Divine Revelations’ had been ‘entirely extinguished’ (Mitchell, 1986: 190, 193). How far is this from the anti-consumption movement today?
ATTUNING TO THE CHEMOSPHERE
In this article I argue that these affective processes of attending to the minute aberrations of the body and atmosphere are the primary means of discerning protracted and low-level encounters with domestic chemicals. Further, the tracking of small changes to body and atmosphere across time and space can accumulate into a process I call the “chemical sublime,” which elevates minor enfeebling encounters into events that stir ethical consideration and potential intervention. The chemical sublime is both an experience and a practice that emerges out of late industrial material ecologies, one that inverts dominant conceptions of the sublime that hang heavy with Enlightenment-era baggage. In contrast to the long-prevailing formulation, the chemical sublime does not quell spectacular material threats with the transcendence of immaterial reason, thereby affirming human distinction and existing social orders. Rather, in the process I document, indistinct and distributed harms are sublimated into an embodied apprehension of human vulnerability to and entanglements with ordinary toxicity, provoking reflection, disquiet, and contestation.
At room temperature, the formaldehyde-based adhesives that hold together the plywood walls, particleboard subfloors, hardboard cabinetry, and carpet backings of the average American home slowly exhale chemical vapors into interior breathing space. Without a cracked window, an opened door, or other forms of air exchange, these silent and invisible microemissions accrue within the envelope of the home. Houseplants slowly filter out a fraction of the ambient chemical load as they absorb toxicants and assimilate benign formaldehyde metabolites into regular cellular function. A host of microorganisms that inhabit the soil surrounding plant roots avail themselves of formaldehyde vapors as a source of life-sustaining carbon (Kim et al. 2008).
The respiration of avian, feline, canine, and human inhabitants also removes formaldehyde from the air. Yet as formaldehyde vapors enter these bodies they are absorbed by the mucus membranes of the nasopharynx and lungs, bind to DNA and proteins, disrupt cellular functions, and are quickly dismantled. In the process of metabolism formic acid is produced, yielding the possibility of acid-base imbalance and a range of systemic effects (ATSDR 2014). These slight biochemical impressions, which at first appear simply meaningless or puzzling, accumulate in the bodies of the exposed and reorient them to the molecular constituents of the air and the domestic infrastructure from which such chemicals emanate. It is through the articulation of these small corrosive happenings that residents reckon with how their homes are decomposing into them as they decompose in their homes.
The somatic work of the chemically concerned is enmeshed with an apprehension of their own bodies that is simultaneously sensuous and epistemological, referred to herein as “bodily knowledge” and situated within a process of “bodily reasoning” that tempers not just what one knows but what one becomes with or is estranged from. Sustained bodily reasoning gives rise to the chemical sublime, and together they offer a response to Kim Fortun’s (2012) call for ways to differently know and reimagine our ongoing late industrial present, which is marked by deteriorating sociotechnical systems and economic, climatic, and infrastructural instability.
The domestically exposed attune to their own effects and affects as a means of further discerning the barely perceptible constituents of their environment. This is not a practice confined to the “deviant agents” of those afflicted by multiple chemical sensitivity (Alaimo 2010, chapter 5; Murphy 2006, 173; Kroll-Smith and Floyd 1997, 10) or of those with diagnosed pathophysiologies like asthma. Rather, these molecular and relational appreciations arise from a somatic susceptibility and epistemic capacity common to human life—and often informed by nonhuman life.1 By definition toxics bear “a potency that can directly implicate the vulnerability of a living body” (Chen 2012, 203), and it is by virtue of this very capacity to be chemically wounded, even minutely so, that bodies bear revelatory power.
This article unfolds across increasing durations of atmospheric formaldehyde exposure. The tip of the iceberg is my own encounter with exposures in the field. Much of this ethnography was conducted through the haze of indoor-air-quality-induced befuddlement. During the first hour spent in houses with suspected indoor air-quality issues, I would slowly develop an ache in the back of my eyes, which would with time spread throughout my skull. I repeatedly found myself struggling to resist a physical desire to expedite interviews as my mind felt increasingly woolly, my focus slipped, and my lines of inquiry lost their direction. Time and the flow of my thoughts became viscous.2 My energy would bottom out, but my eventual sleep was wracked with restlessness.3
The spaces to which I was supposed to be most attuned were the spaces in which I felt most cognitively unhinged. Yet as much as ethnography is “a method of being at risk in the face of practices and discourses into which one inquires” (Haraway 1997, 190), it is also a method of understanding how sheltered the ethnographer is even within such exposures. A molecule of formaldehyde does not strike my lungs in the same way it does those who have endured months or years of exposure—for whom its effects are biochemically magnified and semiotically enflamed. While my exposures may have intimated the costs of apprehending chemical others, my impairments proved ephemeral and the stakes of my somatic cognizance comparatively negligible. To indulge in a “radical empirical” impulse (Jackson 1989), to gesture toward the evidentiary potential of my own body, would be to distract from all of the privileges of research that make my own exposure anomalous within the highly patterned landscape of domestic exposures across the United States. Almost all of my work took place in manufactured homes, a mainstay of low-to-moderate-income homeownership, which harbor four times the ambient formaldehyde of conventional site-built homes (COEHHA 2001).
My interlocutors who resided in factory-built housing could be variously classified as elderly, poor, disabled, tenuously employed, or Native. In these cases formaldehyde concentrations were both indicators and agents of social abandonment and precarity. As will become evident in the first ethnographic section, new homes, newly renovated homes, and tightly sealed so-called green homes also cultivate elevated formaldehyde levels, as the biopolitical circuits that expose some in the name of sheltering others are not without their leakages (Murphy 2006, 111).
I begin by situating this article in the space between theoretical work on affect and phenomenological studies of environmental exposures. In the following section I unfold the specific affects of a domestic chemical assessment scientist, an analysis that contributes to a growing literature on the body as part of the existential, pedagogical, and ethical grounds of cultures of science (Masco 2004; Myers 2008; Helmreich 2009). My purview then widens to discuss the larger sensorium of corporeal domestic air-quality perception and the instrumental use of sensitized bodies to identify the sources of domestic chemical exposure. Across authoritative and questioned bodies, companion species and humans, I ask: In what ways do diffuse sensory practices generate knowledge of, attention to, and engagements with everyday materials? How can expanding the avenues and temporality of sensing yield an appreciation of what many of us are abbreviating from our own sense of the world?
ATTUNING TO THE CHEMOSPHERE
Formaldehyde is a nearly ubiquitous chemical in the domestic environment. It seeps from the very engineered woods that give much of contemporary domestic space its comfort, security, and affordability. The chemical holds the colors of upholstered furniture, adds strength to insulation, and enhances the texture of cosmetics in addition to its less deliberate environmental presence as a residue of incomplete combustion (from automobiles to cigarettes). The substance suffuses the economy to such an extent that an industry trade association asserts, “the production and use of formaldehyde accounts for five percent of the U.S. gross national product—about $500 billion per year” (ACC 2013). Just as in the case of major financial institutions, the chemical’s bonds are so diverse and far-reaching that the potential toxicity of formaldehyde is too big to face head on. It is not only the practical and procedural conventions of science that yield difficulties in capturing the harms of chronic low-level exposure. Governmental regulators, stakeholders in chemical economies, and unwitting discursive allies—such as those advancing the pharmaceuticalization of environmental illness etiologies—also actively unknow its injury through a protean array of technical, methodological, and legal maneuvers (Shapiro 2014).
Formaldehyde is not only synthesized at industrial scales; trace amounts of the chemical, as a metabolic by-product, are produced on a cellular level by all organic life forms. Formaldehyde’s presence in late industrial domestic ecologies is neither reducible to a natural and endogenous element of carbon-based life, as industry would have it, nor is it an absolute toxin—a completely alien agent leached from modernity’s amenities and trespassing into virgin bodies. Thus bodily knowledge of ambient formaldehyde concentrations translates into recognition of a substance that is always already part of the chemical makeup of bodies, but whose specific concentrations indicate how desires for shelter, the solutions to housing demand posed by industrial capitalism,4 and toxic atmospheres are embroiled in a complex give-and-take.
As a starting point, my focus on the embodied apprehension of residential formaldehyde vapors documents the ways in which bodies become, in the words of the cultural philosopher Peter Sloterdijk (2009, 99), “differently-attuned, differently-enveloped, and differently-air-conditioned” by way of mundane chemicals and the atmospheres they animate. Beyond chronicling how bodies are materially and affectively caught up in the breathing spaces of the built environment, I seek to ethnographically elucidate the “somatic modes of attention” that render minute exposures knowable (Csordas 1993). As Lauren Berlant (2011, 15) has noted, “bodies are continuously busy judging their environments and responding to the atmospheres in which they find themselves” (see also Latour 2004, 206). Bodies are sites for both actively absorbing the world and being put into motion by its constituent medley of humans and nonhumans. The apprehension of domestic toxins is a matter of life and slow death, mediated by patho-logical bodily processes. Kathleen Stewart (2005, 1024) has written incisively on this dialectic of bodily harm and bodily knowledge: “The body consumes and is consumed. Like one big pressure point, it is the place where outside forces come to roost.” The various processes of corporeal judging, numbing, sensitizing, absorbing, attending, consuming, and responding are part and parcel of the pervasive bodily practices that Stewart (2011) encapsulates in the phrase “atmospheric attunement” (see also Anderson 2009; Choy 2012). Such attunements, in relation to the case at hand, facilitate becoming with and orienting toward the molecular constituents of domestic chemospheres (Ahmed 2006; Haraway 2007), without a necessary knowledge of exactly what chemicals they are attuning to.

Like learning to become sensitive to environmental change, becoming unaffected too requires work. That my fieldwork was dominated by women’s accounts not only resulted from the feminization of body care, domestic care, health care–seeking, and self-monitoring for bodily dysfunctions (Murphy 2006, 173; Ore 2011, 281). It not only results from the likely increased exposure to domestic chemicals encountered in the course of many of these labors. The absence of men from my fieldwork stems from their active indifference to slight somatic abnormalities. A majority of the men I spoke with consigned bodily decay to the unavoidable process of aging as a means of rejecting the possibility that their bodies were permeable or vulnerable to chemical harm, thus also rejecting threats to masculine self-images (Waldman 2012, 130–33). In this way the attunement to and denial of toxicity constitutes and is constituted by normative gender roles.

The question at hand is not who is becoming affected, but how. Phenomenological studies of pollution, environment, and well-being primarily direct their analytical attention to olfaction (Auyero and Swistun 2009; Brant 2008; Fletcher 2005; Jackson 2011; Reno 2011). These studies bring into crisp relief the intimate place-making and place-disrupting capacity of smells and highlight the way in which we often take displeasing scents as the primary indicators of environmental contamination. Yet the respiration of airborne chemicals does not end at the nose.
The diffuse embodiment of inhaled, and especially chronically inhaled, chemicals as they seep deep into bodies and spur cascades of minor and often-latent disruptions remains largely uninvestigated ethnographically. Smells, whether off-putting or alluring, are most pronounced at the crossings of thresholds and then, over time, recede from perception as they become incorporated into new sensorial norms. As one’s scent sensitivity down-regulates in a process of olfactory adaptation, ongoing and low-level exposures become ordinary and perceptually undetectable (Dalton and Wysocki 1996)—if such exposures even crested scent detection thresholds in the first instance.

Although many episodic exposure events—from landfills to hydrocarbon-extraction activities—are announced by pungent odors, the limits of what Joshua Reno (2011) refers to as “olfactory epistemology” are often viscerally clear to the chronically exposed, as a middle-aged woman in Detroit facing persistent industrial emissions announced at an Air Pollution Community Forum in June 2013: “The state DEQ [Department of Environmental Quality] says ‘we depend on you and your smell to tell us when something is in the air,’ but the thing is, after a while that stuff wears you down and your senses stop working anymore. I know that in my body there’s some of that [pollution] in my system. It mess up your mind, it mess up your whole system.” Over time her olfactory perception of contamination dulled, while alterations in the quality of her thoughts and slight systemic aberrations continued to signal exposure.

Following two years of ethnographic fieldwork on chronic domestic chemical exposures throughout a dozen U.S. states, I have come to the conclusion that such microscopic encounters are most readily sensed by less nameable and more diffuse sensory practices. Bodies are often embroiled in sensing the world well before cognition catches wind of protracted chemical encounters. This argument runs counter to a pioneering analysis of women’s “exposure experiences” in which the authors assert that “in the case of household pollutants and chemical body burden, science has been the primary means through which embodied and indoor pollution have been ‘discovered’” (Altman et al. 2008, 419). Beyond scents and science, I claim that the attuned body is the primary substrate of domestic formaldehyde exposure discovery. Bodies are sensors that indicate the presence of toxicants and, in some cases, specify their atmospheric concentration with uncanny precision. The empirical matter that fills this article is intended to challenge the confidence that we often place in our own ability to know when we have sensed something and when we have not.

Exposures slowly and invisibly emanating from the formaldehyde-based engineered woods that give form to domestic space require an attentiveness to how human bodies reveal imperceptible chemical exposures with their own subclinical wounding. In these affective spaces, “at the very limit of the phenomenal” (Clough 2009, 51), the somatic precedes and then is entangled with the rational, a mingling of mind and body that bucks the standard psychosomatic dismissal of low-level chemical complaints, in which mental factors are said to cause or aggravate bodily issues. My account draws on a deep phenomenology of bodily formaldehyde detection that focuses on visceral and indeterminate sensorial faculties, rather than on mere smell.
The latter may serve as an intimation of a wide variety of exposures, but it is not the epistemic basis for chemical knowledge of everyday, ongoing, and low-level intoxication.

BODY METER

In February 2011, Linda Kincaid responded by email to a call for participants for my study of the experiences of domestic chemical exposure. An environmental activist had forwarded the call to what she refers to as her “formaldehyde list.” The list comprises a broad array of individuals interested in formaldehyde, many of whom have personally felt its effects—from former FEMA trailer residents, to consumers concerned about the broad range of products made with formaldehyde and, evidently, industrial hygienists. Linda has worked as an industrial hygienist—a scientific profession charged with the responsibility of assessing, controlling, and communicating environmental hazards—since 1991 and holds a master’s degree in public health from the University of California, Berkeley. The immediacy of her interest in domestic formaldehyde derived not only from the elevated chemical levels registered by her monitoring equipment in the homes of her residential clients, but also from her own symptoms of exposure, which maintained a grip on her after she returned from the field. Before meeting in person in suburban Los Angeles to attend one of her formaldehyde home inspections and to learn to use a real-time formaldehyde meter, we spoke at length on the phone.

Linda had only become interested in domestic formaldehyde exposure in the past few years. When she received her first phone call from a family that suspected their home was making them sick, she reacted with skepticism. “What are you talking about?” she thought to herself, but a quick literature review soon revealed that common domestic formaldehyde levels could give rise to the reported symptoms. Linda’s attention was piqued. As a pet project, she began to amass a small arsenal of portable real-time formaldehyde meters. Yet the vast majority of her work continued to be for the semiconductor and solar industries, and the irregular flow of clients with residential concerns could not sate Linda’s blooming curiosity about the magnitude of domestic chemical contamination. After developers swiftly rejected her offers to test new subdivisions for free, she saw clandestine testing of open houses as her only option for gauging the prevalence of elevated residential formaldehyde. She set out to new unoccupied homes by herself on free weekends, with the intake hose of her Interscan 4160 formaldehyde meter timidly cresting the lip of her purse:

It was really kind of a lark. Can I find elevated formaldehyde in homes? Is it going to be one in ten? . . . Within a few weeks I came to realize that there was a problem here. There is a huge problem here. I was getting the kinds of concentrations that they found in the FEMA trailers, and these are not trailers; these are high-end Silicon Valley homes. And I started noticing that homes in one city in particular had seriously raised formaldehyde as compared to others. . . . Every house I went into had really pretty high formaldehyde, and I would have a headache and have trouble sleeping that night and toss and turn all night long. I’d be exhausted the next day, and when I did other communities it seemed that the formaldehyde wasn’t as high and I didn’t have those responses to the same degree or maybe not at all.
As Linda began to log higher levels with her formaldehyde meter, she also began to log increased levels within her body. Her symptoms signaled elevated chemical levels as clearly as the LCD readouts of her assessment technologies. In embodying the invisible gas, she utilized not one of the standard human sensory faculties but a calibrated, yet diffuse, awareness of aberration. She attuned to the irregular physical state of her neurochemistry. Appraisals of her clients’ homes would often turn back to her own body.

When I asked about the curious symptom of intensified dreams that her clients reported, her first reaction was to describe her own corroborating experience: “And those were one of my symptoms too; it doesn’t seem to happen to everybody. It absolutely is one of my symptoms. It is guaranteed. If I am in a house with 50–70 ppb [parts per billion] formaldehyde, I will have the utterly weird, bizarre, freaky terrifying nightmares and that is very consistent. It is not something that happens to me normally, so when it does happen it really stands out.” Linda highlights her symptoms after merely an hour of exposure, bearing corporeal witness to long-term low-level chemical exposure disorders that have been historically disqualified as (female) psychogenic illness (Murphy 2006). Her repeated experiences, in combination with her monitoring equipment, lend credence to individual and isolated complaints on the scale of reproducible and scientifically observed phenomena. It is guaranteed.

Despite the short duration of Linda’s exposures, she can surmise formaldehyde levels with extreme precision. In the above quotation, she asserts that she can place the onset of exposure symptoms within a margin of error of about twenty parts per billion. In liquid terms, that is roughly equivalent to determining the difference between fifty and seventy drops of formaldehyde diluted in a small railroad tanker or 250 chemical drums. In temporal terms, such accuracy is comparable to a margin of error of a minute when measuring durations over the course of a century (a rough conversion, sketched below, bears the comparison out).

At first blush, the exactitude of her body-meter-air attunement appears to border on the uncanny, if not the impossible. The ability to discern such infinitesimally small differences in atmospheric concentration does not derive from a supernatural capacity on Linda’s part. Rather, such perceptivity results from a mundane monitoring of both repeated bodily irregularities and the levels of formaldehyde found by her meter. These practices are born out of standard scientific method, professional curiosity, everyday corporeal awareness, and openness to being affected. Linda’s embodied awareness of biochemical aberration does not lie beyond the realm of toxicological plausibility. As “the exact mechanism of action of formaldehyde toxicity is not clear” (ATSDR 2014, 5), the aspect of this process that remains inexplicable relates to the limits of toxicological knowledge, not to a mythic extrasensory perception.

Operating in tandem with her real-time formaldehyde meters, Linda’s body viscerally logged the chemical exposures of the houses she visited. Over time, she calibrated an understanding of toxic effects to the outputs of her instrumentation, a process of indwelling both the indoor atmosphere and the meter. Scientific instrument and soma evaluated their immediate surroundings in accord.
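As a back-of-envelope check (my own arithmetic, not part of the original analysis or of Linda’s calculations), the one-minute-per-century comparison can be verified directly. A minute expressed as a fraction of a century is

\[
\frac{1\ \text{minute}}{100 \times 365.25 \times 1440\ \text{minutes}} = \frac{1}{52{,}596{,}000} \approx 1.9 \times 10^{-8} \approx 19\ \text{ppb},
\]

which sits just inside the twenty-parts-per-billion margin that Linda reports.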
It is through this environmental and technical incorporation, this indwelling of atmosphere and meter, that Linda dilates her being-in-the-world (Merleau-Ponty 2012) and harnesses the relational-cum-epistemic utility of her body to understand the potentials of domestic chemical exposure, a process I have alluded to with the phrase “bodily reasoning.”

THE CHEMICAL SUBLIME

Writing on an antithetical technoaesthetic encounter—the first detonations of nuclear weapons in the deserts of New Mexico—Joseph Masco (2004, 4) observed that “the weapon scientist’s body [was] the most important register of the power of the bomb.” The irradiation, shock wave, and ensuing firestorm of humankind’s most lethal weaponry evoked reverence and bodily fear in onlooking male scientists as some were knocked to the ground, flash-blinded, or felt the blast bore into their being. For weapons scientists, the modest or ephemeral bodily traumas of the bomb’s destructive might were, in a slightly masochistic fashion, the pleasures of a successful experiment. In the shadow of the world’s first mushroom cloud, Masco posits, these bittersweet affects melted into a “nuclear sublime.” This highly specific version of the sublime propelled some scientists into nuclear disarmament campaigns, while others reveled in a feeling that approached divinity.

Sublime is not simply an adjective or noun denoting a characteristic or state of grandeur or awe. In chemistry, sublime is also a verb, invoked when substances transform from a solid directly to a gas, bypassing the intermediate liquid form. Formaldehyde used in the fabrication of pressed woods, for instance, slowly sublimates at temperatures above −2°F. In contrast to the spectacular, brutal, and lightning-fast sensorial pummeling that afflicted early nuclear weapons scientists, a multitude of diminutive formaldehyde plumes drifted into Linda’s lungs at the sedate speed of chemical off-gassing and regular human breathing.

The constituent effects of what could be summarized as the chemical sublime were often subtle and crept into Linda’s consciousness at a snail’s pace. The cognitive force of her discovery was not “directly proportional to the danger involved in the experiential event,” as Masco (2004, 3) avers, reading Immanuel Kant (2000). Formaldehyde’s presence in domestic space was not signaled by overwhelming sensory stimuli, but rather indicated by a thickening veil of indistinction as perceptual faculties became occluded. The interference of air-quality-induced illness is received as a phenomenological transmission in its own right (Fortun 2003, 186). The sensorial noise of illness is the signal of domestic chemical exposure and the bodywork employed to apprehend the qualities of indoor air. The magnitude of the issue of domestic chemical exposure revealed itself in piecemeal fashion—gleaned from the repeated toxic encounters of an attuned body, rather than patently imposed by a singular event like a mushroom cloud erupting into the stratosphere and tossing scientists to the ground. For Linda, the prevalence of elevated formaldehyde gradually accumulated into a technical and embodied awareness of residential chemical exposure that dwarfed her by its scale. Within a few weeks I came to realize that there was a problem here. There is a huge problem here.
The form of the chemical sublime highlights the gendered assumptions undergirding Masco’s and Kant’s privileging of sublimity’s correlation with public, spectacular, and violent events over the profundity and density of widespread private, indistinct, chronic, and fragmented phenomena. The velocity of the epochal nuclear sublime is diametrically opposed to that of the mundane chemical sublime, yet they maintain a common substrate of experience—the bodies of scientist witnesses. Linda’s body was a vital register of both the chemicals that suffused domestic space and their specific concentration. The chemical process of sublimation, the elevation of state from solid to vapor, is mirrored by Linda’s somatic process of epistemic elevation, of corporeally validating her clients’ symptoms and heightening her own bodily analytics. If bodily reasoning is the dynamic process through which knowledge of individual spaces of chronic exposure is somatically attained, the chemical sublime is the accrual of bodily reasoning to the point of articulating the patterned practices and infrastructures that distribute pockets of exposure across space. It is the traversing of a threshold of chemical awareness whereby the irritations of one’s immediate environment become agitations to apprehend and attenuate the effects of vast toxic infrastructures. The chemical sublime thus exerts what Mel Chen (2012, 211) calls the “queer productivity of toxicity and toxins,” which demands additional forms of labor.

Linda approached the City Council of San Jose, California, in the summer of 2009 as its members were on the verge of passing a building ordinance that required new homes to be certified as “green” by sealing them more tightly, a measure that would likely result in higher domestic formaldehyde levels. Linda proposed an addendum requiring green homes to be tested and to meet indoor air-quality standards. She offered to render those services for free to demonstrate that she held no financial conflicts of interest. Her proposal was met by a smear campaign financed by the Formaldehyde Council, an industry-funded interest group, which commissioned scientific assaults on her findings. Linda’s assertions about widespread domestic toxicity put her “at risk for future litigation,” as systems of commercial asset protection transformed her effort to mitigate systemic exposure risks into individual legal, scientific, and financial risks. Her data were then ignored and her motion scrapped. The formaldehyde levels logged by Linda’s instrumentation were well in excess of government-recommended thresholds, yet her findings failed to crest prevailing thresholds of significance.

Why the visceral pull of the chemical sublime does not translate into a resounding ethical call—why Linda’s assertions were so easily rebuffed—is not only the result of industry’s mobilization of law, science, and capital. We must also look to how the sublime has brokered relations between exposure and the status quo since at least the dawn of the Enlightenment. While the full history extends well beyond the scope of this article, it will suffice to texture the chemical sublime by digging deeper into how it diverges from the Kantian root of Masco’s nuclear sublime. In Kant’s (2000) conception, the immensity or might of the sublime first overwhelms our imaginative capacity or indicates the fragility of the human body, yielding a sense of helplessness and distress.
This diminutive feeling is then countered and ultimately overcome by reassuring one’s self of the power of the mind, by the belief that reason sets humanity apart from and above the physical world. The internal tumult and sensuous displeasure are elevated into the delight and superiority of reason. Quintessential of the Enlightenment project, Kant’s sublime outlines a process by which intellectual mastery dominates the threats of the material world and indicates humanity’s continued progression. As the critical theorist Gene Ray (2004, 10) asserts, “the ideological function of the aesthetic category of the sublime within Kant’s critical system is anxiously bound up with . . . deep metaphysical optimism.” The optimism of the sublime serves to affirm existing power orders—to justify the optimist credo of “whatever is, is right”—even in the face of mass calamity, such as the great Lisbon earthquake of 1755 that fascinated Kant and haunts his analytic of the sublime.

The chemical sublime is sharply distinct from Kant’s formulation of the sublime in at least four ways: the form (space, time, and intensity) of exposure, the relation between the supersensible (mind) and the sensible (matter), orientational movement (from without to within or vice versa), and political reckoning. Unlike in the case of Kant, who relished the sublime while collecting reports on the great Lisbon earthquake from his East Prussian home, the objects of the chemical sublime cannot be held at a distance. As the practice of bodily reasoning makes clear, the material transformations of the body are inseparable from intellectual processes of molecular deduction. An extended absorption of toxicants is not a situation that can be transcended by way of a feeling of rational control. The sublimation of toxic bodily reasoning does not form part of a mental mastery over perceived threats—intellectually closing off their danger. Rather, it constitutes a sensuous reasoning that indicates how open our bodies are and amplifies—rather than extinguishes—the tensions, agitations, and dissident potentiality of large-scale hazards. It is the coalescing of underrecognized disturbances rather than a compensation for those that overtly disturb—the beginning of a confrontation, not its resolution.

As unfathomably common industrial chemicals warp, distort, and decay human and nonhuman bodies alike, they corrode the optimism and anthropocentrism of the Enlightenment. Instead of “transforming the worst into the best” (Lyotard 1988, 41) as a foil of human triumph, the chemical sublime is a condensation of vaporous displeasures and a way of being deeply moved by the latent toxicity of industrial human progress.

Although Linda’s attempt to effect change ended in a way that is well recited within the contemporary history of toxic contamination (Boudia and Jas 2014), the way it began makes for a less recited story. It is a story that bears on how the chemical sublime can attend to the decentralized crises of the contemporary moment and that gives rise to the potentiality of living otherwise.

BODIES OF EVIDENCE

The chemically aware body is not only born out of profession and curiosity, as in Linda Kincaid’s case. More often than not, bodily knowledge of chemical others derives from the necessity of cohabiting with toxins, as was the case with Harriett McFeely and her husband, Dick, who live in a modular home on the outskirts of a small town in Nebraska.
In the spring of 2011, I traveled to stay and speak with the McFeelys, who claim to have endured more than two decades of domestic formaldehyde exposure. Before Harriett got access to free formaldehyde tests from the Sierra Club, and before formaldehyde had been introduced to her as a possible perpetrator, she was near the end of her rope. In twenty years of inhabitation, she had slowly developed constant diarrhea, a runny nose, fatigue, severe eye irritation, double (occasionally triple) vision, the need to read with one eye shut, headaches, a sense of taste that skewed toward metallic or simply “strange,” and numerous other symptoms. With resurgent exasperation she recounted her dogs getting sick and dying one after the other, while her and her husband’s health steadily deteriorated. Her doctor received her complaints with skepticism and an implied diagnosis of hypochondria: “They couldn’t find out what’s wrong in my body, so they thought I was crazy. That’s the only answer.”

Harriett first began suspecting the house as the source of her family’s collective illnesses in 2002, when she left home for five days and her vision cleared and other symptoms subsided. Again in 2007 she left the house for three days and her ailments abated. She then ruled out domestic radon exposure, carbon dioxide, sewer gas, black mold, and water contamination. Her last-ditch attempt to ascertain the etiology of her family’s illnesses was to invite a friend of a friend, named Nancy Shoemaker, who suffered from multiple chemical sensitivities. Harriett hoped that Nancy would use her chemical susceptibility to pick up where her own bodily knowledge left off by divining the specific source of their health issues within the home.

Nancy, who spoke with delicate and slightly nervous poise, had developed chemical sensitivity at an early age, while attending beauty school in Nebraska. Nearly every morning when sterilizing the styling utensils, Nancy would lose consciousness and collapse. She had to drop out and readjust her dream of becoming a beautician. Nancy did not think much of her fainting spells until years later, when she moved to Florida, where she and her husband took up residency in a trailer. After moving into the trailer, her sensitivities dramatically escalated, but not only at home. A whiff of cologne on the street or shaking hands with someone wearing a transparent Band-Aid could be enough to wilt Nancy to the ground. Her body became jarringly attuned to the vast chemical infusion of the world around her. As a result of these continual chemical encounters, she learned to move through the world with caution. When barefoot at home she would cross sections of linoleum with circumspection, unsure of the daily caprice of her sensitivities.

Nancy’s corporeal vulnerability to chemical vapors or direct contact is not spread uniformly throughout her body. As a high-frequency exposure site, an extra-sensitive area in the center of Nancy’s palm became more acutely affected with time. Nancy took advantage of the embodied insights of her palm and tacitly honed its reactivity. She now uses her palm to assess the hazard of the various materials and spaces that she encounters in daily life. As she spoke, her gaze turned down to her hands, and she ran her right index finger in circles around the area on her left hand.
“If I put something on that sensitive spot or touch something with that sensitive spot, I can tell if I can handle it at that time or not.” To manage anxiety about her emergent reactivity, Nancy developed a deeper literacy of the chemical world by way of a deeper literacy of her own body. “I know about formaldehyde and I’d never done anything like [what I did] with Harriett,” she explained, “but I knew how formaldehyde affected me.” She averred an amassing of somatic knowledge about formaldehyde via years of enduring its effects and affects—through dozens of fainting spells, bouts of wooziness, enervating weakness, and daily somatic tests of the material things that populate her world.

It was with the sensitive spot in her hand that Nancy began to assess the chemical constitution of Harriett’s home, as an alternative to expensive and inaccessible scientific instrumentation. Sitting in her small and immaculate assisted-living apartment, Nancy recounted the process: “And so I went into the different rooms and I tested the carpet and doors. . . . I went into the kitchen, and I just grabbed hold to open the cabinet or something. I don’t think I touched it very long. . . .” At that point in the story, Nancy lost consciousness. Harriett observed Nancy clutch her stomach and let out a groan. The color dropped from Nancy’s face as she fell to the floor and began to seize. Harriett’s Boston Terrier, Bowser, ran into the room to investigate the commotion and curled into a fit of seizing as he approached Nancy. The two lay there next to each other on the carpet, gripped by spasms, for a few moments before Harriett and her husband dragged Nancy outside. Bowser continued to convulse in the kitchen. The dog came to within an hour but remained disoriented, running into the furniture, walls, and doors. Nancy gradually regained her composure over the course of half an hour. After she felt well enough, she went on her way, confident that she had found at least one source of the McFeelys’ suffering.

As unnerving as the experience was, Harriett also felt relieved that Nancy had validated her suspicion that chemicals were quietly emanating from her home. With an affirmative nod Harriett emphasized the instrumentality and accuracy of Nancy’s body: “In my opinion, that lady is like a human Geiger counter.” Of course Harriett, like all exposed and affected bodies, also bears this capacity to make manifest the chemical world, albeit in less eventful ways. Some bodies exclaim while others speak in hushed tones. In domestic chemical exposures, bodies are both the means of apprehension and the site of damage. Bodies uncover invisible toxins with their wounding. Humans and their nonhuman companions serve as their own canaries in the unwitting coal mines of residential America. A month after Nancy’s visit, Harriett’s fifth dog in twenty years had to be put to sleep after he became wracked with near-constant seizures. As of June 2015, the McFeelys have lost two more dogs to similar ailments.

Like Linda, Harriett felt the pull of the chemical sublime. She felt the attrition in her own body and monitored the bodily ailments of her dogs and her husband. In line with what the sociologist Phil Brown (1997) has called “popular epidemiology,” or the lay appropriation of expert means of environmental health assessment (see also Murphy 2006, 62), Harriett sought to comprehend the systemic nature of such exposures.
Harriett wrote letters to the editors of newspapers in five or six nearby towns. Her short notes, published in 2008, read: “Modular home owners, have you had any health problems? Have your indoor pets had any mysterious illnesses? Please write or call me.” Phone calls began rolling in, one after another. Harriett began to systematically survey respondents. She asked those who called her how long they had been living in their home and what their symptoms were. She surveyed thirty individuals from thirteen different households throughout Nebraska. Respondents supplied thirty-two different symptoms that they perceived to be correlated with the occupation of their modular home, ranging from unusual thirst to cancer. Harriett further inquired about indoor pet health and recorded the symptomatology of fifteen animals in seven households. She was able to garner funds for formaldehyde test kits from the Sierra Club and tested respondents’ homes. Seven of the thirteen homes tested had levels of formaldehyde in excess of the World Health Organization’s maximum recommended exposure for half an hour—81 parts per billion. Harriett mails copies of her data, adorned with a row of skulls and crossbones along the spreadsheet’s bottom border, to anyone who may be able to help.

[Figure 2. A photocopied entry of the records kept by Harriett McFeely, showing photos of Bowser the dog and notes. Bowser’s body and disposition index the presence of otherwise-invisible chemicals.]

[Figure 3. The dog owned by the McFeelys at the time of the author’s visit to the site of Nancy’s seizure. Hastings, Nebraska, April 2011. Photo by Nicholas Shapiro.]

Harriett made her husband promise that a thorough autopsy would be performed on her if she were to “drop dead” before him. Shifting her stone-faced gaze over to me, she asserted with certainty that the decomposition of their dogs’ bodies served as a herald of her and her husband’s future: “I would bet you a hundred thousand dollars that if they did an autopsy on us today, I would bet money that it is exactly like the dogs’.” Harriett implies that their domestic exposures have reduced her and her husband to the walking dead, that a postmortem examination could rightfully be performed on them at any time. A grim suggestion, perhaps, but one that is representative of many of the persevering residents of potentially chemically contaminated homes. As evinced by Harriett’s perceived imminent autopsy, sustained chemical exposures beckon death, but they also render death ambiguous. She takes the logic of bodily reasoning to its conclusion: if wounding intimates the source of harm, then death will surely disclose its ultimate truth.

Coming to corporeally comprehend one’s environment does not always have consequences as severe as in Harriett’s case. Residents of potentially contaminated homes I met across the United States gradually became aware of minor departures from their normal sense of taste, sense of balance, clarity of thought, memory, durability of skin, or frequency of contracting colds. Occasionally, inhabitants did not claim even the slightest deviation from their typical physical state. They only recognized atmospheric irritation as an altogether-indistinct feeling.
As one North Dakota man noted, “Something about the air in here doesn’t seem quite right.” Or as a woman living on a reservation in the Northwest observed, “in the middle of the day it gets weird air and I open the doors.” While slightly suboptimal health or simply off-putting auras were predominant among my research participants, many suffered from more debilitating illnesses. In these spaces where enduring and knowing are coterminous, the feeling of living death seeped into the margins of life for those with even minimal symptoms.

TOWARD A LATE INDUSTRIAL SUBLIME

The average American home maintains indoor formaldehyde levels capable of inducing irritation (Hun et al. 2010). Chronically absorbing this chemical is not a process relegated to the lower classes or the precarious, even if such populations do bear dramatically higher burdens. To somatically apprehend formaldehyde exposure means to begin apprehending the costs of late industrial infrastructures, economies, and standards of living. It sets in motion an appreciation that the molecular cohabitants that physically hold our world together also encourage our unraveling. Becoming a “pupil of the air” (Sloterdijk 2009, 84) is to attune to the aerosolized material culture and more-than-human semiotics (Kohn 2007) within which one is immersed. Focusing on slight sensations and dysfunctions reorients discussions of chemical phenomenology from their current emphasis on episodic olfactory events to an apprehension of the irritating chemical background noise of everyday life.

Ambient formaldehyde makes itself known to mammalian life through minor effects and affects that the exposed can accumulate, over repeated incidents, into an embodied awareness of the scale of chemical saturation, beyond the individual pocket of air we call home. I theorize this string of intimate sensations as amounting to a chemical sublime, which can “aggregate life diagonal to hegemonic ways of life” (Povinelli 2011, 30) and give rise to attempts at living otherwise. The chemical sublime does not merely refigure a form of the sublime in philosophical discourse but poses an alternative schema of eventfulness, or call to action, one that expands dominant ideas of catastrophe and the disturbing. The chemical sublime is perhaps just one instantiation of an emergent late industrial sublime that reckons with the temporally and spatially dispersed residues of contemporary political orders, including climate change (Morton 2013), biodiversity loss (Yusoff 2013), extractive labor practices, and social abandonment (Povinelli 2011), among others.

Yet with formaldehyde production and consumption infrastructure largely locked in, and without the capacity for networking the atomized populations charged by the chemical sublime, decamping from spaces conditioned by uncountable formaldehyde microemissions is, at a societal level, not an option. Such pleas are either actively disqualified, as in Linda’s case, or they passively languish without authoritative clout, as in Harriett’s. Beyond instrumentalizing viscera, such attunements to encounters between airs and bodies constitute the openings through which to grapple with the composition of our world and with the untold caustic ecologies that remain largely insensible to the human.
The second mode of apprehension, nonauthentic and shocking, is exemplified by a YouTube user with the handle omgtkseth, posting in the comments field of a Getty Museum (2012) film clip that explained the research behind the Gods in Color project. In response to a question from handle MILITARcz about why the statues struck some people as unrealistic or kitschy, omgtkseth wrote:

Because of the materials, perhaps. Im [sic] a little familiar with canvas painting, and one notices how certain colors are unachievable depending on your paint type. Perhaps the colors are based on natural paintings made with fruits and flowers of the area. But I agree they arent [sic] very pretty. They are very bright and many colors are mixed. Like a nerdy girl dressing with rainbow leggings and so on. . . . I would have used a soft copper for the scales.

Having demonstrated his or her credibility in painting and pigments, omgtkseth focuses on the implausibility of the materials used to produce the Gods in Color statues. Omgtkseth knows that “certain colors are unachievable,” particularly if the colors are derived from plant and other nature-given matter. The statues are also shocking (“arent [sic] very pretty . . . very bright and many colors are mixed”). The painted statues, according to this viewer, are nonauthentic (because the way in which the pigments were produced is suspect) and shocking (garish and too bright).

Within the basic modes of apprehension there was a range of opinion about the color, from generally positive through neutral/curious, skeptical/negative, and hostile. The strongest theme to emerge, however, was ambivalence. Many respondents were caught between appreciating the science behind the polychrome and disliking the finished product. This position is especially evident in the comments by readers who selectively endorsed the polychrome—in other words, they liked some of the painted pieces but not all of them. Reader “flapperlife,” writing in an online chatroom about the Gods in Color show, confessed:

I think the sculptures showing the human anatomy look better without color. I feel like the color would take away the full effect of the presentation of the body. [T]hat being said, I think some of the other sculptures are equally as lovely with color. The lion sculpture really stood out to me as something that color enhances.

Flapperlife accepts the color per se but does not think it works on all of the pieces. There are limits, it would seem, to the acceptable reach of colored paints. “Scholars give us antiquity—the colorized version,” went one headline, but the issue was not just that antiquity had been given a splash of paint (Gewertz 2007). The intensity of the color, and its alien presence in cherished aesthetic terrain, proved too much for some.

“Wrong, Wrong, Wrong”: Painted Marble

The Gods in Color caused a sensation when it toured the United States and Europe. The exhibit featured painted casts based on Greek and Roman figures: gods, mythical characters, heroes, statesmen, and ordinary people. The statues provided a stark contrast with popular images of pristinely white, marble classical sculptures (Figure 1). The director of the National Archaeological Museum of Athens, which hosted Gods in Color in early 2007, summarized his guests’ reactions thus: “Some [visitors] like it, because they did not know [about the color] and it was a discovery. Some are disappointed.
[They] have said to me personally, ‘You have completely ruined the image we had of antiquity.’” On YouTube, a film clip of Vinzenz Brinkmann demonstrating his UV raking-light technique generated this comment from a viewer: “Interesting, but I’m really glad that paint went away over the ages!” (Getty Museum 2012). And a reviewer of the exhibition’s installation at the Sackler Museum in Cambridge, Massachusetts, from late 2007 through early 2008, confessed that although he was fascinated by the objects, “All this color feels wrong, wrong, wrong” (Cook 2007).

What were they upset about? We can consider, as a starting point, the portrait of Augustus from Prima Porta (Figure 2). The original statue was discovered in Rome in 1863 near the ruins of a villa associated with Livia, the Emperor Augustus’s wife (Liverani 2004). Scholars identified the statue as a portrait of Augustus, the first emperor of Rome, on the basis of stylistic features and iconographic clues, including the Eros and dolphin at his feet (both serving as visual reminders of Augustus’s immortal ancestress Venus) (Fittschen 1991; Hausmann 1981; Schmaltz 1981). The Gods in Color version presents a festively adorned man with bright red lips. The eyes are carefully painted and framed with eyebrows and long eyelashes. Pigment free, the emperor is the model of omnipotence, severity, and divine detachment. Under colored paints, he is simultaneously human and alien, awesome and vulnerable.

Each of the pieces in the Gods in Color exhibition was a provocation to received wisdom about the classical aesthetic. Even so, there was something especially jarring about the Prima Porta Augustus. Here was one of the most recognizable images of Western antiquity—the face of the young Empire, harbinger of eternal Rome—utterly transformed (Zanker 1998). As generations of young classical archaeologists have been taught, the Prima Porta Augustus exemplifies the imperator’s masculine virtue through its “neatly arranged hair and with facial features . . . classically calm” (Pollini 2012:175). But the classical calm dissipates beneath the painter’s brush. The red and blue paints on the breastplate turn an arcane set of mythical images into a graphic novel whose characters leap off the surface (on the ambiguity of the meaning of the images, see Squire 2013). According to the team of art historians, classical archaeologists, and chemists who worked on the exhibit, this is how the Prima Porta Augustus originally looked: “Everything irrespective of function was colored in the same lively colors. The sculptor conceived the three dimensional form which he chiseled out of the stone always with a view to the coloring” (Brinkmann 2007:29; see also Brinkmann and Koch-Brinkmann 2010; Kader 2009). Critics, nonetheless, saw red.

Scholarly fights over painted marble have a long history (Bradley 2009). In the late eighteenth century, the German art historian J. J. Winckelmann popularized the idea that Greek and Roman marble sculptures were intended to be white. Winckelmann linked white marble with purity and natural form, the very definition of beauty in the Enlightenment period. His moralistic teachings on marble statuary found material justification in later scholarship. In ancient Greece and Rome, marble was a luxury material that was expensive and difficult to obtain. Covering white marble sculpture with colorful paints would therefore have been a symbolically destructive act.
But even as Winckelmann’s ideas were taking root among European scholars and connoisseurs, contrary evidence mounted. Fresh archaeological discoveries showed that Greek and Roman sculptures, along with temples, had in fact been finished with bright paints. By 1835, when the British Museum established a special committee on the question of color in classical sculpture, the debate over painted marble raged across the continent (Jenkins and Middleton 1988). At stake was the moral basis not just of classical culture but also of nineteenth-century Europeans’ pretenses to classicizing progress (Hamilakis 2007; Hoock 2010; Rose-Greenland 2013).

It is now generally accepted that Greek and Roman marble statues and public buildings were finished in such a way that their surfaces were polychromatic. Scholars interpret this finishing treatment, moreover, as having been integral to the meaning of the objects (Palagia 2006; Walter-Karydi 2007). But while many scholars agree that metal attachments, fabric, floral crowns, and targeted (but limited) paints were used to dramatize the visual impact of the statues and buildings, what remains controversial is the specific idea that marble was painted (Bradley 2009, 2014:189–90). The Gods in Color exhibit revealed the rawness of this controversy. Despite the painstaking efforts of the statues’ makers to support the color reconstruction with scientific and archaeological evidence, they failed to convince their audience to believe what they were seeing.

Color as Material

At the heart of the Gods in Color project were two tasks: identifying paint traces, or “ghosts” (Harvard University Art Museums curator Susanne Ebbinghaus, quoted in Reed 2007), on the surface of Greek and Roman marble statues and recreating the original paints using the same ingredients, ratios, tools, and techniques as the ancient ateliers. To achieve these tasks, the men and women involved in the project used sophisticated equipment—a point repeatedly stressed in the Gods in Color catalogue, museum didactic boards, and affiliated scholarly publications. In their essay “On the Reconstruction of Antique Polychromy Techniques,” published in the edited volume that served as a scholarly complement to the exhibition catalogue, Brinkmann and Koch-Brinkmann (2010) wrote:

About four years ago, our efforts in this area [color reconstruction] reached a new scientific and technical height: thanks to the non-contact analyses made possible by UV-VIS absorption spectroscopy and X-ray fluorescence measurement, our understanding of the pigments has become substantially greater and more specific. . . . We can meanwhile adjust the colourants employed for the reconstruction to correspond quite precisely to those of the antique original. (P. 115)

Reproducing the colors “quite precisely” required a degree of ocular precision of which machines are capable but the human eye is not:

Due to the fact that UV-VIS absorption spectroscopy is a physical and optical method of measurement, not only can the respective colourant be identified, but the shade exactly defined in a chromatic diagram. As a result, it is possible to determine the hue independently of the individual sensory impression. (Brinkmann, Koch-Brinkmann, and Piening 2010:200)

Accompanying this passage are a series of color photographs showing UV-VIS spectrum readings, chromatic diagrams on the x-y axis, and small piles of granular yellow ochre earths at various stages of shading after being burnt.
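To give a concrete sense of what defining a shade “in a chromatic diagram” involves, the following minimal sketch (my illustration, not the Gods in Color team’s actual software, instruments, or data) reduces a toy reflectance spectrum to CIE 1931 xy chromaticity coordinates, the standard coordinates of such diagrams. The three sampled wavelengths and the rounded colour-matching values are hypothetical simplifications of the densely sampled spectra a real UV-VIS workflow would use.

# A minimal, illustrative sketch: from a sampled reflectance spectrum to
# CIE 1931 xy chromaticity coordinates. The wavelength sampling and the
# rounded colour-matching values below are hypothetical simplifications.

CMF = {  # approximate CIE 1931 colour-matching values (x_bar, y_bar, z_bar)
    450: (0.336, 0.038, 1.772),  # blue region
    550: (0.433, 0.995, 0.009),  # green region
    600: (1.062, 0.631, 0.001),  # orange-red region
}

def chromaticity(reflectance):
    """Map per-wavelength reflectance (values in 0..1) to CIE xy."""
    X = sum(reflectance[w] * CMF[w][0] for w in CMF)
    Y = sum(reflectance[w] * CMF[w][1] for w in CMF)
    Z = sum(reflectance[w] * CMF[w][2] for w in CMF)
    total = X + Y + Z
    return (X / total, Y / total)

# A surface reflecting strongly near 600 nm lands in the orange region
# of the diagram (roughly x = 0.50, y = 0.41), whatever any individual
# observer's impression of the hue may be.
ochre_like = {450: 0.10, 550: 0.35, 600: 0.80}
print(chromaticity(ochre_like))

The point of such coordinates, as the researchers stress, is that the hue is fixed by measurement rather than by any observer’s impression.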
The scientific work behind the statues appears bulletproof: Machines take the readings, locate the color traces, and guide the process of recreating colorants. “Individual sensory impression” is not up to this task but is in fact something to be overcome by technology. In saying so, the researchers offer a preemptive response to critics: Whatever personal taste might dictate, we know we got the science right.

For the Gods in Color statues to be faithful re-creations, one more step was required. After the colors were correctly seen, they had to be correctly made:

The preparation of the natural pigments through a process of grinding and washing is the prerequisite for attaining paints of a quality and vividness so great that [the sculptures’ subtle] characteristics are still visible today. Especially in the case of the yellow and red ochres, the preparation requires tremendous effort, since the procedure has to be repeated fifteen to twenty times . . . (Brinkmann and Koch-Brinkmann 2010:126)

The researchers avoided modern, synthetic paints to the extent possible and drew attention to their effort to recreate the pigments using original, organic materials. At the Institute for Archaeology at Georg-August University in Göttingen, two glass cases displayed the tools and materials of the pigment re-creation. The accompanying labels named the substances (“azurite,” “ochre”) and their places of origin. The unlabeled tools were meant to speak for themselves: raw-wood pestles and mixers and the finished, ground product in small glass dishes. This arrangement of tools stressed the traditional nature of the practices within the precise techniques of science.

From the perspective of the Gods in Color researchers, creating the paints properly was the core achievement because it was the only guarantee for knowing the statues properly. The paints were the material of consequence, the true object of scrutiny through the UV-VIS readings and chemical analysis. Why did viewers fail to notice? Ulrike Koch-Brinkmann blamed what she called “Marmorweiss,” a socially constructed sensibility about marble and how it should look, behave, and be thought about. Marmorweiss translates into English as “white marble,” but it can also be a word play on “marble wisdom” or “marble sense.” For centuries, marble statues were adorned with paint, jewelry, metal attachments, floral garlands, and clothing. The unpainted version that we know today is an accident of entropy legitimated by Western civic values. With their brightly painted surfaces, the Gods in Color statues challenged this accomplishment. The punch line was that the statues were made of plaster, not marble. Plaster casts of marble statues have been made and displayed for centuries. They allow lifelike replicas to travel widely for study and artistic experimentation (Frederiksen and Marchand 2010). The Gods in Color scientists bracketed the plaster and prioritized the pigments, imagining the end products as historically accurate study pieces. Skeptics, on the other hand, struggled to decouple color from material.

Marble Sense and the Impact of Time

Why is a particular image of antiquity so important to us—in this case, the one that centers on white marble? There are two aspects to this question. One concerns the lionization of select historical moments.
A robust scholarly literature argues that specific events or epochs are narrativized and sometimes romanticized to explain contemporary social structures and to project for ourselves an edifying imagined past (Anderson [1983] 1991; Goody 2006; Halbwachs [1950] 1980; Sewell 1996). The second aspect of the question concerns the evolution of the white marble statue as the most visible and iconic symbol of Western civilization. That aspect has received less scholarly attention. Answering it, I argue, begins with material historiography.

Statues of people and gods were an everyday sight in ancient Greece and Rome. Their abundance prompted ancient writers to describe statues as the “other population” of Greece and Rome. More than mere backdrop elements, statues had important social functions. Sculpted gods directed worship in temples and formed the centerpiece of religious processions. Commemorative statues recalled the deeds of war heroes, philosophers, statesmen, and civic benefactors. Imperial portraits served as a constant reminder of the emperor’s authority. They were, in short, an active, vibrant component of social life.

Marble is an ideal material for statue carving. Although statues had long been made from a range of materials (wood, bronze, limestone, and terracotta), by the first century AD marble had become “the great prestige building material of its time” (Ward-Perkins 1992:23). Materially, marble is softer and less friable than limestone and tufa, making it amenable to carving cascades of drapery folds and curly hair. Marble is also luminous, a product of its crystalline calcium carbonate content. The crystalline calcium is what gives the surface a glow and makes the sculpture seem alive. With this particular combination of physical and visual qualities, marble was used to create the most important, expensive, and elite statues, reliefs, and monuments. Its preeminence is summed up in the emperor Augustus’s claim that he found Rome a city made of common bricks (latericius) and left it, at his death, a city of marble. Augustus granted access to the vast network of imperial marble quarries cautiously and with nepotistic priorities. This example set the pattern for subsequent emperors, who treated marble as a prized resource available only through the beneficence of the imperator (Fant 1988:150).

The power of marble lay not just in what it stood for (wealth) but also in what it could do materially. The literary record provides rich examples of intense encounters between flesh-and-blood people and their marble counterparts. Pliny, the Roman historian, describes the destruction of the late emperor Domitian’s portrait statues:

[Domitian’s] countless golden statues, in a heap of rubble and ruin, were offered as fitting sacrifice to the public joy. It was a delight to smash those arrogant faces to pieces in the dust, to threaten them with the sword, and savagely attack them with axes, as if blood and pain would follow every single blow. (Pliny, Panegyricus 52.4–5. Reprinted and translated in Varner 2004:112–3)

The mutilation of the statues, writes Varner, “represents the collective destruction of the emperor himself in effigy” (Varner 2004:113). What brought the statues to life, and what led to their death, was color. Pliny describes Domitian’s statues as golden (aureae), meaning they were carved in marble and covered with a thin layer of gold leaf.
Pliny’s passage allows us to imagine the contrasting visual impact of the statues: first gleaming in the hot, bright Mediterranean sun, then with charred gold leaf scorching the face in the fire. By Pliny’s time, white marble stood for purity, homogeneity, excellence, and authenticity. These qualities connected directly with the mos maiorum, the mores of the ancestors, which served as the moral absolute among the Romans (Jockey 2013:77). Subsequent imperial iconography alternated colors and whiteness to signal different aspects of the subject (Bradley 2009). But a new moral weight was assigned to white marble.

As painted marble statues lost their color due to natural processes of fading, unpainted marble became the default material state. Natural fading does not produce stark-white surfaces. It produces, rather, dulled shades of the original hues. For this reason, the project of whitening developed its own expertise—scrubbing away the surface impurities, discrediting textual evidence of painted marble, and producing replica study casts using white plaster. The work of lightening and whitening was naturalized, and the whiteness of classical marble statues became a convenient fact for a range of nineteenth-century scholarly arguments. Whiteness signaled humanism, civilizational progress, and moral purity (McClintock 1995).

Twentieth-century Color Science and the Correct Polychrome

The link between color and cultural values was systematized in early twentieth-century color theory. The most prominent of the American color systems, the one founded by Albert Henry Munsell (1858–1918), pushed beyond the latent chromophobia of classicizing whiteness. On the contrary, Munsellians believed that the right color palette—that is, the careful arrangement of colors within a balanced system—could be beneficial to individual health and social welfare. For example, Munsell urged that “beginners,” including children, should avoid viewing “strong color” because “extreme red, yellow, and blue are discordant. (They ‘shriek’ and ‘swear.’)” (Rossi 2011:4, quoting Munsell 1906). The potential of color to modify group behavior appealed to a broad set of actors in the penal system, government, and private industry, and the Munsell system became a near-ubiquitous technology for standardizing color. In a Munsellian system, colors occurring in nature are inherently correct because they are perfectly balanced, while manufactured colors are wrong for being used in unbalanced combinations and proportions. The same principle applies to Greek and Roman art, which provided a useful teaching example for Munsellian theory since it predated synthetic, chemical-based dyes and paints. A 1924 article in Color News, an official publication of the Munsell Research Laboratory, accepted that classical sculpture was polychrome:

. . . sculptured figures were very generally painted. There are only a few traces of colored pigment found today on any Greek sculpture, but it seems likely that color generally appeared in the hair, the eyes, borders and costumes, and other decorations. (Nickerson 1924:12)

The Romans used colorful paints, too, and exceeded the ornamental limits of that medium by adding another layer of visual emphasis:

Painting was evidently not ornamental enough, for they decorated their sculpture with heavier, more impressive and ornate metal work.
(Nickerson 1924:14)

The core issue in Munsellian color systems was not whiteness against color but rather natural subtlety against gaudy excess; the high points of ancient art are those moments when artists achieved balance within an established “hue circuit.” The proper balance of “warm” and “cool” within a circuit generated colors pleasing to viewers. In early Greek art, for example, warm reds and yellows were balanced with cool blues and greens, “resulting in a more neutral effect” (Nickerson 1924:5). Neutrality, nuance, and harmony—the very features that Munsellian scientists advocated over modern synthetic palettes—were said to be the hallmarks of Greek painting. Where evidence pointed to bright and bold colors, as in the Parthenon in Athens, the Color News writer denied that the paints were meant to be seen in any detail: “Used as it was, in the upper part of the structure, where one had to look at it from a distance, the effect might be very pleasing. Seen nearer, it might seem rather crude to users of more subtle color” (Nickerson 1924:11). In Roman architecture, by contrast, colorful paints were restricted to the private interiors of temples and homes. The public palette was “nearly neutral in color” and “austere”—a choice supposedly explained by the social character of the Roman nation (Nickerson 1924:14). Color, according to Munsellian thought, is a powerful social element, and its best expression is found in balanced systems of hues and tones. No color is “wrong, wrong, wrong” on its face; rather, its associated colors give it meaning and logic.

The Gods in Color scholars continued this tradition by attempting to correct how we perceive color. Introducing “colorized” antiquity via sculpture, they wanted viewers to accept the colors as part of a historically accurate system of pigments and material practices. Despite their airtight scientific case, the Gods in Color researchers could not dislodge cherished ideas about the culture of antiquity. In that imagined antiquity, the emperor’s new (Technicolor) clothes are a fantasy.

“Kitsch”: Scholarly Criticisms

I asked Koch-Brinkmann whether, among the negative responses to the exhibit, there was one particular line of criticism that was surprising. Yes, she said, the allegation that our work is ungrounded:

Because, of course there were people who just didn’t understand what we were doing and who were never going to accept paint on the statues, but [specific scholars] criticized our method of paint reconstruction. . . . They could not outright deny that [paint] was there. But they hate that we made them. (Interview in Frankfurt, Germany, June 27, 2011)

This line of criticism finds its full expression in a review of the exhibition and its accompanying catalogue published by the German classical archaeologist Bernhard Schmaltz. He criticized Brinkmann and Koch-Brinkmann for allowing their imaginations to intrude on the rigorous application of their scientific approach (Schmaltz 2005). The crux of the problem, Schmaltz argued, was the researchers’ selective use of evidence. In his discussion of the skin color of the painted statues, for example, he insists that the selected specimens do not present the solid case that Brinkmann and his colleagues want them to. Schmaltz acknowledges that the skin portions of the original sculpture were probably altered in some way to appear lifelike. The question, however, is whether paint was used to achieve this effect and, if so, with what intensity.
Doubting the representativeness of the evidence as presented in the catalogue, Schmaltz wrote:

Only three examples in the catalogue are considered on their own [the rest are collapsed into groups], and in two of those cases B[rinkmann] wrongly believes that a silver skin finish is present. . . . The examples are not even representative of the range of remaining marble sculpture, as it is for example at the Acropolis. (Schmaltz 2005:26)

Citing the Acropolis as a counterexample draws pointed attention to the failure of Brinkmann's team to look carefully even at the obvious evidence; ancient sculptural evidence does not come more visible or significant than that. Further:

The examples ("S.44") for painted skin from large sculptural pieces are remarkably uneven (bemerkenswert disparat), yet for B. they are completely valid. It is completely questionable why B. didn't undertake a systematic look at his own catalogue, since the whole point of the project is to give a full look at the early classical sculptural evidence. It is a serious omission! (ein schwerwiegendes VersĂ€umnis!) (Schmaltz 2005:26)

The problem is not just that the polychrome researchers cherry-picked their evidence or overlooked other cases; according to Schmaltz, they were overly bold with the paints. Even if the scientific instruments were correct in identifying traces of pigment, Schmaltz averred, the scholars failed to provide cultural justification for their particular painted reconstructions. The works were out of step with their imagined Zeitstellung, or time period (Schmaltz 2005:31). The Gods in Color researchers had prepared themselves for skepticism. In the official catalogue, Brinkmann predicted, "This project will be reproached with having ventured into the realm of fantasy"; and, further, "Responses such as 'primitive' or 'kitsch' will be heard, but this first shock has to be overcome" (Brinkmann 2007:27). Brinkmann and his colleagues remained confident that the weight of scientific evidence behind their project would overcome prejudicial viewing by helping audience members to "learn afresh to accept the coloring of statues as an art form. . . . Twenty years of research, the last ten of them with the aid of color reconstructions, were just long enough for me to master this process" (Brinkmann 2007:27). The ontological basis of Schmaltz's criticism, however, was essentially cultural rather than epistemic. What Schmaltz did was shift the terrain from materials science—terrain on which the Gods in Color scientists were strong—to cultural history, which was more open to interpretation. He was prepared to accept polychrome statuary to an extent, but he was unconvinced by these specific reconstructions within the cultural matrix of imagined Greco-Roman antiquity. There was something about the painted casts themselves that was perceived as excessive. That same doubt was echoed by Ebbinghaus, albeit in a more positive light: "There is a big difference between this abstract notion [polychrome sculpture] and actually attempting to imagine what the sculptures might have looked like" (Reed 2007). Imagining painted statues was fine; making them into material objects was not, because materialization imposed particular colors on the imaginary landscape. To suggest that the painted statues were illogical in their Zeitstellung is to insist that they do not have a legitimate place in that imaginary landscape and, by extension, in antiquity itself.
The Gods in Color scholars sketched a picture of antiquity with ancient texts and archaeological evidence. Circumlitio (Brinkmann, Primavesi, and Hollein 2010), for example, includes an image of a Pompeian wall painting showing a statue of Artemis on a base, brightly colored with yellow, purple, turquoise, and gold paints from head to toe (Figure 3). In the museum guide and on exhibition didactic boards, readers were told of an "abundance of reference to Antique statuary polychromy" (Brinkmann 2010:15). But the same information was freighted with cautionary notes: Pompeian wall paintings and other such images cannot be treated as true depictions of now-missing artworks, for example, and there is the open question of why so many ancient paintings present monochrome, rather than polychrome, statues. The Gods in Color scientists were appropriately circumspect, engaging in the correct form of scholarly dialogue. For Schmaltz and for nonspecialist viewers, however, what was missing was a clear picture of what all of these colored statues meant or did in antiquity. Did those bright reds and blues seem as bold to ancient viewers as they do to us? Can the scientists' "paint ghosts" sustain these shocking reproductions? And if Augustus really wore such gaudy clothes, does this require revising his historical image? Zeitstellung is not a simple matter of having the right evidence. It is also a matter of how that evidence is presented, particularly if it conflicts with a collectively held image of history. This implies, further, that the careful efforts of the Gods in Color team were irrelevant beyond the simple effect of thrilling the modern museum audience. While the technical goals of the Gods in Color project were to master the observation and re-creation of pigments, the broader epistemic aim was to rethink the sociocultural landscape of antiquity by imagining a riot of colors among the statues. For Schmaltz, the question of kitsch was actually less important than that of fit. A failed Zeitstellung suggests a fundamental misunderstanding of the world that the painted statues were meant to re-create. The achievement of authentic paints and pigments could not overcome that misunderstanding. Extending Schmaltz's critique beyond the specific question of evidentiary choices, the problem with the Gods in Color statues is that they are accurate only to a particular moment in ancient history—and not necessarily the one we care most about. They are accurate, moreover, to a particular contemporary moment, as the product of current scientific techniques, pigment production, and ways of seeing. Color, intended by the scientists as an empirically grounded historical corrective, became instead an unwanted aesthetic intervention. The colors made plain the fault line between technical accuracy and cultural validity. This brings us to an apparent confirmation of Fleck's maxim that "only that which is true to culture is true to nature" (Fleck [1935] 1979:35).

Discussion and Conclusion

Color is grounded in deep cultural meaning. Anthropologist Victor Turner reminds us that color use is socially patterned and reflects basic life-and-death processes and emotions (Turner 1967:88-9). My discussion has highlighted one aspect of color, namely its vulnerability to competing, socially grounded systems of perception.
The important point for a sociological theory of color is that even when people are past the "purity" threshold—in this case, their acceptance of a (shocking) new understanding of classical marble statues—there is another set of constraints operating on the tone and hue of the colors. As my data demonstrate, perception is constrained by collective ideas as well as intellectual training. In the Gods in Color case, the salient perceptual systems can be distinguished from each other as empirical and cultural. Both of them sought to make definitive meaning of color by interpreting its application to reworked classical pieces. The Gods in Color show was no ordinary curatorial project. It was an assertion of the central role of science in revealing historical and aesthetic truth. In an age in which science and technology hold sway over many people's hopes for longevity and powerfully shape a vision of the future, the Gods in Color show suggested that technologically grounded empiricism is needed, too, to correct our vision of the past. The problem for the exhibition's creators was that the statues had been overhauled to the point where they were no longer recognizable as historically authentic. They were, instead, hybrid creatures straddling science and art, struggling to gain credibility (Gieryn 1999). Viewers accepted the basic idea of polychrome sculpture but were turned off by the specific colors used. The colors were too bold. Bright colors and bold color patterns may have a charming ethnic romanticism in Western fashion practice, but they signify persons and practices as nonwhite, socially transgressive, and Other. What the Gods in Color viewers seemed to want from their statues was classical authenticity, which is the product of acquired sense and collective wisdom rather than historically or chemically precise renditions. Authenticity, I have argued, operates on two levels. First, from the material point of view, there is the issue of historically accurate ingredients used to produce the pigments. Second, from the point of view of reception, there is the issue of cultural plausibility. Authenticity, in sum, is an outcome of our own experiences and socially cultivated understandings. It is not interchangeable with accuracy. What the case presents is a conflict between scientific and cultural authority, mediated at the level of sensory perception and made visible through the addition of color. These two forms of knowledge, the scientific and the cultural, have no inherent relationship. They may be cooperative, as when the rules of science and the frisson of cultural creativity combine to produce high-end modernist cuisine (Borkenhagen 2015; Lane 2014), or, as the present case demonstrates, they may be in conflict. The nature of their relationship varies because science itself can be used in the name of tradition and authenticity (e.g., conserving ancient objects by slowing their rate of decay) or innovation (e.g., remaking the same objects as they "really" once were). I have argued that a sociological theory of color must engage with materiality as a multifaceted phenomenon. In the present case, the materiality is twofold: the substance of the color itself (plants, pigments, chemicals, dyes) and the object to which the color was applied. Each of these elements carries meaning and is open to contestation over credibility.
"Marble sense" (Marmorweiss), Koch-Brinkmann's term for a shared disposition toward marble and its appropriate uses, highlights the importance of material sense more generally. Marmorweiss is rooted in centuries-long processes of natural material decomposition and change as well as in the social forces of historical mythologizing, institutionalized aesthetics, and the conflation of material, color, and norms. With any given cultural object there are several materials at play, and each material is differentially visible and significant according to audience, context, subject matter, and the qualities of the material. Just as materiality is a moving target, so is the setting in which the object is received and perceived (on socially shaped perception of time and symbols, see Zerubavel 1997). Where do we go from here? I suggest two research directions that have the potential to strengthen and extend sociological theorizing with color. The first concerns the intersection of color perception and time. Every restored historical object has, simultaneously, two temporal dispositions: past and future. The (newly) painted (ancient) statue, as I have tried to demonstrate, is one object through which the tension between these dispositions came out particularly strongly. The painted statues were an affront to temporality because the rules of Western classicism were changed halfway through the game, without warning or explanation. Taste is a factor here, to be sure, since the high modernism currently in vogue is clean and tends not to be polychromatic (or, when polychrome is called for, it is used judiciously against a monochromatic background). But taste is only part of the explanation. The rules of tasteful erudition clashed with the rules of good science. Where ordinary museumgoers and classics fans saw shocking reproductions of white originals, the Gods in Color researchers looked at white statues and saw a massive historical error. The emotional aspect of this tension—the sense of losing or gaining control over a cherished mental image of a fetishized historical period—merits further development, empirically and theoretically. As a second contribution, the paper calls for renewed thinking about what aesthetic knowledge is and how it operates (Chong 2013; Shapin 2012). New technological tools have opened the door to previously unimaginable feats in reconstruction and conservation work. These achievements pose difficult questions for aesthetic knowledge, primarily because people generally do not like to be told that their admiration for an artwork or building is misplaced because the object is technically inauthentic. When the Sistine Chapel paintings were restored in the 1990s, for example, some art historians severely criticized the restored versions for being garish and distracting (Beck and Daley 1995). The newly "revealed" hues were not, in short, what they preferred to associate with one of the crowning works of the Renaissance. Sociologists are beginning to think more systematically about how aesthetic knowledge is codified, imbued with authority, and negotiated. Continuing this line of inquiry is essential for claiming aesthetic knowledge as a patterned and observable facet of social life.
The decision to truck debris from the World Trade Center site to Fresh Kills Landfill for sifting and long-term storage would have profound ramifications for the search for human remains. For the City of New York and those tasked with the unbuilding of the World Trade Center, Fresh Kills was an ideal place to bring debris from the site. For many families of the missing, moving material that potentially contained the remains of their loved ones to a landfill was an affront to the dignity of the dead. They believed that the search for remains of the missing should continue undisturbed at Ground Zero. Instead, the material would be hauled more than twenty miles away to a facility that most New Yorkers, and indeed the world, associated with decaying garbage, methane fumes, rats, and scavenging birds. Over the next decade, controversies would develop about the way that the debris was initially searched for human remains, where and how to store the finest material once it was separated from larger pieces, and whether any human remains would come into contact with the ordinary household garbage that lay underneath the ground. Families would ultimately sue the City of New York for failing to respect their loved ones and provide them with a proper burial—taking their case all the way to the U.S. Supreme Court. Opened in 1948, Fresh Kills had served as the city's main garbage dump for most of the second half of the twentieth century. The 2,200-acre landfill had just been closed in March 2001 and was awaiting conversion to a park when the World Trade Center was attacked on September 11. In the chaos of the rescue mission, it quickly became clear that workers would need a place to put the tons and tons of steel and debris that they were removing from the site in their frantic search for survivors and victims. There simply was no space anywhere on the island of Manhattan that could accommodate this volume of material. Despite its former life as one of the world's largest garbage dumps, Fresh Kills was seen as an ideal spot to store and sort through the debris from Ground Zero. It was isolated enough that it could be cordoned off from the public, and it was big enough that all of the material could be brought to one place. Later in the effort, the proximity of the World Trade Center site to the river, and the accessibility of Fresh Kills by barge, meant that debris could be brought to the landfill by water. Thus, in the early morning hours of September 12, 2001, material from the site—including pulverized concrete, steel rebar, asbestos, and other insulation materials—was loaded into uncovered trucks and brought to a cordoned-off 135-acre section of the landfill known as Hill 1/9. There, it was unloaded and searched for evidence related to the attacks (particularly the flight recorders from the two planes that crashed into the towers); for valuables and contraband known to have been buried in the collapse of the towers (including guns, drugs, and money from various local and federal law enforcement agencies that had had offices at the towers); and for remains and personal effects of the victims. The area was immediately reclassified as a crime scene. For the first month of the operation, NYPD detectives, assisted by sanitation workers, FBI agents, private contractors, and other volunteers, performed this sifting job by hand, using only rakes, shovels, and other basic equipment. As the scale and difficulty of the search became clear in late September, the U.S.
Army Corps of Engineers asked the private disaster management firm Phillips and Jordan to manage the materials processing work at Fresh Kills. In mid-October, two waste recycling companies, Taylor Recycling Facility and Yannuzzi Disposal Services, were brought in to mechanize the sifting operations using giant machines normally used to separate waste for recycling. In essence, the setup involved placing the debris into a series of large shaking drums that broke the material into successively smaller pieces. These pieces were then sorted into debris streams of differing sizes in a way that made it easier for investigators to visually inspect all debris greater than a quarter inch. All personal effects and artifacts were examined by the FBI and sent to a "property trailer" for cleaning, indexing, and photographing. All suspected human remains were brought to the Disaster Mortuary Operational Response Team (DMORT) forensic anthropologist who was on duty at the Fresh Kills facility. Because there were many restaurants and food service units in the towers, forensic anthropologists often found themselves examining nonhuman remains and food products. Anything larger than a quarter inch that was deemed to be human was individually bagged, labeled, and sent to OCME for further processing. Bits and pieces smaller than a quarter inch, which came to be known as the "fines," were collected and separated for long-term storage at Fresh Kills. What was done, or not done, with the fines, which undoubtedly contained some human remains in the form of bits of unidentifiable and desiccated bone reduced to ash by prolonged exposure to intense heat, also became the subject of controversy. At its peak, the Fresh Kills operation employed more than 1,500 people per day and was called the "City on the Hill" because it functioned as a standalone community with dedicated phones, water service, telecommunications, heated and air-conditioned evidence examination facilities and offices, bathroom facilities, and a large cafeteria known as the "Hilltop Café," run by the American Red Cross and the Salvation Army. The Fresh Kills recovery work was financed by $125 million from the federal government. The effort, which lasted until July 15, 2002, sifted through 1.8 million tons of debris (approximately 7,000 tons per day) and found 4,257 human remains (mostly bone fragments and small, difficult-to-identify body parts, but also arms, hands, and feet—many of them manicured, suggesting that they belonged to women); 54,000 personal effects (including jewelry, credit cards, and ID badges); $76,318.47 in cash and coin; 6 kilos of narcotics; 4,000 photos; and many airplane fragments. The team also recovered 1,358 destroyed vehicles, including 102 FDNY vehicles, 61 NYPD vehicles, and 1,195 personal vehicles. All of this took place on top of fifty years of municipal garbage covered by eighteen inches of fill dirt and giant plastic tarps. James Luongo, NYPD's incident commander at Fresh Kills, later described the inherent unpleasantness of the site. He said that the best way to cope with the stress of the mission was to learn to look at human remains without actually seeing their humanness.
He described the methane that bubbled up to the surface of the site, produced by the rotting garbage below, and the general smell of the place: "There are days that you come up here and it stinks of methane, there are days that you come up here and it just stinks of death—death has a very distinct odor. You would come up here and the entire hill would just stink, it would reek, of death." He also noted that city officials had to ask the U.S. Department of Agriculture to devise a plan to safely keep away the hundreds of seagulls and turkey vultures that circled the remains, eager to grab a bit of flesh, human or otherwise. "It's not your nice peaceful view of sitting along the beach, you know, watching seagulls fly by, these are nasty birds, and they were very aggressive—we knew what piles were rich in body parts by the way the seagulls descended on it, and you would have to fight the seagulls for the human remains." In the end, the Department of Agriculture settled on regular blasts of fireworks to scare away the birds. These explosions were just another layer of noise on top of the rumble of heavy construction equipment and the constant beeping of trucks backing up.

An Affront to Human Dignity

Many families were shocked by the decision to relocate the debris to Fresh Kills. The prospect of human remains being placed on a truck or barge and then dumped at a location that once handled city garbage was horrifying to these relatives. Those in the firefighting community who disapproved of the city's handling of the cleanup of Ground Zero argued that Fresh Kills was where the real "bagging and tagging" of victims was taking place. They argued that the rituals and prayers that provided respect and dignity for remains recovered at Ground Zero, whether of uniformed service members or civilians, did not exist at Fresh Kills. A group calling itself the World Trade Center Living History Project argued that it was

only at the WTC site that clergy would be called to join the recovery workers for any discovery of human remains, for important moments of respect, dignity, and prayer. Faced with inhumanity, recovery workers, most importantly, transformed barbarism into civilized behavior for the nation and the world. In these few moments there was acknowledgment of our connection to these approximately 20,000 . . . nameless, faceless parts and wholes of humans who deserved respect. No such rituals existed at the Fresh Kills Dump. No clergy, no honor guard, no flag-draped ceremony existed as a dignity at the garbage dump unearthing of friends, family or complete strangers.

In the Living History Project's view, the recovery of human remains at Fresh Kills was not a symbol of an effective response to the World Trade Center tragedy or proof that the systems put in place by the city were efficiently locating these remains. Rather, it was a failure to uphold the dignity of the victims and their families. Living History Project members argued that archaeologists would never consider recovering the remains of slaves or native peoples in the way that 9/11 victims were being recovered. As such, the victims of 9/11 deserved the same respect that society accorded to other dead people—even those who died decades or even hundreds of years ago. The Living History Project was only one such voice. Families were also beginning to voice serious concerns about the conditions at the landfill.
Patrick Cartier, father of an apprentice electrician who was killed in the attacks, reported upon visiting Fresh Kills that it was akin to "entering hell itself." Officials associated with the Fresh Kills operation disputed such claims. While they could not guarantee that they would recover every single remain, they felt that their main objective was to recover as many as possible. They took the job seriously and did not make light of their operation in any way. NYPD Chief of Detectives William Allee told a group of reporters during a tour in mid-January 2002, "This is not a garbage dump; it's a special place. It is sacred ground to all of us. We're doing God's work and I feel honored to be here."

World Trade Center Families for Proper Burial

World Trade Center Families for Proper Burial (WTCFPB) held its first meeting on December 3, 2002, and was formally chartered as a New Jersey not-for-profit corporation on November 10, 2003. Kurt and Diane Horning, whose twenty-six-year-old son Matthew was killed in the attacks, founded the organization along with several other families. Matthew worked on the 95th floor of the North Tower as a database administrator for the insurance company Marsh & McLennan. He worked on a team that ensured that the company's data could be quickly recovered in the event of a disaster or catastrophe. Matthew's wallet and a piece of his skull bone had been recovered through the Fresh Kills operation, and the Hornings believed that more of his remains were "buried amidst the debris at Section 1/9 at Fresh Kills." Diane Horning was on a tour of the Fresh Kills facility led by FBI Special Agent Richard Marx in July 2002 when it hit her that the fines, the bits of material less than a quarter inch, "contained Matthew's and others' human remains." Indeed, according to her affidavit, it was Special Agent Marx who informed her that the fines contained at least some cremated remains. The primary goal of WTCFPB was to ensure that the fines were separated from the rest of the World Trade Center debris and were not permanently intermixed with garbage at the Fresh Kills landfill site. The group wanted this material to be placed in special containers and ideally returned to the World Trade Center site, where it would be permanently interred at the place of death. If this outcome was not possible, then they hoped that the fines would be placed in a suitable "cemetery and memorial garden which serves no other purpose than to bury our dead with respect and to provide the families and friends with a place for quiet reflection." Fresh Kills Landfill, in their view, simply did not fit the bill. It was, inverting Lincoln's description of the cemetery at Gettysburg, "unconsecrated ground." WTCFPB emerged from a support group for families who had lost loved ones in the September 11 attacks. None of the core members had been politically active before September 11. They came from ordinary, middle-class families from New York and New Jersey and were not prepared for the shock and tragedy that befell them that day.
Early on in their advocacy work, the Hornings and their partners spoke primarily in terms of dignity and respect for the dead, but as they became more savvy activists, and enlisted the help of noted civil libertarian lawyer Norman Siegel, their language shifted to the more legalistic concepts of property and ownership. The core of the group consisted of eleven families, but on December 20, 2003, WTCFPB created an online petition that asked members of the public to support the organization's cause and to note whether one of their loved ones had perished in the 9/11 attacks. Based on the results of this survey, which approximately 62,500 people signed, WTCFPB claimed that it represented around a thousand affected families. In addition to the fate of the fines, WTCFPB members were also concerned that, in the first month of the operation, debris that potentially contained human remains was "bulldozed" over the edge of the hill and mixed in with household garbage in a location in Section 1/9 known initially as "the North Field" and then "Area A." The Department of Sanitation admitted that it did this in order to make room for newer loads of debris. WTCFPB contended that this debris was "suddenly subjected to foraging by droves of seagulls attracted by the human parts contained in it." Officials in charge of the Fresh Kills investigation insisted that the material had been searched in the first few weeks of the operation, and that it was excavated and resifted at the end of the operation. WTCFPB contended that this was untrue. Analysis of bone material recovered from the rooftops of buildings around Fresh Kills (i.e., bones removed by seagulls from the site or from barges and trucks arriving at the landfill) consistently demonstrated that none of these bones were human in origin. In their complaint, WTCFPB acknowledged that Taylor and Yannuzzi did mechanically sift some of this material and "recovered numerous body parts and human remains," but that they were neither authorized nor instructed by agents of the city to do so. They claimed that approximately 414,000 tons of debris went unsifted, leaving "at a minimum hundreds of human body parts of victims of the World Trade Center" buried at Fresh Kills. In a later affidavit, James Taylor, the retired chairman of Taylor Recycling Facility, provided his own calculation, claiming that approximately 223,000 tons of debris was never sifted at Fresh Kills. According to his testimony and the affidavit, Taylor believed that the material bulldozed off the side of the North Field/Area A part of Section 1/9 was never fully resifted and therefore contained human remains that had not been recovered. "This is why these family members are so unsettled and why Diane Horning doesn't sleep at night," he said. "It breaks my heart to know that Diane is right about the issue of September 11th human remains." WTCFPB negotiated with the city over the material at Fresh Kills for many years. Horning and other family members regularly asked FBI and City of New York officials about the status and fate of the fines. In all of their conversations, officials at Fresh Kills reassured families that the fines were being kept separate from household waste and the rest of the debris from Ground Zero.
WTCFPB claimed that Fresh Kills director Dennis Diggins assured them in August 2002 that ground asphalt millings—recycled road material—had been put down where the fines were being stored in order to provide an additional barrier between the material and the waste below the topsoil. While onsite during a July 2002 visit, FBI Special Agent Richard Marx told them that they "could request the fines [for proper burial] when the recovery effort was concluded." WTCFPB families wrote to the Office of the Mayor and the Department of Sanitation after their conversation with Marx to find out what the city planned to do with the fines, and to ask that they be treated with respect and properly buried. Mayor Bloomberg's liaison to the families of 9/11, Christy Ferer—whose husband Neil Levin, the executive director of the Port Authority, was killed on 9/11—responded quickly to the families, assuring them that the material was being maintained with reverence and that one plan would be to use the debris as fill material in the redevelopment of the World Trade Center site, but that doing so would be costly. She noted that she was "reaching out to private money to see if we can finance its move if and when that becomes an option." It is interesting that although the families were primarily interested in the fines, estimated at approximately 360,000 to 480,000 tons, Ferer referred to all the debris sent to Fresh Kills, including material that most likely contained no human remains. In a clear effort to empathize with the families, she wrote, "The closing of Fresh Kills [i.e., the conclusion of the sifting operation there] has rekindled raw emotions in all of us. It is so painful to imagine the unidentified remains of our loved ones intermingled with the debris that remains at the landfill. I am sensitizing City Hall to the fact that this debris cannot be forgotten and they have to plan for its future." The families, however, had reason to distrust Mayor Bloomberg. On numerous occasions, both private and public, Bloomberg expressed a lack of personal interest in funeral rites and prevailing notions of proper burial. Beginning in 2002, WTCFPB requested a meeting with Mayor Bloomberg to ask him to support their efforts to remove the fines from Fresh Kills and provide what families considered to be a more fitting resting place. According to Diane Horning and news reports, however, when Bloomberg finally met with the Hornings in April 2003, he was dismissive of their request and seemed puzzled by their continued passion about the human remains at Fresh Kills. He told the families that he had little interest in the corporeal remains of humans after death. He told them he intended to leave his own body to science and that he had visited his father's grave only once. When Kurt Horning asked him if he would be bothered if his own father's remains were kept at Fresh Kills, he said he would not. In the end, Bloomberg refused to endorse their mission, and when the Hornings pointed out that many other politicians, including senators Schumer and Clinton, had done so, he responded to the effect that it is very easy to get important people to write letters on your behalf, but much harder to get them to do something for you. In other words, Bloomberg felt that he was just being honest when he said he would not actually do anything to help them. Bloomberg cut the meeting short in order to deliver a civic award at halftime of a New York Knicks basketball game at Madison Square Garden.
Bloomberg, it turns out, was only partially right in his assessment of the intentions of politicians. WTCFPB families succeeded in working with members of the New Jersey legislature to pass a bill in December 2003 ordering the Port Authority to move the fines out of Fresh Kills, which was signed into law by New Jersey Governor James McGreevey. But because New York State never passed a similar law, none of the New Jersey law's provisions were ever enforced. Despite WTCFPB's efforts to sway Governor Pataki and other New York State officials, a New York version of the bill never made it through the state senate. Ferer, while being sensitive to the needs of families, felt strongly that those needs had to be balanced against the needs of businesses and residents of lower Manhattan, who were eager to get on with their work and lives. Hence many of the World Trade Center families came to believe that her role was to present the mayor's needs to the families rather than vice versa. In an editorial in the New York Times in 2002, Ferer made a point of contrasting the "silent majority of families" who sent her messages that were "moderate" in tone, mostly about wanting to get on with their lives and never return to Ground Zero, with those presumably more extreme families who were obstructionist and belonged to organized groups that made demands to the city about what should and should not happen at Ground Zero. The silence of families of 9/11 victims, however, did not mean that they approved of what was taking place at Fresh Kills. For many relatives I spoke with, the idea that a loved one's remains were in the landfill was simply too horrendous to think about. Monika Iken, who has been perhaps the most important family advocate in the creation of the memorial at Ground Zero, told me in 2011 that it was an issue that she did not have the stomach to deal with. While she was deeply appreciative of the work of WTCFPB, and supported it in any way she could, she avoided thinking about the possibility that her husband Michael's remains were there:

Thank God somebody else is dealing with that. . . . The whole idea that he is in Staten Island and not over here [in lower Manhattan] and he is in a dump—the whole thing throws me over the edge. I cannot even conceptually put myself there. . . . I have never been there, I have never seen it, nor do I want to. I cannot handle it. So what I do is avoid it so that I can't—my Michael is not there. That is how I look at it.

For many families whose loved ones were killed on 9/11, the pronouncements from Bloomberg and his staff suggested an insensitivity that was both inappropriate and emotionally damaging. Many family members who were not passionate about the human remains issue—because they did not believe that their loved ones were in the fines at Fresh Kills, because it did not matter to them, or because they simply could not handle thinking about the problem—felt that the city was treating WTCFPB families unfairly. Nikki Stern, whose husband was killed on 9/11 and who was the executive director of Families of September 11, argued that "more compassion and more respect could have been shown to these families early on. The issue is not whether I personally believe that part of my husband is at Fresh Kills. . . . The issue is, there are family members who do believe that. How are they being treated, and how were they treated?
People who have this belief are not crazy and should not be treated as such." In 2004, Horning and other family members at the core of WTCFPB began to attend New York City Planning Commission hearings in order to monitor the city's plans for creating a park at Fresh Kills, to voice their opposition to keeping the fines at the landfill, and to demand a new search of the material they argued had never been properly sifted. WTCFPB also sought to make it easier for families to visit Fresh Kills to pay respects to their deceased loved ones. It is unlikely that many families would have wanted to visit the Fresh Kills site, but WTCFPB felt that they had a right to do so, and that it was far too difficult to get to the site. There was no public transportation to Fresh Kills, visitors had to request official permission, and an escort from the Department of Sanitation was required. Visitors were also required to sign a waiver excusing the city from liability should any injuries occur during the visit. For Horning, though, it was the starkness of the place that made the visits so difficult: "The current mourning site is not a place to bring children, the infirm or the elderly. The walk itself is difficult as the soil erosion can be as deep as four feet. There is no feeling of solace or closeness to your loved one. It is a formidable place. Some days the smell of methane is too much to bear. During visits, some of us have found old tires, carpeting, construction fill, pieces of blistered steel, glass, bolts, and fire hose, shoes, and tiling at the site. It is simply devastating." WTCFPB families felt that Bloomberg and his administration saw them as a nuisance and an impediment to progress in the city's recovery rather than as an interest group with a legitimate concern. For the men and women of the law enforcement community who had devoted ten months of their lives to searching for human remains at Fresh Kills, such claims and allegations were frustrating and offensive. In interviews and in his affidavit, NYPD Fresh Kills Incident Commander Luongo noted that he personally witnessed the collapse of the towers when responding to the attacks and understood the significance of the operation for the families. He stated that if there had been any reason whatsoever to question the quality of the search for human remains, he would have "halted the operation immediately." Luongo further said that no workers at Fresh Kills had ever complained to superiors about improper searching, and that he believed they would have done so given the number of their colleagues who had perished on 9/11. He also noted that search personnel were briefed on proper procedure before the start of every shift, and that forensic experts regularly reminded workers about what to do in the event that suspected human remains were found. Luongo explained that he personally witnessed any material searched by hand before the equipment was available being excavated and resifted with the Taylor and Yannuzzi machines. He explained that Department of Sanitation workers used GPS to make sure that they were excavating the correct material and searched below the officially recorded bottom of the debris field in order to ensure that they did not miss anything relevant. Finally, he pointed out that any material that was not subjected to the Taylor and Yannuzzi machines was simply too large to fit in them. This debris was searched carefully by hand.
Therefore, there was no WTC debris that could have been screened but was not. These claims were confirmed in great detail by Dennis Diggins, director of Fresh Kills for most of the search effort, who supervised the excavation and sifting of the early debris in May and June 2002, and by Scott Orr, who led Phillips and Jordan's operations at Fresh Kills and was stationed there full time from October 2001 to August 2002. WTCFPB sought permission to take drill bore samples at their own expense, but the city did not allow them to do so. In WTCFPB's view, these samples would have allowed them to definitively prove their case, but the city believed such testing was unnecessary given the quality control measures already in place. Thus, it is impossible to know for certain which side is correct. Beginning in early 2002, the Hornings and other families concerned about the fines at Fresh Kills heard rumors that the fine material less than a quarter inch was only separated temporarily and that the Department of Sanitation "apparently recombined the fines with debris from the World Trade Center and dumped them into the landfill." They also claimed they had discovered that no millings had ever been laid down to separate WTC debris from fifty years of municipal waste. Additionally, they took issue with the argument that it would be expensive to move the fines back to the World Trade Center site, noting that the city's estimate for moving the fines exceeded what the entire sifting effort at Fresh Kills ultimately cost. According to the concerned families, the city had "exaggerated the difficulties and cost of relocating the fines in an effort to thwart [their] efforts to ensure a proper and decent burial for their relatives." Later, the families were concerned that the emphasis on speed above all else might have infected the operation at Fresh Kills the way it had created incentives to clean up Ground Zero as quickly as possible. Perhaps most fundamentally, there was a disagreement between the city and WTCFPB about the extent to which the fines ought to be considered human remains. For the city, every reasonable effort had been made to retrieve any bit of human tissue or bone larger than a quarter inch. While it was undeniable that some fraction of the fines was human in origin because of the crushing nature of the collapse of the towers, this did not mean that all of the fines, which were mostly pulverized concrete and other building material, needed to be treated the same way one would treat a body or a body part. At some point, city officials believed, one had to decide that enough was enough and accept that some small quantity of human remains would end up in Fresh Kills. For WTCFPB, the existence of human remains in the fines represented not just some abstract presence, but rather the potential presence of their loved ones. The fines themselves became corporeal. As James Taylor, the chairman of Taylor Recycling Facility who became a vocal advocate for WTCFPB, stated in an interview with the New York Times, "You bring tears to my heart when you make me talk about this [the fines], but [are] there human beings in that powder material? Absolutely. There's 2,749 spirits theoretically in that fines material." As such, it was imperative that this material be treated reverentially and accorded the same respect that one would give to human bodies or body parts. Given the comingled
nature of the human remains that were present in the fines, WTCFPB wanted the entire volume of fine material to be removed from the landfill to prevent the permanent interment of the possible remains of their, or another family's, loved ones in such a disrespectful environment. The fact that the landfill would one day be turned into a park did not comfort these families. Fresh Kills' past life as a garbage dump could not be erased by landscaping and environmental remediation. In an e-mail message to Diane Horning, Diana Stewart, whose husband Michael was killed in the collapse of the North Tower, linked the reverential treatment of remains at Fresh Kills to an act of defiance against terrorism itself:

For those like us who received partial remains, and the many who have received none, the remains at Fresh Kills require a proper, American burial, with all the dignity, ceremony and prayer which would accompany a burial if we could identify ash from ash and put a name on each. These unidentifiable remains are literally our kin, loved ones, fellow Americans. And for those who were none of these, we still embrace them equally, as our country always has, giving equal opportunity to those who come here to join our way of life, in peace. Please give all these unclaimed remains the dignity of burial, and I will be among the mourners who will stand to bear witness, express eternal gratitude to those who worked to free the remains from the rubble and, most importantly, stand in defiance of those cowardly murderers who thought these lives unworthy . . . thereby proving that in our country, every life matters.

Thus, for Stewart, any unidentified human remains were the collective property of each and every family that lost a loved one in the WTC attacks, and it was the duty of these families and the nation to treat each and every bit of human remains as if it belonged to a known victim of the tragedy. On September 29, 2003, Mayor Bloomberg unveiled long-term plans to turn the landfill into an expansive park and nature preserve with both recreational and natural spaces, as well as a memorial of some sort to the victims of the World Trade Center attacks. Implicitly acknowledging that there were human remains at the site, he suggested that a few handfuls of the fines would be brought to Ground Zero and incorporated into the reconstruction of the site. As plans for the site developed, it became clear that the Bloomberg administration envisioned two memorials at Fresh Kills—one involving the towers, which would be represented by two long side-by-side earthworks the length of the towers themselves, and another for the victims of the attacks, represented by a mound containing the fines. While administration officials viewed this as a dignified way to respect the dead, the families of WTCFPB disagreed. They simply did not want any human remains entombed atop five decades of New York City garbage. On March 24, 2004, the City Planning Commission held a meeting about the plan to turn Fresh Kills into a park. Several WTCFPB families attended that meeting and stated that they were against any attempt to turn the human remains into a permanent feature of this park. On July 19, 2004, Jonathan Greenspun, the head of the mayor's Community Assistance Unit, informed WTCFPB that the fines would not be moved from Fresh Kills, even though he knew this decision would disappoint the families. Greenspun did, however, welcome them to participate in the planning of a memorial at the park.
Although the city claimed that no alternative sites were available to store the fines, the families contended that they knew of at least two appropriate sites: Liberty State Park in New Jersey and a nonlandfill site in the Fresh Kills area at Muldoon Avenue. Later they would suggest Governors Island as a third alternative. In addition to their general concerns over leaving the fines at Fresh Kills, WTCFPB families were concerned that their regular visits to Sections 1 and 9 of the Fresh Kills landfill demonstrated that these areas were not being treated with reverence—rather, they were littered with "chunks of steel, shoes, tires, carpets, portions of storm drains and other garbage. In addition, the area bears signs of erosion and tracks from large construction vehicles." For members of the Bloomberg administration and others not involved with WTCFPB activism, such claims were "misconceptions" about what the landfill looked like more than two years after the recovery mission ended. They noted that only "clean" construction fill was used to cover the rubble from the WTC. From August to October 2004, WTCFPB attended several City Planning Commission meetings, requesting that all potential human remains be recovered before the landfill was turned into a park. They also notified the commission that those in charge of the recovery effort were not cleaning up the site, but merely covering it with an additional layer of dirt to conceal the comingling of the fines with garbage and other debris. Beginning with their October 2004 Notice of Claim against the city, WTCFPB changed their tactics from negotiation about dignity and respect for the dead to assertion of familial and property rights over the unidentified remains of their relatives at Fresh Kills. In a March 11, 2005 letter to Mayor Bloomberg summarizing the families' position, attorney Norman Siegel wrote:

Federal courts have recognized that next of kin have property rights in the remains of deceased relatives that are entitled to due process protection. . . . The law makes no distinction between cremated remains and an intact body. . . . What is of immediate importance is a determination of whether the City, and its representatives, has afforded [victims' families] an appropriate level of due process protection in deciding that Fresh Kills is the only appropriate place where the remains can be permanently kept. Based on our research, we conclude that the City has systematically ignored the civil and constitutional rights of our client.

Siegel then went on to accuse the city of not dealing with the concerned families "in good faith," because they were not always invited to meetings about Fresh Kills or given accurate information about what was going on there, and, further, of lying to families about the practicality of moving the remains. If the city did not address the families' concerns, grounded in the legal right to control the disposition of the remains of direct kin, the families would be forced to pursue legal action against it. While some media outlets covered the possible legal action sympathetically, the New York Post used it as evidence that the families of WTCFPB were no longer engaged in a rational debate over the remains. An editorial in the newspaper noted that "the city's proposal is sensitive and appropriate—not perfect, but hardly reprehensible. But the families are in no mood to compromise—they consider their demands non-negotiable.
Following through with a lawsuit may prove emotionally satisfying for the families, but it's unfair. The city has behaved both honorably and respectfully. It's long past time to move on." In another editorial, Andrea Peyser noted that "these relatives just can't let go. And perhaps they never will. The latest fight to be waged by 9/11 families . . . threatens, I fear, to weaken their credibility in the eyes of a public that is, frankly, growing weary of them and their many complaints." When intense discussions failed to produce an outcome that satisfied Horning and other WTCFPB families, they filed suit against the City of New York in the United States District Court for the Southern District of New York in August 2005. This court was responsible for hearing all litigation related to the World Trade Center attacks as a result of a provision in the federal Air Transportation Safety and System Stabilization Act. Their legal strategy rested on the claim that families of the victims had a property right in the remains of their loved ones under both federal and state law. Specifically, they argued that next of kin have the right to "take possession of the remains, to make arrangements for a proper and decent burial, and to give instructions regarding the disposal of remains." These rights, in their view, extended from intact bodies all the way to "body parts, bone fragments, small tissue particles, and cremated remains." They further claimed that their due process rights had been violated because city authorities (1) misled them about the treatment and storage of human remains at Fresh Kills, particularly in the first month of the operation, and (2) "set in motion a process that will make the remains a permanent fixture of Fresh Kills" by "bulldozing topsoil over the area containing remains." Finally, they noted that New York State law prohibited the interment of human remains at the Fresh Kills landfill. The case would ultimately be heard by Judge Alvin K. Hellerstein, who presided over nearly all litigation emerging from the September 11th attacks, and who went to great lengths to empathize with those who had been harmed in some way by the attacks or the recovery effort. In the months after WTCFPB filed suit, however, Hellerstein ordered the families and the city to try to come to some agreement in order to avoid a lengthy and emotionally damaging trial. He even arranged for a series of less formal conversations to take place in his chambers in an effort to avoid turning the issue into a purely legal dispute. He recommended that the families think more symbolically about the fines, since it was unlikely that the entire amount would be transported from Fresh Kills. Perhaps, he suggested, a portion of the fines could be brought to another site as a symbolic memorial to those who had died on 9/11. For the families, however, the notion that the fines be treated symbolically was akin to accepting the idea that their loved ones were also only symbolically dead. Their loss was total, and they were not going to compromise on what they perceived as the right way to handle their loved ones. Indeed, after Hellerstein described the positive attributes of a park at Fresh Kills in open court, Laura Walker, the wife of a victim of the attacks, stood up in the gallery and chided the judge for even suggesting that the remains be kept at the landfill. "You should be ashamed of yourself," she told him.
In keeping with his efforts to find some way of resolving the dispute without formal litigation, Hellerstein responded to her outburst not by asking that she be removed from the courtroom, but by apologizing to her and other families for their pain. Despite Hellerstein's demeanor and desire to reach a compromise, one could not be found at this meeting or in subsequent discussions. Disappointed but undeterred, WTCFPB moved forward with collecting a wide-ranging mix of evidence for a full-scale legal case that would prove negligence at, and mismanagement of, the Fresh Kills facility. In an affidavit submitted as part of the WTCFPB suit, for example, a unionized Taylor employee at the site named Eric Beck claimed that Phillips and Jordan, the private contractor, constantly told workers to "keep the tonnage up" (i.e., the amount of material sifted per day) and to make the belts run as quickly as possible. At the same time, he said that NYPD officers examining the debris as it went down the conveyor belts were constantly asking workers to slow the process down so that they could adequately search the material that passed them. In another affidavit, retired NYPD officer John Barrett, who spent a few shifts at Fresh Kills during the first six weeks of the operation (before mechanical sifting began), claimed that Sanitation Department workers took material away from him before he had had the chance to fully examine it, and that NYPD officials at the site failed to properly investigate his allegations. Barrett also claimed that the steel beams removed from the debris for recycling were not always checked for human remains and other artifacts. In cases where they were actually searched, the location of any relevant material recovered from them was not clearly recorded. This made it difficult to determine whether all, some, or very little of this material was actually searched. Theodore Feasor, the director of mechanical operations at Fresh Kills, was responsible for ensuring that the recovery effort had the necessary equipment. He was concerned that material recovered from the debris was not being rinsed before inspection, a step that would have made it less likely that human remains and other artifacts were missed because they were difficult to see. He stated that his suggestion to do so was rejected by his supervisors at an October 2001 meeting because it would cost more and slow down the process. He also doubted that the material deposited in North Field/Area A was ever resifted at the end of the search effort. As such, he was "absolutely convinced" that "hundreds of human body parts and human remains" had gone undiscovered. Most shockingly, Eric Beck claimed that he observed New York Department of Sanitation workers "taking . . . fines from the conveyor belts of our machines, loading it onto tractors, and using it to pave roads and fill potholes, dips, and ruts" at the site. Such a claim must have been upsetting to families concerned that these fines contained fragments of their loved ones—a possibility that was given official backing in a 2003 letter from Charles Hirsch to Diane Horning stating that he was "virtually certain that at least some human tissue is mixed in with the dirt at the Staten Island landfill." Beck's credibility, however, was at least partially undermined by his claim that the operation recovered approximately 2,000 bones per day during the first few months of the job.
Given that only 4,000 bones total were recovered from Fresh Kills (compared to approximately 16,000 at Ground Zero), his claim either represents a faulty memory or a misunderstanding of what was happening at Fresh Kills during the year he was there. At the rate Beck described, even two months of work would have yielded well over 100,000 bones, orders of magnitude more than the total actually recovered. One reasonable explanation is that he mistook animal products for human remains.

In a supplemental affidavit for the defense, Dennis Diggins vehemently denied this claim and others made by Beck, Feasor, and Barrett. He claimed that all three overstated their responsibilities at Fresh Kills and made allegations based on incomplete knowledge of the search process—in other words, that none of these witnesses had any real credibility. In particular, he said the men failed to realize that not all of the material that bypassed the Taylor and Yannuzzi sifting machines was steel; some debris simply could not be broken down into small enough pieces, and these chunks were carefully searched by hand.72

With respect to Beck's claim that the fines were used to pave roads and fill potholes at Fresh Kills, Diggins argued that the "accusation is such a perversion of the truth, and is so far-fetched, that it should be discounted completely without further comment. However, because it is so offensive to me personally, and to others at DSNY, I feel I must respond to it." Diggins went on to note that there were two sources of millings and crushed stone for road projects at Fresh Kills—both of which provided more than enough material to construct and maintain any and all roads necessary for the search activities.73 With respect to Feasor's complaint about the failure to wash down debris with water, Diggins noted that the idea was rejected specifically because OCME stated that such wetting down would further compromise already damaged DNA and make it even more difficult to identify the source of the remains. Cost and speed were simply not issues in the decision. In an effort to dispel notions that the North Field/Area A was not resifted toward the end of the recovery effort, Diggins supplied minutes from several meetings that made explicit mention of the excavation and sifting of this area. There were also mentions of resifting material underneath the sifting equipment to ensure that nothing was left behind in these areas once the operation was complete.74

No matter how much evidence was brought to bear on the recovery operations at Fresh Kills, however, the WTCFPB families were unconvinced. In their "Statement of Undisputed Material Facts in Opposition to Defendant's Motion for Summary Judgment," for instance, they met each and every claim made by the city and their contractors with the statement "Plaintiffs lack information sufficient to form a belief as to the truth of these statements." They continued to reassert the veracity of the statements made by workers who had submitted affidavits on their behalf. They neither directly responded to the counterarguments made by the defendants nor mentioned any of the evidence provided by the defendants to rebut the claims made by Beck, Feasor, and Barrett (most obviously the meeting minutes making it clear that at least some excavations of the North Field/Area A were taking place throughout June 2002).75
They did request additional operational meeting minutes besides those provided by Diggins, but said they never received them.76

The WTCFPB families were seeking something greater than facts about the recovery process—they were looking for justice for their loved ones and truth from a city government that seemed to want them to go away. Their experience of being mothers, fathers, brothers, sisters, and spouses to the dead was the issue that needed to be addressed—the concerns they had with the Fresh Kills sifting operation were secondary to, and in service of, this concern. Indeed, in explaining the "heart of the issue" that defined WTCFPB's existence, the organization quoted from an "unnamed child" whose father had died in the World Trade Center attacks: "My Daddy is not garbage."77 They wanted to be treated with respect and they wanted their wishes to be honored. For Bloomberg and the city, though, they were but one constituency—albeit a constituency that had become skilled at using the media to turn the Fresh Kills issue into a "political football."78

Oral arguments in the case took place on February 22, 2008. Judge Hellerstein began by noting that he had tried hard to broker a negotiated agreement between the two parties—but that as a judge he simply could not meet both sides' demands. Throughout the proceedings, and in his final ruling, he made it clear that the purpose of the law is to help society function the best that it can, not to enact perfect justice. He opened the hearing by stating, "To those who lost their loved ones in 9/11, in a very deep way there can never be justice. There can never be the return of a son or a daughter or a husband or a wife or a lover or a child."79 On the other hand, Hellerstein noted, the people responsible for the cleanup had two jobs: to remember and respect the dead, but also to repair the hole in the city and thus enable it to continue to function. Hellerstein stated that when a compromise could not be reached, "we come back to what the court of law can do—to read a complaint, to evaluate an answer, to consider motions and briefs and legal discussions and come back to issue a ruling which the Court is sure will satisfy neither the plaintiff nor the defendant."80

The hearing first addressed the question of WTCFPB's standing to bring the case and whether they actually represented the thousand-plus families that signed the petition. Hellerstein wanted to grant the organization standing to raise the question of whether they had a property right in the remains at Fresh Kills as next of kin, but in order to do so, they would have to show that the city had violated their constitutional rights. The city's lawyer, James E. Tyrrell Jr., vehemently challenged the standing of the group. In addition to the issue of whether WTCFPB could be definitively linked to particular remains at Fresh Kills, Tyrrell also questioned the extent to which the group actually represented the true voice of families of victims, given that the vast majority were not actually members of the organization. The thousand-plus families represented had simply signed an online petition. Tyrrell wondered whether there was an equally large number of families who "do not want the hallowed ground on which the [fines] are located [i.e., the Fresh Kills landfill] to be disturbed," but who had not yet been surveyed.81
He also questioned whether the court had the authority to order the City of New York to redo its sifting operation and move the fines to a new location as dictated by a small group of families. Tyrrell further noted that, although the claims being made in this case were individual, the relief requested was collective.82 For Tyrrell, the fact that WTCFPB did not have a large body of active members was damning, but for Siegel, the lawyer for the claimants, the organization's size was merely a byproduct of its inability to poll all 2,749 WTC families because their contact information was tightly controlled by the city. Besides, Siegel noted, New York is a "town of big mouths" and families who did not belong to WTCFPB would speak up if they disagreed with the lawsuit.83

Hellerstein set out three tests that needed to be passed in order for him to rule in favor of WTCFPB: "The first is the existence of a constitutionally protected property or liberty interest; second, the deprivation of the interest by the defendants; and third, that the deprivation was without due process of law."84 He then pointed out that WTCFPB was claiming a property interest in the remains, but that there were no particularized remains in this case. "I mean, that's a tragedy," Hellerstein lamented, "but it is also the constitutional hurdle."85 Siegel countered that his clients lacked a definitive property claim because of decisions made by the defendants over which the plaintiffs had no control. Hellerstein stated that there had to be actual remains, not a theoretical or speculative claim that such remains exist. Siegel repeated that the lack of identifiable property was a direct result of decisions made by the state that "shock the conscience" and therefore require redress.86

Hellerstein did not find this line of argument plausible. He noted that the city felt it had no choice but to move the debris from the World Trade Center site to Fresh Kills. As such, it could not be found guilty of violating the constitutional rights of families of victims of the attacks. For Siegel, the imperfect sorting in the first thirty-two days, combined with the claim that fines were being segregated while they actually were not, amounted to a constitutional violation. For Hellerstein, though, the city was performing a governmental function in clearing the streets of debris and sorting it out at a waste disposal facility. However inadequate the city's conduct may have been, it did not rise to the level of a constitutional violation.87

Whether WTCFPB had standing or not, Hellerstein seemed eager to hear their case in full, partly as an act of respect for the families and their loved ones and perhaps in an effort to end the controversy, which was becoming increasingly public. The question that needed to be answered was whether the fines were human remains, as the plaintiffs alleged, or merely the "undifferentiated material that slipped through a quarter-inch sieve," as Hellerstein rather crudely put it.88 In Hellerstein's view, the existence of human remains in the fines, however likely it may have been, was still speculation.
He said, "the City regarded it as debris and the City regarded it as something that had to be dealt with and the City may well have been wrong and may well have been callous and may well have been indifferent but did they commit a constitutional violation of due process law?"89 In his argument, Tyrrell reminded Hellerstein that in order for anyone to have a property right in remains, they must identifiably belong to the person seeking control. In this case, the plaintiffs were not even asking for forensic identification of the remains. This suggested, according to Hellerstein, that they "feel there are remains in the [fines] but the remains are not identifiable."90 Tyrrell and Hellerstein both agreed that this made the case unique—that people were asking for control over human remains that might, but did not definitively, belong to their loved ones. Further, they were doing so on behalf of collective interests rather than their own. Tyrrell also added that WTCFPB had to prove deliberate bad intent on the part of the state in order to demonstrate a constitutional violation in this case.

As the oral arguments came to a close, Hellerstein signaled that it would be very difficult for him to rule in favor of the plaintiffs (he noted that the "chances to dress up this argument in constitutional clothing does not look too bright"), and that he knew his ruling would not bring solace to the plaintiffs. In essence, he asked them to go back to the negotiating table, because he was not able to give them what they wanted. "I would like to see more effort renewed in coming together on some kind of settlement," he told both sides. "I would like to think," he concluded, "that if we can resolve this issue amicably we would better achieve everybody's purposes than to have another opinion come down in a fat law book and be appealed and appealed again and forever grieved on."91

Hellerstein began his written opinion in the case with the following description:

The terrorists of September 11, 2001 murdered 2,749 people in Towers One and Two of the World Trade Center. Approximately 1,100 of the victims perished without leaving a trace, utterly consumed into incorporeality by the intense, raging fires, or pulverized into dust by the massive tons of collapsing concrete and steel. Full bodies were recovered for only 292 victims, and partial remains were found for another 1,357 victims—sometimes a fragment of bone or a possession, sometimes more. City workers and contractors have inspected every bit of debris and, using sophisticated equipment, sifted the particles of debris to the extent of one-quarter inch of diameter, the space between the concentric circles of a small paper clip, with no further success. All human remains that could be identified, were identified. Only dust remains.92

In the end, Hellerstein agreed with the city that there was no clear property right in this case because any remains at Fresh Kills were unidentifiable. He did not accept the plaintiffs' claims that the recovery effort at Fresh Kills left hundreds or thousands of identifiable human remains in the landfill. While the city may not have been as careful as it could have been, its actions fell far short of conduct that shocks the conscience.93 Further, any violation of the plaintiffs' religious sensibilities by the city government was unintentional and a byproduct of its desire to get lower Manhattan back on its feet.
In what can only be described as a lack of sensitivity to how WTCFPB families would read and hear the language in his opinion, Hellerstein imported his statement from oral arguments that the fines were "an undifferentiated mass of dirt" to which no property rights could be attached.94 This description of what WTCFPB believed to be the remains of loved ones continued to anger those families who were most concerned about the status of the fines at Fresh Kills.

This continued pain did not surprise Hellerstein. In the conclusion of his opinion, he urged the families of WTCFPB to accept the fate of the fines and to stop fighting for their removal. He asked them to channel their anger and frustration not against the city, but toward working with government officials to build a memorial at the landfill site to honor their loved ones. "The City has a plan for a beautiful nature preserve and park at the Fresh Kills site. There is room for a memorial on a height with a view of where the twin towers stood. The energy applied to this lawsuit might well be transferred to participating in the planning of the park and memorial. What better reverence could there be than a memorial that both recalls those who died, even without leaving a trace, and points to the tenacity and beauty of life that must go on? The terrorists sought to destroy our lives and our freedom. They failed, and a memorial in such a beautiful setting can symbolize the vital continuation of our vibrant democracy."95

Obviously unhappy with the decision, and unwilling to accept that their loved ones' remains could be classified as an "undifferentiated mass of dirt," WTCFPB appealed Hellerstein's ruling, but to no avail. The United States Court of Appeals for the Second Circuit found that Hellerstein committed no errors in rendering his decision and that the city had not violated any constitutional rights of the plaintiffs. It noted that the city was forced to deal with a situation that it had never faced before, that it did the best that it could under the circumstances, that its agents acted with the best of intentions, and that any mistakes made were the result of uncertainty and the desire to quickly return the city to normal rather than a malevolent or reckless intent of the government. Further, any shock to the conscience was caused by the magnitude of the event rather than the response of local officials.96

The appellate court concluded with an echo of Hellerstein:

On a human level, plaintiffs' claims are among the most compelling we have ever been called on to consider. They have endured unimaginable anguish, and they seek nothing more than the knowledge that their loved ones lie in rest at a place of their choosing. We regret that we cannot bring them solace but we echo the sentiments of the District Court: "The events of September 11, 2001 will never be forgotten. No one knows the truth of these words more than those individuals who lost their loved ones to the attacks. In a very real sense, those individuals have suffered a wrong for which there can be no remedy. No matter the authority or power of this Court, it cannot bring back the loved ones lost, and it cannot bring peace to the plaintiffs or surcease to society's collective grief around the events of September 11, 2001."97
On June 1, 2010, WTCFPB petitioned the U.S. Supreme Court to hear an appeal of this latest ruling, but the request was rejected on October 4, 2010, ending the group's efforts to have the fines removed from Fresh Kills and the North Field/Area A debris resifted.98

Although the plaintiffs were ultimately unsuccessful in gaining a say in the fate of the fines from Fresh Kills, they continued to fight for what they perceived to be the dignified handling of remains from the World Trade Center tragedy, both at the former landfill and at the site. During the Fresh Kills fight, they attracted the attention of the media, appearing regularly in local, national, and international newspapers, magazines, and television news reports. They would ultimately join forces with other World Trade Center activists and use their collective media connections to stir up the two controversies that will be discussed in the second part of this book: first, over the meaning of the discovery of additional human remains around lower Manhattan several years after the recovery operation at the WTC had been declared over; and second, over how unidentified and unclaimed remains ought to be stored and memorialized.
Neither words nor images can adequately describe the physical chaos that enveloped the World Trade Center site on the morning of September 11, 2001, and in the months that followed. Photographs and video can of course capture the sheer destruction that occurred as the twin towers collapsed floor by floor, the tons of debris reaching speeds of 120 miles per hour as they neared the ground. First-person accounts preserve the visceral fear that people in lower Manhattan felt as they either sought to escape the World Trade Center or rushed in to try to save lives. Artifacts preserved in museums and archives can provide a reminder of the power of the falling buildings and of just how little actually survived the collapse. And missing persons posters and spontaneous memorials remind us of the lives that were lost that day. Yet, when prompted, most people who were at the site on the morning of September 11 say that the most vivid memories they have are aural: the initial sound of airplanes flying far too close to the ground, the thuds that accompanied the impacts of the planes into the towers, the thunderous rumble of the towers collapsing, and the deafening silence that enveloped the area in the time between the collapse and the start of the frantic rescue efforts. Survivors, first responders, and bystanders also point to the smell that enveloped the WTC: one that seemed to be equal parts burning wire, concrete dust, jet fuel, and death.

The details of the attacks are well known and easily accessible. Video of the event, both from the media and eyewitnesses, can be found all over the Internet—on YouTube, news and government websites, and on countless conspiracy theory web pages. Given the extraordinary global interest in 9/11, hundreds, perhaps thousands, of written accounts exist, each with particular strengths and weaknesses, differing degrees of accuracy, and radically different perspectives on the truth of the standard story of the day as told by government officials and the mass media. Readers wishing for a detailed reconstruction of the events should start with NIST's Final Report on the Collapse of the World Trade Center Towers (2005). Jim Dwyer and Kevin Flynn's 102 Minutes: The Untold Story of the Fight to Survive Inside the Twin Towers (2004) is an excellent description of what it was like for the people struggling to evacuate the towers after the attacks. Dennis Smith's Report from Ground Zero (2003) is a good account of 9/11 from the New York City Fire Department's (FDNY) perspective. A variety of firsthand narratives of the attacks and their consequences are collected in Damon DiMarco's Tower Stories: An Oral History of 9/11 (2007).

Reconstructing the Events of 9/11

At 8:46 am, five hijackers flew an American Airlines Boeing 767 into the north face of WTC Tower 1 (the "North Tower").1 Flight AA11 had recently departed from Boston's Logan Airport and was bound for Los Angeles with seventy-six passengers and eleven crewmembers. The plane was estimated to weigh approximately 283,600 pounds on the day of the attack. It struck the building at a twenty-five-degree angle at approximately 440 mph, creating a large gash in the exterior skin of the building between the 93rd and 99th floors. The North Tower suffered significant damage to the external skeleton and interior structural core, but withstood the impact quite well.
Perhaps the most significant effect of the crash, according to most engineering accounts, was that the impact knocked off the poorly installed insulation that was supposed to protect the steel columns at the core of the building, and the floor support system, from fire. This turned out to be a critical problem. The remaining undamaged steel in the building began to weaken when the jet fuel, and then office furniture, paper, and other contents of the building, burned.2

The first firefighters arrived on the scene four minutes after the impact, but they were already powerless to defeat the flames and smoke in the tower. They could not carry much of their equipment up to the fire because the elevators were out, and the building's fire suppression system had been destroyed in the initial impact. At 8:52 am, the first person was seen falling from the building.

By 9:00 am, FDNY chiefs had set up an incident command station in the lobby of the North Tower. They quickly began sending firefighters to the higher floors to evacuate people from below the impact zone and to reach as many survivors as possible above the impact zone. From analysis of communications and subsequent interviews, it is clear that FDNY personnel believed that a partial collapse near the top of the tower was possible, but they did not think the entire building would fall. FDNY officials signaled a fifth alarm, asking for additional resources to be staged at the corner of West and Vesey streets.3

At 9:02 am, as the North Tower was being evacuated, United Airlines Flight 175, also a Boeing 767 heading from Boston to Los Angeles, crashed into the south face of WTC Tower 2 (the "South Tower"). UA175 weighed less, but was traveling 540 mph, about 100 mph faster than AA11. It hit the South Tower at an angle of 38 degrees between floors 77 and 85. The angle and velocity at which the plane hit the building caused the floors above the impact zone to twist and rotate counter-clockwise. The South Tower's structure sustained more initial damage than the North Tower, and was weaker as a result. Just as in the North Tower, the plane's impact stripped core columns and supporting steel beams of insulation, making them vulnerable to the heat of the burning jet fuel and the contents of the plane and offices.4

FDNY almost immediately issued a second fifth-alarm call (this time for the South Tower) and asked for resources to be staged at the corner of West and Albany streets. At 9:47 am, a third fifth alarm was issued, with orders for units to come to West and Vesey for instructions.5 By this time, communication was starting to break down within the FDNY structure—too many people were using the wireless radio system, which did not perform well in high-rise environments. The communication system within the towers themselves had been destroyed by the impact of the planes and subsequent fires. Partly as a result of these communication problems, many of the units responding to the fifth-alarm calls went straight into the towers without checking in or receiving formal instructions.
They recognized the severity of the situation and wanted to help save as many lives as possible.6

Phone calls received by relatives and friends of surviving occupants in the two towers between the impact of the planes and the collapse of the towers (many of which have been made public), along with calls to 911 (parts of which have been released by the City of New York), suggest that the situation very quickly deteriorated from confusion but relative calm to sheer terror and knowledge of imminent death. While the first calls generally alerted recipients that something had happened at the World Trade Center, callers did not seem particularly scared and stated that they were awaiting instructions on what to do. Later calls, however, included professions of love and acknowledgement that the caller and recipient would likely not see one another again. Sadly, many calls to friends and family went straight to voice mail and were stored by phone companies for later delivery because the damaged phone systems quickly became overburdened by the volume of calls being made. Many recipients did not receive these messages until a few days after 9/11. As a result, frantic efforts were made to rescue people who were thought to still be alive before the sad realization that these messages were the final recordings of victims who had already died.

At 9:58 am, the section of the South Tower above the impact zone began to lean to the southeast, and it quickly buckled. Over the next several seconds, the building collapsed floor by floor, as the material from higher floors smashed through lower floors like a row of dominoes falling. By 9:59 am, the remains of the South Tower were spreading through lower Manhattan in a concussive wave.7

At 10:00 am, sensing for the first time that the North Tower was in imminent danger of collapse, the New York City Police Department (NYPD) and FDNY ordered all emergency responders to evacuate. While most NYPD officers heard the call, the majority of FDNY personnel—particularly those on the higher floors—did not receive it due to the problems with their communication system. Some who did hear the order may have ignored it in an effort to save more people. At 10:06 am, an NYPD helicopter radioed to police commanders that the building was breaking down at the impact zone and would likely collapse, and at 10:21 police helicopter personnel reported that the building was starting to lean to the southwest. FDNY firefighters still in the North Tower almost certainly did not obtain this information. At 10:28 am, the North Tower collapsed in a twelve-second free fall.8

NYPD and FDNY officials did not have strong working relationships, and it was inconceivable for them to establish joint incident command operations, or even to station personnel from each organization together to exchange and relay information.9 Mayor Giuliani had created the Office of Emergency Management in 1996 to address this problem, but OEM had no success in bringing the two departments together.10 NYPD personnel had much more information about the condition of the towers, and were able to communicate effectively in high-rise environments. This fact at least partially accounts for why only 23 of their officers died, compared to 343 firefighters. The Port Authority Police Department (PAPD) lost 37 officers that day.11
Amazingly, of the approximately 7,545 people who were in the North Tower below the impact zone, all but 107 safely evacuated thanks to a combination of instinct, emergency preparedness, and the bravery of rescue personnel. For those above the impact zone, the story was the opposite. All 1,355 people above the 92nd floor died on September 11. Many died as a result of the initial impact and fire, and the rest died because all means of escape had been cut off by the plane's impact. All stairwells and elevator access had been destroyed, and the NYPD determined that a roof rescue was impossible due to intense smoke and heat and the difficulty of landing on WTC roofs in the best of circumstances.

There were two major differences between the situations in the towers that led to a greater overall survival rate in the South Tower compared to the North Tower. First, by 9:02 am, many of the 8,600 occupants of the South Tower were aware that something horrible had happened in the North Tower and had already begun to evacuate—despite public address announcements to the contrary. Second, unlike in the North Tower, one of the three stairwells remained passable, providing an escape route for those above the impact zone. As a result, only 21 percent—619 of 2,900—of those individuals at or above the impact zone of the South Tower died when the building collapsed. Only 0.2 percent—11 of 5,700—of people below the impact zone died.12

On the Outside

Eyewitnesses reported steel beams weighing thousands of pounds flying through the sky like toothpicks, plane parts scattered about, pulverized concrete raining down on the streets below, paper floating from the twin towers like confetti, and, horrifically, bodies and body parts falling from the sky above. NYPD inspector James Luongo, who went on to be the police department's incident commander of the remains recovery effort at Fresh Kills Landfill, described it in a 2002 interview:

We [a group of NYPD personnel who were walking on foot to respond to the emergency before the first tower fell] started working our way down Vesey St., which was dangerous. The debris was falling . . . and it was total chaos. . . . We were trying to time the debris as it was coming down cause it was landing around us, and then we were noticing that it wasn't only debris coming down but bodies. So we ran down Vesey St, timing the bodies that were coming down, and a few of them landed around us, while we were hugging the wall as much as we possibly could. At one point in time we had misjudged one of the bodies: we didn't see one of the bodies coming down. So we had to hide underneath the pedestrian overpass that crossed Vesey from building six to building seven. I remember putting my face into the wall, my hands over my hat because I just didn't want to get sprayed with the human remains when the body hit. I just didn't want to get sprayed in the front, I didn't want to walk around like that all day. In the back, at least I wouldn't have to look at it, you know? Fortunately I wasn't sprayed though. When the bodies hit the ground, they were disintegrated. They were just splattered, like watermelons. It wasn't what I was used to. I'd dealt with jumpers before in my twenty-two years but I wasn't used to this. And there was a piece of the plane landing gear in the middle of the street, there was a huge spring from the airplane in the middle of the street, there were legs strewn apart, different pieces of human remains, across Vesey St.
There was a certain point where you could not go any closer, and if you went any closer you would get killed: the debris and bodies were raining down, you just could not go past that. At one point in time on West St., I saw a woman . . . as she was getting her way out of the plaza area. She just disappeared. The debris came down . . . gone. It squashed her, that was the end of her and I didn't see any more of her, just a piece of debris fell down on her, and she was gone. But you could not get into that area. . . . We evacuated the area as best as we could. Total chaos, people running, people screaming. I stopped looking [at] the bodies that were coming down. I was in an area where I wasn't gonna get hit with the bodies anymore but you knew when the bodies were coming down because the people around were screaming as the bodies were coming down. And uh, I just stopped. You know you say a prayer, what are you gonna do, you know?13

The dozens of other eyewitness accounts from first responders (made public through a FOIA request by the New York Times and a group of victims' families) and survivors describe a scene similar to Luongo's. Tom Haddad, who survived the attacks in his office on the 89th floor of the North Tower, recalled that, as he was evacuating the building, "occasionally you'd hear these devastatingly loud thumps. At the time, I thought they came from more falling pieces of the building. It didn't register, there were hunks and piles of meat all over the ground . . . nothing I recognized as body parts. Later on, I found out they were the remains of jumpers."14 Jules Naudet and his brother Gedeon happened to be filming a documentary about rookie FDNY firefighters in lower Manhattan that morning and followed their subjects to the site; these thumps can be heard on their footage as they approached the burning towers.

There is no good estimate of the number of people who jumped or fell to escape the hellish conditions at the top of the towers or hasten inevitable death. While NIST's report on the collapse of the towers puts the number at 111 for the North Tower, other sources suggest that up to 200 people jumped or fell from the towers.15 Whatever the number, images of falling victims have remained a raw and chilling part of the visual record of 9/11. One image in particular, AP photographer Richard Drew's "Falling Man," has inspired tremendous debate. It was published in newspapers around the world in the days after 9/11, but most media outlets refused to show it again after that.16 It inspired a documentary (9/11: The Falling Man), plays a significant role in Jonathan Safran Foer's 2005 novel Extremely Loud and Incredibly Close, and serves as a central element of Don DeLillo's 2007 novel Falling Man.

The collapse of the towers is devastating when viewed on video. First-person stories of survival during the collapse bring home the terror that people in the shadows of the towers must have felt as they ran for their lives. Forensic anthropologist Amy Mundorff had just arrived at the World Trade Center with a team of colleagues from the New York City Office of the Chief Medical Examiner (OCME) to evaluate the scene and get a sense of the number of bodies that would be arriving at the morgue. In 2012, Mundorff recounted her experience:

So four of us from that team went down in one car and Dr. [Charles] Hirsch and a medical legal investigator and one of our operations guys and the driver went down in his car. We were all going to meet down there.
And we parked on Vesey Street and there were things, building parts and people were jumping. We walked over . . . across from the Marriott Hotel and we were deciding something but I cannot, for the life of me, remember what it is anymore. But we were going to . . . someone was going to go back and get a camera and we were going to regroup there. Then I heard this rumble that was loud and my [colleague] said, "Oh, a building is coming down." And I turned around and just started to run. And I could see in front of me the stairs for World Financial One, so we were in the kill zone. And that tidal wave just picked me up and blew me into a wall. I mean, it was so powerful where we were that the building I was in front of, the stone façade was torn off. It was so powerful. . . . So when it all stopped, you could just hear beeping and it was like the firemen's alarms and car alarms and it was pitch black. I knew I was in front of the stairs, so I felt my way up the stairs to get to the building. When I got to the building door, I came across one of my colleagues. And when we got inside the building, we found another one. So one of them we could not find. And it was just like chaos and panic.17

Mundorff credits her mountain-climbing husband for giving her the advice that may well have saved her life: in an avalanche, make sure you cover your head with your jacket and provide yourself with an air pocket in case you get buried. On instinct, as the debris cloud enveloped her and threw her into an exterior wall of a building, she remembered his advice and managed to survive. Eventually, she was able to find two of her colleagues—one of whom had received a severe head injury and sustained brain damage—and they made their way to the water's edge behind One Financial Center, where a police boat took them to Jersey City. After receiving stitches for her injuries and finding out she had a broken rib and a concussion, Mundorff spent a sleepless night and a restless day at her parents' home in Armonk, New York. She returned to work on September 13, knowing that her anthropological knowledge would be needed as victims started arriving at the morgue.18

In interview after interview, eyewitnesses describe the overwhelming force of the collapse and the debris cloud that followed. In an interview in 2002, firefighter Timothy Burke described the cloud as a "three-dimensional object" that you did not just breathe in, but ate.19 The collapses forced the pulverized material deep into the lungs of those who were in the vicinity when the collapses occurred, and caused people to vomit up the dust while simultaneously struggling to gasp in enough oxygen to survive. The cloud ultimately blanketed much of lower Manhattan in a thick layer of grayish powder that consisted of concrete dust, the crushed contents of two of the largest office buildings in the world, and human remains.

After the Collapse

People who arrived on the scene just after the towers collapsed report being awestruck by the devastation with which they were confronted. Lifelong New Yorkers found themselves completely disoriented on streets that they had walked innumerable times before. Others felt like they were in a movie, or had stumbled into a warzone. "Surreal" was a common descriptor.20
For journalist William Langewiesche, whose 2003 book American Ground: Unbuilding the World Trade Center (first serialized in The Atlantic in 2002) was criticized for its unvarnished portrayals of the people involved in the initial response to the attacks and the cleanup of the site, the reaction was different. "After years of traveling through the back corners of the world, I had an unexpected sense [of] . . . familiarity. Wading through the debris on the streets, climbing through the newly torn landscapes, breathing in a mixture of smoke and dust, it was as if I had wandered again into the special havoc that failing societies tend to visit upon themselves. This time they had visited it upon us."21

Before the dust had even begun to settle, firefighters, police officers, paramedics, federal officials, construction workers, and ordinary citizens returned to the site to search for survivors. Hospitals in the area mobilized to treat the rush of traumatic injuries that they assumed survivors of the collapse would have. People who knew their loved ones were in or near the twin towers hurried to the scene, looking for their spouses, children, parents, relatives, and friends. Although some of the searches were successful, thousands of families had no luck. Verizon, the dominant telecommunications company in the region, worked to reestablish contact with any trapped survivors who may have had functional cell phones.

The initial effort to locate survivors from the collapse of the towers was barely controlled chaos. First responders and volunteers alike searched the twisted and contorted rubble for signs of life, not always with regard for the structural integrity of the spaces that they explored.22 Relying on instinct and adrenaline rather than training, they formed bucket brigades and commandeered any light construction equipment that they could find in order to search pockets of air amid the wreckage. Complicating matters, there was no effective credentialing system at the site until September 16, so large numbers of unnecessary personnel became involved in the search and rescue effort, making it difficult to work efficiently and potentially putting people's lives at risk.23

Of all of the groups that arrived on the scene, the stories of fathers searching for their sons were among the most poignant. Because firefighting tends to run in families, many of the firefighters who went missing on the morning of September 11 had other family members in the FDNY or other responding fire departments. They continued the search for months after the towers collapsed. Many of them became known as the "Band of Dads" and were regularly featured in media reports and a well-regarded photography collection by Gary Suson.24

Care was taken not to further damage any potential spaces where survivors might be trapped, but few people at the site were truly hopeful that anybody could survive the hellish conditions below ground. After all, in addition to the collapsed buildings, potential survivors had to contend with water both from heavy rain and from efforts to put out the intense fires that burned deep within the pile. Sam Melisi, a member of Rescue 3, FDNY's Building Collapse Unit in the Bronx, acted as a liaison between the firefighters and the construction and engineering personnel. In an interview from 2002, he recalled thinking optimistically that "since these were some of the world's largest buildings . . . we were going to find some of the world's largest voids.
I was very hopeful that we were going to start finding people right away. It didn't pan out all that well."25

Hoping that their loved ones had been injured in the collapse and taken to nearby hospitals, those searching for the missing called or visited the major trauma centers regularly in the days after the attacks. They also posted missing persons notices wherever they might be seen by someone with news of their loved ones' whereabouts.26 At this point, nobody had a clear sense of how many people were actually missing. In order to collect missing persons reports and gather any antemortem data that might help investigators identify victims, NYPD and the Red Cross, along with other organizations, established the Family Assistance Center at the NYU medical school in the hours after the attack. This facility was soon moved to the Lexington Avenue National Guard Armory, and then to Pier 94, where it remained open until the end of 2001, when it moved to Chambers Street. Family Assistance Centers (FACs) were also opened in several other locations in the New York metropolitan area, including Queens, Staten Island, Long Island, and Liberty Park in New Jersey.

Robert N. Munson, who was emergency service director of the Minneapolis Area Red Cross, worked at the New York City FAC from September 23 to October 14, 2001. He described it as

a giant pier warehouse building—set up like a trade show with full carpet, poles with drapes, and some 75 agencies with all their staff and stuff; service agencies, government, immigration, FBI, child care, etc. It is a comfortable building in order to be welcoming and calming as can be to the families. The City of New York has done a good job. It is a full city of people in this building—with free meals for families and separate 3 meals a day for workers. The dining areas are nice (as they can be)—donated fresh flowers daily, tablecloths, and an ambiance of peace and calm different from the noisy, bustling activity everywhere else. There are clients and workers everywhere. Lots of noise. People come here to access a broad array of services. Security is the tightest I have ever experienced anywhere. You can't get near the place without going through several barricades, body and bag checks—and once in, all workers need to be separately badged daily even though we have permanent clearance badges. Clients, of course, have ID, are escorted, and limited by lots of armed police and military to certain areas. Interestingly, several dogs around with handlers—not sure if they are rescue dogs off duty or other "sniffers" and working dogs. Outside is where the wall of photos is that you see on TV—it is so moving that I avoid spending too much time passing by.27

The FACs were designed to be one-stop shopping for relatives of the victims and others directly affected by the attacks, such as residents who were unable to return to their homes or people who had lost their jobs as a result of the attacks. Visitors could receive mental health services, apply for financial assistance, food stamps, welfare, housing assistance, and other social services, get help with life insurance, and receive free legal advice. Those who were missing a loved one were also asked to provide information about the person and a sample of their own DNA as well as any items—toothbrushes, hairbrushes, or razors—that might contain DNA from the missing person.
Although it was not known at the time, the NYPD's lack of experience gathering information and biological samples on such a large scale caused several problems.28 Most notably, the police allowed anyone to fill out a missing persons report, regardless of their relationship to the person in question, and did not accurately record the biological relationship of the person who provided a DNA sample. This led to multiple missing persons reports for the same person, but with variations in basic physical descriptors like height, weight, distinguishing features, prior injuries, and clothing worn, as well as slight differences in the spelling of names and the reporting of birthdates.29 These errors made it very difficult for the OCME to use the information and biological samples for matching and necessitated a resampling effort several months later. Further, given that so many of the victims of the attacks lived outside the New York metropolitan area, the FAC in Manhattan could only deal with a fraction of the need for such facilities.

According to Robert Shaler, the OCME had hoped to play a leading role in the collection of antemortem data and DNA samples, given their responsibility for identifying the missing and dead, but NYPD would not allow them to take on this job.30 In OCME medicolegal investigator Shiya Ribowsky's view, "the decree from City Hall that placed the cops in charge of the efforts to obtain DNA from victim families had turned into a debacle," and the firefighters' efforts to collect samples from their families were little better.31 In addition to clerical errors and missed opportunities for collecting direct samples (such as biological samples from recent medical procedures), police and firefighters also lacked a basic knowledge of genetics and often failed to collect appropriate familial samples.

The Pile

Back at the pile, the grim reality of September 11 was becoming apparent to all involved in the rescue efforts. After a few dramatic rescues, there was nobody left to save. Anyone still under the rubble was dead, not trapped. And it was already becoming clear that most victims' bodies were destroyed beyond recognition by the impact of the planes or collapse of the towers. New York Times reporter Dan Barry informed readers that, by the late afternoon of Wednesday, September 12,

the jaws of huge cranes were biting indiscriminately into the piles of rubble, while police officers, firefighters, soldiers and other rescue workers pried at the ground with shovels and crow bars to free body parts, bits of human flesh, and rubbery patches of skin. Then, like sanitation workers tending to some hellish park, they carefully dumped the scraps of human remains into a green trash bag held open by a soldier. At times, men gathered to puzzle over a piece of flesh on the ground; dogs sniffed at the bits with little enthusiasm and moved on. 'We don't find much,' said a firefighter from East Rutherford, N.J.32

The only consolation that the families might receive was that subsequent forensic examination of the remains revealed little posttraumatic response (e.g., swelling or blistering), meaning that the majority of victims died soon after they were injured.33 Although the search and rescue effort officially continued for nearly a month, just a few days after the attacks Mayor Giuliani was already gently informing New Yorkers and the rest of the world that there was little chance of locating more survivors.
“The recovery effort continues and the hope is still there that we might be able to save some lives,” he said. “But the reality is that in the last several days we haven’t found anyone.”34 Although Giuliani was forced to deliver this bad news to his city and the world, he leavened it with a promise that would ultimately lead to the largest identification effort ever undertaken. He mandated that the OCME do whatever it took to identify the source of every single human remain recovered from the WTC site, no matter how small.35 This meant that unlike most disasters, in which investigators take an “identify all victims” approach (that is, make sure to identify at least one part of every person thought to be involved in the incident to confirm their presence), the OCME had to keep the identification efforts moving forward until there were no remains left to be identified or they had exhausted the limits of technology. The OCME has taken this mandate seriously and continues to push the limits of DNA identification techniques in order to meet it. This effort, however, it has taken a significant emotional and resource toll on the organization. 36 A New York Effort The Federal Emergency Management Agency (FEMA) immediately activated several of its on-call urban search and rescue teams to assist local rescue workers at the World Trade Center site, but neither that agency nor the Army Corps of Engineers nationalized the site. Firefighters became the de facto leaders of the effort to recover the remains of those who did not make it out before the towers collapsed. A Tuesday Morning in September 33 Initially, first responders believed that living people were trapped in the rubble and didn’t want to wait for equipment to be trucked in from far away to search for them. Sam Melisi said: These were people you had worked with, and they were maybe alive. You knew they were trapped in there, and there was a sense of franticness, and it was personal. I remember crawling through the steel— it would have probably been by the hotel. There were some spaces that let you get below and take a look around. It wasn’t regulated at all. The first couple of days, anything went. It wasn’t like somebody was saying, ‘You can’t go in there, you can’t do this, you can’t do that.’ It was more, ‘Hey, if you think you can get in there, go ahead.’ All bets were off. It was just ‘Go and bring somebody home.’ 37 Salvatore Torcivia, a firefighter who had been a police officer before joining FDNY, confirms the free-for-all nature of the initial response. The area was so large that there was little coordination of the search field— one simply chose a section of real estate and dug in. Once the search was complete, the worker would use a can of spray paint to mark the area. 38 Looking back on the recovery efforts a year after 9/11, Richard Garlock, an engineer who advised the city on structural issues at the WTC, said he was often astonished to see the spaces that had been explored by firefighters and others during the hectic first hours of the search effort. 39 The Struggle for Control After a few days, order slowly emerged at the site. Like everything else that would happen at Ground Zero in the years after 9/11, however, this process was riven with political infighting and media-fueled controversy. In the wake of the 1993 World Trade Center bombing, New York City officials knew their city was a target for terrorist plots and made some efforts to prepare. 
One immediate problem city leaders faced was that the departments devoted to public safety, the FDNY and the NYPD, had been intense rivals for decades. In an era of declining budgets and reduced workforces, the departments fought for resources, jobs, and prestige. In his vivid and moving book Closure: The Untold Story of the Ground Zero Recovery Mission, Port Authority Police Department Lieutenant William Keegan Jr. explains that "except in their most basic roles—law enforcement and firefighting—each was intent on outdoing the other. The police department had a scuba team; so did the fire department. The fire department had rescue units; so did the police department."40 More problematically, each department operated autonomously and was not inclined to share information or establish joint command in the event of a large-scale incident.

The Office of Emergency Management was supposed to solve this problem. Keegan believed that the city would be much safer and more efficient if the departments worked together to respond to emergencies. Unfortunately, because of a combination of bad planning and bad luck, the OEM's command center was located on the 27th floor of WTC 7 and was destroyed when that building collapsed on the evening of September 11. Thus, it was unable to take the lead in planning the response to the attacks.41

In this power vacuum, the NYPD claimed immediate control over the site because it was considered an active crime scene. But once the District Attorney decided to close its investigation of the attacks, which it did very quickly because there was no doubt about what had happened to the World Trade Center, the FDNY took over.42 With 343 firefighters missing, it felt an obligation to find them. Technically, the FDNY had an advantage over the NYPD: it had authority over collapsed buildings and all buildings on fire. The FDNY ended up having sole control of the site through the end of October, when they were forced to share (or more technically, cede) power to the city's Department of Design and Construction (DDC).43

In his book, Keegan describes the initial frustration that he and other Port Authority Police Department (PAPD) officials had when dealing with the much larger NYPD and FDNY. PAPD officials felt they had been left out of the loop in initial rescue efforts despite having suffered a significant loss of life—37 out of a force of 1,100—and their familiarity with the buildings. Because the PAPD was not part of the city, there were no preexisting lines of communication between it and either of the other uniformed service agencies. Neither the NYPD nor the FDNY made any effort to inform the PAPD about what was going on, or to ask for assistance. Keegan also felt that the competitive atmosphere that pervaded the relationship between the police department and fire department meant that neither had much interest in dealing with the much smaller and less powerful PAPD.44

While the FDNY assumed authority over the search and rescue effort, the little-known DDC took charge of mobilizing the construction community to assist the firefighters and to begin the process of cleaning up the site. The DDC was largely responsible for managing all public construction projects in the city, from sewers and roads to firehouses and libraries. The DDC's two top officials, Kenneth Holden and Michael Burton, believed they had all of the necessary equipment available in the area and were prepared to handle the job.45
Although the federal government would pick up the tab for the cleanup, New Yorkers would ultimately do most of the work. Within a few days, four major construction contractors had been hired by the DDC: AMEC Construction Management, Bovis Lend Lease, Tully Construction, and Turner Construction. Holden and Burton divided the area into quadrants and gave each company responsibility for removing the debris from one of them. They faced a significant challenge: dynamite and other explosives that were normally employed by the demolition industry could not be used because of the instability and complexity of the site. Debris had to be hand cut and hauled out one piece at a time.46

"The pile was an extreme in itself," wrote Langewiesche, continuing:

It was not just the ruins of seven big buildings but a terrain of tangled steel on an unimaginable scale, with mountainous slopes breaking smoke and flame, roamed by diesel dinosaurs and filled with the human dead. The pile heaved and groaned and constantly changed, and was capable at any moment of killing again. People did not merely work to clear it out but went there day and night to fling themselves against it. The pile was the enemy, the objective, the obsession, the hard-won ground.47

By the end of the first week, the basic structure was in place: FDNY was in charge of the rescue and recovery effort and the four New York construction companies were in charge of removing debris from the site. As the effort wore on, this arrangement would lead to significant strife and controversy. As Keegan notes, "From that moment on, there would be those of us who saw Ground Zero as a rescue and recovery site and our job as being to find fallen heroes for their loved ones, and those who saw it as a construction site and their job as being just to clean it up."48

Firefighters directed the effort to recover human remains and had complete authority to tell construction workers where to dig, at least through the end of October. They worked in teams of seventy-five firemen, supplemented by NYPD and Port Authority police officers. The search effort was a three-part process. Initially, uniformed service members stood by, armed with rakes and shovels to examine newly loosened debris, as construction workers pulled the steel and rubble piece by piece from the site. If suspected human remains were discovered, these were removed and brought by FDNY Emergency Medical Services escorts to one of the temporary morgues set up around the site. According to at least one account, every remain recovered was prayed over before being removed.49 A second inspection was done at the debris-transfer point before material was loaded onto trucks and barges bound for Fresh Kills Landfill for processing and disposal, where a third inspection was performed to locate anything that may have been missed on site.50

The firefighters initially concentrated their efforts on the places where they expected their own to be found, but the site never yielded the "mother lode" of bodies that everybody was hoping for. As the recovery effort got underway, the construction companies—especially Bovis Lend Lease—played a supporting role. The uniformed services told Bovis superintendent Charlie Vitchers and his management team where they wanted to search, Vitchers consulted structural engineers to make sure the area was relatively safe, and, if so, cranes and other machinery were brought to the area to assist in the removal of debris.51
Search personnel recovered body parts of varying sizes and degrees of recognizability. In the first few days, body parts were still relatively intact and easy to spot, but as they began to decay, they started to look like the rubble and debris in which they were found. After a few days, the best indicator of remains was the smell. Torcivia recalled that "the smell got to everyone after a while. By the second or third day, it was raw down there. I was gagging. If we uncovered anything that resembled a body or a body part, we'd shovel it into a body bag—there was no other way to pick up what we found. We hoped they'd be able to ID the remains through DNA testing down at the medical examiner's office."52 More complete bodies were rare. For the most part, the remains of firemen were the best preserved because of their heavy bunker gear, while the remains of women were the least well preserved because they tended to wear the lightest clothes.

Firefighters—both living and dead—quickly emerged in the press and popular culture as the heroes of the tragic situation. They were described as selfless patriots who marched into battle, in sharp contrast to the cowardly and barbaric terrorists who had murdered thousands of innocents. Although most firefighters resisted the urge to play up their heroism, there was at least some resentment among the police and construction workers.53 In Langewiesche's account, nowhere were these tensions more pronounced than in the treatment of human remains: "On the one extreme the elaborate flag-draping ceremonials that the firemen accorded to their own dead, and on the other, the jaded 'bag 'em and tag 'em' approach that they took to civilians."54

Firefighters offered differential treatment to their own victims compared to other uniformed service victims and civilians. Each time the remains of a firefighter or firefighting gear was located, the entire Ground Zero recovery effort ceased. All workers at the site, whether uniformed service personnel or civilian construction workers, paused and saluted the fallen firefighter. If remains were recovered, they were wrapped in an American flag and paraded through a lineup of Ground Zero workers.55 These remains were immediately sent to the OCME via ambulance rather than being placed in a refrigerated trailer on site for later transport to the OCME.56

More pernicious than the problems that emerged as byproducts of the need for speed were the overt efforts of FDNY recovery personnel to "reconstruct" the remains of their fallen brethren—by placing body parts found adjacent to empty or partially empty articles of clothing into that clothing—in order to be able to provide their loved ones with as complete a body as possible. This manipulation was seen only with FDNY clothing, and was usually obvious upon examination—for example, when parts of a leg were stuffed into the sleeves of bunker gear or when boots that were associated with a set of bunker gear contained two left feet.57 FDNY personnel often removed personal effects, especially jewelry, from their brethren to ensure that family members received them in a timely and comforting fashion. It is unknown whether such actions prevented the identification of any firefighters (as when the only recovered piece of an individual was placed with another body and not discovered through anthropological investigation or DNA identification).
OCME personnel, however, did not confront FDNY personnel at the time because they recognized the grief that firefighters were dealing with and were not sure that their concerns would be taken constructively.58

Once human remains were found in the Pit, they were brought in bags to the body collection point (BCP), where medicolegal investigators briefly examined, catalogued, and prepared the remains for transport to the triage center at OCME headquarters on 30th Street between First Avenue and the FDR Drive in Manhattan. OCME activity was limited at Ground Zero—its staff were not included as part of recovery teams and were only occasionally consulted about best practices for removal of remains to ensure maximum evidence retention and minimal damage to physical integrity.59 It was often difficult for OCME personnel to know whether the location information provided for each remain was accurate or not. It was especially unclear how deep workers had dug to find a particular remain, information that could help determine which floor the victim was on at the time of his or her death and speed the reassociation of remains. Since human remains were often discovered only after raw material had been removed from the Pit, OCME personnel also worried that the GPS reading (or grid coordinates) did not indicate the original location of the remain.

Mundorff and others wanted the military's Joint POW/MIA Accounting Command to help set up the procedures for recovery of human remains, but the city never asked the agency for assistance. Further, in the days after the 9/11 attacks, Sophia Perdikaris, an archaeologist at Brooklyn College, worked with the Society for American Archaeology to collect the names of 350 trained archaeologists who would be willing to support the recovery effort at Ground Zero. Neither the city nor the federal government asked for their assistance.60

People familiar with the recovery effort say that the FDNY could have benefited from more professional anthropological and archaeological expertise on site. After all, the FDNY did not have training in forensic archaeology, human anatomy, or human remains excavation. One major problem that resulted was the commingling of remains from more than one person in a single body bag, rather than the separation into unique bags of remains that could not definitively be related to one another. Further, bodies in an advanced state of decomposition were not always handled properly during recovery.61 Nobody has suggested that the FDNY botched a significant number of cases or that the recovery effort was marred by wholesale incompetence, but many believe an on-site anthropologist or archaeologist would have significantly reduced the number of commingled remains that arrived at the OCME facility, increased the number of more complete remains, and made some reassociations easier. This would have saved OCME personnel a great deal of time, effort, and stress.

It also became an issue as families interacted with the OCME, desperate for information about where exactly the remains of their loved ones had been found. Many families wanted this information in order to piece together the last moments of the victim's life. Where was he when he died? Who was she with? Had he tried to escape or did he stay put? Although information about the location of human remains was not usually reliable in reconstructing the last moments of a given individual, it did help families and friends make sense of the horrific event.62
In addition to using better recovery methods, professional archaeologists might have searched for human remains beyond the perimeter of the sixteen-acre World Trade Center site—which is where the FDNY concentrated its efforts—even if that wider area could not have been cordoned off by law enforcement. Richard Gould, now retired, was a professor of archaeology at Brown University at the time of the World Trade Center attacks, and a founder of what he calls "disaster archaeology."63 Gould's wife was working in New York City at the United Nations and was in the city on September 11. Gould made his first visit to the site on October 6, 2001. In his 2007 book, Gould notes that what immediately struck him on this visit was the presence of large amounts of ashy dust and debris outside of the search and recovery area. Gould maintains that on this first visit, he saw what appeared to be small fragments of charred bone—the largest was about two or three inches across—lying in the dust. Although he had his camera with him, he says he did not attempt to photograph these remains or collect them for fear of alerting the mourners and other visitors at the site to his discovery. On subsequent visits, Gould said he saw more of the same phenomenon: fragmented human remains mixed in with the ashy matrix that was expelled by the twin towers as they collapsed. He was surprised, however, that cleanup crews were making no effort to collect this potentially forensically relevant material—they were working as quickly as possible to clean up the thick dust that had settled on lower Manhattan, but did not have the remit to carefully examine what they were sweeping up and hosing down. Mayor Giuliani had ordered lower Manhattan to be cleaned up as quickly as possible, to restore a sense of normalcy and to remove potentially dangerous material from the streets and buildings around Ground Zero.

Gould's observations revealed both a profound problem and an opportunity to put to the test the principles he was developing in the field of disaster archaeology. On November 2, 2001, after communicating with officials from the Office of Emergency Management (OEM), OCME, NYPD, and FDNY, Gould and Perdikaris did an informal scan of twenty building roofs in the vicinity of the WTC site. They found that power-washing teams had already scrubbed the roofs, but that a significant amount of debris remained in airshafts and under ventilators.64

A few weeks after the search concluded, the New York Daily News published an article on the work of the FDNY's Phoenix Unit to catalogue the locations of human remains using GPS technology.65 An accompanying map showed that human remains were found in the places where Gould had observed, but not examined, apparent remains on October 6 and 7, 2001, and at the site of a subsequent March 2002 search he conducted with colleagues on Barclay Street. Although Gould could only speculate whether his reports led to the discovery of these human remains, it is clear that the "GPS locations provided independent confirmation that significant amounts of human remains were dispersed over wide areas of lower Manhattan outside Ground Zero. These reports also made the existence of such debris scatters public, precipitating a rush of inquiries by relatives of WTC victims to the FDNY and Medical Examiner's Office."66 After the Daily News article was published, several other human remains deposits were discovered at sites around lower Manhattan in late 2002 and early 2003.
This issue would come back to haunt the recovery effort in 2005 and 2006, when more human remains were discovered in numerous locations in and around the World Trade Center site.

Sanctifying Remains

By early October 2001, city officials had come to the grim conclusion that many families of victims of the World Trade Center attacks were unlikely to receive remains of their loved ones. Rather than giving back nothing, however, the city came up with an innovative solution that would both show that it cared for the victims of the attacks and prevent families from purchasing World Trade Center debris from "profiteers."67 If remains could not be located in the debris, then the debris would be transformed into a relic that could stand in for the bodies of the dead. Mayor Giuliani ordered the NYPD to collect debris from the site, sanctify it through careful (though arbitrary and ad hoc) rituals, and then place a small amount of it in cherry wood urns to be delivered to families.

In the first step of the process, powdered debris was shoveled into three fifty-five-gallon drums at Ground Zero and blessed by a chaplain on site, then draped with American flags, transported with a police escort to One Police Plaza, blessed again, and then guarded by two honor guards twenty-four hours a day in a room that was freshly cleaned, repainted specifically for this purpose, and fitted with potted plants to bring life to an operation that otherwise referenced only death. In the second stage of the process, the remains were carefully spooned into plastic bags, which were sealed and placed in high-quality wooden urns by gloved members of the NYPD's ceremonial unit. This act was done with great care, in a room with low lights and soothing music. The urns were then sealed, inspected, and stored for safekeeping until they were handed over to families in late October at a special ceremony at Pier 94.68

Trouble at Ground Zero

As the recovery effort wore on through September and into October, the emotion that fueled the frenzy at the site began to take its toll on everybody involved. Tensions among the various participants in the cleanup efforts were running high. Mayor Giuliani and his team felt that it was time for the city to move forward. There was no going back to September 10, to be sure, but the pace of the cleanup effort at Ground Zero had to increase. It was time to wrest control of the site from the firefighters. Firefighters, for their part, felt that they were waging a constant battle against political machinery that had a strong interest in cleaning up Ground Zero as quickly as possible. More than anything, they wanted to recover all of their dead, as well as the remains of civilian and other uniformed service personnel, and did not want political agendas or construction deadlines to prevent them from doing so. "They [the politicians] want to pull out people with cranes," one firefighter explained to the New York Times. "We want to bring back the brothers with dignity. They think the quicker they can clean it, the better they look. We've got friends there, brothers and family. We've known these guys for years. This goes very deep."69 Construction workers often found themselves trapped in the middle of this battle—wanting to help the firefighters find as many remains as possible, but pushed by their bosses to get the job done on schedule.70 Tensions reached a boiling point on November 2.
At a press conference, Giuliani made it plain that, throughout the recovery process, he never believed remains would be recovered for the majority of victims. "I've known from the beginning, from the first night, that it would be a burial ground. The medical examiner, the first night that I met with him on the evening of September 11, told me that the crush of the buildings and the high degree of heat was going to mean that . . . the majority or vast majority of people would disappear because they would evaporate."71 Yet recognizing the importance of the recovery effort for the psyche of the FDNY, the city, and the world, he encouraged it and allowed it to proceed without much regulation. The time had come, however, for a new order to be imposed. "The reality is, then," said Mr. Giuliani, "if you're the police commissioner or the fire commissioner or the mayor and you're responsible for the lives of other people, you have to say, well, maybe these people are too emotionally involved to be involved in this operation. Maybe these are not the people that have the ability to detach so they can handle it professionally."72 The implication of this statement was not only that the role of firefighters ought to be reduced, but also that they had been making poor decisions on the job. Ultimately Giuliani used the issue of safety to scale back the number of firefighters engaged in the recovery effort—from between sixty-five and seventy-five to twenty-five—and to limit their ability to stop removal of debris from the site in order to recover human remains.73

Firefighters greeted the announcement with anger and indignation. Whatever the merits of the decision, its justification made little sense: in a round-the-clock recovery effort that had spanned more than six weeks, not a single firefighter had been killed or seriously hurt on the job. Several hundred of them converged on City Hall in protest, chanting "Bring Our Brothers Home." They also demanded the ouster of Giuliani and Fire Commissioner Thomas Von Essen, who they believed supported the mayor rather than the firefighting community. What started as a largely peaceful demonstration, however, quickly turned violent as firefighters clashed with police. Twelve firefighters were arrested and five police officers were hurt. The New York Times reported that the fight "rattled the city's top officials and laid bare the frustrations of the living who are unable to bury their dead."74

Despite efforts to put Giuliani's plan into effect, the firefighters ignored the new directive and continued to search for remains as aggressively as they could. In response, Giuliani spoke directly to the public about the firefighters' allegations. "They really are so off base that it's a sin," he said. "And I mean that in a moral sense. What they're doing is sinful. The effort here is to try to recover as many human remains as possible. They have absolutely no monopoly on caring about the people there."75 He also made various conciliatory gestures—increasing the number of firefighters on search duty at the site to fifty and dropping charges against nearly all the protesters—but none mollified the firefighters.76

Family Activism

The search for, and treatment of, human remains motivated the families of victims to form activist and lobbying groups.
Although most of the relatives who became involved were not political activists before 9/11, they quickly found their voices when it came to the remains of their loved ones. Other than the male firefighters searching for their sons on the pile, the majority of these newly active individuals were women—mothers, wives, and sisters of the missing. Grief, marriage, motherhood, and widowhood became potent political weapons that politicians could not afford to ignore. Much like the women of the Plaza de Mayo in Argentina and Srebrenica in Bosnia before them, these activists used their special social status to demand that elected leaders do more on behalf of their loved ones.77 A November 25, 2001 New York Times article highlighted this phenomenon: "A new political group has emerged from the cinders and ash of the World Trade Center disaster site. It is a group that no city official wants to offend, one whose broad powers are only beginning to be realized by its members. In the verbal shorthand of these troubled days, it is known simply as 'the widows.'"78

One of the first activists to emerge was Marian Fontana, the wife of missing firefighter David Fontana and mother of a young son. She helped found the 9-11 Widows' and Victims' Family Association, with the support of FDNY union officials, to ensure that the needs of families and victims would not be trumped by the financial or political interests that had an enormous stake in cleaning up and redeveloping the site as quickly as possible. One of the first causes Fontana adopted was restoring the number of uniformed service personnel working in the Pit. Fontana used her identity as a firefighter's widow and the mother of a now fatherless child to support this cause. As news of the reductions in firefighters at the pile spread, Fontana asked to meet with Giuliani and Von Essen on November 9.79 In an interview shortly after the meeting, Fontana told a New York Times reporter, "Yes, I realize the power we all have. I definitely want to do right by the families, foremost, and the firemen, and there is pressure to proceed. It's a little daunting sometimes."80

Civilian families were represented by the group Give Your Voice, which had recently been formed by the family of twenty-six-year-old apprentice electrician James Marcel Cartier, who died on the 105th floor of the South Tower, and others. Give Your Voice's mission was twofold: first, they wanted "to be sure that every effort is being made to spot, recover, preserve, identify and deliver any human remains to the families of the victims in order that a decent burial may be had by the families and loved ones." More generally, though, they wanted to provide a voice for civilian families that would complement that of uniformed service families, as well as a structure to funnel information from the city and other agencies involved in the recovery effort back to civilian families.81 In their view, civilian families had been "forgotten" in the conversation and media coverage about 9/11 victims, and they wanted to remedy this injustice. The organization was not created in response to the activism of the uniformed service families so much as in complement to it. In fact, Give Your Voice went out of its way to work hand in hand with those families. In their early web updates, they regularly thanked uniformed service families for their encouragement and support.
Even their logo brought together the two stakeholders: one side of the image showed a man and a woman—a tradesman or construction worker and a woman in a suit, with a newspaper and briefcase—and on the other side was a firefighter's hardhat.82 Once the dignity of the recovery effort was ensured, Give Your Voice planned to turn to other issues, such as how to help families navigate the complex financial assistance system, in which dozens of organizations were ostensibly providing support to those who lost loved ones in the attacks. Indeed, in a November 17, 2001 family update, the organization noted that "to date there is no Unit designed to assist Families of Civilians other than the Pier, which has been a horrible experience for most Families. Nor is there any liaison in the Mayor's Office to offer information regarding any assistance. To date most of the information Civilian Families receive is from the Media and close Friends in the Fire Department."83

The November 9 meeting between Marian Fontana and Giuliani led to a larger meeting, on November 12, between the firefighters, the mayor, and senior officials associated with the World Trade Center response (coincidentally, the meeting took place the same day that American Airlines Flight 587 crashed in Queens as a result of pilot error and mechanical issues). The meeting lasted three hours. The families were represented primarily by a group of firefighter widows that was gaining media visibility and political power, most notably Fontana, but also by civilian members of Give Your Voice. The firefighter wives felt neglected because they had not received back wages and other benefits. But more importantly, they believed the city was abandoning their husbands, who had died in the line of duty. The civilian families felt that too much attention was being paid to uniformed service personnel and city employees, and that their emotional and financial needs were not being met.84 For both groups, though, the primary concern was that clearing the Trade Center site quickly would make the recovery of their loved ones' remains less likely.85 It sickened many of them to think that their loved ones would simply be bulldozed out of the site and dumped unceremoniously on top of decades of garbage at Fresh Kills Landfill. In their view, this was morally wrong and callous. The widows lambasted DDC's Mike Burton, saying he should be made to tell the children of missing firefighters that their fathers would be found at a garbage dump. They called him "Mr. Scoop and Dump."86

Monica Iken, who founded the organization September's Mission to advocate for the creation of a suitable memorial to the World Trade Center attack victims, stated a few months later that she was astonished at the speed of the cleanup effort. "I don't understand why there's such a rush. It's just discouraging to see that because that's our cemetery, that's all we have. I mean, most of our loved ones are still in that site. And to see a rush [taking] place when there [are] remains that need to be found, it's disheartening because they want to just close it up like it didn't happen and call it a day.
The fact that they're rushing through this process when our loved ones are still there is very hard for us."87

When Chief Medical Examiner Charles Hirsch described the very low probability of finding whole, or even partially intact, bodies now that the recovery effort was reaching the middle of the pile, the widows at the November 12 meeting expressed outrage.88 Hirsch did not wish to hurt the families, or to provide cover for the cleanup effort; he was simply trying to brace them for what he thought was the most likely scenario.89 In the following weeks, OCME officials clarified Hirsch's statement—arguing that while complete or nearly complete bodies could indeed withstand the fires that raged in the pile in the early days, the smaller partial remains that typified the majority of victims would have a much lower probability of surviving and yielding usable DNA.90 It turns out that Hirsch was technically wrong—a few nearly whole bodies were found in pockets further down in the ruins—but at the time there was little scientific information to suggest that this would be the case.

The November 12 meeting crystallized for Giuliani, his administration, and the world at large what they already knew: the "unbuilding" of the World Trade Center was as much about emotion as it was about equipment and manpower. In order to bring about a transformation of the site and to restore New York to normalcy, they had to pay attention to the needs and desires of the families. In particular, city officials had to remember that Ground Zero wasn't just a chaotic construction site that needed to be tamed; it was also a place where nearly 3,000 people had lost their lives.

In an effort to regain the firefighters' and families' trust in the administration, Giuliani restored the search team to seventy-five and provided widows with formal channels for airing their grievances and getting information about the progress of the recovery effort. According to administration officials, revised rules would improve organization and ensure safety on the site (although very few injuries had been reported during the first two-and-a-half months of the recovery effort). In the new system, most firefighters would wait outside an active search area while a few designated "spotters" looked for signs of human remains as grappler operators and construction workers removed debris from the site. Only when potential human remains or personal effects had been uncovered could the rest of the firefighter contingent approach to investigate. The decision seemed partly due to a desire to mollify families of the missing, and partly to bring an end to the embarrassment that resulted from the protest of November 2.91

The tensions did not disappear, but they subsided for most of the rest of the recovery effort. Although workers in the Pit continued to bicker and even fight from time to time, interviews suggest that most low-level workers at the site had a common sense of purpose, and that most disagreements were the result of stress and frustration rather than entrenched anger or hatred.92 Peter Rinaldi, a Port Authority engineer who provided his professional expertise and knowledge of the World Trade Center complex to the DDC as it managed deconstruction efforts, said that most of his interactions with the FDNY were negotiations about the balance between worker safety and the recovery of remains.
He noted that risks taken for one recovery might have a negative impact on many others if problems occurred, so each individual decision had to be taken in the larger context of the recovery effort.93 Perhaps the best example of such issues was the FDNY's desire to recover remains from underneath the last major debris ramp (called "Tully Road" because it was built in Tully Construction's quadrant), which was used to get construction equipment into, and debris out of, the Pit. The material that the ramp was built on was believed to contain a vast trove of human remains because it was located in the region where the lobby of the South Tower once stood. This location was where the FDNY had had one of its command centers and where many people who narrowly missed their opportunity to escape would be found. Uniformed service personnel were anxious to excavate this area, knowing that there was at least one body there, and at one point got so close to the ramp that its structural integrity could have been compromised. DDC personnel and the FDNY had many long conversations about the potential damage to Tully Road if excavations took place underneath it and about the delays this would cause to recoveries in other areas of the Pit. Ultimately, the FDNY waited until a bridge ramp was put into place to excavate the Tully Road, and numerous remains were indeed found.94

As the recovery effort moved forward, workers at the site and families waiting for news were beginning to accept that many of the victims of 9/11 would never be recovered—either on site or at Fresh Kills. As 2001 gave way to 2002, some blamed this absence of remains on the physical dynamics of the event (bodies had been vaporized, shattered, or burned beyond hope of identification), while others blamed it on the human limitations of the recovery effort. For some families, though, the concern was not just about whether remains would be recovered; it was about whether they would be recovered with dignity.
The 2,753 victims of the September 11, 2001 attacks on the World Trade Center in New York City were business people, lawyers, janitors, bond traders, electricians, secretaries, food service workers, firefighters, police officers, engineers, computer specialists, mothers, fathers, sons, daughters, brothers, sisters, cousins, lovers, friends, spouses, and community members. The remains of 1,113 of them have not been identified. Of those who were found, all but 293 were recovered from among 21,900 bits and pieces scattered throughout the debris of the fallen towers: a tangle of steel beams, rebar, pulverized concrete, asbestos fiber, plus the contents of thousands of offices and retail outlets.1 Buildings that were once 110 stories collapsed into the space of just eleven, seven of which were below street level. Rescue workers initially picked through the rubble by hand, frantically searching, first for survivors and then for victims' remains. They were choked by dust and smoke and the stench of death. Soon, giant bulldozers and grapplers took over the job of removing debris, and rescue workers dedicated themselves solely to finding remains, including their own brethren.

The World Trade Center was attacked just as large-scale DNA identification efforts were becoming possible. The biotechnology boom of the 1990s had produced technologies that could be used to rapidly extract and analyze genetic material from biological specimens. Simultaneously, scientists involved in the investigations of large-scale accidents and mass atrocities were learning how to apply these tools to the damaged and degraded forensic specimens recovered from complex graves. Human rights advocates and activists also realized that identification of the dead serves loved ones not just spiritually and psychologically, but also socially and legally. Without identification of their loved ones, relatives cannot access financial and social services, dispose of personal property, or seek compensation for their loss. They can also become socially marginalized.2

These advances led New York City's chief medical examiner, Charles Hirsch, to promise that he and his staff would attempt to identify and return to families every human body part recovered from the site—even those that were heavily damaged by the collapse of the towers and the underground fires that raged at the site for weeks. The job would not be easy—it would require a bewildering mix of technological expertise, statistical acumen, and persistence. More than $80 million has been spent on the effort thus far, and the Office of the Chief Medical Examiner has committed to continuing in perpetuity the effort to identify remains as new techniques become available. The primary goal, of course, is to link even the tiniest fragment of human remains to a person in an effort to provide proof of death for those families that hunger for such knowledge.3 But the massive forensic effort was also undertaken to demonstrate that Americans, as individuals and as a society, were dramatically different from the terrorists who so callously disregarded the value of life. It was as much a political and moral statement as it was a scientific and legal one.4

This is not a book for the faint of heart. It tells the story of the recovery, identification, and handling of human remains in the aftermath of the September 11, 2001 terrorist attacks on the World Trade Center.
It also delves into the contested efforts to memorialize the victims of the attacks both at the World Trade Center site and at the Fresh Kills Landfill, where much of the debris from the Trade Center was taken for sifting and disposal, and into the controversy over the storage of remains at the National 9/11 Memorial and Museum. It exposes the raw grief and persistent anger that motivated a small group of families to continue to contest redevelopment and memorialization efforts at the site more than a decade after the 2001 attacks. In addition, this book seeks to explore the impact and legacy of efforts to recover, identify, and memorialize the dead on the families of victims, the City of New York, the nation, and the world.

September 11 was the first time since the Civil War that such a large number of dead bodies had to be dealt with on American soil.5 Yet the United States had a history with the issue outside its borders: its complicity in dissident disappearances in Latin and South America during the 1970s; in lending scientific expertise to the identification efforts in those same countries in the 1980s and early 1990s; in leading the international effort to identify the missing after the Balkan wars of the 1990s; through its efforts to recover the remains of American soldiers missing in foreign wars; and in the blatantly political efforts to uncover mass graves in Iraq in order to justify the invasion of the country in 2003. Analyzing the U.S. response to mass death can help Americans better understand similar events around the world, and empathize with people confronted with such atrocities. Global policy cannot be developed based on the uniquely American response to 9/11, but the United States can no longer turn a blind eye to the psychosocial, political, and scientific needs of societies struggling to cope with mass death. Policy makers can no longer assume that locating bodies and reburying them is enough—the World Trade Center story amply demonstrates that the exhumation and identification of human remains is inherently political and fraught with controversy from beginning to end.

Human remains have political, cultural, and emotional power.6 The death of a loved one in a mass disaster or an act of terror can leave relatives of victims and the missing feeling emotionally and spiritually drained. But it can also give them special status within society and a voice that can be used to make demands on government institutions that would ordinarily not listen to them.7 Emboldened relatives of the dead and the missing, especially women, often speak out on social and political issues with little regard for negative repercussions. They are fighting for justice and the return of their flesh and blood, and little else matters to them. This is true of mothers, wives, and grandmothers of victims of war, disaster, and mass killing around the world.8 Since 1977, for instance, the Mothers of the Plaza de Mayo have donned white scarves and marched in the center of Buenos Aires every week to demand information about, and accountability for, their children who went missing during the 1976–1983 military junta in Argentina.9
Similarly, in the aftermath of the 1995 genocide in Srebrenica, family groups in Bosnia successfully demanded that international actors identify and return the bodies of their missing loved ones rather than just gather demographic profiles for use in war crimes prosecutions.10 New York City witnessed the same situation after September 11, when wives, sisters, and mothers became advocates for the missing and the dead. Many men also became advocates for the victims of 9/11—especially firefighter fathers searching in the rubble for their firefighter sons.

In addition to the actions of relatives left behind, mass death necessitates broader social, political, and cultural responses. Families and communities look to honor the dead and, in some cultures, ensure their smooth transition from the realm of the living to the realm of the dead.11 The state hopes to reassert its control over society, especially when it played a role in—or failed to prevent—the disaster. It seeks also to demonstrate care and concern for the lives of its citizens and the nation as a whole.12 For everyone involved, and especially forensic scientists, there is a more general desire to ensure that the violation of the dead does not remain permanent.13 While forensic identification cannot bring the dead back to life, it can restore some sense of normalcy to families and communities whose loved ones have died in traumatic and violent ways.

Individual Identification and Collective Commemoration

In many places, including the United States, violent mass death can stigmatize the location where it occurs, and the site must be cleansed, destroyed, or transformed into a memorial.14 There is also a practical problem: what to do with the bodies and body parts, particularly when a substantial portion cannot be identified and returned to families? In the case of September 11, ownership and control of unidentified remains greatly affected debates about the future of the sixteen-acre World Trade Center site. Thus, beyond the forensic dimensions of identifying the missing, this book explores how human remains became central to the memorialization process at the World Trade Center site. The mere possibility of eventual identification means that the bones can never be buried and forgotten. Instead, they must be maintained in an active repository, keeping both the remains and families in a state of extended limbo. We are only beginning to address this dynamic, the salience of which has increased dramatically in the age of DNA identification.

After the attack on the U.S. naval base at Pearl Harbor on December 7, 1941, the vast majority of the 1,177 navy and marine personnel who were killed on the battleship USS Arizona were classified as buried at sea and left in place underwater. While the victims could have been recovered—the ship rested close to shore and its parts and materials were salvaged throughout the war—they were assumed to be unidentifiable due to their fragmented and burned condition.15 In the 1995 Oklahoma City bombing, human remains that went unidentified were referred to as "common tissue" and collectively buried in a memorial tree grove near the state capitol.16 After the 2001 World Trade Center attacks, on the other hand, each remain was stored separately and treated as an individual entity that might one day be identified as new forensic techniques became available. This policy ruled out collective interment.
It also meant that the creation of a "tomb of the unknowns," such as had become popular after World War I in Europe and the United States, would be unlikely.17

Historian Thomas Laqueur argues that several factors led to a shift in Western (and particularly Anglo-American) conceptions of what ought to be done for the victims of violent death, especially in combat. Prior to the twentieth century, war victims were generally left in place to be eaten by scavengers or buried in mass graves.18 By World War I, there was a concerted effort to bury them individually in marked graves and to memorialize them by inscribing their names on grand monuments after the war. One reason for this change was that, by the twentieth century, soldiers were fighting on behalf of democratic nations and were thought to deserve equality of treatment in death as in life.19 But democracy and politics on their own are not sufficient explanations for this change. Laqueur argues that the sheer magnitude of the slaughter in the trenches demanded a new response. The Great War monuments attempt to make sense of the scale of deaths, while making manifest their ultimate incomprehensibility. Perhaps most poignantly, Laqueur notes that many who fought in the battles, as well as commentators who wrote and spoke about the Great War, were worried that the sacrifices of these young men would soon be forgotten and all evidence of their deaths would be subsumed by nature retaking the land. "There is evident here a powerful anxiety of erasure, a distinctly modern sensibility of the absolute pastness of the past, of its inexorable loss, accompanied by the most intense desire to somehow recover it, to keep it present, or at least to master it."20 The fears were compounded by the condition of remains on the battlefield that resulted from the new machinery of war. Shelling, land mines, artillery bombardments, and machine guns produced not complete bodies with bullet or stab wounds, but mounds of flesh and disarticulated arms, legs, torsos, scalps, blood, and tattered uniforms. In other words, if the dead were not buried as individuals and their names were not recorded on massive monuments, then nature would reclaim the battlefield and render the events that took place there—and those who died there—invisible.

Nearly a hundred years later, this fear of erasure seemed to motivate at least some of the families of the World Trade Center victims and their allies. This time, however, redevelopment would be the culprit, as well as the desire of city residents and city leaders to put the horrible events of 9/11 behind them and get on with business and life. In many ways, this tension between the desire to memorialize and remember, and the desire to move on, would animate debates about the site for much of the next decade.

Further, new genetic technologies have changed the way we remember the dead. The emergence of DNA identification means that it is far less likely that there will be unknown soldiers in future wars. As a result, monuments to unknown and unnamed war dead will no longer be a way to honor their sacrifices.21 Similarly, in the era of DNA identification—and in keeping with the nineteenth-century belief that dying an anonymous death and being buried in an unmarked grave was a sign of social exclusion and despair—it is no longer enough to produce a single collective memorial to ordinary people killed in mass conflict events.22
We are compelled to remember them as individuals and push technology to its limits to identify their remains. Yet, just as collective memorials and tombs of the unknown served to tie the nation together in the past, these individual stories—underwritten by DNA identification—now serve as conduits for collective understanding of conflict as they become threads of a collective tapestry.23 They have social and political power that can be called upon when needed. An important motive for memorializing the dead at the World Trade Center is to justify the military and diplomatic actions taken to protect the United States and its citizens from similar attacks in the future. To downplay the human toll of terrorism is to lessen Americans' willingness to put up with war and intrusions on their civil liberties. In a nation that exalts individualism, knowing the names, faces, and stories of each of the victims makes it hard for citizens to accept inaction. While humans are generally able to shrug off the deaths of thousands of strangers—we do it every day while reading, watching, or listening to the news—it is difficult to ignore the death of even one person we have come to know as an individual.

The challenge of memorializing the victims of 9/11 as individuals is that narratives stressing communal sacrifice cannot be made too overtly—visitors, viewers, or readers of these memorial efforts must arrive at this conclusion without noticeable coercion. When the link between individual and nation is made too obvious in the context of 9/11, many victims' families and other stakeholders have protested and actively intervened—especially when the events of 9/11 were used by the Bush administration to justify the war in Iraq and policies that many believed further eroded Americans' privacy and civil liberties, and by activists to shine a light on what they saw as the global struggle for freedom and liberal values. Ironically, though, efforts by relatives and stakeholders to depoliticize the victims of 9/11, and the memorialization process as a whole, have served to reinforce the use of these victims for political purposes. For example, when the New York Times published "impressionistic sketches" of the victims of the World Trade Center attacks in its "Portraits of Grief" feature, it sought to portray them all as living the prototypical American dream or on the way to achieving it.24 The Times editors responsible for these portraits were fully aware of what they were doing. "We recognize," they wrote in an unusually candid commentary on the "Portraits" project on October 14, 2001, "the archetypes that define the ways these stories are told. The tales of courtship and aspiration, the ways these people relaxed and how they related to their children—these are really our own stories, translated into a slightly different, next-door key."25 Such representations not only erased the diversity of the victims, but also elided the fact that many of them were not U.S. citizens, including at least a few who were undocumented immigrants. The ways in which the New York Times—and so many other voices in American culture—used the events of September 11 to tell "our own stories" about America, good and evil, and right and wrong, made it impossible to see the lives of the victims through anything other than a political lens.

For different reasons, investigators also sought to identify the remains of the suspected perpetrators of the attacks.
The presence of remains at the crash site would solidify their connection to the crime. Further, families were adamant that the remains of their loved ones not be commingled with those of their murderers—and demanded that these remains be separated as much as possible.26 For the U.S. government, there was also the more vexing question of how to deal with the identified remains of the hijackers who used their bodies as weapons and actively rejected the set of international norms and laws that govern conflict among states, including the disposition and treatment of enemy remains. Such decisions simultaneously invoked law, conceptions of punishment, obligations to victims and their families, the projection of the country's image to other nations and other would-be terrorists, and emotion.27 The handling of the perpetrators' remains had to be done in a way that neither glorified them nor treated them with the level of respect accorded their victims. Yet there are still no defined policies for dealing with this challenge. After the killing of Osama Bin Laden by Navy SEALs, for instance, the U.S. government decided to dispose of his body in the ocean in accordance with Islamic practices for cases in which burial on land is not possible.28 American officials noted that they could have handled the body in a less respectful way—for instance, by dumping it from a helicopter without washing it or wrapping it in a white sheet—but they determined that a proper burial was the right thing to do. While Muslim scholars disputed the propriety of the effort—burial at sea is generally reserved for individuals who die at sea and cannot be brought to land—at least one called the decision "pragmatic."29

In the context of the September 11 attacks, this responsibility involves long-term, or perhaps permanent, storage because, while technically permitted to do so, neither the perpetrators' countries of origin nor their families were willing to claim their remains once they had been identified. For foreign governments (Saudi Arabia, Egypt, and the United Arab Emirates—all U.S. allies), claiming the remains would be tantamount to admitting that one of their citizens was responsible for the murder of 2,753 people. And in many ways, the decision to become a violent jihadist is a de facto rejection of belonging to any one country in favor of joining the community of believers who answer only to Allah.30 For families, claiming the remains would be an admission that their kin was indeed a terrorist. As such, the remains of the perpetrators are interred separately from victims' remains in an undisclosed location under the control of the Office of the Chief Medical Examiner (OCME).

Controversy

While this book focuses extensively on the controversies that emerged over efforts to recover, identify, and memorialize the victims of the World Trade Center attacks, I do not wish to suggest that controversy itself is bad—in fact, controversy is perfectly normal in this situation.31 The intersection of personal pain, anger, and grief with commerce, real estate, and politics ought to provoke debate and disagreement in a democratic society. To think otherwise would be, as historian Edward Linenthal writes, a "strange assumption."32 Controversies give us a window into what people care most about and how things might have worked out differently. They also help us understand how cultural meaning and memory are produced around painful events of the past.33
By studying controversies as they occur, we can see how disagreements were—or were not—resolved, and how and why certain groups continue to contest the matter at hand after the dispute has been formally resolved. We can also see why these groups are occasionally successful in reopening debate, and why they usually are not.34

Ownership

The desire of local and national leaders to rapidly repair the hole in lower Manhattan's fabric was challenged by the multiple meanings of the site. For rescuers, it was a disaster area to be tamed; for military and national leadership, it was the site of an enemy attack; for residents, it was a shocking and traumatic violation of their homes, communities, and everyday lives, not to mention an environmental nightmare; for families, it was a place of mourning, where a loved one breathed his or her last breath, and a cemetery; for the public, it was a place of absence where buildings once stood and lives were once lived, a place of protest, a tourist site, a place of national trauma, a (re)construction site, a neighborhood, a commercial district, and the center of global capitalism; for the architecture community, it was an opportunity to make an architectural and urban planning statement while remembering the victims and revitalizing lower Manhattan. The presence of so many stakeholders with so many agendas meant that the World Trade Center became a new kind of battleground: an economic, legal, and moral one over who could claim ownership of the site.35

At the most basic level, there were significant contractual and financial battles over proprietorship and control of the property. More conceptually, there was strong disagreement about the overall uses to which the site should be put (in which authority was based on expertise, whether professional or lay, and asserted by urban planners, architects, residents, or business owners). Finally, there were ethical debates about what ought to be done to honor and respect the thousands of lives lost on September 11 (in which authority was moral, political, and nationalistic and asserted by all parties, but most forcefully by the families of victims and their advocates).

Sacred Space

Soon after the dust and smoke settled, the question of whether the World Trade Center site would be preserved as a sacred space or brought back to life as the heart of a vibrant, revitalized neighborhood came to the fore.36 For those who did not see the site as inherently sacred, the best option was to clean up, rebuild, and get on with living. For those who did, the site had to pay homage to the victims of the attacks and could not simply be redeveloped as if nothing had happened there. Ultimately, the design and planning of the redevelopment of the site was about balancing the two—and most of the disputes about the remains revolved around the extent to which the activist families believed the planners did or did not recognize the sacredness of the site.

So, in what sense can the sacredness of the World Trade Center be understood? Religious studies scholar David Chidester and historian Edward Linenthal highlight two broad theoretical frameworks that can be used to answer this question. One argument holds that sacredness is physical and emerges from the place itself—either as a result of events that happened there, or from some essential property of the site that projects power or spiritual qualities that elevate it above other locations. This is the view held by many family members who lost loved ones on 9/11.
The other argument, which emerges especially from the work of French sociologist Émile Durkheim, is situational.37 In this view, sacredness is not inherent but is produced through human action for specific social ends. The sacred is produced through the cultural work of sacralization. Chidester and Linenthal go on to argue that there are three main characteristics of sacred space: it serves as a place for rituals, which they define as formalized, repeatable symbolic performances; it causes visitors to focus on questions of what it means to be a particular kind of person in a meaningful world; and, finally, its power makes it socially valuable and the subject of contestation. Control of sacred space thus becomes an exercise of power. Indeed, a key aspect of the story told in this book is that of families of victims who were fighting to preserve a space for their loved ones' memories, free from other interpretations or stories of anyone else's suffering. At the same time, other stakeholders sought to erase what happened, or at least to provide alternative meanings that enabled life, and commerce, to resume at the site. In the end, a sort of compromise was reached and the World Trade Center site was divided into gradations of sacredness: the OCME repository was completely sacred; the memorial and museum less so, but still partly sacred; and the rest of the site was profane, though still requiring some degree of reverence and respect.

Museum studies scholar Paul Williams invokes the concept of "secular sacredness" to discuss memorialization, precisely because of the complex, ambivalent nature of these remains and the numerous religious understandings of the "sacred" that exist around the world. In contexts like the World Trade Center site, traditional notions of religious sacredness break down for many reasons, not least of which is the fact that visitors come from numerous religious traditions, and that one religion or another is also implicated in the atrocity being remembered. Even when this is not the case, religion rarely provides an acceptable explanation for why people commit inhuman acts. Further, most memorial museums ultimately put forth a universalist vision that all human lives have value and that secular, rational support for human rights is our best hope for a more peaceful and just future.38

Williams notes the often-contradictory effects of displaying and/or housing remains within memorial museums. While the presence of human remains signals the importance of the site and enables rituals regarding the dead to take place (such as pilgrimage, prayer, and mourning), the display of these remains may render them profane by preventing traditional (and religiously bound) funerary rites. The remains lend the site authenticity and connect visitors to the tragedy that happened there, yet using them in this way can further reinforce their profane condition. Ultimately, the 9/11 Memorial Museum decided to thread this needle by storing the remains out of view of the public, but with the public's knowledge. The designation of the repository as a "separate space" walled off from the memorial museum denotes both its power and importance to the site overall (in that the remains make it impossible to deny that death occurred there) and the potential the remains have to taint the site in some way if not handled appropriately. This placement became a contentious issue for many victims' families.
They felt that their loved ones, or the loved ones of other families in their position, would become a museum exhibit and therefore be debased, or rendered mere objects.

Nationalism

Complicating matters even more is the nationalist dimension of 9/11. While the victims of 9/11 may not have died in direct service to the nation, and many of them were citizens of other countries, their deaths took on a broader meaning for all Americans because they were targeted by enemies of the United States. When the city opened a viewing platform at Ground Zero at the end of December 2001 to allow the public to see the progress being made in the Pit and to pay tribute to the victims of the attacks, and to bring tourists back to lower Manhattan, outgoing mayor Rudy Giuliani promised visitors a moving experience: “This gives you all kinds of feelings of sorrow and then tremendous feelings of patriotism. I really urge Americans to come here, and everybody to come here, and say a little prayer and just reflect on the whole history of America and how important democracy is to us.”39 Further linking the site to notions of patriotism and sacrifice, Giuliani noted that denying the public the right to view Ground Zero would be like “denying people access to other sites of [national] historic significance, like Gettysburg or Normandy.”40 It was in this vein that New York governor George Pataki decided to recite the Gettysburg Address, the speech that Abraham Lincoln delivered in 1863 dedicating a cemetery for Union soldiers killed in the Battle of Gettysburg. In introducing Pataki, New York City mayor Michael Bloomberg noted that “139 years ago President Abraham Lincoln looked out at his wounded nation as he stood on a once beautiful field that had become its saddest and largest burial ground. Then it was Gettysburg. Today it is the World Trade Center, where we gather on native soil to share our common grief.” In these two sentences, Bloomberg situated the victims of the World Trade Center attacks within what Linenthal describes as the “comforting narrative of patriotic sacrifice”—the notion that freedom has a price, and the individuals who died on that day sacrificed their lives for all of us.41 Pataki then recited the speech verbatim: “Fourscore and seven years ago our fathers brought forth on this continent a new nation, conceived in liberty and dedicated to the proposition that all men are created equal. Now we are engaged in a great civil war, testing whether that nation or any nation so conceived and so dedicated can long endure.” Invoking the definition of sacred space as a place that is inherently meaningful because of what happened there, Pataki continued, “We are met on a great battlefield of that war. We have come to dedicate a portion of that field as a final resting-place for those who here gave their lives that that nation might live. . . . But in a larger sense, we cannot dedicate, we cannot consecrate, we cannot hallow this ground. The brave men, living and dead, who struggled here have consecrated it far above our poor power to add or detract.
The world will little note, nor long remember what we say here, but it can never forget what they did here.” Pataki finally read the speech’s conclusion, which demands that the soldiers who perished on behalf of the Union not die in vain and that “this nation under God shall have a new birth of freedom, and that government of the people, by the people, for the people shall not perish from the earth.” The decision to recite the Gettysburg Address was seemingly an effort to rise above politics and to place the dead at the forefront of the commemoration, yet its inclusion in the ceremony suggests that state authorities felt obliged to acknowledge the dead in some public way without overtly making political capital out of the event. It is interesting that the common grief and national sadness was forcefully articulated not by President Bush or other national leaders, but by local politicians with grand aspirations—Bloomberg, Pataki, and former mayor Rudy Giuliani. In this way, the nation was brought together in a shared sense of grief to form a political community of mourners.42 This community was not always tolerant, however. Individuals and institutions with actual or suspected ties to Islam or the Middle East became targets of harassment and violence.43 Thus, the dead were not remembered in a politically neutral manner. Linking the World Trade Center attacks to the bloodiest battle that had ever taken place on American soil had serious political ramifications. It placed the World Trade Center attacks among the most important events in the history of the United States. It also signaled that the United States, as a nation, was engaged in a battle between good and evil in which the future of freedom and the democratic way of life were at stake. And, given that the core of the Gettysburg Address is an explicit acknowledgment of the sacredness of the Gettysburg site, Pataki was formally stating that Ground Zero was in a very real sense hallowed ground—a place where patriots sacrificed their lives for the nation. Yet this decision was odd in many ways. The Civil War pitted Americans against one another, with the very survival of the nation at stake. The 9/11 perpetrators were an amorphous group of terrorists who belonged to no one particular country—thus the United States was not at war with itself, with another nation, or with any one clearly defined entity. What’s more, the victims at Gettysburg were soldiers who went into battle knowing that they stood a good chance of death or injury. Except for the uniformed service personnel who rushed into the twin towers, the victims of the World Trade Center attacks were civilians in every sense of the word. They had gone to work that sunny, warm morning expecting to do their jobs and then return home at the end of the day to family and friends. They had no intention or expectation of dying, and certainly were not representing the nation in any way other than as ordinary citizens. Yet none of that seemed to matter to those hungry for retribution. The rhetoric and actions of politicians made them into de facto martyrs whose lives, and deaths, needed not just to be remembered and honored, but also avenged.44 Further, Lincoln went to Gettysburg to consecrate a cemetery for the dead. One year after the World Trade Center attacks it was clear that the site would be neither a cemetery nor principally a memorial to the victims of the attacks. Unlike the battlefield at Gettysburg, the World Trade Center was not a placid piece of farmland.
It was an urban center, and not just any urban center. The World Trade Center was at the heart of the nation’s largest and most important city—capital of the arts and finance, a canyon of majestic skyscrapers, the home of the Statue of Liberty and the symbolic font of America’s rich immigrant tradition, and generally the place that defines what it means to be successful in the United States. It was also home to eight million people who were fiercely proud of being New Yorkers. There were simply too many stakeholders involved for the stated sacredness of the site to last. At Ground Zero, we can see the interests and dignity of the dead (and their families) clashing with broader local, national, and global interests. This tension is evident in the progressive reduction of space that was devoted to commemorating the victims of the attacks. Mayor Giuliani and many family groups initially argued that the entire sixteen-acre site ought to be devoted to remembering the victims, but most of them quickly settled for eight acres. Soon, “sacred ground” became limited to the “footprints” of the towers. According to cultural studies scholar Marita Sturken, “The idea of a building’s footprint evokes a sense that a structure is anchored in the ground. It is also anthropomorphic, as it implies that the building left a trace, like a human footprint, on the ground.”45 This notion that the space where a building once stood is a suitable place to mourn the dead is also seen in the Oklahoma City memorial. “The emphasis on the footprints of the two towers demonstrates a desire to situate the towers’ absence within a recognizable tradition of memorial sites. The idea that a destroyed structure leaves a footprint evokes the site-specific concept of ruins in modernity. In the case of Ground Zero, one could surmise that a desire to reimagine the towers as having left a footprint is a desire to imagine that the towers left an imprint on the ground.”46 Other than the presence of Mayor Bloomberg, former mayor Rudy Giuliani (who began the name reading), the governors of New York and New Jersey, and the regional accents of many of the people who read names, allusions to New York City were notably absent from the ceremony. The rest of the more than two-and-a-half-hour anniversary ceremony consisted of dignitaries, survivors of the attacks, and relatives of the victims of the attacks reading the names of the 2,753 people who perished in the attacks on the World Trade Center. In the background, a string ensemble, including cellist Yo-Yo Ma, played wistful, mournful music. The pain in the voices of the readers was palpable, and one can see the grief in the faces and bodies of families who were in the Pit. The scene remains nothing less than heartbreaking. Only one other speech was given that day: New Jersey governor James E. McGreevey closed out the ceremony by reading excerpts of the Declaration of Independence. Thus, at least at the one-year anniversary of the attacks, 9/11 was framed in terms of martyrs and a threatened democratic nation, reinforcing the notion that the deaths of the victims (even foreigners and the undocumented) were a patriotic sacrifice and not a random act of murder—and that the entire nation belonged to a single community of mourners united by common grief (for the victims), common principles (democracy and freedom), and common purpose (to avenge the deaths of the victims, to defend our principles, and to defeat the terrorists and terrorism itself).
Grief and Mourning

The aftermath of 9/11 also made plain the reality that grief is both an individual and a collective phenomenon which, contrary to popular belief, does not operate on a regular schedule. While some people pushed their lives forward in the aftermath of 9/11, others remained focused on the events more than a decade after their occurrence. One of the goals of this book is to understand the actions of the small group of families that have remained active in seeking to influence the memorialization of the victims and the redevelopment of the World Trade Center site more than a decade after the attacks. It is tempting to pathologize the actions of these individuals, first dubbed the “memorial warriors” and later the “grief police” by New York Magazine, to make it seem as if they have not “moved on” from their loss, or that they are seeking to gain control of the rebuilding and memorialization process as a way to compensate for their inability to bring back their loved ones.47 To do so, however, would be to assume that their demands are unreasonable and that those in power have truly made a good-faith effort to accommodate their needs and desires. Numbering no more than a few dozen a decade after 9/11, this group was made up primarily of middle-class wives, mothers, and sisters of victims and firefighter fathers of firefighter sons, almost all from the New York metropolitan area. This group self-consciously refused to relinquish their ownership claims or moral authority over the human remains associated with the World Trade Center attack victims and indeed the site itself. They also claimed to speak for a sizable population of 9/11 families that agreed with them but were unable to speak out publicly on 9/11 matters for reasons of emotion, family duties, economic hardship, or geographic distance from New York. They also seem to have been ordinary people before 9/11 who had previously shown little desire to be public advocates. Loss, anger, and a feeling that the Bloomberg administration and the Lower Manhattan Development Corporation (LMDC) simply didn’t care about them transformed them in fundamental ways. “I was a different person before 9/11,” Sally Regenhard, mother of firefighter Christian Regenhard, told journalist Deborah Sontag. “I tried to speak out, let’s say in Co-op City, where I lived. But now—I’m fueled by adrenaline, outrage and love for my son and that has made me a bigger pain in the ass than I ever was before.”48 How should we try to understand their activities and the intense mainstream media interest they received? Linenthal highlights four narratives that came to predominate after the Oklahoma City bombing: a progressive narrative (in which the city recovered from the event and came back stronger than ever), a redemptive narrative (that God had a plan for those who died and those who lived), a toxic narrative (that the city had suffered a great trauma from which it would never fully recover), and a traumatic narrative (that the city had suffered a great trauma from which it had to recover).49 These four narrative structures have many parallels in the context of 9/11, with the addition of a fifth narrative structure that focused on the political dimensions of the attacks and the emergence of a long, global war against terrorism.
For Linenthal, the Oklahoma City bombing is an “unfinished bombing,” in that it continues to “claim people through suicide, to shatter families through divorce, substance abuse, and the corrosive effects of profound and seemingly endless grief. It is a toxic narrative, and it exists alongside of, and intermingled with, the other story lines.”50 Despite the existence of this toxic narrative, Linenthal notes that “there seemed throughout the city—indeed throughout the culture—an unspoken statute of limitations on mourning. The failure to ‘get on with it’ or ‘be back to your old self’ after a prescribed period indicated, according to the traumatic vision, the presence of an illness and the need for treatment.”51 In other words, grief and mourning were defined by what was considered socially acceptable. This included when, how, and for how long it was appropriate to be despondent, which types of behaviors were normal and which were not, and when one should be done with the mourning phase and transition to the getting-on-with-life phase. Linenthal notes that this perspective is most consistent with the progressive and redemptive narratives, but only the toxic narrative encapsulates the reality of chronic affliction and an inability to ‘put the past behind you’ that so many of the bereaved felt. The toxic narrative suggests not a return to the old self or putting one’s life back together, but rather a shaping of a new self in the aftermath of such an experience. Linenthal demonstrates that there were strong differences of opinion within the bereaved community about how public grief ought to be. For some families, it was a private matter that was nobody’s business. Others, however, wanted the world to know who their loved ones were and offered a public eulogy by speaking to the press, making the funeral open to the media, or both. Those who opened up to the media gave voice to a kind of communal grief. Other families responded by retreating into the woodwork, while still others took on outspoken, often strongly political, activist roles, advocating for particular forms of memorialization and remembrance or changes in public policies regarding the legal system, victims’ rights, and the death penalty. Whatever the case, it is clear that all of the people who were killed in Oklahoma City and on September 11, 2001, died “culturally significant public deaths,” setting them apart from the thousands of people who die every year from everyday, usually invisible, violence.52 As such, their deaths were re-experienced on a daily basis by family members confronted with news stories of the bombings, and then during the trials of the perpetrators in the case of Oklahoma City and the nearly daily invocations of 9/11 in the mass media and by politicians during the decade after the event. The publicity associated with these deaths created an extended bereaved community, and hence a community of mourners that both comforted and intruded into the lives of the families of the dead.
Thus, the identity of the victims “not only signifies the relationship between a name and a set of physical remains but also encompasses the social ties that bind a person to a place, a time, and, most importantly, to other human beings.”53 The creation of this community of mourners highlights the three different ways that victims of mass atrocity become recognized: first and most obviously, scientifically, through the actions of forensic science and DNA identification; second, socially, when family, friends, and communities accept the scientific claim of identity made by the authorities; and finally, collectively, when the person is recognized by the broader national and international community through public commemorations and memorials, and fitted into a historical narrative about the past.54 There is, of course, spatial and temporal separation between each of the three moments of recognition, but they are interwoven in ways that will be explored throughout this book.

Memorials and the Nation

Memorials to tragic events are expected to serve many purposes: to remember the event; to explain its historical importance; to mourn the deceased; to remember their lives and give their deaths broader meaning; to highlight the threat of terrorism and violence; to tell the perpetrators and the world that the goodness of humanity was not defeated by terrorism or violence; to serve the local community through nature, the arts, and public space; and to serve the nation as a source of pride, resilience, and political meaning.55 Memorials fundamentally remember events that were unexpected and create a disjuncture in our understanding of history and ourselves.56 They also suggest that the future should be different in some way. The decision to memorialize a set of past events plays a part in narrating what type of future should exist, both at the site of the memorial and in the society doing the remembering. When an event is memorialized, there is a certain appeal in focusing primarily on the redemptive narrative identified by Linenthal in the aftermath of the Oklahoma City bombing—showcasing the better side of human nature, the pride and work ethic of a city or region, the essential goodness of the nation and its people, and the capacity for regeneration and redevelopment in areas affected by terrible events. Perhaps because of the desire for redemption, since World War II, and especially since the 1980s, American memorial culture has been expanded and democratized, both in the sense of which events get memorialized and of who has a say in how the memorialization will be done. The time that passes from event to memorial has also been compressed.57 Rather than discussions about memorialization taking place years or even decades after something happened, they often start days after an event.58 There is an intense desire not to forget tragedies and lives lost, and to convey some sort of message—especially a positive one—to future generations about the event.59 The recovery, identification, and memorialization of the victims of the September 11 World Trade Center attacks brought science to bear on questions of identity, politics, and memory. The promise of identifying human remains through genetic technologies has fundamentally altered the way we will memorialize the dead in the future. These changes are not unambiguously good. Will the value of victims be measured by how much technology is applied to identify them?
Will the loss of memorials to the unknown dead, and the lack of a clear end to identification efforts, change the way we remember traumatic events—both individually and as communities? How will the culture of memorialization change when there are no longer anonymous remains that belong to the collective, but only remains awaiting ever more powerful technologies to be identified and repatriated to families? These questions cannot be answered yet, but the response to the September 11 World Trade Center attacks can at least provide us with some clues. It is to this story that we turn now.
Toward an Anthropology of Fragments, Instabilities, and Incomplete Transitions

The post–cold war era has been a watershed for anthropology as it has been for many of the societies and communities that anthropologists study. This volume’s essays on Vietnamese refugees, the fallout of German reunification for East German jurists, and the rise of the powerful informal economy in post-Soviet Russia illustrate the diverse and changing legacies of the period. The transitions from socialist and capitalist authoritarianisms to varieties of market-driven democracy are neither linear nor unilateral processes, although the commentators and politicians who make their livings by characterizing such developments have for the most part been slow to recognize this (Weisberg 1999). Those commentators who praise global neoliberal economic integration, which would dismantle highly centralized socialist state economies and the protectionist markets of the West, do not always acknowledge that this moment is only one of several historical waves of capital expansion with its Janus-faced array of new possibilities and harsh dislocations. The modernist impulse to see rational progress or “the end of history” in these political and economic transformations is thwarted at every turn by far more complex and less controllable realities (Fukuyama 1993; Scott 1998). Once one considers the complex interplay of politics and economics in a world of striking cultural diversity, it becomes evident that change does not generate shifts from one coherent formation to another. Howard De Nike’s essay for this volume captures an important dimension of the post-socialist political and economic transition in Germany through a case study of the troubled unification of the democratic and socialist legal systems under the mandate of the West German judiciary. His analysis illustrates the central role of history and memory in the consolidation of West German dominance—the strategic conflation of the Nazi and communist periods used to strip the senior generation of East Berlin judges and prosecutors of their legitimacy and jobs while the West German establishment embraced their own past as one of unproblematic anti-fascism. This analysis illustrates the doubleness of bureaucratic memory through which the past is reconstructed in terms of a highly partisan present to justify the marginalization and disintegration of a high-status social field. That there were more collegial alternatives to the scapegoating of the post-WWII generation of jurists in other parts of Germany underscores the political choices that were involved in this “democratic” and apparently authoritarian transition. If De Nike’s essay captures the fulcrum point of a classic post–cold war nationalist transition, then Stacia Zabusky’s essay on big science and European integration documents the emergence of new post-nationalist formations of elite work. These ephemeral virtual networks demand great individual creativity and the tolerance of fragmentation and uncertainty in the face of ongoing work-group and institutional instability. This pattern echoes what Aihwa Ong (1999) describes as “flexible citizenship” in the post-nationalist world of global capital flows. Communities, whatever their scale, continue in heterogeneous ways to reconstitute themselves as they make the world their own, inevitably in the face of tremendous economic and political constraints on their actions.
As a result, anthropology has increasingly become the study of instability and fragmentation, of systems caught in contradictory currents of change. It is important to recognize that Carol Greenhouse’s and Beth Mertz’s framings of this problematic in terms of ethnographic feasibility and a humanistic quest for ethical understanding and this essay’s concern with political economies are all parts of the same project. The analyses in this volume respond to these sea changes with a range of issues and framings. The authors seek to characterize current national and transnational engagements and to explore historical processes in terms specific to their ethnographic contexts. The case studies illustrate that, while instability may be a marker of our era, it is hardly a monopoly of the present. Michael Taussig (1992, 1997) has critiqued the unreal—or rather surreal—character of stability and the ways political regimes attempt to mask their destructive fragmentation of social life for international audiences, even as they characteristically pursue policies to heighten insecurity and uncertainty in the lives they seek to control (Sluka 1999). Most often, fragmentation has been attributed to state violence, to dehumanizing colonialism and authoritarianism. At times this construction affirms a hierarchy of nations with unspoken hubris: the West is democratic and above violent internal politics; others are not. Some of the most striking current political anthropology, however, questions this formulation in subtle ways, widening our understanding of fragmentation and violence through findings of violent fragmentation across authoritarian and democratic regimes (Tambiah 1997; Aretxaga 1997, 1999). The question for engaged ethnographers is how to resist becoming complicit in the misrepresentation of normative (nationalistic) politics as stable systems. And how not to leave unprobed constructions that normalize the danger of Otherness as threats that emanate from outside stable systems or as sedition from within (Warren 1993, 1999). The concept of stability twinned with the ominous threat of instability conjures a world of bounded units—the territorially defined nation-states of political science and the discrete cultures or societies of anthropological accounts—in what plays out to be (as often as not) a defensive support of the status quo of power arrangements. There are several ways out of this situation. One is to study the political acts of conjuring, idealizing, and protecting stability, of representing populations as bounded nations or cultures, and of pursuing modernist rationalism as an end in itself. This is what Scott (1998), Holston (1993), Fox (1990), Ferguson (1994), Schirmer (1999), and others have done so well in powerful social critiques. Another is to focus on instability itself, on communities caught in contradictory transformations, to pursue the current tensions and mismatches of neoliberal capitalism and democracy that are played out in the practice of local and national politics. This is what Tambiah (1997), Aretxaga (1999), Comaroff and Comaroff (1999), Nash (2001), the contributors to Sluka (1999), and many of the essays in this volume do. Such projects involve first moving away from a uniquely state-centric analysis, from national interest as the most important measure of political calculation, to a more finely grained picture of multiple centers of politics and social interests.
They are also increasingly a move away from regional studies—originally a product of cold war research funding that channeled transnational studies down the well-trodden path of great-power spheres of influence—and toward a more fluid sense of transnationalism and international connections (Kearney 1996; Appadurai 1996). This reframing calls for a recognition of the impact of the heterogeneous global flows of capital and culture through emerging regional economic blocs, border-transgressing mass media, major institutions promoting international law and human rights, nongovernmental organizations (NGOs), and the ever more complex diasporas of refugees, immigrants, and migratory workers. Few of these patterns are novel, as the feminist anthropological literature on footloose transnational industrial production made clear twenty years ago (Nash and Fernández-Kelly 1983). Much is now being made about the impact of neoliberalism and the jarringly rapid transnational flow of capital without concern for state borders. Grassroots protests against the World Trade Organization (WTO), International Monetary Fund (IMF), and World Bank policies that mushroomed after 1999 have brought these issues into the realm of public debate. There is now recognition that state sovereignty has been weakened as countries are subject to powerful and volatile economic forces beyond their immediate control. Transnational patterns of investment and currency speculation pursue profit-maximizing strategies that cast the world in terms of economic markets rather than in terms of the security of families and communities (Trouillot 2001; Nash 2001; Stephen 2001). From the national perspective, the issue is not just what producers are paid for their labor and companies for their exports but also how countries cope with transnational industries and their capacity to cut jobs and relocate at will to pursue lower production costs and less regulation. Before financing major development projects or rescuing national economies in monetary crisis, international organizations routinely insist on structural reforms consistent with global norms for World Bank and IMF loans. Thus, loans are granted on conditional terms that compel economic restructuring. The resulting pressure to privatize what have conventionally been public services threatens government job patronage, great and small, and the services and subsidies that keep transportation and food prices lower for the poor than markets otherwise demand. A growing gap between the rich (who are able to benefit from these currents of change) and the poor (who face economic stresses and unpredictabilities that endanger their basic subsistence strategies) appears to be the price of doing business in the neoliberal era. While aggregate statistics show that the growth in income gaps began to level off for some world regions in the 1990s, the reality looks very different when one examines individual communities and displaced populations that suffer the brunt of uneven economic development.
The Zapatista rebellion in Mexico, riots and ethnic tensions in Indonesia, and the mass uprising that triggered the 2000 coup in Ecuador demonstrate the intensity of citizen responses to these economic shocks.1 The 2000 demonstrations against the IMF and World Bank were designed to highlight the interconnected character of the global economy, the overwhelming debt burden carried by some of the poorest countries, and the severe repercussions of neoliberal reforms for the most vulnerable populations. Increasingly, we see regional blocs emerging: the EU, NAFTA, and other regional trade alliances that reflect the erosion of any one state’s ability to respond to local needs. Nevertheless, it remains very difficult for anthropologists who have worked on states with aggressive authoritarian histories to support the argument that states are growing irrelevant in the global economic order. Despite the global economy and the intervention of the international community into selected regional disputes such as the Kosovo war, it is still clear that many states maintain coercive powers over their citizens and that militaries still use the language of national security to repress dissent. History continues to hold many lessons for us on this score. One thematic that crosscuts the essays in this volume is the coercive nature of states and the ironies of colonial rule. Robert Gordon’s essay on the South African administration of Namibia after World War I is particularly astute in rethinking the issue of state violence, power, and subjectivity in a colonial situation. This colonial state was by all measures underadministered. The mission of civilizing the native communities and establishing the rule of (procedural) law was used by those in power to assert the legitimacy of an ethnically stratified polity. Colonial procedures and social rituals, as invented traditions, were tactically used to heighten the distance between the colonizers and the native communities. Lacking the capacity for full-scale surveillance, the government solved the problem of state control through policies that devolved substantial powers to the settlers. The measure of vagrancy legislation was not, Gordon argues, the rate of arrests but rather that this policy, in effect, gave settlers special powers as state surrogates, including the capacity to intervene at whim in the lives of native families. Yet, following this power arrangement in practice reveals important ironies about colonial rule. While the colonial system normalized and legitimized settler violence, it failed to calm their anxieties about the possibility of native revolts, and this anxiety was exacerbated by the settlers’ own demographic and political fragmentation. As Gordon shows, there is only the delusion of an exit from the contradictions of a colonial political formation built on this particular combination of the rule of law, violent control, and settler anxiety. The analysis raises the issue of how much the controller was controlled as state policies designed a highly brittle and ambivalent role for their local surrogates.2 In response, one wonders how often native communities played on their capacity to precipitate settler panic. Carroll Lewin analyzes another colonial form in her examination of the process through which German occupiers instituted ghettos to forcibly segregate Jewish populations in the newly conquered territories of Eastern Europe during the Holocaust.
Resettlement was used by the Germans to strip Jews of their property, to force them into slave labor for the war effort, and ultimately to subject them to extermination. At the center of German occupation was an imposed system of self-rule through Jewish councils (Judenräte) that were given the duties of dealing with the conflicting demands of German bureaucracies and regulating many aspects of ghetto social life, including the rationing of food, organization of forced-labor squads, provision of basic services, and the fulfillment of deportation quotas to what turned out to be death camps. In a climate of terrible violence and uncertainty, Nazi policy created terrible existential dilemmas for the ghetto populace. By pursuing these dilemmas, Lewin addresses an issue which Gordon’s framing does not: the response of the subaltern to the hegemonic modes of control that envelop them. Through case studies of the council leadership in the Lodz, Warsaw, and Vilna ghettos, she illustrates the varied approaches that leaders brought to their roles—and the varied responses of other Jews to their actions—as they worked under the unstable Nazi deception that by cooperating with German authorities they could help their fellow Jews. Lewin takes a second analytical pass on ghetto politics to show how the system of conflicting German and Jewish rationalities (or cultural logics) fit together in this terrorist state. The horror is that the desire to find even a contingent morality and mutuality was destined to fail in the face of this factionalized—and merciless—power structure. Only the interplay of deception and denial kept alive the illusion that work would slow deportation, that the loss of some would permit the survival of others, and that resistance would only bring collective repression. Lewin would likely agree that German-imposed Jewish self-administration in the ghettos was an extreme variant of a common political form that reappears in the present neocolonial terrorist state and promotes forms of self-rule and self-surveillance in addition to death squads and genocidal policies that undercut resistance.
It also compels local culture makers to seek ways of expressing the resulting crisis of meaning in innovative ways—at times through surrealist imagery that transcends the representational limits of language—and causes people to seek cultural forms through which they can validate communal life, however momentarily. In Guatemala, these countercurrent responses to violence emerged at a moment in the 1980s when community survival was at stake (Warren 1998). As in the German case, there has been great dispute over wider citizen awareness of the genocide.≄ The problem for the anthropological study of state terrorism—given limited access to the original events and the ambiguous status of memory∂—is to represent the terms of conflicting rationalities and existential dilemmas in situations where power is dramatically skewed. In such situations, those in power seek to control civilians through practices that heighten insecurity and foster the displacement of political violence onto other social antagonisms. If one looks across violent regimes, there is no universal pattern to be found, so the ethnographic goal remains that of understanding the variety of situations and their outcomes. Lewin would add to this burden the challenge of revealing the ways that, in such overwhelming circumstances, people seek a quotidian normality—the macabre children’s games and organized cultural 386 Kay B. Warren activities of the ghettos—and the mimesis of moral mutuality and humanity that some Judenrate selectively extended while negotiating with their captors, even as genocidal politics made a mockery of it. It is the interplay of these di√erent rationalities during the German occupation that, as Lewin observes, at once made life livable and left intact structures of control that would take this away. The challenge for ethnographers of coercive states is to position themselves so that they can narrate the interplay of these conflicting rationalities. This is extraordinarily di≈cult in war zones where anthropologists are not exempt from the cultures of terror we seek to describe. During our field research, many of us experience existential dilemmas that echo the chronic uncertainties lived by those with whom we work. We come to know about and witness events that for a variety of ethical reasons we cannot fully reveal (Warren 1989, 1998, 2001). For contemporary anthropology, the interpretive dilemma becomes finding ways to portray the coexistence of powerful kinds of authoritarianism that have been changed by intensifying transnational economics and the resulting social dilemmas. A second thematic in these essays is globalization. Ongoing theorizing by Carolyn Nordstrom, Michel-Rolph Trouillot, and Aihwa Ong suggests that anthropologists are repositioning themselves in response to a variety of new circumstances. Nordstrom (2000a, 2000b, n.d.) argues for a new form of economic anthropology. Her goal is the creations of methodologies and ethnographic forms to study wartime trade alliances, or ‘‘shadow networks,’’ that crosscut countries, languages, and identity groups in Angola and Mozambique. As she observes, there has been little ethnography that examines internal wars as interstate events that generate transnational patterns of exchange (2000b:14). 
The recent convergence in Africa of weak states, chronic warfare, and crosscutting markets has spurred the development of transnational social fields that are the conduits for what Nordstrom terms ‘‘il/licit trade,’’ which often operates in the same spheres of influence as formal trade (2000a). War in remote corners of the world creates the demand for a jumble of commodities, services, and humans, demeaned as expendable objects. While regional and international markets are hungry for ‘‘local products’’ such as gems, strategic minerals, oils, drugs, timber, mercenaries, and war orphans, wars also generate the demand for weapons, private armies, computers, and luxury goods for wartime elites. There is a particularly anthropological dimension to this project Toward an Anthropology of Fragments 387 in that these informal economies generate their own alliances, norms for exchange, and authority structures. Nordstrom criticizes the conventional focus on formal institutions and the consequent neglect of informal economies in these situations. Nancy Ries’s essay in this volume on the collapse of state socialism in Russia takes on Nordstrom’s agenda, the tracing of the growing impact of vibrant informal economies where states are weak, in a very di√erent context. In Russia, as the state lost the power to politically and economically regulate the economy, settle disputes, or enforce contracts, elements of the shadow economy filled the vacuum. Ries argues that the continuities in this cultural system are striking. For many citizens, participation in the informal economy through strategies of mutual assistance, domestic gardens, food hoarding, and pilfering remain key to survival in the face of meagre and often late paychecks, massive inflation, and dwindling benefits and employment from the state. Aggressive entrepreneurs have the option of working in the ever-expanding shadow economy that has left urban life saturated with illegal activities and violent enforcers. Life with the Russian market economy has given rise to a great deal of culture work, especially the crafting of narratives by the general populace to express their growing cynicism and to make sense of the uncertainty of authority, reciprocity, and risk in the new world order. What is striking in these narratives of moral economy is the Russian yearning for a strong institutionalized state, one in which, for some, an idealized and locally responsive mafia would supply order and enforcement that the o≈cial state cannot (or does) not. This marks a shift across the 1990s from the dread of the mafia as a source of great evil to acceptance of its role in ordering social relations. Although Ries ascribes a particularly Russian imagery to this situation of eroding trust, ironic dreams of a well-ordered world are not uncommon for postauthoritarian states in other parts of the world. The populations of Latin American countries, such as Guatemala, have struggled after the antiguerrilla wars of the 1980s with the loss of legitimacy of their legal systems, growing criminal delinquents who prey on the common people, and chronic economic uncertainty. Interestingly, local communities there have also been faced with the task of making sense of the ways violence breaks down trust and reciprocity and, in Guatemala, they have used Maya legends of transforming selves to explore the existential dilemmas of the danger of trust (Warren 1998). 388 Kay B. 
Since the return to civilian rule in the late 1980s and the rise of general crime following the disarming of the guerrillas, civil patrols, and the military, one hears an undercurrent of yearning for rescue from social disorder by leaders who express law-and-order politics and populist concerns, even if in the past these same figures were associated with brutal counterinsurgency violence against civilians. In a number of Latin American countries, similar desires have been very cleverly manipulated by parties on the Right. Michel-Rolph Trouillot (2001) agrees that despite the strident rhetorics of sovereignty and nationalism it is time to rethink our understanding of states in light of globalization. He advocates approaching the state as a multiplicity of social fields, boundaries, and institutions. For him, ethnographic research becomes the study of ongoing events and processes that reflect the dynamics of transnational power relations, the circulation of capital and growing concentration of economic power, and the restructuring of labor markets. In the volume, several essays reveal how complex, heterogeneous, and violent these social fields can be, especially for those at the margins. Phil Parnell focuses his description of poor Filipinos struggling for urban land and housing in the context of the “composite state,” the result of fragmented cultures, alliances, and patron-client relations. Elizabeth Faier traces the fragmented lives of Palestinian feminist activists and nationalists in Israel who attempt, at great personal cost, to find ways of bridging the disjunctures between their multifaceted lives as urban activists and as rural daughters. They struggle with the tensions involved in social advocacy on two fronts, with the gendered realities of their daily lives, and with the possibility of honor killings by their own family members for their challenges to traditional patriarchy. Trouillot offers a conceptual innovation, the study of transnational and state powers through their effects. Among these “state effects” are the production of individualized subjects, collective identities, languages of governance, and jurisdictional boundaries (4). With international development policies stemming from neoliberal economic models, states are yielding major functions to private groups and corporations. Moreover, as international organizations and NGOs assume state functions in areas such as economic development, peacekeeping, and education, they produce state effects in their own right. The ethnographic challenge, I would add, is coming to a fuller understanding of the interplay between transnational effects and domestic politics. James Freeman and Nguyen Dinh Huu’s study of Vietnamese refugees in this volume illustrates the interplay of states and the UN system of governance in the lives of unaccompanied minors who fled the country after the U.S. military retreat and the fall of Saigon in 1975.5 These children were relocated to camps in Hong Kong, Thailand, the Philippines, and elsewhere, and the authors focus on the violent ironies of contemporary transnationalism in which states determine who is an alien at their borders, and international organizations have the power to redesignate refugees as illegal immigrants.
Even as these detention centers became permanent homes over the years, their substandard living conditions and limited schooling and job training continued to underscore the refugees’ transitory status and were seen as appropriate ways to encourage the asylum seekers to return to their homeland. Though Freeman and Nguyen do not discuss life in the camps in any detail other than the incubation of alternative families and a violent youth culture, one suspects that the resulting political culture included the demonization of communism even as Vietnamese society moved on and readjusted to the end of the cold war. This would have contributed a bizarre time warp to the inmates’ many other dilemmas. Beyond fearing persecution, youths did not have the basic social skills and education to return to daily life in 1990s Vietnam, and they realistically worried about the prospect of inadequate support from distant relatives with their own problems. Freeman and Nguyen make a case for the injustice of Western-centric models of aid delivery promoted by the United Nations High Commissioner for Refugees (UNHCR) and its NGO affiliates, which acted as state surrogates in determining the fates—including forced repatriation in 1993—of these children. As NGO activists themselves, Freeman and Nguyen argue that there were a variety of options beyond the breakup of camp families and the forced repatriation of siblings to distant relatives, which were left unexplored yet might have better served these minors as they faced adulthood. Their critique of the UN bureaucracy—its discourse and practice of assistance—echoes Lewin’s discussion of the modernist need for closure which, in its quest for order as recognizable progress, denies the complexity of situations that fail to conform to this vision of change. In their view, this powerful transnational bureaucracy—with its own sovereignty and state effects—escapes accountability for its actions in a way that leaves its model of intervention intact no matter what its result. The rationality of the UNHCR’s actions rested in the organization’s methods for weighing the eligibility of individuals for third-country asylum and, absent this possibility, creating conventional understandings of the best interests of children. The UNHCR bureaucracy rotated staff through standardized positions with the result that officials were moved on before they could see the consequences of the decisions they made. On the grassroots level, NGOs did much of the monitoring and were pressured to generate reports that fit within the organization’s procedures and time frame. The authors charge that, in the worst situations, officials created a kind of double-speak that hid the actual mistreatment of children behind the humanitarian language of “durable solutions,” “family reunions,” and “orderly repatriation.” For the Vietnamese refugees, the problem was that this language assumed a stability of culture, country, family, and individual psychosocial development that cannot exist in a world where change has produced such radical disjunctures. In this context, to see stability where there has been very little is a misstep with severe consequences.
Viewed in a global context, I would add that it becomes apparent that international organizations are constrained by a crisis life cycle in which it is important for cases to be closed out—that is, successfully resolved in order to give meaning to the effort and to encourage international financial support—so the system can move on to the next crisis. The children became victims of a transnational boom-bust cycle in the funding of crisis aid, as Stephen Jackson has so insightfully pointed out, one that often generates its own unanticipated violence and corruption (1999). As a result, the state effects of these international organizations have their own life cycle that influences the organizations’ responses toward the people they reclassify in the fateful interplay of domestic and international policies. Finally, Aihwa Ong’s view of globalization (1999, n.d.) emphasizes regionally specific patterns of development that emerge from the differential demands of the global economy. The consequent degrading of state power has produced new political geographies that can be characterized by their “graduated sovereignty.” On the one hand, high-tech production in global information cities, industrial corridors, and growth triangles in Southeast Asia concentrates economic power that crosscuts state boundaries. Elite employees at these global centers cultivate identities and political subjectivities that echo transnational rather than national flows of capital. Yet other regions are marginalized by this process, especially when they are stigmatized as sources of cheap labor or as backwaters of economic change. The question for Ong, however, is not the presence of these new hierarchies but rather the insecurities and responses that these regional patterns engender in those who benefit from them as well as those who suffer from diminishing state services, fragmented citizenship, and growing economic insecurity. She argues for a reorientation of ethnography to study the ways that elite subjectivities are reshaped by languages of religion and community that stand apart from the conventional discourse of politics and social citizenship. Her dual perspective further argues for a special role that NGOs can play in communities that find themselves exploited and marginalized by neoliberal economics. Here she would have ethnographers focus on the ways that transnational norms of social justice have been appropriated in local efforts to build novel forms of social capital and self-reliance in order to cope with changing patterns of uncertainty and risk. Ong’s comprehensive framing of globalization leaves one with the feeling that many of us have part of the story without seeing the whole, and that anthropology, with its new commitment to multisited field research, needs to foster more integrative and collaborative research methodologies. One can see the contributions in this volume by Stacia Zabusky (on high-tech science in the European Union) and by Eve Darian-Smith (on the reinvention of Kent localism by both urban interlopers and locals who appropriate European Union law as a tactical weapon against encroachment and state regulation) as contributing to Ong’s call for the study of regionalisms as consequences of new forms of transnational production and politics. For the fuller European case, one would want to add more on the issues of guest workers and the transnational labor pools that European countries draw upon for their economies.
This volume’s essays argue that analytical insights about the nature of transnationalism, power, and changing subjectivities can be garnered by studying a variety of historical and current situations. The challenge for anthropologists studying contemporary situations is conceptual and ethnographic. How do we research situations of flux and fragmentation even as we are experientially and structurally part of the story? How do we create ethnographic genres to convey our findings? One way is through narratives of absence and displacement that capture the contradictory currents of change, changing social fields, and the failure of state institutions and older models of citizenship in the face of difficult transformations and transitions. Another way is to trace the social struggles and culture work of people attempting to make sense of and cope with the particular kinds of fragmentation and displacement they experience. Nordstrom, Trouillot, and Ong remind us of the importance of studying the intimate interpenetration of the local and the global and the importance of informal as well as formal power structures. A third approach argues that the anthropology of outrage—that is, the act of taking sides in political disputes and economic crises—is not sufficient in itself. Rather, anthropology’s unique capacity is to stand back and understand conflicting rationalities through more encompassing models that reveal the wider array of political economic interconnections and existential dilemmas of living in the global era.