In Our Image: Artificial Intelligence and the Human Spirit
Noreen Herzfeld

Presenting three approaches to the image of God and three parallel designs for artificial intelligence, Herzfeld examines what it means to "re-create" ourselves in the contexts of theology and science. She explores the difference between being "human" and being "humanlike" and suggests a Christian spirituality that's relevant in a technological age. 135 pages, softcover from Fortress.

From the preface

In an article in Wired, Bill Joy, chief scientist at Sun Microsystems, warns that advances in robotics and in nanotechnology could result, as soon as 2030, in a computer technology that may replace the human species.1 Hans Moravec, of the artificial intelligence (AI) lab at Carnegie Mellon, pushes the time back to 2040 but agrees, "by performing better and cheaper, the robots will displace humans from essential roles. Rather quickly, they could displace us from existence."2

What is a Christian to make of predictions such as these? Is the idea that computers are the next step in evolution compatible with traditional Christian understandings of what it means to be human? Is there anything about being human that machines will never duplicate? Even though computer science and technology have a long way to go before computers will begin to think or act at all like human beings, now is the time for us to examine what motivates the field of artificial intelligence and what exactly it is we hope to create through that field. Whether computers, our "mind children," as Moravec calls them, are positioned to replace humanity or to coexist with us, or whether we even wish to pursue the dream of AI at all, depends on which aspect or aspects of our own nature we hope to copy in our attempt to create autonomous machines.

It is this larger question of what it means to be human that is graphically posed by AI. The goal of AI is to create an "other" in our own image. That image will necessarily be partial; thus we must determine just what it is in ourselves that computers must possess or demonstrate to be considered our "mind children." The question of what we humans might share with one completely other to ourselves has been examined by Christian theologians through the concept of the image of God in which, according to Genesis 1, human beings were created. Is this image that humans share with God related to the image we wish to share with our own creation in AI? What part of our nature do we consider so important that we wish to image it in our creation? This book examines what it means to create in one's own image in the dual contexts of Christian theology and the field of artificial intelligence. In both fields, the image being transferred from creator to creature has been viewed in a variety of ways, yet is always something central to our understanding of what it means to be human. The goal is to see whether these understandings of the human condition, from two such disparate fields, are at all commensurable and to explore what implications our concept of being human might have, both for the project of creating and coexisting with artificially intelligent creatures and for the project of creating a Christian spirituality that is relevant in a technological age.
This book has been over ten years in the making. As a professor of computer science at a liberal arts college, I found myself one day asking why scientists were interested in creating an artificial intelligence. Philosophers we were reading for class, including Hubert Dreyfus and John Searle, had examined the question of whether it might be possible to create a machine with human-like intelligence. Computer scientists were looking at how one might do this. But nowhere in the literature could I find any consideration of why our society has had such a lingering fascination with the idea of creating an artificial person, a fascination evident both in the research labs of schools such as MIT, Stanford, and Carnegie Mellon, and in the stories and films of science fiction. While artificial intelligence as a field has shown minimal progress since its inception in the 1950s, the concept of a fully human-like computer seems to resurface every decade or so as a significant societal influence. But why work so hard to create something human-like, a "person," out of silicon when creating new persons in the old-fashioned way seems so much easier (and more fun!)? I decided to pursue this puzzle: "Why AI?"

I very quickly learned that my education as a computer scientist left me woefully ill-trained to answer this question. I knew a bit of the how, but very little of the why, of human motivations, of what drives us to seek one thing rather than another. This was a question of the human spirit, of what makes us who we are, of where we come from and where we are going. Thus I was led, after many twists and turns, to a formal study of theology and, finally, to the genesis of this book.

Portions of chapter five dealing with cybernetic immortality are drawn from a chapter to be published by the Center for Theology and the Natural Sciences, Berkeley, California. Portions of chapter six were developed with funds from the Science and Spiritual Quest Project, Center for Theology and the Natural Sciences, Berkeley, California, and the American Association for the Advancement of Science.
From chapter one, "In Our Image: The Desire for Artificial Intelligence"


"What are humans that you are mindful of them,
mortals that you care for them?"

--Psalm 8:4-5

What does it mean for something or someone to be created in the image of another? This is a question that has been examined by Christian theologians through the ages, for it appears at the very beginning of our sacred texts. Genesis 1 states that human beings are created in the image of God. But God is not the only one to create in the creator's own image. As humans, we too have shown a perennial desire to create in our image. We have created images of the human body in painting, clay, metal, and stone; images of human activity are described in literature and the arts. We have created machines that mirror human activity through their own actions. While we are aware that these images are both partial and superficial, they have still exerted a tremendous influence on how we view ourselves and our place in the world.1 Despite this influence, most of us would not consider an artistic or literary image of the human being to be an image of humankind in the way we are the image of God. Visual and literary images lack a dynamic modeling of our mental life, and it is in our thoughts, memories, and actions that many of us find the greatest sense of our identity as persons.

The advent of the digital computer in the mid-twentieth century has given us a new medium with which to create images of ourselves. The field of artificial intelligence, in particular, explores the use of that medium to create an image of the human being in a way that extends far beyond the merely physical or the static. The potential of the computer to mimic human thought has opened the door for a new era of self-imaging. It has also sparked a profound debate as to what it is that we wish to image with our artificial intelligence, what it is that makes us truly human.
In Our Image: A Culture's Hopes and Fears

Interest in creating an artificial human, a dynamic alter ego that makes decisions or engages in human activities, has been a part of Western culture from its beginnings. Artificial humans appear in Western literature as early as Homer. In the Iliad, robots appear both in the guise of mobile serving tripods and as the copper giant, Talos, created by Daedalus to secure the shores of Crete from invaders.2 Ovid records the well-known myth of Pygmalion's construction of a lovely and beloved statue of a woman, which the gods later bring to life. Medieval Jewish folklore introduces the Golem, an artificial human constructed of clay, which comes to life through the inscription of a holy word on its forehead. Moving beyond myth and story, the actual design of machines that appear to talk, move independently, play chess, or compute sums interested some of the greatest thinkers of the Renaissance and the Enlightenment. Leonardo da Vinci built an automaton in the shape of a lion; René Descartes designed one in human form. Both Leibniz and Pascal constructed mechanical computing devices using gears, while the Spaniard Leonardo Torres y Quevedo constructed a relay-based machine to play chess end games.3 Yet it was recognized that these machines were either manipulated by humans behind the scenes or were extremely limited mechanical devices. No medium, until the advent of the digital computer, held the promise of mimicking the versatility of human thought and action. A true artificial image of the human being remained solely in the realm of fiction, envisioned in stories such as Mary Shelley's Frankenstein or Karel Capek's R.U.R. (which introduced the term robot). It was not until the second half of the twentieth century that computers became sophisticated enough to give new life to our dreams of creating a fuller image of ourselves.
Popular interest in artificial intelligence has waxed and waned several times over the era of the digital computer, from the 1940s on. Shortly after the construction of the first vacuum-tube machines, in the late 1940s, university research programs in artificial intelligence (AI) were started, most notably at MIT, Carnegie Mellon, and Stanford. Among the general public, the number of artificially intelligent characters that appeared in the fiction and films of the 1950s and '60s fostered a corresponding awareness of AI. Interest in AI as a research field waned in the 1970s; the optimistic predictions of success made in the '60s had not come to fruition, and government and private funding, both in the U.S. and abroad, was curtailed. On the popular level, however, AI remained a staple of science fiction. Research interest picked up again in the '80s, with Japan's announcement of the Fifth Generation project and the advent of commercially useful expert systems. While neither of these endeavors produced the successes that were anticipated, AI remained an established source of characters and plot lines in science fiction. A brief decline in books and articles popularizing AI in the late 1980s was more than recouped in the late '90s, when two media events propelled AI into the public limelight. These events sparked new enthusiasm, evident in such disparate phenomena as the increasing popularity of artificially intelligent characters on television and in the movies,4 the appearance of a new crop of books popularizing AI and predicting its imminent success, and the run on stores before Christmas 1998 to purchase Furbies, a children's toy that was said to exhibit the rudiments of artificial intelligence and the ability to learn.5

The two media events that precipitated this renewed popularity well illustrate the dual nature of our culture's interest in AI. The first was occasioned by the purely fictional; January 12, 1997, was the date assigned by Arthur C. Clarke in his novel 2001 for the "birthday," or coming on line, of the paranoid computer HAL.6 The arrival of this date initiated a spate of books, articles, and radio and television programs honoring HAL, director Stanley Kubrick, and Clarke, and discussing how close we were at the time to creating a computer like HAL, one that could see, speak, play chess, plan, even lie and murder.7 That a fictional character could garner such attention illustrates the large part that fiction has played in shaping our perceptions and dreams of what artificial intelligence is or might be.
Reality quickly followed on the heels of fantasy, with a computer that seemed to have mastered at least one of HAL's skills. On May 11, 1997, an IBM RS/6000 SP computer dubbed "Deep Blue" triumphed over the reigning world chess champion, Garry Kasparov, in a six-game match. The two had first faced each other in February 1996, when Kasparov won a close match. The rematch, in May 1997, ended in victory for Deep Blue. This victory was examined and reexamined in the press, regarded with everything from glee to gloom. Both the number and the prominence of articles regarding this match showed the intense interest generated by the specter of artificial intelligence.8 Headlines ranged from the speculative ("Man Lost to a Machine, but What Does It Mean?") to the excited ("Good News Is That the Machine Won") to the deeply worried ("Man vs. Machine: A Chess Showdown between the World Champion and a Computer Has Some Wondering If More than Bragging Rights Are at Stake" and "Computers Make Pawns of Us All").9 Many articles showed a desire to allay fears that computers truly are intelligent, or worse, that humans might soon be supplanted by machines.10 That this triumph represented more than a chess game was not a fabrication of the press. Kasparov himself stated shortly before the second match that his intent was to "help defend our dignity."11

The interest shown in the Kasparov/Deep Blue chess match was exceptional. The obvious pitting of machine against human being lent itself to speculation on the nature of intelligence and the future prospects, not only for artificial intelligence but for humankind itself. Chess is a fairly difficult task for human beings to master; hence it has been one of a variety of tasks that have been used as a litmus test for intelligence. A machine that could play chess better than a human being raised twin specters. First, our tools, in the guise of the computer, might outstrip our own capabilities, thus supplanting us as the most intelligent of beings. And second, machines that could do many of the things we do as humans might mean that we are expendable, or no more than machines ourselves. These were some of the same fears raised in audiences by the fictional HAL.
These concerns have stayed in the background of public awareness. As noted, Bill Joy, chief scientist at Sun Microsystems, published a controversial article in the spring 2000 issue of Wired magazine, entitled "Why the Future Doesn't Need Us." Joy warns that self-replicating robots and other advances in technology could result, as soon as 2030, in an artificial intelligence that might replace our species. Joy's article sparked much debate, both within the computer science community and in society at large. Even the renowned mathematician and cosmologist Stephen Hawking joined this debate, calling for further research in the manipulation of the human genetic code in order to enable us to remain smarter than our machines.12 In the summer of 2001, the release of the movie AI, a project of both the late Kubrick and Steven Spielberg, in which an artificially intelligent computer does survive beyond humanity, gave public voice to these fears and spawned a renewed interest in artificial intelligence in the press.

The media attention triggered by these events is symptomatic of a larger and continuing cultural fascination with computers and, more specifically, with the idea of computers exhibiting human traits.13 The deep and abiding interest in artificial intelligence in our culture is evident in more than media attention. Intelligent computers, robots, androids, and cyborgs have become staple characters in science fiction stories and films. Books popularizing AI, such as Ray Kurzweil's The Age of Spiritual Machines, are bestsellers.14 University classes in artificial intelligence are increasingly popular and maintain high enrollments, while artificial intelligence continues as a well-funded research field.15

This fascination with AI remains strong despite the fact that actual progress in the field has been disappointing, certainly not on a level that would warrant the attention given to it in the public realm. As many of the articles that defended human abilities following Kasparov's defeat by Deep Blue hastened to point out, the reality of artificial intelligence research has so far been greatly outstripped by our image of the field. Since the beginnings of artificial intelligence as a research field in the mid-1950s, achievements have lagged far behind both the prognostications of scientists and the hopes and fears of the public. If our culture's fascination with AI is not rooted in the reality of results, then what is its source? Why do we continue to embrace the prospect of creating a machine in our image?
A Spiritual Question: "What Is Human Being?"

This continued fascination says more about our own nature as human beings than it does about the nature and possibilities of computer technology. In this book I leave aside the questions of whether it might be possible to create an artificially intelligent computer, or how one might best go about doing so. Rather, I address some questions that our desire to create an intelligent computer raises about our own nature, namely, why we might be so interested in creating an artificial intelligence and what the approach we take to doing so reveals about our nature as human beings. The second part of the question could be restated as: What, precisely, are the qualities or capabilities that we hold as so important to our human nature that we hope to image them in AI, and how might our choice among these qualities affect any intelligent machine we produce, our relationship with that machine, and our understanding of ourselves?

One approach to answering the question of what our desire to create in our image says about ourselves is through the discipline of spirituality. According to theologian Philip Sheldrake, desires are "our most honest experience of ourselves, in all our complexity and depth, as we relate to people and things around us."16 Desires speak to us about who we are and what we hope to be. They inform us of our innermost nature. Sheldrake limits desire to the individual, viewing it as an intensely personal experience.17 Yet when many individuals hold the same desire and engage in activities to pursue it, when the stories we tell one another embody this desire, or when public policy reflects the intention to further certain activities that lead toward the fulfillment of a desire, one might then say that the desire is manifested culturally. The dream of creating an intelligent computer fits the above criteria. It is actively pursued by researchers from coast to coast (and in Western Europe, most notably the United Kingdom). Artificially intelligent robots or androids appear regularly in story, film, and television. Articles abound in the popular press, and much research in AI is governmentally supported. An examination of our search for a mechanical image of ourselves will tell us something about who we, as twenty-first-century Americans, see ourselves to be, what it is in our nature or being that we most value, and what we perceive as necessary for a convincing image of humankind.
At the root of our fascination with creating an artificial intelligence in our own image lies a continuing problematic of defining what it is we wish to image--in other words, what it means to be truly human. In the field of artificial intelligence this is discussed in terms of intelligence: what it is, how it might be measured, and how it is acquired. The elusive concept of intelligence is used to describe what is essential in us rather than merely contingent, what stands at the center of our human nature, distinguishing us from other animals and making us the species we are. The same question, what it means to be truly human, has been approached in the Christian tradition through the concept of the imago Dei, the image of God in which, according to Genesis 1:26, human beings were created. Thus, prior to examining our image in the context of AI, it is not inappropriate to examine how this image has been understood by theologians through the centuries. The image of God in humankind has become one cornerstone of Christian anthropology, a locus for understanding who we are in relation to both God and to the world.

What constitutes this image is not explicated in the Genesis text. Historically, interpretations of the image of God in humankind have varied, yet most can be categorized in one of three ways, as substantive, functional, or relational. The substantive interpretation views the image of God as a property or set of properties intrinsic to each of us as individuals, with reason a frequently mentioned property. A second interpretation views the image, not as a property, but as a title given to humans by virtue of what we do, in particular, our function, as God's representatives, of exercising dominion over the rest of creation. Third, relational interpretations understand the image of God to be manifested in human-divine and human-human relationship. According to this understanding, the image is not something found in any individual but is always corporate in nature, arising in interaction. All three approaches are current in twentieth- and early twenty-first-century theology.
Analogous views of what constitutes intelligence, and thus what lies at the heart of being human, are found in the varied approaches taken to designing an artificial intelligence. Symbolicists, such as Allen Newell and Herbert Simon, view intelligence as a property, held by the individual, and existing independently of any action. Others, such as Steven Tanimoto or Toshinori Munakata, view intelligence simply as a label that we place on certain activities. For them, the goal of AI is simply to make working programs to solve problems or perform tasks in particular areas. This approach is analogous to the functional approach of theology; if humans demonstrate the image of God by acting in the world in the place of God and by God's authority, the image of humankind can be seen whenever a computer acts in the place of a human being. Finally, there is a relational approach to AI that suggests that intelligence is both acquired and demonstrated through relationship.

That we find such an analogy between approaches to AI and understandings of the image of God is initially somewhat surprising. It is not immediately apparent that the image of God in human beings should necessarily be constituted similarly to the image of humanity that we wish to pass on to computers. The image of God in humanity and the image of the human in a machine could be two entirely different things. For, after all, what does God have to do with computers? These two parallel images, however, describe neither God nor computers; both are used by humans, in reference to humans. The imago Dei, or divine image in humans, has traditionally functioned as a symbol to describe the intersection between humanity and God. It has also symbolized what it is that we value most in ourselves, what separates us from the animals, and that which forms the necessary core of our nature. Artificial intelligence, our imago hominis, represents the intersection between humans and computers. Yet any artificial intelligence that does not fit the same criteria of exhibiting both (1) what separates the human from the animal and (2) what is necessary rather than contingent to our nature would likely be judged insufficient, not a true image of ourselves and hence not fully intelligent.
Why a Spiritual Approach to AI?

The study of the human person in a theological context has traditionally been considered one of the main areas of systematic theology. I have chosen to discuss this question in terms of spirituality rather than systematic theology. While this may at first seem unusual, it is not entirely out of the mainstream. Two key scholars in the field of spirituality have defined spirituality in anthropological terms. Sandra Schneiders notes that the definition of spirituality has broadened in recent years "to connote the whole of the life of faith and even the life of the person as a whole, including its bodily, psychological, social, and political dimensions."18 Jean-Claude Breton describes spirituality as "a way of engaging anthropological questions and preoccupations in order to arrive at an ever richer and more authentically human life."19 While their anthropological focus is helpful, these statements are too broad to function as definitions, since they leave relatively little in the realm of human life and activity that would not fall under the purview of spirituality. For the purposes of this book, I define spirituality as the study of the human experience of encounter with an Other or the divine and our lived response to that experience. The first part of this definition would seem to focus on God, yet the experience of encounter with the divine involves an examination of both the object of encounter (for Christians, God) and the subject that encounters. It is the nature of that subject, humanity, that provides the locus for our lived response to our experience. Spirituality encompasses our deepest relationships with God, self, and others. In this context, our dreams of AI open new windows from which to see how we understand ourselves and how we encounter one that is like us yet radically other in nature. While I will focus on how various interpretations of the divine image in humanity might be used to answer the question of why we have a continuing interest in artificial intelligence and to critique the form that interest has taken, this is but one facet of the larger issue of how computers in general, and AI in particular, are changing our understanding of ourselves and our place in the world.
Schneiders provides a threefold methodology for examining questions in spirituality consisting of (1) description, (2) critical analysis, and (3) constructive interpretation.20 This methodology reflects the movement in spirituality itself, from encounter, through analysis of that encounter, to a lived response. In this book I follow a similar trajectory. I have already described our culture's desire for AI. Thus I begin with the twofold question: Why create an artificial intelligence, and what is it in ourselves that we wish to image in such an intelligence? Chapters 2 through 4 provide the background for answering these questions, first by looking at the concept of human creation in God's image, then by considering the history of our attempts to envision or create an artificial intelligence, both in the scientific laboratory and in our imaginations. While these chapters will answer the second question of what it is in ourselves that we are attempting to image, chapter 5 will engage the initial question of why we seek to create an AI. Chapter 6 moves from analysis to response. There I note, first, some implications our search for AI has for Christian spirituality in our century, and second, a few practical implications Christianity has for how we might wish to go about the process of designing an artificial intelligence and how we might be obligated to treat such a one, should it ever be created.

The concept of a creation that is in the image of its creator is a recognition of our desire for the identification of a link, some radical similarity, between ourselves and God, ourselves and any artificially intelligent computer. How we interpret the imago of imago Dei or imago hominis (image of the human) establishes the nature of that link, which in turn defines the nature of our relationship with the other. The way we define God's image in our human nature or our image in the computer has implications, not only for how we view ourselves but also for how we relate to God, to one another, and to our creations.
Footnotes

Preface

  1. Bill Joy, "Why the Future Doesn't Need Us," Wired, April 2000.

  2. Hans Moravec, Robot: Mere Machine to Transcendent Mind (Oxford: Oxford University Press, 1998).

Chapter One

  1. One need only consider the iconoclastic controversies of the eighth and ninth centuries to see the powerful responses and cultural challenges unleashed through our contemplation of images. For a historical survey of Western responses to visual images of both the divine and the human, see David Freedberg, The Power of Images: Studies in the History and Theory of Response (Chicago: University of Chicago Press, 1989).

  2. Jean-Claude Beaune, L'Automate et ses mobiles (Paris: Flammarion, 1980), 38.

  3. Daniel Crevier, AI: The Tumultuous History of the Search for Artificial Intelligence (New York: Basic Books, 1993), 2-3.

  4. Commander Data of the television series Star Trek: The Next Generation and the lovable droids R2-D2 and C-3PO from the series of Star Wars movies are perhaps the most obvious examples of extremely popular science fiction characters representing artificial intelligence.

  5. Publication of books popularizing AI, and often predicting its imminent success, tends to be cyclical. Such books were quite popular in the 1960s when AI was first developed, again in the early to mid-1980s with projects such as the Japanese Fifth Generation and the rise of commercially viable expert systems, and now again after the turn of the century, as a new rash of books such as Ray Kurzweil's once again suggests that intelligent computers are right around the corner.

  6. HAL came to the attention of the populace, not so much through Clarke's novel as through Stanley Kubrick's subsequent movie adaptation. In the movie, Kubrick moved the date back to 1992, presumably to give HAL a longer operational life and thus make its disconnection or "death" more poignant. Oddly enough, and perhaps in deference to Clarke, little notice was paid to HAL on the 1992 date.

  7. Examples include the publication in 1997 of a Festschrift in HAL's honor, David Stork, ed., HAL's Legacy: 2001's Computer as Dream and Reality (Cambridge: MIT Press, 1997), and articles in issues of Wired, Time, Newsweek, Fortune, and The Economist, as well as major news syndicates.

  8. A search of fourteen major newspapers from coast to coast returns 275 articles regarding the chess match between Deep Blue and Kasparov, played in May 1997. The Washington Post ran sixteen articles on the second match, the Christian Science Monitor eighteen, many times more than would have been elicited by a human vs. human chess match.

  9. Miami Herald, 15 May 1997; Contra Costa Times (Concord, Calif.), 20 May 1997; Miami Herald, 1 May 1997; Miami Herald, 15 May 1997.

  10. For example, see the following: "Intelligence Still Eludes Computer: Deep Blue Is Far from Human" (Detroit Free Press, 11 May 1997), "The World Chess Champion's Loss to a Computer Does Not Mean Humans Will Be Slaves to Machines, Experts Said" (Minneapolis Star Tribune, 13 May 1997).

  11. Robert Wright, "Can Machines Think?" Time, 25 March 1996: 50.

  12. Nick Walsh, "Alter Our DNA or Robots Will Take Over, Warns Hawking" (Observer, 2 September 2001).

  13. The New York Times ran fifty-three articles that mentioned artificial intelligence between July 1997 and July 1998, well after interest in both HAL's birthday and the Deep Blue/Kasparov chess match abated. These articles ranged from short reports of new advances in computer hardware or software to philosophical discussions on what it means to be intelligent, creative, or simply human.

  14. Ray Kurzweil's The Age of Spiritual Machines: When Computers Exceed Human Intelligence (New York: Viking, 1999) remained on Amazon.com's bestseller list in July 1999, over six months after its publication.

  15. In the sixteen years that I have taught Artificial Intelligence at St. John's University I have never had a section that was not filled to capacity.

  16. Philip Sheldrake, Befriending Our Desires (Notre Dame, Ind.: Ave Maria, 1994), 12.

  17. Ibid., 11.

  18. Sandra Schneiders, "Spirituality in the Academy," in Modern Christian Spirituality: Methodological and Historical Essays, ed. Bradley C. Hanson, American Academy of Religion Studies in Religion, no. 62 (Atlanta: Scholars Press, 1990), 18.

  19. Jean-Claude Breton, "Retrouver les assises anthropologiques de la vie spirituelle," Studies in Religion/Sciences religieuses 17 (1988): 101. Raymundo Panikkar defines spirituality in a similarly anthropological manner, as "one typical way of handling the human condition." Panikkar, The Trinity and the Religious Experience of Man: Icon--Person--Mystery (Maryknoll, N.Y.: Orbis, 1973), 9.

  20. Sandra Schneiders, "A Hermeneutical Approach to the Study of Christian Spirituality," Christian Spirituality Bulletin 2 (spring 1994): 9-14.