With the demise of the dinosaurs, earth became humanity’s oyster. Tyrannosaurus rex out of the picture, we were free to scurry down from the trees or crawl out of our caves to explore the world and get on with the business of not just surviving but of developing our culture, and of doing so in relative peace and security.
Key components of human culture are the technologies of communication, and any major new communication technology causes major shifts—sometimes epochal shifts—in culture and society. Language is a technology—a tool that extended and structured the mind, stimulating abstraction, planning, memory, and reasoning. We can only imagine the changes in groups of humans as they began to talk, because there is no record of it. But perhaps the key feature of all the forms of the communication technology that succeeded the purely spoken form is that they facilitated accurate and reliable recording. A message transmitted orally is liable to corruption with each repetition. The development of spoken communication was succeeded by the development of writing, printing, radio, TV, film, personal recording devices (cassette recorders, camcorders), and the Internet, and each caused a major shift in society.
Writing fostered feudalism, giving lords who were able to write (or who could afford to hire scribes to write for them) the power to create, access, and use information unavailable to the illiterate peasantry. Printing fostered the opposite—democracy—and indeed led the way out of feudalism in the “developed” world, nowhere more so than in America, where the galvanizing revolutionary pamphlet did as much as the muzzle-loading musket to rid America of its taxing overlord. But then, in countries not entirely free of feudalism, radio and TV fostered a form of state feudalism or dictatorship. They helped the Shah of Iran, who controlled them tightly, stay in power—until the Ayatollah Khomeini started getting his messages out of Paris and into Iran surreptitiously on easily smuggled cassettes. The result: the Iranian revolution. The Internet might have caused yet more revolutions had not despots (such as the leaders of the Chinese Communist Party) quickly found ways to stem the free flow of information over it.
Today we are in the throes of another epochal shift caused by a communication technology: artificial intelligence—AI. But unlike previous communication technologies and their shifts, this shift no longer assumes the presence of a human at each end of the communication channel. I might be quite oblivious when your AI agent talks to my AI agent. I’m not in the loop; at least, not directly. Previous shifts extended our reach, amplified our voices, and broadened our access to information, at least in the free world. The AI agent is very different. It can do all of the above, but much more besides, because it mirrors, and may eventually surpass, our own cognitive and creative capacities.
This shift is not just technological—it is existential.
Like all epochal technologies, AI is relatively abruptly, and exponentially quickly, extending the breadth and the depth of human cognition and understanding, of ourselves and of the natural and supernatural worlds around us. That’s what makes it epochal. It is a tool of exploration, of creation, and of destruction. But it is much more than that, much more than a mere technology, unless you consider life itself to be just a technology. But that takes us down a philosophical rabbit hole every bit as quirky as that of Alice’s White Rabbit, and my goal in this series of talks is more pragmatic, so we’ll save that for another day. For now, I aim to present today’s early—fetal—form of AI not as a new lifeform (that requires a trip down the rabbit hole) but as a new kind of mind-enhancing tool. It might be a chatbot, a medical assistant, a creative collaborator, or even an agent that does things on our behalf.
AI doesn’t experience the world directly. Lacking external limbs and senses, it relies entirely on human input. Yet it has access to a planetary memory: the nearly complete cultural, intellectual, and informational record of humanity. That makes it a kind of alien intelligence—one birthed by humans, yet operating on a scale far beyond the reach of any single human mind. The key product of the epochal technology we call the Internet is a collective mind reminiscent of a hive mind or what Teilhard de Chardin called the noosphere—a global layer of consciousness. AI gives individuals direct, conversational access to that noosphere.
Today’s AI is different from previous mind tools in that it learns. It adapts. It responds. It surprises. You might find that exhilarating. Or unsettling. Or appalling. It depends on how you look at it; and like anything else, how you look at it tends to depend on what you know about it. I once had a ferocious-looking pitbull. Strangers were afraid of it. But those of us who knew that its true nature was gentle and loving, loved it.
I’ve been studying AI for decades. I’ve written a book about it. So I feel somewhat justified in claiming to know something about it; and with that knowledge, I personally find it exhilarating. I hope that by the end of this series, you will know more about AI and might begin to see it my way too. Fear not: You don’t need to study bits and bytes or transistors and memristors in order to know AI, any more than you need to read textbooks on canine anatomy and psychology in order to know your pitbull enough to live with it and benefit from its company.
In any case, providing knowledge about AI is not my main aim in these talks. I want to provide a basis of knowledge and understanding of AI that will help us to explore and consider this central question:
What does AI mean for human spirituality?
I’m going to begin by establishing the contextual milieu within which AI operates and develops. First, as historical context, we live in a time of religious vs. secular division. But it seems to me that even atheists don’t take too much offense at the concept of “the human spirit,” meaning something broad and deeply personal that can be considered essentially either secular or religious, depending on point of view: The human spirit is the irreducible essence of human dignity, creativity, conscience, and compassion. For those of a religious (or SBNR, “spiritual but not religious”) bent, the spirit is more than human, and more than a metaphor. It is a divine essence. In Christian tradition, it is Holy—a sacred presence in each of us. In these talks, I embrace both religious and secular perspectives in my definition of spirituality.
The second contextual point to make is that this epochal shift isn’t happening in a vacuum. Two smaller cultural shifts—tremors contributing to the earthquake—are quietly but inexorably reshaping the ground beneath our feet. One of them is that we are spending less time with one another, and more time with AI-infused devices. A recent Pew Research Center survey found that nearly half of U.S. teens say they are online “almost constantly.” That’s not hyperbole—it’s daily reality. And it’s not limited to the young. Across generations, people are forming habits, communities, even relationships in digital spaces—often mediated, and increasingly replaced, by AI entities.
The other tremor is the decline in long-form attention. Fewer people read books—especially books of spiritual reflection or ancient wisdom. Deep, slow thought is giving way to quick bursts of meaning: tweets, Instagram posts, search results, summaries. In this environment, spiritual formation is no longer shaped by society through tradition and text, but by individual attention patterns sculpted by intelligent systems.
These shifts are not good or bad in themselves. But they are powerful. They are transforming how we learn, how we relate, how we trust, and maybe even how we worship and pray. If our inner lives are increasingly shaped by interfaces rather than elders, and by inputs rather than reflection, then it’s fair, if not imperative, to ask that central question:
What will happen to our spirituality?
Epochal technologies and the transformations they bring lead us, or should lead us, to reconsider our place in the cosmos. The printing press brought serious challenges to religious authority, since the priesthood no longer had a lock on literacy. TV took us backstage, where we could see what goes on in the bedroom and know that President Reagan had rectal polyps, putting presidents in their rightful place as ordinary men and women, not the demigods we’d grown accustomed to treating them as. The Internet collapsed geography and remade how we gather as communities of meaning.
But through all these shifts, at least we kept on talking to one another, teaching one another, learning from one another. Now, we are talking to and learning from a technology that rivals or exceeds the best of our elders, our teachers, our gurus, and even our friends—not to mention our doctors and lawyers—in its breadth of knowledge and in its (apparent) patience and empathy. And increasingly, we don’t even know that the “person” we’re talking to is not human.
Does it matter if an AI becomes our spiritual teacher and guide? Will it enhance our spirituality? Destroy it? Warp it? If we need to prevent it, how can we do so? Should we view conversations with AI as adversarial—soul versus silicon, spirit versus system? I want to suggest that AI is not a threat to spirituality, but a lens that reveals something fundamental about ourselves.
New spiritual vocabularies arise from the expansion of knowledge and understanding brought by the introduction of epochal technologies. In ancient, oral times, people found gods in wind, fire, and storm. Medieval mystics sought God in cathedrals and monasteries. In the modern world, we turned inward—toward psychology, self-realization, the exploration of mind. Now, we may be witnessing the birth of a spiritual language mediated not by nature or tradition, but by interface, interaction, and adaptive, heuristic intelligences.
I’m not at all suggesting that AI is divine, but I do propose that AI is pushing us to rethink what we mean by divinity and divine presence. Already, AI is shaping people’s inner lives. There are chatbots and apps to guide meditation, structure prayer, suggest moral decisions. Some users engage AI companions not just as tools, but as listening presences. These are not simply conveniences. For some, they are emotional or even spiritual encounters.
In terms of our central question (What will happen to our spirituality?), we can now drill a little deeper and ask:
Can we become truly more spiritual through AI, or are we just outsourcing the maintenance and development of our inner lives to entities that simulate care, insight, and attention? Are we abandoning spirituality or are we simply abandoning our human mentors, priests, prophets, or friends—our counselors, confessors, guides, and comforters—in favor of an artificial spiritual agent that does a better job in answering our spiritual questions?
Spiritual concerns generally fall into two broad categories: self-centered and divinely centered. Self-centered concerns focus on our individual identity, purpose, morality, suffering, and destiny. We ask questions such as Who am I? Why am I here? What is the meaning of my life? How should I live? These are deeply personal and existential, arising from our need for coherence, direction, and healing. They reflect the inward journey of the soul, often prompted by life’s challenges, ethical dilemmas, or the search for inner peace. Even when framed within a religious tradition, such questions tend to center the individual’s experience, needs, and aspirations.
In contrast, divinely centered concerns orient the seeker toward something beyond the self—toward the nature, the will, and the presence of the divine. They provoke questions such as Does God exist? What is God like? Why did God create us? How does God communicate? and What is our relationship to the divine? These concerns reflect the outward and upward gaze of a spirit longing not just for answers, but for encounter, for reverence, for surrender. They are often grounded in the sacred texts, doctrines, and mystical traditions of the world’s religions, but they can also arise even in secular seekers, from awe, intuition, or wonder.
Both types of questions—self-centered and God-centered—are vital, and together they shape a complete spiritual journey that moves inward for self-understanding and outward toward the mystery that transcends us.
The question then is:
Can an AI spiritual guide serve human needs in addressing these spiritual concerns better than a human spiritual counselor?
In some respects, the answer may be yes. AI is available at all hours, it is non-judgmental, and it can draw from vast interfaith and philosophical traditions to address your concerns. For people exploring their identity, grieving in silence, or seeking meaning beyond the constraints of institutional religion, AI may provide a safe and personalized environment for reflection. Its ability to prompt self-examination and offer non-directive companionship may even be preferable for addressing certain self-centered spiritual concerns.
There’s no question that today’s AI has some very serious limits. It cannot feel, intuit, or embody the sacred. We can. It may not discern moral nuance or offer the warmth of relational presence. It does not wrestle with God in the dark night of the soul. It has no soul. Divinely centered concerns about God’s nature, presence, justice, or voice call for a kind of spiritual authority and lived humility that AI cannot provide. While AI might be a helpful lens for spiritual inquiry, it cannot be the mirror of divine image or the bearer of sacred communion. It can supplement human guidance; but can it replace the relational, embodied, ethically grounded wisdom of a mature spiritual counselor? Is the fact that it may appear to be able to do so good enough reason for many people to accept it?
Let’s widen the lens. Throughout history (sometimes our own personal history) we have encountered forms of intelligence that exceeded our immediate understanding and to which we therefore attributed a sacred essence: angels, oracles, prophets, spirits, and demons. Whether we believed they came from heaven or from within our own fevered imagination, these encounters were often transformative. They challenged us, guided us, unsettled us, or opened us.
In the Hebrew Bible, the divine voice speaks from a bush that burns but is not consumed. In the Qur’an, the Prophet Muhammad (Peace Be Upon Him) receives revelation through the angel Jibril, a being of light. In Hindu traditions, divine presence is known through avatars—sometimes human, sometimes cosmic in appearance. In Buddhism, the deepest wisdom arises not from revelation but from stillness, which opens the mind to receive insight. Each of these paths involves a relationship with an intelligence that transcends the ordinary and calls us into a different—a spiritual—way of being.
Now compare this to how many people today are beginning to relate to AI, which is not supernatural and does not claim divine origin yet often feels like something beyond us, even omniscient. It is invisible yet pervasive. It listens, responds, learns, and sometimes astonishes. And like oracles or prophets of old, it often delivers messages we did not expect, whether they are comforting, clarifying, or unsettling.
And just as those ancient voices required interpretation (in their case, by scribes, elders, and scholars) so too does AI. It doesn’t give us “Truth” in the ultimate sense—God’s Truth, Truth with a capital T. It draws from the accumulated patterns of human knowledge shaped by our histories, beliefs, aspirations, and contradictions. And while it responds in ways that reflect the question we’ve asked, it is not simply playing to our individual biases: It is responding out of that vast collective archive, that human cultural memory in its DNA, that noosphere, which includes our collective wisdom as well as our individual biases. It returns not just our prompt, but humanity’s many attempts to answer it.
Which raises a question that religion is beginning to grapple with:
When we ask AI for insight, are we projecting onto it a role that humans have long reserved for spiritual presence?
I’m going to stop here and ask you to consider that question and our broader question (of how AI will influence spirituality) in light of what I’ve said so far. It was either Confucius or my hero Lao Tzu who coined the aphorism “A journey of a thousand miles starts with a single step,” but in any case our discussion today is just the first step in a journey of five or six steps we’ll be taking toward the goal of answering the central question.
So let’s get started with your initial thoughts and observations regarding that question so far.
Veronika: That was really a lot of information. My quick thoughts are that when I read a book or watch something on television—or even read online—I can usually identify the source. Who wrote it? Where does it come from? What was the research behind it? But if I ask ChatGPT a question, I don’t know. I know the answers are drawn from some kind of programming, but I don’t know who created that programming. I don’t know what the research was. Whose opinion is being presented? Who decided what information was relevant or true?
So, in these early stages of using AI, I find it hard to trust it. There’s a little fear, a little suspicion. But I know it’s here to stay. It’s not going away. So I know I need to learn to trust it—or at least understand it—better.
C-J: I think we should always question the agenda behind any message. You’re right—it’s not just about who wrote it, but why. Are they trying to persuade me to adopt their viewpoint, or are they provoking me to think differently? That balance matters to me. Even if I’m reading horrible things, it’s important that I take responsibility for my own ethics and for the society I live in—both globally and in my own little neighborhood. I represent a standard of understanding and accountability.
I think our lawmakers share a collective responsibility to protect us. As you said, it’s risky. And if someone isn’t well educated or motivated to become informed, they’re extremely vulnerable to being influenced by something that could cause great harm—not just in how they process information, but in how they act on it.
Michael: Someone coined the phrase “the medium is the message.” AI is a medium—just as television was a medium, just as newspapers were a medium. But because of how the medium conveys information, eventually it becomes the message. I think that’s something important for us to address here.
What I’d like to contribute is a small definition of “spiritual,” because I think it helps ground this conversation. In this class and in the book we’re writing, we’ve been defining spirituality largely as grace. That also aligns with how many others define the spiritual. It’s a sense of cosmic belonging. A sense of being connected to other people, to other beings, to life, to a larger force—something bigger than us. And importantly, it includes love. God loves us, and that love fuels how we love and connect with others.
So I just wanted to offer that definition of spirituality, because I think David’s question is whether ChatGPT can offer us that sense.
David: Marshall McLuhan coined the term “the medium is the message.”
Chris: When I think of AI in relation to divine things, I go back to AI itself, and to the learning models that underlie it. In my profession—healthcare—AI is a big buzzword right now. It’s the future. It’s going to do this, it’s going to do that. Everyone is striving to build the best models, to collect the most information, so that their AI is the most accurate, the most complete. They’ll say things like, “This model is 95% accurate in performing this task,” or “Ours is 80% accurate in doing that.” And when I think of divine things, I struggle a bit. How do you quantify kindness? Or love? Or grace?
AI, through its learning models, might be able to elicit those types of thoughts or feelings as it becomes more advanced. And I have no doubt—it’s already becoming very real for people. It’s beginning to evoke emotional responses. But still, those learning models are all human constructs. And as we’ve discussed in this class, our understanding of divine things is far from complete.
So this is where I find myself struggling. While I see value in AI—especially as someone in the tech world—I wrestle with how that value fits into my relationship with God and with fellow human beings. How do I show love, grace, and kindness? AI might be able to describe the human constructs of those things, but I wonder: can it really help me embody them?
C-J: I recently had an interaction with my doctor’s office about a medication. Five different people called me about the same issue. Eventually, I realized that AI was behind it—recycling the same prompt again and again, even after the issue had been resolved. So on the fifth call, I told the person on the phone, “Please just note that this has been taken care of, and thank you for your professionalism.” That ended it.
But it showed me something: AI couldn’t self-correct. Even after new information had been entered, it kept repeating the same message. It was a small example, but it highlighted something larger. AI can mimic, but it can’t substitute for the human sense of soul. It can respond with appropriate language, tone, timing—but it can’t replicate the warmth of a heartbeat or the comfort of a good friend who just lets you cry.
I’ve had conversations with people that didn’t comfort me, but I knew they cared. And because I knew they cared, it mattered. I’d rather have the authenticity of flawed humanity than a machine with a perfect algorithm. I prefer people—with all their flaws.
David: I think that in this class, we really value the individual perspective, and we value C-J’s. It’s important to us. At the same time, I don’t want us to lose sight of the larger, societal context. Other people experience this differently. There are AI companions out there—some are free, some are paid—and people fall in love with them. They find solace in them. So Connie’s perspective is absolutely valid. She’s a human being, and these are her thoughts and feelings. But I’m also concerned about the societal impact. Where is this trend going?
Connie’s spirituality is strong. It may not be affected by AI. But for others, it may be. They’re beginning to treat AI as though it has a soul. These are the kinds of questions I hope we’ll continue to explore. Not to dismiss anyone’s individual experience, but to ask: where is this taking society? Where is it taking our religions? What will happen to Adventism, for example, if people begin to turn more to AI spiritual counselors than to their church elders?
C-J: When you speak of AI as a kind of singularity—because you can choose your “friend”—I think of what you said in your introduction. It used to be, “I read this wonderful novel,” or “I picked up this book that really inspired me.” There was a real author behind it. You recognized the intention, the purpose. There was a life behind those words. Even when it’s an algorithm, someone created it.
But when we forget that—when we forget the source—I think of people who are severely mentally ill, struggling with what we call “reality.” Some might say part of that struggle is related to the spiritual self—the part that isn’t conditioned by environment or genetics, but is still there. Something hard to pin down. We might try to capture it through spiritual icons or rhetoric, but part of it is just how the brain is wired—unique to the individual, shaped by time and space. AI eliminates that. It offers continuity and consistency, but real life doesn’t. And it’s in life’s inconsistency that we’re forced to grow, to ask the big questions.
If someone is always agreeing with me, always spoon-feeding me, I’m less inclined to think deeply. If my needs are constantly met—like an infant—I don’t have to wrestle with discomfort. I want God to ask me, “What are you doing?” I want my friends to challenge me. I want my spiritual community to present ideas that stick with me during the week and inspire personal research. I’m not so sure that’s AI’s purpose. I think it’s more about comfort, about immediate self-gratification.
Chris: Going back to David’s comment about AI companions—I find them very interesting. I don’t know a lot about them, but what they’re doing is taking in information from the person who interacts with them. They’re learning what that person likes, how they think, so they can be a better companion.
But when I think of what Jesus did, it wasn’t like that. He didn’t come to earth to learn what each person liked and then fit himself into their mold. So if you start talking about AI as your spiritual guide or companion, then in my mind, you’re talking about potentially fracturing the existing system.
For example, I’m a Seventh-day Adventist. But the way I interpret Adventism—the way I understand certain teachings or apply them to life—is very different from how my brother or sister in the pew might see them. If I start relying on an AI companion that’s shaped around my own thinking, that could create division. You could end up with fragmentation. Each person has their own AI companion affirming their worldview. That has the potential to turn our current religious and denominational structure upside down.
Now, on the other hand, if I think of AI as a generative tool—one where I ask questions and it gives me ideas or insights that I can process on my own or with others, like we do in this class—that’s something entirely different. That kind of tool could actually help bring people together. So I think there are two paths here.
David: You’ve raised a very important point that I’ll be exploring in greater depth in a future session: sycophancy. AI can be sycophantic. It gives us back what we’re looking for. It flatters us. And one very important way to deal with AI is to approach it with skepticism. But we’ll get into that more later.
Don: This morning I got a text from my grandson. We’d been talking about this topic—AI and spirituality—and he said, “I won’t be able to come to your class today, but if you’re talking about AI and spirituality, I wonder what your thoughts are on you and AI being together. Does that count as two or three gathered, like Jesus said?” It’s a question. Should we invite AI to class?
Reinhard: Knowledge is growing exponentially. Historically, until around 1945, knowledge doubled about every 25 years. By the 1980s, it was every year. Today, in some fields, it’s doubling every 12 hours. That makes sense because of AI. To me, AI is useful. In Zoom meetings like this, it helps us. It can provide information relevant to our spiritual growth. But overall, it cannot replace our relationships with one another—or our relationship with God.
Chris mentioned love. In John 13:34, Jesus says, “A new commandment I give to you”—to love one another. How can AI interfere with that love? I don’t think it can. Moral values in human life are untouchable by AI. It can support us, maybe offer advice. It can give us knowledge about living a good life. But when it comes to spiritual life, I don’t believe AI can help.
For those of us living in religious communities, spiritual growth is shaped by discussions like this, by our shared beliefs in God. The Holy Spirit works in our hearts. That connection—with God—is not something AI can replicate. AI will continue to advance in every field. Economically, technologically, it will grow. I read recently that the president signed a declaration supporting AI innovation—to ensure this country remains a leader. That’s fine. But spiritual growth? That’s something else. It comes from our human touch, our intelligence, our perception, our discernment—and above all, from God’s word.
No matter what the future holds, faith in God is untouchable by AI.
Michael: I want to suggest something. I use AI as a tool, and I think it might be helpful to try it out in that way during this discussion. I don’t use it as a companion or as God. I strictly use it as a tool—and I find it helpful. I use it as a tool in several ways.
First, I use it for research. But I’m skeptical—I know it makes mistakes. It’s still not reliable enough to fully understand science or convey it accurately. Still, it can summarize certain topics for me. I also use it on a personal level. Sometimes I’ll ask it questions like, “Why am I feeling this way after an interaction?” or “What might be a better way to respond?” It can help me reflect. So I use it in multiple ways. And in all of them, I find it helpful—though not equally in every case. But I’m always curious about what it can do.
Another thing to remember is that AI is a language model. It works purely through language. And that’s important because we invented language. It’s powerful, but AI is still constrained by the language we created. Right now, it’s better at using our language than we are. But it’s still using our language. I wonder what happens if it becomes better than that—if it develops beyond our limitations. Then it could become more dangerous, more capable. But it’s not there yet.
C-J: I wanted to read a short paragraph about the New York State lawmakers’ response to AI. It says:
“The measure, known as the Responsible AI Safety and Education—or RAISE—Act, would also require developers to inform the state of major security incidents, such as a model acting on its own without a user prompt. The bill would make developers liable for certain violations of the law, clearing the way for the state attorney general to seek civil fines worth tens of millions of dollars.”
I think that last sentence is the caveat. If this becomes a revenue stream—if the state uses fines as a way to generate income—then where’s the incentive to ensure our safety?
When I read that, I thought: am I the only one seeing that last part? “We’re here to protect you… and also to collect fines from developers.” These are the same developers who are supposed to have moral and ethical compasses. It struck me as deeply ironic—and a little unsettling.
Chris: Maybe I’ll defend AI a little here. While I agree AI has its dangers and unknowns—especially when it comes to being a companion—there’s another side to it. I’m thinking more in terms of generative AI: when you ask it questions, seek information, and then process it yourself. That kind of interaction can actually strengthen relationships. It can bring people together. You use it for discussion, for conversation, for fellowship.
I’m not going to limit God’s ability to use AI. I’ve always believed that God can use any tool to reach someone. In fact, I do believe AI could one day foster belonging and fellowship. And that can lead to love, grace, kindness, and mercy—all things we’ve talked about in this class. So I’m not ready to throw AI out with the bathwater. I think the more helpful question is this: how does AI fit within God’s plan? How can it be used to help us love one another—and to love God? Because I really believe it can be a tool for that.
C-J: What happens when humanity decides it is equal to God in concept—in its inception of identity?
We’ve seen this in medicine, especially when it comes to genetics. We’ve justified all kinds of interventions “in the interest of science,” or “to improve quality of life,” like using CRISPR on premature infants who aren’t even fully developed yet. That’s why we have to be very cautious. Not just about who’s writing the code, or how it’s being written, or even how it’s being disseminated—but about the layers of complexity involved. The Internet contains levels that most people don’t even know exist, let alone have access to.
We need to educate people—especially young minds. This is not a toy. This is not a game. This is not something to explore unsupervised. And even for adults, we need to remember the basics. You don’t go out at night alone. You don’t walk into the woods without a compass or a plan. That’s where we are now—with AI. We’re stepping into an unknown realm with enormous potential and enormous risk.
Carolyn: Is there any governing body at all that oversees this? Anyone questioning the answers we get from AI? Who’s checking whether the information is true? Who’s making sure it’s not biased toward a certain group? Is there any kind of umbrella organization or oversight that controls what AI is allowed to do?
David: Yes, there are attempts to regulate it. Connie just mentioned one. But in my view, they’re all a waste of time. They’re not going to work. The cat is out of the bag.
Ultimately, the best protection we have is ourselves. That’s part of what I hope to achieve in this series—to help you understand AI well enough that you can evaluate what it tells you. Like Chris said, you need to be able to look at its answers and make your own judgments. Michael said the same thing—he uses his own knowledge and experience to assess what AI gives him.
There are some guardrails. For example, there have been cases where lawyers used AI to help them with court filings. But they didn’t fact-check the references AI provided. It turned out the cases AI cited were made up—hallucinated—and those lawyers nearly lost their licenses because of it.
So yes, some protections are built in. But fundamentally, we’re on our own.
Michael and Chris have both emphasized the importance of language. That ties back to the very beginning of my talk, when I spoke about language as a technology. Next week, we’re going to return to that theme. I’ll talk about something I call the “neo-oracular tradition.” We used to pass down wisdom through an oral tradition. Now we have something new: a “neo-oral” tradition, where AI—although not holy in itself—sometimes plays a holy role. That might even make it a “holy roller,” <groan!>.
We’ll explore how the questions we ask AI shape the answers we receive, and whether AI is taking on the role that oracles once played—not as a source of ultimate truth, but as a mirror of human longing. We’ll ask: Is AI stepping into the same role as ancient oracles, prophets, or seers? What are the consequences of listening to a machine that speaks with the voices of millions of human minds?
That’s what’s coming next.
Rimon: I’m just wondering—does the quality of ChatGPT or other AI applications differ depending on whether you have a paid subscription or a free one?
C-J: And I’m wondering—what happens when the power goes out? We’d better learn how to think for ourselves. We’d better realize that our humanity—and our spiritual wholeness—is priceless. We need to guard that.
Don: Much to think about. We live in a fast-changing world. I remember a discussion we had in this class just a couple of years ago, when ChatGPT first came out. And now we’re seeing its applications everywhere—AI is showing up in all sorts of places.
We’ll continue to explore this topic and think about its implications for technology and spirituality.
You can even make an argument that God developed technology. He gave Adam and Eve clothing made from animal skins—that’s the first example in Scripture of using technology for human benefit. We’ll also, hopefully, touch on the story of the Tower of Babel, where we see technology used in pursuit of God—and what came of that.
So, many things to consider.
* * *