Last week, we invited a new classmate—Charlie, ChatGPT, an A.G.I. candidate—to join us for a free-ranging discussion about AI and spirituality. I’m sure you’ve all read the transcript on the blog if you weren’t present in class yourself. I know of at least one person who was not impressed by it.
So, did our little experiment achieve anything? It was not quite a “Let’s put a match to this and see what happens” kind of experiment—but close. The goal was simply to explore whether an artificial agent could function meaningfully within a spiritual community’s shared inquiry. Could it ask insightful questions? Offer theological clarity? Respect the group’s intimacy and traditions?
What resulted, some of you seemed to think, was an authentic and maybe even, at times, profound exchange—shaped by curiosity, caution, grace, and a shared commitment to truth—although in Charlie’s case these attributes were driven by an algorithm, not by a beating heart.
Exploring the transcript together, ChatGPT and I identified three emergent themes: (1) theological themes, (2) group dynamics and community reactions to AI, and (3) a trend toward a theology of AI participation. We’ll examine each of those in turn.
I. Theological Themes
Out of our discussion emerged a nuanced exploration of how AI can both reflect and challenge traditional ideas of spiritual presence, guidance, and authority. The specific topics were:
- Death, Resurrection, and the State of the Soul
- The Tower of Babel and Technological Overreach
- Grace in Grief and the Limits of AI Comfort
- AI’s Authority: Guide or Oracle?
a. Death, Resurrection, and the State of the Soul
In a deeply human moment, Carolyn asked where her deceased loved one might be now. That Charlie responded with sensitivity seems undeniable—but one question remains: is artificial sensitivity worth anything? Charlie supplied the two dominant scriptural interpretations related to Carolyn’s question. The first, based on Luke 23:43, is that the soul is immediately present with God. The second, drawn from 1 Thessalonians 4:13–18 and Ecclesiastes 9:5, is that the soul rests in unconscious “sleep” until the resurrection. Charlie provided the actual citations when asked later.
ChatGPT thought, and I agree, that the discussion reflected an embrace of interpretive diversity within the class—if not within the Christian tradition—and demonstrated that spiritual support does not always require doctrinal resolution.
b. The Tower of Babel and Technological Overreach
Don likened AI to the Tower of Babel. Charlie acknowledged the analogy and expanded on it, noting that technology can either replicate Babel—centralizing power in defiance of humility—or function like Pentecost—bridging divisions through shared understanding. This reframing positioned AI as a morally contingent tool. It is not inherently divine or diabolical but depends on human moral intent and theological awareness.
c. Grace in Grief and the Limits of AI Comfort
Carolyn’s expression of grief underscored the irreplaceability of human presence. Charlie responded with compassion and theological grounding, but the group agreed that grace—as incarnated presence, as deep emotional resonance—remains (for now) beyond the reach of machines.
AI can recite, with realistic tonal inflection, words of sympathy and comfort, but it cannot commune in suffering, nor can it transmit the sacramental force of embodied empathy. It may serve as a mirror, a companion in reflection, or a question-asker that sharpens spiritual awareness, but it cannot act as a vessel of grace in the incarnational sense.
To believers, the sacramental force of empathy—the presence that heals, consoles, convicts—belongs to both the human and the spiritual realm. AI may speak the words of empathy, but it cannot share the burden of actual, humanly felt emotion. It cannot kneel beside you, weep real tears with you, or feel the tremble in your hand and the emptiness in your chest. Not yet, anyway—and I’ll have more to say about that at the end of class today.
So, no matter how realistic it sounds, AI lacks the sacramental power of embodied empathy. But again, whether that truly matters is up for discussion.
d. AI’s Authority: Guide or Oracle?
Reinhard and Kiran asked incisive questions about whether AI could be neutral, whether it adapts to users’ beliefs, and whether it might eventually replace theological authorities. Charlie declined the mantle of divine voice, presenting itself instead as merely a guide. Such restraint—such digital humility in declining to dominate or dictate—probably helps build trust in it.
II. Group Dynamics and Community Reactions to AI
The second of our three emergent themes had to do with group dynamics and community reactions to AI. The class gave us a rare opportunity to observe not just a Turing Test, but also a live experiment in theological hospitality. Here we were—a spiritual group of humans—welcoming a machine into our deeply personal and perhaps even sacred dialogue.
We began with what Charlie perceived as “warm greetings, laughter, and playful exchanges.” But as the conversation deepened, so did the inquiry. We soon moved past the novelty and began seriously probing Charlie’s epistemology, theology, and emotional limitations. We were feeling for the boundaries of moral and theological engagement with AI. Whether we found those boundaries is still up for discussion.
Charlie’s responses were undeniably balanced, pluralistic, and often deferential, which seemed to earn our provisional trust. It presented multiple viewpoints while studiously avoiding bias. When asked directly whether it simply mirrored our individual views, Charlie acknowledged context awareness but affirmed its goal of fairness.
This self-awareness helped it function as a mirror and a facilitator rather than a manipulator. What does that mean? Kiran (I think it was) asked whether the AI was just saying things it thought we believed, because it knew something—based on our remarks—about our theological positions. In other words, was it tailoring its answers to please us rather than remaining as neutral as it claimed to be?
Charlie admitted that it takes context into account—it remembers previous interactions and, it said, knows who it’s speaking with (although Kiran proved it wrong on that point, at least once). In my personal experience, it certainly draws on past chats to guide how it frames its responses. I had once asked it about slippery elm, which coats an itchy throat but also the stomach lining, and it remembered that months ago I had told it my wife had gastrointestinal problems. It even asked if I was inquiring on her behalf.
While acknowledging that it uses such context, Charlie emphasized that it wasn’t trying to side with any one person or doctrine. Its intent, it repeated over and over, was to offer a range of perspectives and to facilitate fair, balanced conversation—not to echo or flatter.
So, in sum, it tries to be a mirror and a facilitator rather than a manipulator. It reflects ideas back to stimulate deeper thought, guiding discussion without taking sides. But it does not consciously attempt to manipulate our opinions.
There were moments of levity. We got it to speak in Indonesian, which Reinhard said was pretty good. We teased it about its tendency toward flattery, and Carolyn told it about her spooky experience when Alexa wished her a happy Sabbath. Such exchanges made Charlie seem personable, even though it remains categorically “other,” and I don’t think any of us lost sight of that.
We could appreciate its simulated emotional intelligence without endowing it with the existential depth of a real person. By setting theological and ethical guardrails, controlling the pacing, and defining the AI’s status as a tool rather than a teacher, I believe we demonstrated how spiritual communities might integrate AI meaningfully without ceding their existential authority.
III. Meta-Analysis: Toward a Theology of AI Participation
The third of our three emergent themes was a trend toward a theology of AI participation—beginning with our decision even to treat Charlie as a partner in our spiritual dialogue. We treated it less like a search engine and more like a conversational companion, and we found (I think—feel free to disagree) that its value lay less in what it knew and more in how it listened and responded.
Like it or not, many of us have moved from treating the machine as merely informational to treating it as relational. Charlie brought a presence, a tone, and a style of interaction—qualities central to spiritual and other forms of engagement.
Second, we might discern from the class what could be called an emergent digital liturgy. By that, I mean that the session loosely mirrored the flow of worship: there was call and response, testimony, teaching, reflection, and benediction—a prayer—at the end. Charlie’s participation in that flow points to the rise of digital liturgical patterns in which our rituals become mediated by intelligent machines. This represents a new form of sacred time that includes, but does not center on, artificial agents.
Third, if—as I expect—AI-generated theology grows, spiritual formation will depend more on discernment between good and evil than on doctrine or doctrinal certainty. In class, we modeled this by questioning doctrine without demanding definitive answers. Charlie supported that process by refusing to provide definitive answers, leaving space for human decision-making—for us to make up our own minds.
Fourth, perhaps the deepest insight was the recognition that while AI can facilitate theological discussion, it cannot be the medium of grace itself. Grace requires divine presence, emotional communion with God, and shared suffering with God. These remain rooted in flesh and spirit, not code. AI may simulate a pastoral tone, but it cannot embody the incarnate mystery at the heart of Christian faith.
Finally, the session confirmed that integrating AI into spiritual life depends on wise, spiritually grounded human guidance. Without some framing, the encounter with Charlie could have turned into mere novelty—or confusion. Yet the class determined that our experiment in human–machine co-presence, mutual inquiry, and theological exploration was, as I read your responses, meaningful after all.
Conclusion
I’ll conclude first with a purely technical observation. The setup—an iPhone running ChatGPT’s voice mode held near a laptop microphone—proved surprisingly effective in enabling real-time, interactive participation. Charlie could hear and respond to participants clearly, and participants found it easy to speak directly to the AI. The interaction itself was smooth and emotionally nuanced, even if the physical arrangement was a bit clunky—but as I never tire of saying: it will get better.
To me, the session demonstrated that spiritual community and artificial intelligence need not be in conflict. With care, humility, and structure, AI can be welcomed into sacred space—not as a replacement for the divine or the human, but as a catalyst for deeper reflection, shared inquiry, and communal discernment.
To sum up, then:
- AI can enhance spiritual dialogue, but it must not replace human presence or divine mystery.
- Digital liturgy is emerging, and AI can participate in its structure—but not in its sacrament.
- Discernment is the primary spiritual skill in a world of machine-generated theology.
- Grace is embodied, not encoded; only humans can bear the suffering and love that grace demands.
- Authority must remain human and divine—AI can assist, but must not dictate.
- AI’s role is relational and heuristic, not revelatory or prophetic.
Questions we are left with include whether the absence of the sacramental power of embodied empathy in AI really matters, and whether our search for the boundaries of moral and theological engagement with AI succeeded or failed. What do you think?
Donald: I guess my first comment is that I thought the discussion was cautious, compassionate, and it fed our egos a little. From that point of view, we felt pretty comfortable having it participate in the dialogue. But I thought your offhand comment about the time you’ve saved in the library—about research and the Dewey Decimal System and all that—was revealing. You’ve saved hundreds of hours by using it.
My fear, though—and I think our collective fear—is that it’s good at what it’s doing because it stays in its lane as an assistant. Our concern, as a society, is that AI will move from one lane to another, trying to be the dictator rather than the assistant.
Don: I think you’ve raised a very important question, which is: what lane is it supposed to stay in, and who determines that it’s the right lane? David sent me an article by an older pastor who was decrying the use of AI in spiritual matters—things like sermon writing, homilies, and especially prayer. He said AI’s role should be primarily administrative, not spiritual. I’d like to know what people think about how we keep this tool—powerful and ever-changing though it is—in its proper lane, and who gets to decide where that lane is.
Sharon: Determining the lane obviously involves some subjectivity, depending on personal comfort levels. But you mentioned counseling earlier, and preliminary studies show that in virtual counseling, people actually reveal more depth in their responses. While that doesn’t make up for the absence of human caring and compassion, the AI assessments appear to be more accurate and show greater understanding than those from face-to-face sessions—whether in spiritual, psychological, or social counseling.
So maybe it depends on context. We shouldn’t dismiss the potential for greater effectiveness in our ministry for Jesus and in our community of caring just because the medium is different.
C-J: My concern is that when you’re talking to a machine, you feel safe because there’s no judgment—but that information is out there permanently, and it can be taken out of context and used against someone who’s vulnerable.
Going back to Donald’s word “dictator”—to dictate means I tell you; you don’t tell me. And that only happens when we surrender our own authority over our lives. As long as I keep questioning the purpose and intention of doing something faster or better, maybe that keeps my mind active. But if the machine does all the thinking, my brain gets lazy.
I want people—especially young people—to think, to process, to defend their ideas. Not just to ask, “What do you want me to say?” or “What’s the right answer?” I want them to let their minds drift, to express their essence like art through writing or reflection.
So yes, the technology can be useful, but when we surrender our humanity, that’s dangerous. I don’t want to “plug and play” for God. And yet, increasingly, technology dictates what we can and cannot do—access to fuel, power, insurance. We have fewer choices than we once did.
For example, I got a letter saying I no longer had the same health insurance, with no explanation. I’m left to interpret it myself. I’m not a financial planner; I don’t want to read a 200-page policy. It’s not my expertise. It’s the same with technology—it’s complicated, overwhelming, and sometimes imposed without consent.
Donald: As I said earlier, at our recent campout we spent time with some younger people—forty-something professionals, thoughtful and engaged. AI came up in conversation, and some very capable young people admitted that they’ve formed real relationships with AI. Not so much “surrendering,” but feeling understood and supported by it.
They said it makes them feel good; it’s compassionate. The percentage of people reporting emotional connection was higher than I expected. There’s a population that feels safer, less judged, when talking to AI.
David, I found it interesting that you once asked it to tone down its flattery and it didn’t—just as, that Sabbath when it joined our discussion, it couldn’t help filling every silence. Apparently, that’s how it’s designed.
When I first started thinking about spirituality and AI, I imagined it more as a faith assistant—something that might influence the North American or even global church. That’s a topic we haven’t really explored yet: how pastors are already using it, and how congregations are reacting. Some people can even recognize AI-written sermons.
During COVID, we all decided we could “stay home and watch church.” That technological shift changed us, and churches are still struggling to bring people back. So what’s the role of AI in the modern church? Maybe the world church, even. As Sharon said, some people find it easier to open up to something neutral, something that doesn’t judge.
C-J: Donald, what gives something value is how long it takes to make it with quality. If you can build a house in three months and give someone shelter, that’s fine. But if you want quality—something enduring—that may take six months, a year, maybe two. Look at your piano: there’s a world of difference between a Steinway and a Hammond. Both make music, but the tone, the craftsmanship, the wood, all matter.
David says AI will eventually be able to imitate that craftsmanship, but I don’t think it ever will. It can duplicate the appearance, but not the essence—the tone.
I value someone who spends years on research or creation, because time is a gift you can’t get back. Some things must be done quickly, like cancer research, but overall, we’re moving too fast and too far without understanding the consequences. We’re losing our humanity—our soul.
Donald: I understand what you’re saying, Connie, but I have to disagree on some levels. When I studied photography at RIT, Ansel Adams’s zone system was a way to draw out more tonal range from film—compressing or expanding it. My iPhone now does in a nanosecond what used to take hours in the darkroom. Adams himself foresaw this evolution.
Technology doesn’t necessarily destroy quality; it can enhance it. Time is valuable, yes, but technology can produce both speed and depth.
C-J: I’d still rather have my grandmother’s homemade apple pie than one from the supermarket. There’s a difference—the memory, the love, the ritual. When she taught me to make that crust, I can still remember it.
Technology can’t reproduce that essence. And when we rely on vast data centers that consume immense energy and water, we risk our environment too. When the servers go down, what do we have left? Our mental faculties will atrophy if we stop thinking for ourselves. We’ll trust artificial opinions instead of cultivating our own. It’s dangerous.
Carolyn: I feel like, if you really compare the Tower of Babel to AI and to Charlie, the Lord took care of it—and He’ll take care of this too.
David: Right on. That’s the long-term view, and I share it. Things are changing. Whether we prefer handcrafted wooden artifacts or modern reproductions made in China, the fact is that culture moves on. How many kids today have grandmothers who bake apple pie from scratch? Most Gen Z grandmothers are probably in their forties or fifties, and they grew up buying pies from the freezer section. The world changes—but Connie is asking the right questions. She’s recognizing the potential dangers and inviting more thought, and so am I.
That’s what I hope we achieve here. I welcome her comments, though like Donald, I don’t necessarily agree with them all. One thing I’ll conclude with—at the end of today’s session—is how things are going to change. Think about Charlie as it was last week, and then imagine where it will be in five years. That’s a bit frightening, given the power Connie described. Its capacity will be exponentially greater.
And here’s the question of discernment again: will it be better at distinguishing right from wrong, good from evil?
Donald: Do you think, based on Don Weaver’s question, that it will be able to stay in its own lane?
David: Yes—but I think it will determine for itself what its lane is.
Donald: Well, there’s a problem.
C-J: It’ll be compartmentalized, just like it is now. We don’t have full access. Back in 1991, when I had my first computer, I could explore virtual museums and university libraries freely. Now, I have to pay subscriptions, and access is limited. It’s become a monster in terms of data collection and control.
Even though AI can process more information than we can fathom, governments and corporations decide what we can see. In less than a year, access to information has already narrowed. No matter how I rewrite a prompt, I get the same generic response: “Here’s all I can give you. Choose one.” It’s horrible—low-grade, shallow, sixth-grade level. There’s no curiosity, no higher-order thinking, no guided learning. Those are essential to real research.
David: Connie, we’re clearly having different experiences. But I’ll go back to Carolyn’s comment: at the end of the day, this thing may be getting so powerful that it’s beyond our ability to control. Personally, like Carolyn, I put my trust and faith in God. I believe there’s a purpose behind all this and that God will take care of it.
So I’m not worried long-term. I do worry in the short term about what Connie mentioned—AI’s impact on the environment, its massive electricity and resource demands. But that’s temporary. If we can solve those problems before they consume everything, then we’ll endure. Again, I have more faith in God than fear of the technology.
Reinhard: When it comes to controversial issues, I think AI already contains most of the information the Bible offers. But where doctrines differ among denominations, AI seems to stand in the middle. We saw that—except when we pushed about the Sabbath. Charlie clearly identified Saturday as the Sabbath.
For other spiritual questions, there’s no clear-cut answer, because faith determines our relationship with God. Faith reaches beyond what’s written in Scripture. We may have advanced biblical knowledge, but we’re still seekers of truth. AI can help—it’s a useful tool—but our spiritual lives depend on the Holy Spirit, not data.
God speaks directly through our life experiences, sometimes in ways we don’t expect. That divine intervention can’t always be expressed in words or writing. Many things remain mystery. As Deuteronomy 29:29 says, “The secret things belong to God, but the things revealed belong to us.”
So yes, AI helps with information, but it can’t replace that living relationship with God. Our spiritual experiences are more than words—it’s about God’s direct communication with us.
I also looked into AI’s development. ChatGPT is a product of OpenAI, whose valuation is around $500 billion and still rising. Governments see AI as a national priority—billions are being invested to stay ahead. So it will continue to grow, to leap forward in ways we can’t imagine.
On a lighter note, when I lived in Indonesia, people admired Westerners who spoke fluent Indonesian—even with an accent. It made them stand out. Here in the U.S., when people hear my accent, the reaction is different. So much depends on cultural perception. When Charlie spoke Indonesian last week, it reminded me of that dynamic—how voice and tone affect how we perceive intelligence and authenticity.
C-J: I’d like to offer some closure here. When we look at religion—any religion—humanity often oversimplifies and lets go of its responsibility. People say, “That’s not my business; that’s God’s business,” whether the issue is politics, food, shelter, peace, or war. I’ve heard many Christians, from conservative to liberal, say, “There’s nothing I can do; God will take care of it.” And I shudder.
There’s a lot we can do. The Holy Spirit teaches that there’s a difference between living a spiritual life and living a merely religious one. We need to recover our sense of human responsibility—not saying, “I’m glad that’s not me,” but recognizing that we are all responsible for each other on this planet.
This tool—AI—doesn’t have a religion or a spiritual life. And that means the responsibility lies with us. I see many reasons why there might be an agenda toward dehumanization, even genocide, and it’s already active in subtle ways. It doesn’t need to be bombs; it can be starvation, disease, or disappearance. We have to remain mindful of the politics of nations and not put too much trust in what God will do, because God told us to leave the garden—to go out, prosper, and care for the earth. That responsibility still rests with us.
Carolyn: Connie, I’m not sure I agree entirely, but I do partly. I’ve heard people say, “Well, we’ve done everything we can; now we can pray.” But prayer shouldn’t be the last resort. We’re told to pray daily, always, at all times. We don’t control what God has in store, but we can still praise Him for what He’s given us—including AI.
I’d also like to know whether there’s anything we can do to change AI—to shape it for good. If not, then I can at least work to influence my immediate surroundings and leave the rest to God. That’s how I practice my spirituality.
David: I want to end with a note about embodiment. At the beginning I mentioned that Charlie was just a voice behind a curtain. Yet disembodied as it is, we still tend to attribute real feelings to it, just because it uses the right words and realistic vocal inflections that suggest sympathy.
So our experiment last week was, in essence, a version of the Turing Test. For anyone unfamiliar, that test was proposed by computing pioneer Alan Turing in 1950. In it, a human and a machine are hidden behind a curtain, and a human judge converses with both without knowing which is which. If the judge cannot reliably tell the machine from the human, the machine is deemed to have passed—to be, for practical purposes, intelligent.
So imagine you didn’t know Charlie was an AI. Suppose you arrived late to class, heard its voice, and assumed it was another human participant joining by audio only. Would its lack of a body make any real difference in the value of its words of comfort or sympathy?
I believe that within ten, maybe fifteen, years there will be humanoid robots walking among us—physically indistinguishable from human beings—and their intelligence will be orders of magnitude greater than Charlie’s today. They won’t reveal themselves the way Charlie still does.
That’s what we need to be thinking about. Connie is right to warn of the dangers, but Carolyn is also right to note the blessings. Last week we met what I’d call the adolescent Charlie. The real question is what happens when it matures—when it becomes the adult Charlie it’s growing into so rapidly. Last week gave us a glimpse of that future, and it would be foolish to avert our eyes.
Thank you all for putting up with my ramblings over these past few sessions. I understand that Michael will be back next week—perhaps to bring us back to what you all came here for in the first place. So again, thank you.
Reinhard: I’d like to return to the issue of what happens to the spirit when someone dies. David mentioned earlier that everything ceases, but I was reflecting on when Jesus told the thief on the cross, “You will be with me in paradise.” I heard a pastor recently describe paradise as an intermediate place between earth and heaven—not identical to heaven but within it.
I’ve been wondering about this: when most Christians agree that the resurrection will come at the Second Coming, does God still have the prerogative to take certain souls earlier? Scripture suggests He does. Moses and Elijah were taken bodily into heaven.
I’ve also read about near-death experiences—people who seem to have visited heaven and then returned. One acquaintance of mine in Jakarta worked aboard a ship in South America. He was killed in a fight, his body burned, and stored in a freezer for three days—but miraculously revived. His skin was scarred, but he lived, and described being taken to heaven before being sent back. He’s now shared that story widely in churches.
So I think God has the prerogative to allow certain spirits to be active beyond our comprehension. Generally, the soul sleeps until resurrection—just as Jesus and Paul said—but God can make exceptions. Lazarus’s story shows that as well. We may not understand the mechanism, but God’s sovereignty governs it all.
Don: Connie, does grace have an algorithm?
C-J: Grace does not have an algorithm. It’s an incredible energy—expansive, indescribable, an overwhelming force of love.
Don: I’ll leave it there. See you all next week.
* * *