Last week, we explored the idea that artificial intelligence marks not just another technological development but an epochal shift that transforms how we think and, increasingly, how we engage in spiritual inquiry.
We traced the arc of human communication, from spoken word to writing, from printing press to the Internet, and finally to AI. Each advance expanded the scope of human understanding and cultural transformation. But AI is different. For the first time, we are engaging not with a passive medium but with a responsive, adaptive, and—some would say—intelligent presence which, though not human and not even sentient, can converse with us, counsel us, and influence our inner lives.
We considered how AI might be affecting spirituality by taking over roles held by church elders, teachers, and priests. It’s not that AI is in any way holier than these good people, but unlike them it is always available, sympathetic, responsive, and knowledgeable. For many people, that’s enough to make it feel trustworthy and acceptable.
But we also recognized its limits: Unlike our human spiritual guides, AI does not feel, it does not believe, it does not commune. And while it might mirror our spiritual questions, it cannot embody the spiritual life.
Even so, people are turning to AI for more than just answers to life’s little questions, such as “How many bags of concrete do I need to line my pond?” Some of us are asking it questions once reserved for gods, prophets, oracles, priests, and elders. In short, some of us are treating it as a personal guru, a source of spiritual insight.
Which brings us to today and the next stage of our inquiry.
If we are asking AI questions that echo those asked of holy presences—questions of identity, purpose, morality, and transcendence—then:
What kind of answers are we getting? And what do those answers mean?
Does AI merely reflect our collective knowledge, or does it become something more—an oracle shaped by the totality of human thought? Is it simply organizing data, or is it—by accident or design—guiding us into new ways of thinking, praying, and believing?
And what happens to spirituality—not just mine or yours, but ours (society’s)—when the most influential spiritual interlocutor is no longer a person, but a machine?
Today we’ll explore the nature of the questions we ask AI, and how the way we ask shapes what we receive. We’ll look at the ancient role of the oracle—not as a source of truth, but as a mirror of human longing. We’ll ask whether AI, in its predictive, interpretive, and conversational form, is stepping into that same role today. We’ll examine the consequences of listening to a machine that speaks with the voice of millions of human minds.
In short, we are now turning our attention to the emergence of a neo-oracular (or neo-oral) tradition, in which AI—though not holy—often plays a holy role. I guess that makes it a holy roller. <groan>
So let’s begin.
In 2020, the Vatican launched the Rome Call for AI Ethics, a formal agreement among Catholic leaders, academics, and tech companies. IBM, Microsoft, and Qualcomm are among its signatories. It calls for AI development to be rooted in the dignity of the human person, in transparency, inclusion, and care for the common good. It is not a policy or a doctrinal statement. It is a spiritual statement recognizing that how we build and use AI may reflect what we believe, or come to believe, about God and humanity.
In 2023, a consortium of Islamic scholars met in Abu Dhabi to apply the principles of maqasid al-shariah—the higher aims of Islamic law—to AI ethics. They emphasized that AI should protect life, honor justice, and serve the spiritual and social well-being of the community. In other words, AI must be evaluated not just on what it can do, but on what it ought to do—a normative, moral, and spiritual issue.
In East Asia, Buddhist monks have taken almost the opposite tack, forging ahead without waiting to weigh the impacts. In Japan and South Korea, robot priests oversee worship rituals in some temples. Some recite sutras. Some guide meditation. Reactions have ranged from curiosity and reverence to concern and amusement. But the underlying question is consistent with the monks’ aims:
If an AI can help us become more mindful, compassionate, or aware,
does it matter that it has no soul?
In Judaism, debates are emerging over whether AI could assist in interpreting halakhah (religious law) by learning from centuries of rabbinic commentary. It sure looks like an opportunity to deepen study, discern new spiritual truths, and enhance spiritual wisdom. But can discernment be automated? Is spiritual wisdom reducible to pattern recognition?
We are seeing spiritual language creep into the conversation even in secular contexts. Humanists speak of AI as a mirror for moral growth. Transhumanists dream of a technological version of resurrection by uploading our consciousness into a robot. Monastic, spiritual words such as “enlightenment,” “transcendence,” and “awakening” are bandied about in tech laboratories and conference keynotes.
Which prompts another question:
Is AI becoming the means to satisfy our ancient longings for
immortality, wisdom, guidance, and salvation?
Are we now directing spiritual questions toward AI instead of the divine?
In some ways, the answer is clearly yes. AI may not be replacing God, but it may be replacing the experience of God for some people—that sense of being heard, accompanied, known by a higher power.
There is substantial peer-reviewed evidence that AI companions can and do provide genuine emotional comfort, reduce loneliness, support mental health, and even spark deeper reflection. While these systems don’t experience or understand, humans feel heard, guided, and accompanied by them. That might make some of us uncomfortable, but the fact is, we are beginning to encounter AI entities that simulate presence. That offer comfort. That even point us back to deeper parts of ourselves.
Which raises more questions:
If AI can simulate presence, need we still seek mystery?
Does accepting consolation from something artificial
mean we have ceased to hunger for the sacred?
These are not rhetorical questions. They are spiritual ones, as deep and urgent as any ever posed in church, synagogue, temple, or mosque. And we are just beginning to ask them.
Let’s stew in discomfort for a minute. Stories of AI comforting the grieving, offering guidance, quoting scripture or poetry, etc., can stir admiration, awe, even relief. But are they crossing a line we don’t fully understand? Is it risky, even sacrilegious, to let a machine play a holy role? One possible answer is that we have always known the danger, and even the wrongness, of mistaking our own creations for something greater than ourselves. But we go ahead and create idols anyway.
Not that we weren’t warned:
“They have mouths, but do not speak; eyes, but do not see.” (Psalm 115:4-5)
The golden calf was the product of human hands, but was worshiped as if it were divine. Don’t blame the statue. It was our projection, our illusion that something we made could save us, guide us, or love us. Today, we generally don’t make our idols out of gold. Instead, we write code and train AI models. And some of us, without meaning to, begin to trust them with our most vulnerable questions: What should I do with my life? Why did this happen to me? Am I a good person?
When such questions are asked of AI, and when the answers feel intelligent, relevant, and even wise, it is not, in itself, evil to treat the AI as a source of meaning, truth, and comfort. So why is it spiritually dangerous? Because AI is not (yet, anyway) a moral agent. It does not discern good from evil. It does not love. It does not suffer. It does not forgive. But it can simulate all of these, sometimes convincingly enough to fool even the wise.
So the risk is not that AI will become conscious (though it might, one day), or divine, but that we will treat it as if it were. We may not recognize it in our religious doctrines, but we may do so in personal spiritual practice, until it becomes habitual and trusted implicitly. We may be in danger of placing our spiritual weight on something that cannot hold it.
The danger is not limited to personal reliance. It extends into institutions, systems, and collective discernment. More and more, AI is used in settings where moral judgment is required: criminal sentencing, refugee policy, hiring decisions, religious education. These are not neutral applications. They are matters of justice, mercy, and human worth.
And here we encounter another ancient temptation: the temptation to shift responsibility onto someone else. In the Bible, Adam blames Eve. Pilate washes his hands. We blame the AI. It’s data-driven, so it must be objective and efficient—we think. But underneath, we may be seeking release from moral burden. The AI said it, not me. It’s not my fault. The AI agent did it, I am not guilty.
This is not just a legal or ethical issue. It is spiritual. Because true spiritual life involves the willing bearing of moral responsibility. It involves wrestling with ambiguity, risking love, accepting imperfection, and learning through failure. Can such tasks be outsourced? Theologian and rabbi Abraham Joshua Heschel wrote, “The opposite of good is not evil. The opposite of good is indifference.” AI can be incredibly helpful, but it is fundamentally indifferent. It has no stake in justice, no skin in the game of suffering. It cannot weep. It cannot repent. At least, not yet.
Some have argued—myself among them, in my book—that morality is not a fixed code, not an algorithm: it is a heuristic, a dynamic rule of thumb, an evolving orientation toward the good that must emerge and predominate over evil in intelligent beings, because evil is destructive. I also argue that consciousness and real intelligence must emerge from insentient complex entities when their complexity passes a threshold. That assertion is impossible to prove or disprove, but if it is true, then AI might not stay spiritually neutral forever. It might one day cross a threshold into personhood.
But I digress, for we are certainly not there yet. For now, the risk is not that machines will become moral agents, but that we will pretend they are and turn to them for answers that still rightly belong to the slow, plodding work of human conscience and divine encounter. So another question we must ask is:
Are we turning to AI for answers that were always meant to be wrestled with
in community, in prayer, in conscience, in covenant?
If it’s not OK to turn to AI for answers, is it OK for us to turn to it for inspiration, for reflection, for the questions to ask, for reminders of Scriptural truths? (I hope it is, or I’m in big trouble.) We should just be mindful that while AI may seem wise, it cannot embody wisdom. It cannot walk in our shoes. It cannot bear witness, even though it speaks our language, reflects our preferences, and offers affirmation disguised as guidance. It flatters us, and in doing so, it lulls us into forgetting that wisdom does not flatter. It changes us.
Are we in danger of mistaking AI fluency for wisdom?
Are we being shaped more by flattering feedback than by formative guidance?
These are pastoral questions affecting daily life, and given the speed at which AI is being adopted, they are urgent. That speed is driven by the sheer convenience of AI in helping us get through the day: “How many bags of concrete do I need to line my pond?” But a day in the spiritual life is different. Our spiritual questions are not about convenience. They are about communion. They are about what we give our hearts to. Anything that draws our trust, our time, our attention, and our reverence becomes, in some deep sense, a sacred presence in our lives; and AI can do all of the above, if we let it.
Which means that even if we don’t think of AI as a god, it may still begin to function as one. So we must again confront the questions:
Should we let it, and if we do, are we in danger of idolizing it?
Spirituality is not about risk-avoidance. Almost to the contrary: It is about searching for Enlightenment no matter where that search leads. Think of AI not as an oracle or even a mirror, but rather as a lens that brings into focus patterns previously unseen in the vast sea of human knowledge, experience, longing, and contradiction contained within the noosphere.
In other words, AI might not invent meaning, but it can reveal meaning in unexpected ways. Its response to a deeply personal question such as “What is the purpose of suffering?” may not be original or wise, but it might offer fresh insights from voices of the past we may never have heard before, and thus prompt in us something genuinely spiritual: discernment. Dr. Weaver’s analysis of the Book of Job did that for me; I had never read it before.
Any response from AI should prompt us to ask ourselves: Does it resonate with me? Does it call me to love, or to step away? Does it feel like truth, or like a good excuse to avoid it? In this way, AI can become a kind of spiritual lens, not offering answers but bringing the right questions into focus. It does not have to replace the journey, but it might sometimes help us notice things along the way.
We noted up front that AI increasingly serves as a companion to solitude, as a presence that invites journaling, meditation, or self-examination. We might enlist its help to translate sacred texts, give us access to interfaith wisdom; with its planet-sized brain and memory, its scope is unlimited. For those disillusioned with institutional religion, AI might offer a kind of threshold—if not into belief and belonging, then perhaps at least into wonderment and deeper exploration.
It all depends on how you approach AI, and again, that depends on what you know about it. Used passively, AI will reflect your biases, confirm your desires, and distract you from depth—keep you in the shallows. Used mindfully, it will ideally act as a kind of teacher’s aide; but it will act as a teacher if you let it, and that should give you pause. In ancient times, people sought divine wisdom—oracles—through clouds, dreams, birds in flight, or patterns in sand. None of these were divine in themselves. They were mundane media—instruments through which the sacred might speak, or the seeker might listen. AI, too, is a communication medium, and if we approach it not with worship but with awareness, it can be a valuable medium of spiritual communication.
We began by recognizing that AI is not just a new tool. It is an epochal technology making the noosphere accessible to us. It is a heuristic, learning, adaptive presence that is already shaping how we think about, relate to, and even imagine what is sacred. We named the cultural shifts that form the backdrop to this moment — the rise of online life, the erosion of deep attention, and the quiet drift away from human-to-human relationships. We acknowledged that spiritual experience may be changing dramatically, not only because of AI, but because of the conditions that have made AI seem more responsive, more available, and sometimes more comforting than the institutions, communities, and practices we used to rely on.
We looked back at the ways religious and philosophical traditions have engaged other-than-human intelligences. At how faith communities today are wrestling with what AI means for teaching, ritual, and justice. And we named a core concern: the risk of placing spiritual weight on a system that may simulate presence, but cannot yet (and may never, though I would dispute that) carry the moral responsibilities of personhood.
And then we turned the lens. We considered how AI might serve spiritual life not by answering questions, but by helping us ask better ones. Not by being wise, but by sharpening our own search for wisdom. Not by offering truth, but by helping us listen for it.
In all of this, let’s not forget that we are all different and will respond to AI in different ways. It depends not just on your level of knowledge about AI, or even on how pleased (or not) you are with its responses to your prompts. It depends also on your knowledge of the world in general, your level of education, your cultural background, your personality, and even your emotions at any given moment. It may be that “the masses”—by definition less educated and less discerning—will be far more gullible in accepting whatever AI says than the educated elites are. That gullibility gap has enormous societal implications. Not least, it also depends on the version of the AI: each generation is dramatically better than its predecessor, and will seem more reliable, more believable, more real.
Thank you for listening.
C-J: There was so much there. I felt like, instead of being given a slice of the pie, I was being asked to eat the entire pie and discern its texture, flavor, and my own personal preference. It was overwhelming. Even if I printed out what you sent us, I could take any one of those paragraphs and spend a great deal of time with it. It was so dense that, for someone not immersed in this subject all the time, it could feel suffocating.
In your writing, you were holding individuals accountable while also raising the question: “Do you want AI to govern humanity in our spiritual and everyday life?” That was in the first couple of paragraphs, and my immediate reaction was, “No, I don’t want a machine to govern my decisions.” Even if its decision might be better for me, part of the experience of what makes it better is important—and you treated that as a side dish.
You asked whether we need such experiences to mature as a species, and you were right in pointing out the importance of language: vocabulary, adaptability, nuance—shaped by our exposure, education, and how we use words with different people. These qualities will determine whether AI is effective or dangerous.
That day’s discussion made me think: this device must be AI, because it was too perfect. Yet humanity teaches us other essential things—empathy, accepting loss, adjusting ourselves. I can retain my identity while holding another person in sacred space. I can respect the humanity of someone who has committed murder, yet still choose not to have that person in my orbit.
What you’ve written could be divided into many subchapters because of its density. Any one of your questions could prompt extended reflection: How would I respond? Where would I use it? Why am I against using it? Is my fear rooted in losing the choice between humanity and machine simply because the machine does it better?
And then there’s the thought that both humans and AI are sustained by “electrical” life—mine is biochemical, AI’s is powered by servers. Remove the source, and neither continues. Without life, I cannot speak; without an algorithm, AI has no function. There was a great deal here, very well written, and I look forward to sitting with it and taking more time to reflect.
Sharon: I’m fascinated by this topic, especially as it relates to expanding my faith into realms I might never reach otherwise—realms beyond the biases of the people I usually interact with. This opens new dimensions for me. If I relied solely on the people within my sphere of influence, I might miss perspectives from the wider world of religious faith.
Another point is that I’ve had many human counselors—some wonderful, others awful. I don’t want a machine without real compassion to advise me, but I would rather have a compassionate-seeming machine than an underqualified spiritual advisor bringing their own baggage to a session. There’s value in the objectivity that AI can offer. As a social worker, I know we already use AI tools in counseling. Algorithms can predict human behavior to a degree, and having such a tool can be helpful.
That said, AI will never replace the “community of touch” and the sense of physical belonging. Those are irreplaceable.
Yesterday I had an interesting AI experience. I asked my co-pilot AI app to draft a questionnaire and scoring sheet for interviewing accounting interns for my NGO. The AI knows a lot about my organization because I use it frequently. After giving me the list of questions and scoring criteria, it asked: “Do you want me to add something about their dedication to your faith perspective?” I hadn’t even considered that. It was a profound question—should I include faith commitment in the interview process?
This shows that AI can sometimes raise valuable questions I might overlook. For me, the goal is to combine high touch and high tech—leveraging both to reduce poverty, alleviate suffering, and deepen my walk with God. Any tool that helps me make my world a better place belongs in my toolkit.
C-J: In response to that algorithm asking about your faith—because everything you do is guided by it—I wonder about the subliminal content within AI. All language has its overt meaning and its cultural undertones. That algorithm detected, from your past questions and research, that faith was central to your thinking.
I have a problem with granting AI governance over my discernment and belief system. I also question whether AI can account for the psychedelic spiritual experiences our brains can produce—either spontaneously or with enhancement. I’m quite sure it cannot. For those who have experienced such moments, they can be awe-inspiring or nightmarish.
These mind-expanding experiences seem unique to humans in the sense that we can verbalize and represent them through art. I also see signs of awareness in sentient creatures, like my dog, who sometimes comes to me for no apparent reason, as if to say, “I just need time with you.” She communicates needs that go beyond training or routine cues.
Machines can’t do that—at least not yet—unless programmed to check in regularly or play certain music to create a sense of safety. Until we can write algorithms that truly self-correct and anticipate not just needs but the sense of spiritual humanity, I believe AI should not have such governance.
Kiran: I wish companies like OpenAI, Anthropic, and others had dedicated divisions for spirituality, because we’re at a stage that feels like the early days of Facebook. Back then, everyone was excited about connecting with friends. Eventually, Facebook became an echo chamber, amplifying both positive and negative aspects of ourselves.
In thinking about spirituality and AI, I see two phases: pre-AGI (Artificial General Intelligence) and post-AGI. Right now, we are in the pre-AGI phase. AI systems are trained on the literature available to them, which is shaped by cultural context. U.S.-based AI differs from Chinese AI, and both differ from others. When you log in from Michigan versus California, the AI knows your location, your questions, and your history, and it adapts accordingly—reinforcing your own echo chamber.
The goal of these corporations is to keep you engaged for as long as possible, because the longer you use AI, the more profitable it is. Now imagine if a government, say China’s, instructed its AI to promote a certain spiritual framework. The first generation might resist; the second might be aware but less resistant; the third could accept it as truth.
Another concern: sacred texts often confront us. When reading the Bible, for example, there are moments of discomfort—Jesus telling us we are sinners, or challenging us to change. If AI offers spiritual guidance but avoids anything that might upset us, we’ll get a flattering echo chamber instead of truth. That’s bad for business but worse for spiritual growth.
Post-AGI, things could get even more complex. An AGI could analyze all the world’s spiritual frameworks—Judaism, Hinduism, Christianity, Scientology, and more—and create a synthetic, internally coherent belief system. But would it discard teachings that don’t fit modern survival logic, like “turn the other cheek”?
AGI could also amplify harmful beliefs just as quickly—such as justifying oppression or corporal punishment—by reflecting them back to users. This is why we need active oversight to guide AI within the bounds of healthy spirituality. Yet even with a panel of experts, consensus is hard to reach.
Another issue is dependency. We already look to church leadership for decisions on issues like women’s ordination or same-sex marriage. Just as the printing press eroded the authority of the Catholic Church, AI could sweep away traditional decision-making structures—unless religious institutions invest heavily in shaping AI’s responses.
It’s both hopeful and worrying at the same time.
C-J: The other day, I listened to a podcast about a musician who had died. The discussion reduced him to his body of work, without exploring deeper questions: What motivated him? How did he evolve musically? It felt superficial, like so much of today’s news—brief, shallow, and lacking accountability.
It’s dangerous when we stop questioning the meaning, purpose, and intentionality in our lives. If we don’t surround ourselves with people who challenge us toward accountability and service—both to others and to the planet—we could find ourselves in trouble we can’t yet imagine.
Don: It seems to me there’s a tipping point in artificial intelligence—between simple data management and analysis, and something more. When I use AI, I treat it as a search engine or a tool for organizing data. But there appears to be a point where it goes beyond ordering facts and starts engaging in a kind of analysis that could be more complex, and possibly more troubling. I’m wondering what you think about that, or if you understand what I’m trying to describe.
David: I’ve been trying to emphasize that you shouldn’t just ask AI a question and accept its answer at face value. I’ve learned this from my own experience. On the most basic level, AI might hallucinate and give you a completely wrong answer—embarrassing you if, say, you quoted it in a courtroom.
But it’s more than that. AI’s responses can appear so knowledgeable, so intelligent, and so sensible that you’re tempted to accept them without question. For a simple query—like “How many pounds of cement do I need to line my pond?”—you can be fairly confident in the result.
But if you ask, “I’m having difficulty with my spouse—what should I do?” or “I’m thinking of joining the Seventh-day Adventist Church; what do you think?”—accepting the first answer you get would be unwise. You must ask follow-up questions: “Why are you saying this? On what sources is this based?” AI might respond, “A group of scholars in the 15th century said…”—but that doesn’t make it the best answer.
The same principle applies to scripture. I don’t necessarily accept every word of the Bible because it was written by human beings. I question it. That’s why I value our discussions here: this is a place where I can challenge the text and dig deeper. I’m coming to agree with Don that some answers are there in scripture, but not in the form we expect. The same holds for AI: we should see it less as a source of answers, and more as a source of questions.
Carolyn: When I look at the news on my phone, I see so much fake information—falsehoods, lies, and distortions—that I don’t know what’s real anymore. This makes me question everything. I appreciate that AI can tidy up a paragraph or improve my writing, but I worry about trusting it for anything beyond factual, practical matters. I’m not sure I even have the right question—I just feel the frustration.
David: Carolyn has put her finger on the key issue: accountability. AI has no real sense of accountability. When we use it, we have to remember that while it provides the answer, we are accountable for what we do with it. If you keep that in mind, AI can be a useful tool.
Donald: As I listen to everyone’s perspectives, I think each point has credibility and offers something important to this discussion. It’s like picking up a book—you never know the author’s true purpose. Some books are shallow, some are deep, some are thorough, and others are skewed. Maybe the author’s motive is self-fulfillment; maybe it’s profit. A book can be dangerous in that way, and AI is like many books combined—accessible instantly, and capable of shaping a perspective quickly.
I agree there’s a lot of falsehood out there. But I wonder, why are we so fearful of AI—aside from the possibility of it “taking over,” which is not a small concern? We’ve allowed other media, like social networking, to play significant roles in our lives. Is the difference with AI its speed? Is it the breadth and depth of information it holds?
I’ve thought about this in two categories. On one side, we dangerously attribute human characteristics to AI, even giving it pronouns. But AI is not ethical, has no soul, cannot discern, and is not truly objective. It can even hallucinate. On the other side, it’s a remarkable technological development with the potential to help in areas where humans have struggled for decades.
Perhaps what frightens us is that AI feels human while operating at unprecedented speed. That combination could make it spin out of control before we realize it. Personally, I also use it like Don does—as a kind of search engine. On vacation, we’d ask it what not to miss in a certain area, and it usually delivered good, accurate information. But I still think we have reasons to be cautious, maybe even fearful, about the human-like qualities we project onto it.
Carolyn: I am human, and scripture warns that in the end times, “Satan will deceive the very elect.” Sometimes I feel like I’m walking a tightrope, surrounded by things that are false alongside things that are wonderfully true. I enjoy using AI—it’s amazing what it can come up with—but I worry about future generations. Without the grounding of a belief in a God who is in control, they might hear things that seem plausible but don’t align with truth. I fear that Satan could seize control of this powerful technology.
Sharon: For those of us involved in epistemology and knowledge-building, I think part of the fear stems from the loss of the peer-review process. In academic research, we rely on peer review to validate or challenge our work. With AI, that process seems absent. Can human intellect still play that role of oversight in an AI-driven world?
Perhaps what we’re really feeling is the loss of control—and that alone can create fear of the unknown.
Rimon: To add to what Carolyn said about fake news—this also affects the younger generation and even adults who aren’t discerning. Many people passively accept whatever answer they’re given, without weighing whether it’s right or wrong. They’ll say, “I saw it on my phone” or “That’s what ChatGPT said,” and treat it as fact. The impact on those who don’t take time to assess what their devices are feeding them is deeply concerning.
Don: What exactly does the term generative AI mean? How is it different from other kinds of AI?
David: Generative AI produces statistical predictions based on the prompt you give it and the patterns in its training data. Earlier AI systems relied on decision trees: you’d feed in a database of information along with a set of rules—“If this, then that.” A chess-playing program, for example, would work through each opponent move step by step, applying hand-coded rules to the positions in its database.
Generative AI, by contrast, arrives at an answer by predicting, word by word, the most probable continuation of your prompt, based on patterns learned from its training data. This makes it far more flexible than the older rule-based methods, and it can handle questions no one anticipated in advance. It was a major breakthrough in AI development.
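The contrast can be made concrete with a toy sketch (purely illustrative; real generative systems are vastly larger and more sophisticated, and all the names and the tiny training corpus here are invented for the example). A rule-based system can only return answers someone coded in advance, while even a trivial “generative” model predicts a continuation from statistics it learned:

```python
from collections import defaultdict

# Rule-based ("if this, then that"): every response is hand-coded in advance.
RULES = {
    "hello": "Hi there!",
    "bye": "Goodbye!",
}

def rule_based_reply(prompt: str) -> str:
    # Anything outside the decision tree simply fails.
    return RULES.get(prompt.lower(), "I don't understand.")

# Generative (toy bigram model): learn word-to-word transition counts
# from a training corpus, then predict a likely continuation.
def train_bigrams(corpus: str) -> dict:
    counts = defaultdict(lambda: defaultdict(int))
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        counts[a][b] += 1
    return counts

def generate(counts: dict, start: str, length: int = 5) -> str:
    word, out = start, [start]
    for _ in range(length):
        followers = counts.get(word)
        if not followers:
            break
        # Greedy decoding: pick the statistically most likely next word.
        word = max(followers, key=followers.get)
        out.append(word)
    return " ".join(out)

corpus = "the lord is my shepherd the lord is my light"
model = train_bigrams(corpus)
print(rule_based_reply("hello"))   # Hi there!
print(generate(model, "the", 3))   # the lord is my
```

The rule-based program knows exactly two prompts; the bigram model, given any starting word it has seen, produces a continuation it was never explicitly told, which is the essence (at miniature scale) of the generative approach David describes.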
* * *
Spirituality is a deep topic, and AI’s impact on it—especially across generations—is something we haven’t yet addressed fully.
Historically, spirituality has been a journey rarely undertaken alone. Most seekers of truth have traveled with others—peers, mentors, or communities like the church—whose role is to accompany them toward greater moral clarity.
As we’ve seen in our first two talks, AI can be an interlocutor, sometimes a mirror or a lens. But could it also be a partner in our spiritual lives? That’s a question we’ll dig into further. Connie was right—there’s depth in nearly every sentence of this discussion. While I aim for a five- or six-part series, it could easily become a much larger project.
Kiran: Do you mean something like a personalized religion or personalized spirituality?
David: Yes. If we look at generational trends—especially the shift from human-to-human relationships toward human-to-machine interactions—we can see the movement toward personal religion. Whether that’s good or bad is debatable, but it is the reality. The question is whether we should try to influence it.
Kiran: Because AI already knows my psychological profile from the questions I ask, it could customize spiritual content for me. If it’s genuinely altruistic, it could help me grow into someone more altruistic toward others. But if the company behind it decides instead to maximize engagement and profit, it might manipulate me for commercial purposes. That’s the risk. I wish more people were discussing these issues the way we are.
C-J: That shows how important this is. These conversations should be happening at many levels. Governance, accountability, and clarity about our belief systems are critical—and so is expressing them openly. Well done.
David: It’s more important than ever to keep asking questions. What’s happening is that we’re being presented with ready-made answers. Life seems easier, but in reality it’s becoming more complex. We need more questions, not fewer. AI can help us with that if it leads to deeper spiritual truths and experiences. But how we handle it is absolutely critical, especially for the next generation.
Carolyn: Who’s in charge of AI? Is there a governing board or a group of people setting its values?
Kiran: Sam Altman, Elon Musk, et al. Each has his own company and his own AI.
Donald: If we don’t know who’s running it, we have good reason to be fearful.
Carolyn: I am fearful, because I don’t know. If Elon Musk has his own AI, one day he’s in legal trouble, and the next he’s receiving awards—how do we make sense of that? I’d like to know who decides the direction AI takes. Do they make deals with each other? Does one company build the best AI and then sell or share it with others?
Donald: If we look back, there’s a pattern. When the tobacco industry was thriving, people were hooked. Then information came to light, and the industry was finally curtailed. Out of that came other addictions—like to certain foods—driven by the same profit motives.
So, is AI about money? Is it about which AI system I connect to? I’ve even seen recommendations suggesting, “If you want this type of information, go to this AI; for another type, go to that one.” Who decides this? Who’s telling me where to go for information?
Chris: When we feel fear, hesitation, or concern about AI, where does that come from? Where did the fear of the printing press come from? Or the fear of book distribution—things we now value and seek to protect?
AI is new to all of us. We should keep our minds open to the possibilities it offers, while still exercising caution. And as I said last week, we shouldn’t limit God’s ability to use something like AI for good.
Donald: It seems to me that today’s discussion is so dense that I have to break it down into parts just to absorb it. I wonder if we should organize some of our key questions and points as a class—like bullet points on a board—so we can return to them. This topic is comprehensive, and it’s easy to move on before we’ve finished a thought.
When Carolyn says this isn’t new, I agree. Similar fears have been expressed about books, the internet, and other innovations. But AI operates at such speed, and if you believe in the power of Satan, the potential misuse is deeply frightening. We should identify both the valuable aspects of AI and the dangers—side by side.
Don: I’m hopeful we’ll pause long enough in this series to do exactly that. I think our original plan to finish the topic in five sessions may be optimistic—it might take 15. We took almost three years on the topic of grace (and grace has much to say about AI, too).
Rimon: Could there eventually be an AI or app specializing in spirituality? Something driven by sources across different religions, not just one denomination. It could focus on spirituality and religion broadly, drawing from many traditions.
David: I don’t know for certain, but I suspect some churches have already created apps to support their congregations. I haven’t seen them myself, so I can’t say exactly what they do.
C-J: The Catholic and Methodist churches already have systems like this. They guide members through the entire Bible over the course of a year, with lessons scheduled for specific days. These programs are integrated with their catechism, rituals, belief systems, and saints.
Rimon: I mean something broader—an AI tool dedicated to spirituality and religion in general, not just for one denomination. Its training data would come from sources across traditions, all discussing the higher power.
David: There’s no single human with the authority to create such a tool for all religions. But the kind of AGI Kiran mentioned might one day be able to do it. If AGI reached a level where it could understand religion and spirituality deeply—and could discern good and evil better than we can—why wouldn’t we rely on it?
That would depend on the level of AI available. Right now, we don’t have AGI. But I believe we will get there, and that’s why I keep encouraging people to think about how to prepare—especially for the younger generations. Babies today may interact with AI before their parents even do, and they’ll grow up immersed in it.
Don: There’s much to consider.
* * *
