Interface

Between Heaven and Earth

Spirituality & AI, Part V: Utopia?

Over the course of four Sabbaths, we’ve discussed some simple questions that have big consequences. We’ve asked: What is AI? What does it do to truth, trust, and community? What roles belong to intelligent machines, and what must stay human? We’ve acknowledged that today’s AI cannot be considered a sentient being, let alone God. It doesn’t love. It doesn’t repent. It doesn’t carry moral weight or responsibility.

But I also pointed out that these are early days, and I urged you all to become futurists, so you won’t be overwhelmed by what’s coming down the road. I tried to stress, though, that we can’t predict the future very accurately if we base our predictions on today’s early versions of AI. The fact is, intelligent systems are bound to get even smarter, and to grow less prone to hallucination, sycophancy, and the other faults that make them somewhat risky today.

They’ve already begun to become embodied through robotics and, to that extent, sentient. By that I mean they can sense their environment and react to it objectively, but they can’t (yet) do so subjectively, as we do, with feelings of fear or pain or joy or boredom. Nevertheless, they will seem to grow more like us, in mind and body. That is why we have to become futurists and make decisions now about how we will use intelligent systems—or how they will use us. Right now they are still learning from us, so we may have some influence over how they behave in the future, but the day is approaching when they’ll have nothing left to learn from us, intellectually at least.

So today I want us all to take a deep breath and gaze into the crystal ball. To me, a crystal ball is like the Bible: It gives you more questions than answers. But like many a Christian, many a crystal-ball-gazer has come up with answers, and first we’re going to gaze through the eyes of some of them. Specifically, I mean the eyes of the people we might expect to see into the crystal ball of AI better than we can: the people making it happen. They are not focused on spirituality, as we are; they are focused mainly on the impact AI might have on the physical, social, and psychological aspects of life. But it seems to me that any major change to the physical, social, and psychological aspects of life has the potential to affect, and maybe to change, what we view as spiritual.

The more bullish among AI’s visionary leaders include OpenAI’s Sam Altman, who wrote last September: “It is possible that we will have superintelligence in a few thousand days (!); it may take longer, but I’m confident we’ll get there.” (ia.samaltman.com) “A few thousand days” sounds misleadingly short. A thousand days is 2.7 years, and since “a few” typically means three to five, Altman is really saying “8–13 years.” But still, that’s barely a blink in world history.

DeepMind’s Demis Hassabis says pretty much the same: that we may “have something that we could sort of reasonably call AGI [Artificial General Intelligence—at least as intelligent as we are]… maybe in the next five to 10 years, possibly the lower end of that,” while NVIDIA CEO Jensen Huang predicts that “in five years [AI] should… be able to pass any [human] test.” Elon Musk compresses the timeline further: “If you define AGI as smarter than the smartest human, I think it’s probably next year, within two years.” (The Guardian, Reuters)

A less bullish group of AI leaders is focused less on AGI and more on near-term “agentic” capability: software that doesn’t just chat with us but acts for us, as our agent, and that is not five years away—it is here now. Last November, Satya Nadella described Microsoft’s vision as being to “build a very rich agentic world defined by this tapestry of AI agents, which can act on our behalf across our work and life.” His AI chief, Mustafa Suleyman, who once headed applied AI at DeepMind, is even more concrete about the next turn of the crank: “In 2025, AI will have learned to see, it will be way smarter and more accurate, and it will start to do things on your behalf.” For those who are calendrically illiterate or chronologically challenged or temporally disoriented, like me at 80, 2025 is this year, and it’s still got 4 months to go.

Bill Gates confirmed what I’ve said about him all along—namely, that his visionary ability has never been much to write home about—when he said recently: “In 5 years, agents will be able to give health care advice, tutor students, do your shopping, help workers be far more productive, and much more.” (gatesnotes.com, DeepLearning.ai) (He may be a hopeless visionary, but I have to admit, more people listen to him than listen to me.)

Some AI leaders warn that accelerating capability promises both opportunity and danger. Anthropic CEO Dario Amodei told the U.S. Senate he sees “extraordinarily grave threats to U.S. national security over the next 2 to 3 years,” and AI “godfather” Geoffrey Hinton told 60 Minutes, forebodingly: “I think in five years’ time it may well be able to reason better than us.” He seems in no doubt that time is short and that the stakes are large. (Senate Judiciary Committee, CBS News) And neither am I.

There are a few skeptics, however, including Meta chief scientist Yann LeCun, who insists that AGI is “not any time soon… clearly not in the next 5 years.” Something beyond the LLM [Large Language Model—just think ChatGPT] is needed, he says, and I think he might be right about that. But I disagree with Andrew Ng, former Chief Scientist at Baidu, who says that “AGI has been overhyped… For a long time, there’ll be many things humans can do that AI simply can’t.” (X (formerly Twitter), The Hans India) If he were talking about physical things requiring robotic dexterity, I’d agree with him, but not about AGI itself.

In any case, from everything I read, the skeptics seem to be in the minority. Most AI luminaries—the very people building the future—are themselves predicting that AI will be doing everything for us, or at least for many of us, within the next 2–15 years. Yes, it will take our jobs, but it will also meet all our needs—assuming we have the means to pay. After all, it costs money to power AI data centers, so much that Microsoft is having the Three Mile Island nuclear plant restored to power its own.

Such issues as power supply are critical, but I want to remain focused on spirituality, so let’s assume that power is plentiful and society as a whole has access to the AI of the future. Does that mean Utopia? Or does it mean Dystopia?

We’ve never experienced utopia, so it’s hard to be sure that’s what awaits us, but some great minds have thought about it and come to the conclusion that the very concept is deeply flawed and that Utopia cannot, therefore, exist, ever, on this earth.

The late sci-fi writer Stanisław Lem (whose book The Cyberiad is the best sci-fi book of all time, in my opinion) imagined a society where intelligent machines replaced parliaments, schools, hospitals, courts, and borders. In such circumstances, he wrote, ethnic identities recede, police and prisons vanish, and no one needs a job to survive; universal idleness is possible, yet anyone is free to pursue work—or anything else—if they choose. The result is not really a society. It’s just a vast collection of individuals who each live in their own little Utopia and occupy themselves with activities beyond our present comprehension. So in his view, Utopia seems achievable, or at least imaginable.

AI historian Daniel Crevier, writing during the Cold War, mapped three possible futures in a world where machines are more intelligent than we are. One scenario is utopian; the other two are dystopian. The bleakest envisions an intelligent American defense computer deciding, all on its own, to team up with its Soviet counterpart (this was in the 70s, remember) to rule humanity, because humanity risked blowing up the world.

Both technically and politically, this is less absurd today than it sounded back then. The military has long been an eager adopter of AI, and a military AI must, by design, training, and definition, be militaristic. In a world where AI already judges who gets a loan, when to drop the control rods in a runaway nuclear reactor, and whether to fire on an unidentified aircraft that turns out to be a civilian airliner with a broken transponder, there’s no point wishing that humans were still in the loop, or arguing that humans could do a better job. We’ve opened that barn door already, and the horse has bolted.

Like a horse on the loose, complex AI systems can behave wildly and unpredictably. Crevier warned that instability would be common. Actually, he does not seem to be right, so far, in that respect; but even if he were, aren’t we also liable to behave wildly and unpredictably sometimes? Even mild wildness can result in grave damage. Chaos theory talks of “sensitive dependence on initial conditions” in which the flap of a butterfly’s wings in Beijing jostles the air molecules around it and they in turn jostle other molecules in the atmosphere all the way to Toledo, which feels the effect as a tornado. The point is that a system prone to chaos is unpredictable and therefore ultimately impossible to control. 

The worry is not just cybernetic but psychological as well. Phineas Gage was a 19th century American railroad construction foreman who survived a rock-blasting accident in which an iron rod an inch in diameter and 3 feet long was driven completely through his head, destroying much of his brain’s left frontal lobe. He had been a likable, confident, and very capable young man before, which was why they made him a foreman, but he became a drunken wastrel afterwards. Will our intelligent machine be susceptible to analogous injury that might produce abrupt and dangerous changes in its cognition or values?

Crevier called his second future scenario the “Big Brother” scenario, after Orwell’s character in 1984. That scenario feels less speculative every year. Our computers and phones already “watch” us, inferring our habits and our secrets from our purchases, the places we visit on the Web, our emails and social media activities, our votes, travel, biometrics, education, reading, and relationships. And if you are a school student in Florida today, watch out: Big Brother does not want you reading 1984, and he is watching your school to see that you don’t.

Today we focus on what people (merchants, politicians, and criminals) and institutions (companies, government agencies, and criminal enterprises) do with all the data they collect about us. But when the watcher, when Big Brother, becomes an alien intelligence more capable than we are, threaded through our infrastructures, what can we do? We can’t just pull the plug.

Crevier’s third scenario, the “Blissful” one, sees major intellectual inventions such as geometry, algebra, calculus, and now AI enriching our conceptual vocabulary and compressing complexity. They let more minds do more. De-skilling is a loss for individuals or cultures but not necessarily for civilization. Few people can thatch a roof today, but so what? In return for that loss of skill, we got durable, affordable roofing for billions. Some artisanal professions endure—there are still a few people who know how to thatch—and that’s good for them, for their few customers, and for people interested in history, but thatchers are no longer socially or economically significant. In Crevier’s utopian future, each of us has a personal librarian, chef, travel and insurance agent, doctor, lawyer, and financial planner—an AI agent that anticipates our needs and acts on them. We are most of the way to this particular future already. We just need a humanoid robot with the dexterity to be a chef, or even a thatcher, and we will have everything Crevier wrote about.

Utopia is blissful by definition, but from Thomas More to Aldous Huxley, serious writers have doubted that it could exist in the real world. Huxley’s Brave New World looks superficially utopian: It has a peaceful civilization running smoothly along on the rails of advanced biotech and psychotech, peaceable politics, and rational population economics. The cost is the loss of real choice, of free will, but the people don’t know they have lost it. Ignorance is bliss, in this Utopia. When he wrote Brave New World in 1932, Huxley likely thought it would take centuries before the technologies to enable it became real. In fact, we got most of them in less than one. 

Yet few if any philosophers now believe that the technological ingredients of Brave New World suffice to build a Utopia. Technology always dazzles people at first. It becomes a secular god for a while. In Looking Backward (the third best-selling novel of the 19th century in the United States, according to Wikipedia), Edward Bellamy thought having “perfect music in every home” was utopian, and he described a citywide telephone network that piped live concerts into private “music rooms.” (He was predicting Apple Music and Spotify, in effect.)

But to others, technology can be dystopian. H. G. Wells takes us 800,000 years into the future in his Time Machine to find the earth populated by just two races: The young and beautiful Eloi, who live in a garden of Eden, and a subterranean race of dwarfish monsters called Morlocks who emerge from caves at dead of night to tend the garden and put out fresh food for the Eloi. The fly in this ointment is that to the Morlocks, the Eloi are just sheep to be shepherded and culled, from time to time, for food. But until they are herded and taken to the underground vats to be boiled and eaten, the Eloi lead a utopian existence. Apparently. 

James Hilton painted an opposite dream in Lost Horizon. It tells of a hidden Tibetan valley called Shangri-La, a place of true harmony, ageless calm, and wise men—the utopia we perhaps most imagine. Robert Heinlein’s Stranger in a Strange Land similarly imagines a privileged human circle with quasi-mystical powers learned from Martians, powers such as teleportation and telepathy. Yet Hilton and Heinlein wrote in times of hot and cold wars, respectively. Shangri-La is the dream (sandwiched between two world wars) of peace and refuge—of a civilization slowed down and sequestered so it can survive the storm. Heinlein’s utopia is more like a hippie commune, where the rule of law, the mores of sex, and the concept of spirit are radically different. In short, these utopias are products of their times: one replaces chaos with conservation, the other replaces conformity with radicalism.

Whether or not people believe in God, they still answer to what you might call their “better self”: the part that reaches for Truth, Beauty, Humanity, Love, Honor, and Justice. If we accept Huxley’s Brave New World at face value, it is easy to imagine society freezing into a rigid caste system for ages, perhaps until mutation and decay leave just Eloi and Morlocks standing. That is the cautionary end of the spectrum.

The hopeful end is clear-eyed and pragmatic about such drift. A pragmatic Utopia might still have lawyers and laborers, but it would be one in which freedom of choice is preserved while competence and care are amplified; in which de-skilling frees us to seek meaning, not merely leisure and entertainment; and in which Machina sapiens—the intelligent machine—is aligned with human values.

But what do we mean by “human values”? Is there just one set of human values, or are there 8 billion sets, one for you and another for you and another for you? If Lem’s Utopia is the closest to what awaits us, then there is no more society, just a vast collection of individuals who each live in their own little Utopia and occupy themselves with activities beyond our present comprehension.

In any case, whether we get a Utopia anything like that will depend less on what the next version of ChatGPT makes possible than on the choices we make while ChatGPT is still learning from us.

To me, the more enticing utopias are those that retain some sense of enlightenment and wonder and mystery. Shangri-La is perhaps the closest in that regard. It is a sacred, spiritual place. Brave New World is a profane, secular place. Maybe we will all end up as ascetics, like Simeon Stylites, contentedly watching the world go by, from our own isolated pillars.

What do you think? Are we headed for Utopia? Dystopia? In either case, what might be the effect on our spirituality—not just your own spirituality, but your neighbor’s? Will we credit God for a utopia? Will we blame God, or deny the existence of God, in a dystopia? Does Scripture offer any answers—or supply any questions we haven’t thought to ask?

I guess my fundamental question is: Are we ready, spiritually? Time, as I tried to get you to understand at the beginning, is short.

C-J: David, sometimes I wonder whether your approach is deliberately adversarial. I hear it in your tone and see it in what you choose to write. It feels like you’re peeking around the corner with a “just wait—then this will happen.” Come clean: why use that approach, and why this format?

David: I’m not focused on style so much as on exploring the issues that are being raised. That’s really it.

C-J: What, then, piques your interest? I’m deeply concerned. I think we need more than “pump the brakes,” because we’re not very good at seeing beyond ourselves. A few are visionaries, but many—like my brother—say, “I can’t control it, so I won’t think about it. I’ll just try to survive.” I sometimes hear you in that lane: “I’m interested in what you think,” yes, but you’re also leading us somewhere.

When we started years ago, I read your book because I was afraid and needed to understand. Now we’re living in it, and I’m trying to stay open. I just got a note from the Poe app about manipulating images—hand-drawn art, photos—making them look better or just playing. Most people can’t keep up unless they live in that lane: artist, scientist, philosopher, educator, mathematician, ecologist. People in a lane will protect or expand it.

I remember Three Mile Island—protests, teachers talking, real anxiety. We still drill for the day something goes sideways. I don’t want nuclear power revived simply because “they” need it, and then it trickles down to “my” need. Do we need to strengthen the grid? Absolutely. Address water scarcity as climate changes? Absolutely. Are humans the sole driver of climate change? I don’t think so; the planet has cycled many times. Can we destroy ourselves? Yes. And yet what you describe feels, in some form, inevitable. Whether it’s in our lifetimes is unknown, but your trajectory seems logically plausible.

If one big trigger hits with unintended consequences, will we have time to recover? I don’t know—but I don’t want to be standing next to it when it implodes. I also think humans have lost abilities—telepathy, for instance—and some claim we can train ourselves to perceive other dimensions. Personally, that’s not where I want to live. I believe God placed me here, now, with purpose; that’s my mission and ministry, whatever the future looks like. My task is to be centered in what God would have me do—not just react—unless God says, “Tag, you’re it.” Then it’s not a request.

Literature, art, oral storytelling—so much of it turns on the “what if” that speaks to each of us inwardly. This path is dangerous, but people like you help by peeling it back and calling us to responsibility—not as a game of “what if,” but with the seriousness of a vote.

Donald: For conversations and relationships to be safe, we need guardrails. This topic is so profound it’s hard to compartmentalize. If the lane is “spirituality and technology,” I can drive there. Will technology affect spirituality? Absolutely.

Even how we name things matters. I wish we didn’t call it “intelligence.” It’s artificial intelligence—AI—so we end up giving it a seat at the table it may not deserve. Then there’s AGI—artificial general intelligence—which is another step. As I said a couple of weeks ago, we’re a Saturday morning spiritual group; we need awareness of this technology, just as with earlier ones. I worked as a photographer and wasn’t always accepted as an artist because there was “too much technology.” We’ve been manipulating images since the darkroom days. Ansel Adams’ Zone System was a method to coax film to show what the eye perceived but the medium couldn’t capture.

Each week I jot a list of human words: intelligence, act, reason, trust, voice, soul, discernment, accountability, values. That’s the vocabulary of how God relates to me and I to God. In another column sits the technology that will influence who I am and how I live. My mother looks at my phone and has no idea how pictures appear there. I barely do myself—I just know what it enables. So we need both lanes: how AI will shape those human things and how society will change.

About utopia: if it means universal idleness, that’s not a goal. Any idle society falters. God commands six days of work and a seventh of rest—very human instructions. We keep saying machines will give us more time, but it seldom works out that way.

Bottom line: some pursue AI for profit; others for what it can do in medicine and society. People like you, Dave, are in that sandbox. I’m in another. Even allowing phones in church was a big leap from pew Bibles. Some still sing from hymnals rather than screens. Why do we guard those practices? What are we trying to preserve? Are we fooling ourselves? Just some thoughts.

Michael: In Genesis, after each day of creation, God says, “It is good.” On the sixth day, after creating humans, he says, “It is very good.” I take that to include everything humans are capable of. If we believe that, maybe we’re already living in a kind of utopia but don’t see it—perhaps because we’re focused on the wrong things. I don’t expect the utopia described by authors David cites, but I wonder whether we miss the one we inhabit.

Don: Two questions. First, is spirituality shaped by data, information, ideas? If yes, AI may have a role. Second, what is our source of truth, and how do we verify it? If spirituality isn’t fundamentally data-driven, the risk may be lower than we fear. Our trouble may be that we try to make religion data-driven, rather than resting it in what God knows and does.

Reinhard: Our relationship with God is distinct. Technology won’t change that. I can imagine a future of robots in offices, interacting with people, even making decisions in business, medicine, or law—perhaps even acting as lawyers. Driverless cars already exist; such capabilities will improve. But that doesn’t make society utopian.

Robots don’t have moral obligations. They may process vast information like super-calculators, but moral responsibility remains with humans. Ultimately, we’ll be accountable to God. AI can be beneficial—even for sharing God’s love—but no judgment falls upon machines. We humans are in charge and must keep our purpose—tilling the world and worshiping God in this life and the next—at the center.

Carolyn: If AI is continually “learning” our passions and habits, what happens when someone is born again? When a mind and life are transformed, does the AI representation of that person change?

David: It’s a wonderful question. In a quasi-utopia where needs are met and everyone seems content, would anyone feel the need to be born again?

Carolyn: My calling is to witness. But if people are steeped in AI—if it knows so much about them—what then does “born again” mean? We talk about good and evil; God has won, yet evil remains. I’m trying to picture how pastors and laypeople witness in an AI-saturated world. Is it a three-way push: God, evil, and AI profiling us? I’m trying to appreciate AI, but I want to know where God sits at the center of this supposed utopia.

C-J: Carolyn, AI isn’t a person; it’s an algorithm made by humans. Churches use it—David showed how—but it’s still a tool, not a soul. You can’t grant a machine what belongs to an organic being. Any “intimacy” with AI is predictive: empathy, kindness, language simulated because we asked for it. My relationship with God is unbounded and unprogrammed.

Humanity’s bigger danger is our ego. Across recorded and forgotten histories, we’ve undone ourselves. The remnant are those who see around corners and keep telling the story—through language and symbol—that something greater than us calls us to humility. When CRISPR arrived, I said, “We’re on a slope God never intended.”

Carolyn: I may be saying this clumsily. If AI has absorbed years of our signals, and then we are truly reborn—does it adapt? I’m not arguing theology here; I’m asking how AI fits when a person changes radically.

C-J: AI operates only within this dimension and its power supply. God is infinite. I don’t get a phone call from God with my to-do list—but nothing supersedes that relationship. That’s the point.

Donald: Don’s question highlights a language problem. “Learning” is a human word. A computer accumulates data we pour into it. We casually say it “learns,” but that blurs categories. Is religion data? In one sense the Bible is data—words, laws, songs—but it’s also narrative and meaning. Faith is people giving their hearts to the Lord; machines don’t do that. AI may enhance how we operate, but the subject remains human.

Our phones track our routes; they don’t know love is why I visit my mother. We must be careful about attributing “thinking” to machines. If we keep the human words for human realities, we’ll stay clearer.

David: For spirituality, I think data and information are ultimately irrelevant. Ascetics retreat to caves or pillars not just to avoid people, but to step outside the information stream and commune with God. That relationship is between them and God.

Would that change in an AI world where a robot delivers food and carries away waste? Anchorites today already rely on others for that. So why should spirituality change? I don’t think it does. God is God, human is human, and that relationship is timeless.

What may change is us. If we become symbiotic—Neuralink-style—and think at machine speed, our humanity may shift. And a shift in humanity could alter our way of relating to the divine.

Don: Our discussion is healthy. Troubling in places, reassuring in others—because our faith is built on nothing less than Jesus’ blood and righteousness.

* * * 
