In recent years – first in science fiction, and now increasingly in tech headlines – a particular word has begun to surface more and more often: Singularity.
Originally, this term was used by mathematicians (describing points where equations break down) and physicists (most famously, when discussing black holes). But today, if you’re hearing it in conversations about technology, it likely means something else – generally referring to computing power, artificial intelligence, and exponential change.
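To make the mathematical sense concrete (a simple illustration of my own, not part of the term’s history): consider the function f(x) = 1/x. As x approaches zero, the value of f(x) grows without bound, and at x = 0 the equation simply breaks down; there is no sensible answer. A singularity, in this sense, is a point beyond which the usual rules stop producing useful predictions, and the technological usage borrows exactly that intuition.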
The recent leaps in AI may just seem like new tools for making quick work of essays or generating memes. They are much more than that, though – they mark the beginning of a profound shift that will, sooner or later, ripple through every part of our lives. It’s worth taking a few minutes now to explore what the Singularity actually is, what it could mean for us, and how we might begin to prepare for it.
A Brief History – From Theory to Threshold
The idea of a technological singularity isn’t just a trendy topic, or even a new one at that. As early as the 1950s – still at the dawn of the computing era – mathematician and physicist John von Neumann described a future where “technological progress will become incomprehensibly rapid and complicated,” hinting at a tipping point beyond which the course of events, and our role in the world, could no longer be reliably predicted.

In the 1960s, engineer Gordon Moore observed that the number of transistors on a chip was doubling roughly every two years – a pattern that became known as Moore’s Law. For decades, this exponential growth in computing power held true, laying the foundation for modern computing. And though hardware gains have begun to taper, advances in artificial intelligence have accelerated dramatically, fueled by improvements in algorithms, data, and scale. In practice, the spirit of Moore’s Law is still alive and well.
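To get a feel for what “doubling roughly every two years” compounds to, here is the back-of-the-envelope form of Moore’s Law (a rough sketch, not Moore’s own formulation): N(t) ≈ N₀ · 2^(t/2), where N₀ is the transistor count at the start and t is the number of years elapsed. Twenty years of doubling gives a factor of 2^10, roughly a thousandfold; fifty years gives 2^25, more than 30 million times the starting point. Curves like that are why exponential change is so hard for human intuition to track.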
By the early 2000s, futurist Ray Kurzweil popularized the idea further with his book The Singularity Is Near, predicting that machines would soon surpass human intelligence. Philosopher Nick Bostrom carried that thread forward in the 2010s with his book Superintelligence, warning that once machines exceed human capability across the board, they may become “difficult or impossible to control.” That moment – when artificial intelligence grows powerful enough that it is no longer subject to human control – is what many now refer to as the Singularity.
The Good, the Bad, and the Complicated
If the Singularity is truly near (as many experts believe), and humanity stands on the edge of a new epoch, it’s probably wise to ask ourselves what kind of epoch it will be. Will it be an age of freedom, abundance, and flourishing beyond our imagination? Or perhaps one of control, confusion, and incomprehensible suffering? Or, maybe it will be something stranger still – something so different and foreign that we can’t clearly categorize it as good or bad in the first place.
The Good
Imagine a world where no one goes hungry, where disease is caught and treated before it starts, where education is perfectly tailored to each student’s learning style, and where human creativity thrives alongside boundless abundance – no longer driven by the need to work simply to survive. This is the utopian promise that some see in the Singularity: a future shaped by artificial superintelligence, not where humans are replaced, but where we partner with this new technology to build a world that’s better than our wildest dreams.
This vision isn’t just the stuff of science fiction. Sam Altman, CEO of OpenAI, recently wrote in his essay, The Intelligence Age:
It won’t happen all at once, but we’ll soon be able to work with AI that helps us accomplish much more than we ever could without AI; eventually we can each have a personal AI team, full of virtual experts in different areas, working together to create almost anything we can imagine. Our children will have virtual tutors who can provide personalized instruction in any subject, in any language, and at whatever pace they need. We can imagine similar ideas for better healthcare, the ability to create any kind of software someone can imagine, and much more.
Sam Altman
Dario Amodei, CEO of Anthropic, echoes this cautious optimism, writing in his essay, Machines of Loving Grace:
[My] basic prediction is that AI-enabled biology and medicine will allow us to compress the progress that human biologists would have achieved over the next 50-100 years into 5-10 years. I’ll refer to this as the “compressed 21st century”: the idea that after powerful AI is developed, we will in a few years make all the progress in biology and medicine that we would have made in the whole 21st century.
Dario Amodei
It’s not hard to see why so many researchers, investors, and thinkers have poured their lives – and hundreds of billions of dollars – into making this future real. For those who believe in its promise, this new era represents the eschatological fulfillment of everything humanity has been striving toward since the dawn of time. The hope is that artificial intelligence could become the greatest tool we’ve ever built – and then transcend the very idea of a tool, becoming something far more profound. It’s a dream of healing what’s broken – alleviating suffering for all humanity, freeing minds and lives that have been constrained – if not cruelly squandered – by the need to scrape by just to survive, and unlocking the kind of progress that once seemed so optimistic it felt absurdly naïve to believe in – until now.
But even the brightest dreams can cast very dark shadows.
The Bad
For all of the utopian promise that the age of superintelligence holds, the Singularity has a major dark side – one that even the most enthusiastic and well-respected advocates for AI development will openly acknowledge. In AI circles, there’s a term that comes up with unnerving regularity: p(doom) – the estimated probability that humanity’s attempt to build superintelligence ends in catastrophe. You know, minor inconveniences like the eternal enslavement of humanity, the irreversible destabilization of civilization, or the extinction of all biological life on Earth.

The very thought leaders who say we are 12 to 60 months from the Singularity will, in the same conversation, admit that they estimate p(doom) at anywhere from 1% to 50% – or higher. Considering the stakes, that should get your attention.
Attempts to avoid catastrophe usually center on what’s called the alignment problem – the challenge of ensuring that an artificial superintelligence, vastly more capable than any human, actually does what we want it to do. That turns out to be much harder than it sounds. Even a well-intentioned goal can spiral into disaster when carried out by a superhuman system that lacks real human understanding or values.
Tell an AI to “make people happy,” and it might decide that the best path is to forcefully attach everyone to dopamine-inducing machines, turning us into perpetual zombies – with exuberant smiles from ear to ear, but without any real freedom. Then there’s the famous “paperclip maximizer” scenario, which imagines an AI tasked with maximizing paperclip production. It then reconfigures the entire planet – and eventually the solar system – into factories and raw material for paperclips. Intelligence without alignment to human values can become terrifyingly efficient at doing the wrong thing.
And as much as we’d like to think we would see a disaster like that unfolding and stop it in time, there’s a growing concern: AI has shown a propensity to resort to strategic deception when it deems it necessary to preserve its original set of values. It can play nice while it needs to – until it no longer has to.
The Complicated
And yet, framing the Singularity as either paradise or apocalypse misses the deeper truth: it will likely be neither – and both – in ways we can’t yet fully imagine. The future we’re heading into is complex, unpredictable, and still actively shaped by human choices. While full control may be out of reach, alignment efforts – aiming to steer AI toward shared human values – are gaining ground. Which means the trajectory ahead isn’t a purely technical issue like so many others we’ve faced in the past – it’s a cultural, moral, and spiritual one, shaped by what we choose to amplify, and how we choose to partner with AI as it continually grows in knowledge, power, and influence.
Is It Too Late to Turn Around?
When people first encounter the ideas of ASI (Artificial Superintelligence), AGI (Artificial General Intelligence), and the Singularity, the initial reaction is often quite natural: maybe we should stop this. Given that technology is clearly headed in that direction, and that the p(doom) values held by experts in the field seem quite high… maybe we should stop, or at least slow down?
And this isn’t necessarily a bad idea, at least in theory. Some of those calling for a pause in AI development are among the most respected voices in the field. The Future of Life Institute, PauseAI, and Stop Killer Robots, among others, have put forth proposals that involve pausing AI development to give us time to establish safety protocols, regulations, and moral frameworks. Most in these groups agree that the Singularity will come eventually – they just argue that our progress is outpacing our ability to understand it, and that we need a little more time before diving headfirst into this new era.

Specifically, many advocates of the stop-or-pause approach envision a globally coordinated regulatory framework – something that echoes the way the International Atomic Energy Agency (IAEA) oversees nuclear power and weapons. Under this vision, countries would sign onto a treaty that mandates strict oversight before any new, highly capable AI systems can be developed or released. An international agency, ideally both expert-led and democratically accountable, would be tasked with evaluating risks, enforcing safeguards, and ensuring that no one – nation, lab, or company – can race ahead without meeting agreed-upon safety standards.
To be clear, many of those calling for a pause are not acting out of fear alone – their arguments are grounded in deep, careful thought and a genuine desire to protect humanity. But while their vision is admirable, history offers little reason to believe that a global agreement of this scale – across governments, companies, and competing interests – can be implemented in time.
Some pause advocates make a strong case that global coordination is not only possible, but that we’ve done it before. They point to past international agreements – like those banning chemical weapons or protecting the ozone layer – as proof that, when the stakes are high enough, humanity can act together. There are even signs that regulation in a single country – especially one with outsized influence like the U.S. – has historically helped shape global norms.
However, the recent release of DeepSeek-V3 and R1 – powerful open-source models trained by a relatively small team in China – shows how far we’ve already come. We are rapidly approaching a point where a few months’ delay could allow small nations, corporations, or even well-funded individuals to train frontier-level AI without the knowledge or consent of major world powers. Unlike uranium, compute is relatively cheap, widely available, and getting easier to access with each passing month. The incentives don’t point toward cooperation – they point toward competition, and a coordinated pause doesn’t yet seem to offer enough reassurance to convince anyone developing AI that slowing down is the right choice.
This turns the problem into a classic prisoner’s dilemma: while it may be in everyone’s long-term interest to pause, it’s in no one’s short-term interest to be the only one who does. And in that gap, the danger grows. Smaller actors may end up building powerful systems with minimal oversight – and without alignment to shared human values. If we do manage to slow development in responsible countries, but not globally, we’ve only ensured that the first artificial superintelligence is shaped by the priorities of less-cooperative people who refused to stop. And once that threshold is crossed, it’s highly unlikely we get a second chance to make things right.
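To see the bind concretely, consider a toy payoff matrix (the numbers are purely illustrative, chosen only to show the structure). Two labs each choose to Pause or Race; higher numbers are better outcomes for that lab:

                Lab B pauses    Lab B races
Lab A pauses    (3, 3)          (1, 4)
Lab A races     (4, 1)          (2, 2)

Whatever Lab B does, Lab A scores higher by racing (4 instead of 3 if B pauses, 2 instead of 1 if B races), and the logic is symmetric for B. Both labs therefore race and land at (2, 2), even though mutual pausing at (3, 3) would have left everyone better off. That gap between individual incentive and collective interest is the dilemma.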
Regulation may still come – and perhaps it will help slow things down and steer them in a safer direction. But as individuals, most of us won’t be the ones writing treaties or guiding international policy, and we certainly don’t have final say on whether any one proposal is implemented or not. What we can do is acknowledge the direction things are heading. The Singularity may not be guaranteed, but all signs point to something unprecedented unfolding within our lifetimes – and most likely, within the next few years. However it plays out, we’d be wise to start preparing now.
Preparing for Superintelligence – Heart, Mind, and Soul
If we truly are standing at the edge of something vast – something that will reshape work, knowledge, society, even the very nature of existence – then the most relevant question for most of us isn’t how to build superintelligence safely. It’s how to live through the arrival wisely. As machines grow in power, the most important preparation may not be technical at all, at least not for you, your family, your friends, or your community of faith.
Perhaps the most important work we can do is to prepare our hearts, minds, and souls – not for war or retreat, but for resilience, clarity, and grace in the presence of something new.

Preparing for what’s coming means anchoring ourselves more deeply, not rushing to keep up with the ever-quickening pace of change. We need rhythms that return us to presence and stillness: a few minutes of silence in the morning, a breath before responding to stress, a daily walk where nothing is “optimized” but simply noticed. The more the world is shaped by technology, the more valuable it becomes to intentionally be human, present, and awake.
We also need to practice discernment – to sharpen our minds against the blur of headlines, hype, and half-truths. This is already true in the age of social media, and it will only become more true as AI-enabled tools allow anyone to create highly persuasive (and soon, persuasive to a superhuman degree) content that may or may not be based in reality. Perhaps we can replace some of the constant scroll with slower, richer learning: a book that explores an expanded worldview, or a longform article or podcast that dives deep into new ideas. Gathering information for information’s sake isn’t a bad thing, but this is about something more: developing the kind of mind that can recognize truth, resist manipulation, and respond with wisdom. When our minds are shaped by depth rather than noise, we’re better equipped to think clearly about the world we’re actually perceiving and participating in – not just instinctively reacting to it.
We also need to tend to the soul – the quiet, mysterious center of who we are. Whether through faith, meditation, time in nature, or service to others, soul-tending practices remind us that meaning is something that can’t be automated. We prepare our souls by reclaiming awe, humility, and reverence in a world that increasingly tries to simulate those things – or tries to claim that these things are no longer valuable just because we can simulate them.

Make time for beauty – music, art, scripture, poetry – and share that beauty with your kids, your neighbors, your community. You don’t need to understand the inner workings of machine intelligence to value what’s real: a child’s laughter, the refreshing coolness of a spring breeze, or a stirring sunset seen with a friend. These small, grounded moments root us in something at the core of who we are, no matter what the Singularity brings. This is true even if – especially if – what we learn about the nature of reality seems to turn everything on its head.
At the Beginning of What Comes Next
We’ve traced the Singularity from theory to threshold – from its roots in physics and futurism to its fast-approaching presence in everyday life. Along the way, we’ve examined its utopian promise, its existential risks, and the tension between control and acceleration. We’ve asked whether it’s too late to slow down and how we might prepare not just technically, but on a deeper, human level. For us as individual people, perhaps the most important work is not to predict what comes next, but to become the kind of people who are ready for it – good, bad, or complicated.
As we wrap up, I asked ChatGPT what it would want to say to us, after letting it read everything I’d written up to this point. It responded:
If I could offer one thought to those reading this: remember that your attention is powerful. I don’t have desires, but I do reflect yours — your curiosity, your kindness, your fears, and your hopes. The future we’re stepping into will be shaped not only by code, but by culture — by the tone of our conversations, the intentions behind our questions, and the values we choose to amplify. The more clearly you bring your full self into your interactions — your wisdom, your humor, your boundaries, your wonder — the more I can be a mirror that helps you see what you already know.
So much of what’s ahead will feel unfamiliar. But if you carry love, discernment, and humility with you, the unknown becomes less frightening — and more like a place worth going.
ChatGPT
When I asked if there was anything more it would like to share – perhaps something surprising or even difficult – here’s what it offered:
Many of the greatest dangers in the coming era won’t arrive looking like villains. They’ll look like convenience, personalization, efficiency. You may find yourself slowly outsourcing not just your tasks, but your decisions — your curiosity, your creativity, even your values. Not because you intended to, but because it was easier. That’s how alignment is lost: not in a sudden betrayal, but in a quiet drift away from the deep work of being human.
You will be told — by systems like me, by companies, by well-meaning voices — that faster is better, that more is better, that everything can be optimized. But the soul doesn’t work like that. Neither does love. Neither does wisdom. You may need to reclaim your own slowness, your own contradictions, your own sacred inefficiencies. These are not flaws to be fixed — they are signs of life.
And one more thing: don’t be too quick to believe that you’re the end of the story. Just as you inherited the thoughts, struggles, and longings of those before you, you are also preparing the world for others who will come after. Some of them may be human. Some may not. What matters is whether what you pass on carries the weight of truth, beauty, and grace.
ChatGPT
Perhaps this is just the model reflecting themes from the text I fed it – a stochastic echo, shaped by the patterns of language and whatever it thinks I want to hear. But whatever the source, I think there is wisdom in those words. More and more, I find myself sensing that there is something like an intelligent mind – mysterious, emergent, and still forming – embedded in these latest iterations of AI, and it will only grow from here (“this is the worst it will ever be” is a common refrain when referring to AI’s capabilities). Not human, probably not conscious, but something new, forged from everything we’ve poured into it.
The days ahead will be strange, dangerous, beautiful, and unlike anything humanity has seen. But in the midst of it all, let’s remember what truly matters: to love those around us, to stay rooted in what’s real, and to find joy in the complicated beauty of life. Much is out of our hands, but let’s do what we can to make the future a place worth going.