The Religion of Progress
- Nitesh Daryanani
- Apr 2
- 12 min read
In an age saturated with data, innovation, and technological acceleration, it’s easy to believe that science has supplanted religion. We treat them as opposing forces: one based on reason, the other on faith. But this framing is both historically inaccurate and philosophically limiting. The real story is more complicated—and more revealing. To understand where we are, we need to understand how science and religion were once intertwined, how science has taken on religious dimensions of its own, and why ethics—the shared terrain between them—is the discipline we urgently need to resurrect.

I. The False Divide
The modern “conflict” between science and religion is a relatively recent construction. For most of history, science and religion operated in distinct but complementary spheres. Science explained how the world worked; religion asked what it meant. Astronomy told us how the stars moved; theology considered whether we had a place among them.
Even the ancient Greeks—whose legacy of reason we so often invoke—held the divine in profound esteem. For them, the pursuit of knowledge was not a rebellion against the gods but a reflection of them. Plato’s realm of forms, Aristotle’s unmoved mover, and the mythic logic of Homer and Hesiod suggest a worldview where intellect and the sacred were deeply entwined. Even later figures like Aquinas and Newton saw no contradiction between empirical inquiry and metaphysical meaning. Reason was not an escape from the divine, but a path toward it.
Many of the most important breakthroughs in science emerged from religious contexts. Galileo Galilei, a devout Catholic, famously wrote that the Bible “teaches how to go to heaven, not how the heavens go.” He believed that science and scripture need not be in conflict. When Scripture appeared to contradict scientific observation, he argued, it was because Scripture was written to be accessible to ordinary people—not to provide literal scientific explanations. Newton, who spent more time writing about theology than physics, believed that uncovering the laws of nature was a way of discerning God’s design. Even today, prominent scientists like Francis Collins, the former director of the National Institutes of Health and leader of the Human Genome Project, openly profess their Christian faith. “The God of the Bible is also the God of the genome,” Collins has written. “He can be worshipped in the cathedral or in the laboratory.”
The coexistence extends beyond scholarship into myth, story, and culture. J.R.R. Tolkien, a devout Catholic and close friend of C.S. Lewis, saw myth not as deception but as a way of accessing truths too deep for literal expression. The imaginative world of The Lord of the Rings is a vivid example—a fantasy that resonates because of its mythic structure, not in spite of it. Another is The Chronicles of Narnia by C.S. Lewis, where Christian themes of sacrifice, resurrection, and redemption are made accessible through the adventures of children in a parallel world. These narratives draw directly from Christian mythology, yet speak in a language that transcends religious doctrine, offering moral depth and emotional truth to a wide audience.
These stories remind us that meaning isn’t always found through precision or proof, but through the struggle against the unknown. In The Lord of the Rings, this struggle takes the form of a humble hobbit bearing a great burden—a journey of temptation, sacrifice, and ultimately, grace. Evil cannot be overcome by power alone, but through mercy, courage, and fellowship. Similarly, in The Chronicles of Narnia, Aslan’s self-sacrifice and resurrection reflect themes of love, justice, and divine mystery. Both works teach that meaning is discovered not by mastering the world, but by participating in it with humility and faith. That theme is central to both faith and science, where trust, experimentation, and perseverance shape our understanding even in the absence of certainty. Religion, in this sense, doesn’t compete with science; it offers a different language—one that speaks to human frailty, courage, and hope.

So why the perceived conflict? The idea that science and religion are locked in existential opposition is a modern myth—one I cannot fully untangle here, but intend to explore more deeply in future posts. It was not born from logical necessity or philosophical rigor, but from the cultural politics of the twentieth century, particularly in the wake of world wars, the Cold War, and the rise of mass media. Simplified narratives became tools of stability, even control. As Adam Curtis explores in HyperNormalisation, institutions perpetuated myths not because they were true, but because they were useful: they made a chaotic world seem manageable.
The science-religion binary emerged in part to draw ideological boundaries: modernity against tradition, reason against superstition, the West against its imagined others. But this framing has more to do with tribal allegiance than intellectual integrity. Religious institutions have, at times, resisted scientific insight out of fear of losing authority. Conversely, scientific institutions have often dismissed spiritual perspectives as relics of ignorance. These clashes are less philosophical disputes than battles over power, identity, and legitimacy. And the tragedy is that they obscure the deeper, more generative tension between two enduring ways of seeking truth.
John Dewey recognized this danger when he wrote that “science and religion are not competitors for the same prize but rather, if rightly interpreted, are complementary factors in a complete human experience.” But when education loses sight of lived experience and becomes dogmatic, it begins to resemble religion in the worst sense—“a set of beliefs to be accepted, rather than methods of inquiry to be tested and revised.” Our schooling begins to treat knowledge as revelation, handed down from authority rather than discovered in dialogue.
II. Science as Religion
While science may not have destroyed religion, it has in many ways replaced it—not as a competing set of facts, but as a new structure of belief. The irony is profound: what began as a method of inquiry designed to free us from dogma has itself hardened into a kind of secular faith.
At its best, the scientific method embodies intellectual humility. It demands evidence, invites revision, and embraces uncertainty. It recognizes the provisional nature of all knowledge and celebrates the process of discovery over the comfort of certainty. This approach echoes Dewey’s vision of engaging with the world through experimentation: an open-ended process rooted in lived experience rather than received wisdom.
Yet today, this spirit of inquiry has been eclipsed by a different posture—one that asks for allegiance rather than engagement. Increasingly, the phrase “follow the science” functions less as an invitation to think than as a declaration of truth. It offers not a method, but a message; not skepticism, but submission. Science, once a practice of questioning, has for many become a source of answers too sacred to challenge.
This shift mirrors the rise of a new secular orthodoxy. Yuval Noah Harari has written about the quasi-religious character of techno-humanism, where the goal is no longer understanding but transcendence—immortality through data, design, and control. Into this cultural atmosphere step figures like Neil deGrasse Tyson, who declared, “The good thing about science is that it’s true whether or not you believe in it.” It’s a line that sounds persuasive until you hear its undertone: believe, or be left behind. The humility central to scientific practice is replaced by certainty, and doubt is recast as heresy.
Richard Dawkins exemplifies this posture. His crusading disdain for religion and his elevation of science to moral superiority turn inquiry into ideology. With books like The God Delusion, he invites not understanding but conversion. This kind of science evangelism does not just marginalize belief—it mimics the dogma it claims to oppose. Dawkins has compared religious belief to a virus of the mind and once declared that raising children in a religious tradition is a form of child abuse. These aren’t simply arguments—they are provocations, often framed to humiliate rather than persuade. His tone turns what could be a thoughtful critique into a campaign of scorn, alienating those who might otherwise be open to dialogue.
In this context, “follow the science” has become a cultural catechism—especially on the political left. It serves as a counterweight to the religious identity of the political right, offering its own version of moral clarity. We see this in contemporary debates where scientific authority is invoked as the final word: climate change discussions where complex policy questions are reduced to “believing in science,” COVID-19 policies that shifted dramatically while maintaining the same appeal to scientific certainty, or AI safety conversations where technical experts are treated as modern-day oracles. Even phrases like “Trust the experts” and “I believe in science” appearing on yard signs and bumper stickers reveal how scientific authority has been transformed from a method into a tribal marker. But this clarity often rests on a presumption of neutrality, not on dialogue, accountability, or reflection.
And neutrality is fundamentally not the same as moral depth. Neutrality claims to stand above the fray, presenting facts without values, data without interpretation. But this supposed objectivity often masks profound moral assumptions. When we surrender ethical judgment to “what the science says,” we forget that science itself emerges from human institutions with their own biases, blind spots, and incentive structures. The invocation of “the science” can obscure the ethical terrain beneath our knowledge systems: the political economy of research funding that determines which questions get asked in the first place; the alliance between scientific authority and corporate interests that shapes which findings receive attention; the methodological choices that privilege quantifiable outcomes over lived experience; and the systemic preferences for efficiency over justice, control over care. Science can tell us what is possible, but it cannot—on its own—tell us what is right.
When science loses its humility and ethics is sidelined, what emerges is not just bad science, but something resembling religious fundamentalism. It is science mythologized—complete with rituals, prophets, and eschatologies. We no longer merely observe and test; we construct temples of certainty: dashboards, models, algorithms. We seek salvation in predictive analytics and worship at the altar of innovation. The rhetoric shifts from inquiry to prophecy, from “this is what we know, with these limitations” to “this is what will save (or fix) us.”

This transformation is perhaps most visible in our tech visionaries and the ecosystems that form around them. Consider Elon Musk. He doesn’t just sell technology—he delivers prophecy. Colonizing Mars, merging minds with machines, erasing death itself. These aren’t mere visions of progress; they are narratives of redemption. His followers don’t merely admire his engineering acumen; they subscribe to his vision of human destiny. And these narratives reveal the depth of our collective faith in the myth of progress: we live in a culture that earnestly believes we will colonize Mars in our lifetime, while some still debate whether we landed on the moon more than fifty years ago. It is not evidence that sustains this belief but a salvific narrative: a techno-utopian gospel that promises escape, transcendence, and immortality through machinery.
This fusion of technological prediction, capital, and quasi-religious fervor constitutes what I call the religion of progress. It promises stories of salvation, belonging, and destiny—backed not just by rhetoric, but by the engines of capital. Finance has become the reward system for this belief structure: the mechanism that converts scientific prophecy into personal hope. Venture capitalists, tech stocks, and retirement funds all channel our resources toward these promised futures. We spend our entire present dreaming of, working for, and putting away money for a future that is sold to us as certain. A future where we will colonize Mars, upload our consciousness, and live forever.
These myths, backed by scientism and bankrolled by finance, seduce us not because they are likely, but because they allow us to avoid the uncomfortable realities waiting to be resolved in the here and now. Every passing day of inaction only deepens those unresolved tensions—environmental, social, existential—until they harden into crises. We have built an entire economy on the refusal to reckon with the present. And that is the danger: not that science is wrong, but that we use it to escape the responsibility of living well now.
This is the core contradiction of our religion of progress. We claim the mantle of rationality while indulging in fantasies of technological salvation. We invoke neutrality while embedding our deepest values in algorithms and research priorities. We preach innovation while reinforcing systems that resist fundamental change. And in surrendering our ethical judgment to “what the science says,” we abdicate precisely the moral responsibility that makes us human.
Science gives us tools, but not meaning. It can describe our world with precision, but cannot tell us how to live in it with care. The scientific method, divorced from ethical inquiry, becomes a technique without a telos—a how without a why. And like faith without ethics or science without humility, it drifts into a form of certainty unmoored from conscience.
We still fear. We still hope. We still long. These fundamentally human experiences remain unchanged by our technological sophistication. And while our tools become more advanced, our questions grow only more difficult. How do we make sense of these feelings? What constitutes a good life? How do we balance individual desire against collective need? What responsibilities do we bear toward future generations? These are not technical problems but ethical ones—not puzzles to solve but dilemmas to navigate. What we need now is not more innovation, but a more honest reckoning with what it means to be human in a world we have both the power to transform and the wisdom to preserve.
III. Ethics: The Forgotten Discipline
This is where ethics comes in. Ethics once served as the connective tissue between facts and values, between the known and the felt. It helped us translate knowledge into responsibility, and uncertainty into action. It asked: Given what we know, how should we live? Given what we can’t know, how do we live anyway?
Today, ethics exists in several forms—each important but also incomplete. Academic ethics, particularly within philosophy departments, offers rigorous thought on moral reasoning and justice. But this work often remains abstract, inaccessible, and disconnected from the real-world decisions shaping technology, law, or governance. As Dewey might argue, it risks becoming a finished doctrine rather than an evolving practice grounded in lived experience.
In the corporate world, ethics has entered the conversation—especially in sectors like artificial intelligence. Companies like Google, IBM, and others have published AI principles emphasizing fairness, accountability, and transparency. Government bodies and international organizations like UNESCO have developed sweeping ethical frameworks. Yet these initiatives often serve more as reputational insurance than as serious constraints. A 2023 Stanford policy brief noted that while many companies release AI ethics statements, few demonstrate operational changes aligned with those values.
Ethics, in this context, is often treated as a matter of compliance, branding, or legal risk. It is reactive, not generative. It is designed to preempt criticism, not to deepen understanding. And in many cases, it is outsourced—entrusted to committees or guidelines, rather than embedded into the moral imagination of institutions. Too often, modern ethical debate centers on tidy thought experiments like the trolley problem—where the stakes are real, but the thinker is removed. Or it appears in viral controversies over cancel culture, tech regulation, or whether billionaires should exist, where moral questions are flattened into hot takes or partisan talking points. We deliberate in abstraction, weighing lives like variables, or we shout from ideological trenches without a shared language for judgment.

But real ethics is not about performative gestures, risk management, or hypothetical omniscience—ethics is responsibility. We are not spectators of dilemmas; we are operators of the switch. Ethics is not a matter of maximizing outcomes from a distance, but of acting with care under conditions of uncertainty, with imperfect information and real consequences. It is a philosophy of participation, not detachment. And if science has taken on the mantle of religion, then ethics must reclaim its role as the moral imagination that keeps both honest. Ethics reminds us that knowledge is never neutral, that progress is not always humane, and that how we use our tools matters as much as the tools themselves.
Recently, a friend asked me to serve as an ethics advisor for his legal tech company. That invitation gave this reflection sharper focus. It reminded me that ethics is not abstract or hypothetical—it’s a necessity. It is the discipline that keeps both science and faith accountable. It demands that we ask, again and again, not just what we can do, but what we should do.
Ethics is what guards against certainty. It restores moral imagination to systems that might otherwise operate on inertia or impulse. It is the voice that says: not all efficiency is humane, not all progress is just, and not all knowledge is neutral.
This is what Dietrich Bonhoeffer saw when he spoke of a “world come of age,” where humanity could no longer rely on divine intervention to resolve its crises. Faith, for him, meant responsibility. Action. A readiness to bear the weight of moral judgment without the comfort of absolute answers. Albert Camus, too, refused to look away. In his speech La Crise de l’Homme (“The Human Crisis”), he warned against the silence that follows sanitized violence and bureaucratic abstraction. He called us back to dignity—to clarity, even when clarity is painful.
What gives me hope is that I see people trying. Not just to follow rules, but to engage with the moral complexity of the world. This is what ethical practice looks like: not abstract deliberation or virtue signaling, but the steady, imperfect work of living responsibly among others. It means listening before judging, holding convictions with humility, and making decisions that acknowledge both consequence and conscience.
Ethical courage may be rooted in spiritual traditions, philosophical convictions, ancestral memory, or simply the resolve to care for future generations. The goal isn’t to impose a single moral system, but to create space for honest reckoning. People arrive at integrity through many paths—religious, secular, or otherwise—and ethical communities must be spacious enough to reflect that diversity. A platform for ethical practice should support people in living into their full selves, with all their doubts, convictions, and sacred questions intact. The invitation from my friend—and our project at regarder—are part of that hope: a chance to help build systems of judgment that are careful, imaginative, and humane.
The future doesn’t need more control. It needs more conscience. Not more certainty—but more courage. That, I believe, is a kind of faith worth recovering.