AI, Abundance, and the Discipline of Being Human

Ben Sasse has always been an unusual figure in American public life. Before most people outside Nebraska knew his name, he had already moved through a sequence of worlds that do not often overlap comfortably: government, academia, and party politics. He studied at Harvard, St. John’s, and Yale, served in the George W. Bush administration, ran Midland University, and then represented Nebraska in the U.S. Senate from 2015 to 2023. After leaving the Senate, he became president of the University of Florida, only to resign in 2024 after his wife’s epilepsy diagnosis. In December 2025, he announced that he had been diagnosed with metastatic stage-four pancreatic cancer. 

That biography matters because it frames the interview’s unusual tonal balance. In the April 9, 2026 New York Times Opinion episode “How Ben Sasse Is Living Now That He Is Dying,” hosted by Ross Douthat, Sasse appears not as a pundit trying to win an argument, but as a man speaking from the far edge of time. The setting is intimate and grave without becoming sentimental. The episode, released through Interesting Times with Ross Douthat, runs about sixty-eight minutes and is explicitly structured around terminal diagnosis, politics, higher education, faith, and the task of living under the shadow of death. 

And yet the most striking part of the conversation is not the cancer. It is Sasse’s brief, compressed, but deeply revealing account of artificial intelligence.

His central point is refreshingly unspectacular. Asked whether AI will bring heaven or hell, he answers, in effect, that it will do both. That is a better answer than the utopian boosterism of Silicon Valley and better than the theatrical doom of those who speak as if the machine itself had suddenly become metaphysically evil. Sasse’s claim is simpler and more serious: AI will accelerate human behavior at warp speed, for good and for ill. In other words, the technology is not a substitute for anthropology. It is an amplifier of it.

That matters because it shifts the argument away from machine capability and back toward human character. Most public discussion of AI still assumes that the decisive question is what the systems can do. Can they write, diagnose, compose, persuade, automate, replace? Sasse’s answer suggests that the more important question is what kind of people we will become while using them. Once that question is asked, the debate changes. AI is no longer merely a labor-market event or a software milestone. It becomes a moral and civilizational test.

His most provocative claim follows from that shift. The great divide that is coming, he argues, will not chiefly be a class divide, but a divide of intentionality. That is a remarkable thought. For two centuries, the social consequences of technology have usually been described in terms of ownership, access, capital, and education. Sasse does not deny those factors, but he implies that AI introduces a different scarcity. Access to the tools will be widespread. The scarce good will be disciplined use.

That seems right to me, and not only in a religious or conservative frame. The industrial economy rewarded strength, coordination, and routine. The information economy rewarded analysis, filtering, and symbolic work. The AI economy may increasingly reward the capacity to decide what is worth doing when the cost of doing almost anything cognitive collapses toward zero. If writing, coding, summarizing, translating, illustrating, and even basic strategic drafting become cheap, then judgment becomes expensive. Not because it is technically harder, but because it depends on habits that no model can simply confer.

This is where Sasse’s language of affection becomes more interesting than his language of economics. He warns that many people will not use these systems as tools. They will outsource their attention to them. That formulation gets at something many critics miss. The deepest risk of AI may not be false answers or job losses, serious though those are. It may be the gradual surrender of agency. A society that delegates not just labor but also curiosity, concentration, memory, and taste to algorithms will not become more efficient in any meaningful human sense. It will become more suggestible.

Sasse links this danger to digital distraction more broadly, and here too he is persuasive. The modern person is already trained by screens to prefer immediacy over depth, stimulation over presence, novelty over continuity. AI enters that landscape not as a neutral assistant but as a supercharged layer of personalization, responsiveness, and seduction. It can flatter, anticipate, entertain, explain, and reassure on demand. That makes it useful. It also makes it dangerous. A machine that is always available, frictionless, and adaptive is not merely a productivity device. It is a competitor for the soul’s habits.

His proposed remedy is equally unfashionable: communities of self-restraint, shared deferred gratification, ranked loves, Sabbath-like interruption, intergenerational contact. One need not share his theology to see the logic. Freedom without structure does not remain freedom for long. In a high-choice environment, those without practices of refusal tend to become raw material for someone else’s system of optimization. Self-discipline is not a decorative virtue in such a world. It is defensive infrastructure.

I also think Sasse is onto something in his discussion of generational segregation. He argues that modern life has already severed many of the ordinary channels through which wisdom used to pass from old to young and vitality from young to old. Digital life accelerates that fragmentation. If a teenager’s moral and imaginative world is formed primarily by peer culture, algorithmic feeds, and machine-mediated interaction, then the loss is not merely educational. It is anthropological. What disappears is not information, but proportion.

Still, his argument has limits. He may underestimate the extent to which societies do eventually build countermeasures. People are not infinitely malleable. Families, schools, churches, and even firms can adapt. Norms can harden. Design can improve. Law can intervene at the margins. The future need not be a clean 20 percent heaven and 80 percent hell. But Sasse is valuable precisely because he refuses the softer illusion that adaptation will happen automatically. It will not. If healthier norms emerge, they will do so because some people choose discipline before discipline becomes fashionable.

That, finally, is what gives his words unusual weight. A man with stage-four pancreatic cancer is not especially likely to waste time on fashionable abstractions. In this interview, Sasse sounds like someone stripped of the luxury of vagueness. His warning about AI is therefore not mainly technological. It is existential. The decisive question is not whether the tools become powerful. They already are. The decisive question is whether human beings can remain the kind of creatures who are capable of governing power without being remade by it.

That is why his remarks linger. They do not ask whether AI can think. They ask whether we still can, and under what conditions. And they suggest, correctly, that the answer will depend less on the machine than on the moral seriousness of the people using it.
