When most people talk about “planning ahead,” they mean next summer’s trip, next year’s promotion, or maybe a five-year savings plan. Governments and corporations stretch that timeline—ten years, twenty if they’re ambitious. But in certain circles, even a century is considered short-term thinking.
In recent years, a growing movement of philosophers, scientists, and strategists has been advocating for something much more audacious: making decisions today that take into account the welfare of people who may live hundreds, thousands, or even millions of years from now.
This is longtermism, and it’s as provocative as it sounds.
It’s not a slogan dreamed up by a marketing team—it’s a philosophical stance with deep roots in moral theory and an increasingly visible impact on real-world policy, research priorities, and philanthropic funding. It asks us to zoom out until today’s headlines shrink to a blip on the timeline of human civilization.
Some find this inspiring. Others think it’s dangerously speculative. Either way, it’s reshaping important conversations about what we owe the future.
What Exactly Is Longtermism?
The term longtermism comes out of the effective altruism (EA) community, which focuses on using evidence and reason to figure out how to help others most effectively. In recent years, some EA philosophers—particularly at the University of Oxford’s Future of Humanity Institute—have turned their attention to the far future.
One of the most cited definitions comes from philosopher William MacAskill, who in What We Owe the Future describes longtermism as "the view that positively influencing the longterm future is a key moral priority of our time."
The case rests on two main points:
- The future could be vast. If humanity avoids existential risks, there could be trillions of people living in the centuries and millennia to come.
- Our actions now may have ripple effects for centuries. Policies, technologies, and cultural norms we set today could shape the trajectory of civilization for better—or worse.
The Intellectual Roots
Longtermism draws from several philosophical traditions:
- Utilitarianism — the idea of maximizing well-being across all people, present and future.
- Population ethics — which grapples with how to weigh the value of potential future lives.
- Risk analysis — especially the study of “existential risks” that could permanently curtail humanity’s potential.
Nick Bostrom, a Swedish philosopher at Oxford, helped define the landscape with his 2002 paper Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards. He argued that avoiding events that could end or permanently damage humanity should be a top priority.
The Stakes According to Longtermists
If the future population is vast, then most humans who will ever exist haven't been born yet. Demographers estimate that roughly 117 billion people have lived so far; a future spanning millennia could multiply that number many times over (a back-of-envelope sketch below shows the scale). This means:
- Even small improvements now could compound into enormous benefits over time.
- Avoiding catastrophic risks—like nuclear war, runaway climate change, or misaligned artificial intelligence—takes on extreme importance.
- Cultural and institutional resilience matters. Stable, ethical governance structures might outlast individual lifetimes and influence countless future generations.
This framing can make current debates about AI safety, biotechnology regulation, or climate policy feel urgent not just for us, but also for people who might be living in the year 3000.
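To make the arithmetic behind that claim concrete, here is a minimal back-of-envelope sketch in Python. The roughly 117 billion figure for people who have ever lived is a common demographic estimate; the stable population, average lifespan, and million-year horizon are illustrative assumptions, not forecasts.

```python
# Back-of-envelope: how many future humans might there be?
# All inputs below are illustrative assumptions, not predictions.

PAST_HUMANS = 117e9   # ~117 billion people have ever lived (common demographic estimate)

population = 10e9     # assumed stable future population
lifespan_years = 80   # assumed average lifespan
horizon_years = 1e6   # assumed horizon: roughly how long a typical mammal species persists

births_per_year = population / lifespan_years    # ~125 million births per year
future_humans = births_per_year * horizon_years  # ~125 trillion future people

print(f"Future humans under these assumptions: {future_humans:.2e}")
print(f"Future-to-past ratio: {future_humans / PAST_HUMANS:,.0f} to 1")
```

Under those assumptions, future people outnumber everyone who has ever lived by roughly a thousand to one, which is the intuition driving the points above.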
Where the Conversation Gets Complicated
Not everyone is convinced. Longtermism has drawn criticism from several angles.
1. Speculation and Uncertainty
Critics argue that making moral decisions based on far-future scenarios involves huge uncertainty. How do we know which actions today will really matter in 500 years?
Philosopher Émile Torres, once a longtermist advocate, has become one of its most vocal critics, warning that focusing on the distant future could justify neglecting urgent present-day problems.
2. Present-Day Inequities
Some worry that emphasizing future lives could inadvertently deprioritize addressing current suffering, especially for marginalized groups facing immediate crises.
3. Power and Influence
Because longtermism has gained traction among wealthy philanthropists and tech leaders, skeptics raise concerns about a small group of elites shaping global priorities based on their worldview.
How Longtermism Is Showing Up in the Real World
Despite debate, longtermist ideas are influencing real-world decisions:
- Funding: Organizations like Open Philanthropy have allocated hundreds of millions toward “longtermist” causes, from pandemic preparedness to AI alignment research.
- Policy: Some governments have begun writing “future generations” frameworks into law; Wales’s Well-being of Future Generations Act (2015), for example, requires public bodies to consider the long-term impact of their decisions.
- Research: Academic centers dedicated to studying existential risk are growing, including Cambridge’s Centre for the Study of Existential Risk and Oxford’s Future of Humanity Institute.
Thinking in Centuries: Practical, Not Just Philosophical
Even if you’re not ready to weigh the moral calculus of the next million years, longtermist thinking can inspire practical shifts:
- Resilience over short-term wins: Investing in infrastructure that lasts generations.
- Guardrails for powerful tech: Prioritizing safe deployment of AI, biotech, and geoengineering.
- Institutional memory: Creating archives and governance systems that outlive political cycles.
A concrete example: after the 2011 tsunami, Japan built an extensive seawall system along its northeastern coast, designed not just for this decade’s residents, but with the expectation that it could safeguard communities for centuries.
Buzz Boost!
- Expand your planning horizon — Instead of just one-year goals, try mapping ten-year or even fifty-year visions.
- Prioritize durability — Choose tools, materials, or policies that stand the test of time.
- Invest in knowledge preservation — Document processes so future teams (or generations) can build on them.
- Support risk-reducing research — Even small contributions to science and safety can have ripple effects.
- Practice “cathedral thinking” — Start projects that might not be finished in your lifetime, but could benefit people for centuries.
Where This Leaves Us
Longtermism asks a big question: if future people matter just as much as those alive today, how should that change what we do? It’s not an easy question—there’s tension between addressing immediate suffering and safeguarding future well-being. And there’s genuine risk in letting speculation about tomorrow overshadow tangible needs now.
But in an age of rapid technological change and planetary-scale risks, the longtermist perspective offers a reminder: the future is not some abstract place we’ll never touch. It’s built, in part, by the decisions we make this week, this year, and this decade.
You don’t have to accept the full moral math of longtermism to see value in its central prompt: think beyond the immediate. The most compelling version of longtermism might be the one that respects both timelines—protecting future generations while honoring the dignity and needs of people alive today.
In that sense, the rise of longtermism isn’t just about the next 1,000 years. It’s about cultivating a mindset that resists the tyranny of the urgent and treats the future not as an abstraction, but as an inheritance we’re actively shaping.
We might not all become longtermists. But we can all benefit from occasionally zooming out—not just to 2030, but to 2130, and asking: what are we leaving behind?