Effective Accelerationism (e/acc)
Overview
Effective Accelerationism (e/acc) is a techno-optimist ideology that advocates maximizing technological progress and economic growth at any cost. Within the Pax Judaica framework, e/acc represents:
- Officially: Optimistic vision of technology solving all problems
- Functionally: Justification for unconstrained AI development and corporate power
- Philosophically: Nihilistic progress worship removing human agency
- Eschatologically: Worship of false god (technology as Dajjal); accelerating toward inhuman future
Origins and Key Figures
Beff Jezos (Guillaume Verdon)
Identity revealed (2023):1
- Real name: Guillaume Verdon
- Former Google Quantum AI researcher
- Canadian physicist
- Created anonymous Twitter account "Beff Jezos" (2022)
- Revealed December 2023 (Forbes exposé)
Background:2
- PhD in quantum information from University of Sherbrooke
- Google Quantum AI (2017-2022)
- Founded Extropic (AI hardware startup)
- Raised funding from prominent VCs
Philosophy:3
- "Based Beff Jezos" persona
- Thermodynamic view of progress
- Anti-AI safety, anti-regulation
- Pro-acceleration at all costs
- Meme-driven communication style
Marc Andreessen
Background (documented):4
- Netscape co-founder (1994)
- Andreessen Horowitz (a16z) co-founder (2009)
- One of the most powerful VCs in Silicon Valley
- Invested in Meta, Airbnb, GitHub, OpenAI, etc.
"The Techno-Optimist Manifesto" (October 2023):5
Key claims:
- Technology is pure good
- Morality and equality don't matter; only growth
- Existential risk is fake; real risk is not building
- Sustainability and climate concerns listed among "the enemy"
- Market capitalism treated as synonymous with technology
- "We have the tools, the systems, the ideas. We have the will. It's time to build."
Reception: Celebrated in e/acc circles; widely criticized in the mainstream; read by many as laying bare VC ideology.6
Other Figures
Guillaume Verdon's network:7
- Garry Tan (Y Combinator CEO)
- Martin Casado (a16z)
- Balaji Srinivasan (former Coinbase CTO)
- Various anon Twitter accounts
Philosophical Foundations
Accelerationism History
The lineage:21
Marx (as glossed by accelerationists): accelerate capitalism to hasten its contradictions and collapse
1990s cyber-accelerationism (Sadie Plant and Nick Land at the CCRU, University of Warwick):
- Capitalism as unstoppable cyborg process
- Acceleration toward post-human future
2010s split:22
- Left accelerationism (l/acc): Accelerate to communism
- Right accelerationism (r/acc): Accelerate capitalism as end in itself
- Unconditional accelerationism (u/acc): nihilistic embrace of the process itself, beyond left and right (associated with Nick Land)
2022+: Effective accelerationism (e/acc) - Techno-optimist variant
Differences from NRx
Similarities:23
- Reject democracy as inefficient
- Embrace hierarchy
- Tech solutionism
- Anti-egalitarian
Differences:
- e/acc is techno-optimistic; NRx is pessimistic
- e/acc embraces chaotic change; NRx wants stable hierarchy
- e/acc is about acceleration; NRx about order
- e/acc doesn't care about human outcomes; NRx wants elite human rule
Longtermism Connection
Effective Altruism's longtermism:24
- Future people matter morally
- Existential risk is paramount concern
- Must preserve humanity's potential
- Safety and caution justified
E/acc's "longtermism":25
- Future civilization matters more than current humans
- "Existential opportunity cost" (missing the Singularity) is worse than existential risk
- Must maximize progress
- Safety is anti-longtermist (delays arrival of glorious future)
The inversion: Same concern for future; opposite conclusions.26
The Silicon Valley Embrace
Why VCs Love E/acc
Documented reasons:27
1. Aligns with business model:
- VCs profit from rapid growth
- Regulation threatens returns
- Safety research slows products
2. Justifies recklessness:
- "Move fast and break things" (Facebook's former motto)
- Ethics and safety are "luxury"
- Progress justifies any means
3. Counters AI safety movement:
- Safety advocates want regulation
- Regulation would hurt VC portfolios
- E/acc provides intellectual cover for opposing safety
4. Competitive advantage narrative:
- "China threat" justifies recklessness
- National security framing
- Regulation = helping adversaries
Documented Support
Marc Andreessen (a16z):28
- "Techno-Optimist Manifesto" is essentially an e/acc document
- Opposes AI regulation
- Funds Extropic (Verdon's company)
Garry Tan (Y Combinator):29
- Public e/acc supporter
- "Based" culture promotion
- Anti-regulation stance
Multiple a16z partners: Publicly support or align with e/acc themes30
The Pax Judaica Interpretation
Accelerating Toward What?
The framework:
E/acc claims: Accelerating toward glorious transhuman future
Pax Judaica interpretation: Accelerating toward techno-totalitarian control
The outcome (per framework):31
- Unregulated AI development
- Corporate power unconstrained
- No democratic oversight
- Technology replaces human decision-making
- Elites merge with AI
- Masses left behind or merged into hive
- Transhumanist apotheosis for few; digital serfdom for many
The Dajjal Connection
Islamic eschatology: Dajjal (Antichrist) will deceive humanity with false promises of paradise.32
E/acc as Dajjalism:33
- Promises: Abundance, immortality, transcendence
- Reality: Serving techno-capital, not humanity
- False god: Technology worshiped as savior
- Deception: "Progress" that destroys human agency and dignity
- Outcome: Enslavement dressed as liberation
Instrumentalizing Humanity
The concern: E/acc treats humans as means to technology's ends, not ends in themselves.34
Examples:
- "Some people will suffer but that's acceptable for progress"
- "Technological unemployment is fine; people adapt"
- "Environmental damage worth it for advancement"
- "Safety concerns overblown; real risk is slowdown"
The Kantian objection: Humans are ends in themselves; using them instrumentally is immoral.35
E/acc response: Morality is an obstacle; only thermodynamics and progress matter.
Critiques
From AI Safety Community
The critique: E/acc is reckless and potentially catastrophic.36
Specific arguments:
1. Ignores alignment problem:37
- Superintelligent AI with wrong goals = extinction
- We don't know how to align AI yet
- Accelerating before solving alignment is suicidal
2. Race dynamics are dangerous:38
- Competition incentivizes corner-cutting on safety
- Multiple actors increase accident probability
- First-mover advantage rewards recklessness
3. Thermodynamics is not ethics:39
- Physical laws don't dictate moral imperatives
- Cancer grows fast; that doesn't make it good
- Entropy increase is descriptive, not prescriptive
4. China threat is exaggerated:40
- Used to justify domestic recklessness
- China is also concerned about AI safety
- Cooperation better than race
From Humanists
The critique: E/acc is anti-human.41
Arguments:
- Reduces humans to instrumentally valuable nodes in techno-capital machine
- No concern for suffering, meaning, dignity
- Progress for whom? Not for humans
- Nihilistic; worship of abstract force over concrete people
From Environmentalists
The critique: E/acc is ecocidal.42
Arguments:
- Maximizing energy throughput = environmental destruction
- Climate change dismissed as "enemy" (Andreessen)
- Sustainability seen as obstacle
- No planet to compute on after ecosystem collapse
From Defenders of Democracy
The critique: E/acc is techno-fascism.43
Arguments:
- Authoritarian (rule by those who control tech)
- Anti-democratic (masses can't be trusted with decisions)
- Elitist (only tech elite matter)
- Social Darwinist (weak deserve to be left behind)
E/acc and Existential Risk
The X-Risk Debate
AI safety position:44
- Superintelligent AI poses existential risk
- Could kill everyone (intentionally or accidentally)
- Small probability but infinite downside
- Must proceed with extreme caution
E/acc position:45
- X-risk exaggerated by "doomers"
- Real risk is NOT building AI (stagnation risk)
- Missing Singularity is true catastrophe
- Probability we solve alignment while building is high enough
Who's Right?
Unknowable until after the fact: Either AGI destroys us (vindicating safety advocates) or delivers abundance (vindicating e/acc); intermediate outcomes are also possible
The precautionary principle: When the downside is extinction, err on the side of caution46
E/acc response: The precautionary principle cuts both ways (slowing down risks being overtaken by China)47
Documented Harms
Immediate Harms (Pre-AGI)
Already happening:48
1. AI-driven layoffs:
- ChatGPT replacing workers
- "Accelerate productivity" = fewer jobs
- No plan for those displaced
2. Misinformation at scale:
- AI-generated fake news
- Deepfakes
- Election interference
3. Surveillance capitalism:
- AI enabling more sophisticated manipulation
- Privacy erosion
- Social control
4. Environmental damage:
- AI training uses massive energy
- Data centers' carbon footprint
- E-waste from hardware acceleration
E/acc response: Short-term costs acceptable for long-term gains49
The Paradox
Why Intelligent People Believe This
Explanations:50
1. Financial incentive:
- VCs profit from e/acc narrative
- AI companies want no regulation
- Easy to believe what benefits you
2. Genuine optimism:
- Some truly believe technology solves everything
- Techno-utopianism has long history
- Evidence selectively interpreted
3. Status quo bias:
- Current system (capitalism + tech) is familiar
- Hard to imagine alternative
- Acceleration is "natural" continuation
4. Psychological:
- Nihilism masked as optimism
- Anxiety about human agency in tech age
- Submission to "inevitable" is comforting
The Future
If E/acc Wins
Predicted outcomes (per critics):51
Scenario A: Catastrophic AI:
- Misaligned AGI
- Human extinction or subjugation
- "We were warned"
Scenario B: Corporate dystopia:
- AI controlled by tech oligarchs
- Mass unemployment
- Surveillance state
- No democratic input
- Techno-feudalism
Scenario C: Environmental collapse:
- Acceleration outpaces ecosystem resilience
- Climate catastrophe
- Civilization collapses before Singularity
If Safety Advocates Win
Predicted outcomes (per e/acc):52
Scenario A: China wins:
- CCP gets AGI first
- Totalitarian surveillance world
- Western values erased
Scenario B: Stagnation:
- Heavy regulation strangles innovation
- Technological progress halts
- Humanity never reaches potential
Scenario C: Regulatory capture:
- Big AI companies use safety as excuse
- Eliminate competition
- Monopolistic control worse than open development
This article examines Effective Accelerationism within the Pax Judaica framework. While e/acc positions and Silicon Valley support are documented, interpretations of eschatological implications remain speculative.