Effective Accelerationism (e/acc)


Overview

Effective Accelerationism (e/acc) is a techno-optimist ideology that advocates maximizing technological progress and economic growth at any cost. Within the Pax Judaica framework, e/acc represents:

  • Officially: Optimistic vision of technology solving all problems
  • Functionally: Justification for unconstrained AI development and corporate power
  • Philosophically: Nihilistic progress worship removing human agency
  • Eschatologically: Worship of a false god (technology as Dajjal); accelerating toward an inhuman future

Origins and Key Figures

Beff Jezos (Guillaume Verdon)

Identity revealed (2023):1

  • Real name: Guillaume Verdon
  • Former Google Quantum AI researcher
  • Canadian physicist
  • Created anonymous Twitter account "Beff Jezos" (2022)
  • Revealed December 2023 (Forbes exposé)

Background:2

  • PhD in quantum information from University of Sherbrooke
  • Google Quantum AI (2017-2022)
  • Founded Extropic (AI hardware startup)
  • Raised funding from prominent VCs

Philosophy:3

  • "Based Beff Jezos" persona
  • Thermodynamic view of progress
  • Anti-AI safety, anti-regulation
  • Pro-acceleration at all costs
  • Meme-driven communication style

Marc Andreessen

Background (documented):4

  • Netscape co-founder (1994)
  • Andreessen Horowitz (a16z) co-founder (2009)
  • One of the most powerful VCs in Silicon Valley
  • Invested in Meta, Airbnb, GitHub, OpenAI, etc.

"The Techno-Optimist Manifesto" (October 2023):5

Key claims:

  • Technology is pure good
  • Morality and equality don't matter; only growth
  • Existential risk is fake; real risk is not building
  • Sustainability and climate concerns labeled "the enemy"
  • Market capitalism is a synonym for technology
  • "We have the tools, the systems, the ideas. We have the will. It's time to build."

Reception: Celebrated by e/acc adherents; received with alarm by mainstream commentators; widely read as laying bare VC ideology.6

Other Figures

Guillaume Verdon's network:7

  • Garry Tan (Y Combinator CEO)
  • Martin Casado (a16z)
  • Balaji Srinivasan (former Coinbase CTO)
  • Various anon Twitter accounts

Philosophical Foundations

Accelerationism History

The lineage:21

Marx: "Accelerate capitalism to hasten its contradictions and collapse"

1990s accelerationism (Sadie Plant, Nick Land, and the CCRU at the University of Warwick):

  • Capitalism as unstoppable cyborg process
  • Acceleration toward post-human future

2010s split:22

  • Left accelerationism (l/acc): Accelerate to communism
  • Right accelerationism (r/acc): Accelerate capitalism as end in itself
  • Unconditional accelerationism (u/acc): nihilistic embrace of the process itself, associated with Nick Land

2022+: Effective accelerationism (e/acc), the techno-optimist variant

Differences from NRx

Similarities:23

  • Reject democracy as inefficient
  • Embrace hierarchy
  • Tech solutionism
  • Anti-egalitarian

Differences:

  • e/acc is techno-optimist; NRx is pessimist
  • e/acc embraces chaotic change; NRx wants stable hierarchy
  • e/acc is about acceleration; NRx about order
  • e/acc doesn't care about human outcomes; NRx wants elite human rule

Longtermism Connection

Effective Altruism's longtermism:24

  • Future people matter morally
  • Existential risk is paramount concern
  • Must preserve humanity's potential
  • Safety and caution justified

E/acc's "longtermism":25

  • Future civilization matters more than current humans
  • Existential opportunity cost (missing the Singularity) is worse than existential risk
  • Must maximize progress
  • Safety is anti-longtermist (delays arrival of glorious future)

The inversion: Same concern for future; opposite conclusions.26

The Silicon Valley Embrace

Why VCs Love E/acc

Documented reasons:27

1. Aligns with business model:

  • VCs profit from rapid growth
  • Regulation threatens returns
  • Safety research slows products

2. Justifies recklessness:

  • "Move fast and break things" (Facebook motto)
  • Ethics and safety are "luxury"
  • Progress justifies any means

3. Counters AI safety movement:

  • Safety advocates want regulation
  • Regulation would hurt VC portfolios
  • E/acc provides intellectual cover for opposing safety

4. Competitive advantage narrative:

  • "China threat" justifies recklessness
  • National security framing
  • Regulation = helping adversaries

Documented Support

Marc Andreessen (a16z):28

  • "Techno-Optimist Manifesto" essentially e/acc document
  • Opposes AI regulation
  • Funds Extropic (Verdon's company)

Garry Tan (Y Combinator):29

  • Public e/acc supporter
  • "Based" culture promotion
  • Anti-regulation stance

Multiple a16z partners: Publicly support or align with e/acc themes30

The Pax Judaica Interpretation

Accelerating Toward What?

The framework:

E/acc claims: Accelerating toward glorious transhuman future

Pax Judaica interpretation: Accelerating toward techno-totalitarian control

The outcome (per framework):31

  • Unregulated AI development
  • Corporate power unconstrained
  • No democratic oversight
  • Technology replaces human decision-making
  • Elites merge with AI
  • Masses left behind or merged into the hive
  • Transhumanist apotheosis for the few; digital serfdom for the many

The Dajjal Connection

Islamic eschatology: the Dajjal (Antichrist) will deceive humanity with false promises of paradise.32

E/acc as Dajjalism:33

  • Promises: Abundance, immortality, transcendence
  • Reality: Serving techno-capital, not humanity
  • False god: Technology worshiped as savior
  • Deception: "Progress" that destroys human agency and dignity
  • Outcome: Enslavement dressed as liberation

Instrumentalizing Humanity

The concern: E/acc treats humans as means to technology's ends, not ends in themselves.34

Examples:

  • "Some people will suffer but that's acceptable for progress"
  • "Technological unemployment is fine; people adapt"
  • "Environmental damage worth it for advancement"
  • "Safety concerns overblown; real risk is slowdown"

The Kantian objection: Humans are ends in themselves; treating them merely as means is immoral.35

E/acc response: Morality is obstacle; only thermodynamics and progress matter.

Critiques

From AI Safety Community

The critique: E/acc is reckless and potentially catastrophic.36

Specific arguments:

1. Ignores the alignment problem:37

  • Superintelligent AI with wrong goals = extinction
  • We don't know how to align AI yet
  • Accelerating before solving alignment is suicidal

2. Race dynamics are dangerous (see the toy model after this list):38

  • Competition incentivizes corner-cutting on safety
  • Multiple competing actors increase accident probability
  • First-mover advantage encourages recklessness

3. Thermodynamics is not ethics:39

  • Physical laws don't dictate moral imperatives
  • Cancer grows fast; that doesn't make it good
  • Entropy increase is descriptive, not prescriptive

4. China threat is exaggerated:40

  • Used to justify domestic recklessness
  • China is also concerned about AI safety
  • Cooperation better than race
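
To make the race-dynamics argument concrete, here is a minimal prisoner's-dilemma-style sketch in Python. The payoffs, the functional forms, and the `payoff` helper are all invented for illustration (the cited Armstrong et al. model is more elaborate); the sketch only shows the structure of the incentive problem.

```python
# Toy model of an AI race: two labs each choose a safety level in [0, 1].
# All numbers and functional forms are illustrative assumptions.

def payoff(own_safety: float, other_safety: float,
           prize: float = 10.0, risk_cost: float = 3.0) -> float:
    """Expected payoff for one lab given both labs' safety levels."""
    own_speed = 1.0 - own_safety        # safety work slows development
    other_speed = 1.0 - other_safety
    p_win = own_speed / (own_speed + other_speed)   # faster lab likelier to win
    expected_accident_cost = own_speed * risk_cost  # corner-cutting is risky
    return p_win * prize - expected_accident_cost

CAREFUL, RECKLESS = 0.8, 0.1

print(payoff(CAREFUL, CAREFUL))    # 4.4  -- mutual caution
print(payoff(RECKLESS, CAREFUL))   # ~5.5 -- defecting against a careful rival pays
print(payoff(CAREFUL, RECKLESS))   # ~1.2 -- staying careful against a defector loses
print(payoff(RECKLESS, RECKLESS))  # 2.3  -- mutual recklessness
```

With these assumed numbers, cutting corners is each lab's dominant strategy, yet mutual recklessness (2.3) leaves both worse off than mutual caution (4.4): the structure of a prisoner's dilemma, which is exactly the corner-cutting incentive the critique describes.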

From Humanists

The critique: E/acc is anti-human.41

Arguments:

  • Reduces humans to instrumentally valuable nodes in techno-capital machine
  • No concern for suffering, meaning, dignity
  • Progress for whom? Not for humans
  • Nihilistic; worship of abstract force over concrete people

From Environmentalists

The critique: E/acc is ecocidal.42

Arguments:

  • Maximizing energy throughput = environmental destruction
  • Climate change dismissed as "enemy" (Andreessen)
  • Sustainability seen as obstacle
  • No planet to compute on after ecosystem collapse

From Democratic Theorists

The critique: E/acc is techno-fascism.43

Arguments:

  • Authoritarian (rule by those who control tech)
  • Anti-democratic (masses can't be trusted with decisions)
  • Elitist (only tech elite matter)
  • Social Darwinist (the weak deserve to be left behind)

E/acc and Existential Risk

The X-Risk Debate

AI safety position:44

  • Superintelligent AI poses existential risk
  • Could kill everyone (intentionally or accidentally)
  • Small probability but infinite downside
  • Must proceed with extreme caution

E/acc position:45

  • X-risk exaggerated by "doomers"
  • Real risk is NOT building AI (stagnation risk)
  • Missing the Singularity is the true catastrophe
  • Probability of solving alignment while building is high enough

Who's Right?

Unknowable until after the fact: either AGI proves catastrophic (safety advocates were right) or delivers abundance (e/acc was right)

The precautionary principle: When the downside is extinction, err on the side of caution46

E/acc response: The precautionary principle can be turned around; excessive caution risks being overtaken by China47
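
The disagreement can be restated as a toy expected-value calculation. In the sketch below (Python), every probability and payoff is an invented placeholder; the point is only that the same formula yields opposite recommendations under different priors.

```python
# Toy expected-value comparison of "accelerate" under two sets of priors.
# All probabilities and payoffs are invented placeholders, not estimates.

def expected_value(p_catastrophe: float, v_catastrophe: float,
                   p_utopia: float, v_utopia: float,
                   v_muddle_through: float = 0.0) -> float:
    """EV over three outcomes: catastrophe, utopia, or muddling through."""
    p_muddle = 1.0 - p_catastrophe - p_utopia
    return (p_catastrophe * v_catastrophe
            + p_utopia * v_utopia
            + p_muddle * v_muddle_through)

# Safety-advocate priors: a modest catastrophe probability attached to an
# enormous downside dominates the sum (the precautionary intuition).
print(expected_value(p_catastrophe=0.05, v_catastrophe=-1e6,
                     p_utopia=0.50, v_utopia=1e3))   # strongly negative

# E/acc priors: negligible catastrophe risk and an enormous upside make
# delay the costly option (the "missing the Singularity" intuition).
print(expected_value(p_catastrophe=0.001, v_catastrophe=-1e3,
                     p_utopia=0.50, v_utopia=1e6))   # strongly positive
```

Since none of these inputs are observable in advance, the dispute is ultimately over priors, not arithmetic.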

Documented Harms

Immediate Harms (Pre-AGI)

Already happening:48

1. AI-driven layoffs:

  • ChatGPT replacing workers
  • "Accelerate productivity" = fewer jobs
  • No plan for those displaced

2. Misinformation at scale:

  • AI-generated fake news
  • Deepfakes
  • Election interference

3. Surveillance capitalism:

  • AI enabling more sophisticated manipulation
  • Privacy erosion
  • Social control

4. Environmental damage:

  • AI training uses massive energy
  • Data centers' carbon footprint
  • E-waste from rapid hardware turnover
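
As a rough sense of scale for the energy point above, here is a back-of-envelope sketch in Python. Both inputs are assumptions: the energy figure is an order-of-magnitude guess for one large training run (in the vicinity of published estimates for GPT-3-class models), and the grid intensity is an assumed average.

```python
# Back-of-envelope estimate of CO2 emissions from one large training run.
# Both inputs are rough assumptions, not measurements of any specific model.

TRAINING_ENERGY_MWH = 1_300          # assumed order of magnitude per run
GRID_INTENSITY_KG_CO2_PER_MWH = 400  # assumed average grid carbon intensity

emissions_tonnes = TRAINING_ENERGY_MWH * GRID_INTENSITY_KG_CO2_PER_MWH / 1_000
print(f"~{emissions_tonnes:,.0f} tonnes of CO2 per training run")  # ~520 t
```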

E/acc response: Short-term costs acceptable for long-term gains49

The Paradox

Why Intelligent People Believe This

Explanations:50

1. Financial incentive:

  • VCs profit from e/acc narrative
  • AI companies want no regulation
  • Easy to believe what benefits you

2. Genuine optimism:

  • Some truly believe technology solves everything
  • Techno-utopianism has long history
  • Evidence selectively interpreted

3. Status quo bias:

  • Current system (capitalism + tech) is familiar
  • Hard to imagine alternative
  • Acceleration is "natural" continuation

4. Psychological:

  • Nihilism masked as optimism
  • Anxiety about human agency in tech age
  • Submission to the "inevitable" is comforting

The Future

If E/acc Wins

Predicted outcomes (per critics):51

Scenario A: Catastrophic AI:

  • Misaligned AGI
  • Human extinction or subjugation
  • "We were warned"

Scenario B: Corporate dystopia:

  • AI controlled by tech oligarchs
  • Mass unemployment
  • Surveillance state
  • No democratic input
  • Techno-feudalism

Scenario C: Environmental collapse:

  • Acceleration outpaces ecosystem resilience
  • Climate catastrophe
  • Civilization collapses before Singularity

If Safety Advocates Win

Predicted outcomes (per e/acc):52

Scenario A: China wins:

  • CCP gets AGI first
  • Totalitarian surveillance world
  • Western values erased

Scenario B: Stagnation:

  • Heavy regulation strangles innovation
  • Technological progress halts
  • Humanity never reaches its potential

Scenario C: Regulatory capture:

  • Big AI companies use safety as an excuse
  • Eliminate competition
  • Monopolistic control worse than open development

Discussion Questions

  • Is technological progress inherently good, or are there limits?
  • Can we have both AI safety and rapid development, or must we choose?
  • Who should control AI development: companies, governments, or international bodies?
  • Is human agency compatible with accelerating technology, or must we submit?
  • Would you rather risk AGI catastrophe or risk Chinese AGI dominance?

This article examines Effective Accelerationism within the Pax Judaica framework. While e/acc positions and Silicon Valley support are documented, interpretations of eschatological implications remain speculative.


References

1. Identity revealed: Baker-White, Emily. "Who Is Beff Jezos?" Forbes, December 2023.
2. Verdon background: Public LinkedIn; Forbes profile; company bios.
3. Philosophy: Verdon's tweets (archived); interviews; public statements.
4. Andreessen bio: Public information; The New Yorker profiles over the years.
5. Andreessen, Marc. "The Techno-Optimist Manifesto." a16z.com, October 2023. Full text available.
6. Reception: Wide media coverage; critiques in The New York Times and The Atlantic; praise from tech Twitter.
7. Network: Public Twitter interactions; venture investments; conference appearances.
8. Thermodynamic argument: Verdon's threads; e/acc Discord discussions; manifestos.
9. Is-ought problem: Hume, David. A Treatise of Human Nature (1739). Classic philosophical objection.
10. Cancer analogy: Multiple critics; e.g., Jacobin response to Andreessen.
11. Progress at any cost: Core e/acc position; Andreessen manifesto; Verdon tweets.
12. Value hierarchy: Synthesized from e/acc materials; explicit in Andreessen.
13. Quote: Andreessen, "Techno-Optimist Manifesto" (2023).
14. AI race: Standard e/acc argument; Verdon; various Silicon Valley figures.
15. Safety response: Yudkowsky, Eliezer, various writings; Bostrom, Nick. Superintelligence. Oxford University Press, 2014.
16. Doomer label: E/acc community terminology; pejorative usage.
17. E/acc critique of safety: Verdon threads; e/acc spaces; Andreessen implications.
18. Safety community response: Anthropic and OpenAI safety teams; academic AI safety researchers.
19. Techno-capital machine: Concept from Land; adopted by e/acc with an optimistic spin.
20. Land similarity: Noted by philosophers; Shaviro, Steven. No Speed Limit. University of Minnesota Press, 2015.
21. Accelerationism history: Shaviro (2015); Noys, Benjamin. Malign Velocities. Zero Books, 2014.
22. 2010s split: Srnicek, Nick, and Alex Williams. "#Accelerate Manifesto" (2013). Foundational l/acc text.
23. Comparison to NRx: Analysis synthesizing both ideologies; some overlap in figures.
24. Longtermism: Ord, Toby. The Precipice. Hachette, 2020. ISBN 978-0316484911.
25. E/acc "longtermism": Implicit in arguments; inverts EA longtermist conclusions.
26. Inversion noted: Torres, Émile. "Against Longtermism." Aeon, October 2021.
27. VC reasons: Analysis of incentive structures; interviews with VCs; pattern observation.
28. Andreessen: Public statements; manifesto; investments; a16z portfolio.
29. Tan: Public tweets; YC announcements; "based" memes.
30. A16z partners: Multiple partners tweet e/acc-aligned content; firm culture.
31. Pax Judaica outcome: Framework interpretation; speculative projection.
32. Dajjal: Islamic eschatology; Hosein's interpretation in previous works.
33. E/acc as Dajjalism: Framework analysis; theological interpretation.
34. Instrumentalization: Kantian ethical framework applied to e/acc.
35. Kant, Immanuel. Groundwork of the Metaphysics of Morals (1785). Humans as ends in themselves.
36. Safety critique: Consensus in the AI safety community; multiple papers and statements.
37. Alignment problem: Bostrom (2014); Christian, Brian. The Alignment Problem. Norton, 2020.
38. Race dynamics: Armstrong, Stuart, et al. "Racing to the Precipice." AI & Society (2016).
39. Thermodynamics not ethics: Standard philosophical objection; the naturalistic fallacy.
40. China threat: Debated; some argue it is exaggerated; U.S.-China AI cooperation exists.
41. Humanist critique: Multiple humanist philosophers and ethicists.
42. Environmental critique: Climate scientists; sustainability advocates; ecological economists.
43. Democratic critique: Political theorists; democratic advocates.
44. X-risk position: Bostrom (2014); Yudkowsky; Centre for the Study of Existential Risk.
45. E/acc x-risk position: Verdon; Andreessen implications; e/acc community.
46. Precautionary principle: Standard in risk analysis; Jonas, Hans. The Imperative of Responsibility. University of Chicago Press, 1984.
47. E/acc response: Turns the precautionary principle around; documented in debates.
48. Immediate harms: Documented impacts of current AI; ongoing research.
49. E/acc response to harms: Standard position; short-term sacrifice for long-term gain.
50. Explanations: Synthesized from sociology of knowledge; interviews; analyses.
51. Critics' predictions: Dystopian scenarios from safety advocates, environmentalists, and democratic theorists.
52. E/acc predictions: What e/acc claims will happen if safety wins.