Effective altruism is both a general (and not especially objectionable) variant of act utilitarianism, and a specific community that has (fairly, to my eye) been accused of diverting money toward a particular set of reactionary worldviews held by its leadership.


EA isn't a form of act utilitarianism. Though EA certainly has a consequentialist flavor, it isn't the case that all (or even most?) EAs endorse utilitarianism (utilitarians aren't the only ones who care about scale). For example, Holden Karnofsky is explicitly non-utilitarian, and Will MacAskill advocates giving genuine weight (not just for optics) to non-utilitarian ethical systems when making decisions, though he is more utilitarian than someone like Karnofsky.


The main recipients of EA political funding (what little there is: the majority of money goes to global health and development, with AI safety as a distant second) are centrist Democrats. They're too right-wing for my taste too, but calling them "reactionaries" dilutes the word into meaninglessness.


I'm not sure woodruffw was necessarily talking about donations to US politicians when they mentioned 'diverting money towards a particular set of reactionary worldviews held by its leadership.'

Perhaps they just don't like malaria nets? See https://www.givewell.org/charities/top-charities


I meant more the accelerationist and AI derisking contingents, not political donations. Both are explicitly reactionary in the most basic sense (and accelerationism is also reactionary in the far-right sense).


> Both are explicitly reactionary in the most basic sense

In the most basic sense, as a reaction? Are weapons regulations "reactionary"? I think it's well established what the term "reactionary" means, and it doesn't apply to X-risk concerns. Also, Eliezer Yudkowsky has been "reacting" against AI X-risk since you were a toddler, in case you thought this was some recent phenomenon...


The idea that we should plow charitable donations into various low-value and ill-conceived projects because the moral calculus favors doing so over risking a malicious general AI strikes me as reactionary.

That’s independent of (but connected to) Yudkowsky’s whole thing, which to my understanding never really took things to this logical extreme.



