Seems to be popping up more and more recently. Anyone been following the discourse?

It’s interesting in that I’d probably kind of agree with some of what’s being said in that article. The issue I always have with these sorts of ideologies is exactly what sort of civilisation they’re envisaging for the future and, consequently, what sort of ethics they think should guide that society.

I’m also not entirely sold on the idea that looking at future populations is necessarily the best way to develop a politics for the present.


Yeah loads of this has been popping up on my YouTube. My initial impression is that these types of arguments often totally ignore the global south and the extraction of resources required for the crazy technological advancement they talk about


people starting to doubt and question elton and bozo…? better push some propaganda out about how great they actually are and we should be thankful for them.


Have also seen this train of thought used to further highlight the importance of ceasing the use of fossil fuels immediately though, which is obviously a good idea

Think it’s pretty dodgy when you peer behind the curtain. Correlates with tech bros making excuses for their lack of compassion.

Never heard of it but that can absolutely get to fuck.


Stands to reason that if you’re letting AI carry out risk assessments to decide your ethical decision-making, it’ll always benefit or favour those who start with the most material wealth.

Elephant is a decent track but never really got into it


I like the idea of effective altruism but this feels like where the logic of it breaks down a bit.

In your book, you urge people to protect the “future of humanity.” How many years into the future are you talking about?

Well, we don’t know because we don’t know how long human civilization will last. But it could be an extremely long time. Typical mammal species last for a million years. Homo sapiens have [already] existed for 300,000 years. That would give us 700,000 years to come. The Earth, meanwhile, will remain habitable for hundreds of millions of years, and if we one day escaped Earth and took to the stars, then we could live for hundreds of trillions of years.

That’s not happening.


Yeah this part was particularly eyebrow raising :sweat_smile:

All of these things are usually long-winded ways of trying to justify eugenics


You aren’t supposed to implement speculative sci-fi, and the sooner we remove the kinds of people predisposed to try from positions of influence, the better.

(I’ve not read anything beyond the preview snippets in this thread and you can’t make me )

See also degrowth. Imo a good idea to break the idea that constant economic growth is essential to success, but you always get people bastardising it to mean population control, esp. in the global south

The author of the BBC article, William MacAskill, is one of the founders of the Effective Altruism (EA) movement.

As a moral philosopher, he initially tried to answer questions like “is it better to spend $40,000 to train a guide dog and improve the life of one blind American, or should you spend that same money in developing countries and cure between 400 and 2,000 people of trachoma?”. For a philosopher, a very worthwhile exercise.

However, over time the EA movement has moved away from looking at such short-term issues and has started to focus on what may happen in the future, and often the far-away future at that. The question is now: “should we focus on our immediate problems or what disasters lie ahead and may wipe out humanity?”. MacAskill has decided to focus on the latter, and this is where longtermism originates from.

Some influential people in EA have predicted that the chances of human extinction during the next century stand at 1 in 6. And this is not because of climate change, which is unlikely to render Earth completely uninhabitable. Instead they predict it may be due to engineered pathogens and runaway artificial intelligence. DNA editing may allow a scientist to create a superbug that will wipe us out.

And this is where it gets problematic. The philosophers have called in the Silicon Valley troops to save the planet. Venture capitalists and computer geeks are now our last defence against the demise of the human race.

MacAskill is close friends with Sam Bankman-Fried, the founder of crypto exchange FTX. Bankman-Fried has promised significant funding to various longtermism projects, and in exchange gets to influence the direction of the EA movement. His cash has given MacAskill enormous power to decide the course of action. All the usual suspects are trying to get involved. Even Elon Musk has shown interest in MacAskill’s new book.

The problem with longtermism is that we as people have trouble projecting decades in the future, let alone much further than that. And it seems extremely unlikely that a group of moral philosophers and computer scientists have drawn the correct conclusion that the people most likely to safeguard humanity’s future are a group of moral philosophers and computer scientists.

TL;DR: tech bros will not save the world


Can’t read this without an announcer’s voice going “IT’S IN THE GAME”



Interesting interview with MacAskill, the guy who wrote the first article in the OP. Feels like he never really gives a satisfactory answer regarding consequentialism and longtermism, but I guess it delves into some of the concepts a bit deeper:

I see MacAskill deflecting any challenges about the involvement of Silicon Valley venture capitalists and tech bros:


The interview invokes some bland statements from MacAskill about Peter Thiel and Elon Musk, and the interviewer completely fails to mention the involvement of Sam Bankman-Fried.

He’s getting a very easy ride here.


William MacAskill was on my course at uni. He had/has a great brain for philosophical argument but clearly very little connection to the real world.
