LessWrong (Curated & Popular)

Audio narrations of LessWrong posts. Includes all curated posts and all posts with 125+ karma. If you

Episodes

Total: 438

from aisafety.world The following is a list of live agendas in technical AI safety, updating our pos

I've heard many people say something like "money won't matter post-AGI". This ha

Take a stereotypical fantasy novel, a textbook on mathematical logic, and Fifty Shades of Grey. Mix

A policeman sees a drunk man searching for something under a streetlight and asks what the drunk has

TL;DR: If you want to know whether getting insurance is worth it, use the Kelly Insurance Calculator

My median expectation is that AGI[1] will be created 3 years from now. This has implications on how

There are people I can talk to, where all of the following statements are obvious. They go without s

I'm editing this post. OpenAI announced (but hasn't released) o3 (skipping o2 for trademark

I like the research. I mostly trust the results. I dislike the 'Alignment Faking' name and

Increasingly, we have seen papers eliciting in AI models various shenanigans. There are a wide variet

Six months ago, I was a high school English teacher. I wasn’t looking to change careers, even after n

A new article in Science Policy Forum voices concern about a particular line of biological research

A fool learns from their own mistakes; the wise learn from the mistakes of others. – Otto von Bismarck

Someone I know, Carson Loughridge, wrote this very nice post explaining the core intuition around S

We make AI narrations of LessWrong posts available via our audio player and podcast feeds. We’re thin

This is a link post. Someone I know wrote this very nice post explaining the core intuition around S

TL;DR: In September 2024, OpenAI released o1, its first "reasoning model". This model exhi

We present gradient routing, a way of controlling where learning happens in neural networks. Gradien

This is a brief summary of what we believe to be the most important takeaways from our new paper and