Preamble:
(If you're already familiar with all basics and don't want any preamble, skip ahead to Section B for technical difficulties of alignment proper.)
I have several times failed to write up a well-organized list of reasons why AGI will kill you. People come in with different ideas about why AGI would be survivable, and want to hear different *obviously key* points addressed first. Some fraction of those people are loudly upset with me if the obviously most important points aren't addressed immediately, and I address different points first instead.
Having failed to solve this problem in any good way, I now give up and solve it poorly with a poorly organized list of individual rants. I'm not particularly happy with this list; the alternative was publishing nothing, and publishing this seems marginally more dignified.
Three points about the general subject matter of discussion here, numbered so as not to conflict with the list of lethalities: