AI Safety

https://arxiv.org/abs/2306.12001

This paper provides an overview of the main sources of catastrophic AI risks, organized into four categories: Malicious Use; AI Race; Organizational Risks; and Rogue AIs. (PDF can be downloaded from the linked arxiv.org page.)

https://www.youtube.com/watch?v=144uOfr4SYA

Debating the proposition "AI research and development poses an existential threat"! Witness incredible feats of mental gymnastics and denialism! Gaze in dumbstruck awe as Yann LeCun suggests there is no need to worry because if and when AI starts to look dangerous we simply won't build it! Feel your jaw hit the floor as Melanie Mitchell argues that of course ASI is not an X-risk, because if such a thing could exist, it would certainly be smart enough to know not to do something we don't want it to do! A splendid time is guaranteed for all.

http://podcast.banklesshq.com/176-why-the-ai-race-is-a-ticking-time-bomb-with-connor-leahy

Pretty solid interview with Connor Leahy about why AI Safety/Alignment matters, how transhumanists are caught in an ideological race, and some possible societal solutions and regulations. I think this might be Leahy's best interview yet.

https://www.youtube.com/watch?v=JD_iA7imAPs

This video by Robert Miles makes a levelheaded argument for taking existential AI Safety seriously, and for doing so urgently.
