What could the world look like if we managed to build aligned AI? And are we able to make such a future happen?
This episode of the Existential Hope podcast features Richard Mallah, Director of AI Projects at the Future of Life Institute, where he works to support the robust, safe, and beneficent development and deployment of advanced artificial intelligence. He helps move the world toward existential hope and away from outsized risks through meta-research, analysis, research organization, community building, and advocacy, with a focus on technical AI safety, strategy, and policy coordination.
This interview was recorded in July 2022.
Submit your contribution to the storytelling bounty from Richard's prompt to “Imagine a world with aligned AGI” here: https://gitcoin.co/issue/29383
About this podcast:
On the Existential Hope podcast, we invite scientists to speak about longtermism. Each month, we release an episode in which we interview a visionary scientist about the science and technology that can accelerate humanity toward desirable outcomes.