NOTICE: The PDF of the actual report is at the bottom of this page
Type: Policy Research - Defensive Analysis
License: MIT / Unlicense (Public Domain)
Classification: Policy Research - For Defensive Analysis
Prepared For: Emerging Technology Risk Assessment Committee
"The fact that an organism has conscious experience at all means, basically, that there is something it is like to be that organism." — Thomas Nagel, "What Is It Like to Be a Bat?" (1974)
This guide draws heavily from the educational content created by Robert Miles, whose work in AI safety communication has made complex technical concepts accessible to broader audiences. The videos referenced throughout this document are primarily from his collaborations with Rational Animations, and his dedicated AI Safety YouTube channel provides deeper technical explorations of these topics. This guide serves as a structured hub to help practitioners navigate and apply the safety concepts Rob has explained so effectively.
As AI systems become increasingly capable and integrated into critical workflows, understanding AI safety principles is essential for anyone working with AI agents. This guide provides practical knowledge about potential risks, safety measures, and thought experiments to help you work more safely and effectively with AI agents.