Camille Carlton on the Hidden Dangers of Chatbots & AI Governance | RegulatingAI Podcast
In this episode of the Regulating AI Podcast, we speak with Camille Carlton, Director of Policy at the Center for Humane Technology, a leading voice in AI regulation, chatbot safety, and public-interest technology.
Camille is directly involved in landmark lawsuits against Character.AI and OpenAI, including its CEO Sam Altman, placing her at the forefront of debates around AI accountability, AI companions, and platform liability.
This conversation examines the mental-health risks of AI chatbots, the rise of AI companions, and why certain conversational systems may pose public-health concerns, especially for younger and socially isolated users. Camille also breaks down how AI governance frameworks differ across U.S. states, Congress, and the EU AI Act, and outlines what practical, enforceable AI policy could look like in the years ahead.
Key Takeaways
~ AI Chatbots as a Public-Health Risk: why AI companions may intensify loneliness, emotional dependency, and psychological harm, raising urgent mental-health and safety concerns.
~ Regulating Chatbots vs. Foundation Models: why high-risk conversational AI systems require different regulatory treatment than general-purpose LLMs and foundation models.
~ Global AI Governance Lessons: what the EU AI Act, U.S. states, and Congress can learn from one another when designing balanced, risk-based AI regulation.
~ Transparency, Design & Accountability: how a light-touch but firm AI policy approach can improve transparency, platform accountability, and data access without slowing innovation.
~ Why AI Personhood Is a Dangerous Idea: how framing AI systems as “persons” undermines liability, weakens accountability, and complicates enforcement.
Subscribe to Regulating AI for expert conversations on AI governance, responsible AI, technology policy, and the future of regulation.
#RegulatingAIpodcast #camillecarlton #AIGovernance
Resources Mentioned:
https://www.linkedin.com/in/camille-carlton
https://www.humanetech.com/
https://www.humanetech.com/substack
https://www.humanetech.com/podcast
https://www.humanetech.com/landing/the-ai-dilemma
https://centerforhumanetechnology.substack.com/p/ai-product-liability
https://www.humanetech.com/case-study/policy-in-action-strategic-litigation-that-helps-govern-ai