Enough About AI

Alignment Anxieties & Persuasion Problems

Enough About AI Season 2 Episode 2

Dónal and Ciarán continue the 2025 season with a second quarterly update on recent themes in AI development. They're pondering doom again, as we increasingly grapple with evidence that AI systems are powerfully persuasive and full of flattery, even as our ability to meaningfully supervise them seems to be diminishing.

Topics in this episode

  • Can we see how reasoning models reason? If an AI is thinking, or sharing information, in something other than human language, how can we check that it's aligned with our values?
  • This interpretability issue is tied to the concept of neuralese - inscrutable machine thoughts!
  • We discuss the predictions and prophetic visions of doom in the AI 2027 document
  • The increasing ubiquity, and sometimes invisibility, of AI as it's inserted into other products. Is this more enshittification?
  • AI is becoming a persuasion machine - we look at the recent controversy on Reddit's r/ChangeMyView, where researchers skipped good ethics practice but ended up with worrying results
  • We talk about flattery, manipulation, and Eliezer Yudkowsky's AI-Box thought experiment

Resources & Links

You can get in touch with us at hello@enoughaboutai.com - we'd love to hear your questions, comments, or suggestions!
