Enough About AI

04 Digesting The Data

Enough About AI Season 1 Episode 4

Dónal and Ciarán discuss the vast ocean of data that Large Language Models (LLMs) depend on for their training, covering some of the issues of access to that data and the biases reflected within it. This episode should help you better understand some aspects of the AI training process.

Topics in this episode

  • What data is being used to train models like ChatGPT?
  • What are "supervised" or "unsupervised" machine learning methods?
  • How have the owners of copyrighted material, such as news organisations, reacted to the use of their text?
  • What issues of bias arise in training models based on existing text?
  • What happens when AI models train on AI output?
  • How do we align the actions of AI models with moral and ethical norms as part of their training?


You can get in touch with us at hello@enoughaboutai.com - we'd love to hear your questions, comments or suggestions!
