Sep 24, 2024
The world of privacy and AI shook and trembled when Hamburg's Data Protection Authority published its edgy discussion paper on Large Language Models (LLMs). In a nutshell, they stated that LLMs do not store personal data and that this is in line with the CJEU's views. Milla and Pilvi were honored and humbled (= overly excited, with fangirl hats on) to have Dr. Markus Wünschelbaum, Policy and Data Strategy Advisor at the Hamburg Data Protection Authority, join them to discuss what this is all about. And what a discussion it ended up being!
Markus takes our (and your) hands and walks us all through the discussion paper's key points and how the DPA arrived at this view: from the key technical points (it's all about probabilities) all the way to the legal gymnastics and philosophy. We also discuss what the result and impact would be if we took the stance that LLMs do in fact store personal data, and whether that would actually make any sense. And what about NOYB's complaint against OpenAI?
All this and much, much more awaits all six of our listeners in this episode that you should not miss. After the recording, our hosts needed a moment to gather themselves from all the excitement. We tried to be tough journalists, but how can you not get excited about all this? We love DPAs with edgy action and hot tea to serve. Sorry about that. BUT IT WAS TOO FUN!
Did you enjoy our show? Support us by buying us a coffee here: https://bmc.link/privacypod4u
We would love to get feedback – so please tag us, follow us, DM us, or send us traditional email:
Twitter: https://twitter.com/PodPrivacy, #privacypod
Instagram: @privacypod
LinkedIn: https://www.linkedin.com/company/tietosuojapod/about/
Email: tietosuojapod@protonmail.com
Links:
In German:
https://datenschutz-hamburg.de/news/hamburger-thesen-zum-personenbezug-in-large-language-models
In English: