

Apr 28, 2023

In this week’s episode, Floora, Milla, and our guest host Joost discuss how to look at AI through the lenses of privacy and data protection. With so much hype around the topic, we go through the most important developments of the past few weeks.

On the one hand, proponents of AI regulation argue that without proper oversight, AI could harm privacy, whether intentionally or unintentionally. For example, if AI systems are used to make important decisions that affect people's lives, such as hiring decisions or medical diagnoses, there is a risk of bias or errors that could lead to unfair outcomes.

On the other hand, opponents of AI regulation argue that too much regulation could stifle innovation and hinder progress in the field. They argue that AI is still in its early stages of development and that it is not yet clear what the long-term impacts of AI will be.

The main reason for all the discussion is that service providers have been very opaque about how personal data is collected and used in AI models. Do they even use personal data? Are they actively processing it?

And that's where the Italians come in, as the Italian DPA has already taken a stance against ChatGPT and a "virtual friend" chatbot named Replika. What were the main issues the Italian DPA had with ChatGPT? Why should you re-read the Google Spain decision to understand how ChatGPT might comply (or not) with the GDPR? We also briefly touch on the upcoming AI Act.



Google Spain (Case C‑131/12) 

Italian DPA Garante on ChatGPT: 

Norwegian DPA Datatilsynet: Artificial intelligence and privacy 


Did you enjoy our show? Support us by buying us a coffee here:

We would love to get feedback – so please tag us, follow us, DM us, or send us traditional email:

Twitter: #privacypod

Instagram: @privacypod