
Between Rules and Reality: The Complex Challenge of EU Data Compliance

AEIdeas

November 11, 2024

The European Union presents businesses with a complex challenge: They must comply with three major regulatory frameworks that sometimes have conflicting requirements. These frameworks are the General Data Protection Regulation (GDPR), the EU Artificial Intelligence Act (AI Act), and the Digital Markets Act (DMA). While each regulation serves a purpose, their overlapping and occasionally conflicting mandates present significant compliance challenges for organizations.

The GDPR protects individuals’ personal data and privacy rights by setting strict rules on how organizations collect, process, store, and share that information, demanding explicit consent for data collection and emphasizing data minimization.

The AI Act mandates high-quality, representative datasets for developing high-risk AI systems, focusing on responsible, unbiased data collection to promote fairness. It requires data practices that uphold privacy and fundamental rights, along with robust governance, documentation, accuracy standards, and cybersecurity. However, to mitigate bias, the AI Act often requires broader data collection to ensure a representative dataset across demographics, sometimes requiring gathering sensitive data (like race or gender).

Meanwhile, the DMA pushes large tech platforms to share their data with competitors to promote market fairness. These three frameworks, each with distinct goals, create a challenging business compliance environment. Consider a company developing an AI-powered recruitment tool. Under the GDPR, it must minimize data collection and obtain explicit consent from candidates, while the AI Act requires comprehensive, representative datasets to prevent discriminatory outcomes in hiring decisions.

This creates an immediate tension: How can organizations build effective, unbiased AI systems while adhering to GDPR’s data minimization principles? The challenge becomes even more complex for “gatekeeper” platforms subject to the DMA. These companies must collect minimal personal data (a GDPR requirement), ensure their AI systems use representative datasets (an AI Act requirement), share user-generated data with competitors (a DMA requirement), and maintain user privacy throughout this process (a GDPR requirement). This balancing act often feels like trying to solve a Rubik’s Cube; adjusting one side inevitably affects the others.



For businesses operating in the EU, compliance requires understanding the conflicts that become particularly evident in three key areas.

First, while the GDPR pushes for minimal data collection, the AI Act’s emphasis on representative datasets often requires broader data gathering. Organizations must somehow reconcile these competing demands, ensuring they collect enough data to build fair AI systems while not violating the GDPR’s minimization principle.

Second, the DMA’s mandate for data sharing among competitors directly challenges the GDPR’s strict consent requirements. How can a platform share user data with competitors while ensuring GDPR compliance? The consent mechanisms needed for such sharing are complex and potentially confusing for users.

Third, each regulation approaches discrimination differently. The GDPR focuses on preventing discriminatory data processing, the AI Act targets bias in AI systems, and the DMA addresses competitive discrimination. Organizations must simultaneously satisfy all these anti-discrimination requirements, often with different solutions.



Moreover, organizations must maintain detailed records demonstrating compliance with all three frameworks, often requiring separate but interconnected documentation systems. These systems must be designed to simultaneously minimize data collection, ensure dataset representativeness, and facilitate data sharing when required. This requires dedicated teams with expertise in all three regulatory frameworks, significantly increasing operational costs.



Is it possible to successfully navigate this regulatory landscape? Rather than treating each regulation separately, organizations could develop integrated compliance programs considering all three frameworks simultaneously. Continuously monitoring and adjusting data practices to ensure ongoing compliance with all frameworks and the variations of requirements across different countries and regions is an arduous, if not impossible, task.



The challenge of complying with these regulations reflects a broader tension in digital governance: balancing privacy rights, algorithmic fairness, and market competition. While these regulations serve their purposes, their interaction creates significant complexity for businesses. Success requires careful planning around compliance obligations and close cooperation between engineering and compliance teams. Building robust systems that can be fine-tuned with a nuanced understanding of how these frameworks interact is a challenge, and the result may not always satisfy all parties.

As the digital landscape evolves, governments should recognize the complicated nature of this delicate balance. At the same time, organizations must remain agile in their compliance approaches while maintaining these regulations’ spirit: protecting individual privacy, ensuring algorithmic fairness, and promoting healthy market competition.

Learn more: Does Privacy Regulation Compromise the Public Good? | European AI Regulations: Real Risk Reduction or Regulatory Theater? | Raising the Bar, Not Lowering Our Guard, Around Cybersecurity | European Commission’s Attack on Apple Illustrates the Growing Incoherence of Its Competition Policy