Ethical and Legal AI Compliance in the Age of FOSS

Aligning transparency, accountability and digital sovereignty

Seminar 2

17:20 · 15 mins · 07/11/2025

As AI systems become increasingly embedded in decision-making processes across both public and private sectors, the need for ethical, legally compliant and transparent AI has never been more urgent. The EU AI Act, along with existing frameworks like the GDPR, provides a structured legal backbone for risk management and accountability, but true digital sovereignty demands more than compliance. It calls for proactive design choices that empower users and uphold fundamental rights.

Free and Open Source Software (FOSS) plays a pivotal role in this shift. By enabling auditability, community oversight and modular reuse, FOSS can act as a practical enabler of ethical principles such as transparency, fairness and user control. Yet its deployment in high-risk AI applications also raises questions around shared responsibility, traceability and compliance assurance.

This talk will explore how FOSS can support – and sometimes challenge – AI governance objectives under the AI Act and GDPR.

We will examine:

– The intersection of AI compliance and ethical design, including the role of explainability, risk classification and human oversight.

– How open AI models and datasets can contribute to or conflict with obligations for data minimization, bias mitigation and impact assessments.

– Key governance gaps in the current regulatory landscape, particularly in relation to foundation models and open collaborative development.

– Strategies for building trust and legal certainty into open AI ecosystems through governance frameworks, documentation standards and community norms.

We will also discuss emerging best practices for privacy-preserving and rights-respecting AI development, with examples from real-world projects that combine open innovation with ethical rigor.

Ultimately, the session aims to spark critical reflection on the dual role of FOSS as both a tool for empowerment and a space requiring careful ethical stewardship. By placing user rights, transparency and compliance at the core of AI development, we can better align innovation with digital sovereignty in Europe and beyond.