As artificial intelligence becomes increasingly integrated into sensitive domains like healthcare and finance, the tension between data utility and privacy protection has never been more critical. This talk explores how Fully Homomorphic Encryption (FHE) is reshaping the landscape of privacy-preserving computation, enabling AI services that can analyze and learn from data without ever seeing it in plaintext.
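To make that concrete up front: FHE schemes support additions and multiplications directly on ciphertexts, so a server can transform encrypted data and return an encrypted result that only the key holder can read. Below is a minimal sketch using the open-source TenSEAL library (a Python wrapper around Microsoft SEAL); the data and parameter values are illustrative, drawn from TenSEAL's own tutorials rather than anything specific to this talk.

```python
# A minimal sketch of computing on data the server never sees in
# plaintext, using TenSEAL (pip install tenseal). Parameters follow
# TenSEAL's tutorials and are illustrative, not security advice.
import tenseal as ts

# Client side: create a CKKS context and encrypt the data.
context = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=8192,              # lattice dimension: larger is more secure, but slower
    coeff_mod_bit_sizes=[60, 40, 40, 60],  # bounds the multiplicative depth
)
context.global_scale = 2 ** 40             # fixed-point precision for CKKS
context.generate_galois_keys()             # enables rotations (used by dot products)

readings = [98.6, 120.0, 80.0]             # e.g., a patient's vitals
enc_readings = ts.ckks_vector(context, readings)

# Server side: arithmetic happens directly on the ciphertext.
enc_result = enc_readings * 0.5 + 1.0      # element-wise affine transform

# Client side: only the secret-key holder can decrypt.
print(enc_result.decrypt())                # approximately [50.3, 61.0, 41.0]
```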
We’ll dive into the practical realities of implementing FHE systems, examining both the technical challenges (computational overhead that can run orders of magnitude beyond plaintext execution, complex parameter selection, and heavy memory demands) and the transformative potential for ethical AI deployment. Through concrete examples, we’ll explore how encrypted machine learning can enable medical research on patient data without compromising individual privacy, and how financial institutions can detect fraud patterns while keeping transaction details encrypted.
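As a hedged illustration of the encrypted-inference pattern behind both examples, here is a sketch of a service scoring an encrypted transaction against its own plaintext linear model; the feature layout and weights are invented for the example, and a production fraud model would be far more involved.

```python
# Sketch: a service scores an encrypted transaction with a plaintext
# linear model, without ever decrypting the data. The features and
# weights below are hypothetical, purely for illustration.
import tenseal as ts

# Same illustrative CKKS setup as in the previous sketch.
context = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=8192,
    coeff_mod_bit_sizes=[60, 40, 40, 60],
)
context.global_scale = 2 ** 40
context.generate_galois_keys()             # dot() relies on ciphertext rotations

# Client side: encrypt the transaction features
# (amount, hour of day, merchant risk, card velocity).
features = [250.0, 3.0, 0.7, 4.0]
enc_features = ts.ckks_vector(context, features)

# Server side: plaintext model, encrypted input, encrypted output.
weights = [0.8, -0.3, 1.5, 0.2]            # hypothetical learned coefficients
bias = -1.0
enc_score = enc_features.dot(weights) + bias

# Client side: decrypt the risk score; the server never saw the data.
score = enc_score.decrypt()[0]
print(f"fraud risk score: {score:.2f}")
```

Even this toy example surfaces the costs the talk examines: the ciphertext for four numbers occupies orders of magnitude more memory than the plaintext, and the parameter choices above silently cap how many multiplications can be chained before the result degrades.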
The talk addresses critical questions about AI ethics in the age of data abundance: How can we build AI systems that respect individual privacy by design? What does it mean for algorithmic fairness when training data remains encrypted? How can open-source development ensure that privacy-preserving AI benefits everyone, not just those with the resources to develop proprietary solutions?
Drawing from hands-on research experience, this presentation offers a student’s perspective on the current state of FHE technology—what works, what doesn’t, and what needs to happen to make privacy-preserving AI practical for everyday use. We’ll discuss the gap between theoretical possibilities and real-world deployment, and explore how the next generation of developers can contribute to building more ethical AI systems.