Artificial Intelligence is not neutral. Every algorithm carries a worldview, and too often, it’s one that’s opaque, centralized, and unaccountable. This talk explores why AI ethics cannot be an afterthought or a moral sticker slapped on at the end, but must be embedded from the very beginning of design and development.
Free and open-source software has a unique role to play, not just in making code transparent, but in enabling communities to audit, govern, and reshape the systems that affect us all. We’ll discuss how open models can advance accountability and digital rights, but also how the term “open” is increasingly co-opted by corporate interests to legitimize closed, extractive practices, a move often called “openwashing.”
Through real-world examples and speculative provocations, we’ll ask: What does responsible AI really look like in practice? How do we move beyond just opening code to embedding ethics into architectures, datasets, licenses, and development cultures?
Free software can be the backbone of a more just AI, but only if we design it to be.