With 62% of developers now relying on AI coding assistance, we face an unprecedented challenge: much of that output is trusted and shipped without verification, creating new attack vectors in the software supply chain. This talk examines the critical security vulnerabilities emerging from AI-generated code. We’ll explore how AI tools inherit flaws from their training data and operate without transparency, and discuss practical strategies for responsible AI integration in development workflows.
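As a concrete illustration of the kind of inherited flaw discussed here, consider a hypothetical snippet of the sort an AI assistant might suggest (not taken from any specific tool): building a SQL query by string interpolation, a pattern common in training data, versus the parameterized form that resists injection:

```python
import sqlite3

# Illustrative, hypothetical example: an injectable query of the kind an AI
# assistant might emit, next to the safe parameterized alternative.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

def find_user_unsafe(name):
    # Vulnerable: attacker-controlled input is spliced into the SQL string.
    query = f"SELECT role FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def find_user_safe(name):
    # Safe: the driver binds the value, so it is never parsed as SQL.
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "x' OR '1'='1"
print(find_user_unsafe(payload))  # injection succeeds: every row leaks
print(find_user_safe(payload))    # empty result: no user named the payload
```

The point is not this one bug but the pattern: flawed idioms that are frequent in public code resurface in AI suggestions, so unverified acceptance propagates them.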
