You can’t write “AI” without an “I”

Control, clarity, and responsibility in the age of copilots

Seminar 3

13:00 · 15 mins · 07/11/2025

In modern software development, AI tools are branded as copilots, but too often we treat them like autopilots. That’s a mistake.

As someone who writes code with LLMs daily, I’ve learned that effective use of AI starts with understanding roles, responsibilities, and constraints, just as real pilots do.

In aviation, the pilot and the copilot follow a preflight checklist in which each performs distinct, auditable tasks. One reads, the other looks and confirms. Each is 100% responsible for their own actions, not for supervising the other. That’s the mindset we need with AI: not “let it write and I’ll review,” but “I define, it executes within bounds, I confirm.” The goal is not redundancy but complementarity.

I’ve spent the last few months designing workflows to make this model real. I’ll walk through concrete examples from real use cases in my own practice: starting with unit tests that define verifiable expectations, breaking prompts down into atomic operations, and using LLMs only for tasks that are context-bounded and algorithmically checkable.
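As a minimal sketch of that test-first contract (the function name and data here are hypothetical, not from the talk): the developer writes the verifiable expectation first, and the LLM is only asked to produce an implementation that makes it pass.

```python
def normalize_scores(scores):
    # Implementation delegated to the LLM, but bounded by the
    # developer-authored test below: the contract, not the code,
    # is the source of truth.
    total = sum(scores)
    if total == 0:
        return [0.0] * len(scores)
    return [s / total for s in scores]

def test_normalize_scores():
    # Atomic, algorithmically checkable expectations, written
    # by the developer BEFORE the implementation exists.
    assert normalize_scores([2, 2]) == [0.5, 0.5]
    assert abs(sum(normalize_scores([1, 2, 3])) - 1.0) < 1e-9
    assert normalize_scores([0, 0]) == [0.0, 0.0]  # edge case: all zeros

test_normalize_scores()
```

The point is the division of labor: I define the bounds, the tool executes within them, and an automated check confirms the result.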

Alongside the pragmatic, I’ll touch on the ethical: delegation without responsibility leads to plausible deniability, and that’s unacceptable. If a function generated by an LLM causes harm, it’s not “the AI’s fault.” It’s yours.

AI doesn’t write software. Developers do.

That’s why the “I” in AI matters more than ever.