The Comfort Trap: Getting Too Used to AI

Seminar 3

14:40 · 15 mins · 07/11/2025

We quickly get used to good things. We've already grown accustomed to delegating important tasks to an invisible outsourcer: artificial intelligence. This raises a question: is it ethical? The answer isn't straightforward.

In testing, AI can automate many routine processes, generate test scenarios, and find bugs. But if we stop double-checking the results ourselves, we risk missing important details that AI simply cannot grasp. Is it ethical to shift responsibility onto algorithms and forget the human who is supposed to analyze the results and make the decisions? Not to mention the question of fair compensation for labor.

The same applies to accessibility, the practice of designing interfaces that work for all users. AI can generate image descriptions or help adapt UI design, but it can't truly feel what people with disabilities need. If we hand this work to AI without additional review, we risk creating a product that is inaccessible to many. This, too, is a question of ethics.

Think of artificial intelligence as Woland from The Master and Margarita. Never ask him for anything, especially since he is more powerful than you; he will offer and give everything himself. But then it's up to us to figure out whether it was all done in good conscience, whether we used these capabilities ethically, and whether we remembered the people we are creating for.

In our talk, we’ll explore how to preserve human responsibility in the age of AI, why ethics is not just a set of rules but a conscious choice, and how testing and accessibility have become apostles of digital conscience — helping us make technology truly fair and accessible.