
The Most Expensive Bug in AI Is a Word

Originally published on LinkedIn.

When you misname systems, you misplace trust. And then you build workflows that no one can truly own when they fail.

"Computer science is a terrible name," the MIT professor says, almost casually. Not because the field is weak, but because the name makes beginners confuse the tool with the essence.

That pattern is repeating in AI, at scale.

We named things in a way that inflates trust:

  • Artificial intelligence isn’t intelligent, at least not in the human sense.
  • AI agents have no agency.
  • Data scientists aren’t automatically doing science.

These are not insults. They are boundary markers.

Because naming is never neutral. Naming is governance. A label sets expectations, budgets, and responsibility.

Here is the real damage of wrong naming.

When you call a system ā€œintelligentā€, people quietly switch off parts of their own intelligence.

  1. They stop interrogating assumptions.
  2. They stop asking what the goal really is.
  3. They stop checking whether the answer makes sense in context.

Not because they became lazy, but because the name gave them psychological permission to outsource thinking.

And that is the most dangerous failure mode in enterprise AI.

Today’s models can be fast and smart in a narrow way. They compress patterns, generate plausible outputs, and help us move quickly.

But:

  • They do not carry intention.
  • They do not carry accountability.
  • They do not carry the lived context of your organisation, your clients, your ethics, your risk.

So if we misname them, we misuse them.

In our tradition, naming carries "amanah", a responsibility. A word can be a trust, or a trap.

So I’ll ask a practical question that is also philosophical:

Which mislabel in your organisation has caused the most harm ("intelligence", "agent", or "science"), and what behaviour did it silently create?