On defining Artificial Intelligence
The discourse around AI subjects us to foundation-destroying cognitive contagions without ever defining its core concept. We should start by pinning down exactly what we mean by "Artificial Intelligence".
Given the daily prognostication about AI transforming everything, offering a definition of artificial intelligence feels like a rite of passage. The best I’ve been able to come up with so far is:
Artificial intelligence is behaviour that mirrors our own, manifested by us outside of a mind.
This definition embodies my understanding and motivates my preference for more specific terms like "large language model". I want to describe the technology without conjuring mythological automatons, summoning anthropomorphic deities, or breathing life into silicon golems.
In all of my reading, I’ve yet to come across a universally agreed-upon definition of intelligence. We’ve seemingly given up trying to define it rigorously for animals — are bacteria intelligent? Do dolphins lack intelligence because they don’t write dictionaries or fantasize about superyachts? Yet we confidently declare the helpful assistant in our chat window intelligent, even anthropomorphizing it with gender.
Defining intelligence
So can we come up with an acceptable definition of intelligence?
Defining intelligence as merely the ability to react to one’s surroundings would make the universe intelligent: rocks fall from mountaintops and ricochet down a ravine, reacting as they go.
Perhaps, then, we need to qualify reactivity by introducing intentionality. But a thermostat "intends" to maintain a temperature; it was created with purpose.
Something that learns and adapts to its surroundings, then? Metals under repeated strain develop stress patterns that record their history.
What about more advanced problem-solving abilities? Water finds the path of least resistance downhill. The very matter that makes us up seems gifted at minimizing action, something our most advanced computers struggle to simulate. And even slime molds are better at path-finding than the most intelligent of species.
Too many definitions of intelligence rely on other cranial concepts like thinking. How can we define intelligence as ‘critical thinking’ when thinking is arguably defined as a component of intelligence? These definitions are circular.
Defining artificiality
On the point of artificiality, I will only go so far as to say that “man-made” is taken to imply artificial, yet man himself is natural. If being made by humans is what makes something artificial, would our offspring not be artificial too? When does a chair made from natural materials become artificial? Is it simply when we no longer see ourselves in the object? If so, will a suitably advanced artificial intelligence eventually become natural?
Declaring bankruptcy
This clumsy search through a higher-dimensional space of fear, uncertainty and doubt invites us into a semantic void that enables vacuous prognostication and mysticism.
It’s time to admit that in 2025, ‘Artificial intelligence’ is a bankrupt term capable primarily of generating astronomical sums of money.
Articles breathlessly discussing ‘AI’ should be read as signals — not of technological sophistication, but of conceptual poverty. When writers can’t even define their central term, why should we trust their predictions about economic collapse or transcendent singularities?
And with that, I’d like to announce my upcoming post predicting the impact AI will have on technology and the wider economy.