is intelligence really the "point"?
an exploration of where humans "retain an edge" in a post-AI world
There is a question I’ve seen a lot of people existentially struggling with in the age of AI - where do humans retain an edge when AI can do so much?
I think to really answer this question, we have to step away from the narrow lens of the past fifty to one hundred years and approach it from a historical one. What was the last “big edge” that humans had that a new technology took away? How did it play out? Who were the winners and losers? And how can this frame the impact of AI over the next century?
The first thing that I can think of is labor.
The steam engine replaces labor as an “edge”
Pre-industrial revolution, human labor was what drove society forward. The wonders of the world - from the pyramids of Egypt to the Taj Mahal - were built on physical labor. While the steam engine in the 18th century was a step change in productivity and in what “human labor” meant, it wasn’t an elimination of labor entirely.
“The steam engine replaced muscle.”

However, in the 20th century, with the invention of the computer and then the internet, the mind became the new “workhorse.” Calculators, then Excel, automated calculation. Smartphones and computers became productivity multipliers. Google took knowledge and placed it at everyone’s fingertips.
Both these revolutions materially impacted society and what was considered the human “edge.” Whereas pure physical strength was relevant and valued pre-industrial revolution, post-industrial revolution human labor was no longer the singular constraint. Humans could leverage skills and machines.
This in turn changed how humans operated and organized. We created governments, corporations, and legal systems. These required lawyers, accountants, and engineers - i.e. knowledge work. With the internet, physical skills became even less important and the “edge” pivoted further toward intellect.
AI replaces the current “form” of intelligence
Over the last decade, we have seen a new revolutionary technology - artificial intelligence. AI began as machine learning - spell checkers, search algorithms (e.g. Google), GPS. Now we have LLMs that can think, analyze, write, and compose art and music. And slowly, we are seeing LLMs move deeper and replace more.
Tech and venture leaders alike are warning that knowledge work is in for a rude awakening. Anthropic co-founder Dario Amodei warned in an essay that LLMs could take over ~50% of entry-level white-collar work over the next several years.1
There are, I believe, two ways to think about AI. The first is that it is the next stage of humans leveraging machines to improve productivity and scale. The second is that it is “replacing” the current form of intellect as the human edge. The honest answer is that it is probably a bit of both. I don’t believe the two views are mutually exclusive.
AI will not replace “humans”
There’s a debate on the “extent” of AI and the big concept of “AGI,” i.e. a “superhuman” intelligence. However, this is a moving goalpost, and many would argue we are already there. In many domains today, AI easily passes the “Turing test.” Many people struggle to detect AI writing, art, and music. There are rumors that AI-generated code is a meaningful percentage of all code written today.
In fact, AI far surpasses “individual” human intelligence in many domains, if we define intelligence as pure knowledge, thinking, or analytical ability. While a trained cardiologist may have a better understanding of the heart than an LLM, an LLM has a better understanding of the heart than the average person who is not a cardiologist.
AI further surpasses many humans in writing capability, creative ability, and even emotional awareness and empathy - or at least the appearance of it. That said, to define an LLM as “human” is both under-rating its capabilities and over-rating them. Andrej Karpathy has called LLMs “ghosts,” or a different species, which I believe is the most accurate description I’ve heard.
What LLMs can do (honestly)
We must ask ourselves - what is it that an LLM cannot do? In that question, I believe we may find the “edge” we are searching for. First, though, we can rule out the things it can do, and the roles that are “at risk.” These include:
Repetitive knowledge work: medical coding, service/call centers
Knowledge-based advice: therapy, medical advice, legal advice, accounting
Analysis: entry/mid-level white collar jobs in finance, law, engineering, academia, marketing
Writing: journalist, blogger, coder, author
Creating: artist, poet, writer, content creator, marketer, actor, producer
Now, the next stage of this is understanding the “level” at which AI can “aid” vs. “replace.” This is, I believe, harder to really assess given the pace of change of the technology and the lack of consistency across roles and expectations. For example, one could argue that the mediocre writer will be replaced but a great writer will never be replaced.
However, one could also argue that there will be a new “kind” of writer: one who has great ideas but whom a lack of ability or language barriers held back from getting those ideas across effectively. One could even imagine society facing a backlash against AI writing and reverting to preferring people who write “normally” and imperfectly over what could be considered “AI slop.”
Human preference and culture are hard to predict, but I go back to my initial point - some writing, and some writers, will likely be replaced, and the writers who remain will likely leverage LLMs to improve quality, productivity, and scale. Whether they will be the same writers as today is harder to say.
What LLMs cannot do (honestly)
Now on to the important question: if these are all things that LLMs can do, at some level, what is it that an LLM cannot do?
Experience things - reinforcement learning from human feedback is an attempt to “teach” LLMs from experience, but it is not true “experience”
Human contact - LLMs cannot touch, feel, taste
Agency - LLMs, as of yet, do not have agency unless given it explicitly. You cannot “trust” an LLM
Taste - I think of Anu Atluru’s essay on “taste.” In a world where knowledge becomes commoditized, “taste” and preference becomes the real edge
Morality and ethics - for all you can “train” an LLM on morals and ethics, it is hard to convince ourselves that an LLM can be intrinsically moral or ethical.
Existential - an LLM is not human and cannot die. Nor can it be alive.
I believe that we as a society have spent much of the last century over-indexing on pure verbal-analytical IQ. From standardized testing as a prerequisite for institutions of higher learning, to higher-paying jobs for people with the best GPAs, to math Olympiads and chess championships, one type of intelligence has been our metric for reward.
However, there are many types of intelligence. As one type of IQ becomes commodified, and creative work becomes less differentiated, these other types of intelligence - the ones that remain strictly human - may become more highly valued. Now, how you value “existential” intelligence is a different question.
Work is an economic concept not reliant on a single edge
Ultimately, we forget that when we fear AI taking over the human edge, what we’re really talking about is human “value.” The only reason knowledge work has been valuable so far is because we have given it value.
It is a little odd to think of work and productivity as reliant on “knowledge” or “creativity.” We once had a society that relied almost exclusively on physical labor. The origin of work was never “knowledge” but rather a payment of goods from one party to another to get done what needed to be done.
Work is simply a function of how we’ve organized society so that economics are shared. As we cycle away from “knowledge” as an edge, the other edges will become “work.” I don’t think AI will change the fact that “work” exists, but rather which kinds of work are considered more or less profitable.
I would guess that the work that became less profitable with the arrival of the industrial revolution or the internet will revert to becoming more profitable - deep thinking, meaning-making, and somatic intelligence.
Perhaps philosophy will become highly valued again.
1. https://www.darioamodei.com/essay/the-adolescence-of-technology


