A Skill Is Not a Person: What Remains of Human Value When Everything Becomes a Skill?

A trend has recently become popular: turning people into skills.
A coworker can become coworker.skill. An ex-girlfriend can become ex-girlfriend.skill. Linus can become linus.skill. Trump can become trump.skill. As long as a person has enough public material, a recognizable speaking style, and a stable enough set of observable behaviors, they seem compressible into something that can be packaged, invoked, and reused like a capability module.
On the surface, this feels impressive. It looks like a new kind of engineering power: abstracting human experience, style, and knowledge into callable skills so that AI can play different roles, speak in different voices, and collaborate in place of different people. It certainly tells us something important about the agent era: expectations for AI are expanding rapidly. We no longer want AI merely to answer questions. We want it to occupy roles, assume responsibilities, and move into domains that once seemed irreducibly human.
But what makes this trend unsettling is not that it is weak. It is that it feels almost real while not being real at all.
On one side, it reveals a desire to control and modularize intelligence itself: to break the world into reusable units of capability. On the other, it reveals confusion and anxiety. If everything can be turned into a skill, then where exactly does human value remain? If a person can be distilled into a prompt, a document, a role card, or a behavioral wrapper, what essential difference remains between a human being and a tool?
I increasingly think the answer is this: a human being cannot be genuinely digitized into a skill.
What a skill can extract are only those parts of a person that are easiest to document, structure, formalize, and repeat. It can imitate expression, summarize methods, restate knowledge, and reproduce something that resembles tone or style. But what it captures is still only the residue of output, not the deeper capacity that generated that output in the first place.
This is where many people make the most basic mistake.
They see what someone has written over time and conclude that the writing itself is the person’s ability. They see code, documents, speeches, essays, interviews, and decision records, and assume that if those artifacts are collected, organized, and fed into a model, the person themselves has been captured. But that leap is far too large.
A person’s outputs are not their ability. The ability that produced those outputs is their ability.
These two ideas seem close, but they are not the same thing at all.
A writer’s real ability is not the sentences already written, but the capacity to observe, to feel tension, to structure experience, to discover order inside confusion, and to approach what does not yet have language. An engineer’s real ability is not the code already committed, but the capacity to understand systems, judge under incomplete information, trade off under constraints, and anticipate future failure modes. A political figure is not only their public speaking style or social media voice; their deeper power — or danger — lies in how they move emotion, shape narratives, read collective psychology, exploit institutional gaps, and create real-world consequences.
A skill can at best learn the traces those abilities leave behind. It rarely reaches the generative source.
This does not mean skills are worthless. On the contrary, skills are extremely valuable. They can formalize explicit rules and methods, compressing experience that once required long apprenticeship into a callable and much cheaper form. That is real progress. Knowledge has often remained monopolized precisely because it was never structured. In that sense, the rise of skills is part of a broader acceleration of knowledge externalization and workflow modularization.
The problem begins when we start believing that what has been expressed is already the whole person.
It is not.
Philosophy has warned about this for a long time. Michael Polanyi said, “We know more than we can tell.” What matters most in a person is often not what can be fully written down, but what remains tacit: intuition, judgment, timing, embodied experience, situational feel. You can document a skilled craftsperson’s process, but not fully replicate the feel in their hands or the judgment in their eyes. You can summarize a great manager’s principles in slides, but not fully capture their sense of timing, restraint, or silence in a complicated human situation.
This also recalls Aristotle’s distinction between episteme, techne, and phronesis. Skills are best at compressing techne — teachable, repeatable technique. But what is hardest to digitize in a human being is often phronesis: practical wisdom. Not just rule application, but the capacity to make fitting judgments within concrete reality. AI skills can look like encapsulated technique. They are still far from practical wisdom.
Because a person is not a rule executor. A person is a being with history, desire, injury, memory, prejudice, conflict, limitation, and contradiction. Human judgment does not emerge from nowhere. It grows out of lived paths, losses, books, wounds, loves, failures, and accumulated tension. Very often, what determines an output is not merely what rules someone knows, but why they think this way now. That “why” is exactly the part least available to skillification.
So when people build coworker.skill, ex-girlfriend.skill, linus.skill, or trump.skill, I see something that is half ambition and half anxiety.
The ambition is obvious. For the first time, people can truly imagine modularizing roles, assetizing experience, and semi-tooling personality and style. We want AI to be as usable as a tool and as flexible as a person, so we begin looking for which parts of a person can be abstracted into a tool.
The anxiety is just as obvious. In a world where everything can be wrapped into prompts, roles, skills, and workflows, we become less certain about what remains irreducibly human. So we keep trying to prove that Linus can be packaged, Trump can be packaged, the ex-girlfriend can be packaged, the teacher can be packaged, the investor can be packaged, the founder can be packaged. As if, if we keep going long enough, a person will turn out to be only a more complicated callable interface.
I think this is one of the most dangerous illusions in current AI discourse.
It mistakes observable output for human essence, explicit style for generative capability, and summarized method for existence itself.
But a real human being is far thicker than any of those layers.
A person is not just someone who says certain things, writes certain sentences, or makes certain judgments. Beneath those judgments is a structure of life that cannot be fully formatted: memory, emotion, shame, obsession, fear, desire, hesitation, ambition, value ranking, sense of timing, relational awareness, real-world pressure, bodily state, and even things the person themselves cannot fully articulate but which nevertheless shape them.
A skill can closely resemble the shell a person leaves behind, but it remains very far from the living, struggling, generating being themselves.
That is why I think what deserves renewed respect in the AI era is not what is easiest to compress into a skill, but what is hardest to compress: how judgment forms, why taste becomes what it is, where risk appetite comes from, why a certain style has force, how restraint and proportion are built over time. These things may be imitated. They are much harder to reproduce.
So perhaps the question is not whether people will be replaced by skills. The deeper question is whether people will start to understand themselves as skills.
Once that happens, people will begin overvaluing what is easiest to externalize, package, and call, while neglecting what is slower, deeper, less quantifiable, and more constitutive of who they are. In the end, a person may come to live less as a complex being and more as a supplier of training material for machines.
That is what I find most alarming.
The age of skills is unavoidable. It will come, and it will bring real productivity gains. People will continue to package knowledge into skills, abstract roles into skills, fit styles into skills, and crystallize workflows into skills.
But if we conclude from this that a person is nothing more than the sum of their skills, then we have made a profound mistake.
A skill can only seize the parts of a person that have already been expressed, formalized, and abstracted. The real person still exists in what has not yet been expressed, cannot be exhausted, and resists being fully formatted.
AI may become more and more human-like. That does not mean it has understood what a human being is.
And perhaps one of the most important tasks for human beings in the AI era is precisely to protect this distinction:
do not let stronger tools teach us to understand humans as tools.


