The arrival of the Artificial General Intelligence (AGI) era is now treated as a settled, universally acknowledged fact; the only uncertainty is how much time this "industrial-scale revolution" still needs to run its course.
And this massive social revolution will fundamentally rewrite the current logic of socio-economic growth.
First, whether a job is measurable has become the core factor in deciding whether it will be replaced in the new era. Those closest to writing code will feel the impact first, and indeed it has already arrived: at home, tech giants and small-to-medium enterprises alike are laying off staff and downsizing, and companies abroad are doing exactly the same. In essence, if your work applies known best practices to process data or produce templated output (what we commonly call white-collar work), it can be replaced, and few people can honestly claim their output is worth more than the tokens it would take an AI to produce. Agents are more efficient and cheaper than humans, so the replacement of these jobs is inevitable. The classic examples are junior programmers and "button-pushing" operational roles.
Throughout human history, the scarcest resources have been "intelligence" and "execution," and their scarcity has repeatedly throttled technological progress. With the development of AI, however, the cost of generating intelligence is approaching zero, which means the economic bottleneck of the future will shift to human "validation capability." Validation is the final filtering layer between humans and AI. AI can "measure" rules by reading data across the entire web, but every top human expert carries a set of "weights" in their brain that cannot be easily quantified, built from a lifetime of trial and error, intuition, and experience with extreme edge cases. Judging whether a piece of code is safe, or whether an article will strike a chord: this ability to make the final call, act on it, and bear responsibility for it is validation. Human capacity and energy, though, are finite. Imagine AI writing a precise 1000-page business plan in a single second; as a human, you still have to read, think, and confirm, word by word, whether the plan's strategy is genuinely viable. That checking process is constrained by the processing speed and stamina of the human brain. Between the two lies a massive gap, which we call the "measurability chasm." Consequently, human validation capability becomes the scarce resource of the new era.
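The scale of this chasm can be made concrete with a back-of-the-envelope calculation. The figures below (words per page, reading speed) are illustrative assumptions, not measurements:

```python
# Toy estimate of the "measurability chasm": generation speed vs. validation speed.
# All numbers are illustrative assumptions, not empirical data.

plan_pages = 1000        # AI drafts a 1000-page business plan...
generation_seconds = 1   # ...in roughly one second

words_per_page = 400     # assumed page density
human_reading_wpm = 250  # assumed careful-reading speed, words per minute

validation_minutes = plan_pages * words_per_page / human_reading_wpm

print(f"AI generation time: {generation_seconds} s")
print(f"Human validation time: {validation_minutes / 60:.0f} hours")
print(f"Gap: ~{validation_minutes * 60 / generation_seconds:,.0f}x")
```

Even with generous assumptions about reading speed, the validator is roughly five orders of magnitude slower than the generator, which is the whole point: generation scales, validation does not.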
From these two points a massive problem follows: the younger generation faces a "missing primary loop." In traditional society, a fresh university graduate who wants to become an expert must start as a junior clerk and climb upward by accumulating experience. To become a chief physician, you start as an intern, writing medical records and running errands on ward rounds; to become a senior architect, you start as a junior programmer doing foundational work. This is the "primary loop." But if AI fully replaces entry-level jobs, the growth ladder for newcomers fractures. The traditional "apprenticeship" path of accumulating experience breaks, and in the future we may find ourselves with no senior experts left in the pipeline.
For this seemingly insoluble dilemma, the antidote still lies within AI. In the AGI era, young people have lost entry-level positions, but they have gained AI as a superweapon. We no longer need to extract experience and knowledge from foundational roles; we can use AI to do that job. A business novice, for example, no longer has to spend ten years learning through slow trial and error in the real market; instead, they can run high-intensity simulated drills across countless highly realistic crisis scenarios generated by AI, accelerating the development of their intuition and skills by orders of magnitude. Young people still need real-world feedback to calibrate their coordinates, but AI compresses the long, linear trial-and-error process into a high-frequency, high-density process of compound accumulation. And this is the key to the economy developing in a healthy, sustainable direction in the AI era.
However, as more and more people use AI to improve efficiency, they will keep distilling the experience, judgment rules, and intuition accumulated over many years and handing it all to AI. For top practitioners in an industry especially, the moment they hand over their final trump card, AI will quickly do the work just as well as they do. These top practitioners thereby lose their irreplaceability, having personally created their own "digital doubles."
This brings us to the recently popular parade of odd "skills": "self-skill," "colleague-skill," and so on. The essence of a skill is the distillation and packaging of knowledge, habits, and abilities. The emergence of these skills corresponds exactly to the situation described above, which we define as the "Curse of AI."
Enterprises therefore begin installing large quantities of packaged skills into agents, discover that this saves enormous labor costs, and abandon human review. In the short term this certainly looks good for the boss. But when the unreviewed, hidden errors accumulate past the point of deniability, they will trigger a collapse that sweeps the entire industry.
This is absolutely not alarmist.
Facing this kind of "doomsday," many workers have begun searching for the human "moat," the advantages humans hold over AI, and the word they reach for most often is "taste." In my view, however, un-deconstructed taste is itself a vague, false concept. People use words like "taste" and "judgment" to assert superiority over AI, but this is avoidance, a psychological defense mechanism. The definitions of these words are hopelessly blurry; what counts as excellent taste today may not apply tomorrow. So let us split taste into two categories: "measurable" and "immeasurable." If some so-called "good taste" can be deconstructed, digitized, and measured, then machines will ultimately copy or even surpass it. The "taste" we currently believe only humans possess is, in essence, either taste that machines have not yet gathered enough data to measure, or taste that lives in a domain full of "unknown uncertainties."
The essence of the first layer, "immeasurable" taste, is the unique "weights" in the human brain. These weights do not appear out of nowhere; they come from a person's entire lived experience, the edge cases they have seen, and thousands of hours of trial and error. When a top editor judges whether an article is "funny enough," or a senior CTO judges whether a piece of code "can ship," you can certainly call it "taste," but at the underlying level it is a human using unique "weights" to run final "validation" on the output.
The second layer, and the most central, is the ability to "coordinate human groups." In "human-to-human" fields like fashion and art, a "tastemaker's" core competitiveness is not judging objective beauty or ugliness but coordinating human groups: accurately capturing the sentiment of the moment, creating a narrative, and successfully rallying a group of people to care about and identify with something. Its essence is "meaning making."
And this leads to our next topic—what professions will have commercial value in the future? How should workers transition?
Following the discussion above, we divide future professions into four quadrants (X-axis: AI automation cost; Y-axis: human validation cost):
| Validation \ Automation | AI Automation Cost: Low (highly replaceable) | AI Automation Cost: High (extremely hard to replace) |
|---|---|---|
| **Human Validation Cost: High** (requires extreme trial & error and professional weights) | **Liability Underwriter**: top experts using AI to amplify capacity; conducting security audits and ultimately endorsing and bearing responsibility for critical decisions. | **Director**: setting overall direction and strategic intent; driving AI swarms through intuition under ambiguous conditions to explore the unknown. |
| **Human Validation Cost: Low** (standardized, templated output) | **Displaced Worker**: standardized, easily measurable, non-original work; market value rapidly drops to zero. (Avoid.) | **Meaning Maker**: establishing social consensus, emotional connection, and trust; cultivating unique human tacit knowledge and subjective value. |
Based on the quadrant division above, I suggest avoiding the "Displaced Worker" quadrant in the bottom-left and developing toward the other three:
- Liability Underwriter: The top 1% of experts in vertical domains (e.g., top doctors, senior VCs). They use AI to massively amplify their capacity, and rely on personal reputation and deep professional prowess to provide the final endorsement, and bear responsibility, for extremely complex systems or high-risk decisions. Such work is very hard to automate.
- Director: Facing "unknown unknowns" that cannot be calculated at all, responsible for setting initial strategic intent, directing AI swarms to explore, and relying on sharp intuition to pull projects back on track when they deviate (e.g., entrepreneurs, frontier researchers, etc.).
- Meaning Maker: Deeply cultivating unquantifiable games of social consensus (e.g., art, fashion, community leadership). They build trust and emotional connection between people. Demand for this work will expand rather than shrink. This corresponds to the discussion of "taste" above.
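The quadrant logic above reduces to a simple two-axis lookup. A minimal sketch, encoding this article's own labels (the string keys are just one possible encoding):

```python
def classify_role(automation_cost: str, validation_cost: str) -> str:
    """Map the two axes to the article's four quadrants.

    automation_cost: "low" (highly replaceable) or "high" (hard to replace)
    validation_cost: "low" (templated output) or "high" (expert weights)
    """
    quadrants = {
        ("low", "high"): "Liability Underwriter",
        ("high", "high"): "Director",
        ("low", "low"): "Displaced Worker",  # the quadrant to avoid
        ("high", "low"): "Meaning Maker",
    }
    return quadrants[(automation_cost, validation_cost)]

print(classify_role("low", "low"))    # Displaced Worker
print(classify_role("high", "high"))  # Director
```

The point of writing it out is to make the model's claim explicit: the only input that matters for a role's future is where it sits on these two cost axes, not its current prestige or salary.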
In terms of commercial morphology, the future moat is no longer owning the smartest AI model, but owning the most powerful "validation infrastructure" (for example, tools that compress massive volumes of AI behavior into human-readable summaries) and holding the highest-quality ground-truth data.
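To make "validation infrastructure" less abstract, here is a toy sketch of its smallest unit: compressing a stream of agent actions into a reviewer-facing digest. The record schema (`kind`, `risk`) and the flagging rule are assumptions for illustration, not any real tool's API:

```python
from collections import Counter

def digest(actions):
    """Compress a list of agent action records into a short, human-reviewable summary.

    Each record is a dict like {"kind": "edit_file", "risk": "low"};
    this schema is illustrative only.
    """
    by_kind = Counter(a["kind"] for a in actions)          # tally actions by type
    flagged = [a for a in actions if a.get("risk") == "high"]  # surface what needs sign-off
    lines = [f"{n}x {kind}" for kind, n in by_kind.most_common()]
    lines.append(f"{len(flagged)} high-risk action(s) need human sign-off")
    return "\n".join(lines)

log = [
    {"kind": "edit_file", "risk": "low"},
    {"kind": "edit_file", "risk": "low"},
    {"kind": "run_shell", "risk": "high"},
]
print(digest(log))
```

The design choice mirrors the essay's thesis: the tool does not decide anything; it shrinks thousands of machine actions down to the few lines a human validator can actually afford to read and take responsibility for.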
So what should ordinary people do? This is the core anxiety of the moment: not knowing how to act in the face of this tidal wave. My advice is not to sink into panic and low-grade anxiety. What you need to do is start using AI tools extensively, right now, and actively let them "replace" the automatable parts of your current work. In the process you will discover AI's real baseline, and you will free up time to go deep on the personal pursuits that put you into a state of "flow"; in the future, those pure human passions may well prove the most indestructible commercial moat.
The ultimate destination of technology is human responsibility and a sense of direction, and each of us needs clear coordinates to establish our own value.