khlilo

The AI-era anxiety troubling everyone, and a seemingly insoluble dilemma

2026-04-07


That the era of Artificial General Intelligence (AGI) is coming is now treated as settled fact; the only uncertainty is how long this "industrial-scale revolution" will take to fully succeed.

And this massive social revolution will fundamentally change the current logic of socio-economic growth.

First, measurability has become the core factor determining whether a job will be replaced in the new era. Those closest to writing code feel the impact first, and that impact has already arrived: in China, companies large and small are laying off staff and shrinking teams, and firms abroad are doing the same. Essentially, if your work consists of applying known best practices to process data or produce templated output (classic white-collar work), it can be replaced, and I doubt anyone believes their value can outpace tokens. Agents are more efficient and cheaper than humans, so the replacement of these jobs is inevitable. The classic examples are junior programmers and "button-pushing" operational roles.

Throughout human history, the scarcest resources have always been intelligence and execution, and their scarcity has constrained technological progress. As AI develops, the cost of generating intelligence approaches zero, so the future economic bottleneck shifts to human validation capability. Validation is the final filtering layer between humans and AI. AI can "measure" rules by reading the entire web, but every top human expert carries a set of "weights" in their brain that cannot easily be quantified, built from a lifetime of trial and error, intuition, and experience with extreme edge cases. Judging whether a piece of code is safe, or whether an article truly lands with readers: this ability to make the final call, execute, and bear responsibility is validation. Human capacity and energy are ultimately limited. Imagine an AI writing a precise 1000-page business plan in one second; as a human, you still have to read it line by line and confirm whether its strategy is actually viable, a checking process bounded by the speed and stamina of the human brain. The vast gap between the two is what we call the "measurability chasm." Human validation capability thus becomes the scarce resource of the new era.
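To make the gap concrete, here is a toy back-of-envelope sketch. All numbers are illustrative assumptions of mine, not measurements: generation is effectively instant, while validation time scales linearly with the size of the output.

```python
# Toy model of the "measurability chasm": AI generates output near-instantly,
# but human validation time grows linearly with output size.
# The 10 pages/hour reading rate is an invented, illustrative assumption.

def validation_backlog_hours(pages_generated: float,
                             human_pages_per_hour: float = 10.0) -> float:
    """Hours of expert reading needed to validate what AI emits in seconds."""
    return pages_generated / human_pages_per_hour

# A 1000-page business plan, generated in ~1 second, still costs
# 100 hours (roughly 2.5 work weeks) of expert validation at 10 pages/hour.
print(validation_backlog_hours(1000))  # -> 100.0
```

However the assumed reading rate is tuned, the asymmetry survives: generation cost is flat while validation cost keeps scaling with volume, which is the chasm the paragraph describes.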

From these two points, a huge problem follows: the younger generation faces a broken "junior loop." In traditional society, a fresh university graduate who wanted to become an expert started as a junior employee, accumulating experience and moving up. To become a chief physician, you started as an intern, writing charts and doing odd jobs on rounds; to become a senior architect, you started as a junior programmer. That is the junior loop. But now AI is replacing exactly these entry-level jobs, fracturing the growth ladder for newcomers and breaking the traditional apprenticeship path of experience accumulation. In the future we may face a world with no senior experts left to draw on.

The antidote to this seemingly insoluble dilemma still lies in AI. In the AGI era, young people have lost entry-level positions but gained AI as a superweapon. We no longer need to extract experience and knowledge from junior roles; AI can serve that purpose instead. A business novice, for example, no longer needs ten years of slow trial and error in a real market; they can run high-intensity drills across countless realistic crisis scenarios generated by AI, accelerating the growth of their intuition and skills many times over. Young people still need real-world feedback to calibrate themselves, but AI compresses long, linear trial and error into a high-frequency, high-density process of compounding accumulation. This is the key to keeping the AI-era economy on a healthy growth path.

But as more people use AI to boost their efficiency, they will distill the experience, judgment rules, and intuition accumulated over many years and hand them to AI. For top practitioners in a field especially, once they surrender their last trump card, AI will quickly perform as well as they do. These top practitioners thereby lose their irreplaceable value, having built their own "digital doubles" with their own hands.

This brings us to the recent craze of "skills" of every description: "self skills," "colleague skills," and so on. A skill is, in essence, knowledge, habits, and abilities distilled and packaged. The rise of these skills is exactly the situation described above, and we define it as the "Curse of AI."

Enterprises then install large libraries of accumulated skills into agents and put them to work, discover the enormous labor savings, and abandon human review. In the short term this is certainly good for the boss. But when the hidden errors introduced without review accumulate past the point where they can be ignored, they will trigger a collapse that ripples across an entire industry.
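The compounding risk of skipping review can be sketched with a toy probability model. The error rate and review catch rate below are invented for illustration, but the shape of the result is general: even a small per-task error rate makes a long unreviewed agent pipeline almost certain to contain a hidden mistake.

```python
# Toy model (assumed rates, purely illustrative): the probability that a
# pipeline of agent "skills" completes with no hidden error, with and
# without a human review step that catches most mistakes.

def p_no_hidden_error(tasks: int, error_rate: float,
                      review_catch_rate: float = 0.0) -> float:
    """Probability that all `tasks` finish clean, when each task has an
    `error_rate` chance of a mistake and review catches `review_catch_rate`
    of those mistakes before they ship."""
    residual = error_rate * (1.0 - review_catch_rate)
    return (1.0 - residual) ** tasks

# 1000 tasks at a 1% error rate: near-certain hidden failure without review,
# versus a majority chance of a clean run when review catches 95% of errors.
unreviewed = p_no_hidden_error(1000, 0.01)
reviewed = p_no_hidden_error(1000, 0.01, review_catch_rate=0.95)
print(f"no review: {unreviewed:.4f}, with review: {reviewed:.4f}")
```

The point of the sketch is only that errors compound multiplicatively across tasks, so removing the review layer converts a small per-task risk into a near-guaranteed systemic one.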

This is absolutely not alarmist.

Facing this "doomsday," many workers have started searching for the human "moat," asking where humans still hold an advantage over AI, and the most frequent answer is "taste." In my view, taste left undeconstructed is a vague, false concept. People use words like "taste" and "judgment" to assert their superiority over AI, but this is avoidance, a psychological defense mechanism. These words are extremely fuzzy; what counts as excellent taste today may not apply tomorrow. So we split taste into two categories: measurable and immeasurable. If some so-called "good taste" can be deconstructed, digitized, and measured, then machines will eventually copy or even surpass it. The "taste" we currently believe only humans possess exists only because machines have not yet collected enough data to measure it, or because it lives in a domain full of unknown uncertainty.

The first layer of "immeasurable" taste is, at its core, the unique "weights" in the human brain. These weights do not appear out of nowhere; they come from a person's entire lived experience, the edge cases they have seen, and thousands of hours of trial and error. When a top editor judges whether an article is funny, or a seasoned CTO judges whether a piece of code can ship, you may call it "taste," but at the underlying level it is a human applying their own unique "weights" to perform the final validation of an output.

The second layer, and the most fundamental, is the ability to coordinate human groups. In "human-to-human" fields such as fashion and art, the core competence of a tastemaker is not judging objective beauty but coordinating people. Their "taste" shows in accurately capturing the mood of the moment, creating a narrative, and successfully rallying a group of people to care about and identify with something. Its essence is meaning-making.

And this leads to our next topic—what professions will have commercial value in the future? How should workers transition?

Following the discussion above, we divide future professions into four quadrants (X axis: AI automation cost; Y axis: human validation cost):

| Validation \ Automation | AI automation cost: low (highly replaceable) | AI automation cost: high (extremely hard to replace) |
| --- | --- | --- |
| Human validation cost: high (requires extreme trial and error and professional "weights") | Liability Underwriter: top experts who use AI to amplify their capacity, audit critical decisions for safety, and provide the final endorsement, bearing responsibility | Director: sets the overall direction and strategic intent; steers AI swarms by intuition under ambiguity to explore the unknown |
| Human validation cost: low (standardized, templated output) | Displaced Worker: standardized, easily measurable, non-original work; market value rapidly drops to zero (avoid) | Meaning Maker: builds social consensus, emotional connection, and trust; cultivates uniquely human tacit knowledge and subjective value |
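The four-quadrant model can be encoded as a minimal lookup, just to make the mapping between the two axes and the four roles explicit. The labels are the article's; the code itself is an illustrative sketch.

```python
# Minimal encoding of the article's four-quadrant career model.
# Keys are (AI automation cost, human validation cost), each "low" or "high".

def classify(ai_automation_cost: str, human_validation_cost: str) -> str:
    """Map a (automation cost, validation cost) pair to its quadrant."""
    quadrants = {
        ("low", "high"): "Liability Underwriter",   # AI does the work, human endorses
        ("high", "high"): "Director",               # unknown unknowns, intuition-led
        ("low", "low"): "Displaced Worker",         # templated output, avoid
        ("high", "low"): "Meaning Maker",           # social consensus, trust
    }
    return quadrants[(ai_automation_cost, human_validation_cost)]

print(classify("low", "low"))  # -> Displaced Worker
```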

Based on this quadrant map, I suggest avoiding the bottom-left "Displaced Worker" and developing toward the other three:

  • Liability Underwriter: the top 1% of experts in each vertical domain (e.g., top doctors, senior VCs). They use AI to massively amplify their capacity and stake their personal reputation and proven expertise on the final endorsement of extremely complex systems or high-risk decisions, bearing the responsibility that comes with it. This work is hard to automate and demands genuine professional depth.
  • Director: faces "unknown unknowns" that cannot be calculated at all; sets the initial strategic intent, directs AI swarms to explore, and relies on sharp intuition to pull a project back on track when it drifts (e.g., entrepreneurs, frontier researchers).
  • Meaning Maker: works the unquantifiable games of social consensus (e.g., art, fashion, community leadership), building trust and emotional connection between people. Demand for this work will expand rather than shrink. This maps directly onto the discussion of "taste" above.

In terms of business models, the future moat is no longer owning the smartest AI model but owning the most powerful "validation infrastructure" (for example, tools that compress massive volumes of AI behavior into human-readable summaries) and the highest-quality ground-truth data.

So what should ordinary people do? This is the core anxiety of our time: not knowing what to do in the face of the tide. My advice is to refuse panic and low-grade anxiety. What you most need to do is start using AI tools heavily, right now, and actively "replace" the automatable parts of your own work with them. In the process you will learn where AI's real limits lie, and you will free up time to go deep on the personal passions that put you in "flow." In the future, those purely human passions may well be the most unassailable commercial moat.

Technology ultimately ends in human responsibility and a sense of direction; we need clear coordinates by which to establish our own value.