
OpenAI: How to Design Industrial Policy for the AGI Era (Full Translation)


This post translates OpenAI's just-released 13-page policy white paper, "Industrial Policy for the Intelligence Age: Ideas to Keep People First." The full text follows, section by section.

OpenAI has just published a 13-page policy document on how industrial policy should be designed for the AGI era. Several things about it are worth noting:

First, in a formal policy document, OpenAI concedes that "economic gains may concentrate within a small number of firms (including OpenAI itself)."

Second, it proposes a series of concrete measures, including a Public Wealth Fund, a four-day workweek, a right to AI, and adaptive safety nets.

Third, the safety and governance sections deserve equal attention: a model containment playbook (what to do once a dangerous model is released), an incident-reporting mechanism (similar to aviation's near-miss reporting), and a call for frontier AI companies to adopt mission-aligned corporate governance structures.

Fourth, this is the policy counterpart to Sam Altman's "Intelligence Age" blog post, moving from vision to the operational level.

Whatever you think of OpenAI as a company, the document itself is worth reading for anyone working in the field. It is presented here for reference.

https://cdn.openai.com/pdf/561e7512-253e-424b-9734-ef4098440601/Industrial%20Policy%20for%20the%20Intelligence%20Age.pdf


Industrial Policy for the Intelligence Age, OpenAI, April 2026

Preamble

The drive to understand has always powered human progress—creating a flywheel from science to technology, from technology to discovery, and from discovery onward to more science. That inexorable forward movement led us to melt sand, add impurities, structure it with atomic precision into computer chips, run energy through those chips, and build systems capable of creating increasingly powerful artificial intelligence.

In just a few years, AI has progressed from systems capable of fast, narrow tasks to models that can perform general tasks people used to need hours to do. Now, we’re beginning a transition toward superintelligence: AI systems capable of outperforming the smartest humans even when they are assisted by AI. No one knows exactly how this transition will unfold. At OpenAI, we believe we should navigate it through a democratic process that gives people real power to shape the AI future they want, and prepare for a range of possible outcomes while building the capacity to adapt. That’s what this document is for—to start a conversation about governing advanced AI in ways that keep people first.

The promise of superintelligence is extraordinary. Just as electricity transformed homes, the combustion engine remade mobility, and mass production lowered the cost of essential goods, superintelligence will speed up scientific and medical breakthroughs, significantly increase productivity, lower costs for families by making essential goods cheaper, and open the way for entirely new forms of work, creativity, and entrepreneurship.

Today, AI’s impact on work is often measured by the time required for tasks that systems can reliably complete. Frontier systems have advanced from supporting tasks that take people minutes to complete, to tasks that take them hours to complete. If progress continues, we can expect systems to be capable of carrying out projects that currently take people months. This shift will reshape how organizations run, how knowledge is created, and how people find meaning and opportunity. It will also highlight the limitations of today’s policy toolkit and the need for more ambitious ideas to keep people at the center of the transition to superintelligence.

While we strongly believe that AI’s benefits will far outweigh its challenges, we are clear-eyed about the risks—of jobs and entire industries being disrupted; bad actors misusing the technology; misaligned systems evading human control; governments or institutions deploying AI in ways that undermine democratic values; and power and wealth becoming more concentrated instead of more widely shared.

Indeed, we highlight these risks here to raise awareness of the need for policy solutions to address them. Unless policy keeps pace with technological change, the institutions and safety nets needed to navigate this transition could fall behind. Ensuring that AI expands access, agency, and opportunity is a central challenge as we move towards superintelligence. We should aim for a future where superintelligence benefits everyone, and where we:

1. Share prosperity broadly

Share prosperity broadly. The promise of advanced AI is not just technological progress, but a higher quality of life for all. Everyone should have the opportunity to participate in the new opportunities AI creates. Living standards should rise and people should see material improvements through lower costs, better health and education, and more security and opportunity. If AI winds up controlled by, and benefiting only a few, while most people lack agency and access to AI-driven opportunity, we will have failed to deliver on its promise.

2. Mitigate risks

Mitigate risks. The transition toward superintelligence will come with serious risks—from economic disruption, to misuse in areas like cybersecurity and biology, to the loss of alignment or control over increasingly powerful systems. Without effective mitigation, people will be harmed. Avoiding these outcomes requires building new institutions, technical safeguards, and governance frameworks so that advanced systems remain safe, controllable, and aligned—reducing the risk of large-scale harm, protecting critical systems, and ensuring people can rely on AI in their daily lives. As capability scales, safety must scale with it.

3. Democratize access and agency

Democratize access and agency. As capabilities advance, some systems may need to be controlled for safety. But broad participation in the AI economy should not depend on access to the most powerful models—it should depend on access to AI that is useful, affordable, preserves people’s privacy and expands their individual agency. Avoiding a concentration of wealth and control will require ensuring that people everywhere can use AI in ways that give them real influence at work, in markets, and through democratic processes.

The Case for a New Industrial Policy

The Case for a New Industrial Policy. Society has navigated major technological transitions before, but not without real disruption and dislocation along the way. While those transitions ultimately created more prosperity, they required proactive political choices to ensure that growth translated into broader opportunity and greater security. For example, following the transition to the Industrial Age, the Progressive Era and the New Deal helped modernize the social contract for a world reshaped by electricity, the combustion engine, and mass production. They did so by building new public institutions, protections, and expectations about what a fair economy should provide, including labor protections, safety standards, social safety nets, and expanded access to education.

History shows that democratic societies can respond to technological upheaval with ambition: reimagining the social contract, mediating between capital and labor, and encouraging broad distribution of the benefits of technological progress while preserving pluralism, constitutional checks and balances, and freedom to innovate. The transition to superintelligence will require an even more ambitious form of industrial policy, one that reflects the ability of democratic societies to act collectively, at scale, to shape their economic future so that superintelligence benefits everyone.

On this path to superintelligence, there are clear steps we need to take today. People are already concerned about what AI will mean for their lives—whether their jobs and families will be safe, and whether data centers will disrupt their communities and raise energy prices. AI data centers should pay their own way on energy so that households aren’t subsidizing them; and they should generate local jobs and tax revenue. Governments should implement common-sense AI regulation—not to entrench incumbents through regulatory capture but to protect children, mitigate national security risks, and encourage innovation.

But the magnitude of the changes we expect and the potential risks we foresee demand even more. We are entering a new phase of economic and social organization that will fundamentally reshape work, knowledge, and production. It requires not just incremental policy responses but ambitious policy ideas for tomorrow that we must start discussing today. This is the moment to start the conversation: to think boldly, explore new ideas, and collaboratively develop a new industrial policy agenda that ensures superintelligence benefits everyone.

In normal times, the case for letting markets work on their own is strong. Historically, competition, entrepreneurship, and open economic participation have lifted living standards and expanded opportunity. Capitalism, imperfect as it is, remains an effective system for translating human ingenuity into shared prosperity.

But industrial policy can play an important role when market forces alone aren’t sufficient—when new technologies create opportunities and risks that existing institutions aren’t equipped to manage. It can help translate scientific breakthroughs into scaled industries and broad-based economic growth.

A new industrial policy agenda should use government’s existing toolbox for aligning public and private activities: research funding, workforce development, market-shaping tools, and targeted regulation. But governments should not act alone. Nongovernmental institutions should pilot new approaches, measure what works, and iterate quickly, then governments should reinforce successes by aligning incentives and scaling what works through procurement, regulation, and investment. This public-private collaboration should stave off regulatory capture and centralized control, instead preserving the freedom to innovate while ensuring that the onset of superintelligence isn’t dominated by the most powerful forces in society.

We don’t have all, or even most of the answers. Different paths will require different policy responses, and no single set of tools will be enough in any scenario. But we should aim to build an AI economy that is both open and resilient through policies that expand participation, broaden access to opportunity, and ensure that society has the safeguards and institutions needed to manage risk.

This document offers initial ideas for an industrial policy agenda to keep people first during the transition to superintelligence. It is organized in two sections: 1) building an open economy with broad access, participation, and shared prosperity; and 2) building a resilient society through accountability, alignment, and management of frontier risks. OpenAI is offering these ideas to help start a broader conversation about the kinds of policies and institutions needed to navigate the transition. These ideas are intentionally early and exploratory, offered not as a comprehensive or final set of recommendations, but as a starting point for discussion that we invite others to build on, refine, challenge, or choose among through the democratic process.

They also focus on the United States as a starting point, but the conversation—and the solutions—must ultimately be global. The transition to superintelligence is not a distant possibility—it’s already underway, and the choices we make in the near term will shape how its benefits and risks are distributed for decades to come.

Part One: Building an Open Economy

The promise of advanced AI is that it can benefit everyone by translating abundant intelligence into extraordinary progress. It can lower the cost of essential goods, expand opportunity, and give people more time for what is meaningful, relational, and community-building. It can help solve scientific challenges that still elude human effort: curing or preventing diseases, alleviating food scarcity, strengthening agriculture under climate stress, and speeding up breakthroughs in clean, reliable energy. The benefits of major investments in science could emerge within a single lifetime and reach communities far beyond traditional research hubs.

Yet the same capabilities making this progress possible will also disrupt jobs and reshape entire industries at a speed and scale unlike any previous technological shift. Some jobs will disappear, others will evolve, and entirely new forms of work will emerge as organizations learn how to deploy advanced AI.

These changes will not arrive evenly. Without thoughtful policies, AI could widen inequality by compounding advantages for those already positioned to capture the upside while communities that begin with fewer resources fall further behind, excluded from new tools, new industries, and new opportunities. There is also a risk that the economic gains concentrate within a small number of firms like OpenAI, even as the technology itself becomes more powerful and widely used. Workers using AI might well agree that it’s increasing their productivity without believing they’re seeing the benefits.

Maintaining an open economy that is easily accessed and participatory will require ambitious policymaking. The enclosed ideas include proposals to ensure that workers have a voice in the AI transition, since workers have deep knowledge about how work is actually performed and where AI can make work better and safer. Other proposals suggest new mechanisms to share returns from AI-driven growth by expanding access to capital, sharing economic gains more widely, and aligning the benefits of AI-enabled growth with higher living standards. And they aim to modernize economic security by helping people navigate transitions, access new opportunities, and maintain stability as work changes.

Worker perspectives

Worker perspectives. Give workers a voice in the AI transition to make work better and safer, including a formal way to collaborate with management to make sure AI improves job quality, enhances safety, and respects labor rights. Workers have deep knowledge about how work is actually performed and where AI can improve outcomes. They will be critical voices in understanding how AI can be used in workplaces to ensure that technological change will not only lead to improved productivity, but also lead to better jobs and stronger, safer workplaces.

Allow workers to prioritize AI deployments that improve job quality by eliminating dangerous, repetitive, administrative, or exhausting tasks so employees can focus on higher-value work. At the same time, set clear limits on harmful uses of AI that could erode job quality by intensifying workloads, narrowing autonomy, or undermining fair scheduling and pay.

AI-first entrepreneurs

AI-first entrepreneurs. Help workers turn domain expertise into new companies by using AI to handle the overhead that usually blocks entrepreneurship (e.g., accounting, marketing, procurement). Pair microgrants or revenue-based financing with practical “startup-in-a-box” supports such as model contracts and shared back-office infrastructure so that new small businesses can compete quickly. Worker organizations could serve as enablers by offering training, providing shared services, and helping workers negotiate fair commercial terms and protect IP.

Right to AI

Right to AI. Treat access to AI as foundational for participation in the modern economy, similar to mass efforts to increase global literacy, or to make sure that electricity and the internet reach remote parts of the globe. (The internet still isn’t fairly deployed across the globe or even the US; learn from this and seek to rectify those issues when it comes to AI.) Expand affordable, reliable access to foundational models—the building blocks of modern AI systems—and make a baseline level of capability broadly available, including through free or low-cost access points. Support the education, infrastructure, connectivity, and training needed to use these systems effectively, and make sure that workers, small businesses, schools, libraries, and underserved communities are not excluded from the capabilities that drive productivity and opportunity.

Modernize the tax base

Modernize the tax base. As AI reshapes work and production, the composition of economic activity may shift—expanding corporate profits and capital gains while potentially reducing reliance on labor income and payroll taxes. This could erode the tax base that funds core programs like Social Security, Medicaid, SNAP, and housing assistance—putting them at risk. Tax policy should adapt to ensure these systems remain durable.

Policymakers could rebalance the tax base by increasing reliance on capital-based revenues—such as higher taxes on capital gains at the top, corporate income, or targeted measures on sustained AI-driven returns—and by exploring new approaches such as taxes related to automated labor. These reforms should be paired with wage-linked incentives that encourage firms to retain, retrain, and invest in workers, similar to existing R&D-style credits. Together, these changes would help stabilize funding for essential programs while supporting workforce transitions in an AI-driven economy.

Public Wealth Fund

Public Wealth Fund. Create a Public Wealth Fund that provides every citizen—including those not invested in financial markets—with a stake in AI-driven economic growth. While tax reforms help ensure governments can continue to fund essential programs, a Public Wealth Fund is designed to ensure that people directly share in the upside of that growth.

Policymakers and AI companies should work together to determine how to best seed the Fund, which could invest in diversified, long-term assets that capture growth in both AI companies and the broader set of firms adopting and deploying AI. Returns from the Fund could be distributed directly to citizens, allowing more people to participate directly in the upside of AI-driven growth, regardless of their starting wealth or access to capital.

Give every citizen a stake in AI-driven economic growth


Accelerate grid expansion

Accelerate grid expansion. Establish new public-private partnership models to finance and accelerate the expansion of energy infrastructure required to power AI. Use these models to address financing constraints, permitting delays, and siting risks that have limited high-voltage interstate and interregional transmission—and to deliver infrastructure at speed and scale, limit taxpayer risk, and share the upside with the public. Partnerships should be structured to minimize taxpayer exposure to commercial losses and ensure that expanded energy infrastructure translates into lower energy costs for households and businesses.

Efficiency dividends

Efficiency dividends. Convert efficiency gains from AI into durable improvements in workers’ benefits when routine workload declines and operating costs fall, including incentivizing companies to increase retirement matches or contributions, cover a larger share of healthcare costs, and subsidize child and eldercare.

Incentivize employers and unions to run time-bound 32-hour/four-day workweek pilots with no loss in pay that hold output and service levels constant, then convert reclaimed hours into a permanent shorter week, bankable paid time off, or both.

Adaptive safety nets

Adaptive safety nets that work for everyone. Make sure the existing safety net works reliably, quickly, and at scale, because if the transition to superintelligence is going to benefit everyone, the systems designed to provide economic and health security need to deliver without delay or gaps. That starts with unemployment insurance, SNAP, Social Security, Medicaid, and Medicare that are not just in place but fully functional, accessible, and responsive to the realities people will face during the transition.

確?,F(xiàn)有安全網(wǎng)可靠、快速、大規(guī)模地運作。如果向超級智能的過渡要惠及所有人,那么為提供經(jīng)濟和健康安全而設計的系統(tǒng)就必須沒有延遲和缺口地交付。這首先意味著失業(yè)保險、食品券、社會保障、醫(yī)療補助和醫(yī)療保險不僅要到位,還必須全面運作、可及,并能回應人們在轉型中面對的現(xiàn)實

Next, invest in clear, real-time measurement of how AI is affecting work, wages, job quality, and sectoral dynamics, using public metrics such as unemployment rates and indicators of regional or industry-specific displacement. These systems should provide policymakers with timely visibility into where disruption is occurring and how severe it is.

Then, define a package of temporary, expanded safety nets (e.g., expanded or more flexible unemployment benefits, fast cash assistance, wage insurance, training vouchers) that activates automatically when these metrics exceed pre-defined thresholds. When disruption rises above those levels, support would scale up; as conditions stabilize, it would phase out. This ensures that assistance is targeted, time-bound, and proportional to the scale of disruption, and also avoids a permanent expansion of programs.

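The automatic activation and phase-out mechanism described above amounts to a threshold rule. A minimal sketch follows; the metric, function name, and threshold values are illustrative assumptions, not taken from the white paper:

```python
# Illustrative sketch of an automatically triggered safety-net package.
# `displacement_rate`, `activate_at`, and `full_support_at` are
# hypothetical names and values chosen for demonstration only.

def support_level(displacement_rate: float,
                  activate_at: float = 0.08,
                  full_support_at: float = 0.15) -> float:
    """Scale expanded benefits between 0 (off) and 1 (full package).

    Support ramps up linearly once the displacement metric crosses the
    activation threshold, and phases out along the same curve as
    conditions stabilize, keeping assistance time-bound and
    proportional to the scale of disruption.
    """
    if displacement_rate <= activate_at:
        return 0.0  # below threshold: existing programs only
    if displacement_rate >= full_support_at:
        return 1.0  # severe disruption: full expanded package
    # in between: scale support proportionally
    return (displacement_rate - activate_at) / (full_support_at - activate_at)
```

Under these assumed thresholds, a 5% displacement rate triggers nothing, 15% or above activates the full package, and intermediate readings scale support proportionally in both directions, which is what makes the expansion temporary rather than permanent.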
Portable benefits

Portable benefits. Over time, build benefit systems that are not tied to a single employer by expanding access to healthcare, retirement savings, and skills training through portable accounts that follow individuals across jobs, industries, education programs, and entrepreneurial ventures. Public programs can decouple key benefits from employment status by expanding access to retirement and training support regardless of where or how someone works. Implementation can run through portable benefit platforms that pool contributions from multiple sources and route them into standardized accounts attached to the individual, not the job. Retirement systems can also be modernized through pooled structures that allow workers to accrue benefits continuously across employers, reducing gaps and preserving continuity over time.

Pathways into human-centered work

Pathways into human-centered work. Expand opportunities in the care and connection economy—childcare, eldercare, education, healthcare, and community services—as pathways for workers displaced by AI. Although AI can enhance these roles by reducing administrative burdens and enabling greater personalization, human connection will remain an essential part of the profession. As AI reshapes the labor market, these sectors can absorb transitioning workers if supported with investments in training, wages, and job quality. Governments can build training pipelines, support transitions into care roles, and incentivize employers to raise pay and improve conditions in fields facing chronic shortages.

These initiatives could be complemented with a family benefit that recognizes caregiving as economically valuable work and supports evolving work patterns. This benefit could help cover childcare, education, and healthcare while remaining compatible with part-time work, retraining, or entrepreneurship. Together, these efforts would expand access to care, strengthen communities, and create meaningful, human-centered work.

Accelerate scientific discovery and scale the benefits

Accelerate scientific discovery and scale the benefits. Build a distributed network of AI-enabled laboratories to dramatically expand the capacity to test and validate AI-generated hypotheses at scale. These labs would integrate AI systems directly into experimental workflows by automating routine processes, capturing high-quality data, and enabling rapid iteration between hypothesis generation and testing.

Then, build the physical systems and infrastructure needed to translate validated discoveries into real-world use at scale. This includes expanding the capacity of organizations to deploy new technologies, upgrading facilities and systems required for implementation, and aligning financing and incentives to support adoption. It also includes a sustained investment in people: training scientists, technicians, and operators to contribute to AI-enabled science. These investments ensure that breakthroughs move beyond laboratories and into widespread use, while strengthening the workforce and operational systems required to build, maintain, and run the infrastructure that supports AI-enabled discovery. Both laboratory and production infrastructure should be deployed broadly across universities, community colleges, hospitals, and regional research hubs, not concentrated in a small number of elite institutions.

Part Two: Building a Resilient Society

As AI systems become more capable and more embedded across the economy, they may introduce new vulnerabilities alongside new abundance. Some systems may be misused for cyber or biological harm. Others may create new pressures on social and emotional well-being, including for young people, if deployed without adequate safeguards. AI systems may act in ways that are misaligned with human intent or operate beyond meaningful human oversight. And as advanced AI reshapes how people, organizations, and governments operate, it may place new strain on the institutions and norms that societies rely on to remain stable, secure, and free.

We should be clear-eyed about the resilience required here. These new risks won’t be isolated or suitable for addressing one at a time—AI will reshape how work is performed, how decisions are made, how organizations operate, and how states interact. Building resilience therefore means making sure people and institutions can adapt quickly, maintain meaningful agency over how these systems are used, and preserve broadly shared prosperity even as economic and social structures evolve.

我們應該對所需的韌性保持清醒。這些新風險不會是孤立的或適合逐一應對的:AI 將重塑工作方式、決策方式、組織運作方式以及國家互動方式。因此,建設韌性意味著確保人和機構能快速適應,對這些系統(tǒng)的使用方式保持有意義的自主權,并在經(jīng)濟和社會結構演變時保持廣泛共享的繁榮

Over the past several years, leading AI developers including OpenAI have focused heavily on upstream safeguards: development of global standards, transparency around evaluations, mitigations, and risks, and investments in model testing, red teaming, and usage policies designed to identify and mitigate risks before deployment. Policymakers have also focused here, codifying requirements in the EU AI Act and in US state-based regulation. These upstream efforts should continue.

過去幾年,包括 OpenAI 在內的領先 AI 開發(fā)者大量關注上游保障:制定全球標準,圍繞評估、緩解措施和風險的透明度,以及投資于模型測試、紅隊和使用政策,旨在部署前識別和緩解風險。政策制定者也在這方面著力,在歐盟 AI 法案和美國州級法規(guī)中將要求編入法律。這些上游努力應該繼續(xù)

But as AI systems become more capable and more widely deployed, resilience will also depend upon what happens after deployment—when systems must be monitored in real time, operate under uncertainty, and integrate into institutions not designed for agentic workflows.

但隨著 AI 系統(tǒng)變得更強大、更廣泛部署,韌性也將取決于部署之后發(fā)生的事情:當系統(tǒng)必須實時監(jiān)控、在不確定性下運行、并集成到不是為 Agent 工作流設計的機構中時

This is not a new challenge. As electricity spread, societies built safety standards and regulatory institutions. As automobiles transformed mobility, safety systems reduced risk while preserving freedom of movement. In aviation, continuous monitoring and coordinated response systems made flying one of the safest forms of transportation. In food and medicine, testing and post-market surveillance helped ensure safety in everyday use. In each case, resilience was not automatic—it was built with the luxury of time.

這不是一個新挑戰(zhàn)。電力普及時,社會建立了安全標準和監(jiān)管機構。汽車改變出行時,安全系統(tǒng)降低了風險同時保留了出行自由。航空領域,持續(xù)監(jiān)控和協(xié)調響應系統(tǒng)使飛行成為最安全的交通方式之一。食品和藥品領域,測試和上市后監(jiān)測幫助確保了日常使用中的安全。在每種情況下,韌性都不是自動產(chǎn)生的,而是在時間的從容中建設的

As we move toward superintelligence, building a resilient society will require a similar but speedier effort that kicks into gear now. The ideas below are a slate of ambitious approaches to building a more resilient society. They focus on building and scaling safety systems that operate in real-world conditions by establishing mechanisms for trust, accountability, and auditing. They suggest opportunities for strengthening governance so that advanced AI remains controllable, transparent, and aligned with democratic values. And they suggest approaches to improve coordination across companies, governments, and countries so that risks can be identified early, information can be shared, and responses can be executed quickly when needed. Together, these proposals extend important safety work already underway and represent initial ideas to keep AI safe, governable, and aligned with democratic values.

向超級智能邁進的過程中,建設有韌性的社會將需要類似但更快速的努力,而且現(xiàn)在就要啟動。以下是一系列建設更有韌性社會的大膽方案。它們聚焦于通過建立信任、問責和審計機制來構建和擴展在真實世界條件下運行的安全系統(tǒng)。它們提出了加強治理的機會,使高級 AI 保持可控、透明,并與民主價值一致。它們還提出了改善公司、政府和國家之間協(xié)調的方法,以便盡早識別風險、共享信息,并在需要時快速執(zhí)行應對。這些提案共同延續(xù)了已經(jīng)在進行中的重要安全工作,代表了保持 AI 安全、可治理和與民主價值一致的初步想法

應對新興風險的安全系統(tǒng)

Safety systems for emerging risks. Research and develop tools that protect models, detect risks, and prevent misuse across high-consequence domains, including cyber and biological risks as well as other pathways to large-scale harm. Expand the use of advanced AI systems for threat modeling, red teaming, net assessments, and robustness testing to identify and anticipate novel risks early and inform mitigation strategies. Develop and scale complementary protective systems; for example, rapid identification and production of medical countermeasures in the event of an outbreak and expanded strategic stockpiles to prepare for future risks. Then, catalyze competitive safety markets by creating sustained demand for these capabilities through procurement, standards, insurance frameworks, and advance-purchase commitments. Over time, this approach can make safeguards an output of innovation and competition, ensuring that defenses improve as quickly as the risks they are designed to address.

研發(fā)保護模型、檢測風險和防止濫用的工具,覆蓋高后果領域,包括網(wǎng)絡和生物風險以及其他大規(guī)模傷害途徑。擴大高級 AI 系統(tǒng)在威脅建模、紅隊、凈評估和魯棒性測試中的使用,以盡早識別和預測新型風險。開發(fā)和擴展互補保護系統(tǒng),比如在疫情爆發(fā)時快速識別和生產(chǎn)醫(yī)療對策,以及擴大戰(zhàn)略儲備以應對未來風險。然后,通過采購、標準、保險框架和預購承諾創(chuàng)造對這些能力的持續(xù)需求,催化競爭性的安全市場。隨著時間推移,這種方法可以使保障措施成為創(chuàng)新和競爭的產(chǎn)出,確保防御措施與其所針對的風險同步改進

AI 信任棧

AI trust stack. Research and develop systems that help people trust and verify AI systems, the content they produce, and the actions they take—especially as these systems take on more real-world responsibilities. Advance the development of provenance and verification standards and tools that can build trust in AI systems while preserving privacy. This could include enabling secure, verifiable signatures for actions such as generating content or issuing instructions, and developing privacy-preserving logging and audit systems capable of supporting investigation and accountability without enabling pervasive surveillance.

研發(fā)幫助人們信任和驗證 AI 系統(tǒng)、其產(chǎn)出內容和采取行動的系統(tǒng),尤其是當這些系統(tǒng)承擔更多現(xiàn)實世界職責時。推進溯源和驗證標準及工具的開發(fā),在保護隱私的同時建立對 AI 系統(tǒng)的信任。這可以包括為生成內容或發(fā)出指令等行為提供安全、可驗證的簽名,以及開發(fā)能支持調查和問責但不會導致普遍監(jiān)控的隱私保護日志和審計系統(tǒng)

These types of solutions should capture key information about system behavior and use while minimizing the collection of sensitive data, and be designed to support investigation or intervention under clearly defined legal or safety conditions. This work could also include developing and testing governance frameworks that clarify responsibility within organizations, including how accountability could be assigned to specific roles and how delegation, monitoring, and escalation processes could function as systems become more capable. Over time, these efforts could establish a foundation for accountability by building trust in AI interactions and helping ensure that when harm occurs, responsibility can be appropriately allocated.

這類解決方案應在最小化敏感數(shù)據(jù)收集的同時捕獲關于系統(tǒng)行為和使用的關鍵信息,并被設計為在明確定義的法律或安全條件下支持調查或干預。這項工作還可以包括開發(fā)和測試治理框架,明確組織內部的責任,包括如何將問責分配給特定角色,以及隨著系統(tǒng)變得更強大,委托、監(jiān)控和升級流程如何運作。隨著時間推移,這些努力可以通過在 AI 交互中建立信任并幫助確保當傷害發(fā)生時責任能被適當分配來建立問責的基礎

審計制度

Auditing regimes. Strengthen institutions such as the Center for AI Standards and Innovation (CAISI) to develop auditing standards for frontier AI risks in coordination with national security agencies. Use tools such as government procurement, advance-purchase commitments, insurance frameworks, and standards-setting to create and scale a competitive market of auditors and evaluators capable of assessing AI systems and products for safety and security risks, building auditing capacity alongside the technology. Standards should be designed for international adoption to reduce fragmentation and avoid creating unnecessary compliance burdens for small companies, as well as those operating across jurisdictions.

強化 AI 標準與創(chuàng)新中心(CAISI)等機構,與國家安全機構協(xié)調制定前沿 AI 風險的審計標準。利用政府采購、預購承諾、保險框架和標準制定等工具,創(chuàng)建和擴大能夠評估 AI 系統(tǒng)和產(chǎn)品安全與安保風險的審計師和評估師競爭性市場,使審計能力與技術同步增長。標準應為國際采納而設計,減少碎片化,避免為小公司和跨轄區(qū)運營的公司造成不必要的合規(guī)負擔

As we progress toward superintelligence, there may come a point where a narrow set of highly capable models—particularly those that could materially advance chemical, biological, radiological, nuclear, or cyber risks—require stronger controls, including pre- and post-deployment audits using the standards developed in advance. Apply these requirements only to a small number of companies and the most advanced models, preserving a vibrant ecosystem of less powerful systems and the startups building on them. This approach maintains broad access to general-purpose AI while applying targeted safeguards where failures could create the greatest harm, avoiding unnecessary barriers that could limit competition or enable regulatory capture.

隨著向超級智能推進,可能到達一個節(jié)點,少數(shù)高度能干的模型(特別是那些可能實質性推進化學、生物、放射、核或網(wǎng)絡風險的模型)需要更嚴格的控制,包括使用預先制定標準的部署前和部署后審計。這些要求僅適用于少數(shù)公司和最先進的模型,保留較弱系統(tǒng)和基于它們構建的初創(chuàng)企業(yè)的活躍生態(tài)。這種方法保持了對通用 AI 的廣泛訪問,同時在失敗可能造成最大傷害的地方實施有針對性的保障,避免可能限制競爭或導致監(jiān)管捕獲的不必要壁壘

模型遏制手冊

Model-containment playbooks. Develop and test coordinated playbooks to contain dangerous AI systems once they have been released into the world. As AI capabilities advance, societies may face scenarios where dangerous systems cannot be easily recalled—because model weights have been released, developers are unwilling or unable to limit access to dangerous capabilities, or the systems are autonomous and capable of replicating themselves. In these cases, the challenge is containment: limiting the spread of dangerous capabilities, reducing harm, and coordinating responses under real-world constraints. Experience from other high-consequence domains, such as cybersecurity and public health, shows that even when full containment is not possible, coordinated action can still meaningfully reduce impact.

制定和測試協(xié)調手冊,在危險 AI 系統(tǒng)已經(jīng)釋放到世界后進行遏制。隨著 AI 能力推進,社會可能面臨危險系統(tǒng)無法輕易召回的情景:模型權重已經(jīng)公開,開發(fā)者不愿或無法限制對危險能力的訪問,或系統(tǒng)是自主的且能自我復制。在這些情況下,挑戰(zhàn)是遏制:限制危險能力的擴散,減少傷害,在現(xiàn)實世界約束下協(xié)調響應。網(wǎng)絡安全和公共衛(wèi)生等其他高后果領域的經(jīng)驗表明,即使完全遏制不可能,協(xié)調行動仍能有意義地減少影響

使命對齊的公司治理

Mission-aligned corporate governance. Frontier AI companies should adopt governance structures that embed public-interest accountability into decision-making, such as Public Benefit Corporations with mission-aligned governance. These structures should include explicit commitments to ensure that the benefits of AI are broadly shared, including through significant, long-term philanthropic or charitable giving. At the same time, harden frontier systems against corporate or insider capture by securing model weights and training infrastructure, auditing models for manipulative behaviors or hidden loyalties, and monitoring high-risk deployments so no individual or internal faction can quietly use AI systems to concentrate power.

前沿 AI 公司應采用將公共利益問責嵌入決策的治理結構,如使命對齊治理的公共利益公司。這些結構應包含明確承諾,確保 AI 的收益廣泛共享,包括通過重大的長期慈善捐贈。同時,通過保護模型權重和訓練基礎設施、審計模型是否存在操縱行為或隱藏忠誠度、監(jiān)控高風險部署,使前沿系統(tǒng)免受企業(yè)或內部人員捕獲,確保沒有個人或內部派系能悄悄利用 AI 系統(tǒng)來集中權力

政府使用 AI 的護欄

Guardrails for government use. Have policymakers establish clear rules for how governments can and cannot use AI, with especially high standards for reliability, alignment, and safety. These standards should be codified in law and reinforced through technical safeguards. At the same time, use AI to strengthen democratic accountability. As more government decisions are made through AI-assisted workflows, these systems will create clearer digital records of government reasoning and action that can be logged alongside other public records. With appropriate safeguards, oversight institutions such as inspectors general, congressional committees, and courts could use AI-enabled auditing tools to detect abuse, identify harms, and improve accountability at scale.

由政策制定者就政府可以如何使用、不可以如何使用 AI 制定明确规则,对可靠性、对齐性和安全性设置特别高的标准。这些标准应被编入法律并通过技术保障加以强化。同时,利用 AI 加强民主问责。随着更多政府决策通过 AI 辅助工作流做出,这些系统将创建更清晰的政府推理和行动的数字记录,可以与其他公共记录一起归档。在适当的保障下,监察长、国会委员会和法院等监督机构可以使用 AI 赋能的审计工具来检测滥用、识别伤害,并大规模提升问责能力

Also, modernize transparency frameworks (including the Freedom of Information Act) to allow citizens and watchdog organizations to use AI to review targeted questions about government actions while protecting sensitive information. This could include clarifying when AI-interaction logs and agentic action logs constitute federal records that must be retained for specified periods.

此外,現(xiàn)代化透明度框架(包括信息自由法),允許公民和監(jiān)督組織使用 AI 審查關于政府行為的針對性問題,同時保護敏感信息。這可以包括明確 AI 交互日志和 Agent 行動日志何時構成必須保留指定期限的聯(lián)邦記錄

公眾意見輸入機制

Mechanisms for public input. Create structured ways for public input so that alignment isn’t defined only by engineers or executives behind closed doors. As advanced AI makes more decisions that affect people’s lives, societies need shared clarity about what these systems are supposed to do, what values should guide them, and how well they are performing. Make alignment more democratic, legible, and accountable through transparent specifications, evaluation frameworks, and representative input processes. Developers should publish model specifications that describe how systems are intended to behave and share information about how those systems are evaluated. Governments and public institutions should help shape these standards by anchoring them in democratic laws and values, while establishing mechanisms for representative public input to be considered alongside traditional business stakeholders. Together, these approaches help ensure that the advancement of AI reflects the perspectives of the societies that must live with its consequences.

創(chuàng)建結構化的公眾意見輸入渠道,使對齊不僅僅由工程師或高管在閉門會議中定義。隨著高級 AI 做出越來越多影響人們生活的決策,社會需要就這些系統(tǒng)應該做什么、什么價值觀應指導它們、以及它們表現(xiàn)如何達成共同的清晰認知。通過透明的規(guī)格說明、評估框架和代表性輸入流程,使對齊更加民主、可讀和可問責。開發(fā)者應發(fā)布描述系統(tǒng)預期行為的模型規(guī)格書,并分享系統(tǒng)評估的信息。政府和公共機構應通過將這些標準錨定在民主法律和價值觀中來幫助塑造它們,同時建立機制讓代表性的公眾意見與傳統(tǒng)商業(yè)利益相關者一起被考慮。這些方法共同幫助確保 AI 的發(fā)展反映必須與其后果共存的社會的視角

事件報告

Incident reporting. Establish a mechanism for companies to share information about incidents, misuse, and near-misses with a designated public authority. The system should emphasize learning and prevention over punishment, with appropriately scoped public disclosures that ensure transparency and democratic oversight while protecting sensitive technical, national security, and competitive information. Near-miss reporting could include cases where models exhibited concerning internal reasoning, unexpected capabilities, or other warning signals—even if safeguards ultimately prevented harm—so the ecosystem can learn from close calls before they become real incidents.

建立企業(yè)向指定公共機構共享事件、濫用和未遂事件信息的機制。該系統(tǒng)應強調學習和預防而非懲罰,通過適當范圍的公開披露確保透明和民主監(jiān)督,同時保護敏感的技術、國家安全和商業(yè)競爭信息。未遂事件報告可以包括模型表現(xiàn)出令人擔憂的內部推理、意外能力或其他警告信號的案例,即使保障措施最終防止了傷害,生態(tài)系統(tǒng)也可以在險情變成真正事故之前從中學習

國際信息共享

International information-sharing around AI capabilities, risks, and mitigations. Strengthen national evaluation institutions as the foundation for international coordination, beginning with expanding the role of the CAISI as a trusted technical body for evaluating frontier systems, assessing safeguards, and informing government understanding of advanced AI capabilities. Building on this foundation, develop a global network of AI Institutes that collaborate through shared protocols for information exchange, joint evaluations, and coordinated mitigation measures.

圍繞 AI 能力、風險和緩解措施的國際信息共享。以強化國家評估機構作為國際協(xié)調的基礎,首先擴大 CAISI 作為評估前沿系統(tǒng)、評估保障措施和促進政府理解高級 AI 能力的可信技術機構的角色。在此基礎上,發(fā)展一個全球 AI 研究所網(wǎng)絡,通過共享的信息交換協(xié)議、聯(lián)合評估和協(xié)調緩解措施進行合作

Over time, this network could evolve into an international framework akin to the other multilateral institutions focused on safety and standards, one that gives trusted public authorities visibility into frontier AI development; and creates secure cross-lab and cross-country channels for sharing evaluation results, alignment findings, and emerging risks; and likewise supports communicating during crises. To enable effective collaboration, policymakers should ensure that companies can share safety- and risk-related information through these channels without running afoul of antitrust or competition constraints, using clear safe harbors and narrowly scoped information-sharing rules. This system should expand beyond a narrow focus on national security to include a broader range of societal risks, including impacts on youth safety and well-being.

隨著時間推移,這一網(wǎng)絡可以演變?yōu)轭愃朴谄渌麑W⒂诎踩蜆藴实亩噙厵C構的國際框架:給可信的公共機構提供對前沿 AI 開發(fā)的可見性,創(chuàng)建安全的跨實驗室和跨國渠道用于分享評估結果、對齊發(fā)現(xiàn)和新興風險,并同樣支持危機期間的溝通。為實現(xiàn)有效合作,政策制定者應確保企業(yè)能通過這些渠道分享安全和風險相關信息,而不違反反壟斷或競爭約束,使用明確的安全港和范圍窄小的信息共享規(guī)則。該系統(tǒng)應擴展到超越對國家安全的狹隘關注,納入更廣泛的社會風險,包括對青少年安全和福祉的影響

開啟對話

We offer these ideas not as fixed answers but as a starting point for a broader conversation about how to ensure that AI benefits everyone. That conversation should be inclusive and ongoing—engaging governments, companies, researchers, civil society, communities, and families—and should be mediated through democratic processes that give people real power to shape the AI future they want. It also needs to expand globally—bringing in the perspectives of cultures, societies, and governments around the world.

我們提出這些想法不是作為固定答案,而是作為關于如何確保 AI 惠及所有人的更廣泛對話的起點。這場對話應該是包容的和持續(xù)的,納入政府、企業(yè)、研究者、公民社會、社區(qū)和家庭,并應通過賦予人們真正權力來塑造他們想要的 AI 未來的民主程序來進行。它也需要擴展到全球,引入世界各地文化、社會和政府的視角

These ideas are our first contribution to that effort, but only the beginning. Progress will depend on continued iteration, experimentation, and collaboration across institutions and sectors. To help sustain momentum, OpenAI is: (1) welcoming and organizing feedback through newindustrialpolicy@openai.com; (2) establishing a pilot program of fellowships and focused research grants of up to $100,000 and up to $1 million in API credits for work that builds on these and related policy ideas; and (3) convening discussions at our new OpenAI Workshop opening in May in Washington, DC.

這些想法是我們對這一努力的第一份貢獻,但只是開始。進展將取決于跨機構和跨部門的持續(xù)迭代、實驗和合作。為維持勢頭,OpenAI 正在:(1)通過 newindustrialpolicy@openai.com 收集和組織反饋;(2)設立試點項目,提供最高 10 萬美元的研究金和最高 100 萬美元的 API 額度,資助基于這些政策構想的研究;(3)在 5 月將在華盛頓特區(qū)開設的新 OpenAI Workshop 召集討論

https://cdn.openai.com/pdf/561e7512-253e-424b-9734-ef4098440601/Industrial%20Policy%20for%20the%20Intelligence%20Age.pdf

特別聲明:以上內容(如有圖片或視頻亦包括在內)為自媒體平臺“網(wǎng)易號”用戶上傳并發(fā)布,本平臺僅提供信息存儲服務。

Notice: The content above (including the pictures and videos if any) is uploaded and posted by a user of NetEase Hao, which is a social media platform and only provides information storage services.

相關推薦
熱點推薦
遺憾!張雪車隊無緣3連冠:極限反超 第三被罰變第四 無緣領獎臺

遺憾!張雪車隊無緣3連冠:極限反超 第三被罰變第四 無緣領獎臺

念洲
2026-04-18 20:40:05
泰國潑水節(jié)242人死亡,1200人受傷

泰國潑水節(jié)242人死亡,1200人受傷

每日經(jīng)濟新聞
2026-04-18 10:09:13
難以置信!網(wǎng)傳多年前某殯儀館為省燃料,將多名逝者“拼爐”火化

難以置信!網(wǎng)傳多年前某殯儀館為省燃料,將多名逝者“拼爐”火化

火山詩話
2026-04-18 15:43:01
恒大集團許家印被抓捕全過程

恒大集團許家印被抓捕全過程

新浪財經(jīng)
2026-04-18 20:05:24
中國斯諾克傳捷報!趙心童拒絕被逆轉,張安達5-3,德比大戰(zhàn)來襲

中國斯諾克傳捷報!趙心童拒絕被逆轉,張安達5-3,德比大戰(zhàn)來襲

劉姚堯的文字城堡
2026-04-18 21:23:11
俄加快掠奪烏礦產(chǎn)資源,白俄軍隊邊境集結,澤連斯基:別輕舉妄動

俄加快掠奪烏礦產(chǎn)資源,白俄軍隊邊境集結,澤連斯基:別輕舉妄動

史政先鋒
2026-04-18 21:13:39
美媒:中國“殲-50”可能只是F-47隱身戰(zhàn)斗機的“低配仿制型號”

美媒:中國“殲-50”可能只是F-47隱身戰(zhàn)斗機的“低配仿制型號”

零度Military
2026-04-18 14:36:44
95分鐘丟球+比分2-2,熱刺遭絕平,連續(xù)15輪不敗,深陷降級區(qū)

95分鐘丟球+比分2-2,熱刺遭絕平,連續(xù)15輪不敗,深陷降級區(qū)

側身凌空斬
2026-04-19 02:34:28
睡一覺存款清零!多地緊急預警:凌晨0-4點,千萬別這樣放手機

睡一覺存款清零!多地緊急預警:凌晨0-4點,千萬別這樣放手機

記錄生活日常阿蜴
2026-04-18 08:16:20
景甜為頂級富豪代Y生子?。?>
    </a>
        <h3>
      <a href=景甜為頂級富豪代Y生子!? 八卦瘋叔
2026-04-18 09:48:56
犯規(guī)罰退一位!WSBK荷蘭站第一回合:張雪機車車手德比斯獲第4名

犯規(guī)罰退一位!WSBK荷蘭站第一回合:張雪機車車手德比斯獲第4名

全景體育V
2026-04-18 20:43:20
嚴打來了!5月1日起8類行為會入刑,退休老人要注意

嚴打來了!5月1日起8類行為會入刑,退休老人要注意

小談食刻美食
2026-04-18 09:44:44
新代言人火爆全球,以色列慌了

新代言人火爆全球,以色列慌了

俠客棧
2026-04-18 13:14:53
德澤爾比:對2-2的結果感到很遺憾,我們今天理應贏下比賽

德澤爾比:對2-2的結果感到很遺憾,我們今天理應贏下比賽

懂球帝
2026-04-19 03:33:29
“五一”假期大批航班取消

“五一”假期大批航班取消

每日經(jīng)濟新聞
2026-04-18 22:20:39
悲哀!幾個女同事想郊游沒人愿去,吐槽現(xiàn)在男生太精,不好拿捏了

悲哀!幾個女同事想郊游沒人愿去,吐槽現(xiàn)在男生太精,不好拿捏了

火山詩話
2026-04-18 07:26:36
1-0!哈登22+10,騎士輕取猛龍,季后賽開門紅!兩隊實力差距不小

1-0!哈登22+10,騎士輕取猛龍,季后賽開門紅!兩隊實力差距不小

老梁體育漫談
2026-04-19 03:49:25
大數(shù)據(jù)分析,在中國,找個身高1米7年入20萬的老公,到底有多難?

大數(shù)據(jù)分析,在中國,找個身高1米7年入20萬的老公,到底有多難?

深度報
2026-04-18 23:37:27
我媽取走我600萬房本,我馬上報失重辦,隔天弟弟撥了我200多通電話

我媽取走我600萬房本,我馬上報失重辦,隔天弟弟撥了我200多通電話

三農(nóng)老歷
2026-04-17 19:22:34
揮淚斬馬謖!皇馬正式出售2.1億“頂星”!新主帥攜巨星空降加盟

揮淚斬馬謖!皇馬正式出售2.1億“頂星”!新主帥攜巨星空降加盟

頭狼追球
2026-04-18 17:53:28
赛博禅心 2026-04-19 04:15:00
往期回顧 全部

科技要聞

傳Meta下月擬裁8000 大舉清退人力為AI騰位

頭條要聞

伊朗革命衛(wèi)隊向油輪開火 伊朗最高領袖發(fā)聲

頭條要聞

伊朗革命衛(wèi)隊向油輪開火 伊朗最高領袖發(fā)聲

體育要聞

時隔25年重返英超!沒有人再嘲笑他了

娛樂要聞

劉德華回應潘宏彬去世,拒談喪禮細節(jié)

財經(jīng)要聞

"影子萬科"2.0:管理層如何吸血萬物云?

汽車要聞

奇瑞威麟R08 PRO正式上市 售價14.48萬元起

態(tài)度原創(chuàng)

游戲
健康
教育
時尚
手機

讓老粥批直呼“計劃有變”的歲獸代理人,到底是什么東西?

干細胞抗衰4大誤區(qū),90%的人都中招

教育要聞

親愛的老己,歡迎在二十六歲,邁入人生的夏季|中山大學國際新聞420分經(jīng)驗貼

選對發(fā)型,真的能少走很多變美彎路

手機要聞

榮耀600系列參數(shù)、外觀全曝光

無障礙瀏覽 進入關懷版