Fortune | October 18, 2024
AI regulation isn’t happening fast enough—and for Gen Zers like me our future is on the line

 

As artificial intelligence advances at breakneck speed, the younger generation is witnessing firsthand its disruptive impact on everyday life. From academic cheating to political disinformation, from deepfake pornography to the squeeze on entry-level jobs, AI's rapid rise has brought unprecedented challenges. The author, a member of Gen Z, voices concern over AI's risks and introduces AI 2030, an initiative launched by the advocacy group Encode Justice that calls on global leaders to take AI governance seriously and enact stronger regulation so that AI's development benefits everyone.

😟 **AI's challenges:** AI is transforming our lives at unprecedented speed. From academic cheating to political disinformation, from deepfake pornography to pressure on entry-level jobs, its rapid development poses risks that members of Gen Z are witnessing and feeling firsthand.

💪 **Encode Justice's action:** Encode Justice is an advocacy organization of hundreds of young people worldwide dedicated to fair and safe AI development. It has launched AI 2030, a call for global leaders to take concrete steps toward effective AI governance by 2030 in order to protect Gen Z's rights and future.

🌐 **The need for global cooperation:** Meeting AI's challenges demands worldwide cooperation. So far only 34 countries have national AI strategies, and regulation in the U.S. and the EU remains inadequate. Gen Z urges world leaders to work together on AI governance rules that ensure safe, equitable development.

🚀 **AI's potential risks:** AI is advancing at a startling pace, and its risks are growing with it. It could be used to build lethal autonomous weapons, manipulate critical video footage, imitate politicians, or even conduct military operations. These dangers demand serious attention and effective preventive measures.

💡 **AI's positive potential:** Despite its risks, AI holds enormous promise in fields such as medical diagnosis, renewable energy technology, and personalized tutoring, and could drive real human progress. To unlock that potential, we must actively mitigate the risks and ensure fair, safe development.

👨‍💻 **AI's ethical problems:** As the technology matures, its ethical implications demand attention. AI can be used to produce deepfake content, manipulate public opinion, and even sow social chaos. Clear ethical norms are needed to keep its use within moral bounds.

🌍 **AI's global impact:** AI's impact crosses borders and requires a global response. Governments, technology companies, and civil society organizations must work together on governance rules so that AI's development benefits everyone.

🤝 **Gen Z's appeal:** Gen Z calls on world leaders to act now and adopt stronger AI governance measures that ensure safe, fair development and a better future for humanity.

👨‍🎓 **AI's future:** AI will keep developing and its influence will keep growing. To use it well, we must keep learning and exploring, and actively participate in its governance and development.

🌎 **AI's long-term impact:** AI's development will profoundly shape human society. We must work together to ensure it serves humanity's interests and builds a better future.

👨‍🎓 **Gen Z's responsibility:** Gen Z are both witnesses to and participants in the AI era. We have a responsibility to take part in AI governance and development so the technology creates greater value for society.


When I was growing up, artificial intelligence lived in the realm of science fiction. I remember being in awe of Iron Man’s AI system Jarvis as it helped fight off aliens—but laughing at dumb NPCs (nonplayable characters) in video games or joking with my dad about how scratchy and unhuman-like virtual assistants like Siri were. The “real” AIs could only be found as Star Wars’ C-3PO and the like and were discussed mainly by nerds like me. More punchline than reality, AI was nowhere near the top of political agendas. But today, as a 22-year-old recent college graduate, I’m watching the AI revolution happen in real time—and I’m terrified world leaders aren’t keeping pace.

In 2024, my generation is already seeing AI disrupt our lives. Gen Z classmates casually and frequently use ChatGPT to breeze through advanced calculus classes, write political essays, and conduct literary analysis. Young voters are forced to deal with increased amounts of AI-driven political disinformation, and teen girls are targeted by convincing deepfake pornography, with no disclaimers and little recourse. Even in prestigious fields like investment banking, entry-level jobs are beginning to feel squeezed. And tech companies are making ethically dubious plans to bring intimate humanlike AI companions to our lives.

**Responding to AI’s rapid rise**

The speed of change is mind-numbing. If today’s narrow AI tools can supercharge academic dishonesty, sexual harassment, workforce disruptions, and addictive relationships, imagine the impact the technology will have as it scales in access and power in the coming years. My fear is that today’s challenges are just a small preview of the AI-driven turbulence that will come to define Gen Z’s future.

This fear led me to join—and help lead—Encode Justice, a youth advocacy movement focused on making AI safer and more equitable.
Our organization includes hundreds of young people across the world who often feel as if we are shouting into the void on AI risks—even as technological titans and competition-focused politicians push a hasty, unregulated rollout. It’s difficult to express the frustration of watching important lawmakers like Senate Majority Leader Chuck Schumer constantly kick the can down the road with regulation, as he did last week.

We’re done waiting on the sidelines. On Thursday, Encode Justice launched AI 2030, a sweeping call to action for global leaders to prioritize AI governance in this decade. It outlines concrete steps that policymakers and corporations should take by 2030 to help protect our generation’s lives, rights, and livelihoods as AI continues to scale.

Our framework is backed by powerful allies, from former world leaders like Irish President Mary Robinson to civil rights trailblazers such as Maya Wiley, as well as over 15,000 young people in student organizations around the world. We aim to insert youth voices into AI governance discussions that will disproportionately affect us—not to mention our kids.

Right now, the global policymaking community lags behind AI risks. As of last December, only 34 countries out of 190-plus have a national AI strategy. The United States had a start with President Biden’s Executive Order on AI, but it lacks teeth. Across the Atlantic, the EU’s AI Act will not take effect until 2026 at the earliest. Meanwhile, AI capabilities will continue to evolve at an exponential rate.

Not all of this is bad. AI holds immense potential. It has been shown to enhance health care diagnoses, revolutionize renewable energy technology, and help personalize tutoring. It may well drive transformative progress for humanity. AI models are already being trained to predict disease outbreaks, provide real-time mental health support, and reduce carbon emissions. These innovations form the basis for my generation’s cautious optimism.
However, fully unlocking AI’s benefits requires being proactive in mitigating risks. Only by developing AI responsibly and equitably will we ensure that its benefits are shared.

As algorithms become more human-like, my generation’s formative years may become shaped by parasocial AI relationships. Imagine kids growing up with an always “happy” Alexa-like friend that can mimic empathy—and know what kind of jokes you enjoy—while being there for you 24/7. How might that influence our youth’s social development or ability to build real human connections?

**AI in the long run**

Longer term, the economic implications are terrifying. McKinsey estimates that up to 800 million people worldwide could be displaced by automation and need to find new jobs by 2030. AI can already write code, diagnose complex illnesses, and analyze legal briefs faster and cheaper than humans can. (I helped code an AI tool to do the latter while in college.)

Without proper safeguards, these disruptions will disproportionately affect already marginalized groups. A landmark MIT study a few years ago showed that since 1980, over half of the increasing wage disparity between workers with higher and lower education levels can be attributed to automation. Young workers in the global south particularly—whose economies are more vulnerable to AI disruption—could face nearly insurmountable obstacles to economic mobility.

We are effectively being asked to trust that big technology firms such as OpenAI and Google will properly self-regulate as they roll out their products with world-altering potential and little to no transparency. To complicate matters, tech companies often say the right thing when in the public eye. OpenAI CEO Sam Altman famously testified in front of the U.S. Congress begging for regulation.
In private, OpenAI spent considerable effort to dilute regulatory efforts within the EU AI Act. With billions of dollars on the line, competitive industry dynamics can create perverse incentives—such as those that defined the social media revolution—to win the AI race at any cost. Trusting in corporate altruism is a reckless gamble with all our collective futures.

Critics have argued that calls for regulation will simply result in regulatory capture, where a company influences rules to benefit its own interests. This concern is understandable, but the truth is that there is no legitimate alternative to secure AI systems. The technology advances so rapidly that traditional regulatory processes will be unable to keep pace.

**Regulating AI**

So where do we go from here? To start, we need better government regulation and clear red lines around AI development and deployment. We have been tirelessly working on a bill in the California legislature with state senator Scott Wiener that we cosponsored—SB 1047—that would implement these kinds of guardrails for the highest-risk AI systems.

However, AI 2030 lays out a larger roadmap:

- We call for independent audits that would test the discriminatory impacts of AI systems.
- We demand legal recourse for citizens to seek redress if AI violates their rights.
- We push for companies to develop technology that would clearly label AI-generated content and equip users with the ability to opt out of engaging with AI systems.
- We ask for enhanced protections of personal data and restrictions on deploying biased models.
- At the international level, we call on world leaders to come together and write treaties to ban lethal autonomous weapons and boost funding for technical AI safety research.

We recognize that these are complex issues. AI 2030 was developed over months of research, discussion, and constant consultation with civil society leaders, computer scientists, and policymakers.
In conversations, we would often hear that youth activists are naive to demand ambitious action, that we should settle for incremental policy changes. We reject that narrative. Incrementalism is untenable in the face of exponential timelines. Focusing on narrow AI challenges doesn’t do anything about the frontier models that are hurtling forward. What happens when AI can perfectly manipulate critical video footage, imitate our politicians, author important legislation with biases, or conduct military strikes? Or when it begins to achieve eerier capabilities in reasoning, strategy, and emotional manipulation? We are talking about years and months, not decades, to reach these milestones.

Gen Z came of age with social media algorithms subtly pushing suicide to the most vulnerable of us and climate disasters wreaking havoc on our planet. We personally know the dangers of “moving fast and breaking things,” of letting technologies jump ahead of enforceable rules. AI will be all those things, but potentially on a more catastrophic scale. We must get this right.

To do so, world leaders must stop simply reacting to scandals after the damage is done and be more proactive in addressing AI’s long-term implications. These challenges will define the 21st century—short-term solutions will not work.

Critically, we need global cooperation to match threats that are not constrained by nation-state borders. Autocracies such as China have already begun to use AI for surveillance and social control. These same regimes are attempting to use AI to supercharge online censorship and discriminate against minorities. They are (unsurprisingly) beginning to use the United States’ own weak regulations to their advantage and push our kids to be more polarized.

Even well-intentioned developers can accidentally unleash catastrophic harms. To paint a simple thought experiment: Consider Google DeepMind’s AlphaGo, an AI system trained to expertly play Go, a complex strategy game.
When AlphaGo competed against human champions, it made moves never before seen in the game’s 4,000-year history. The strategies were so alien that its own creators did not understand its reasoning—and yet it beat top players repeatedly. Now imagine a similar system being tasked with biological design or molecular engineering. It could design new biochemical processes that are entirely foreign to human understanding. A bad actor could use this to develop unprecedented weapons of mass destruction.

These risks extend beyond the biological. AI systems will become more sophisticated in areas such as chemical synthesis, nuclear engineering, and cybersecurity. These tools could be used to create new chemical weapons, design more destructive nuclear devices, or mount targeted cyberattacks on critical infrastructure. If these powerful capabilities are not safeguarded, the fallout could be devastating.

These are not abstract or distant scenarios. They are genuine global challenges that are crying out for new governance models. Make no mistake: The next several years will be critical. That’s why AI 2030 calls for establishing an international AI safety institute to coordinate technical research as well as create a global authority that would set AI development standards and monitor for misuse. Key global powers like the U.S., EU, U.K., China, and India must be involved.

**A global call to action**

Is AI 2030 an ambitious agenda? We’re counting on it. But my generation has no choice but to dream big. We cannot simply sit back and hope that big tech companies will act against their bottom-line interests. We must not wait until AI systems cause societal harms that we cannot, or struggle to, come back from. We must be proactive and fight for a future where AI development is safe and secure.

Our world is at an inflection point. We can stay as we are, sleepwalking into a dangerous AI future where algorithms exacerbate inequality, erode democratic institutions, and spark conflict.
Or we can wake up and take the path to a thriving, equitable digital age.

We need genuine international cooperation, not photo-op summits. We need lawmakers willing to spend political capital, not corporate mouthpieces. We need companies to radically redefine transparency, not announce shiny appointments to ethics boards with no power. More than anything, we need leaders thinking in terms of civilizational legacies, not just winning reelection.

As a young voter, I demand to see such commitments ahead of the November elections. I know I’m not alone. Millions of young people across the world are watching this AI age unfold with a mix of awe and anxiety. We don’t have all the answers. But we know this: Our generation deserves a voice in shaping the technologies that will come to define our lives and transform the very fabric of society.

What I ask now is: Will the leaders of today listen? Will they step up and risk making a change? Or will they fail and force my generation to shoulder the fallout as they have repeatedly on other critical issues?

As young leaders of tomorrow, we are making the choice to stand up and speak out while there’s still time to act. The future is not yet written. In 2030, let history show that we bent the arc of artificial intelligence for the betterment of humanity when it mattered most.

The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.

