Published on November 16, 2024 7:26 AM GMT
Many people think AGI development is primarily a competition between corporate labs. This framing is dangerously incomplete, missing three critical factors: the emergence of unaligned AGI as the primary adversary, the imminent transition from corporate to national control, and the inadequacy of current regulatory proposals.
The Shifting Landscape
Today's discourse centers on regulating entities like OpenAI, Anthropic, and DeepMind. The stakes in this corporate race are clear - as revealed in recent court documents where even lab leaders acknowledge the risks:
"You are concerned that Demis could create an AGI dictatorship. So do we." - Greg & Ilya to Elon, from "Elon Musk, et al. v. Samuel Altman" (2024)[1]
However, this corporate competition represents merely the opening phase. We're already witnessing the transition to national actors, with experts increasingly predicting government takeover:
"As the race to AGI intensifies, the national security state will get involved. The USG will wake from its slumber, and by 27/28 we'll get some form of government AGI project. No startup can handle superintelligence. Somewhere in a SCIF, the endgame will be on." - Leopold Aschenbrenner, 'SITUATIONAL AWARENESS', June 2024[2]
This is already underway: deals from Anthropic and Meta will allow U.S. intelligence and defense agencies to use their models. We're also seeing the appointment of directors with deep government ties, such as OpenAI's announcement that Paul Nakasone, the former head of the NSA, was joining the company's board of directors.
This shift from private to public control brings unique challenges. Government agencies have a troubling track record, as whistleblower Edward Snowden[3] notes:
"They're going to be guided by the very companies that we need to restrain. Their lobbyists are going to shape and be literally authoring the legislation that they're going to rubberstamp for the legislatures that's then going to rule us." - Edward Snowden
The implications are clear: we must prepare now for government involvement in AGI development, and even a casual reading of history reveals how difficult that is. It means establishing robust frameworks for open alignment that constrain the government, and doing so before the process gets bogged down in vested interests.
The Case for Open Alignment
Open alignment might go against the grain of traditional security thinking, but it's actually the most strategic choice.
Consider a concrete scenario: The U.S. develops an advancement that improves AGI capabilities by 10% while enhancing alignment by 90%. This is realistic because most advances have a mix of capability and alignment impact. The strategic choices create the following outcomes:
| Choice | US Advantage | Adversary Gain | Unaligned AGI Risk | Public Trust |
|---|---|---|---|---|
| Open | ++ | + | - | + |
| Closed | + | 0 | 0 | - |
This table illuminates the core insight: open-sourcing alignment research provides an absolute advantage against the true adversary (unaligned AGI) while maintaining relative position between nations. The marginal loss in strategic advantage is far outweighed by the reduction in catastrophic risk.
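This trade-off can be made concrete with a toy expected-value model. The numbers below are purely illustrative assumptions (not from the post): each choice gets a relative strategic position (US advantage minus adversary gain) and an expected loss from unaligned AGI (probability times a large catastrophe cost).

```python
# Toy payoff model for the open-vs-closed alignment decision.
# All numbers are illustrative assumptions, not measurements.

def expected_value(us_advantage, adversary_gain, p_unaligned_agi,
                   catastrophe_cost=100.0):
    """Net payoff: relative strategic position minus expected catastrophe loss."""
    relative_position = us_advantage - adversary_gain
    return relative_position - p_unaligned_agi * catastrophe_cost

# "Open": share alignment research; the adversary gains some capability too,
# but the probability of unaligned AGI drops substantially.
open_ev = expected_value(us_advantage=2.0, adversary_gain=1.0,
                         p_unaligned_agi=0.05)

# "Closed": hoard the advance; the adversary gains nothing,
# but the baseline risk of unaligned AGI is unchanged.
closed_ev = expected_value(us_advantage=1.0, adversary_gain=0.0,
                           p_unaligned_agi=0.10)

print(f"open:   {open_ev:.1f}")
print(f"closed: {closed_ev:.1f}")
# The relative position is identical (1.0) in both cases, so with a large
# enough catastrophe cost, openness dominates.
```

The point of the sketch is the table's insight in miniature: sharing alignment work leaves the relative position between nations unchanged while cutting the absolute risk term, which dominates whenever the catastrophe cost is large.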
I've added public trust as a factor because it's crucial. Historical precedent shows how transformative technologies - from the printing press to nuclear weapons - can destabilize political structures by changing the balance of power between a government and its citizens. AGI is no different: we may see citizens go from critical workers to a burden or a political threat, while at the same time the government gains greater informational and military power relative to the citizenry.
"I think if the US government took actions to restrict publishing [...]. I do think that would prompt at least some significant number of researchers to choose a different place to work, not to mention also slowing down the US's ability to innovate in the space" - Helen Toner[4]
Openness can also be a strategic advantage in attracting talent and in transmitting and sorting useful information. This is especially important when you frame the AGI race as a positive-sum alignment game instead of a zero-sum capability game.
Implementation Framework
We need a comprehensive approach to government AGI development that mandates:
Public alignment standards:
- Full reproducibility requirements
- Open safety protocols
- Transparent testing methodologies
Independent verification:
- National standards authority oversight
- Public audit mechanisms
- International verification protocols
This won't be easy. Constraining government agencies, particularly in defense and intelligence sectors, presents immense challenges. But current policy efforts focusing solely on corporate regulation miss the crucial window for establishing these frameworks.
As an example, Anthropic has recently made its models available to government, military, and intelligence agencies. But how was it aligned, what data was used, and how will it be used? These are all questions that need to be answered. I expect this to be a trend, and we need to be ready for it.
The Path Forward
The shift to government control of AGI development is inevitable. We face three possible futures:
- Unregulated government AGI development behind classified barriers
- Obsolete corporate regulations that fail to address national programs
- Open alignment infrastructure that reduces catastrophic risks while preserving oversight
The choice between these futures must be made now, before national security imperatives foreclose our options. The game theory is clear: open alignment research provides absolute advantage against the true existential threat - unaligned AGI - while maintaining relative position between nations.
This is fundamentally a choice between short-term tactical advantage and long-term strategic stability. By mandating open alignment research, we reduce the catastrophic risk of any actor losing control while preserving the essential element of public oversight.
The time to establish these frameworks is now, before the transition to government control completes. We must shift our policy focus from corporate regulation to establishing robust, transparent alignment standards that will govern both private and public AGI development.
Frequently Asked Questions
Why would government agencies agree to transparency?
Transparency is not the default for government agencies. And yet, in democratic societies, they work for the public interest. We have some very successful examples of transparency in government, such as Freedom of Information Act requests. I don't claim it will be easy, but it is necessary.
Who determines alignment improvements?
Measurement and verification present significant challenges. We need an independent authority (similar to NIST) with:
- Technical expertise in AI safety
- Transparent assessment criteria
- International credibility
- Public oversight mechanisms
The important part is that the authority is independent. Someone will make this judgment; we want to make sure it's someone working in the public interest.
References