China has established itself as the world leader in artificial intelligence regulation. In May, Beijing finalized a second set of rules restricting the use of deepfakes. The development is the latest in a series of Chinese government interventions aimed at protecting the public and engendering trust in AI technologies. While rhetoric in the United States from politicians, influencers, AI innovators, and other stakeholders has spotlighted the technology’s applications and potential downsides, China has been investigating, legislating, and advocating on AI since 2017. Given Washington’s uneasy and often adversarial relationship with China, Beijing’s leadership on this issue could prove troublesome for the US.
China’s head start in AI regulation gives it the upper hand in promoting rules that favor its domestic industries and its political agenda to a global audience. The nation, slow in previous generations to accept and adapt to new technology, now leads the charge. Its latest set of guidelines mandates security assessments for creators of novel AI products or services prior to their public release.
Not Just China
While the US has only recently taken notice of the legal ramifications of AI, friend and foe alike have enacted or are seriously considering sweeping regulation of ChatGPT and its ilk. The European Union is contemplating a fresh legal framework for AI development and use. The EU’s proposed AI Act seeks to classify AI tools according to their perceived level of risk. Uses that pose “unacceptable risk” by presenting a “clear threat to the safety, livelihoods, and rights of people” could be banned in Europe. Those that could jeopardize people’s health, safety, fundamental rights, or the environment, as well as those designed to influence voters, would be deemed “high risk.” Before their release, these systems would have to demonstrate robust risk assessment, mitigation, and security measures; detailed documentation of training data and algorithms; user disclosures; and active human oversight. “Limited risk” and “low risk” categories would go unregulated save for minimal transparency requirements.
The action testifies to Europe’s determination not to be left in China’s dust and signals that the Asian superpower’s interest in AI should not go unchecked. The AI Act specifically calls for banning “government-run social scoring of the type used in China.” A memorandum accompanying the proposal includes among its goals that future AI regulation “respect… Union values.”
There are also reports that the European Data Protection Board has set up a task force to unify its members’ independent investigations into ChatGPT, possibly a first step toward AI privacy regulation.
Even tiny Singapore seems to have gotten the jump on the US and is in the process of subjecting AI to the same comprehensive scrutiny it gives potential online and digital threats. Singapore was also the first country to unveil an AI testing toolkit, AI Verify, which invites developers and system owners to demonstrate how their AI systems perform.
The US Must Lead on AI Regulation
As artificial intelligence and generative technologies advance at an unprecedented pace, the US as a technological, military, and political powerhouse must address crucial questions about how it will regulate these transformative innovations. It is increasingly evident that the US has not yet developed an integrated approach to maintain a workable balance between consumer protection, intellectual autonomy, and entrepreneurship.
Recent developments and expert opinions strongly advocate for federal regulation of AI. Prominent figures, including Sam Altman, CEO of OpenAI, the company behind ChatGPT, and privacy executives at industry giants like IBM, Google, and Yahoo, have emphasized the criticality of government oversight due to the potential risks AI poses to humanity. Initially hesitant to impose regulations that might hinder AI innovation or compromise the US’s competitive edge, the government must now recognize the pressing need for federal legislation.
The introduction of AI regulation would significantly impact emerging technology companies, and fundamental changes would require ongoing legal counsel to ensure compliance. Furthermore, companies providing generative AI services to the public may become responsible for the output of their systems. Strict requirements could be imposed on these companies to ensure that the data used to train their algorithms meets rigorous standards. As AI continues to advance, the accountability and responsibility of AI service providers must be clearly defined and enforced.
The Global Landscape
While the US deliberates its regulatory approach, a patchwork of regulations is emerging worldwide, leading to a significant East-West divide. A recent report by the Brookings Institution, which analyzed AI governance plans across various countries, highlighted the differing priorities and strategies. The East is primarily focused on expanding R&D capacity while overlooking traditional technology management guardrails, whereas the West places greater emphasis on establishing comprehensive safeguards. By enacting federal AI regulations, the US can play a leading role in shaping responsible AI governance on a global level.
An additional risk associated with delaying AI regulation is that AI development may complicate the US-China rivalry. While China invests heavily in AI, some argue that the United States should not restrain itself out of concerns over potential misuse. Viewing AI regulation as an impediment to competitiveness would be a misguided approach. Instead, the US must prioritize the responsible and ethical development of AI, ensuring that it aligns with societal values and serves as a force for positive change.
Given the rapid advancements in AI and generative technologies, the United States risks ceding authority to other global players unless it acts swiftly to implement reasonable and sweeping federal regulations. The support from industry leaders, growing academic consensus, and the need for societal safeguards all underscore the urgency of this matter. By establishing a comprehensive regulatory framework, the US can strike a balance between fostering innovation and ensuring accountability, positioning itself as a global leader in responsible AI governance. The time to act is now, as delayed regulation risks compromising the potential benefits of AI while exposing society and industry to as-yet-unknown risks.
Gamma Law is a San Francisco-based Web3 firm supporting select clients in complex and cutting-edge business sectors. We provide our clients with the legal counsel and representation they need to succeed in dynamic business environments, push the boundaries of innovation, and achieve their business objectives, both in the U.S. and internationally. Contact us today to discuss your business needs.