
On November 5, AI is also on the ballot

A ballot with the word AI written on it goes into the ballot box. The choice Americans make in November will determine whether they continue to lead a collaborative effort to shape the future of AI according to democratic principles. Illustration: edited by Erik English; original from DETHAL via Adobe.

Artificial intelligence is one of the most important technologies of our time, promising huge benefits while posing serious risks to the nation’s security and democracy. The 2024 election will determine whether America leads or retreats from its crucial role in ensuring AI development is safe and in line with democratic values.

AI promises tremendous benefits, from accelerating scientific discovery to improving healthcare to increasing productivity in our economy. But realizing those benefits requires what experts call “secure innovation,” developing artificial intelligence in ways that protect safety, security, and American values.

Despite its benefits, the risks associated with artificial intelligence are significant. Unregulated AI systems could amplify societal biases, leading to discrimination in crucial decisions about jobs, loans, and healthcare. The security challenges are even more daunting: AI-based attacks could probe power grid vulnerabilities thousands of times per second, and could be launched by individuals or small groups rather than requiring the resources of nation-states. During health or public safety emergencies, AI-enabled disinformation could disrupt critical communications between emergency services and the public, undermining life-saving response efforts. Perhaps most alarmingly, AI can lower the barriers to developing chemical and biological weapons, putting devastating capabilities within the reach of individuals and groups who previously lacked the necessary expertise or research skills.

Recognizing these risks, the Biden-Harris administration has developed a comprehensive approach to AI governance, anchored by the landmark Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. The administration’s framework guides federal agencies to address the full spectrum of AI challenges. It sets new guidelines to prevent AI discrimination, promotes research that serves the public good, and creates new initiatives across government to help society adapt to the changes driven by AI. The framework also addresses the gravest security risks by ensuring that powerful AI models undergo rigorous testing so that safeguards can be developed to block their potential misuse – such as helping create cyber attacks or biological weapons – in ways that threaten public safety. These safeguards preserve America’s ability to lead the AI revolution while protecting our security and values.

Critics who argue that this framework stifles innovation would do well to consider other transformative technologies. Rigorous safety standards and air traffic control systems developed through international cooperation did not inhibit the airline industry, they made it possible. Today, millions of people board airplanes without a second thought because they trust the safety of air travel. Aviation has become a cornerstone of the global economy precisely because nations have worked together to create standards that have earned public trust. Similarly, catalytic converters have not hindered the automotive industry: they have helped cars meet increasing global demands for both mobility and environmental protection.

Just as the Federal Aviation Administration ensures safe air travel, dedicated federal oversight, in collaboration with industry and academia, can ensure the responsible use of artificial intelligence applications. Through a recently released National Security Memorandum, the White House has established the AI Safety Institute at the National Institute of Standards and Technology (NIST) as the US government’s primary liaison with private sector AI developers. The institute will facilitate voluntary testing – both before and after public deployment – to ensure the safety, security, and reliability of advanced AI models. But because threats like biological weapons and cyber attacks do not respect borders, policymakers must think globally. That’s why the administration is building a network of AI safety institutes with partner countries to harmonize standards worldwide. It’s not about going it alone, but about leading a coalition of like-minded nations to ensure AI develops in ways that are both transformative and trusted.

Former President Trump’s approach would be significantly different from that of the current administration. The Republican National Committee platform aims to “Repeal Joe Biden’s dangerous executive order that stifles AI innovation and imposes far-left ideas on the development of this technology.” This position contradicts growing public concern about technological risks. For example, Americans have witnessed the dangers children face from unregulated social media algorithms. That’s why the US Senate recently came together in an unprecedented show of bipartisan strength to pass the Kids Online Safety Act by a vote of 91-3. The bill gives young people and parents tools, safeguards, and transparency to protect themselves from online harm. The stakes with AI are even higher. And for those who believe that putting guardrails on technology will hurt America’s competitiveness, the opposite is true: just as travelers have come to favor safer planes and consumers have demanded cleaner vehicles, they will insist on trustworthy AI systems. Companies and countries that develop AI without adequate safeguards will find themselves at a disadvantage in a world where users and businesses demand assurances that their AI systems will not spread misinformation, make biased decisions, or enable dangerous applications.

The Biden-Harris executive order on AI lays a foundation upon which to build. Strengthening the United States’ role in setting global AI safety standards and expanding international partnerships is critical to maintaining American leadership. This requires working with Congress to secure strategic investments for AI safety research and oversight, as well as investments in defensive AI systems that protect the nation’s digital and physical infrastructure. As automated AI attacks become increasingly sophisticated, AI-powered defenses will be crucial to protecting power grids, water systems, and emergency services.

The window for establishing effective global AI governance is narrow. The current administration has built a burgeoning ecosystem for safe, secure, and trusted AI—a framework that positions America to be a leader in this critical technology. To step back now and dismantle these carefully constructed safeguards would relinquish not only America’s technological advantage, but also the ability to ensure that AI develops in accordance with democratic values. Countries that do not share the United States’ commitment to individual rights, privacy, and security would then have a greater voice in setting standards for technology that will reshape every aspect of society. This election represents a critical choice for America’s future. The right standards, developed in partnership with allies, will not inhibit the development of AI – they will ensure that it reaches its full potential in the service of humanity. The choice Americans make in November will determine whether they continue to lead a collaborative effort to shape the future of AI according to democratic principles, or hand that future over to those who would use AI to undermine our nation’s security, prosperity, and values.