US Reps. Will Hurd and Robin Kelly are from opposite sides of the ever-widening aisle, but they share a concern that the United States may lose its grip on artificial intelligence, threatening the American economy and the balance of world power.
On Thursday, Hurd (R-Tex.) and Kelly (D-Ill.) offered suggestions to prevent the US from falling behind China, especially on applications of AI to defense and national security. They want to cut off China’s access to AI-specific silicon chips and push Congress and federal agencies to devote more resources to advancing and safely deploying AI technology.
Although Capitol Hill is increasingly divided, the bipartisan duo claims to see an emerging consensus that China poses a serious threat and that supporting US tech development is a vital remedy.
“American leadership and advanced technology has been critical to our success since World War II, and we are in a race with the government of China,” Hurd says. “It’s time for Congress to play its role.”
Kelly, a member of the Congressional Black Caucus, says that she has found many Republicans, not just Hurd, the only Black Republican in the House, open to working together on tech issues. “I think people in Congress now understand that we need to do more than we have been doing,” she says.
The Pentagon’s National Defense Strategy, updated in 2018, says AI will be key to staying ahead of rivals such as China and Russia. Thursday’s report lays out recommendations on how Congress and the Pentagon should support and direct use of the technology in areas such as autonomous military vehicles. It was written in collaboration with the Bipartisan Policy Center and Georgetown’s Center for Security and Emerging Technology, which consulted experts from government, industry, and academia.
The report says the US should work more closely with allies on AI development and standards, while restricting exports to China of technology such as new computer chips to power machine learning. Such hardware has enabled many recent advances by leading corporate labs, such as at Google. The report also urges federal agencies to hand out more money and computing power to support AI development across government, industry, and academia. The Pentagon is asked to think about how courts-martial will handle questions of liability when autonomous systems are used in war and to talk more about its commitment to ethical uses of AI.
Hurd and Kelly say military AI is so potentially powerful that America should engage in a kind of AI diplomacy to prevent dangerous misunderstandings. One of the report’s 25 recommendations is that the US establish AI-specific communication procedures with China and Russia to allow human-to-human dialog to defuse any accidental escalation caused by algorithms. The suggestion has echoes of the Moscow-Washington hotline installed in 1963 during the Cold War. “Imagine in a high-stakes issue: What does a Cuban missile crisis look like with the use of AI?” asks Hurd, who is retiring from Congress at the end of the year.
Cut through the hype
Beyond such worst-case scenarios, the report includes more sober ideas that could help dismantle some hype around military AI and killer robots. It urges the Pentagon to do more to test the robustness of technologies such as machine learning, which can fail in unpredictable ways in fast-changing situations such as a battlefield. Intelligence agencies and the military should focus AI deployment on back-office and noncritical uses until reliability improves, the report says. That could presage fat new contracts to leading computing companies such as Amazon, Microsoft, and Google.
Helen Toner, director of strategy at the Georgetown center, says although the Pentagon and intelligence community are trying to build AI systems that are reliable and responsible, “there’s a question of whether they will have the ability or institutional support.” Congressional funding and oversight would help them get it right, she says.
The paper released Thursday is the second of four on AI strategy issued by Hurd and Kelly with the Bipartisan Policy Center. The first, released earlier this month, focused on the workplace. Its recommendations included reworking education from kindergarten through grad school to prepare more Americans to work with or on AI. The two papers to come are on AI research and development, and AI ethics.
Kelly and Hurd have shared an interest in AI since working on hearings on the subject by the House Oversight Committee’s Subcommittee on Information Technology in 2018. The pair later authored a report warning that the US could lose its leading position on AI. Kelly says she wants to ensure the United States remains a leader in AI, but also that “people in the diverse district I come from have a piece of that pie, and that there are not biases against them or concerns around privacy.”
At the tail end of the Obama administration, the White House produced lengthy documents on how to support US AI development and deployment and address potential downsides such as technological unemployment. The Trump administration chose not to build on those, but last year President Trump signed an executive order directing existing government programs to be tilted towards AI projects. It leaves the US with a less toothy AI strategy than many other nations, including China, which have stood up new programs and funding sources. Hurd and Kelly are trying to change that.
James Lewis, who leads work on technology policy at the Center for Strategic and International Studies, applauds the effort. The timing is good, he says, because more lawmakers are taking an interest in tech policy. “They now realize we’re in a contest with China and have woken up to the fact that technology like AI and semiconductors and cybersecurity are important,” he says. Last week, the Senate voted 96-4 to amend the annual Pentagon budget bill with $25 billion to support domestic research and manufacturing of new chip technology.
Lewis supports restricting chip exports to China—an idea that could gain traction in a Congress showing new interest in tech export controls. He’s skeptical that an AI hotline or special forms of AI diplomacy to prevent autonomous accidents would be worthwhile. Events during the Cold War and since, most recently in areas such as cybersecurity, suggest China and Russia don’t take such programs seriously, Lewis says.
Hurd and Kelly are now drafting a congressional resolution incorporating their ideas about AI, including in national security. After that, they’ll start work on AI legislation. “Some of that I hope we get done in this Congress, and others can be taken and run with in the next Congress,” Hurd says.
This story first appeared on wired.com.