
Wade Huntley, PhD

Sr. Lecturer, National Security Affairs
Sr. Lecturer, Cyber Academic Group

Dr. Wade L. Huntley is a senior lecturer in the National Security Affairs department and the Cyber Academic Group at the Naval Postgraduate School in Monterey, California, and an independent consultant on international security issues. He is also Program Director for DSCU Regional Education Courses. Huntley has worked extensively engaging government officials and non-governmental experts on a range of contemporary international policy challenges. Previously, Huntley was Director for Disarmament and Non-Proliferation Research in the Liu Institute for Global Issues at the University of British Columbia in Vancouver, Canada; Associate Professor at the Hiroshima City University Peace Institute in Hiroshima, Japan; and Director of the Global Peace and Security Program at the Nautilus Institute in Berkeley, California. He has also held research and teaching positions at several universities and colleges and has worked in consultative roles engaging officials and experts on a range of global security issues, specializing in scenario generation for future-oriented examinations of near-term policy challenges. Huntley's publications include four edited volumes and over fifty peer-reviewed articles, book chapters and scholarly essays.

As an expert in national security, you recognize the strategic implications of emerging technologies like AI. How important is it for NPS students and faculty with technical expertise in AI to consider the geopolitical implications?

For NPS students and faculty with technical expertise in artificial intelligence, being mindful of the geopolitical implications of AI development is essential. This reflects a broader reality: modern scientists have typically paid close attention to the social and political consequences of their work. The atomic scientists of the mid-20th century were keenly aware of the revolutionary impact that nuclear explosive technologies would create. Biologists have long appreciated the careful choices required for conducting responsible research using genetic technologies. Research and development in artificial intelligence is no different.

In fact, several factors make this imperative particularly strong. One factor is the expanding range of applications of AI technology in economies and societies throughout the world. We cannot predict all effects of these applications, so we have a responsibility to be attentive to unanticipated outcomes. Another factor involves the potent ethical conundrums that artificial intelligence applications can present with respect to maintaining awareness of and accountability for the automated decision-making capabilities that we create. And, of course, applications available to governments and militaries raise questions concerning domestic political values and international security.

Many AI applications have been and will continue to be unambiguously positive -- for the American people, U.S. national security interests, and the world overall. But that is true only because we pay continuous attention to those kinds of implications while undertaking technical development.

 

The National Security Commission on Artificial Intelligence suggested: "We can still defend America and our allies without widespread AI adoption today, but in the future, we will almost certainly lose without it." What could be at stake if the Navy and Marine Corps do not commit to these new technologies rapidly enough?

There is a lot at stake if the U.S. military and government do not incorporate AI technologies rapidly enough to benefit from the opportunities they present. There is also a lot at stake if we commit to incorporating technologies before we sufficiently appreciate their impacts on tactics, operations and strategies. Skewing too far in either of these directions poses risks. The challenge is to find the right balance. The context of high uncertainty makes that hard.

In the military domain alone, the wide range of potentially beneficial uses of AI technology creates real impediments to anticipating longer-term and interactive effects. Those uncertainties exacerbate the challenge of balancing the risks of untested technologies against the risks of being left behind.

History tells us that weapons technologies need to be tested on the battlefield before their full operational and strategic implications can be known. However, artificial intelligence is not a new weapon technology. Rather, AI represents a set of revolutionary tools that can dramatically enhance the effectiveness of many weapon systems, create the possibility of new weapons, and fundamentally reshape how we think about warfare at all levels. That's a lot to think through ahead of time. Fortunately, balancing risk and opportunity is a lot easier in some instances than others. So part of the formula for committing to AI technologies rapidly enough involves making selective choices about which areas to push forward firmly and which areas to think about more carefully.

Perhaps the biggest challenge is to elevate the decision-making about AI incorporation as much as possible out of the bureaucratic and competitive funding pressures with which we are all familiar. On the one hand, we sometimes see an over-infatuation with new technologies for their own sake. On the other hand, organizational inertia and resistance to changing established ways of doing things are all too common. Rivalries among military services and governmental agencies, including competition for funding, can also warp decision-making. The ubiquity of the opportunities for AI applications makes AI decision-making potentially porous to such influences. We will be well served by enhancing our capacity to develop holistic and integrated perspectives to guide the incorporation of AI technologies into military capacities.

 

China and other countries are investing heavily in AI and some have declared that whoever masters AI masters the world. Do you agree that mastering AI is the most important priority for the US in terms of global competition? How is this affected by the practice of civil-military integration and Military-Civil Fusion?

A decade ago, it was possible for those thinking about the incorporation of AI into U.S. capabilities and strategy to imagine the U.S. would dominate this technology domain for a while. That's not the case anymore. AI-driven AlphaGo's defeat of the world's top Go player in 2017 is often termed China's "Sputnik Moment." The analogy refers to the U.S. government being caught off guard by the Soviet Union's launch of the Sputnik satellite, which catalyzed a national surge in the development of U.S. space launch capabilities. It's an interesting analogy because the U.S. government was more prepared for that moment than is commonly realized. Similarly, I suspect that China's top leaders may not have been so surprised by this rather dramatic demonstration of AI possibilities.

Shortly thereafter, China's State Council issued an AI Development Plan calling for China to be a world competitor in several AI technology fields by 2020, a world leader in AI applications by 2025, and a world leader in AI "innovation" by 2030, including advanced implementation of an "intelligent economy and intelligent society." The plan also emphasizes "two-way" civil-military fusion. We know that China has subsequently made great strides in these directions, incorporating AI technologies ubiquitously across its economy and society, as well as in military development. Some of these applications have greatly enhanced China's domestic surveillance capacities in ways antithetical to U.S. values. At the same time, China's civil-military fusion strategy has enabled the country to apply dramatic advances in civilian AI implementation to national security and military purposes.

There are many ways to assess the relative AI capacity of the United States and China today. When reviewing the range of data available, the general picture that emerges is that the U.S. is still in a leadership position, with China a primary competitor that is gaining ground. Mastering AI is a critical priority for the U.S. in terms of global competition, military security, and broader national position. But it cannot be the only priority, not least because AI capabilities touch on so many other elements of U.S. social, economic, and military power. AI priorities need to be fully incorporated into a core national strategy of great power competition that is dynamic and responsive to evolving conditions and new developments.

Taking a step back, it is clear that U.S.-China competition in general, and the priority both sides place on developing and incorporating AI technologies, is creating arms race dynamics. National competition plus the perception of military applications drives research and development. Uncertainty over potential advantages fuels this racing -- neither country wants to be technologically ambushed. The dual civilian and military applications, and the mutually reinforcing impact of symmetric civil-military development, accelerate technology diffusion and hasten adoption. There is debate over whether arms racing increases the prospects of military conflict or provides an outlet for competition short of conflict. But it is generally the case that arms races increase political tensions and destabilize relations. For AI, there is also a particular consequence: competitive pressures can drive states to deploy and use AI-enabled systems at earlier stages of development than they otherwise would. That is perfectly rational behavior: the risk of conceding an adversary an advantage offsets the risk of fielding immature technologies. The consequence is that both sides may end up deploying AI systems that are more prone to unanticipated behavior, bias, and failure -- and more susceptible to subversion.

 

What can/should we do to improve civil-military relations in the U.S.?

Department of Defense strategies already recognize that, with respect to the development of artificial intelligence technologies, the relationship of the military and government to civilian and corporate sectors is going to have to operate differently than it has in the past. Current strategies anticipate decentralized development of the technologies, emphasize partnerships with industry and academia, and envision the DOD helping develop non-defense applications. The reason for this approach is clear: the private sector is leading, and will continue to lead, AI innovation and development. Private funding for AI R&D dwarfs governmental resources.

This is a reversal of the Cold War relationship, in which the U.S. government led technology development for national security needs. The U.S. nuclear deterrent infrastructure was developed fundamentally on the basis of requirements defined by the U.S. government. The development of most major conventional capabilities followed the same pattern. Now, however, in artificial intelligence and in a number of other areas of information technologies, the DOD is incorporating existing commercial technologies for military uses. The U.S. military is drawing on, rather than driving, private sector innovation. This reversed relationship means that direct DOD resourcing is only part of the effort. This new relationship also highlights the importance of public support, and industry support, for DOD objectives. That last point should not alarm us. We have known since Clausewitz that broad national support for governmental objectives is an essential element of success for any military strategy. 

 

At NPS you educate future military leaders, some of whom will become decision-makers in national security strategy. How do you approach educating these students knowing that future challenges are ever-evolving and unpredictable?

Some folks miss the simplicity of the bipolar nuclear balance that defined the strategic competition of the Cold War. I do not miss the prospect of large-scale nuclear conflict that was an ever-present condition of that period. Geopolitical conditions in the 21st century are certainly more complex, dynamic and unpredictable, and we are still struggling to adapt our strategic acumen to this uncertain era. Preparing younger officers to be nimble and farsighted in their decision-making in a world defined by rapid and unpredictable change is therefore a paramount task.

I am an advocate of developing strategies that are self-reflective and self-adaptive – that is, strategies that enable decision makers to react flexibly to dramatic surprises and changing conditions, rather than strategies that create bounded bureaucratic momentum destined to be increasingly inconsistent with national security needs.

 

If you were to predict, what challenges do you anticipate being the most pressing in 10-20 years when current NPS students are in those leadership roles?

The best prediction I can make is that prediction will be a failing strategy. The most pressing need for current NPS students who will be in leadership roles 10-20 years from now is to prepare themselves to cope with challenges that they may not anticipate, or even imagine, today. And that preparation will serve them well even if the world turns out to be more predictable than I expect.


