In this wide-ranging interview, Brad Allenby – a Lincoln Professor of Engineering and Ethics at Arizona State University – warns about the transformational impact of technology (including AI) on existing institutions and shares his insights on the future of war.
Writing about the rise of AI, Henry Kissinger voiced his concern with “the historical, philosophical and strategic aspect of it. I’ve become convinced that AI is going to bring a change in human consciousness exceeding that of the Enlightenment.” What worries you about the rise of AI, especially as it happens in a context where advances in biotechnology and neuroscience seem to be opening new frontiers?
One of the difficulties is that AI, like electricity, is an enabling technology across the entire technological frontier. We are going to be using it in car navigation systems, in cellphones, in refrigerators. It is not that we are going to face some integrated AI as a technological threat in the way that we perceive a nuclear weapon. AI is going to enable new behaviours and new activities, which is one source of problems – just think about the Russian intervention in the 2016 American elections. At the same time, you are also going to have fundamental changes in the assumptions that underwrite our institutions. If you look at the American political system today, we are arguing about the First Amendment [on freedom of speech]. But AI integrated into social media, and the amount of information that we are generating, make that an irrelevant question: if you can’t get on social media, you don’t have free speech. You have AI integrated with other things, acting in ways that destabilise existing institutions. This is our biggest problem. The rate of change is accelerating and is going to be more profound, so we are going to need to develop new institutions that are much more agile and adaptive, and yet at the same time more ethical than the ones they replace.
How do you see the impact of AI and big data on democracy and pluralism at a time when the public square has increasingly moved online? Can they make democracy and pluralism more resilient and healthy, or are we going to see the opposite: AI-enabled malign information campaigns, tribalism on steroids (with societies divided along Hutu vs. Tutsi lines), or even Orwellian states dominated by comprehensive surveillance?
Especially because there are so many dimensions to these changes, I think that you can’t predict; the only thing you can really do is create scenarios. It is not an unreasonable scenario to ask whether China’s integration of AI, the party and private firms into a single network – the Social Credit System (SCS) – gives authoritarianism a significant jump in fitness. Meanwhile, the difficulty with pluralism is that the pluralistic structure was designed for a period when information, in particular, moved much more slowly. You see that in the First Amendment and in the system of checks and balances. These are fine until the rate of change and technological reality decouple them from the governance system. Institutions that were designed for a low-bandwidth world suddenly find themselves overwhelmed by information flows. Once that happens, pluralistic societies have to think deeply about how to reinvent themselves, because their authoritarian competitors are already reinventing themselves. A reasonable scenario is that these changes tend to weaken pluralism and to strengthen soft authoritarianism.
In this context, the thing to keep an eye on is how different cultures manage to use the integrated capability of the emerging cognitive ecosystem – 5G, social media, AI, the Internet of Things. Are they able to use it in ways that augment the effectiveness and the power of the state and the party? Or does it rebound on their system in such a way that it fragments even more? The Chinese are putting together the Social Credit System, which integrates all of those. Everyone depends on it: with a high credit score you can get on airplanes and trains, you can go to certain colleges. It becomes a very powerful way of nudging behaviour. They are creating a structure where, unless people behave the way the state wants them to, they hurt themselves.
Are 21st-century autocracies better positioned than democracies to compete in and master AI/cognitive infrastructures?
Democracies in particular have a big problem. In the US Constitution we have a strong split between military and civilian powers. That is great until your adversaries adopt a whole strategy of civilisational conflict – and both the Chinese and the Russians have done so – in which case you are in trouble. Your military knows that it is a threat, but the attack comes over civilian infrastructure, so it cannot intervene. The pluralistic response may become more chaotic and, very importantly, it begins to take longer. The problem with authoritarianism has always been that it was fragile. But designed properly, a social credit system can not only nudge citizens to behave the way the authoritarians want them to; it can also detect issues that might affect the regime’s legitimacy. It can become a way of channelling information upwards as well. Designed right, the traditional problems of authoritarianism are ameliorated by this integrated AI/human capability. If that is the case, then you have pluralism becoming more and more chaotic, more sclerotic, and soft authoritarianism becoming more effective.
The West: too successful to adapt?
During the 19th century, the Industrial Revolution was a hugely disruptive force that reshaped the international system and the balance of power globally. Some benefited and others lost. Are we in the early stages of a similar competition between the West and the Rest, spearheaded by a new technological revolution? With what implications?
Yes, we are. Institutions succeed because they are fit for the current environment. That has been true for 200 years of Western models of governance. It also means that when things change fundamentally, they become the unfit ones. It is very hard for a successful organisation to adapt: AT&T used to be a great telephone company, but along came internet telephony and AT&T went away. The same is true of very successful governance systems. The problem the Americans have is that they have been successful, and that is going to inhibit their ability to adjust to a world where the fundamental assumptions underlying their institutions have changed. Internationally, we may be entering a period of movement toward a kind of neo-medievalism: rather than a single dominant power, we are going to have competing local power dynamics that tend to disrupt international commerce and could lead to higher levels of violence.
This new type of medievalism might also emerge inside states, not only in the international system. Tribalism is on steroids, and the space for compromise-oriented elites is shrinking. This puts huge pressure on the US, which used to function under the logic of E pluribus unum.
It is a problem that the Americans in particular have. To the best of my knowledge we have never really had a world power that didn’t have an exceptionalist narrative. The problem is that today, in their pursuit of identity politics, the Americans have managed to destroy the integrating social narrative. The exceptionalist narrative in the US is very weak. Over time the US will become less competitive because tribal interests are going to grow to dominate the body politic. If the US is going to be successful going forward, it will have to figure out how to create a pluralism that embraces tribalism. That is going to be very hard. Tribalism and identity politics are here to stay, and it is important to understand why. Individuals are information-processing mechanisms. If you fundamentally change the information environment, you are going to perturb the performance of individuals and of their institutions. Technologically enabled trends are slowly undermining the core assumption of a pluralistic society – the individual as a rational citizen. That is exactly what we have done in the last 10 years. The sheer amount of information available, the proliferation of competing stories, creates an information overload, so people fall back on their core narratives – not because they are stupid, but because they are forced to. The only way they can continue to make sense of the world is to fall back on a tribal narrative that is more a matter of belief than of applied rationality. In short, this is a shift away from System 2 thinking (slow, applied rationality) back to System 1 thinking (fast, emotional, intuitive). That means tribalism is not only going to continue, but to strengthen.
The era of civilisational conflict
You have written a lot on the changes that affect conflict and war. What significant trend lines do you see shaping the future of conflict?
To me the deeper question is which fundamental structures have to change as we move into an era of ongoing, low-level civilisational conflict. Unless and until something dramatic happens, that is going to be the state of the world. If that is the case, what works and what doesn’t? You might say that the military-civilian divide embedded in the US Constitution is clearly obsolete and should be rethought. That is never going to happen, but the deeper you get into what is happening to those assumptions, the more those kinds of fundamental changes may need to be thought through.
But back to this paradigm change. The easiest way to think about civilisational conflict is that over the last 30 years the US has become the preeminent traditional military power. If you are China or Russia, you are not going to accept that this limits your freedom to protect what you feel are your vital interests, so you are going to figure out some way of developing effective asymmetric strategies and forms of warfare. Overall, strategic and technological imperatives are changing how war and conflict are framed, generating a shift from military confrontation to a much broader and more complex conflict waged across all domains of civilisation. Both Russia and China have moved in the same direction, toward coherent theories of 21st-century conflict that contemplate including all dimensions of a civilisation in a deliberate, strategically integrated process of long-term, intentionally coordinated conflict. You see this trend in the so-called ‘Gerasimov doctrine’ of new-generation warfare and in the Chinese doctrine of ‘unrestricted warfare’, and the implication is that all elements of an adversary’s culture and society become fair game. It means that you will be constantly attacking across that entire frontier; the idea that war is restricted to certain times and certain forms of combat becomes obsolete. Something we need to recognise is that Russia is in constant war with the West; it has been for a long time, and it is continuing to fight it. The problem NATO has is that it is more like a digital system: it is either on or off, either war or not. With the Russians it is analogue. That is not something the West is well designed to meet, either in terms of strategy or of institutions. As much as the West may not like it, our adversaries have chosen civilisational conflict, and that is where we are. We need to adapt.
You can see this in the different ways major powers structure, for example, their cyber activities. The Russians tend to use both internal government bodies and criminal organisations. The Chinese tend to keep their high-technology companies very close and integrated with the state, so the party, the state and the private companies are all generally aligned in their behaviour. The Americans tend to let their companies go their own way and view the private sector as the innovative sector. That fragmented approach means the Americans are unable to coalesce and align, even informally, the way the Chinese do. They have a different idea of what constitutes a civilisational conflict structure than the Americans do.
How do you see the implications of the emerging cognitive infrastructure for the traditional Boydian OODA (observe–orient–decide–act) loop? Visions of future war talk about ‘algorithmic warfare’, where decision dominance is of the essence.
Conflict of all kinds at the level of world powers is going to be faster, more complex, and more systemic. Being fast and understanding your environment better – accelerating the OODA loop beyond the point where your adversary can follow – is going to provide the strategic advantage. At the same time, there will be many conflicts, such as in the Middle East and sub-Saharan Africa, that remain low-level communal and tribal violence powered by deep ideological differences – the so-called neomedieval environment. Speed, agility, access to large data pools, and adaptability are key, so the nations that figure out how to get inside their adversaries’ OODA loops are going to dominate over time. The West is not doing particularly well on any of those metrics, which should be a cause for concern.
What do we want to save about the ancien régime?
What are the implications for how we should think, or rethink, about the resilience of a pluralist democracy?
If pluralism is going to prosper, it needs to develop a way to reinvent itself from the foundations up. In doing so it may lose things that we value, but only because they are becoming obsolete. In some ways we should think about the task as sitting down in 1788 and asking: what do I want to save about the ancien régime? Because things are going to change and are going to be different. France was France before 1789 and it was France after 1789. So the question for the West is: what kind of West do we want to be?
Let’s also discuss the main ethical implications. People fear a future where robots might control us. What principles should govern the use of AI? Do you see the potential to educate and programme intelligent machines in the spirit of the Ten Commandments? Or are we too dependent on old assumptions when imagining the future?
All of the above. I think we are already too dependent on the assumptions that were valid during the first Enlightenment, but those are going to change. The first Enlightenment didn’t fail – it succeeded brilliantly, but now it has made itself obsolete. The second Enlightenment is going to require us to rethink our ethical structures. As far as robots are concerned, we are going to find ourselves in a far more complex environment, but ethics is not part of what the robots bring to the table. We always tend to think of robots and AI as being kind of like us. But they are not going to be. We are the product of what was evolutionarily necessary for a species like ours to prosper and become the dominant species on the planet, but there is no reason why the Internet should develop that same cognitive structure. For humans, emotion is, among other things, a shortcut to decision-making: if the situation is too complex, emotions kick in and we respond. An AI need not have that constraint. It may have different ones, but it is not going to think the way we do. It is going to think profoundly differently. We keep thinking of AI as Skynet. It may not be Skynet; it may be like Google Maps or Alexa – things that just become more and more part of your life.
Brad Allenby is a Lincoln Professor of Engineering and Ethics and co-chair of the Weaponised Narrative Initiative of the Center for the Future of War at Arizona State University.
Parts of this interview were published in Romanian in the printed issue of Cronicile Curs de Guvernare, No. 91.