The Ditchley Foundation, together with the Royal United Services Institute (RUSI) and Cambridge University’s Centre for the Study of Existential Risk (CSER), held two linked workshops this autumn to explore the likely impacts of artificial intelligence and machine learning technologies on strategic stability.
Bringing together representatives from defence analysis and policy, military advisers, and academic and technology experts, the sessions considered the relevance of these technologies to national security. Technologies including all-source intelligence analysis, machine vision capabilities, logistics optimisation and asset management, autonomous and semi-autonomous weapons, and tactical and strategic simulators for planning and training were explored through an exercise simulating procurement decision-making.
Perceptions (and misperceptions) of what machine learning can do, and of the ways it can be deployed, are critical and shape how decisions are made. The technological literacy of decision-makers and military advisers, the speed of technological change, knowledge of private-sector innovation, and signalling or secrecy between states over machine learning capabilities were identified as issues that inform the quality of decision-making and are therefore relevant to near-term strategic stability.