Fall Research Expo 2020

Attitudes, Investments, and Confidence-building Measures Concerning Military Adoption of Artificial Intelligence Technology

Emerging technologies, especially artificial intelligence (AI), raise questions of trust and ethics. The gradual implementation of AI across industries such as automotive, health care, and defense demands policies that account for its limitations, risks, and vulnerabilities, including concerns about autonomous bias. As countries begin to invest in the military adoption of AI, global power projection capabilities and the consequences for the security environment must be assessed in order to maintain strategic stability. Part of this project involved collecting and analyzing data from a survey assessing the attitudes of civil servants towards the adoption of AI in autonomous vehicles, surgery, and the military. The results showed evidence of an association between the level of concern about potential bias in algorithms and the level of support for using algorithms in self-driving vehicles, surgical procedures, surveillance of criminal suspects through facial recognition software, general monitoring of the civilian population for illegal behavior, job selection and promotion for state and local officials, decisions about prison sentences, decisions about the transplant list, and the use of military force. The incentives for deploying AI technology on the battlefield will likely outweigh ethical apprehensions in the future, as both money and political power are at stake in the competition for dominance. Russia, for example, has been at the forefront of investing in the development of AI, not only in the technology industry but also in the military. To reach the point of deployment, the decisions made by autonomous weapon systems would need to account for ethical considerations, such as whether to shoot a child versus an adult, and the weighing of collateral damage. Confidence-building measures similar to those used in pursuing nonproliferation would help mitigate the escalatory effects of unpredicted activities.
This project investigated attitudes, investments, and confidence-building measures concerning the adoption of AI technology, primarily in the military.

PRESENTED BY
PURM - Penn Undergraduate Research Mentoring Program
Wharton 2023
Advised By
Professor Michael Horowitz
Professor of Political Science and Interim Director of Perry World House

Comments

This is super interesting! I'm really curious about the bias present in the adoption of AI. How have companies tried to mitigate this? What is their decision-making process when determining whether something is fair?

This is a great topic and is so relevant with the growing industry of AI and its widespread implications. With the hesitation among many regarding the use of AI because of its ethical impacts due to biases, are there a significant number of policies in place to really curb the potential for harmful/biased AI and its use?

Esther, 

Great project! I just wanted to ask a quick question. What do you think are the potential effects of the COVID-19 outbreak on the military adoption of AI technology? Do you think your analysis of confidence in AI being employed in the military may change with evolving circumstances involving the virus?

Thanks! 

Junyoung 

Awesome work! Apart from ethical calculations of who/what AI might harm, what kind of accountability do such incidents entail for the actor who deployed it? If a technology is largely automated with AI, does this change what degrees of responsibility we assign to those who accidentally ordered the mission?

Esther, this is amazing! I enjoyed working with you over the summer. The research on artificial intelligence in nuclear command and control is so interesting, and the field is only growing bigger!