Uploaded 2023-01-24 16:15. Tags: artificial intelligence, research report, RAND think tank

Operational Feasibility of Adversarial Attacks Against Artificial Intelligence

by Li Ang Zhang, Gavin S. Hartnett, Jair Aguirre, Andrew J. Lohn, Inez Khan, Marissa Herron, Caolionn O'Connell

A large body of academic literature describes myriad attack vectors and suggests that most of the U.S. Department of Defense's (DoD's) artificial intelligence (AI) systems are in constant peril. However, RAND researchers investigated adversarial attacks designed to hide objects (causing algorithmic false negatives) and found that many attacks are operationally infeasible to design and deploy because of high knowledge requirements and impractical attack vectors. As the researchers discuss in this report, there are tried-and-true nonadversarial techniques that can be less expensive, more practical, and often more effective. Thus, adversarial attacks against AI pose less risk to DoD applications than academic research currently implies. Nevertheless, well-designed AI systems and mitigation strategies can further reduce the risks of such attacks.
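The abstract refers to adversarial attacks that hide objects by inducing algorithmic false negatives. As a loose illustration only (not the report's method), the sketch below applies a fast-gradient-sign-style perturbation to a toy linear "detector"; the detector, input, and parameter names are illustrative assumptions.

```python
import numpy as np

def detect(x, w, b=0.0):
    """Toy linear detector: a positive score means 'object present'."""
    return float(x @ w + b)

def fgsm_hide(x, w, eps):
    """Fast-gradient-sign-style step to suppress the detection score.

    For a linear score w.x + b, the gradient with respect to x is w,
    so stepping along -sign(w) pushes the score down, i.e., toward a
    false negative (the 'hidden object' case the report studies).
    """
    return x - eps * np.sign(w)

rng = np.random.default_rng(0)
w = rng.normal(size=8)          # detector weights (assumed known to the attacker)
x = np.sign(w)                  # input the detector fires on: score = sum(|w|) > 0
x_adv = fgsm_hide(x, w, eps=2.0)

print(detect(x, w) > 0)         # clean input: detected
print(detect(x_adv, w) > 0)     # perturbed input: hidden (false negative)
```

Note that even this toy version assumes the attacker knows the model's weights (a white-box setting); the report's point is that such knowledge requirements are rarely met in operational conditions, which is one reason these attacks are harder to field than the academic literature suggests.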
