Research Summaries
Adversarial Attacks on Machine Learning Models
Fiscal Year: 2020
Division: Graduate School of Operational & Information Sciences
Department: Computer Science
Investigator(s): Strubel, Joshua D.
Sponsor: Naval Information Warfare Center, Pacific (Navy)
Summary: To the author's knowledge, there is no published research on perturbing the machine learning model itself, as opposed to the input vector. If a malicious actor can operationalize an attack by altering the model itself, we need to know how much damage it could cause and what the chances are of detecting it. If research produced evidence that an attack on the model itself can be operationalized, an even deeper dive would be warranted: could these attacks be hidden in the very tools used to develop and field ML systems? As Ken Thompson famously pointed out in his Turing Award lecture, "Reflections on Trusting Trust," the tools themselves can be an attack surface (Thompson, 1984). The potentially catastrophic damage of such an attack is the impetus for the proposed research, which will examine the possibility of selectively perturbing the model itself to produce outcomes similar to those caused by an adversarial example: a maliciously designed input whose perturbation leads to a false prediction.
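The record contains no code, but the core idea can be illustrated. The sketch below is a minimal, hypothetical example, not the investigator's method: assuming PyTorch, it applies an FGSM-style signed-gradient step to a toy model's weights rather than to its input, pushing a fixed input toward a wrong prediction. The model architecture, the random input, and the epsilon budget are all invented for illustration.

```python
# Hypothetical sketch: an FGSM-style perturbation applied to the *weights*
# of a model instead of its input, so a fixed input is misclassified.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy two-class classifier and a fixed input whose prediction we try to flip.
model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
x = torch.randn(1, 20)

clean_label = model(x).argmax(dim=1)   # the model's original prediction
target_label = 1 - clean_label         # the outcome the attacker wants

# Gradient of the loss w.r.t. the *parameters*, with the loss chosen so
# that descending it moves the model toward the attacker's target class.
loss = nn.functional.cross_entropy(model(x), target_label)
loss.backward()

epsilon = 0.05  # assumed per-weight perturbation budget; illustrative only
with torch.no_grad():
    for p in model.parameters():
        p -= epsilon * p.grad.sign()   # signed-gradient step on the weights

print("clean prediction:", clean_label.item())
print("after weight perturbation:", model(x).argmax(dim=1).item())
```

With a sufficiently large epsilon the prediction on the unchanged input typically flips, mirroring the effect an adversarial example achieves by perturbing the input instead of the model.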
Keywords: machine learning
Publications & Data: Publications, theses (not shown), and data repositories will be added to the portal record when the information is available in FAIRS and brought back to the portal.