We study the creation of physical adversarial examples, which are robust to real-world transformations, using a limited number of queries to a target black-box neural network. We observe that robust models tend to be especially susceptible to foreground manipulations, which motivates our novel Foreground attack. We demonstrate that gradient priors are a useful signal for black-box attacks and therefore introduce an improved version of the popular SimBA attack. We also propose an algorithm for transferable attacks that selects the surrogates most similar to the target model. Our black-box attacks outperform the state-of-the-art approaches they are based on, supporting our belief that model similarity can be leveraged to build strong attacks in a limited-information setting.
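To make the query-based setting concrete, the sketch below illustrates the vanilla SimBA loop (Guo et al., 2019) extended with a simple momentum-style prior over previously successful directions. This is an assumption-laden illustration of the general idea of reusing gradient information across queries, not the paper's exact improved method; `predict`, `simba_with_prior`, and the momentum prior are all hypothetical names introduced here.

```python
import numpy as np

def simba_with_prior(predict, x, label, eps=0.2, max_queries=1000,
                     prior_decay=0.9, rng=None):
    """Untargeted SimBA-style attack with a momentum 'prior' (sketch).

    predict(x) must return class probabilities for a single image x
    with pixel values in [0, 1]. Illustrative only: vanilla SimBA plus
    a hypothetical prior, NOT the paper's exact algorithm.
    """
    rng = np.random.default_rng() if rng is None else rng
    x_adv = x.copy()
    prior = np.zeros_like(x)              # running estimate of useful directions
    p_best = predict(x_adv)[label]        # probability of the true class
    coords = rng.permutation(x.size)      # random order over the pixel basis
    queries = 0

    for idx in coords:
        if queries >= max_queries:
            break
        q = np.zeros(x.size)
        q[idx] = 1.0
        q = q.reshape(x.shape)
        # Bias the step sign by the prior (a crude use of past gradient
        # information); fall back to +1 when the prior is flat.
        sign = np.sign(prior.flat[idx]) or 1.0
        for s in (sign, -sign):           # try the preferred sign first
            x_try = np.clip(x_adv + s * eps * q, 0.0, 1.0)
            p_try = predict(x_try)[label]
            queries += 1
            if p_try < p_best:            # keep steps that hurt the true class
                x_adv, p_best = x_try, p_try
                prior = prior_decay * prior + (1 - prior_decay) * s * q
                break
    return x_adv, p_best, queries

# Toy usage with a stand-in "model" over 8x8 grayscale images.
def predict(x):
    logits = np.array([x.mean(), 1.0 - x.mean()])
    e = np.exp(logits - logits.max())
    return e / e.sum()

x = np.full((8, 8), 0.5)
x_adv, p, q = simba_with_prior(predict, x, label=0, max_queries=200)
print(f"true-class prob {p:.3f} after {q} queries")
```

Each candidate step costs one query and is kept only if it lowers the true-class probability, which is why query efficiency hinges on choosing promising directions first; a prior over past successes is one way to do that.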