Limited Query Black-box Adversarial Attacks in the Real World
We study the creation of physical adversarial examples, which are robust to real-world transformations, using a limited number of queries to the target black-box neural networks. We observe that robust models tend to be especially susceptible to foreground manipulations, which motivates our novel Foreground attack. We demonstrate that gradient priors are a useful signal for black-box attacks and therefore introduce an improved version of the popular SimBA attack. We also propose an algorithm for transferable attacks that selects the surrogates most similar to the target model. Our black-box attacks outperform the state-of-the-art approaches they are based on and support our view that model similarity can be leveraged to build strong attacks in a limited-information setting.
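For readers unfamiliar with SimBA-style query-efficient attacks, the sketch below illustrates the core loop and one way a gradient prior could bias it. This is an illustrative outline only, not the authors' implementation: the `query_prob` callback, the `prior` argument, and all parameter names are assumptions.

```python
# Minimal sketch of a SimBA-style black-box attack loop, extended with a
# hypothetical gradient prior (e.g., from a surrogate model) that decides
# which coordinates are perturbed first. All names here are illustrative
# assumptions rather than the paper's actual code.
import numpy as np

def simba_with_prior(x, true_label, query_prob, prior=None,
                     epsilon=0.2, max_queries=10000, rng=None):
    """Perturb one coordinate at a time; keep a step if it lowers the
    probability the black-box model assigns to the true label."""
    rng = np.random.default_rng() if rng is None else rng
    x_adv = x.copy()
    n = x_adv.size

    # Coordinate order: random permutation by default, or sorted by the
    # magnitude of the (assumed) gradient prior.
    if prior is None:
        order = rng.permutation(n)
    else:
        order = np.argsort(-np.abs(prior).ravel())

    best_prob = query_prob(x_adv, true_label)  # one query
    queries = 1
    for idx in order:
        if queries >= max_queries:
            break
        for sign in (+1.0, -1.0):
            candidate = x_adv.copy()
            candidate.flat[idx] = np.clip(candidate.flat[idx] + sign * epsilon,
                                          0.0, 1.0)
            prob = query_prob(candidate, true_label)
            queries += 1
            if prob < best_prob:  # accept the step and move to the next coordinate
                x_adv, best_prob = candidate, prob
                break
    return x_adv, best_prob, queries
```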
Designing a LiDAR topographic navigation system: A novel approach to aid the visually impaired
The WHO reports that 2.2 billion people worldwide have a form of visual impairment, with the Perkins School for the Blind adding that 4 to 8 percent (8.8-17.6 million people) rely solely on a white cane for navigation. In an interview by Stephen Yin for NPR, visually impaired interviewees claimed that a white cane was ineffective because it failed to detect moving obstacles (e.g., bikes) and aerial obstacles (e.g., falling objects), and it became physically demanding after prolonged use. This problem can be solved with a headset that integrates LiDAR technology and haptic feedback to provide a real-time assessment of the user's environment. The device determines how far an object is from the user and assigns it to one of three distance bands (0-290 mm, 310-500 mm, 510-1200 mm). As the user gets closer to the object, the haptic motor vibrates more frequently. The device has 11 LiDAR sensors, Beetle processors, and ERM motors, so that when a LiDAR sensor detects an object, the device sends a haptic signal in the corresponding area. It not only identifies the existence of an object but also conveys its relative position, with a latency of approximately 2 milliseconds. To test the device, a simulated walking environment was constructed. Ten obstacles were included: five below the waist (72", 28", 35" and 8.5" tall sticks) and five above the waist (paper suspended 6", 10", 48" and 28" from the ceiling). On average, the white cane detected 4.1 of the obstacles, whereas the device detected 7.3, making the LiDAR navigation system roughly 1.8 times as effective at detecting obstacles. With this device, visually impaired individuals no longer need to rely solely on the white cane; they can detect small, moving, and aerial objects faster and more accurately.
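A minimal sketch of the distance-to-haptic mapping described above is shown here: three distance bands, with vibration pulses becoming more frequent as the object gets closer. The band edges follow the abstract; the specific pulse rates and the `read_distance_mm` / `pulse_motor` helpers are hypothetical placeholders for the actual LiDAR and ERM motor drivers.

```python
# Sketch of one sensor/motor channel of the headset's feedback loop.
# Pulse rates are assumed values; only the distance bands come from the text.
import time

BANDS_MM = [(0, 290, 10.0),    # nearest band: ~10 pulses per second (assumed)
            (310, 500, 4.0),   # middle band:  ~4 pulses per second (assumed)
            (510, 1200, 1.5)]  # farthest band: ~1.5 pulses per second (assumed)

def pulse_rate_hz(distance_mm):
    """Map a measured distance to a vibration pulse rate (None = no pulse)."""
    for lo, hi, rate in BANDS_MM:
        if lo <= distance_mm <= hi:
            return rate
    return None

def haptic_loop(read_distance_mm, pulse_motor):
    """Poll one LiDAR sensor and drive its ERM motor accordingly."""
    while True:
        rate = pulse_rate_hz(read_distance_mm())
        if rate is not None:
            pulse_motor()              # fire the ERM motor for this region
            time.sleep(1.0 / rate)     # shorter wait = more frequent buzzes
        else:
            time.sleep(0.05)           # nothing in range; poll again shortly
```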
An Efficient and Accurate Super-Resolution Approach to Low-Field MRI via U-Net Architecture With Logarithmic Loss and L2 Regularization
Low-field (LF) MRI scanners have the power to revolutionize medical imaging by providing a portable and cheaper alternative to high-field MRI scanners. However, such scanners are usually significantly noisier and of lower quality than their high-field counterparts, which limits their appeal in global markets. The aim of this paper is to improve the SNR and overall image quality of low-field MRI scans (a task known as super-resolution) to improve diagnostic capability and, as a result, make low-field MRI more accessible. To address this issue, we propose a Nested U-Net neural network super-resolution algorithm that outperforms previously suggested super-resolution deep learning methods, with an average PSNR of 78.83 ± 0.01 and SSIM of 0.9551 ± 0.01. Our ANOVA paired t-test and post-hoc Tukey test demonstrate significance with a p-value < 0.0001, with no other network demonstrating significance higher than 0.1. We tested our network on artificially noisy, downsampled synthetic data generated from 1500 T1-weighted MRI images in the T1-mix dataset. Four board-certified radiologists scored 25 images (100 image ratings in total) on a Likert scale (1-5), assessing overall image quality, anatomical structure, and diagnostic confidence across our architecture and other published works (SR DenseNet, Generator Block, SRCNN, etc.). Our algorithm outperformed all other works with the highest MOS, 4.4 ± 0.3. We also introduce a new loss function called natural log mean squared error (NLMSE), which outperforms MSE, MAE, and MSLE on this specific SR task. Additionally, we ran inference on actual Hyperfine scan images with successful qualitative results using a Generator RRDB block. In conclusion, we present a more accurate deep learning method for single-image super-resolution applied to low-field MRI via a Nested U-Net architecture.
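To make the comparison between the named losses concrete, the sketch below contrasts MSE, MSLE, and one plausible reading of the proposed NLMSE. The abstract does not spell out the exact NLMSE formula, so the definition used here (the natural log of the shifted mean squared error) is an assumption for illustration only.

```python
# Comparison of standard regression losses with an *assumed* form of NLMSE.
# The nlmse() definition is a guess at "natural log mean squared error";
# the paper may define it differently.
import numpy as np

def mse(y_true, y_pred):
    return np.mean((y_true - y_pred) ** 2)

def msle(y_true, y_pred):
    # Mean squared logarithmic error: compare log-shifted intensities.
    return np.mean((np.log1p(y_true) - np.log1p(y_pred)) ** 2)

def nlmse(y_true, y_pred, eps=1e-8):
    # Assumed form: take the natural log of the MSE so that very large
    # residuals are compressed relative to plain MSE.
    return np.log(mse(y_true, y_pred) + eps)
```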