National Primary and Secondary School Science Fair

United States

Carbon Nanostructures Via Dry Ice Exposed to High Temperature

This science project is designed to answer whether a chemical reaction is needed to produce industrial quantities of carbon nanostructures by exposing dry ice to a high temperature of at least 3100°C. A small carbon arc furnace powered by an electric welder is used to produce the high temperature. During control runs, the carbon arc furnace is energized for a predetermined time, after which it is de-energized and any carbon particles within the furnace are collected. During carbon nanostructure synthesis runs, dry ice is placed within the carbon arc furnace; the furnace is energized and the dry ice is consumed for the predetermined time. Carbon nanostructures synthesized during these runs are collected once the furnace is de-energized and allowed to cool. The volume of carbon particles collected during the control runs is compared to the volume of carbon nanostructures produced by the synthesis runs. The project found that, on average, at least 16 times more carbon nanostructures are produced during synthesis runs consuming dry ice than during the control runs. Moreover, the synthesis runs did not rely on chemical reactions. Samples of the synthesized carbon nanostructures were also imaged using a transmission electron microscope (TEM). The TEM images clearly show high-quality carbon nanostructures, including carbon nanotubes, faceted carbon nanospheres, and the super-material graphene.

A Novel Spectroscopic-Chemical Sensor Using Photonic Crystals

Detection of harmful chemicals used in industrial complexes is crucial in order to create a safer environment for workers. Presently, most chemical detectors used in workplaces are expensive, inefficient, and cumbersome. In order to address these deficiencies, a novel sensor was fabricated to produce a unique spectroscopic fingerprint for various toxic chemicals. The sensor was fabricated by depositing several layers of silica spheres (diameter ~250 nm) on a glass substrate using evaporation-based self-assembly. As the spheres assemble to form a photonic crystal, they also create void (i.e., air) spaces in between them. Once the spheres assembled into a photonic crystal, a spectrometer was used to monitor its reflectivity. The spectrum had a high reflectivity at a specific wavelength, which is governed by the average index of refraction of the spheres and the void spaces. As a foreign chemical infiltrates the photonic crystal, it occupies the void space, which increases the average index of refraction of the structure. Consequently, the peak wavelength of the reflectivity spectrum red-shifts, which confirms the presence of a foreign substance. While the as-grown photonic crystal is able to detect chemicals, it is unable to differentiate between chemicals that have similar indices of refraction, such as ethanol and methanol. In order to detect chemicals with similar indices of refraction, five pieces of a single photonic crystal (i.e., a five-pixel device) were exposed to different silanes, which changed the surface chemistry of the silica spheres in the photonic crystal. In turn, the five-pixel device was able to produce a unique chemical fingerprint for several chemicals, which can be calibrated to detect toxins in the workplace.
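A minimal sketch of the underlying physics, assuming the standard Bragg-Snell relation for a close-packed opal photonic crystal, showing why infiltrating the voids red-shifts the reflectivity peak and why ethanol and methanol are hard to tell apart. The 250 nm sphere diameter comes from the abstract; the refractive indices and fill fraction are typical literature values, not measurements from this project.

import numpy as np

def bragg_peak(sphere_diameter_nm, n_sphere, n_medium, fill_fraction=0.74):
    """Normal-incidence reflectivity peak of an fcc opal via the Bragg-Snell
    relation: lambda = 2 * d111 * n_eff, with d111 the (111) interplanar
    spacing and n_eff the volume-weighted effective refractive index."""
    d111 = sphere_diameter_nm * np.sqrt(2.0 / 3.0)
    n_eff = np.sqrt(fill_fraction * n_sphere**2 + (1 - fill_fraction) * n_medium**2)
    return 2.0 * d111 * n_eff

# Assumed values: silica spheres (n ~ 1.45); voids filled with air, ethanol, or methanol.
for medium, n in [("air", 1.000), ("ethanol", 1.361), ("methanol", 1.328)]:
    print(f"{medium:8s} peak ~ {bragg_peak(250, 1.45, n):.0f} nm")

The few-nanometer separation between the ethanol and methanol peaks in this estimate illustrates why an unmodified crystal struggles to distinguish chemicals with similar refractive indices, motivating the silane-functionalized five-pixel device.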

Parallax Modelling of OGLE Microlensing Events

We present a study using microlensing event data from the Optical Gravitational Lensing Experiment (OGLE), recorded in the period 2002-2016 toward the Galactic bulge. Our two algorithms are based on the standard point-source-point-lens (PSPL) model and on the less conventional parallax model, respectively. The optimal fit, along with its best-fit parameters, was found for each sample event using a chi-square optimization algorithm. Out of the 7 best fits, 4 show a strong parallax effect. The microlensing fit parameters were then cross-matched with proper motion data from the Naval Observatory Merged Astrometric Dataset (NOMAD) to obtain lens mass estimates for four events: 0.447, 0.269, 0.269, and 17.075 solar masses, respectively. All masses were within the lens mass interval found in similar microlensing studies. We conclude that the parallax model often describes long events better, demonstrating the importance of utilizing both PSPL fits and parallax fits instead of only the PSPL model. By varying only 2 of the 7 parallax microlensing parameters instead of all simultaneously, we obtain plausible values for the lens direction and transverse velocity: a method to investigate lens properties without regard to the lens's luminosity. In addition, we present spectral classes of the NOMAD objects associated with each event, which is vital for future investigations seeking to further confirm the mass estimates. We present strategies to further enhance the algorithm's analysis of microlensing light curves so that deviations are found more reliably. We also conclude that our dual-model approach can potentially unveil the presence of dim lens objects (MACHOs) such as brown dwarfs, exoplanets, or black holes.
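A minimal sketch of the standard PSPL model and the chi-square fit it feeds, assuming the usual magnification formula A(u) = (u^2 + 2) / (u * sqrt(u^2 + 4)) with u(t) = sqrt(u0^2 + ((t - t0)/tE)^2). The synthetic light curve and starting guesses below are illustrative placeholders, not OGLE data or this study's fitting pipeline.

import numpy as np
from scipy.optimize import minimize

def pspl_magnification(t, t0, tE, u0):
    """Point-source-point-lens magnification A(u(t))."""
    u = np.sqrt(u0**2 + ((t - t0) / tE) ** 2)
    return (u**2 + 2) / (u * np.sqrt(u**2 + 4))

def chi_square(params, t, flux, flux_err, baseline_flux):
    t0, tE, u0 = params
    model = baseline_flux * pspl_magnification(t, t0, tE, u0)
    return np.sum(((flux - model) / flux_err) ** 2)

# Illustrative synthetic event: t0 = 5500 d, tE = 40 d, u0 = 0.3, baseline flux 1000.
t = np.linspace(5300.0, 5700.0, 400)
rng = np.random.default_rng(0)
flux = 1000.0 * pspl_magnification(t, 5500.0, 40.0, 0.3) + rng.normal(0.0, 10.0, t.size)

fit = minimize(chi_square, x0=[5490.0, 30.0, 0.5],
               args=(t, flux, np.full(t.size, 10.0), 1000.0), method="Nelder-Mead")
print("best-fit (t0, tE, u0):", fit.x)

The parallax model extends this by letting the source-lens separation u(t) deviate from a straight-line trajectory as Earth orbits the Sun, which is why it tends to fit long events better.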

Automated Illustration of Text to Improve Semantic Comprehension

Millions of people worldwide suffer from aphasia, a disorder that severely inhibits language comprehension. Medical professionals suggest that individuals with aphasia have a noticeably greater understanding of pictures than of the written or spoken word. Accordingly, we design a text-to-image converter that augments lingual communication, overcoming the highly constrained input strings and predefined output templates of previous work. This project offers four primary contributions. First, we develop an image processing algorithm that finds a simple graphical representation for each noun in the input text by analyzing Hu moments of contours in images from The Noun Project and Bing Images. Second, we construct a dataset of 700 human-centric action verbs annotated with corresponding body positions. We train support vector machines to match verbs outside the dataset with appropriate body positions. Our system illustrates body positions and emotions with a generic human representation created using iOS's Core Animation framework. Third, we design an algorithm that maps abstract nouns to concrete ones that can be illustrated easily. To accomplish this, we use spectral clustering to identify 175 abstract noun classes and annotate these classes with representative concrete nouns. Finally, our system parses two datasets of pre-segmented and pre-captioned real-world images (ImageClef and Microsoft COCO) to identify graphical patterns that accurately represent semantic relationships between the words in a sentence. Our tests on human subjects establish the system's effectiveness in communicating text using images. Beyond people with aphasia, our system can assist individuals with Alzheimer's or Parkinson's, travelers located in foreign countries, and children learning how to read.
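A minimal sketch of the Hu-moment matching step, assuming OpenCV's standard moments API: the largest contour of each candidate icon is reduced to its seven (log-scaled) Hu moments, and candidates are ranked by distance to a reference shape. The file names are hypothetical placeholders, not assets from The Noun Project or Bing Images, and this is not necessarily the project's exact selection rule.

import cv2
import numpy as np

def hu_signature(image_path):
    """Log-scaled Hu moments of the largest contour in a grayscale image."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    _, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    largest = max(contours, key=cv2.contourArea)
    hu = cv2.HuMoments(cv2.moments(largest)).flatten()
    # The log transform keeps the seven moments on comparable scales.
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

def shape_distance(path_a, path_b):
    """Smaller distance = more similar shapes (invariant to scale and rotation)."""
    return float(np.linalg.norm(hu_signature(path_a) - hu_signature(path_b)))

# Hypothetical usage: pick the candidate icon closest in shape to a reference image.
# candidates = ["dog_icon_1.png", "dog_icon_2.png"]
# best = min(candidates, key=lambda p: shape_distance("reference_dog.png", p))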

Multiple Time-step Predictive Models for Hurricanes in the North Atlantic Basin Based on Machine Learning Algorithms

The cost of damage caused by hurricanes in 2017 is estimated to be over 200 billion dollars. Quick and accurate prediction of the path of a hurricane and its strength would be very valuable in alleviating these losses. Machine learning based prediction models, in contrast to models based on physics, have been developed successfully in many problem domains. A machine learning system infers the modeling function from a training dataset. This project developed machine learning based prediction models to forecast the path and strength of hurricanes in the North Atlantic basin. Feature analysis was performed on the HURDAT2 dataset, which contains paths and strengths of past hurricanes. Artificial Neural Networks (ANNs) and Generalized Linear Model (GLM) approaches such as Tikhonov regularization were investigated to develop nine hurricane prediction models. Prediction accuracy of these models was compared using a testing dataset disjoint from the training dataset. The coefficient of determination and the mean squared error were used as performance metrics. Post-processing metrics, such as the geodesic error in path prediction and the mean wind speed error, were also used to compare the models. The TLS linear regression model performed best out of the nine models for one and two time steps, while the ANNs made more accurate predictions for longer periods. All models predicted location and strength with a coefficient of determination greater than 0.95 for up to two days. My models predicted hurricane paths in under a second with accuracy comparable to that of current models.
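A minimal sketch of one GLM-style model, assuming a Tikhonov-regularized (ridge) regressor that maps the last two 6-hour observations of latitude, longitude, and maximum wind to the next observation. The feature layout and the synthetic placeholder tracks below are illustrative assumptions, not the project's HURDAT2 preprocessing.

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score, mean_squared_error

def make_supervised(track, lag=2):
    """Turn one storm track (rows of [lat, lon, max_wind]) into lagged
    feature vectors X and one-step-ahead targets y."""
    X, y = [], []
    for i in range(lag, len(track)):
        X.append(np.concatenate(track[i - lag:i]))
        y.append(track[i])
    return np.array(X), np.array(y)

# Placeholder random-walk tracks standing in for parsed HURDAT2 records.
rng = np.random.default_rng(1)
tracks = [np.cumsum(rng.normal(0, 0.5, size=(30, 3)), axis=0) + [25.0, -70.0, 60.0]
          for _ in range(50)]
pairs = [make_supervised(t) for t in tracks]
X = np.vstack([p[0] for p in pairs])
y = np.vstack([p[1] for p in pairs])

X_train, X_test, y_train, y_test = X[:-200], X[-200:], y[:-200], y[-200:]
model = Ridge(alpha=1.0).fit(X_train, y_train)   # Tikhonov regularization
pred = model.predict(X_test)
print("R^2:", r2_score(y_test, pred), "MSE:", mean_squared_error(y_test, pred))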

Satellite Modeling of Wildfire Susceptibility in California Using Artificial Neural Networking

Wildfires have become increasingly frequent and severe due to global climatic change, demanding improved methodologies for wildfire modeling. Traditionally, wildfire severities are assessed through post-event, in-situ measurements. However, developing a reliable wildfire susceptibility model has been difficult due to failures in accounting for the dynamic components of wildfires (e.g., excessive winds). This study examined the feasibility of employing satellite observation technology in conjunction with artificial neural networking to devise a wildfire susceptibility modeling technique for two regions in California. The timeframes of investigation were July 16 to August 24, 2017, and June 25 to December 8, 2017, for the Detwiler and Salmon August Complex wildfires, respectively. NASA's MODIS imagery was utilized to compute NDVI (Normalized Difference Vegetation Index), NDWI (Normalized Difference Water Index), land surface temperature, net evapotranspiration, and elevation values. Neural network and linear regression modeling were then conducted between these variables and ∆NBR (differenced Normalized Burn Ratio), a measure of wildfire burn severity. The neural network model generated from the Detwiler wildfire region was subsequently applied to the Salmon August Complex wildfire. Results suggest that a significant degree of variability in ∆NBR can be attributed to variation in the tested environmental factors. Neural networking also proved significantly superior in modeling accuracy compared to linear regression. Furthermore, the neural network model trained on the Detwiler data predicted ∆NBR for the Salmon August Complex with high accuracy, suggesting that if fires share similar environmental conditions, one fire's model can be applied to others without the need for localized training.
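A minimal sketch of the spectral indices named above, using their standard band-ratio definitions. The reflectance arrays are random placeholders for MODIS surface-reflectance rasters, and the band pairings (e.g., which SWIR band feeds NBR) are textbook conventions rather than this study's exact configuration.

import numpy as np

def normalized_difference(a, b):
    """Generic (a - b) / (a + b) band ratio with divide-by-zero protection."""
    return (a - b) / np.clip(a + b, 1e-6, None)

def ndvi(nir, red):    return normalized_difference(nir, red)
def ndwi(nir, swir):   return normalized_difference(nir, swir)    # Gao-style NDWI
def nbr(nir, swir2):   return normalized_difference(nir, swir2)   # uses the longer SWIR band

# Placeholder pre- and post-fire reflectance rasters.
rng = np.random.default_rng(0)
nir_pre,  swir2_pre  = rng.uniform(0.3, 0.5, (100, 100)), rng.uniform(0.1, 0.2, (100, 100))
nir_post, swir2_post = rng.uniform(0.1, 0.3, (100, 100)), rng.uniform(0.2, 0.4, (100, 100))

# Burn severity: dNBR = NBR(pre-fire) - NBR(post-fire); larger values mean more severe burns.
dnbr = nbr(nir_pre, swir2_pre) - nbr(nir_post, swir2_post)
print("mean dNBR:", float(dnbr.mean()))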

An In-Depth Patch-Clamp Study of HCN2 Channel (Year II): Discovery of Novel Biomarkers and Therapy for Ih Current Suppression in Autism Spectrum Disorders

The main goal of this study was to address a variety of topics concerning the role of the Ih current in HCN channels of SHANK wild-type and knock-out thalamic neurons (as described further below). This research explored the cellular effects of sedatives (such as dexmedetomidine) and laser light stimulation on the Ih current of neurons, as well as the discovery of novel biomarkers for detecting Autism Spectrum Disorder. This study also showed that methods such as laser therapy, with and without various photosensitizers, have the potential to raise the depressed Ih currents of SHANK knock-out neurons.

Limited Query Black-box Adversarial Attacks in the Real World

We study the creation of physical adversarial examples, which are robust to real-world transformations, using a limited number of queries to the target black-box neural networks. We observe that robust models tend to be especially susceptible to foreground manipulations, which motivates our novel Foreground attack. We demonstrate that gradient priors are a useful signal for black-box attacks and therefore introduce an improved version of the popular SimBA attack. We also propose an algorithm for transferable attacks that selects the surrogates most similar to the target model. Our black-box attacks outperform the state-of-the-art approaches they are based on and support our belief that the concept of model similarity could be leveraged to build strong attacks in a limited-information setting.
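A minimal sketch of the basic SimBA loop (Guo et al., 2019) that the improved variant builds on: step along randomly ordered pixel directions and keep any perturbation that lowers the target model's confidence in the true class. The toy softmax model and image below are placeholders; the project's actual attacks add gradient priors and foreground restrictions not shown here.

import numpy as np

def simba_attack(predict_proba, x, true_label, epsilon=0.2, max_queries=1000):
    """Basic SimBA: try +/- epsilon steps along standard-basis (pixel) directions,
    keeping any step that lowers the true-class probability."""
    x_adv = x.copy()
    p_true = predict_proba(x_adv)[true_label]
    queries = 0
    for dim in np.random.permutation(x.size):
        if queries >= max_queries:
            break
        basis = np.zeros(x.size)
        basis[dim] = 1.0
        basis = basis.reshape(x.shape)
        for sign in (1.0, -1.0):
            candidate = np.clip(x_adv + sign * epsilon * basis, 0.0, 1.0)
            p_candidate = predict_proba(candidate)[true_label]
            queries += 1
            if p_candidate < p_true:
                x_adv, p_true = candidate, p_candidate
                break
    return x_adv, p_true, queries

# Placeholder "black-box" model: a fixed linear softmax over flattened pixels.
rng = np.random.default_rng(0)
W = rng.normal(size=(10, 32 * 32 * 3))
def toy_model(x):
    logits = W @ x.ravel()
    e = np.exp(logits - logits.max())
    return e / e.sum()

x0 = rng.uniform(0.0, 1.0, (32, 32, 3))
label = int(np.argmax(toy_model(x0)))
adv, p_final, used = simba_attack(toy_model, x0, label)
print(f"true-class probability: {toy_model(x0)[label]:.3f} -> {p_final:.3f} in {used} queries")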

Designing a LiDAR topographic navigation system: A novel approach to aid the visually impaired

The WHO reports that 2.2 billion people worldwide have a form of visual impairment, and the Perkins School for the Blind adds that 4 to 8 percent (8.8-17.6 million people) rely solely on a white cane for navigation. In an interview by Stephen Yin for NPR, visually impaired interviewees said that a white cane was ineffective because it failed to detect moving obstacles (e.g., bikes) and aerial obstacles (e.g., falling objects), and that it became physically demanding after prolonged use. This problem can be addressed with a headset that integrates LiDAR technology and haptic feedback to provide a real-time assessment of the user's environment. Theoretically, the device determines how far an object is from the user and places it into one of three distance bands (0-290 mm, 310-500 mm, 510-1200 mm). As the user gets closer to the object, the haptic motor vibrates more frequently. The device uses 11 LiDAR sensors, Beetle processors, and ERM motors, so that when a LiDAR sensor detects an object, the device sends a haptic signal in that area. It not only identifies the existence of an object but also tells the user its relative position, with a latency of approximately 2 milliseconds. To test the device, a simulated walking environment was built. Ten obstacles were included: five below the waist (72", 28", 35", and 8.5" tall sticks) and five above the waist (paper suspended 6", 10", 48", and 28" from the ceiling). On average, the white cane detected 4.1 obstacles, whereas the device detected 7.3, roughly 1.8 times as many. Visually impaired individuals no longer need to rely solely on the white cane; using this device, they can detect small, moving, and aerial objects more quickly and accurately.
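A minimal sketch of the distance-to-haptic mapping described above. The three distance bands come from the abstract; the pulse rates are illustrative assumptions, and the real device runs this logic on Beetle microcontrollers driving ERM motors rather than in Python.

def haptic_pulse_rate(distance_mm):
    """Map one LiDAR range reading (mm) to a vibration pulse rate (Hz) for the
    ERM motor covering that sensor's direction. Bands follow the abstract:
    0-290 mm, 310-500 mm, 510-1200 mm; the rate values themselves are assumptions."""
    if distance_mm is None or distance_mm > 1200:
        return 0.0              # nothing in range: motor off
    if distance_mm <= 290:
        return 8.0              # imminent obstacle: fastest pulsing
    if 310 <= distance_mm <= 500:
        return 4.0              # mid-range obstacle
    if 510 <= distance_mm <= 1200:
        return 2.0              # distant obstacle: slow pulsing
    return 0.0                  # readings falling between bands are ignored

# Example update cycle over a few of the 11 sensor directions (readings in mm).
readings = {"front": 250, "front-left": 620, "overhead": None}
for direction, mm in readings.items():
    print(f"{direction:12s} -> {haptic_pulse_rate(mm)} Hz")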