Development of a Method for Measuring the Ozone Concentration in the Atmosphere Using a Passive Method
1. Introduction
The passive method is widely used to measure air pollutants over periods of one day to several weeks. It is easy to use and requires no electricity, but expensive instruments are needed to quantify the collected substances, so it is not well suited to measurements or investigations by high school students. We therefore focused on the reaction in which indigo, the blue pigment, is decolorized by ozone, and hypothesized that indigo is suitable for measuring ozone concentration.
2. Experimental Section
We soaked 10 mm × 20 mm filter papers in an indigo solution containing phosphoric acid and dried them in an automatic oven. A 5.5 cm × 10 cm PTFE sheet was folded in two, and five indigo filters were fixed inside it (the passive sampler). The passive samplers were fixed on a stand and exposed to ozone in the atmosphere. After a few days, we collected the samplers and placed each indigo filter with 4.0 mL of ion-exchanged water into a sample tube, then shook the tubes to extract the pigment. We used the average absorbance at 600 nm of the five sheets as the measured value.
3. Results and Discussion
The total amount of ozone measured in the experiment over one to seven days was directly proportional to the amount of ozone reported by Osaka Prefecture, showing that atmospheric ozone can be measured with our method. The passive method has the advantage of being easy to carry out, so we used it to measure the ozone concentration at 23 points simultaneously in northern Osaka for 48 hours. We made a map of ozone concentration by plotting the values on a blank map, and it closely matched the map published by Osaka Prefecture. We expect this method to be useful for measuring ozone where measuring instruments are not available.
4. Conclusion
We succeeded in developing a new passive method for measuring atmospheric ozone using indigo, the blue pigment.
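Since the abstract reports that the indigo response is directly proportional to the reference ozone amount, the conversion from absorbance to ozone dose can be expressed as a one-parameter calibration. The following minimal Python sketch uses entirely hypothetical absorbance and reference values (none appear in the abstract) to illustrate averaging the 600 nm readings, fitting the proportionality constant, and converting a new reading:

import numpy as np

# Hypothetical example data: average absorbance at 600 nm of the five indigo
# filters in each sampler (after exposure) and of an unexposed blank sampler.
blank_absorbance = 0.85
sampler_absorbance = np.array([0.62, 0.48, 0.35, 0.22])  # e.g. 1, 3, 5, 7 day exposures

# Decolorization (loss of absorbance) is taken as the sampler response.
response = blank_absorbance - sampler_absorbance

# Reference cumulative ozone (placeholder values) reported by a monitoring
# station for the same exposure periods.
reference_dose = np.array([400.0, 1200.0, 2000.0, 2800.0])

# The abstract reports direct proportionality, so fit y = k * x through the origin.
k = np.sum(response * reference_dose) / np.sum(response ** 2)

def estimate_ozone_dose(avg_absorbance):
    """Convert an average 600 nm absorbance into an estimated ozone dose."""
    return k * (blank_absorbance - avg_absorbance)

print(estimate_ozone_dose(0.40))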
New Screening Method for Early Pediatric Cancer Detection Through Automated Handwriting Analysis
Pediatric cancer has an incidence of more than 175,000 cases per year and a mortality of approximately 96,000 deaths per year. One major cause of this problem is late diagnosis. A promising novel approach to pediatric cancer screening is handwriting analysis, which surpasses other methods by detecting pediatric cancer at a very early stage. However, studies have so far been limited to manual analysis, which requires an expert and a long period of time. The aim of this project is to design a computer program that extracts handwriting features and to build a classification model that labels each subject as a patient or a control. The dataset was collected from schools and hospitals, and all participants could read and write in English. After data cleansing, 440 samples remained. A MATLAB (Matrix Laboratory) program was used to extract geometric features from the handwriting and was validated on a subset of 50 samples from the dataset. The WEKA package was used to build and test the classifiers. Experiments were run with the Logistic, Multilayer Perceptron, J48, LibSVM, AdaBoostM1, and Naïve Bayes classifiers. The best subset of attributes was selected for each classifier, and all results were reported as averages over cross-validation runs with several fold assignments. The best performance was achieved by the Logistic classifier, with an average accuracy of 80.15%, a standard deviation of 0.43%, and a Matthews correlation coefficient of 0.59. Finally, this project presents a fast, free, readily available, easy, and psychologically comfortable method for pediatric cancer detection while maintaining accuracy suitable for mass screening.
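The project's feature extraction and classification were done in MATLAB and WEKA; purely as an illustration of the evaluation protocol (a cross-validated logistic classifier scored by accuracy and the Matthews correlation coefficient), a minimal Python/scikit-learn sketch on synthetic placeholder data could look like this. The data, feature count, and fold count here are assumptions, not the project's values:

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.metrics import accuracy_score, matthews_corrcoef

# Hypothetical stand-in data: rows are handwriting samples, columns are
# extracted geometric features; y marks patient (1) vs. control (0).
rng = np.random.default_rng(0)
X = rng.normal(size=(440, 12))
y = rng.integers(0, 2, size=440)

# Cross-validated predictions from a logistic classifier (10 folds assumed).
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
y_pred = cross_val_predict(LogisticRegression(max_iter=1000), X, y, cv=cv)

print("accuracy:", accuracy_score(y, y_pred))
print("MCC:", matthews_corrcoef(y, y_pred))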
IlluminaMed: Developing Novel Artificial Intelligence Techniques for Use in a Biomedical Image Analysis Toolkit and Personalized Medicine Engine
Despite the multitude of biomedical scans conducted, diagnoses from these images still show relatively low accuracy and standardization. In both computer science and medicine there is strong interest in developing personalized treatment policies for patients who respond variably to treatment. The aim of my research was the automatic segmentation of brain MRI scans to better analyze patients with tumors, multiple sclerosis, ALS, or Alzheimer’s. In particular, I aim to use this information, along with novel artificial intelligence algorithms, to find an optimal personalized treatment policy: a non-deterministic function of patient-specific covariate data that maximizes the expected survival time or clinical outcome. The result of the research is IlluminaMed, a biomedical image analysis toolkit built on new artificial neural networks and training algorithms and on novel research in fuzzy logic. The networks can detect patterns more complex than humans can identify and follow patterns over long periods of time. IlluminaMed was trained on a dataset of professionally and manually segmented MRI scans from several prestigious hospitals and universities. I then developed an algorithmic framework to solve a multistage decision problem with a varying number of stages subject to censoring, in which the “rewards” are expected survival times. Specifically, I developed a novel Q-learning algorithm that dynamically adjusts for these parameters. Furthermore, I found finite upper bounds on the generalization error of the treatment paths constructed by this algorithm. I have also shown that when the optimal Q-function is an element of the approximation space, the anticipated survival times for the treatment regime constructed by the algorithm converge to those of the optimal treatment path. I demonstrated the performance of the proposed algorithmic framework via simulation studies and through the analysis of chronic depression data and a hypothetical clinical trial. IlluminaMed can automatically segment the scans with 98% accuracy, find tumors with 96% accuracy, and approximate their volume within a 2% margin of error. It can also find lesions in MS and ALS, distinguishing them from tumors with 94% accuracy. In addition, IlluminaMed can determine a patient’s tendency to develop Alzheimer’s several months before symptoms appear by correlating brain structure and its fluctuations. Lastly, the censored Q-learning algorithm I developed is more effective than state-of-the-art clinical decision support systems and can operate in environments where many covariate parameters may be unobtainable or censored. IlluminaMed is the only fully automatic biomedical image analysis toolkit and personalized medicine engine. The personalized medicine engine performs at a level comparable to the best physicians. It is less computationally complex than similar software and is unique in that it can find new patterns in the brain associated with possible future diagnoses. IlluminaMed’s implications are significant not only for the biomedical field but also for artificial intelligence, with new findings in neural networks and the relationships of fuzzy extensional subsets.
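The abstract does not give the details of the censored Q-learning algorithm, so the sketch below only illustrates the general backward-induction structure of Q-learning for a two-stage treatment regime with survival-time rewards, using synthetic data and a generic regressor. Every variable, model choice, and number is a hypothetical stand-in rather than any part of IlluminaMed:

import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Synthetic two-stage treatment data (all values are placeholders).
rng = np.random.default_rng(0)
n = 500
x1 = rng.normal(size=(n, 3))          # stage-1 patient covariates
a1 = rng.integers(0, 2, size=n)       # stage-1 treatment (0/1)
x2 = rng.normal(size=(n, 3))          # stage-2 covariates
a2 = rng.integers(0, 2, size=n)       # stage-2 treatment (0/1)
survival = rng.exponential(12.0, n) + 2.0 * (a1 == (x1[:, 0] > 0))  # months

# Stage 2: regress observed survival on (covariates, action).
q2 = RandomForestRegressor(random_state=0).fit(np.column_stack([x2, a2]), survival)

# Pseudo-outcome: best achievable stage-2 value for each patient.
v2 = np.maximum(q2.predict(np.column_stack([x2, np.zeros(n)])),
                q2.predict(np.column_stack([x2, np.ones(n)])))

# Stage 1: regress the pseudo-outcome on (stage-1 covariates, action).
q1 = RandomForestRegressor(random_state=0).fit(np.column_stack([x1, a1]), v2)

def stage1_policy(patient):
    """Pick the stage-1 treatment with the larger estimated Q-value."""
    value_a0 = q1.predict(np.concatenate([patient, [0]]).reshape(1, -1))[0]
    value_a1 = q1.predict(np.concatenate([patient, [1]]).reshape(1, -1))[0]
    return int(value_a1 > value_a0)

print(stage1_policy(np.array([0.5, -1.0, 0.2])))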
The Levitating Ball
This project was inspired by a tournament called the International Young Physicists’ Tournament (IYPT). The problem could be broken into two aims: ‘Investigate the forces that cause a ball to levitate in a tilted airstream’ and ‘Optimize the system for the maximum angle of tilt that results in a supported ball’. The first stage of the investigation was research and learning: two online fluid mechanics courses were used to build basic knowledge of the subject. Next, a force diagram was created to model the forces acting on the ball. The diagram identified a lift force that must act on the ball for it to be supported. There were three contending theories that could explain the lift force: the Bernoulli theory, the Coanda theory, and the Magnus theory. A practical investigation was then undertaken to differentiate between these three theories. Since the Magnus theory applies only if the ball is spinning in the airstream, this theory was isolated by changing the center of mass of the ball while keeping everything else constant (this allowed control of how much the ball spun in the airstream). Changing the center of mass had no effect on the maximum angle of tilt, showing that the ball’s spin does not produce a significant amount of lift and therefore that the Magnus theory could not explain the lift. Because further testing could not separate the Coanda and Bernoulli theories, an explanation was developed for why the two remaining theories might coexist, and further testing methods have been designed to investigate this possibility in more depth. To meet the second aim of the project, an investigation was launched into how various parameters affected the maximum angle at which the ball could be supported. The parameters investigated were ball radius, ball mass, ball surface, air speed, and airstream diameter. A lot of time was spent creating a reliable experimental method: the method supports a ball in an airstream, slowly tilts the airstream, and then measures the angle of tilt at the moment the ball falls out. After experimentation, a table was created describing how the listed parameters affect the maximum angle of tilt at which a ball can be supported, and explanations were proposed for why each parameter affects this angle. Future experiments have been devised to build a deeper understanding of the effects of a wider range of parameters.
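The abstract's force diagram is not reproduced, but a common simplification (an assumption here, not a claim from the project) is that with the airstream tilted by an angle theta from the vertical, drag along the stream axis supports the weight component m·g·cos(theta), while a lift force perpendicular to the stream must supply m·g·sin(theta) for the ball to stay centred. A minimal Python sketch of that balance:

import numpy as np

g = 9.81  # gravitational acceleration, m/s^2

def required_forces(mass_kg, tilt_deg):
    """Return the (drag, lift) forces in newtons needed to support the ball
    in an airstream tilted tilt_deg from the vertical (assumed decomposition)."""
    theta = np.radians(tilt_deg)
    drag = mass_kg * g * np.cos(theta)   # along the airstream axis
    lift = mass_kg * g * np.sin(theta)   # perpendicular to the airstream
    return drag, lift

# Example: a 3 g table-tennis ball in an airstream tilted 30 degrees.
print(required_forces(0.003, 30.0))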
Physical Characterization of a Wide Aperture Segmented Reflector Telescope
Characterization of telescope lenses using physical optics and selection of the optimal physical parameters of a reflecting telescope’s optical units were carried out to improve the design, cost-efficiency, and quality of the 64-cm telescope (named Oof) housed at the National Institute of Physics. Characterization was done through numerical modeling of the point spread function (PSF) in Python. The PSF code was based on the Richards and Wolf method for obtaining wave vectors. The optimal PSF was taken to be the PSF of a large monolithic mirror. The PSF of a single optical lens was compared with those of its segmented counterparts. Through comparison of the maximum intensity, the normalized mean square error (NMSE), and Linfoot’s criteria of correlation quality, fidelity, and relative structural content, the study showed that highly segmented optical components produce lower-quality results than less-segmented ones. It was found that as the segmentation increases, the maximum intensity decreases; higher values of maximum intensity denote higher light-gathering power. The normalized mean square error of the set-ups with one to seven layers was greater than zero but less than one, which indicates that the PSFs of those set-ups are close to the PSF of the optimal set-up. Higher values of correlation quality, fidelity, and relative structural content denote higher correlation, higher signal-to-noise ratio, and closer correspondence between the optimal set-up and the segmented set-up. The number and size of the optical components of the segmented mirror were varied in order to achieve a negligible difference between the optimal PSF and the PSF of the segmented mirror. The equivalent single-lens radius, in terms of maximum intensity, of the current set-up of the telescope was determined to be 234.25 mm. If the optimal PSF is achieved, the physical parameters of the generated optical components may be applied to the optical components of the 64-cm telescope. The design that resulted from the study could be used in the future construction of a wide-aperture telescope, which could aid in the acquisition of knowledge about celestial bodies.
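The abstract does not spell out the metric definitions, so the sketch below uses commonly quoted forms of the NMSE and Linfoot's criteria (relative structural content, fidelity, correlation quality), each normalized by the reference PSF; the study's exact conventions may differ, and the Gaussian PSFs are purely illustrative:

import numpy as np

def psf_metrics(psf_ref, psf_test):
    """Compare a test PSF against a reference (optimal, monolithic-mirror) PSF."""
    f = np.asarray(psf_ref, dtype=float)
    g = np.asarray(psf_test, dtype=float)
    norm = np.sum(f ** 2)
    nmse = np.sum((f - g) ** 2) / norm          # 0 means identical PSFs
    structural_content = np.sum(g ** 2) / norm  # Linfoot's relative structural content
    fidelity = 1.0 - nmse                       # Linfoot's fidelity
    correlation_quality = np.sum(f * g) / norm  # Linfoot's correlation quality
    return {"NMSE": nmse,
            "relative_structural_content": structural_content,
            "fidelity": fidelity,
            "correlation_quality": correlation_quality}

# Toy example: a Gaussian "optimal" PSF vs. a slightly broadened, dimmer one.
x = np.linspace(-3, 3, 201)
ref = np.exp(-x ** 2 / 0.5)
test = 0.9 * np.exp(-x ** 2 / 0.6)
print(psf_metrics(ref, test))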