BOSTON UNIVERSITY
COLLEGE OF ENGINEERING

Dissertation

NEURAL NETWORK COMPUTING USING ON-CHIP ACCELERATORS

by

SCHUYLER ELDRIDGE
B.S., Boston University, 2010

Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy

2016

© 2016 by SCHUYLER ELDRIDGE
All rights reserved

Approved by

First Reader: Ajay J. Joshi, PhD, Associate Professor of Electrical and Computer Engineering
Second Reader: Allyn E. Hubbard, PhD, Professor of Biomedical Engineering and Professor of Electrical and Computer Engineering
Third Reader: Martin C. Herbordt, PhD, Professor of Electrical and Computer Engineering
Fourth Reader: Jonathan Appavoo, PhD, Associate Professor of Computer Science

...he looked carefully at the barman. "A dry martini," he said. "One. In a deep champagne goblet."
"Oui, monsieur."
"Just a moment. Three measures of Gordon's, one of vodka, half a measure of Kina Lillet. Shake it very well until it's ice-cold, then add a large thin slice of lemon peel. Got it?"
"Certainly, monsieur." The barman seemed pleased with the idea.
"Gosh, that's certainly a drink," said Leiter.
Bond laughed. "When I'm ...er ...concentrating," he explained, "I never have more than one drink before dinner. But I do like that one to be large and very strong and very cold and very well-made. I hate small portions of anything, particularly when they taste bad. This drink's my own invention. I'm going to patent it when I can think of a good name." [Fleming, 1953]

Acknowledgments

All of this work was enabled by my gracious funding sources over the past six years. In my first year, Prof. Ayse Coskun helped me secure a Dean's Fellowship through Boston University. My second year was funded through Boston University's former Center of Excellence for Learning in Education, Science, and Technology (CELEST), working with Dr. Florian Raudies and Dr. Max Versace. Florian and Max were instrumental in providing my first introduction to biological modeling and neural networks. I am incredibly thankful for funding through the subsequent four years from the National Aeronautics and Space Administration (NASA) via a Space Technology Research Fellowship (NSTRF). This provided me with the unbelievable opportunity to work at the NASA Jet Propulsion Laboratory (JPL) for three summers with Dr. Adrian Stoica. Adrian's discussions were invaluable, and I'm incredibly thankful for him acting as host, mentor, instigator, and friend.

Digressing, I must mention a number of people who guided me along the way up to this point and on whose wisdom I drew during this process. Luis Lovett, my figure skating coach in Virginia, taught me that there's beauty just in the effort of trying. Allen Schramm, my choreographer, similarly showed me the brilliance of abandoning perfection for artistic immersion within and without. Tommy Litz, my technical coach, impressed on me that eventually you'll hit a point and you just have to be a man. And finally, Slavka Kohout, my competitive coach, taught me the unforgettable lesson that the crowd really does just want to see blood.[1]

[1] cf. [Hemingway, 1926]: Romero's bull-fighting gave real emotion, because he kept the absolute purity of line in his movements and always quietly and calmly let the horns pass him close each time. He did not have to emphasize their closeness. Brett saw how something that was beautiful done close to the bull was ridiculous if it were done a little way off.
I told her how since the death of Joselito all the bull-fighters had been developing a technique that simulated this appearance of danger in order to give a fake emotional feeling, while the bull-fighter was really safe. Romero had the old thing, the holding of his purity of line through the maximum of exposure, while he dominated the bull by making him realize he was unattainable, while he prepared him for the killing.

Naturally, I'm thankful for the help and guidance of my advisor, Prof. Ajay Joshi, who helped me through (and stuck with me during) the meandering, confusing, and dead-end-riddled path that I took. I am also forever indebted to Prof. Jonathan Appavoo, acting as an unofficial advisor, collaborator, and friend over the past three years. My one regret throughout this whole process was not getting to know him sooner.

It goes without saying that none of this would have been possible without the friendship of my parents, John and Diana Eldridge. They have consistently been my wellspring of support throughout my life. This is further remarkable considering our atypical family and all of the extraneous and incredibly challenging circumstances we've collectively experienced. Furthermore, my lifelong friends Alex Scott and Peter Achenbaum have always been there and, critically, always ready for a cocktail.

Finally, as I've attempted to impress on new PhD students, a PhD is a psychological gauntlet testing your mental limits. It's hard, it's terrible, and it will push you in every way imaginable, but it's one of the only times in your life when you can lose yourself in maniacal focus. It's a lot like wandering into a forest.[2] It's pretty for a while, but you will eventually, without fail, become (seemingly) irrevocably lost. Be worried, but not overly so—there's a catharsis coming. You will hit a point and you'll take ownership,[3] and after that your perspective in all things changes. So, it does get better, I promise, and there's beauty in all of it.

[2] cf. [The Cure, 1980]
[3] cf. [The Cure, 1985]

NEURAL NETWORK COMPUTING USING ON-CHIP ACCELERATORS

SCHUYLER ELDRIDGE

Boston University, College of Engineering, 2016

Major Professor: Ajay J. Joshi, PhD, Associate Professor of Electrical and Computer Engineering

ABSTRACT

The use of neural networks, machine learning, or artificial intelligence, in its broadest and most controversial sense, has been a tumultuous journey involving three distinct hype cycles and a history dating back to the 1960s. Resurgent, enthusiastic interest in machine learning and its applications bolsters the case for machine learning as a fundamental computational kernel. Furthermore, researchers have demonstrated that machine learning can be utilized as an auxiliary component of applications to enhance or enable new types of computation, such as approximate computing or automatic parallelization. In our view, machine learning becomes not the underlying application, but a ubiquitous component of applications. This view necessitates a different approach towards the deployment of machine learning computation that spans not only hardware design of accelerator architectures, but also user and supervisor software to enable the safe, simultaneous use of machine learning accelerator resources.

In this dissertation, we propose a multi-transaction model of neural network computation to meet the needs of future machine learning applications.
We demonstrate that this model, encompassing a backend accelerator for inference and learning decoupled from the hardware and software for managing neural network transactions, can be achieved with low overhead and integrated with a modern RISC-V microprocessor. Our extensions span user and supervisor software and data structures and, coupled with our hardware, enable multiple transactions from different address spaces to execute simultaneously, yet safely. Together, our system demonstrates the utility of a multi-transaction model to improve energy efficiency and overall accelerator throughput for machine learning applications.

Preface

Neural networks, machine learning, and artificial intelligence—some of the most hyped technologies of the past half century—have seen a dramatic recent resurgence towards solving many hard yet computable problems. However, it is with the utmost caution that the reader must temper their enthusiasm, as I have been forced to over the duration of the following work. Nevertheless, neural networks are a very powerful tool, while not truly biological to a purist, that reflects some of the structure of the brain. These biological machines, evolved over millennia, must indicate a viable computational substrate for processing the world around us. It is my belief, a belief shared by others, that this style of computation provides a way forward—beyond the current difficulties of semiconductor technology—towards more efficient, biologically-inspired systems capable of providing the next great leap for computation. What follows, broadly, concerns the design, analysis, and evaluation of hybrid systems that bring neural networks as close as possible to traditional computer architectures. While I admit that such architectures are only a stopgap, I hope that this work will contribute towards that aforementioned way forward.

Contents

1 Introduction
  1.1 Background
    1.1.1 An ontology for computation
    1.1.2 Machine learning accelerators of the future
  1.2 Motivating Applications
  1.3 Outline of Contributions
    1.3.1 Thesis statement
    1.3.2 Contributions
  1.4 Dissertation Outline
2 Background
  2.1 A Brief History of Neural Networks
    2.1.1 Neural networks and early computer science
    2.1.2 Criticisms of neural networks and artificial intelligence
    2.1.3 Modern resurgence as machine learning
  2.2 Neural Network Software and Hardware
    2.2.1 Software
    2.2.2 Hardware
    2.2.3 Context of this dissertation
3 T-fnApprox: Hardware Support for Fine-Grained Function Approximation using MLPs
  3.1 Function Approximation
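To make the multi-transaction model described in the abstract slightly more concrete, the following is a minimal, purely illustrative sketch in C of what a user-level transaction interface might look like. Every name here (nnt_begin, nnt_write, MAX_TRANSACTIONS, and so on) is hypothetical, and the "accelerator" is mocked in software so the example compiles and runs; the actual hardware/software interface developed in the dissertation is not reproduced here.

#include <stdint.h>
#include <stdio.h>

#define MAX_TRANSACTIONS 4   /* mock hardware transaction-table size */
#define IO_LEN 4             /* toy input/output vector length */

typedef struct {
    int      in_use;          /* slot claimed by a live transaction? */
    uint16_t network_id;      /* which stored NN configuration to run */
    float    inputs[IO_LEN];
    float    outputs[IO_LEN];
    int      done;            /* outputs valid? */
} nn_slot_t;

static nn_slot_t table[MAX_TRANSACTIONS];

/* Reserve a transaction-table slot for a new inference request. */
static int nnt_begin(uint16_t network_id) {
    for (int t = 0; t < MAX_TRANSACTIONS; t++) {
        if (!table[t].in_use) {
            table[t].in_use = 1;
            table[t].network_id = network_id;
            table[t].done = 0;
            return t;
        }
    }
    return -1;  /* table full: caller must retry later */
}

/* Stream one input element into the transaction's input queue. */
static void nnt_write(int tid, int idx, float value) {
    table[tid].inputs[idx] = value;
}

/* Mock "execution": a real accelerator would walk the network layer by
 * layer; here we just negate the inputs so there is something to read. */
static void nnt_execute(int tid) {
    for (int i = 0; i < IO_LEN; i++)
        table[tid].outputs[i] = -table[tid].inputs[i];
    table[tid].done = 1;
}

/* Read one output element once the transaction has finished. */
static float nnt_read(int tid, int idx) {
    return table[tid].done ? table[tid].outputs[idx] : 0.0f;
}

/* Release the slot so another request can claim it. */
static void nnt_end(int tid) {
    table[tid].in_use = 0;
}

int main(void) {
    /* Two independent transactions occupy the table at the same time,
     * mimicking requests from two different address spaces. */
    int a = nnt_begin(/*network_id=*/0);
    int b = nnt_begin(/*network_id=*/1);
    for (int i = 0; i < IO_LEN; i++) {
        nnt_write(a, i, (float)i);
        nnt_write(b, i, (float)(10 + i));
    }
    nnt_execute(a);
    nnt_execute(b);
    printf("transaction %d, output[0] = %f\n", a, nnt_read(a, 0));
    printf("transaction %d, output[0] = %f\n", b, nnt_read(b, 0));
    nnt_end(a);
    nnt_end(b);
    return 0;
}

The only point of the sketch is the lifecycle: a transaction claims a table entry, streams its inputs, executes, and releases the entry, which is what allows several outstanding requests to coexist without interfering with one another.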
