Understanding Machine Learning: From Theory to Algorithms
© 2014 by Shai Shalev-Shwartz and Shai Ben-David
Published 2014 by Cambridge University Press.
This copy is for personal use only. Not for distribution. Do not post. Please link to: http://www.cs.huji.ac.il/~shais/UnderstandingMachineLearning
Please note: This copy is almost, but not entirely, identical to the printed version of the book. In particular, page numbers are not identical (but section numbers are the same).

Understanding Machine Learning

Machine learning is one of the fastest growing areas of computer science, with far-reaching applications. The aim of this textbook is to introduce machine learning, and the algorithmic paradigms it offers, in a principled way. The book provides an extensive theoretical account of the fundamental ideas underlying machine learning and the mathematical derivations that transform these principles into practical algorithms. Following a presentation of the basics of the field, the book covers a wide array of central topics that have not been addressed by previous textbooks. These include a discussion of the computational complexity of learning and the concepts of convexity and stability; important algorithmic paradigms including stochastic gradient descent, neural networks, and structured output learning; and emerging theoretical concepts such as the PAC-Bayes approach and compression-based bounds. Designed for an advanced undergraduate or beginning graduate course, the text makes the fundamentals and algorithms of machine learning accessible to students and nonexpert readers in statistics, computer science, mathematics, and engineering.

Shai Shalev-Shwartz is an Associate Professor at the School of Computer Science and Engineering at The Hebrew University, Israel.

Shai Ben-David is a Professor in the School of Computer Science at the University of Waterloo, Canada.

UNDERSTANDING MACHINE LEARNING
From Theory to Algorithms

Shai Shalev-Shwartz, The Hebrew University, Jerusalem
Shai Ben-David, University of Waterloo, Canada

32 Avenue of the Americas, New York, NY 10013-2473, USA

Cambridge University Press is part of the University of Cambridge. It furthers the University's mission by disseminating knowledge in the pursuit of education, learning and research at the highest international levels of excellence.

www.cambridge.org
Information on this title: www.cambridge.org/9781107057135

© Shai Shalev-Shwartz and Shai Ben-David 2014

This publication is in copyright. Subject to statutory exception and to the provisions of relevant collective licensing agreements, no reproduction of any part may take place without the written permission of Cambridge University Press.

First published 2014
Printed in the United States of America
A catalog record for this publication is available from the British Library
Library of Congress Cataloging in Publication Data
ISBN 978-1-107-05713-5 Hardback

Cambridge University Press has no responsibility for the persistence or accuracy of URLs for external or third-party Internet Web sites referred to in this publication, and does not guarantee that any content on such Web sites is, or will remain, accurate or appropriate.

Triple-S dedicates the book to triple-M

Preface

The term machine learning refers to the automated detection of meaningful patterns in data.
In the past couple of decades it has become a common tool in almost any task that requires information extraction from large data sets. We are surrounded by machine learning based technology: search engines learn how to bring us the best results (while placing profitable ads), anti-spam software learns to filter our email messages, and credit card transactions are secured by software that learns how to detect fraud. Digital cameras learn to detect faces, and intelligent personal assistant applications on smartphones learn to recognize voice commands. Cars are equipped with accident prevention systems that are built using machine learning algorithms. Machine learning is also widely used in scientific applications such as bioinformatics, medicine, and astronomy. One common feature of all of these applications is that, in contrast to more traditional uses of computers, in these cases, due to the complexity of the patterns that need to be detected, a human programmer cannot provide an explicit, fine-detailed specification of how such tasks should be executed. Taking example from intelligent beings, many of our skills are acquired or refined through learning from our experience (rather than following explicit instructions given to us). Machine learning tools are concerned with endowing programs with the ability to "learn" and adapt.

The first goal of this book is to provide a rigorous, yet easy to follow, introduction to the main concepts underlying machine learning: What is learning? How can a machine learn? How do we quantify the resources needed to learn a given concept? Is learning always possible? Can we know whether the learning process succeeded or failed?

The second goal of this book is to present several key machine learning algorithms. We chose to present algorithms that on one hand are successfully used in practice and on the other hand span a wide spectrum of different learning techniques. Additionally, we pay particular attention to algorithms appropriate for large scale learning (a.k.a. "Big Data"), since in recent years our world has become increasingly "digitized" and the amount of data available for learning is dramatically increasing. As a result, in many applications data is plentiful and computation time is the main bottleneck. We therefore explicitly quantify both the amount of data and the amount of computation time needed to learn a given concept.

The book is divided into four parts. The first part aims at giving an initial rigorous answer to the fundamental questions of learning. We describe a generalization of Valiant's Probably Approximately Correct (PAC) learning model, which is a first solid answer to the question "what is learning?". We describe the Empirical Risk Minimization (ERM), Structural Risk Minimization (SRM), and Minimum Description Length (MDL) learning rules, which show how a machine can learn. We quantify the amount of data needed for learning using the ERM, SRM, and MDL rules and show how learning might fail by deriving a "no-free-lunch" theorem. We also discuss how much computation time is required for learning. In the second part of the book we describe various learning algorithms. For some of the algorithms, we first present a more general learning principle, and then show how the algorithm follows the principle. While the first two parts of the book focus on the PAC model, the third part extends the scope by presenting a wider variety of learning models. Finally, the last part of the book is devoted to advanced theory.
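To give a taste of the formal treatment developed in the first part, the simplest of these learning rules, ERM, can already be stated in one line. The following is a minimal sketch for binary classification (the precise definitions, including the empirical risk notation $L_S$, are developed in Chapters 2-4): given a training sample $S = ((x_1, y_1), \ldots, (x_m, y_m))$ and a chosen hypothesis class $\mathcal{H}$, the ERM rule outputs

\[
\mathrm{ERM}_{\mathcal{H}}(S) \in \operatorname*{argmin}_{h \in \mathcal{H}} L_S(h),
\qquad
L_S(h) = \frac{\bigl|\{\, i \in [m] : h(x_i) \neq y_i \,\}\bigr|}{m},
\]

that is, a hypothesis in $\mathcal{H}$ that makes the fewest mistakes on the training sample. Much of the first part of the book is devoted to understanding when, and with how much data, this simple rule succeeds.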
We made an attempt to keep the book as self-contained as possible. However, the reader is assumed to be comfortable with basic notions of probability, linear algebra, analysis, and algorithms. The first three parts of the book are intended for first-year graduate students in computer science, engineering, mathematics, or statistics. They are also accessible to undergraduate students with an adequate background. The more advanced chapters can be used by researchers seeking a deeper theoretical understanding.

Acknowledgements

The book is based on Introduction to Machine Learning courses taught by Shai Shalev-Shwartz at the Hebrew University and by Shai Ben-David at the University of Waterloo. The first draft of the book grew out of the lecture notes for the course that was taught at the Hebrew University by Shai Shalev-Shwartz during 2010-2013. We greatly appreciate the help of Ohad Shamir, who served as a TA for the course in 2010, and of Alon Gonen, who served as a TA for the course in 2011-2013. Ohad and Alon prepared a few lecture notes and many of the exercises. Alon, to whom we are indebted for his help throughout the entire making of the book, has also prepared a solution manual. We are deeply grateful for the most valuable work of Dana Rubinstein. Dana has scientifically proofread and edited the manuscript, transforming it from lecture-based chapters into fluent and coherent text. Special thanks to Amit Daniely, who helped us with a careful read of the advanced part of the book and also wrote the advanced chapter on multiclass learnability. We are also grateful to the members of a book reading club in Jerusalem who carefully read and constructively criticized every line of the manuscript. The members of the reading club are: Maya Alroy, Yossi Arjevani, Aharon Birnbaum, Alon Cohen, Alon Gonen, Roi Livni, Ofer Meshi, Dan Rosenbaum, Dana Rubinstein, Shahar Somin, Alon Vinnikov, and Yoav Wald. We would also like to thank Gal Elidan, Amir Globerson, Nika Haghtalab, Shie Mannor, Amnon Shashua, Nati Srebro, and Ruth Urner for helpful discussions.

Shai Shalev-Shwartz, Jerusalem, Israel
Shai Ben-David, Waterloo, Canada

Contents

Preface
1 Introduction
  1.1 What Is Learning?
  1.2 When Do We Need Machine Learning?
  1.3 Types of Learning
  1.4 Relations to Other Fields
  1.5 How to Read This Book
    1.5.1 Possible Course Plans Based on This Book
  1.6 Notation
Part I Foundations
2 A Gentle Start
  2.1 A Formal Model – The Statistical Learning Framework
  2.2 Empirical Risk Minimization
    2.2.1 Something May Go Wrong – Overfitting
  2.3 Empirical Risk Minimization with Inductive Bias
    2.3.1 Finite Hypothesis Classes
  2.4 Exercises
3 A Formal Learning Model
  3.1 PAC Learning
  3.2 A More General Learning Model
    3.2.1 Releasing the Realizability Assumption – Agnostic PAC Learning
    3.2.2 The Scope of Learning Problems Modeled
  3.3 Summary
  3.4 Bibliographic Remarks
  3.5 Exercises
4 Learning via Uniform Convergence
  4.1 Uniform Convergence Is Sufficient for Learnability
  4.2 Finite Classes Are Agnostic PAC Learnable
  4.3 Summary
  4.4 Bibliographic Remarks
  4.5 Exercises
5 The Bias-Complexity Tradeoff
  5.1 The No-Free-Lunch Theorem
    5.1.1 No-Free-Lunch and Prior Knowledge
  5.2 Error Decomposition
  5.3 Summary
  5.4 Bibliographic Remarks
  5.5 Exercises
6 The VC-Dimension
  6.1 Infinite-Size Classes Can Be Learnable
  6.2 The VC-Dimension
  6.3 Examples
    6.3.1 Threshold Functions
    6.3.2 Intervals
    6.3.3 Axis Aligned Rectangles
    6.3.4 Finite Classes
    6.3.5 VC-Dimension and the Number of Parameters
  6.4 The Fundamental Theorem of PAC Learning
  6.5 Proof of Theorem 6.7
    6.5.1 Sauer's Lemma and the Growth Function
    6.5.2 Uniform Convergence for Classes of Small Effective Size
  6.6 Summary
  6.7 Bibliographic Remarks
  6.8 Exercises
7 Nonuniform Learnability
  7.1 Nonuniform Learnability
    7.1.1 Characterizing Nonuniform Learnability
  7.2 Structural Risk Minimization
  7.3 Minimum Description Length and Occam's Razor
    7.3.1 Occam's Razor
  7.4 Other Notions of Learnability – Consistency
  7.5 Discussing the Different Notions of Learnability
    7.5.1 The No-Free-Lunch Theorem Revisited
  7.6 Summary
  7.7 Bibliographic Remarks
  7.8 Exercises
8 The Runtime of Learning
  8.1 Computational Complexity of Learning